As 2021 settles in with the pandemic in a second wave, dashing hopes of lifting unprecedented quarantines, our reliance on the web to replace almost every aspect of our social lives - and, for many, everyday tasks such as grocery shopping and doing our day jobs - has become undeniable. It makes sense, then, to hear news of an impending major change to Google's search ranking, which will take aspects of a site's speed and performance as search signals. Announced for May 2021, it has everyone in the web space scrambling to address the performance of their websites quickly.
Website speed (or more accurately, the lack of it) has always been a problem for users; it has been demonstrated and statistically proven for years that slow websites mean low conversion, higher bounce rates and bad brand perception. So if it's been a problem for so long, why is it changing now?
The fact is, it's not as major a change as many perceive it to be; it's a refinement of the way speed is already used for ranking. Google has used speed as a ranking factor for a decade.
In 2010, Google announced that speed would be a ranking factor for desktop searches, and in 2018 it extended this to mobile searches.
The change made by Core Web Vitals (CWV) is a refinement of which speed metrics are used to determine whether a website is fast or not. The metrics used have always leaned towards those with the most impact on the user's experience rather than raw speed, such as DOMContentLoaded and load; this update extends that approach further.
And Google is setting the bar really high (or low, in terms of seconds! 😉). In a world where most ecommerce websites load in 8+ seconds, CWV now defines what fast means using metrics that expect results below 3 seconds. And that's why this change has everyone concerned.
Luckily, here at Beyond The Sketch, we specialise in building high performance websites, so we can help you understand the metrics and challenges.
The Metrics
Core Web Vitals refer to 3 measures of speed that are all user centric. That is, they look at the aspects of speed that affect the user experience, rather than arbitrary measures of raw speed such as page load.
These metrics are not new per se; rather, their definitions have been refined and they have been given more weight in determining what counts as fast. They are:
- Largest Contentful Paint (LCP)
- First Input Delay (FID)
- Cumulative Layout Shift (CLS)
Lighthouse is a tool for measuring web performance, including the CWV metrics - it's included with the Chrome developer tools, so it's easy enough to run an audit on your site, and it can also be run from the PageSpeed Insights website or programmatically (see the sketch below). However, to really build a well performing website, we first need to understand what these metrics actually tell us and how they are scored. So let's break them down and look at what can affect them, as well as how we can improve them.
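For instance, here's a minimal sketch of running a performance-only audit with the Lighthouse Node module (the lighthouse and chrome-launcher packages are real; the URL is a placeholder):

```js
// A sketch of a programmatic Lighthouse audit; the URL is a placeholder.
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

(async () => {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse('https://www.example.com', {
    port: chrome.port,                 // talk to the Chrome we just launched
    onlyCategories: ['performance'],   // skip the SEO/accessibility/etc. audits
  });
  console.log('Performance score:', result.lhr.categories.performance.score);
  await chrome.kill();
})();
```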
Largest Contentful Paint (LCP)
This is how long it takes for the largest element - either an image (an <img> tag, the poster image of a video tag, or a background image in CSS) or a block element containing text nodes - to render within the visible portion of the viewport.
This metric measures the loading performance of the page.
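If you want to see which element is being picked as the LCP candidate, the browser exposes it through a PerformanceObserver (a sketch using the standard largest-contentful-paint entry type):

```js
// A sketch: logs each LCP candidate as the browser reports it.
// The last entry reported before user interaction is the final LCP.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const latest = entries[entries.length - 1];
  console.log('LCP candidate at', latest.startTime, 'ms:', latest.element);
}).observe({ type: 'largest-contentful-paint', buffered: true });
```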
Typically, it will be a hero image right at the top of the page - and where images are involved, you get big files! The bigger the image, the bigger the file; throw in the need for double and triple pixel density images (i.e. HiDPI/retina screens) and the images get even bigger - and the growth isn't very linear. The bigger the file, the longer it will take to load, and therefore the slower the LCP will be.
Scoring
Good LCP occurs within 2.5 seconds of the page starting to load. When it comes to desktop devices, that can be difficult to achieve for large images, but not impossible.
However, it is undoubtedly one of the hardest of the CWV metrics to get into the green in mobile device scenarios. When auditing as mobile, amongst other settings to emulate a mobile device, Lighthouse applies a 4x CPU slowdown and network throttling to emulate the typical instability and delays you would get on a mobile data connection. This has a big effect on the LCP score: in our testing, we often found that images with larger file sizes scored well inside the 'good' threshold on the desktop audit, while on mobile audits, images with smaller file sizes resulted in scores in the 'needs improvement' band.
But don't get too caught up on single audits for LCP, because load is affected by various factors; in fact, Google actually suggests measuring the 75th percentile of page loads, segmented across mobile and desktop.
Getting It Right
First, ensure you generate and serve images at the right display sizes. Use responsive images to ensure that the LCP candidate isn't using a larger image to display in a smaller space. When the LCP element is an image, the size used to calculate LCP is the smaller of either the display size (i.e. after being resized with styling or width & height attributes) or the intrinsic size, so using unnecessarily large images in small spaces will mean a longer LCP time - not that it's acceptable in any case, regardless of LCP.
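As a minimal sketch (the hero-*.jpg file names are hypothetical), a responsive hero image could look like this:

```html
<!-- The browser picks the smallest candidate that satisfies the layout width, -->
<!-- so small viewports never download the full-size hero image. -->
<img
  src="hero-1600.jpg"
  srcset="hero-480.jpg 480w, hero-960.jpg 960w, hero-1600.jpg 1600w"
  sizes="(max-width: 600px) 100vw, 1600px"
  width="1600" height="900"
  alt="Hero banner">
```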
Images also contain quite a bit of invisible metadata that increases their file size, so ensure it is being stripped out; this can be done using apps like Squoosh. If you're using a build pipeline or toolchain, plugins are usually available to optimise images, such as the webpack image plugin. If you're a designer in charge of manually generating images, ensure you're exporting using your software's options for optimising images for web, although keep in mind these options don't usually strip all the metadata. An alternative solution is to use image services that preprocess the images served, but that often carries a hefty price tag.
Another important step is to use the right image formats. Always opt for newer generation image formats when you can, such as WebP, JPEG 2000 and JPEG XR; these all provide better compression, which lowers file sizes and therefore download times.
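Since not every browser supports every format, the usual approach is a picture element with a fallback (a sketch; the file names are hypothetical):

```html
<!-- Browsers that understand WebP use the first source; -->
<!-- the rest fall back to the plain JPEG. -->
<picture>
  <source srcset="hero.webp" type="image/webp">
  <img src="hero.jpg" width="1600" height="900" alt="Hero banner">
</picture>
```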
Finally, consider using vector SVGs wherever one would work - SVGs are currently not considered for LCP (although they may be in the future). Remember though that any raster image embedded inside an SVG will be an LCP candidate, so SVGs with raster images inside are probably a bad idea, as they will be more difficult to optimise than other image formats.
When the LCP element is a text element, ensure your text is part of the markup and not generated with JavaScript - this is likely a problem for SPAs, but if your website is using client side driven rich content solutions, you'll be hit by this too; it means the LCP time will also include the time it takes the JavaScript to download and execute.
Text content should always be made part of the markup for your website, both for SEO value and performance - relying entirely on client side processing to show the main content of your website is a bad idea and very impactful to performance - so always invest in getting server side rendering to work when generating content with JavaScript.
Other contributing factors to long LCP are actually the basic causes of slow page load, such as slow server response times (i.e. slow TTFB), an unoptimised critical rendering path and other large or unoptimised resource sizes (CSS, JS, fonts etc). Remember LCP is a measure of loading performance, so get the basics right and your LCP will benefit.
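One basic win worth knowing (a sketch, assuming your LCP candidate is a known hero image at a hypothetical path) is to preload it, so the browser starts fetching it before the parser even discovers the img tag:

```html
<!-- Starts the image download early in the critical rendering path. -->
<link rel="preload" as="image" href="/images/hero-1600.jpg">
```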
The use of LCP as a search signal sort of brings back the need for the sub 3 second load, so from now on, performance needs to be at the forefront of the mind of everyone working on the website.
First Input Delay (FID)
This is the measure of time between the first interaction being triggered on the page and the page actually responding to it.
It's a metric that assesses the interactivity of the page.
The easiest way to think of FID is as the time it takes for your user's first interaction to get a response.
Technically speaking, it is the time taken from when a user first interacts with the site - be that clicking a button, clicking a link, or interacting with anything that is controlled with JavaScript - to when the browser is able to begin processing that interaction.
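You can capture this in the field with a PerformanceObserver (a sketch using the standard first-input entry type):

```js
// A sketch: FID is the gap between the user's first interaction
// and the moment the browser could start handling it.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const fid = entry.processingStart - entry.startTime;
    console.log('FID:', fid, 'ms');
  }
}).observe({ type: 'first-input', buffered: true });
```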
Scoring
FID's good range is 0-100ms; anything 300ms and over is in the red.
That may sound harsh, but it's easy to underestimate how long a second feels, and a quarter of a second is a long time to wait between doing something and getting any form of feedback.
Getting It Right
The greatest factor contributing to slow FID is large amounts of main thread work. That means the culprit is the website's JavaScript. If you have large amounts of JavaScript and/or you're doing a lot of DOM manipulation, or running lots of loops over large arrays or objects, particularly early on in the page (such as in synchronous scripts in the head), that's where you'll see big impacts on FID. So if FID is a problem, first optimise your script delivery, then look at the scripts themselves - refactor, use fewer libraries, stay away from the DOM as much as possible and leverage multithreaded JavaScript.
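Multithreaded here means Web Workers: moving heavy computation off the main thread so it stays free to respond to input. A minimal sketch (worker.js, crunch() and the data are hypothetical):

```js
// main.js - hand the heavy loop to a worker instead of blocking input.
const worker = new Worker('worker.js');
worker.postMessage(largeDataArray);      // send the data to the worker thread
worker.onmessage = (event) => {
  render(event.data);                    // main thread only does the light work
};

// worker.js - the expensive loop runs here, off the main thread.
onmessage = (event) => {
  const result = event.data.map(crunch); // crunch() is a hypothetical hot function
  postMessage(result);
};
```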
Oh, and watch out for the third party scripts you're including! Quite often they run ahead of everything else and can cause blocking behaviour.
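Loading them with async or defer keeps them from blocking the parser while they download (a sketch; the script URLs are placeholders):

```html
<!-- async: download in parallel, run as soon as ready. -->
<script src="https://analytics.example.com/tag.js" async></script>
<!-- defer: download in parallel, run only after the document is parsed. -->
<script src="/js/app.js" defer></script>
```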
You will likely find, however, that this is one of the metrics most websites can get under control - because modern browsers are fast, and 100ms is very achievable; the key is writing lean code.
Cumulative Layout Shift (CLS)
This is a score that indicates how much the page's layout shifts as it loads; also referred to as the page's visual stability. Basically: how much, and how badly, do elements bounce around within the viewport during the load? It's a very important one, because in today's world of front end frameworks and dynamic content, many websites will struggle to get this one within limits.
This is the big one for most of us. Many websites today use JavaScript to not only dynamically change their content, but also to generate the content in the first place! You'll see this in websites that use front end frameworks such as Angular, or view libraries such as React and Vue.
Such websites often aim to provide an app-like look and feel by loading a single page and generating the content for other pages, on request, in the browser itself with JavaScript.
This poses a problem for SEO in any case, because if JavaScript is turned off and the website is loaded, more often than not you'll see a completely blank page. That is to say, there is no content in the markup at all.
Google does process some JavaScript to index the content - but not all search engines do, and real content is far more valuable because, at the end of the day, markup can be processed faster than JavaScript and speed is a big ranking signal!
Layout shift occurs when elements on the page move from their originally rendered position. This is usually caused by styles changing while the page is loading, or when JavaScript is dynamically changing the content during the load.
Third party injected content, such as ads, or personalised recommendations can also cause layout shift.
Scoring
CLS is a score derived from measuring how much layout shift occurs unexpectedly during load. Unexpectedly is a key term here: where layout shift occurs as a result of user interaction with a UI element - such as a disclosure box or similar - it does not count towards the CLS score.
The score is the product of two fractions: the impact fraction and the distance fraction. We won't get into the details of what those fractions are; that's outside the scope of this article.
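Each layout shift the browser records carries that product as its value, and in the field you can sum them with a PerformanceObserver (a sketch using the standard layout-shift entry type; note how shifts with recent user input are excluded, per the 'unexpectedly' rule above):

```js
// A sketch: accumulate unexpected layout shifts into a running CLS score.
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) {   // user-initiated shifts don't count
      cls += entry.value;          // value = impact fraction x distance fraction
    }
  }
  console.log('CLS so far:', cls);
}).observe({ type: 'layout-shift', buffered: true });
```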
A good CLS score is 0.1 and under; 0.25 and over is in the red. That's pretty unforgiving, but there is good reason for it. Layout shift is not only unsightly; unstable layouts mean terrible UX and suggest a lack of quality control in the final product, which users will take note of - it's brand damaging. Would you be happy if your phone's wake button moved out of the way after you picked it up and were about to hit it?
Getting It Right
There are a few things you can do to get a lower CLS score.
First and foremost - watch those third party scripts! A lot of the time, we see websites rely on third parties to inject above the fold content, typically ads and banners, during the page load. If such content is crucial, then ensure that your base layout allows for those elements being injected. This means ensuring container elements are sized with your critical CSS to be the correct size and position that the third party content will be rendered at.
This applies to content that your first party scripts inject as well! If you are placing banners or inserting dynamically created text or UI elements, ensure you have styled placeholders in your critical CSS - this will eliminate the shift they cause when added to the DOM.
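A minimal sketch of such a placeholder (the .ad-slot class and the 250px banner height are hypothetical):

```css
/* Reserve the slot's final size in the critical CSS, so the page
   doesn't shift when the injected content arrives. */
.ad-slot {
  width: 100%;
  min-height: 250px; /* the known height of the banner being injected */
}
```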
If your site or application is entirely JavaScript driven using React, Vue, Angular or similar, you may actually experience very little layout shift, because of the way CLS is calculated during the page load. However, if your JavaScript driven website loads its components in asynchronously during the page load, or where you have parts using JavaScript while the rest is static in the markup, that's where you'll find CLS climbing quite significantly. To mitigate this, server side rendering will go a long way - remember, there is no better alternative than static content for speed.
Next, ensure your embedded images have correct width and height attributes - this will allow the browser to calculate the layout before the image is loaded, meaning the space painted will be correct in size/aspect ratio. Also include appropriate styling for images in the critical CSS, to ensure their layout is known to the browser before it starts to paint.
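For example (a sketch; product.jpg is a hypothetical image):

```html
<!-- width/height let the browser reserve an 800x450 box before the file
     downloads; the CSS keeps that aspect ratio when the image scales. -->
<style>
  img { max-width: 100%; height: auto; }
</style>
<img src="product.jpg" width="800" height="450" alt="Product photo">
```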
Got custom fonts? Well then, expect layout shift! It will be small, but remember we are working within 0.1 - so every shift counts! The problem with custom fonts is that the browser needs to download them before it can use them - but the browser will start calculating the layout using either the web safe fallback font your styles specify, or the browser's default font.
Eliminating the shift caused by loading custom fonts is practically impossible; to get it right, you need to use a custom font that has the same, or at least similar, characteristics as your web safe fallback font - such as kerning, character width etc. But as you might imagine, this is not only really, really hard, it might also just make using a custom font pointless!
What you can do, however, is preload your fonts; this will make it more likely that the font is downloaded before the browser calculates layout for the first time. Keep in mind this does not guarantee it won't cause shift.
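A sketch of a font preload (the file path is hypothetical; note that font preloads require the crossorigin attribute, even for same-origin files):

```html
<link rel="preload" as="font" type="font/woff2"
      href="/fonts/brand.woff2" crossorigin>
```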
Another tip is to set font-display: optional. This means that the custom font has a very, very short window of time to download for it to be used. If that window is missed, the fallback font is used instead - which, of course, the browser has already used to calculate layout, so no re-layout will occur.
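In context, a sketch ("Brand Sans" and its file path are hypothetical):

```css
@font-face {
  font-family: "Brand Sans";
  src: url("/fonts/brand.woff2") format("woff2");
  /* If the font isn't ready almost immediately, the fallback stays
     for this page view - so no re-layout (and no shift) occurs. */
  font-display: optional;
}
```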
Should You Worry?
That's the big question, really. Is this change going to destroy your organic SEO and damage your visibility? While you can certainly expect CWV to have some effect on it, the honest answer can be summed up with one emoji:
🤷‍♂️
As with most of these types of changes, only after it's happened will the effects be measurable. However, make no mistake: it must not be ignored! Change will occur, and if your site is mostly scoring in the amber or red, you will almost certainly be impacted. Performance has always been key to a website's success; it's a big deal to users, and should therefore be a big deal to webmasters. So, irrespective of the CWV changes, addressing your website's performance should be on your roadmap; what CWV has given you is a priority, and a timeframe in which to do it.