At 10up, we understand that website performance is essential for a positive User Experience, Engineering, SEO, Revenue, and Design. It’s not a task that can be postponed but rather a continuous and evolving process that requires strategic planning, thoughtful consideration, and extensive experience in creating high-quality websites.
10up engineers prioritize performance optimization when building solutions, using the latest best practices to ensure consistent and healthy performance metrics. We aim to develop innovative and dynamic solutions to reduce latency, bandwidth, and page load time.
At 10up, we pay close attention to Largest Contentful Paint, Cumulative Layout Shift, and First Input Delay. Collectively, these three metrics are known as Core Web Vitals.
We closely monitor Core Web Vitals during development to ensure a high-quality user experience. Maintaining healthy Web Vitals throughout the build and maintenance process is crucial, which requires a shift in building and supporting components. Achieving healthy Web Vitals requires a cross-disciplinary approach spanning Front-end engineering, Back-end engineering, Systems, Audience and Revenue, and Visual design.
Images are typically the most significant resource on a webpage and can drastically affect load times. Therefore, optimizing images while maintaining quality to enhance user experience is crucial. To achieve this, consider the following aspects when working with website images. Combining all suggestions below can improve page load times and perceived performance.
Using responsive images in web development means that the most suitable image size for the device or viewport will be served, which saves bandwidth and complements responsive web design. Many platforms, such as WordPress and NextJS, provide responsive image markup out of the box using specific APIs or components. Google Lighthouse will indicate if further optimization of your responsive images is necessary. You can also use the Responsive Breakpoints Generator or RespImageLint - Linter for Responsive Images to help you generate or debug responsive image sizes.
<img
  alt="10up.com Logo"
  height="90"
  srcset="ten-up-logo-480w.jpg 480w, ten-up-logo-800w.jpg 800w"
  sizes="(max-width: 600px) 480px, 800px"
  src="ten-up-logo-800w.jpg"
  width="160"
/>
Using a Content Delivery Network (CDN) to serve images can significantly enhance the loading speed of resources. Additionally, CDNs can provide optimized images using contemporary formats and compression techniques. Nowadays, CDNs are regarded as a fundamental element for optimizing performance. Here are some CDN suggestions with proven track records:
Efforts are currently focused on optimizing image compression algorithms to deliver high-quality, low-bandwidth images. Although several proposals have been made, only a few have been successful. WebP is the most widely used contemporary format for this purpose. When using a CDN, images can quickly be served as .webp by making a simple configuration change. The WordPress Performance Lab plugin has also added .webp functionality for uploads, which can be utilized if necessary.
Generally, it’s recommended to use the .webp image format as much as possible in projects due to its performance benefits. Doing so can help pass the “Serve images in modern formats” audit in Google Lighthouse.
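Where you control the markup directly, one way to serve WebP with a fallback is the <picture> element. A minimal sketch (the .webp file name is hypothetical):
<picture>
  <!-- Served to browsers that understand WebP. -->
  <source type="image/webp" srcset="ten-up-logo.webp">
  <!-- Fallback for browsers without WebP support. -->
  <img
    alt="10up.com Logo"
    src="ten-up-logo-800w.jpg"
    height="90"
    width="160"
  />
</picture>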
We can use lazy loading for images to prioritize the rendering of critical above-the-fold content. This reduces the number of requests needed to render important content. Use the “loading” attribute on images to achieve this.
The decoding attribute can enhance performance by letting the browser decode images asynchronously, off the main thread. WordPress Core offers this functionality by default.
<!-- Above the Fold -->
<img
  alt="10up.com Logo"
  decoding="sync"
  loading="eager"
  height="90"
  src="ten-up-logo-800w.jpg"
  width="160"
/>

<!-- Below the Fold -->
<img
  alt="10up.com Logo"
  decoding="async"
  loading="lazy"
  height="90"
  src="ten-up-logo-800w.jpg"
  width="160"
/>
To improve your website’s loading time, you’ll likely need to optimize the LCP element, which is typically the most prominent and first image on the page. Factors such as First Contentful Paint, Time to First Byte, and render-blocking CSS/JS can all delay the point at which the LCP element renders.
We can set a fetch priority on the resource to load the image faster. The fetchpriority attribute hints to the browser that it should prioritize fetching this image relative to other images. The Performance Lab plugin offers this functionality as an experimental option.
<img
  alt="10up.com Logo"
  decoding="async"
  loading="eager"
  fetchpriority="high"
  height="90"
  src="ten-up-logo-800w.jpg"
  width="160"
/>
Adding width and height attributes to all <img /> elements on a page is essential to prevent CLS. If these attributes are missing, the browser cannot reserve the correct amount of space before the image loads, and surrounding content shifts once it renders. Missing dimensions can also cause the “Image elements have explicit width and height” audit in Lighthouse to fail.
<img
  alt="10up.com Logo"
  height="90"
  src="ten-up-logo-800w.jpg"
  width="160"
/>
Reducing the number of image requests a webpage makes is the best approach to improve image performance. This requires effective design and UX decisions. Additionally, using SVGs instead of raster images for icons, decorative elements, animations, and other site elements can reduce bandwidth usage and page load time.
Use tools like SVGOMG to optimize and minify SVG elements through build managers or manually in the browser where needed.
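For example, SVGO (the engine behind SVGOMG) can run from the command line or in a build step; a minimal sketch, assuming svgo is installed via npm:
# Optimize a single icon.
npx svgo icon.svg -o icon.min.svg

# Optimize a whole directory of SVGs into a dist folder.
npx svgo -f ./assets/icons -o ./dist/icons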
To improve page loading speed, use the loading="lazy" attribute for rich media that use iframes, like YouTube, Vimeo, or Spotify. This will delay the loading of assets until after the initial page load, which can save time and improve the user experience. Note: To lazy load the video and audio HTML5 elements, you’ll need to use either Load on Visibility or Load using the Facade Pattern.
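A minimal sketch with a YouTube embed (the video ID is a placeholder):
<iframe
  src="https://www.youtube.com/embed/VIDEO_ID"
  title="Embedded video"
  width="560"
  height="315"
  loading="lazy"
></iframe>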
To reduce the page load time, you can use the Facade pattern to render a low-cost preview of heavy and expensive assets, such as videos, and then import the actual asset when a user requests it. This technique enables loading the asset on demand while attempting to retain the overall user experience.
Recently, this methodology has received a fair amount of attention; there are already packages that help achieve implementation for several services:
Follow these instructions to modify the core block output to support the facade pattern.
You can use the script-loader snippet (outdated) to create a more general Facade mechanism. This snippet uses promises and can load necessary scripts when the user interacts with the user interface.
const playBtn = document.querySelector("#play");

playBtn.addEventListener("click", () => {
  const scriptLoaderInit = new scriptLoader();

  scriptLoaderInit
    .load(["https://some-third-party-script.com/library.js"])
    .then(() => {
      console.log(`Script loaded!`);
    });
});
If your environment supports ES6 you can also use dynamic import:
const btn = document.querySelector("button");

btn.addEventListener("click", (e) => {
  e.preventDefault();
  import("third-party.library")
    .then((module) => module.default)
    // Pass the function reference; invoking it inline would run it immediately.
    .then(executeFunction) // use the imported dependency
    .catch((err) => {
      console.log(err);
    });
});
To load non-critical resources on demand, engineers can load assets only when they become visible. The Intersection Observer API can trigger a callback when elements enter the viewport, enabling the selective loading of potentially heavy resources.
Read this Smashing Magazine article for an in-depth understanding of how the Observer APIs work. See the example code below for an idea of how to go about implementation:
const images = document.querySelectorAll(".lazyload");

// Swap in the real source once an image enters the viewport.
function handleIntersection(entries) {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      entry.target.src = entry.target.dataset.src;
      entry.target.classList.add("loaded");
      observer.unobserve(entry.target);
    }
  });
}

const observer = new IntersectionObserver(handleIntersection);
images.forEach((image) => observer.observe(image));
JavaScript files tend to be large and can block other resources from loading, slowing down page load times. Optimizing JavaScript without compromising its functionality is essential to enhance user experience and ensure a fast and responsive website.
To make your JavaScript load faster and prevent it from blocking the rendering of your web page, minify and compress JavaScript payloads to reduce bandwidth usage. 10up Toolkit does this out of the box using Webpack.
Files should also be Gzip or Brotli compressed. All major hosts already do this.
Remember: it’s far more performant to load many smaller JS files than fewer larger JS files.
To provide an optimal user experience, it’s crucial to load only critical JavaScript during the initial page load. JavaScript that doesn’t contribute to above-the-fold functionality should be deferred.
Several approaches, outlined in sections 3.3 (Remove unused JavaScript code) and 3.4 (Leverage Code Splitting), can help achieve this. As an engineer, it’s essential to determine what JavaScript is critical versus non-critical for each project. Generally, any JavaScript related to above-the-fold functionality is considered critical, but this may vary depending on the project.
The script_loader_tag filter allows you to extend wp_enqueue_script to apply async or defer attributes on rendered script tags. This functionality can be found in 10up/wp-scaffold.
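As a minimal sketch, the filter receives the rendered tag and the script handle; the handle below is hypothetical:
// Add the defer attribute to a specific enqueued script.
add_filter( 'script_loader_tag', function ( $tag, $handle ) {
  if ( 'my-non-critical-script' === $handle ) { // hypothetical handle
    $tag = str_replace( ' src=', ' defer src=', $tag );
  }
  return $tag;
}, 10, 2 );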
To optimize the performance of your website, it’s crucial to manage JavaScript bloat. Start by being mindful of what is requested during page load and the libraries and dependencies added to your JavaScript bundles.
First, remove unnecessary libraries and consider building custom solutions for required functionality instead of relying on bloated JS libraries that cover all edge cases.
Next, use the Chrome Coverage tool to analyze the percentage of unused code within your bundles. This provides insights into how much code is being used.
Finally, leverage bundle analyzer tools (such as Webpack Bundle Analyzer) to better understand what is bundled into your JavaScript payloads. This gives you a clear picture of your bundler’s output, leading to more effective code splitting and a more performant website.
Code splitting is a powerful technique that can significantly improve performance. It involves breaking JavaScript bundles into smaller chunks that can be loaded on demand or in parallel, especially as sites and applications scale.
This is important because JavaScript affects Core Web Vitals, and code splitting can easily improve scores with minimal engineering effort. Tools like Webpack make it easy to set up code splitting, and dynamic imports further support its implementation.
With code splitting, chunks can be lazy-loaded, reducing the likelihood of long tasks and render-blocking behavior and providing a streamlined implementation for better performance.
As an added benefit, if you know which chunks will be required on demand, you can preload them to be available sooner. To set up code splitting, you can use the Webpack SplitChunksPlugin.
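A minimal webpack configuration sketch enabling SplitChunksPlugin via the optimization.splitChunks option:
// webpack.config.js (fragment)
module.exports = {
  optimization: {
    splitChunks: {
      // Split both synchronous and dynamically imported chunks.
      chunks: "all",
    },
  },
};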
A long task refers to any JavaScript task on the main thread that takes longer than 50ms to execute. Long tasks are problematic as they block the main thread, preventing it from moving on to its next task in the pipeline and negatively impacting performance.
To improve performance, engineers must find ways to break up long tasks and prevent them from blocking the main thread. However, this can be challenging, especially when dealing with third-party vendor code that cannot be optimized directly.
This underscores the importance of practicing effective task management to mitigate the impact of long tasks on performance. The subject of task management is complex and requires an in-depth understanding of JavaScript and browser mechanics.
However, you can look into the following strategies if your code is causing long task behavior:
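One common strategy is to break work into smaller chunks and periodically yield back to the main thread; a minimal sketch (processItem is a hypothetical unit of work):
// Yield control so input handling and rendering can run between chunks.
const yieldToMain = () => new Promise((resolve) => setTimeout(resolve, 0));

async function processItems(items) {
  let lastYield = performance.now();
  for (const item of items) {
    processItem(item); // hypothetical per-item work
    // Yield before the task approaches the 50ms long-task threshold.
    if (performance.now() - lastYield > 40) {
      await yieldToMain();
      lastYield = performance.now();
    }
  }
}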
Large and complex CSS files can slow page load times and affect interactivity. Optimizing CSS without sacrificing design quality is essential to ensure a fast and responsive website.
Minifying CSS is a common practice that reduces data transfer between the server and browser. Since modern web pages ship a lot of CSS, compressing it is crucial to reducing bandwidth consumption.
Use build tools like Webpack or CDNs with query string parameters to minify CSS. Smaller CSS files can speed up download times and reduce render-blocking activity in the browser. 10up Toolkit does all this out of the box.
Purging CSS optimizes the browser rendering pipeline by prioritizing critical resources. PurgeCSS is a library that removes unused CSS from a web page. It analyzes your content and CSS files, matches the selectors in your CSS against those used in your content files, and removes any unused selectors from your CSS.
This results in smaller CSS files, but it may not work for dynamic class name values that change at build time. If your site’s CSS is primarily static, PurgeCSS is an excellent tool. At any time, you can leverage the Code Coverage tool in Google Chrome to determine how much of your CSS is unused.
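A minimal PostCSS configuration sketch, assuming the @fullhuman/postcss-purgecss plugin; the content globs and safelist pattern are hypothetical:
// postcss.config.js
const purgecss = require("@fullhuman/postcss-purgecss");

module.exports = {
  plugins: [
    purgecss({
      // Files to scan for selectors that are actually used.
      content: ["./**/*.html", "./src/**/*.js"],
      // Keep dynamic class names PurgeCSS cannot see at build time.
      safelist: [/^is-/],
    }),
  ],
};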
Render-blocking CSS is a significant performance issue that engineers must tackle to ensure a positive user experience. Not all CSS required for the page should be loaded during the initial load. Some can be deferred or prioritized, depending on the use case. There are various ways to solve render-blocking CSS on a webpage, each with its caveats and variability. Here are some potential options:
Inline critical CSS directly in the document within <style> tags. This can be a performant way to improve user experience. You may want to inline CSS related to fonts, globals, or elements that appear above the fold so that the browser renders them faster.

Load stylesheets asynchronously by preloading them and switching the rel attribute once they finish loading:
<link
  rel="preload"
  href="styles.css"
  as="style"
  onload="this.onload=null;this.rel='stylesheet'"
>
<noscript><link rel="stylesheet" href="styles.css"></noscript>
If you would like to identify and debug render-blocking CSS and JS you can toggle on ct.css in 10up/wp-scaffold. By appending ?debug_perf=1
to any URL in your theme, all render-blocking assets in the head will be highlighted on the screen.
How we write CSS can significantly affect how quickly the browser parses the CSSOM and DOM. The more complex the rule, the longer it takes the browser to calculate how it should be applied. Clean and optimized CSS rules that are not overly specific can help deliver a great user experience.
When writing performant CSS, there are certain guidelines to consider to ensure optimal performance. Here are some of the most important ones:
Use CSS Triggers to determine which properties affect the Browser Rendering Pipeline.
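For example, animating transform touches only compositing, while animating a layout property like left forces layout and paint on every frame; a minimal sketch:
/* Forces layout and paint on each frame. */
.slide--slow {
  position: relative;
  transition: left 300ms ease;
}

/* Runs on the compositor; no layout or paint. */
.slide--fast {
  transition: transform 300ms ease;
}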
Loading multiple or large font files can increase page load times and affect overall site performance. Optimizing fonts while maintaining design integrity is essential for a fast and responsive website.
Choosing the right font-display property can significantly affect how fast text renders. For the fastest rendering, it’s recommended to use font-display: swap, which keeps the block period extremely short: fallback text renders almost immediately and is swapped out once the web font loads.
@font-face {
font-family: 'WebFont';
font-display: swap;
src: url('myfont.woff2') format('woff2'),
url('myfont.woff') format('woff');
}
Note: For TypeKit, the font-display setting can be configured on the type settings page. For Google Fonts, you can append &display=swap to the end of the URL of the font resource.
Font services like Google Fonts and TypeKit use CDNs to serve fonts. This speeds up load times and allows font files and CSS to be delivered to the browser faster. Since fonts are essential to the rendering process, ensuring the browser can fetch them early is crucial. To do this, preconnect to the necessary origins and use dns-prefetch as a fallback for older browsers:
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="dns-prefetch" href="https://fonts.googleapis.com" crossorigin>
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link rel="dns-prefetch" href="https://fonts.gstatic.com" crossorigin>
To achieve fast font rendering, serving fonts locally can be very effective. This can be done using the CSS @font-face declaration to specify a custom font using a local path. The browser will first check if the font exists on the user’s operating system, and if it does, it will use the local version. This is the quickest way to serve a font - but only if the user has it installed.
@font-face {
  font-family: 'WebFont';
  font-display: swap;
  /* local() takes an installed font name, not a file path. */
  src: local('WebFont'),
       url('myfont.woff2') format('woff2'),
       url('myfont.woff') format('woff');
}
Subsetting is a valuable technique for reducing font file size. When we add fonts to a site, we sometimes include subsets containing glyphs for languages the site never uses. This unnecessarily increases page load times. The @font-face unicode-range descriptor can be used to filter out unnecessary glyph ranges based on the specific requirements of your website.
@font-face {
  font-family: 'WebFont';
  font-display: swap;
  src: local('WebFont'),
       url('myfont.woff2') format('woff2'),
       url('myfont.woff') format('woff');
  unicode-range: U+0025-00FF;
}
To reduce the impact of Cumulative Layout Shift (CLS) on web pages, the size-adjust property can be helpful. It helps normalize document font sizes and prevents layout shifts when switching fonts. This is especially important on slow connections, where a fallback font is initially rendered before the desired font is loaded.
/* Desired font */
@font-face {
  font-family: 'WebFont';
  font-display: swap;
  src: local('WebFont'),
       url('myfont.woff2') format('woff2'),
       url('myfont.woff') format('woff');
}

/* Fallback, size-adjusted to match the desired font's metrics */
@font-face {
  font-family: 'Arial';
  src: local('Arial');
  size-adjust: 90%;
}
When loading fonts, there are a few other things to keep in mind:
Poorly optimized resource networking can lead to slow page load times and a poor user experience. To improve Core Web Vitals performance, it’s crucial to optimize resource networking by minimizing the number of requests made, reducing the file size of resources, and utilizing browser caching.
In section 1.2, “Serve images from a CDN,” it was mentioned that using a CDN can help deliver more than just images. It can also serve CSS, JS, Fonts, and Rich Media from nearby global data centers.
To reduce latency and round trips through DNS servers, serve all static assets over a CDN as much as possible. Additionally, you can establish an early connection to the CDN origin by pre-connecting to ensure faster page load times.
To establish an early connection and improve the loading speed of resources critical to above-the-fold content rendering and interactivity, add the appropriate <link rel="preconnect" />
tag to the <head>
.
By placing the tag in the <head>
, the browser is notified to establish a connection as soon as possible. This approach, also called Resource Hints, can speed up resource times by 100-500ms. It’s crucial, however, to only preconnect to some third-party origins on a page. Only preconnect to origins essential for the start and completion of page rendering.
Some examples of origins you may want to connect to are:
Browsers try to prioritize scripts and resources as best as possible, but sometimes human intervention is needed. Engineers can influence the browser’s decisions using specific attributes and hints.
You can view the assigned priority for each resource in the Chrome DevTools Network tab. There are three main mechanisms for influencing these priorities.
Here are three scenarios where it would make the most sense to implement the <link rel="preload" />
tag:
Add a <link />
tag in the <head>
to preload a resource. The as attribute tells the browser how best to set the resource’s prioritization, headers, and cache status.
<link rel="preload" as="style" href="/path/to/resource" />
Although TTFB (Time to First Byte) is not classified as a Core Web Vital, it is a critical indicator of site speed: TTFB comes before every other measurable web performance metric. If your TTFB is extremely slow, it will drag down FCP and LCP.
Achieving the lowest possible TTFB value is essential to ensure that the client-side markup can start rendering as quickly as possible. To improve TTFB, you need to focus less on the client side and more on what occurs before the browser begins to paint content. Factors to consider include:
It’s highly advisable to leverage a caching strategy in order to optimize Time to First Byte. Suggested caching strategies are documented in the PHP Performance Best Practices.
You can read more about Optimizing TTFB on web.dev.
You can use adaptive serving to help improve performance when network conditions are poor. This functionality can be implemented using the Network Information API to return data about the current network.
Adaptive serving allows you to decide on behalf of the user to handle circumstances like:
// The Network Information API is not supported in all browsers.
navigator.connection.addEventListener("change", () => {
  sendBeacon(); // Send connection data to analytics (hypothetical helper)
  if (navigator.connection.effectiveType === "2g") {
    document.body.classList.add("low-data-mode");
  }
});
Third-party scripts can significantly impact website performance and negatively affect Core Web Vitals metrics. Optimizing and managing third-party scripts is essential to ensure they don’t negatively impact user experience.
The best way to flag slow-performing third-party scripts is to use three Chrome DevTools features: Lighthouse, Network Request Blocking, and the Coverage tool.
Two audits will fail if you have third-party scripts causing performance to degrade:
To get more information on third-party scripts’ impact on performance, you can check the Third-party usage report generated after Lighthouse finishes auditing the site. You can also use the Coverage tool to identify scripts that load on the page but are mostly unused.
To demonstrate the impact of third-party scripts, you can use Chrome’s Network Request Blocking feature to prevent them from loading on the page. After blocking them, run a Lighthouse audit on the page to see the performance improvement.
Follow the instructions provided to use this feature.
To optimize the critical rendering path, prioritize loading only those third-party scripts essential for rendering valuable, interactive above-the-fold content.
Only load a minimal amount of critical JavaScript during the initial page load. Defer any non-critical scripts until after the page load. Note that third-party scripts running JavaScript have the potential to obstruct the main thread, causing delays in page load if not fetched, parsed, compiled, and evaluated properly. As an engineer, you must decide which scripts to postpone.
Deferring interactive chat or social media embed scripts will usually yield performance benefits; see the sketch below. Ad-tech and cookie consent libraries are unsuitable for this approach and should be loaded immediately.
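As a minimal sketch (both script URLs are hypothetical), non-critical third-party scripts can be deferred with standard attributes:
<!-- Deferred: downloads in parallel, executes only after the document is parsed. -->
<script defer src="https://chat.example.com/widget.js"></script>

<!-- Async: executes as soon as it downloads; use when execution order doesn't matter. -->
<script async src="https://analytics.example.com/beacon.js"></script>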
It’s also possible to load third-party scripts (depending on the use case) on UI interaction. You can follow the guidance provided in 2.3 Load using the Facade Pattern or 2.4 Load on Visibility for details on achieving that functionality.
Using a service worker is one way to cache scripts and improve performance, but there are some critical considerations. Setting up a service worker can be challenging, and your site must use HTTPS. Additionally, the resource you are caching must support CORS. If it does not, you will need to use a proxy server.
Use Google’s Workbox solution for implementing caching strategies with service workers. Service workers can cache static assets such as fonts, JS, CSS, and images. The service worker may not fetch the latest version if you are caching scripts that update frequently. In this instance, you must account for that using a network-first approach.
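A minimal Workbox sketch, assuming the service worker is bundled so the modules below can be imported; cache names are arbitrary:
import { registerRoute } from "workbox-routing";
import { CacheFirst, NetworkFirst } from "workbox-strategies";

// Cache static assets (fonts and images) that rarely change.
registerRoute(
  ({ request }) => ["font", "image"].includes(request.destination),
  new CacheFirst({ cacheName: "static-assets" })
);

// Scripts that update frequently get a network-first strategy.
registerRoute(
  ({ request }) => request.destination === "script",
  new NetworkFirst({ cacheName: "scripts" })
);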
Google Tag Manager (GTM) is a commonly used tool for implementing tags and third-party snippets. While the GTM script is not always an issue, the amount of third-party code it injects can significantly impact performance.
In general, the performance impact by tag type, from least to greatest, is: image tags (pixels), custom templates, and custom HTML. If you inject a vendor tag, the impact will depend on the functionality it adds to the site.
Do not inject scripts with visual or functional side effects during page load. Custom HTML tags get injected into the DOM and can cause unexpected issues. Avoid using Custom HTML that forces the browser to recalculate the layout or could trigger a layout shift.
Injecting scripts can also negatively impact performance. Inject all scripts via a Tag Manager before the closing body tag instead of the <head>
. Triggers, custom events, and variables add extra code to the parsing and evaluation of the tag manager script.
Loading the tag manager script appropriately is essential to fire triggers and events properly. Loading the script later in the DOM can help with this. It’s also important to periodically audit the container to ensure no duplicated or unused tags and to keep it up-to-date with current business requirements.
Ad technologies can generate significant revenue for publishing clients, making it crucial to optimize for performance. However, complex ad implementations can negatively impact performance, primarily when ad configurations fire during the initial page load.
Implementing ad configurations requires an in-depth understanding of ad exchanges and scripts. Here are high-level guidelines to effectively implement ads on publisher sites:
Load the ad script in the <head> as early as possible to ensure the browser parses and executes it promptly. To further improve this, you can preload the ad script so that it’s available to the browser sooner. Always place the script tag directly in the <head>; never build the script asynchronously using the createElement API, as this can lead to latency issues and delayed execution of ad calls.