Performance

Performance Culture

At 10up, we understand that website performance is essential to User Experience, Engineering, SEO, Revenue, and Design. It’s not a task that can be postponed, but a continuous and evolving process that requires strategic planning, thoughtful consideration, and extensive experience in creating high-quality websites.

10up engineers prioritize performance optimization when building solutions, using the latest best practices to ensure consistent and healthy performance metrics. We aim to develop innovative and dynamic solutions to reduce latency, bandwidth, and page load time.

Core Web Vitals

At 10up, we pay close attention to Largest Contentful Paint, Cumulative Layout Shift, and Interaction to Next Paint (which replaced First Input Delay in March 2024). Collectively, these three metrics are known as Core Web Vitals.

We closely monitor Core Web Vitals during development to ensure a high-quality user experience. Maintaining healthy Web Vitals throughout the build and maintenance process is crucial, which requires a shift in building and supporting components. Achieving healthy Web Vitals requires a cross-disciplinary approach spanning Front-end engineering, Back-end engineering, Systems, Audience and Revenue, and Visual design.

Quick Links

  1. Optimising Images
  2. Optimising Rich Media
  3. Optimising JavaScript
  4. Optimising CSS
  5. Optimising Fonts
  6. Optimising Resource Networking
  7. Optimising Third-party Scripts

1. Optimising Images

Images are typically the most significant resource on a webpage and can drastically affect load times. Therefore, optimizing images while maintaining quality to enhance user experience is crucial. To achieve this, consider the following aspects when working with website images. Combining all suggestions below can improve page load times and perceived performance.

1.1 Serve responsive images

Using responsive images in web development means that the most suitable image size for the device or viewport will be served, which saves bandwidth and complements responsive web design. Many platforms, such as WordPress and Next.js, provide responsive image markup out of the box using specific APIs or components. Google Lighthouse will indicate if further optimization of your responsive images is necessary. You can also use the Responsive Breakpoints Generator or RespImageLint - Linter for Responsive Images to help you generate or debug responsive image sizes.

<img
  alt="10up.com Logo"
  height="90"
  srcset="ten-up-logo-480w.jpg 480w, ten-up-logo-800w.jpg 800w"
  sizes="(max-width: 600px) 480px,
         800px"
  src="ten-up-logo-800w.jpg"
  width="160"
/>

1.2 Serve images from a CDN

Using a Content Delivery Network (CDN) to serve images can significantly enhance the loading speed of resources. Additionally, CDNs can provide optimized images using contemporary formats and compression techniques. Nowadays, CDNs are regarded as a fundamental element for optimizing performance. Here are some CDN suggestions with proven track records:

  1. Cloudinary
  2. Cloudflare
  3. Fastly
  4. WordPress VIP

1.3 Serve images in modern formats

Efforts are currently focused on optimizing image compression algorithms to deliver high-quality, low-bandwidth images. Although several proposals have been made, only a few have been successful. WebP is the most widely used contemporary format for this purpose. When using a CDN, images can quickly be served as .webp by making a simple configuration change. The WordPress Performance Lab plugin has also added .webp functionality for uploads, which can be utilized if necessary.

Generally, it’s recommended to use the .webp image format as much as possible in projects due to its performance benefits. Doing so can help pass the “Serve images in modern formats” audit in Google Lighthouse.

1.4 Lazy-load and decode images below the fold

We can use lazy loading for images to prioritize the rendering of critical above-the-fold content. This reduces the number of requests needed to render important content. Use the “loading” attribute on images to achieve this.

The “decoding” attribute can enhance performance by allowing the browser to decode the image off the main thread. WordPress Core offers this functionality by default.

<!-- Above the Fold -->
<img
  alt="10up.com Logo"
  decoding="sync"
  loading="eager"
  height="90"
  src="ten-up-logo-800w.jpg"
  width="160"
/>

<!-- Below the Fold -->
<img
  alt="10up.com Logo"
  decoding="async"
  loading="lazy"
  height="90"
  src="ten-up-logo-800w.jpg"
  width="160"
/>

1.5 Use fetchpriority for LCP images

To improve your website’s loading time, you’ll likely need to optimize the LCP element, which is typically the largest and most prominent image on the page. Factors such as First Contentful Paint, Time to First Byte, and render-blocking CSS/JS all influence how quickly the LCP image can render.

We can set a fetch priority on the resource to load the image faster. The attribute hints to the browser that it should prioritize the fetch of the image relative to other images. The Performance Lab plugin offers this functionality as an experimental option.

<img
  alt="10up.com Logo"
  decoding="async"
  loading="eager"
  fetchpriority="high"
  height="90"
  src="ten-up-logo-800w.jpg"
  width="160"
/>

1.6 Ensure images have a width and height

Adding width and height attributes to all <img /> elements on a page is essential to prevent CLS. If these attributes are missing, two problems can occur:

  1. the browser cannot reserve the correct space needed for the image,
  2. the browser cannot calculate the aspect ratio of the image.

This can also cause Lighthouse’s “Image elements do not have explicit width and height” audit to flag errors.

<img
  alt="10up.com Logo"
  height="90"
  src="ten-up-logo-800w.jpg"
  width="160"
/>

1.7 Reduce image requests

Reducing the number of image requests a webpage makes is the best approach to improve image performance. This requires effective design and UX decisions. Additionally, using SVGs instead of images for icons, decorative elements, animations, and other site elements can improve bandwidth and page load time.

Use tools like SVGOMG to optimize and minify SVG elements through build managers or manually in the browser where needed.

2. Optimising Rich Media

2.1 Lazy-load iframes below the fold

To improve page loading speed, use the loading="lazy" attribute for rich media that use iframes, like YouTube, Vimeo, or Spotify embeds. This delays the loading of those assets until after the initial page load, which can save bandwidth and improve the user experience. Note: the HTML5 video and audio elements don’t support the loading attribute; to lazy-load them, use either Load on Visibility or Load using the Facade Pattern.

2.2 Load using the Facade Pattern

To reduce the page load time, you can use the Facade pattern to render a low-cost preview of heavy and expensive assets, such as videos, and then import the actual asset when a user requests it. This technique enables loading the asset on demand while attempting to retain the overall user experience.

Recently, this methodology has received a fair amount of attention; there are already packages that help achieve implementation for several services:

  1. Lite Youtube Embed
  2. Lite Vimeo Embed
  3. Intercom Chat Facade

Follow these instructions to modify the core block output to support the facade pattern.

You can use the script-loader snippet (outdated) to create a more general Facade mechanism. This snippet uses promises and can load necessary scripts when the user interacts with the user interface.

const playBtn = document.querySelector("#play");

playBtn.addEventListener("click", () => {
  const scriptLoaderInit = new scriptLoader();
  scriptLoaderInit
    .load(["https://some-third-party-script.com/library.js"])
    .then(() => {
      console.log("Script loaded!");
    });
});

If your environment supports it, you can also use dynamic import:

const btn = document.querySelector("button");

btn.addEventListener("click", (e) => {
  e.preventDefault();
  import("third-party.library")
    .then((module) => module.default)
    .then((defaultExport) => executeFunction(defaultExport)) // use the imported dependency
    .catch((err) => {
      console.error(err);
    });
});

2.3 Load on Visibility

To load non-critical resources on demand, engineers can defer loading assets until they become visible. The Intersection Observer API can trigger a callback when elements enter the viewport, enabling the selective loading of potentially heavy resources.

Read this Smashing Magazine article for an in-depth understanding of how the Observer APIs work. See the example code below for an idea of how to go about implementation:

const images = document.querySelectorAll('.lazyload');

function handleIntersection(entries, observer) {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      entry.target.src = entry.target.dataset.src;
      entry.target.classList.add('loaded');
      observer.unobserve(entry.target);
    }
  });
}

const observer = new IntersectionObserver(handleIntersection);
images.forEach((image) => observer.observe(image));

3. Optimising JavaScript

JavaScript files tend to be large and can block other resources from loading, slowing down page load times. Optimizing JavaScript without compromising its functionality is essential to enhance user experience and ensure a fast and responsive website.

3.1 Minify and compress payloads

To make your JavaScript code load faster and prevent it from blocking the rendering of your web page, it’s important to minimize and compress the JavaScript payloads to reduce bandwidth usage. 10up Toolkit does this out-of-the-box using Webpack.

Files should also be Gzip or Brotli compressed. All major hosts already do this.

Remember: with modern HTTP/2 connections, it’s generally more performant to load several smaller JS files than a few large ones.

3.2 Defer non-critical JavaScript

To provide an optimal user experience, it’s crucial only to load critical JavaScript code during the initial page load. JavaScript that doesn’t contribute to above-the-fold functionality should be deferred.

Several approaches, outlined in sections 3.3 (Remove unused JavaScript code) and 3.4 (Leverage Code Splitting), can help achieve this. As an engineer, it’s essential to determine what JavaScript is critical versus non-critical for each project. Generally, any JavaScript related to above-the-fold functionality is considered critical, but this may vary depending on the project.
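One browser-side sketch of this idea (the task and timeout values are illustrative, not a 10up convention): schedule non-critical work during idle time, falling back to setTimeout where requestIdleCallback is unavailable.

```javascript
// Defer a non-critical task until the browser is idle. requestIdleCallback
// is not supported everywhere (notably Safari), so fall back to setTimeout.
function deferNonCritical(task) {
  if (typeof requestIdleCallback === "function") {
    // Run when the main thread is idle, but no later than 2s from now.
    requestIdleCallback(task, { timeout: 2000 });
  } else {
    setTimeout(task, 0);
  }
}

// Example: initialize a hypothetical below-the-fold widget after critical work:
// deferNonCritical(() => initCarousel());
```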

The script_loader_tag filter allows you to extend wp_enqueue_script to apply async or defer attributes on rendered script tags. This functionality can be found in 10up/wp-scaffold.

3.3 Remove unused JavaScript code

To optimize the performance of your website, it’s crucial to manage JavaScript bloat. Start by being mindful of what is requested during page load and the libraries and dependencies added to your JavaScript bundles.

First, remove unnecessary libraries and consider building custom solutions for required functionality instead of relying on bloated JS libraries that cover all edge cases.

Next, use the Chrome Coverage tool to analyze the percentage of unused code within your bundles. This provides insights into how much code is being used.

Finally, leverage BundleAnalyzer tools to understand better what is bundled into your JavaScript payloads. This allows you to clearly understand your bundler’s output, leading to more effective code splitting and optimizing your website’s performance.

3.4 Leverage code-splitting

Code splitting is a powerful technique that can significantly improve performance. It involves breaking JavaScript bundles into smaller chunks that can be loaded on demand or in parallel, especially as sites and applications scale.

This is important because JavaScript affects Core Web Vitals, and code splitting can easily improve scores with minimal engineering effort. Tools like Webpack make it easy to set up code splitting, and dynamic imports further support its implementation.

With code splitting, all chunks are lazy-loaded, reducing the likelihood of long tasks and render-blocking behavior and providing a streamlined implementation for better performance.

As an added benefit, if you know which chunks will be required on demand, you can preload them to be available sooner. To set up code splitting, you can use the Webpack SplitChunksPlugin.

3.5 Identify and flag long tasks

A long task refers to any JavaScript task on the main thread that takes longer than 50ms to execute. Long tasks are problematic as they block the main thread, preventing it from moving on to its next task in the pipeline and negatively impacting performance.

To improve performance, engineers must find ways to break up long tasks and prevent them from blocking the main thread. However, this can be challenging, especially when dealing with third-party vendor code that cannot be optimized directly.

This underscores the importance of practicing effective task management to mitigate the impact of long tasks on performance. The subject of task management is complex and requires an in-depth understanding of JavaScript and browser mechanics.

However, you can look into the following strategies if your code is causing long task behavior:

  1. Use setTimeout: leverage setTimeout to yield to the main thread. See example.
  2. Use async/await: you can use async/await to yield to the main thread. See example.
  3. Use isInputPending: navigator.scheduling.isInputPending lets you yield to the main thread only when the user is interacting with the page. See example.
  4. Use postTask: the Scheduler API’s scheduler.postTask allows you to set a priority on functions. It’s currently only supported in Chromium-based browsers but provides fine-grained control over how and when tasks execute on the main thread. See example.
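The first two strategies can be combined into a small helper that processes work in chunks and yields back to the main thread between them; a sketch (the chunk size and function names are illustrative):

```javascript
// Yield to the main thread: setTimeout(..., 0) schedules a new task,
// letting the browser handle input and rendering between chunks.
function yieldToMain() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process items in chunks, yielding between chunks so no single task
// exceeds the 50ms long-task threshold.
async function processInChunks(items, process, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(process(item));
    }
    if (i + chunkSize < items.length) {
      await yieldToMain();
    }
  }
  return results;
}
```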

4. Optimising CSS

Large and complex CSS files can slow page load times and affect interactivity. Optimizing CSS without sacrificing design quality is essential to ensure a fast and responsive website.

4.1 Minify CSS

Minifying CSS is a common practice that reduces data transfer between the server and browser. Since modern web pages carry a lot of CSS, compressing it is crucial to reduce bandwidth consumption.

Use build tools like Webpack or CDNs with query string parameters to minify CSS. Smaller CSS files can speed up download times and reduce render-blocking activity in the browser. 10up Toolkit does all this out of the box.

4.2 Leverage PurgeCSS

PurgeCSS is a library that removes unused CSS from a web page, trimming what the browser must download and parse. It analyzes your content and CSS files, matches the selectors used in your CSS against those found in your content files, and strips the selectors that are never used.

This results in smaller CSS files, but it may not work for dynamic class name values that are generated at runtime or change at build time. If your site’s CSS is primarily static, PurgeCSS is an excellent tool. At any time, you can leverage the Code Coverage tool in Google Chrome to determine how much of your CSS is unused.
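A minimal PurgeCSS configuration might look like the following sketch (the paths and safelist patterns are placeholders; adjust them to your build):

```javascript
// purgecss.config.js - illustrative only; paths and safelist are placeholders.
module.exports = {
  // Files PurgeCSS scans for class names that are actually used.
  content: ["./**/*.php", "./src/js/**/*.js"],
  // Stylesheets to strip unused selectors from.
  css: ["./dist/css/*.css"],
  // Keep dynamically generated classes PurgeCSS cannot see statically.
  safelist: [/^is-/, /^has-/],
};
```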

4.3 Avoid render-blocking CSS

Render-blocking CSS is a significant performance issue that engineers must tackle to ensure a positive user experience. Not all CSS required for the page should be loaded during the initial load. Some can be deferred or prioritized, depending on the use case. There are various ways to solve render-blocking CSS on a webpage, each with its caveats and variability. Here are some potential options:

  1. Embed critical and anticipated CSS using <style> tags. This can be a performant way to improve user experience. You may want to place CSS related to fonts, globals, or elements that appear above the fold so that the browser renders them faster.
  2. Use the Asynchronous CSS Technique to defer non-critical CSS. Only use this for CSS that may appear below the fold.
<link
 rel="preload"
 href="styles.css"
 as="style"
 onload="this.onload=null;this.rel='stylesheet'"
>
<noscript><link rel="stylesheet" href="styles.css"></noscript>

If you would like to identify and debug render-blocking CSS and JS you can toggle on ct.css in 10up/wp-scaffold. By appending ?debug_perf=1 to any URL in your theme, all render-blocking assets in the head will be highlighted on the screen.

4.4 Adhere to CSS Best Practices

How we write CSS can significantly affect how quickly the browser parses the CSSOM and DOM. The more complex the rule, the longer it takes the browser to calculate how it should be applied. Clean and optimized CSS rules that are not overly specific can help deliver a great user experience.

When writing performant CSS, there are certain guidelines to consider to ensure optimal performance. Here are some of the most important ones:

  1. Avoid importing base64 encoded images: this dramatically increases the file size of your CSS.
  2. Avoid importing one CSS file into another CSS file using @import: this can trigger knock-on network requests and lead to latency.
  3. Be cautious when animating elements, and avoid overusing them without careful consideration.
  4. Avoid animating or transitioning margin, padding, width, and height properties.
  5. Be mindful of elements that trigger a re-paint and reflow.

Use CSS Triggers to determine which properties affect the Browser Rendering Pipeline.

5. Optimising Fonts

Loading multiple or large font files can increase page load times and affect overall site performance. Optimizing fonts while maintaining design integrity is essential for a fast and responsive website.

5.1 Always set a font-display

Choosing the right font-display property can significantly affect how fast fonts are rendered. To get the fastest loading time, it’s recommended to use font-display: swap since it blocks the rendering process for the shortest amount of time compared to other options.

@font-face {
  font-family: 'WebFont';
  font-display: swap;
  src:  url('myfont.woff2') format('woff2'),
        url('myfont.woff') format('woff');
}

Note: For TypeKit, the font-display setting can be configured on the type settings page. For Google Fonts, you can append &display=swap to the end of the URL of the font resource.

5.2 Preconnect and serve fonts from a CDN

Font foundries like Google Fonts and TypeKit use CDNs to serve fonts. This speeds up load times and allows font files and CSS to be delivered to the browser faster. Since fonts are essential to the rendering process, ensuring the browser can fetch them early is crucial. To do this, preconnect to the necessary origins and use dns-prefetch as a fallback for older browsers:

<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="dns-prefetch" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link rel="dns-prefetch" href="https://fonts.gstatic.com">

5.3 Serve fonts locally

To achieve fast font rendering, serving fonts locally can be very effective. The CSS @font-face src list can include a local() entry naming the font family: the browser will first check whether the font is installed on the user’s operating system and, if so, use the local copy instead of downloading one. This is the quickest way to serve a font - but only if the user has it installed.

@font-face {
  font-family: 'WebFont';
  font-display: swap;
  src:  local('WebFont'),
        url('myfont.woff2') format('woff2'),
        url('myfont.woff') format('woff');
}

5.4 Subset fonts

Subsetting is a valuable technique for reducing font file size. When we add fonts to a site, we sometimes include subsets containing glyphs for languages the site never uses. This unnecessarily increases page load times. The @font-face unicode-range descriptor can be used to filter out unnecessary glyph ranges based on the specific requirements of your website.

@font-face {
  font-family: 'WebFont';
  font-display: swap;
  src:  local('WebFont'),
        url('myfont.woff2') format('woff2'),
        url('myfont.woff') format('woff');
  unicode-range: U+0025-00FF;
}

5.5 Leverage size-adjust

To reduce the impact of Cumulative Layout Shift (CLS) on web pages, the size-adjust property can be helpful. It helps normalize document font sizes and prevents layout shifts when switching fonts. This is especially important on slow connections, where a fallback font is initially rendered before the desired font is loaded.

/* Desired Font */
@font-face {
  font-family: 'WebFont';
  font-display: swap;
  src:  local('WebFont'),
        url('myfont.woff2') format('woff2'),
        url('myfont.woff') format('woff');
}

/* Fallback */
@font-face {
  font-family: 'WebFont Fallback';
  src:  local('Arial');
  size-adjust: 90%;
}

5.6 Font Delivery Considerations

When loading fonts, there are a few other things to keep in mind:

  1. Use WOFF2 (or WOFF) instead of EOT and TTF, as the legacy formats are no longer necessary.
  2. Load as few web fonts as possible; ideally, use a maximum of 2 fonts.
  3. Consider using variable fonts if your site requires many font weights or styles.

6. Optimising Resource Networking

Poorly optimized resource networking can lead to slow page load times and a poor user experience. To improve Core Web Vitals, it’s crucial to optimize resource networking by minimizing the number of requests made, reducing the file size of resources, and utilizing browser caching.

6.1 Use a CDN for static assets

In section 1.2, “Serve images from a CDN,” it was mentioned that using a CDN can help deliver more than just images. It can also serve CSS, JS, Fonts, and Rich Media from nearby global data centers.

To reduce latency and round trips through DNS servers, serve all static assets over a CDN as much as possible. Additionally, you can establish an early connection to the CDN origin by pre-connecting to ensure faster page load times.

6.2 Establish network connectivity early

To establish an early connection and improve the loading speed of resources critical to above-the-fold content rendering and interactivity, add the appropriate <link rel="preconnect" /> tag to the <head>.

By placing the tag in the <head>, the browser is notified to establish a connection as soon as possible. This approach, part of a family of techniques called Resource Hints, can speed up resource load times by 100-500ms. It’s crucial, however, to be selective: only preconnect to origins essential for the start and completion of page rendering.

Some examples of origins you may want to connect to early are your asset CDN, font providers such as fonts.gstatic.com, and any analytics or tag-management origin that must run during page load.

6.3 Prioritise and preload critical resources

Browsers try to prioritize scripts and resources as best as possible, but sometimes human intervention is needed. Engineers can influence the browser’s decisions using specific attributes and hints.

You can view the assigned priority for each resource in the Chrome Dev Tools Network tab. There are three main hints for influencing priorities:

  1. preconnect - hints to the browser that you’d like to establish a connection with another origin early.
  2. prefetch - hints to the browser that a non-critical resource will likely be needed soon, so it can be fetched at low priority during idle time.
  3. preload - allows you to request critical resources ahead of time.

Here are three scenarios where it would make the most sense to implement the <link rel="preload" /> tag:

  1. Any late-discovered resource referenced inside a CSS file, e.g., a background image or font, should be preloaded.
  2. When implementing Critical CSS, the remaining non-critical stylesheet should be preloaded to mitigate render-blocking CSS.
  3. Critical JavaScript chunk files that will be needed early should be preloaded.

Add a <link /> tag in the <head> to preload a resource. The as attribute tells the browser how best to set the resource’s prioritization, headers, and cache status.

<link rel="preload" as="style" href="/path/to/resource" />

6.4 Optimise TTFB (Server Response)

Although TTFB (Time to First Byte) is not classified as a Core Web Vital, it is a critical indicator of site speed. TTFB is the first measurable milestone in the loading sequence; every other performance metric sits downstream of it. If your TTFB is very slow, it will drag down FCP and LCP.

Achieving the lowest possible TTFB value is essential to ensure that the client-side markup can start rendering as quickly as possible. To improve TTFB, you need to focus less on the client side and more on what occurs before the browser begins to paint content. Factors to consider include hosting and server processing time, redirects, database queries, and full-page caching.

It’s highly advisable to leverage a well-planned caching strategy in order to optimize Time to First Byte. Suggested caching strategies are documented in the PHP Performance Best Practices.

You can read more about Optimizing TTFB on web.dev.
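In the browser, TTFB can be read off the navigation timing entry; a sketch (the commented usage is illustrative):

```javascript
// Derive TTFB from a PerformanceNavigationTiming entry: the time from
// the start of the navigation until the first byte of the response.
function timeToFirstByte(navEntry) {
  return navEntry.responseStart - navEntry.startTime;
}

// In the browser (illustrative):
// const [nav] = performance.getEntriesByType("navigation");
// console.log(`TTFB: ${timeToFirstByte(nav)}ms`);
```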

6.5 Leverage Adaptive Serving

You can use adaptive serving to help improve performance when network conditions are poor. This functionality can be implemented using the Network Information API to return data about the current network.

Adaptive serving allows you to decide on behalf of the user to handle circumstances like:

  1. Serving high-definition vs. low-definition resources
  2. Deciding whether or not to preload certain resources
  3. Warning users about poor network conditions to improve user experience
  4. Sending data to analytics to determine what percentage of your traffic uses your website under poor connectivity scenarios

navigator.connection.addEventListener("change", () => {
  sendBeacon(); // Send to Analytics (sendBeacon here is the site's own reporting helper)
  if (navigator.connection.effectiveType === "2g") {
    document.body.classList.add("low-data-mode");
  }
});

7. Optimising Third-party Scripts

Third-party scripts can significantly impact website performance and negatively affect Core Web Vitals metrics. Optimizing and managing third-party scripts is essential to ensure they don’t negatively impact user experience.

7.1 Identify slow third-party scripts

The best way to flag slow-performing third-party scripts is to use three Chrome DevTools features: Lighthouse, Network Request Blocking, and the Coverage tool.

Two audits will fail if you have third-party scripts causing performance to degrade:

  1. Reduce JavaScript execution time: highlights scripts that take a long time to parse, compile, or evaluate.
  2. Avoid enormous network payloads: identifies network requests—including those from third parties—that may slow down page load time.

To get more information on third-party scripts’ impact on performance, you can check the Third-party usage report generated after Lighthouse finishes auditing the site. You can also use the Coverage tool to identify scripts that load on the page but are mostly unused.

To demonstrate the impact of third-party scripts, you can use Chrome’s Network Request Blocking feature to prevent them from loading on the page. After blocking them, run a Lighthouse audit on the page to see the performance improvement.

Follow the instructions provided to use this feature.

7.2 Prioritize critical third-party scripts

To optimize the critical rendering path, prioritize loading the third-party scripts that are essential for rendering a valuable, interactive experience above the fold.

Only load a minimal amount of critical JavaScript during the initial page load. Defer any non-critical scripts until after the page load. Note that third-party scripts running JavaScript have the potential to obstruct the main thread, causing delays in page load if not fetched, parsed, compiled, and evaluated properly. As an engineer, you must decide which scripts to postpone.

Usually, deferring interactive chat or social media embed scripts will see performance benefits. Ad-tech and cookie consent libraries are unsuitable for this approach and should be loaded immediately.

7.3 Lazy-load scripts on interaction

It’s also possible to load third-party scripts (depending on the use case) on UI interaction. You can follow the guidance provided in 2.2 Load using the Facade Pattern or 2.3 Load on Visibility for details on achieving that functionality.
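For reference, a minimal sketch of the pattern: attach one-shot listeners and inject the script tag on the first user gesture. The target and doc parameters are injected so the helper stays easy to test; in the browser you would pass window and document, and the script URL is a placeholder.

```javascript
// Load a third-party script only on the first user interaction.
function loadOnFirstInteraction(target, doc, src, events = ["pointerdown", "keydown", "touchstart"]) {
  const inject = () => {
    // Remove all listeners so the script is only injected once.
    events.forEach((type) => target.removeEventListener(type, inject));
    const script = doc.createElement("script");
    script.src = src;
    script.async = true;
    doc.head.append(script);
  };
  events.forEach((type) => target.addEventListener(type, inject));
}
```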

7.4 Leverage service workers

Using a service worker is one way to cache scripts and improve performance, but there are some critical considerations. Setting up a service worker can be challenging, and your site must use HTTPS. Additionally, the resource you are caching must support CORS. If it does not, you will need to use a proxy server.

Use Google’s Workbox solution for implementing caching strategies with service workers. Service workers can cache static assets such as fonts, JS, CSS, and images. The service worker may not fetch the latest version if you are caching scripts that update frequently. In this instance, you must account for that using a network-first approach.
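Conceptually, the cache-first strategy Workbox provides boils down to something like this sketch (the cache and fetch dependencies are injected here for clarity; in a real service worker you would use caches.open() and the global fetch):

```javascript
// Cache-first: serve from cache when possible, otherwise hit the network
// and store the response for next time. Suitable for static, versioned
// assets - not for frequently updated scripts (use network-first there).
async function cacheFirst(request, cache, fetchFn) {
  const cached = await cache.match(request);
  if (cached) {
    return cached;
  }
  const response = await fetchFn(request);
  await cache.put(request, response);
  return response;
}
```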

7.5 Tag Manager implications

Google Tag Manager (GTM) is a commonly used tool for implementing tags and third-party snippets. While the GTM script is not always an issue, the amount of third-party code it injects can significantly impact performance.

In general, the impact on performance by tag type, from lowest to highest, is: image tags (pixels), custom templates, and Custom HTML. If you inject a vendor tag, the impact will depend on the functionality it adds to the site.

Do not inject scripts with visual or functional side effects during page load. Custom HTML tags get injected into the DOM and can cause unexpected issues. Avoid using Custom HTML that forces the browser to recalculate the layout or triggers a layout shift.

Injecting scripts can also negatively impact performance. Inject all scripts via a Tag Manager before the closing body tag instead of the <head>. Triggers, custom events, and variables add extra code to the parsing and evaluation of the tag manager script.

Loading the tag manager script appropriately is essential to fire triggers and events properly. Loading the script later in the DOM can help with this. It’s also important to periodically audit the container to ensure no duplicated or unused tags and to keep it up-to-date with current business requirements.

7.6 Ad Script Best Practices

Ad technologies can generate significant revenue for publishing clients, making it crucial to optimize for performance. However, complex ad implementations can negatively impact performance, primarily when ad configurations fire during the initial page load.

Implementing ad configurations requires an in-depth understanding of ad exchanges and scripts. Here are high-level guidelines to effectively implement ads on publisher sites:

  1. To minimize CLS, ensure space is reserved for ad units above the fold. Follow Google’s recommendation to determine the most frequently served ad unit in a slot and set the height of the reserved space accordingly.
  2. Load the ad script in the <head> as early as possible to ensure the browser parses and executes it promptly. To further improve this, you can preload the ad script so that it’s available to the browser sooner.
  3. Ensure that the ad script is loaded asynchronously by placing the async attribute on the script tag. This allows the browser to continue parsing the document while the script is fetched.
  4. Always load the script statically: place the script tag directly in the <head> markup. Never build the script dynamically using the createElement API; this can lead to latency issues and delayed execution of ad calls.
  5. The most important consideration for ensuring performant ad implementations is a healthy page experience. An optimized critical rendering path and low FCP and TTFB can improve ad performance.