Inlining critical CSS for first-time visits

After listening to Scott rave on about how much of a perceived-performance benefit he got from inlining critical CSS on first load, I thought I’d give it a shot over at The Session. On the chance that this might be useful for others, I figured I’d document what I did.

The idea here is that you can give a massive boost to the perceived performance of the first page load on a site by putting the most important CSS in the head of the page. Then you cache the full stylesheet. For subsequent visits you only ever use the external stylesheet. So if you’re squeamish at the thought of munging your CSS into your HTML (and that’s a perfectly reasonable reaction), don’t worry—this is a temporary workaround just for initial visits.

My particular technology stack here is using Grunt, Apache, and PHP with Twig templates. But I’m sure you can adapt this for other technology stacks: what’s important here isn’t the technology, it’s the thinking behind it. And anyway, the end user never sees any of those technologies: the end user gets HTML, CSS, and JavaScript. As long as that’s what you’re outputting, the specifics of the technology stack really don’t matter.

Generating the critical CSS

Okay. First question: how do you figure out which CSS is critical and which CSS can be deferred?

To help answer that, and automate the task of generating the critical CSS, Filament Group have made a Grunt task called grunt-criticalcss. I added that to my project and updated my Gruntfile accordingly:

grunt.initConfig({
    // All my existing Grunt configuration goes here.
    criticalcss: {
        dist: {
            options: {
                url: 'http://thesession.dev',
                width: 1024,
                height: 800,
                filename: '/path/to/main.css',
                outputfile: '/path/to/critical.css'
            }
        }
    }
});

I’m giving it the URL of my locally-hosted version of the site and some parameters to judge which CSS to prioritise. Those parameters are viewport width and height. Now, that’s not a perfect way of judging which CSS matters most, but it’ll do.

Then I add it to the list of Grunt tasks:

// All my existing Grunt tasks go here.
grunt.loadNpmTasks('grunt-criticalcss');

grunt.registerTask('default', ['sass', etc., 'criticalcss']);

The end result is that I’ve got two CSS files: the full stylesheet (called something like main.css) and a stylesheet that only contains the critical styles (called critical.css).

Cache-busting CSS

Okay, this is a bit of a tangent but trust me, it’s going to be relevant…

Most of the time it’s a very good thing that browsers cache external CSS files. But if you’ve made a change to that CSS file, then that feature becomes a bug: you need some way of telling the browser that the CSS file has been updated. The simplest way to do this is to change the name of the file so that the browser sees it as a whole new asset to be cached.

You could use query strings to do this cache-busting but that has some issues. I use a little bit of Apache rewriting to get a similar effect. I point browsers to CSS files like this:

<link rel="stylesheet" href="/css/main.20150310.css">

Now, there isn’t actually a file named main.20150310.css, it’s just called main.css. To tell the server where the actual file is, I use this rewrite rule:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.+)\.(\d+)\.(js|css)$ $1.$3 [L]

That tells the server to ignore those numbers in JavaScript and CSS file names, but the browser will still interpret it as a new file whenever I update that number. You can do that in a .htaccess file or directly in the Apache configuration.
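In a .htaccess file, the whole block might look something like this (a minimal sketch; the IfModule guard is optional):

<IfModule mod_rewrite.c>
    RewriteEngine On
    # Serve e.g. /css/main.20150310.css from /css/main.css
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^(.+)\.(\d+)\.(js|css)$ $1.$3 [L]
</IfModule>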

Right. With that little detour out of the way, let’s get back to the issue of inlining critical CSS.

Differentiating repeat visits

That number that I’m putting into the filenames of my CSS is something I update in my Twig template, like this (although this is really something that a Grunt task could do, I guess):

{% set cssupdate = '20150310' %}

Then I can use it like this:

<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
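And if you did want Grunt to handle updating that number, a minimal sketch might look something like this (the include path is hypothetical; grunt.template.today generates the date stamp):

grunt.registerTask('cssupdate', function () {
    // Write today's date stamp into a Twig include,
    // e.g. {% set cssupdate = '20150310' %}
    var stamp = grunt.template.today('yyyymmdd');
    grunt.file.write('templates/_cssupdate.twig',
        "{% set cssupdate = '" + stamp + "' %}\n");
});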

I can also use JavaScript to store that number in a cookie called csscached so I’ll know if the user has a cached version of this revision of the stylesheet:

<script>
document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>

The absence or presence of that cookie is going to be what determines whether the user gets inlined critical CSS (a first-time visitor, or a visitor with an out-of-date cached stylesheet) or whether the user gets a good ol’ fashioned external stylesheet (a repeat visitor with an up-to-date version of the stylesheet in their cache).

Here are the steps I’m going through:

First of all, set the Twig cssupdate variable to the last revision of the CSS:

{% set cssupdate = '20150310' %}

Next, check to see if there’s a cookie called csscached that matches the value of the latest revision. If there is, great! This is a repeat visitor with an up-to-date cache. Give ‘em the external stylesheet:

{% if _cookie.csscached == cssupdate %}
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">

If not, then dump the critical CSS straight into the head of the document:

{% else %}
<style>
{% include '/css/critical.css' %}
</style>

Now I still want to load the full stylesheet but I don’t want it to be a blocking request. I can do this using JavaScript. Once again it’s Filament Group to the rescue with their loadCSS script:

 <script>
    // include loadCSS here...
    loadCSS('/css/main.{{ cssupdate }}.css');

While I’m at it, I store the value of cssupdate in the csscached cookie:

    document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>

Finally, consider the possibility that JavaScript isn’t available and link to the full CSS file inside a noscript element:

<noscript>
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
</noscript>
{% endif %}

And we’re done. Phew!

Here’s how it looks all together in my Twig template:

{% set cssupdate = '20150310' %}
{% if _cookie.csscached == cssupdate %}
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
{% else %}
<style>
{% include '/css/critical.css' %}
</style>
<script>
// include loadCSS here...
loadCSS('/css/main.{{ cssupdate }}.css');
document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>
<noscript>
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
</noscript>
{% endif %}

You can see the production code from The Session in this gist. I’ve tweaked the loadCSS script slightly to match my preferred JavaScript style but otherwise, it’s doing exactly what I’ve outlined here.

The result

According to Google’s PageSpeed Insights, I done good.

Optimising https://thesession.org/


Responses

5880.me

inlining CSS

I loved Jeremy Keith’s post about inlining CSS – which I’m doing on this site, but not nearly as well as he spells out.

That post also gave a good tip about making a small .htaccess change to cache-bust CSS and JS files without using query strings or coupling a version-number-in-a-file to the HTML. The way I’d been doing it required changing at least two things every time I updated the CSS here.

However, I found that for my .htaccess, I needed to do something a little different from the rewrite he suggests:

RewriteRule ^(.+)\.(\d+)\.(js|css)$ $1.$3 [L] # His

RewriteRule ^(.*)\.([0-9-]+)\.(js|css)$ $1.$3 [L] # Mine

The whole .htaccess is in this gist, for clarity’s sake. Please do make suggestions for improvements.

# Wednesday, March 11th, 2015 at 8:29pm

Aaron T. Grogg

Originally published January 24, 2022. Last updated September 29, 2025.

In this series, as I “get to know” some technology, I collect and share resources that I find helpful, as well as any tips, tricks and notes that I collected along the way.

My goal here is not to teach every little thing there is to learn, but to share useful stuff that I come across and hopefully offer some insight to anyone that is getting ready to do what I just finished doing.

As always, I welcome any thoughts, notes, pointers, tips, tricks, suggestions, corrections and overall vibes in the Comments section below.

TOC

What

So, WPO is just a massive beast. There are so many parts, strewn across so many branches of tech and divisions and teams, that each “part” really deserves its own “Getting to Know” series. Maybe some day.

But for now, I am going to cover what I consider to be the most important high-level topics, drilling down into each topic a little bit, offering best practices, suggestions, options, tips and tricks that I have collected from around the Interwebs!

So, let’s get to know… WPO!

TOC ⇪

Why

Okay, so this is not really me getting to first-know WPO, but more like me getting to re-know WPO. This is a topic that I have been quite passionate about for some time, but, as with all things web-related, stuff changes, so I decided to dig in and find out how much of what I already knew is still valid, how much has changed, and how much new stuff there is out there.

The first couple of things to understand are that a) “web performance” is not just about making a page load faster so someone can get to your shopping cart faster, and b) not everyone has blazing fast Internet and high-powered devices.

The web is more than just looking at cat pics, sharing recent culinary conquests or booking upcoming vacations. People also use the web for serious life issues, like applying for public assistance, shopping for groceries, dealing with healthcare, education and much more.

And for the 2021 calendar year, GlobalStats puts Internet users at 54.86% Mobile, 42.65% Desktop and 2.49% Tablet, and of those mobile users, 29.24% are Apple and 26.93% are Samsung, with average worldwide network speeds in November 2021 of 29.06 Mbps Download and 8.53 Mbps Upload.

And remember, those are averages, skewed heavily by the highly-populated regions of the industrialized world. Rural areas and developing countries are lucky to get connections at all.

So for people that really depend on the Internet, and may not have the greatest connection, nor the most powerful device, let’s see what we can do about getting them the content they want/need, as fast and reliably as possible.

TOC ⇪

Getting Started

This was a tough one to get started on, and certainly to collect notes for, because, as I mentioned above, the topics are so wide, that it took a lot to try to pull them all together…

WPO touches on server settings, CDNs, cache control, build tools, HTML, CSS, JS, file optimizations, Service Workers and more.

In most organizations, this means pulling together several teams, and that means getting someone “up the ladder” to buy into all of this to help convince department heads to allocate resources (read: people, so read: money)…

Luckily, there have been a LOT of success stories, and they tend to want to brag (rightfully so!), so it has actually never been easier to convince bosses to at least take a look at WPO as a philosophy!

TOC ⇪

BLUF

You’ll find details for all of these below, but here are the bullets, more-or-less in order…

  1. HTTP3 before HTTP2, HTTP2 before HTTP1.1
  2. Cache-Control header, not Expires
  3. CDN
  4. preconnect to third-party domains and sub-domains
  5. preload important files coming up later in page
  6. prefetch resources for next page
  7. prerender pages likely to be navigated to next (deprecated; use Speculation Rules instead)
  8. fetchpriority to suggest asset importance
  9. Split CSS into components/@media sizes, load conditionally
  10. Inline critical CSS, load full CSS after
  11. Replace JS libraries with native HTML, CSS and JS, when possible
  12. Replace JS functionality with native HTML and CSS, when possible
  13. async / defer JS, especially 3rd party
  14. Split JS into components, load conditionally / if needed
  15. Avoid Data-URIs unless very small code
  16. Embedded SVG before icon fonts, icon fonts before image sprites, image sprites before numerous icon files
  17. WOFF2 before WOFF
  18. font-display: swap
  19. AVIF before WEBP, WEBP before JPG/PNG
  20. Multiple crops for various screen sizes / connection speeds
  21. srcset / sizes attributes for automated size swap
  22. media attribute for manual size swap
  23. loading="lazy" for initially-offscreen images
  24. WEBM before MP4, MP4 before GIF
  25. preload="none" for initially-offscreen videos
  26. width / height attributes on media and embeds, use CSS to make responsive
  27. Optimize all media assets
  28. Lazy-load below-the-fold content
  29. Reserve space for delayed-loading content, like ads and 3rd-party widgets
  30. Create flat/static versions of dynamic content
  31. Minify / compress text-based files (HTML, CSS, JS, etc.)
  32. requestIdleCallback / requestAnimationFrame / scheduler.yield to pause CPU load / find better time to run tasks
  33. Service Worker to cache / reduce requests, swap content before requesting

Note that the above are all options, but might not all work / be possible everywhere. And each should be tested to see if it helps in your situation.

TOC ⇪

Glossary

We need to define some acronyms and terms to ensure we’re all speaking the same language… In alpha-order…

CLS (Cumulative Layout Shift): Visible layout shift as a page loads and renders

Core Web Vitals: Three metrics that score a page load:

  1. LCP: ideally <= 2.5s
  2. INP: ideally <= 200ms
  3. CLS: ideally <= 0.10

Critical Resource: Any resource that blocks the critical rendering path

CRP (Critical Rendering Path): Steps a browser must complete to render the page

CrUX (Chrome User Experience Report): Performance data gathered from RUM within Chrome

DSA (Dynamic Site Acceleration): Storing dynamic content on a CDN or edge server

FCP (First Contentful Paint): First DOM content paint is complete

FID (First Input Delay): Time a user must wait before they can interact with the page (still a valid KPI, but use INP instead)

FMP (First Meaningful Paint): Primary content paint is complete; deprecated in favor of LCP

FP (First Paint): First pixel painted (not really used anymore; use LCP instead)

INP (Interaction to Next Paint): Longest time for a user interaction to complete

LCP (Largest Contentful Paint): Time for the largest page element to render

Lighthouse: Google lab-testing software; analyzes and evaluates a site, returns a score and possible improvements

LoAF (Long Animation Frame): Longest render task

PoP (Points of Presence): CDN data centers

Rel mCvR (Relative Mobile Conversion Rate): Mobile Conversion Rate / Desktop Conversion Rate

RUM (Real User Monitoring): Data collected from real user experiences, not simulated

SI (Speed Index): Calculation of how quickly page content is visible

SPOF (Single Point of Failure): Critical component that can break everything if it fails or is missing

Synthetic Monitoring: Data collected from simulated lab tests, not real user experiences

TBT (Total Blocking Time): Time between FCP and TTI

TTFB (Time to First Byte): Time until the first byte is received by the browser

TTI (Time to Interactive): Time until the entire page is interactive

Tree Shaking: Removing dead code from a code base

WebPageTest: Synthetic monitoring tool; offers free and paid versions

TOC ⇪

Notes

  • CRP (Critical Rendering Path)

    1. To start to understand web performance, you must first understand the CRP.
    2. The CRP is the steps a browser must go through before it can render (display) anything on the screen.
    3. This includes downloading the HTML document and parsing it, as well as finding, downloading and parsing any CSS or JS files (that are not async or defer).
    4. All of the above document types are considered render-blocking assets, as the browser cannot render anything to the screen until they are all downloaded and parsed.
    5. These actions allow the browser to create the DOM (the structure of the page) and the CSSOM (the layout and look of the page).
    6. Any JS files that are not async or defer are render-blocking because their contents could affect the DOM and/or CSSOM.
    7. Naturally, downloading less is always better: less to download, less to parse, less to construct, less to render, less to maintain in memory.

  • Three cardinal rules
    1. Reduce bytes: the fewer bytes, the faster the download.
    2. Reduce critical resources:
       HTML = 1
       each CSS += 1
       each JS += 1 // unless `async` or `defer`
    3. Reduce CRP length: the browser can download CSS/JS at the same time, but each HTTP request maxes at 8kb, so if:
       HTML  = 5kb
       CSS   = 4kb
       JS    = 2kb
       ----------------
       TOTAL = 11kb

       CRP length = 2 (1 for the HTML, 1 for the CSS/JS)

  • Three levels of UX

    As the above process is happening, the user experiences three main concerns:

    1. Is anything happening?

      Once a user clicks something, if there is no visual indicator that something is happening, they wonder if something broke, and so the experience suffers.

    2. Is it useful?

      Once stuff does start to appear, if all they see is placeholders, or partial content, it is not useful yet, so the experience suffers.

    3. Is it usable?

      Finally, once everything looks ready, is it? Can they read anything, or interact yet? If not, the experience suffers.

  • Three Core Web Vitals

    The three questions above drive Google’s Core Web Vitals: they are an attempt to quantify the user experience.

    1. “Is anything happening?” becomes LCP (Largest Contentful Paint)

      When the primary content section becomes visible to the user. Considerations:

      • TTFB is included in LCP (see below).
      • CSS, JS, custom fonts, images, can all delay LCP.
      • Not just download time, but also processing time: DOM has to be rendered, CSS & possibly JS processed, etc.
      • Remember, each asset can request additional assets, creating a snowball effect.

    2. “Is it useful?” becomes INP (Interaction to Next Paint)

      Time for the page to complete a user’s interaction. Considerations:

      • How busy the browser already is when the user interacts.
      • How resource-intensive the interaction is.
      • Whether the interaction requires a fetch before it can respond.

    3. “Is it usable?” becomes CLS (Cumulative Layout Shift)

      Layout shifts as a page loads and renders. Considerations:

      • Delay in loading critical CSS and/or font files.
      • JS updates to the page after the initial render.
      • Dynamic or delayed content loading into the page.

    And because TTFB is included in LCP, and might indicate issues related to INP, I consider it to be the D’Artagnan of the three Core Web Vitals (sort of their fourth Musketeer)… ;-)

    1. TTFB (Time to First Byte)

      Time between the browser requesting, and receiving, the asset’s first byte. Considerations:

      • Connection speed, network latency, server speed, database speed.
      • Distance between user and server (even at near light speed, it still takes time to travel around the world).
      • Static assets are always faster than dynamic ones.
      • Usually discussed relating to the page/document, but also part of every asset request, including 3rd party.

  • Analysis Prep

    The first step to testing is analysis.

    1. Find out who your audience is

      • Where they are geographically, what type of network connection they typically have, what devices they use.
      • This is “field data”, coming from your real-life market, ideally via analytics.

    2. Look for key indicators

      • Any problems your project has, to determine goals.
      • Perhaps your site has a TTFB of 3.5 seconds, and a LCP of 4.5 seconds, and a CLS of 0.34.
      • All of these have room for improvement, so they are great candidates for goals.

  • Goals and Budgets

    A goal is a target that you set and work toward, such as “TTFB should be under 300ms” or “LCP should be under 2s”.

    A budget is a limit you set and try to stay under, such as “no more than 100kb of JS per page” or “hero image must be under 200kb”.
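    If you want a budget to be machine-checkable, Lighthouse understands a budget.json along these lines (a minimal sketch using the example numbers above; sizes are in kilobytes):

    [
        {
            "path": "/*",
            "resourceSizes": [
                { "resourceType": "script", "budget": 100 },
                { "resourceType": "image", "budget": 200 }
            ]
        }
    ]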

    • How to choose goals?

      • Maybe compare your site against your competition’s.
      • Google’s Core Web Vitals could be another consideration.
      • Goals have to be achievable: If your current KPIs are too far from your long-term goals, consider creating less-aggressive short-term goals; you can always re-evaluate periodically.

    • How to create budgets?

      • Similar to goals, compare against competition, research current stats, or look for problem areas and set limits to control them.
      • For existing projects, starting budget can be “no worse than it is right now”…
      • For new sites, can start with “talking points” for the team, to help set limits on a project, then refine as needed.
      • Budgets can change as the site changes; Reach a new goal? Adjust budget to reflect that. Adding a new section/feature? That will likely affect the budget.

    • How to stick to budgets?

      • Lighthouse CI integrates with GitHub to test changes during builds, stopping deployments.
      • Speedcurve dashboard sets budgets, monitors, and notifies team of failures.
      • Calibre estimates how 3rd party add-ons will affect site, or how ad-blockers will affect page loads.
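      To make that concrete, Lighthouse CI reads its assertions from a small config file, roughly like this (a minimal sketch; the URL and score threshold are placeholders):

      // lighthouserc.js
      module.exports = {
          ci: {
              collect: { url: ['http://localhost:8080/'] },
              assert: {
                  assertions: {
                      // fail the build if the performance score drops below 0.9
                      'categories:performance': ['error', { minScore: 0.9 }]
                  }
              }
          }
      };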

    • What if a change breaks the budgets?

      • Optimize feature to get back under budget.
      • Remove some other feature to make room for the new one.
      • Don’t add the new feature.

    These are tough decisions and require buy-in from all parties involved. The goal is to improve your site, not create civil wars. Both goals and budgets need to be realistic, or no one is going to be able to meet them, and then they just become a source of friction and will soon be ignored.

  • Analysis process

    But remember, Synthetic and RUM will never be identical; the lab has constraints that the real world doesn’t, and the real world has variables that the lab doesn’t.

    These are two different testing environments, each with its pros and cons, intended for two very different purposes.

    I always say “Synthetic is what should be, and RUM is what is”; we can only control so much in the real world, so we try to polish everything as much as possible, hoping that when real-world variables do come into play, our site is stable enough that they won’t affect things too badly.

  • Analysis tools

    Some tools allow you to test in live browsers, on a variety of devices, alter connection speeds, run multiple tests, and receive reports of request, delivery, and processing times for all of it.

    Other tools allow you to run your site against a defined set of benchmarks, receiving estimations of speeds and reports with improvement suggestions.

    Still other tools let you work right in your browser, testing live or local sites, hunting for issues, testing quickly.

    (There are numerous RUM and Synthetic providers out there, but below I will focus on those that are, or at least offer, free levels. If you think I should add something, let me know.)

    • CrUX
      • RUM data, limited to opt-in Chrome users only, if your site gets sufficient traffic.
      • Shows Core Web Vitals and other KPI data, offers filters (Controls) and drill-down into each KPI.
      • The data collected by CrUX is also available via API, BigQuery, and is ingested into several other tools, such as PageSpeed Insights (see below), DebugBear, GTMetrix and more.

    • PageSpeed Insights

      • Pure lab testing, no real devices, just assumptions based on the code.
      • Results are split into Mobile & Desktop, with a nice score up front, including grades on the Core Web Vitals, followed by things that you could try in order to improve those scores, and finally things that went well.
      • This tool is powered by the same tech that powers DevTools’ Lighthouse tab and Chrome extension.

    • WebpageTest

      • Actual run-tests, in real browsers on real devices, in locations around the globe.
      • Device and browser options vary depending on location.
      • Advanced Settings vary depending on device and browser.
      • Advanced Settings tabs allow you to Disable JS, set UA string, set custom headers, inject JS, send username/password, add Script steps to automate actions on the page, block requests that match substrings or domains, set SPOF, and more.
      • If you do multiple runs, “Performance Runs” displays the median run, for both first and repeat views.
      • Waterfall reports:
        • Note color codes above report.
        • Start Render, First Paint, etc. draw vertical lines down the entire waterfall, so you can see what happens before & after these events, as well as which assets affect those events.
        • Wait, DNS, etc. show in the steps of a connection.
        • Light | Dark color for assets indicate Request | Download time.
        • Click each request for details in a scrollable overlay; also downloadable as JSON.
        • JS execution tells how long each file takes to process.
        • At the bottom of the waterfall, the Main Thread flame chart shows how hard the browser was working across the timeline.
        • To the right of each waterfall is a filmstrip to help view TTFB and LCP; the Timeline compares the filmstrip with the waterfall, so you can see when the page becomes visible, and how assets affect it.
        • Check all tabs across top (Performance Review, Content Breakdown, Processing Breakdown, etc.) for many more features.

      • The paid version offers several beneficial features, but the free version is likely enough to at least get you started.

    • BrowserStack

      • Test real browsers on remote servers.
      • Also offer automated testing, including pixel-perfect testing.
      • Free version has usage limits.

    • Browser DevTools

      • All modern browsers have one, and all vary slightly.
      • My personal opinion is that Chrome’s DevTools is the standard, with its Performance tab and Lighthouse extension.
      • Other browsers also have strengths that Chrome lacks, such as Firefox’s great CSS grid visualizer and Safari’s ability to connect directly to an iPhone and inspect what is currently loaded in the device’s Safari browser.
      • I do not use any other browser enough to have an opinion, but would welcome any notes you care to share in the Comments section below. ;-)

  • Analysis Example

    For this process, I recommend Chrome, if only for the Lighthouse integration:

    1. Open site in Incognito
    2. Open DevTools
    3. Go to Lighthouse tab
    4. Under “Category”, only check “Performance”
    5. Under “Device” check “Mobile”
    6. Check “Simulated throttling” (click “Clear storage” to simulate a fresh cache)
    7. Click “Generate Report”
    8. Look for issues
    9. Fix one issue
    10. Re-run audit
    11. Evaluate audit for that one change
    12. Determine if change was improvement, decide to keep or revert
    13. Repeat from step 7, until no issues or happy with results

    Chrome > DevTools > Lighthouse

TOC ⇪

Tips

    • Server

      • Use HTTP/2

        Created in 2015, primarily focused on mobile and server-intensive graphics/videos, it is based on Google’s SPDY protocol, which focuses on compression, multiplexing, and prioritization.

        Key differences:

        • Binary, instead of textual.
        • Fully multiplexed, instead of ordered and blocking.
        • Can use one connection for parallelism.
        • Uses header compression to reduce overhead.
        • Allowed servers to “push” responses proactively into client caches (this has since been removed from the spec).
        • If a device does not support HTTP/2, it will automatically degrade to HTTP/1.x.

        All major browsers support it; IE11 works on Win10 only; IE<11 not supported at all.
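        On Apache, for example, enabling it is a one-liner (a sketch, assuming mod_http2 is loaded and HTTPS is already set up):

        # Prefer HTTP/2 where the client supports it, falling back to HTTP/1.1
        Protocols h2 http/1.1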

      • HTTP/3

        Not widely adopted yet, but growing. Supported across all major browsers; considered Baseline.

        There are numerous differences, but the key improvement is that it uses QUIC instead of TCP, which is faster and more secure.

      • TLS

        Upgrade to 1.3 to benefit from reduced handshakes for renewed sessions, but be sure to follow RFC 8446 best practices for possible security concerns.

      • Cache Control

        While not helping with the initial page load, setting a far-future Cache-Control header for files that do not change often tells the browser it does not even need to ask for the file.

        This is in contrast to the Expires header that we used to use, which would prevent re-sending a file that had not yet expired, but still required the browser to ask the server about it.

        And there is no faster response than one not made…
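        A minimal Apache sketch (assuming mod_headers, and that file names change when their contents do, as discussed elsewhere on this page):

        <FilesMatch "\.(css|js|woff2|avif|webp)$">
            Header set Cache-Control "public, max-age=31536000, immutable"
        </FilesMatch>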

    • Website (CMS, Site Generator, etc.)

      Nothing could ever be faster than static HTML, but it is not very scalable, unless you really like hand-cranking HTML.

      Assuming you are not doing that…

      Database-driven

      • Any site that uses a database, like WordPress, Drupal or some other CMS, suffers from the numerous database requests required to build the HTML before it can be sent to the user.
      • The best thing you can do here is caching the site pages, likely via a caching plugin.
      • Caching plugins pre-process all possible pages of a site (dynamic pages, like search, are hard to do) and create flat HTML versions, which they then send to users.
      • This (mostly) bypasses the database per page request, delivering as close to the static HTML experience as possible.

      SSG

      • SSG sites, like Jekyll, Gatsby, Hugo, Eleventy, etc., leave little work to do in this section.
      • As they are already pre-processed, static HTML and have no database connections, there is not much to do aside from the sections covered below under “Frontend“.

      CSR

      • CSR sites, like Angular, React, Vue, etc., have an initial advantage of delivering typically very small HTML files to the user, because a CSR site typically starts with a “shell” of a page, but no content. This makes for a very fast TTFB!
      • But then the page needs to download, and process, a ton of JS before it can even begin to build the page and show something to the user. This often makes for a pretty terrible LCP and INP, and usually CLS.
      • Aside from framework-specific performance optimizations, there is not much to do aside from the sections covered below under “Frontend“.

      SSR

      • In an attempt to solve their LCP and INP issues, CSRs realized they could also render the initial content page on the server, deliver that, then handle frontend interactions as a typical CSR site.
      • Depending on the server speed and content interactions, this should solve the LCP and hopefully CLS issues, but even the SSR often needs to download, and process, a lot of JS, in order to be interactive. Therefore, INP can still suffer.
      • Aside from framework-specific performance optimizations, there is not much to do aside from the sections covered below under “Frontend“.

      SSR w/ Hydration

      • Another wave of JS-created sites arrived, like Svelte, that realized they could benefit by using a build system to create “encapsulated” pages and code base.
      • Rather than delivering all of the JS needed for the entire app experience, these sites package their code in a way that allows it to deliver “only the code that is needed for this page”.
      • This method typically maintains the great TTFB of its predecessors, but also takes a great leap toward better LCP and INP, and possibly CLS.
      • Aside from framework-specific performance optimizations, there is not much to do aside from the sections covered below under “Frontend“.

    • Database

      • Create indexes on tables.
      • Cache queries.
      • When possible, create static cached pages to reduce database hits.

    • CDN

      • Distributed assets = shorter latency, quicker response.
      • Load-balancers increase capacity and reliability.
      • Typically offer automated caching patterns.
      • Some also offer automated media compression.
      • Some also offer dynamic content acceleration.
      • All the ones I know of offer at least some level of reporting.
      • Enterprise options can be quite expensive, but for personal sites, you can find free options.

    • Frontend

      The basics here read like a modified “Reduce, Reuse, Recycle”: “Reduce, Minify, Compress, Cache”.

      Reduce

      Our first goal should always be to reduce whatever we can: the less we send through the Intertubes, the faster it can get to the user.

      • HTML/XML/SVG

      Minify / Optimize

        If something must be sent to the browser, remove everything from it that you can.

        • HTML/XML/SVG

      Compress

        Once only the absolutely necessary items are being sent, and everything is as small as it can be, it is time to compress it before sending.

        The good thing here is that compression is mostly a set-it-and-forget-it technique. Once you know what works for your users, set it up, make sure it is working, and move on…

        • HTML/XML/SVG/CSS/JS/JSON/Fonts

      Cache

        Once the deliverables are as few and small as possible, there is nothing more we can do for the initial page load.

        But we can make things better for the next page load by telling the browser it does not need to fetch it again.

        In addition to server/CDN caching, we can also cache some data in the browser. Depending on the data type, we can use:

        • Cookies

      Tools

              There are many, many, many tools and options that can perform most of the tasks referenced above… I will list a few here, feel free to share your favorites in the Comments section below.

              Minification

              • You can do this manually (Beautifier, Minify, BeautifyTools), one file at a time, but that gets pretty tiresome for things like CSS and JS, which you might edit often.
              • Ideally this is handled automatically during your build or deployment process, but there are so many options that you would need to search for one that works with your current process.
              • You can also set JS minifiers to obfuscate code, reducing size beyond just whitespace. These make the resulting code more-or-less unreadable to humans, but it still works just fine for machines.
              • There are also tools like UnCSS that look for CSS that your site isn’t using and remove it; most options have an online version (manual) and a build version (automatic).
              • Tree Shaking tries to do the same thing for JS, via bundlers such as Rollup, Webpack, and many others; again, it depends greatly on your current process.

              Compression

              This is handled on your server, and is, for the most part, a “set it and forget it” feature.

              Nearly every browser supports Gzip and Brotli now, and Zstandard support is growing.

              Ask your analytics which is right for you… ;-)

              • Gzip compresses and decompresses files on the fly, as requested, deflating before sending, inflating again in the browser. Gzip is supported by basically every browser out there.
              • Brotli also processes at run-time, but usually gets better results than Gzip; it lacks support in older, now-outdated browsers, but is otherwise ubiquitous.
              • Zstandard is an improvement over Gzip, but is currently not as well-supported.
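              As an Apache sketch (assuming mod_deflate; mod_brotli works the same way with its BROTLI_COMPRESS filter):

              # Gzip text-based responses on the fly
              AddOutputFilterByType DEFLATE text/html text/css application/javascript application/json image/svg+xml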

              Optimization for Images

              Optimizing images for the web is an absolute must! I have seen 1MB+ images reduce to less than 500kb. That is real time saved by your users.

              • OptImage is a desktop app, offering subscriptions or limited optimizations per month for free. Can handle multiple image formats, including WEBP.
              • This can also be done during build time or deployment; essentially every build process offers a service for this, you would just need to search for one that fits yours.
              • It can also be done at run-time, via a service like Cloudinary, which works as a media repo, though that has the costs of added latency and a possible new point of failure.
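              Tying several of the image tips from the BLUF list together, the markup side might look like this (a sketch; file names are hypothetical):

              <picture>
                  <source srcset="photo.avif" type="image/avif">
                  <source srcset="photo.webp" type="image/webp">
                  <img src="photo.jpg" alt="A description of the photo" width="800" height="600" loading="lazy">
              </picture>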
              Note that you should probably be replacing your old standard JPG and PNG images with WEBP or AVIF formats. You might want to do a few conversions and see which performs better for you. Also note that some CDNs offer a “hot swap” technique where, even if your HTML asks for the JPG format, it will return the AVIF format if the user’s browser supports it… Very handy!

              Optimization for Videos

              In my opinion, all videos should be served via YouTube or Vimeo, as they will always be better at compressing and serving videos than the rest of us.

              But of course there are situations where that isn’t wanted, practical or ideal.

              So if you must serve your own videos…
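              A sketch of the markup side, reflecting the video tips from the BLUF list (file names are hypothetical):

              <video controls preload="none" poster="poster.jpg" width="640" height="360">
                  <source src="clip.webm" type="video/webm">
                  <source src="clip.mp4" type="video/mp4">
              </video>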

              Optimizing Fonts

              I am also of the opinion that native fonts are the best way to go, requiring no additional files to be downloaded, and incurring no related CLS.

              But again, that is not always wanted, practical or ideal.

              So if you must use web fonts…

              • It is usually recommended to download & serve fonts from your own domain. This reduces any third-party latency and eliminates any chance of the fonts on your website failing if someone else is having a server issue.
              • But you can also use third-party web fonts, just be aware of the concerns raised above.
              • Zach Leatherman wrote a great tutorial on setting up fonts.
              • Note that WOFF2 is currently the preferred format, but check your analytics, as you might still need to offer WOFF as a fallback.
              • “Subsetting”, where you remove parts of the font that you don’t use (characters, weights, styles, etc.), can be a powerful tool. FontSquirrel is a manual version, glyphhanger is an automated tool.
              • Variable fonts are also an option, and they can be WOFF2.
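              Pulling those font tips together (a sketch; the family name and file paths are hypothetical):

              @font-face {
                  font-family: 'BodyFont';
                  src: url('/fonts/body-subset.woff2') format('woff2'),
                       url('/fonts/body-subset.woff') format('woff');
                  /* show fallback text immediately, swap in the web font when ready */
                  font-display: swap;
              }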

            TOC ⇪

            Tricks

            • This is a collection of tips that you might want to try employing. Remember, very few things are right everywhere, and not everything is going to fix the problems you might have…

            Throw a “pre” party
              • preconnect
                • For resources hosted on another domain that will be fetched in the current page.
                • Sort of a DNS pre-lookup.
                • Add a link element to the head to tell the browser “you will be fetching from this site soon, so try to open a connection now”:
                   <head> ... <link rel="preconnect" href="https://maps.googleapis.com"> ... <script src="https://maps.googleapis.com/maps/api/js?key=1234567890&callback=initMap" async></script> </head> 

              • preload

                • For resources that will be needed later in the current page.
                • Add a link element to the head to tell the browser “you will need this soon, so try to download it as soon as you can”:
                   <head> ... <link rel="preload" as="style" href="style.css"> <link rel="preload" as="script" href="main.js"> ... <link rel="stylesheet" href="style.css"> </head> <body> ... <script src="main.js" defer></script> </body> 
                • You can preload several types of files.
                • The rel attribute should be "preload".
                • The href is the asset’s URL.
                • It also needs an as attribute.
                • You can optionally add a type attribute to indicate the MIME type:
                   <link rel="preload" as="image" href="image.avif" type="image/avif"> 
                • You can optionally add a crossorigin attribute for CORS fetches:
                   <link rel="preload" as="font" href="https://font-store.com/comic-sans.woff2" type="font/woff2" crossorigin> 
                • You can optionally add a media attribute to conditionally load something:
                   <link rel="preload" as="image" href="image-small.avif" type="image/avif" media="(max-width: 599px)"> <link rel="preload" as="image" href="image-big.avif" type="image/avif" media="(min-width: 600px)"> 
                • You can optionally add a fetchpriority attribute to suggest a non-standard download priority:
                   <link rel="preload" as="image" href="hero-image.avif" type="image/avif" fetchpriority="high"> 

              • prefetch

                • For resources that might be used in the next page load. prefetch has limitations, including browser support, that make Speculation Rules a better option for this functionality.
                • Add a link element to the head to tell the browser “you might need this on a future page, so try to download it as soon as you can”:
                   <link rel="prefetch" href="blog.css"> 

              • prerender

                • Deprecated; use Speculation Rules (below) instead.

              Add Speculation Rules

              • Offers a slew of configuration and priority options to conditionally “preload” or “prefetch” documents and/or files, based on an assumption of what the user might need next.
              • Although not currently standardized, and currently only supported in Chromium browsers, Speculation Rules can provide a great performance boost.
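              A sketch of what the rules look like in-page (the URL patterns are hypothetical):

              <script type="speculationrules">
              {
                  "prefetch": [
                      { "urls": ["/next-article/"] }
                  ],
                  "prerender": [
                      { "where": { "href_matches": "/articles/*" }, "eagerness": "moderate" }
                  ]
              }
              </script>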

              Add Critical CSS in-page

              • A great way to benefit all three Core Web Vitals is to add the page’s “critical CSS” in-page in a style block, then load the full CSS file via a link.
              • This gives the dual benefit of getting the “above-the-scroll” CSS downloaded and ready as quickly as possible, while also caching the complete CSS for later subsequent page loads.
              • This is another technique that is best handled during a build/deployment process, and another tool that has many, many options.
              • A good primer article can be found on Web.dev, and more info and options can be found on Addy Osmani’s GitHub.
              • Jeremy Keith offers a terrific add-on to this technique by inlining the critical CSS only the first time someone visits, then relying on the cached full CSS file for repeat page views. This helps reduce page bloat for subsequent page visits.

              Conditionally load CSS

              • Use a media attribute on link tags to conditionally load them:

                 <!-- only referenced for printing -->
                 <link rel="stylesheet" href="./css/main-print.css" media="print">

                 <!-- only referenced for landscape viewing -->
                 <link rel="stylesheet" href="./css/main-landscape.css" media="(orientation: landscape)">

                 <!-- only referenced for screens at least 40em wide -->
                 <link rel="stylesheet" href="./css/main-wide.css" media="(min-width: 40em)">

              • Note that while all of the above are only applied in the scenarios indicated (print, etc.), all of them are actually downloaded and parsed by the browser as the page loads. But they are all downloaded and parsed in the background… (This is foreshadowing, so remember it!)

              Split CSS into “breakpoint” files

              • Taking the above conditional-loading technique a step further, you could split your CSS based on @media breakpoints, then conditionally load them using the same media trick above.

                 <!-- all devices get this -->
                 <link rel="stylesheet" href="main-base.css">

                 <!-- only devices matching the min-width get each of these -->
                 <link rel="stylesheet" href="main-min-480.css" media="(min-width: 480px)">
                 <link rel="stylesheet" href="main-min-600.css" media="(min-width: 600px)">
                 <link rel="stylesheet" href="main-min-1080.css" media="(min-width: 1080px)">

              • Even if not needed right now, will still download in the background, then will be ready if it is needed later.
              You could obviously break this into any combination or structure that makes sense and is necessary for your site, but remember that all of these files, although in the background, are still downloading, and taking bandwidth away from other file downloads.

              Split CSS into component files
              • Taking the above splitting and conditional-loading approach beyond print or min-width, you can also break your CSS into sections.
              • Create one file for all of your global CSS (header, footer, main content layout), then create a separate file for just Home Page CSS that is only loaded on that page, Contact Us CSS that is only loaded on that page, etc.
              • The practicality of this technique would depend on the size of your site and your overall CSS.
              • If your site and CSS are small, then a single file cached for all pages makes sense.
              • If you have lots of unique sections and components and widgets and layouts, then there is no need for users to download the CSS for those sections until they visit those sections.

              Prevent blocking CSS, load it “async”

              • Remember that “downloaded and parsed in the background” bit from above? Well here is where it gets interesting…
              • Because, while neither link nor style blocks recognize async or defer attributes, files with media attributes that are not currently true do actually load async, meaning they are not render-blocking…
              • This means we can kind of “gently abuse” that feature with something like this:

                 <link rel="stylesheet" href="style.css" media="print" > 

                Note that onload event at the end? Once the file has downloaded, async “in the background”, the onload event changes the link’s media value to all, meaning it will now affect the entire current page!

              • While you wouldn’t want this “async CSS” to change the visible layout, as it might harm your CLS, it can be useful for below-the-scroll content or lazy-loaded widgets/modules/components.

              Enhancing optimistically

              • The Filament Group came up with something they coined “Enhancing optimistically“.
              • This is when you want to add something to the page via JS (like a carousel, etc.), but know something could go wrong (like some JS might not load).
              • To prepare for the layout, you add a CSS class to the html that mimics how the page will look after your JS loads.
              • This helps put your layout into a “final state”, and helps prevent CLS when the JS-added component does load.
              • Ideally you would also prepare a fallback just in case the JS doesn’t finish loading, maybe a fallback image, or some content letting the user know something went sideways.
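              A rough sketch of the idea (the class name and ready flag are hypothetical):

              <script>
              // Optimistically assume the carousel JS will arrive: lay the page out accordingly.
              document.documentElement.classList.add('carousel-enhanced');
              // If it has not initialized after a few seconds, revert to the basic layout.
              setTimeout(function () {
                  if (!window.carouselReady) {
                      document.documentElement.classList.remove('carousel-enhanced');
                  }
              }, 5000);
              </script>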

              Reduce JS

              Conditionally load JS

              • While you cannot conditionally load JS as easily as you can with CSS:
                 <!-- THIS DOES NOT WORK -->
                 <script src="script.js" media="(min-width: 40em)"></script>

                You can conditionally append JS files:

                 // if the screen is at least 480px wide...
                 if ( window.innerWidth >= 480 ) {
                     // create a new script
                     let script = document.createElement('script');
                     script.type = 'text/javascript';
                     script.src = 'script-min-480.js';
                     // and prepend it to the DOM, before the first existing script
                     let script0 = document.getElementsByTagName('script')[0];
                     script0.parentNode.insertBefore(script, script0);
                 }

              • And the above script could of course be converted into an array and a loop, to handle multiple conditions and scripts.
              • There are also libraries that handle this, like require.js, modernizr.js, and others.

              Split JS into component files

              • Similarly to how we can break CSS into components and add them to the page only as and when needed, we can do the same for JS.
              • If you have some complex code for a carousel, or accordion, or filtered search, why include that with every page load when you could break it into separate files and only add them to pages that use that functionality?
              • Smaller files mean smaller downloads, and smaller JS files mean less blocking time.
              • But similarly to breaking CSS into components, there is a point where having fewer, larger files might be better for performance than having a bunch of smaller files. As always, test and measure.

              Prevent blocking JS

              • When JS is encountered, it stops everything until it is downloaded and parsed, just in case it will modify something.
              • If this JS is inserted into the DOM before CSS or your content, it harms the entire render process.
              • If possible, move all JS to the end of the body.
              • If this is not possible, add a defer attribute to tell the browser “go ahead and download this now, but in the background, and wait until the DOM is completely constructed before implementing it”.
              • Deferred scripts maintain the same order in which they were encountered in the DOM; this can be quite important in cases where dependencies exist, such as:

                 <!-- will download and process, in the background, without render-blocking, but will remain IN THIS ORDER -->
                 <script src="jquery.js" defer></script>
                 <script src="jquery-plugin.js" defer></script>

              • In the above case, both JS files will download in the background, but, regardless of which downloads and parses first, the first will always process completely before the second.
              • A similar option is to add an async attribute. This tells the browser “go ahead and download this now, but in the background, and you can process it any time”.
              • Async scripts download and process just as the name implies: asynchronously. This means the following scripts could download and process in any order, and that order could change from one page load to another, depending on latency and download speeds:

                 <!-- these could load and process in ANY order -->
                 <script src="script-1.js" async></script>
                 <script src="script-2.js" async></script>
                 <script src="script-3.js" async></script>

              • defer and async are particularly useful tactics for third-party JS, as they remove any other-domain latency, download and parsing time, from the main thread.
              • And remember, third-party JS has a tendency to add even more JS and even CSS as it loads, all of which would otherwise block the page load.
              Note that neither async nor defer work on inline script blocks, only on script elements that reference an external file:

                 <!-- THIS DOES NOT WORK -->
                 <script async>
                 /* Do something here... */
                 </script>

                 <!-- this does work -->
                 <script src="script.js" async></script>
              Optimize running JS

    Aaron T. Grogg

    Originally published January 24, 2022. Last updated September 29, 2025.

    In this series, as I “get to know” some technology, I collect and share resources that I find helpful, as well as any tips, tricks and notes that I collected along the way.

    My goal here is not to teach every little thing there is to learn, but to share useful stuff that I come across and hopefully offer some insight to anyone that is getting ready to do what I just finished doing.

    As always, I welcome any thoughts, notes, pointers, tips, tricks, suggestions, corrections and overall vibes in the Comments section below

    TOC

    TOC ⇪

    What

    So, WPO is just a massive beast. There are so many parts, strewn across so many branches of tech and divisions and teams, that each “part” really deserves its own “Getting to Know” series. Maybe some day.

    But for now, I am going to cover what I consider to be the most important high-level topics, drilling down into each topic a little bit, offering best practices, suggestions, options, tips and tricks that I have collected from around the Interwebs!

    So, let’s get to know… WPO!

    TOC ⇪

    Why Okay, so this is not really me getting to first-know WPO, but more like me getting to re-know WPO. This is a topic that I have been quite passionate about for some time, but, as with all things web-related, stuff changes, so I decided to dig in and find out how much of what I already knew is still valid, how much has changed, and how much new stuff there is out there.

    The first couple things to understand is that a) “web performance” is not just about making a page load faster so someone can get to your shopping cart faster, and b) not everyone has blazing fast Internet and high-powered devices.

    The web is more than just looking at cat pics, sharing recent culinary conquests or booking upcoming vacations. People also use the web for serious life issues, like applying for public assistance, shopping for groceries, dealing with healthcare, education and much more.

    And for the 2021 calendar year, GlobalStats puts Internet users at 54.86% Mobile, 42.65% Desktop and 2.49% Tablet, and of those mobile users, 29.24% are Apple and 26.93% are Samsung, with average worldwide network speeds in November 2021 of 29.06 Mbps Download and 8.53 Mbps Upload.

    And remember, those are averages, skewed heavily by the highly-populated regions of the industrialized world. Rural areas and developing countries are lucky to get connections at all.

    So for people that really depend on the Internet, and may not have the greatest connection, nor the most powerful device, let’s see what we can do about getting them the content they want/need, as fast and reliably as possible.

    TOC ⇪

    Getting Started

    This was a tough one to get started on, and certainly to collect notes for, because, as I mentioned above, the topics are so wide, that it took a lot to try to pull them all together…

    WPO touches on server settings, CDNs, cache control, build tools, HTML, CSS, JS, file optimizations, Service Workers and more.

    In most organizations, this means pulling together several teams, and that means getting someone “up the ladder” to buy into all of this to help convince department heads to allocate resources (read: people, so read: money)…

    Luckily, there have been a LOT of success stories, and they tend to want to brag (rightfully so!), so it has actually never been easier to convince bosses to at least take a look at WPO as a philosophy!

    TOC ⇪

    BLUF

    You’ll find details for all of these below, but here are the bullets, more-or-less in order…

    1. HTTP3 before HTTP2, HTTP2 before HTTP1.1
    2. Cache-Control header, not Expire
    3. CDN
    4. preconnect to third-party domains and sub-domains
    5. preload important files coming up later in page
    6. prefetch resources for next page
    7. prerender pages likely to navigate to next (deprecated) Speculation Rules for pages likely to navigate to next
    8. fetchpriority to suggest asset importance
    9. Split CSS into components/@media sizes, load conditionally
    10. Inline critical CSS, load full CSS after
    11. Replace JS libraries with native HTML, CSS and JS, when possible
    12. Replace JS functionality with native HTML and CSS, when possible
    13. async / defer JS, especially 3rd party
    14. Split JS into components, load conditionally / if needed
    15. Avoid Data-URIs unless very small code
    16. Embedded SVG before icon fonts, icon fonts before image sprites, image sprites before numerous icon files
    17. WOFF2 before WOFF
    18. font-display: swap
    19. AVIF before WEBP, WEBP before JPG/PNG
    20. Multiple crops for various screen sizes / connection speeds
    21. srcset / sizes attributes for automated size swap
    22. media attribute for manual size swap
    23. loading="lazy" for initially-offscreen images
    24. WEBM before MP4, MP4 before GIF
    25. preload="none" for initially-offscreen videos
    26. width / height attributes on media and embeds, use CSS to make responsive
    27. Optimize all media assets
    28. Lazy-load below the fold content
    29. Reserve space for delayed-loading content, like ads and 3rd-party widgets
    30. Create flat/static versions of dynamic content
    31. Minify / compress text-based files (HTML, CSS, JS, etc.)
    32. requestIdleCallback/requestAnimationFrame/scheduler.yield to pause CPU load / find better time to run tasks
    33. Service Worker to cache / reduce requests, swap content before requesting

    Note that the above are all options, but might not all work / be possible everywhere. And each should be tested to see if it helps in your situation.

    TOC ⇪

    Glossary

    We need to define some acronyms and terms to ensure we’re all speaking the same language… In alpha-order…

    CLS (Cumulative Layout Shift) Visible layout shift as a page loads and renders Core Web Vitals

    Three metrics that score a page load:

    1. LCP: ideally <= 2.5s
    2. INP: ideally <= 200ms
    3. CLS: ideally <= 0.10
    Critical Resource Any resource that blocks the critical rendering path CRP (Critical Rendering Path) Steps a browser must complete to render page CrUX (Chrome User Experience) Perf data gathered from RUM within Chrome DSA (Dynamic Site Acceleration) Store dynamic content on CDN or edge server FCP (First Contentful Paint) First DOM content paint is complete FMP (First Meaningful Paint) Primary content paint is complete; deprecated in favor of LCP FID (First Input Delay) Time a user must wait before they can interact with the page (still a valid KPI, but use INP instead) FP (First Paint) First pixel painted (not really used anymore, use LCP instead) INP (Interaction to Next Paint) Longest time for user interaction to complete LCP (Largest Contentful Paint) Time for largest page element to render Lighthouse Google lab testing software. Analyzes and evaluates site, returns score and possible improvements LoAF (Longest Animation Frame) Longest render task PoP (Points of Presence) CDN data centers Rel mCvR (Relative Mobile Conversion Rate) Desktop Conversion Rate / Mobile Conversion Rate RUM (Real User Monitoring) Data collected from real user experiences, not simulated SI (Speed Index) Calculation for how quickly page content is visible SPOF (Single Point of Failure) Critical component that can break everything if it fails or is missing Synthetic Monitoring Data collected from similuated lab tests, not real user experiences TBT (Total Blocking Time) Time between FCP and TTI TTFB (Time to First Byte) Time until first byte is received by browser TTI (Time to Interactive) Time until entire page is interactive Tree Shaking Removing dead code from code base WebpageTest Synthetic monitoring tool, offers free and paid versions

    TOC ⇪

    Notes

    • CRP (Critical Rendering Path)
      1. To start to understand web performance, you must first understand CRP.
      2. The CRP is the steps a browser must go through before it can render (display) anything on the screen.
      3. This includes downloading the HTML document and parsing it, as well as finding, downloading and parsing any CSS or JS files (that are not async or defer).
      4. All of the above document types are considered render-blocking assets, as the browser cannot render anything to the screen until they are all downloaded and parsed.
      5. These actions allow the browser to create the DOM (the structure of the page) and the CSSOM (the layout and look of the page).
      6. Any JS files that are not async or defer are render-blocking because their contents could affect the DOM and/or CSSOM.
      7. Naturally, downloading less is always better: less to download, less to parse, less to construct, less to render, less to maintain in memory.

    • Three levels of UX

      As the above process is happening, the user experiences three main concerns:

      1. Is anything happening?

        Once a user clicks something, if there is no visual indicator that something is happening, they wonder if something broke, and so the experience suffers.

      2. Is it useful?

        Once stuff does start to appear, if all they see is placeholders, or partial content, it is not useful yet, so the experience suffers.

      3. Is it usable?

        Finally, once everything looks ready, is it? Can they read anything, or interact yet? If not, the experience suffers.

    • Three Core Web Vitals

      The three questions above drive Google’s Core Web Vitals: they are an attempt to quantify the user experience.

      1. “Is anything happening?” becomes LCP (Largest Contentful Paint)

        When the primary content section becomes visible to the user. Considerations:

        • TTFB is included in LCP (see below).
        • CSS, JS, custom fonts, images, can all delay LCP.
        • Not just download time, but also processing time: DOM has to be rendered, CSS & possibly JS processed, etc.
        • Remember, each asset can request additional assets, creating a snowball effect.

      2. “Is it useful?” becomes INP (Interaction to Next Paint)

        Time for the page to complete a user’s interaction. Considerations:

        • How busy the browser already is when the user interacts.
        • How resource-intensive the interaction is.
        • Whether the interaction requires a fetch before it can respond.

      3. “Is it usable?” becomes CLS (Cumulative Layout Shift)

        Layout shifts as a page loads and renders. Considerations:

        • Delay in loading critical CSS and/or font files.
        • JS updates to the page after the initial render.
        • Dynamic or delayed content loading into the page.

      And because TTFB is included in LCP, and might indicate issues related to INP, I consider it to be the D’Artagnan of the three Core Web Vitals (sort of their fourth Musketeer)… ;-)

      1. TTFB (Time to First Byte)

        Time between the browser requesting, and receiving, the asset’s first byte. (Usually discussed related to the page/document, but also part of every asset request, including CSS, JS, images, videos, etc., even from a 3rd party.) Considerations:

        • Connection speed, network latency, server speed, database speed, etc.
        • Distance between user and server (even at near light speed, it still takes time to travel around the world).
        • Static assets are always faster than dynamic ones.

  • Analysis Prep

    The first step to testing is analysis.

    1. Find out who your audience is

      • Where they are geographically, what type of network connection they typically have, what devices they use.
      • Ideally this comes from RUM data, reflecting your real-life users, ideally via analytics.

    2. Look for key indicators

      • Any problems/complaints your project currently experiences, to be used to determine goals.
      • Perhaps your site has a TTFB of 3.5 seconds, an LCP of 4.5 seconds, and a CLS of 0.34.
      • All of these can be improved, so they are great candidates for goals.

  • Goals and Budgets

    A goal is a target that you set and work toward, such as “TTFB should be under 300ms” or “LCP should be under 2s”.

    A budget is a limit you set and try to stay under, such as “no more than 100kb of JS per page” or “hero image must be under 200kb”.

    • How to choose goals?

      • Maybe compare your site against your competition’s.
      • Google’s Core Web Vitals could be another consideration.
      • Goals have to be achievable: If your current KPIs are too far from your long-term goals, consider creating less-aggressive short-term goals; you can always re-evaluate periodically.

    • How to create budgets?

      • Similar to goals, compare against competition, research current stats, or look for problem areas and set limits to control them.
      • For existing projects, a starting budget can be “no worse than it is right now”…
      • For new sites, you can start with “talking points” for the team, to help set limits on the project, then refine as needed.
      • Budgets can change as the site changes: reached a new goal? Adjust the budget to reflect that. Adding a new section/feature? That will likely affect the budget.

    • How to stick to budgets?

      • Lighthouse CI integrates with GitHub to test changes during builds, stopping deployments that fail the budget.
      • The SpeedCurve dashboard sets budgets, monitors them, and notifies the team of failures.
      • Calibre estimates how 3rd-party add-ons will affect your site, or how ad-blockers will affect page loads.

    • What if a change breaks the budgets?

      • Optimize feature to get back under budget.
      • Remove some other feature to make room for the new one.
      • Don’t add the new feature.

    These are tough decisions and require buy-in from all parties involved. The goal is to improve your site, not create civil wars. Both goals and budgets need to be realistic, or no one will be able to meet them; then they just become a source of friction, and will soon be ignored.

  • Analysis process

    Remember that Synthetic and RUM will never be identical: the lab has constraints that the real world doesn’t, and the real world has variables that the lab doesn’t.

    These are two different testing environments, each with its pros and cons, intended for two very different purposes.

    I always say “Synthetic is what should be, and RUM is what is”. We can only control so much in the real world; that’s why we try to polish everything as much as possible, hoping that when real-world variables happen, our site is stable enough that they won’t affect things too badly.

  • Analysis tools

    Some tools allow you to test in live browsers, on a variety of devices, alter connection speeds, run multiple tests, and receive reports of request, delivery, and processing times for all of it.

    Other tools allow you to run your site against a defined set of benchmarks, receiving estimations of speeds and reports with improvement suggestions.

    Still other tools let you work right in your browser, testing live or local sites, hunting for issues, testing quickly.

    (There are numerous RUM and Synthetic providers out there, but below I will focus on those that are, or at least offer, free tiers. If you think I should add something, let me know.)

    • CrUX
      • RUM data, limited to opt-in Chrome users only, if your site gets sufficient traffic.
      • Shows Core Web Vitals and other KPI data, offers filters (Controls) and drill-down into each KPI.
      • The data collected by CrUX is also available via API, BigQuery, and is ingested into several other tools, such as PageSpeed Insights (see below), DebugBear, GTMetrix and more.

    • PageSpeed Insights

      • Pure lab testing, no real devices, just assumptions based on the code.
      • Results are split for Mobile & Desktop, with a nice score up front, including grades on the Core Web Vitals, followed by things you could try in order to improve those scores, and finally things that went well.
      • This tool is powered by the same tech that powers DevTools’ Lighthouse tab and Chrome extension.

    • WebpageTest

      • Actual run-tests, in real browsers on real devices, in locations around the globe.
      • Device and browser options vary depending on location.
      • Advanced Settings vary depending on device and browser.
      • Advanced Settings tabs allow you to Disable JS, set UA string, set custom headers, inject JS, send username/password, add Script steps to automate actions on the page, block requests that match substrings or domains, set SPOF, and more.
      • “Performance Runs” displays the median run (if you do multiple runs), for both first and repeat views.
      • Waterfall reports:
        • Note color codes above report.
        • Start Render, First Paint, etc. draw vertical lines down the entire waterfall, so you can see what happens before & after these events, as well as which assets affect those events.
        • Wait, DNS, etc. show in the steps of a connection.
        • Light | Dark color for assets indicate Request | Download time.
        • Click each request for details in a scrollable overlay; also downloadable as JSON.
        • JS execution tells how long each file takes to process.
        • At the bottom of the waterfall, the Main Thread flame chart shows how hard the browser was working across the timeline.
        • To the right of each waterfall is a filmstrip to help view TTFB and LCP; the Timeline view compares the filmstrip with the waterfall, so you can see when the page becomes visible and how assets affect it.
        • Check all tabs across top (Performance Review, Content Breakdown, Processing Breakdown, etc.) for many more features.

      • The paid version offers several beneficial features, but the free version is likely enough to at least get you started.

    • BrowserStack

      • Test real browsers on remote servers.
      • Also offer automated testing, including pixel-perfect testing.
      • Free version has usage limits.

    • Browser DevTools

      • All modern browsers have one, and all vary slightly.
      • My personal opinion is that Chrome’s DevTools is the standard, with its Performance tab and Lighthouse extension.
      • Other browsers also have strengths that Chrome lacks, such as Firefox’s great CSS grid visualizer and Safari’s ability to connect directly to an iPhone and inspect what is currently loaded in the device’s Safari browser.
      • I do not use any other browser enough to have an opinion, but would welcome any notes you care to share in the Comments section below. ;-)

  • Analysis Example

    For this process, I recommend Chrome, if only for the Lighthouse integration:

    1. Open site in Incognito
    2. Open DevTools
    3. Go to Lighthouse tab
    4. Under “Category”, only check “Performance”
    5. Under “Device” check “Mobile”
    6. Check “Simulated throttling” (click “Clear storage” to simulate a fresh cache)
    7. Click “Generate Report”
    8. Look for issues
    9. Fix one issue
    10. Re-run audit
    11. Evaluate audit for that one change
    12. Determine if change was improvement, decide to keep or revert
    13. Repeat from step 7, until there are no issues or you are happy with the results

    Chrome > DevTools > Lighthouse

  • TOC ⇪

    Tips

    • Server

      • Use HTTP/3

        Server adoption is still growing, but it is supported by all major browsers and considered Baseline.

        There are numerous differences, but the key improvements are related to connection speed, stability and security:

        • Uses QUIC, instead of TCP, for faster connection.
        • Uses UDP for faster transmission.
        • QUIC also allows for network swaps, helping mobile users.
        • Uses TLS 1.3 for improved security.

        If a device does not support HTTP/3, it will automatically degrade to HTTP/2.

      • If you cannot use HTTP/3, use HTTP/2

        Created in 2015, primarily focused on mobile and server-intensive graphics/videos, it is based on Google’s SPDY protocol, which focuses on compression, multiplexing, and prioritization.

        Key differences:

        • Binary, instead of textual.
        • Fully multiplexed, instead of ordered and blocking.
        • Can use one connection for parallelism.
        • Uses header compression to reduce overhead.
        • Allowed servers to “push” responses proactively into client caches (Server Push was later removed from the spec).

        Supported by all major browsers; IE11 works on Win10 only; IE<11 not supported at all.

        If a device does not support HTTP/2, it will automatically degrade to HTTP/1.x.

      • TLS

        Upgrade to 1.3 to benefit from reduced handshake round trips on resumed sessions, but be sure to follow RFC 8446 best practices regarding possible security concerns.

      • Cache Control

        While it does not help with the initial page load, setting a far-future Cache-Control header for files that do not change often tells the browser it does not even need to ask for the file again.

        This is in contrast to the Expires header that we used to use, which would prevent sending a file that had not yet expired, but still required the browser to ask the server about it.

        And there is no faster response than one not made…
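        As a minimal sketch, assuming Apache with mod_headers enabled (the file-type list and max-age are placeholders to adjust for your own assets):

           # Far-future caching for static assets
           <FilesMatch "\.(css|js|woff2|avif|webp)$">
               Header set Cache-Control "max-age=31536000, immutable"
           </FilesMatch>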

    • Website (CMS, Site Generator, etc.)

      Nothing could ever be faster than static HTML, but it is not very scalable, unless you really like hand-cranking HTML.

      Assuming you are not doing that…

      Database-driven

      • Any site that uses a database, like WordPress, Drupal or some other CMS, suffers from the numerous database requests required to build the HTML before it can be sent to the user.
      • The best thing you can do here is to cache the site pages, likely via a caching plugin.
      • Caching plugins pre-process all possible pages of a site (dynamic pages, like Search, are hard to do) and create flat HTML versions, which they then send to users.
      • This (mostly) bypasses the database per page request, delivering as close to the static HTML experience as possible.

      SSG

      • SSG sites, like Jekyll, Gatsby, Hugo, Eleventy, etc., leave little work to do in this section.
      • As they are already pre-processed, static HTML, and have no database connections, there is not much to do aside from the sections covered below under “Frontend”.

      CSR

      • CSR sites, like Angular, React, Vue, etc., have an initial advantage of delivering typically very small HTML files to the user, because a CSR site typically starts with a “shell” of a page, but no content. This makes for a very fast TTFB!
      • But then the page needs to download, and process, a ton of JS before it can even begin to build the page and show something to the user. This often makes for a pretty terrible LCP and INP, and usually CLS.
      • Aside from framework-specific performance optimizations, there is not much to do aside from the sections covered below under “Frontend”.

      SSR

      • In an attempt to solve their LCP and INP issues, CSRs realized they could also render the initial content page on the server, deliver that, then handle frontend interactions as a typical CSR site.
      • Depending on the server speed and content interactions, this should solve the LCP and hopefully CLS issues, but even the SSR often needs to download, and process, a lot of JS in order to be interactive. Therefore, INP can still suffer.
      • Aside from framework-specific performance optimizations, there is not much to do aside from the sections covered below under “Frontend”.

      SSR w/ Hydration

      • Another wave of JS-created sites arrived, like Svelte, that realized they could benefit by using a build system to create “encapsulated” pages and code base.
      • Rather than delivering all of the JS needed for the entire app experience, these sites package their code in a way that allows it to deliver “only the code that is needed for this page”.
      • This method typically maintains the great TTFB of its predecessors, but also takes a great leap toward better LCP and INP, and possibly CLS.
      • Aside from framework-specific performance optimizations, there is not much to do aside from the sections covered below under “Frontend”.

    • Database

      • Create indexes on tables.
      • Cache queries.
      • When possible, create static cached pages to reduce database hits.

    • CDN

      • Distributed assets = shorter latency, quicker response.
      • Load-balancers increase capacity and reliability.
      • Typically offer automated caching patterns.
      • Some also offer automated media compression.
      • Some also offer dynamic content acceleration.
      • All the ones I know of offer at least some level of reporting.
      • Enterprise options can be quite expensive, but for personal sites, you can find free options.

    • Frontend

      The basics here read like a modified “Reduce, Reuse, Recycle”: “Reduce, Minify, Compress, Cache”.

      Reduce

      Our first goal should always be to reduce whatever we can: the less we send through the Intertubes, the faster it can get to the user.

      • Minify / Optimize (HTML/XML/SVG)

        If something must be sent to the browser, remove everything from it that you can.

        • Compress (HTML/XML/SVG)

          Once only the absolutely necessary items are being sent, and everything is as small as it can be, it is time to compress it before sending.

          The good thing here is that compression is mostly a set-it-and-forget-it technique. Once you know what works for your users, set it up, make sure it is working, and move on…

          • Cache (HTML/XML/SVG/CSS/JS/JSON/Fonts)

            Once the deliverables are as few and small as possible, there is nothing more we can do for the initial page load.

            But we can make things better for the next page load by telling the browser it does not need to fetch it again.

            In addition to server/CDN caching, we can also cache some data in the browser. Depending on the data type, we can use:

            • Cookies
            • Tools

              There are many, many, many tools and options that can perform most of the tasks referenced above… I will list a few here, feel free to share your favorites in the Comments section below.

              Minification

              • You can do this manually (Beautifier, Minify, BeautifyTools), one file at a time, but that gets pretty tiresome for things like CSS and JS, which you might edit often.
              • Ideally this is handled automatically during your build or deployment process, but there are so many options that you would need to search for one that works with your current process.
              • You can also set JS minifiers to obfuscate code, reducing size beyond just whitespace. These make the resulting code more-or-less unreadable to humans, but it still works just fine for machines.
              • There are also tools like UnCSS that look for code your site isn’t using and remove it; most options have an online version (manual) and a build version (automatic).
              • Tree Shaking tries to do the same thing for JS, via tools such as Rollup, Webpack, and many others; again, it depends greatly on your current process.

              Compression

              This is handled on your server, and is, for the most part, a set-it-and-forget-it feature.

              Nearly every browser supports Gzip and Brotli now, and Zstandard support is growing.

              Ask your analytics which is right for you… ;-)

              • Gzip compresses and decompresses files on the fly, as requested, deflating before sending, inflating again in the browser. Gzip is available in basically every browser.
              • Brotli also processes at run-time, but usually gets better results than Gzip; it lacks support in older, now-outdated browsers, but is otherwise ubiquitous.
              • Zstandard is an improvement over Gzip, but is currently not as well-supported.
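              As a minimal sketch, assuming Apache with mod_deflate (Gzip) and mod_brotli available (many hosts enable something similar by default):

                 # Compress text-based responses before sending
                 AddOutputFilterByType DEFLATE text/html text/css application/javascript image/svg+xml
                 AddOutputFilterByType BROTLI_COMPRESS text/html text/css application/javascript image/svg+xml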

              Optimization for Images

              Optimizing images for the web is an absolute must! I have seen 1MB+ images reduced to less than 500kb. That is real time saved for your users.

              • OptImage is a desktop app, offering subscriptions, or a limited number of free optimizations per month. It can handle multiple image formats, including WEBP.
              • You can also optimize during build time or deployment. Essentially every process you could choose offers a service for this; you would just need to search for one that fits your process.
              • You can also optimize at run-time with a service like Cloudinary, which works as a media repo, though this adds new latency and a possible point of failure.
              Note that you should probably be replacing your old standard JPG and PNG images with WEBP or AVIF formats. You might want to do a few conversions and see which performs better for you. Also note that some CDNs offer a “hot swap” technique where, even if your HTML asks for the JPG format, they will return the AVIF format if the user’s browser supports it… Very handy!

              Optimization for Videos

              In my opinion, all videos should be served via YouTube or Vimeo, as they will always be better at compressing and serving videos than the rest of us.

              But of course there are situations where that isn’t wanted, practical or ideal.

              So if you must serve your own videos…
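              As a minimal sketch of checklist items 24–26 above, with placeholder file names:

                 <!-- poster + width/height reserve space; preload="none" avoids fetching until play -->
                 <video width="1280" height="720" controls preload="none" poster="poster.avif">
                   <source src="clip.webm" type="video/webm">
                   <source src="clip.mp4" type="video/mp4">
                 </video>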

              Optimizing Fonts

              I am also of the opinion that native fonts are the best way to go, requiring no additional files to be downloaded, and incurring no related CLS.

              But again, that is not always wanted, practical or ideal.

              So if you must use web fonts…

              • It is usually recommended to download & serve fonts from your own domain. This reduces third-party latency and eliminates any chance of the fonts on your website failing because someone else is having a server issue.
              • But you can also use third-party web fonts, just be aware of the concerns raised above.
              • Zach Leatherman wrote a great tutorial on setting up fonts.
              • Note that WOFF2 is currently the preferred format, but check your analytics, as you might still need to offer WOFF as a fallback.
              • “Subsetting”, where you remove parts of the font that you don’t use (characters, weights, styles, etc.), can be a powerful tool. FontSquirrel is a manual version, glyphhanger is an automated tool.
              • Variable fonts are also an option, and they can be WOFF2.
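              As a minimal sketch, with a placeholder font name and file paths:

                 @font-face {
                   font-family: "MyFont";
                   src: url("/fonts/myfont-subset.woff2") format("woff2"),
                        url("/fonts/myfont-subset.woff") format("woff"); /* only if your analytics say you need the fallback */
                   font-display: swap; /* show fallback text immediately, swap in the web font once loaded */
                 }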

            TOC ⇪

            Tricks

            • This is a collection of tips that you might want to try employing. Remember, very few things are right everywhere, and not everything is going to fix the problems you might have…

              Throw a “pre” party
              • preconnect
                • For resources hosted on another domain, or a sub-domain, that will be fetched in the current page.
                • Sort of a DNS pre-lookup.
                • Add a link element to the head to tell the browser “you will be fetching from this site soon, so try to open a connection now”:
                   <head>
                     ...
                     <link rel="preconnect" href="https://maps.googleapis.com">
                     ...
                     <script src="https://maps.googleapis.com/maps/api/js?key=1234567890&callback=initMap" async></script>
                   </head>

              • preload

                • For resources that will be needed later in the current page.
                • Add a link element to the head to tell the browser “you will need this soon, so try to download it as soon as you can”:
                   <head>
                     ...
                     <link rel="preload" as="style" href="style.css">
                     <link rel="preload" as="script" href="main.js">
                     ...
                     <link rel="stylesheet" href="style.css">
                   </head>
                   <body>
                     ...
                     <script src="main.js" defer></script>
                   </body>
                • You can preload several types of files.
                • The rel attribute should be "preload".
                • The href is the asset’s URL.
                • It also needs an as attribute.
                • You can optionally add a type attribute to indicate the MIME type:
                   <link rel="preload" as="image" href="image.avif" type="image/avif"> 
                • You can optionally add a crossorigin attribute, if needed, for CORS fetches:
                   <link rel="preload" as="font" href="https://font-store.com/comic-sans.woff2" type="font/woff2" crossorigin> 
                • You can optionally add a media attribute to conditionally load something:
                   <link rel="preload" as="image" href="image-small.avif" type="image/avif" media="(max-width: 599px)"> <link rel="preload" as="image" href="image-big.avif" type="image/avif" media="(min-width: 600px)"> 

              • prefetch

                • For resources that might be used in the next page load. prefetch has limitations, including browser support, that make Speculation Rules a better option for this functionality.
                • Add a link element to the head to tell the browser “you might need this on a future page, so try to download it as soon as you can”:
                   <link rel="prefetch" href="blog.css"> 

              • prerender

                • For documents the user is likely to visit next. Previously handled via a link element with rel="prerender", which is now deprecated; Speculation Rules (see below) are the modern replacement.

              Use fetchpriority

              • You can add a fetchpriority attribute to suggest a download priority that differs from the asset’s default:

                 <img src="hero-image.avif" fetchpriority="high"> 

              • This can be attached to a preload link, or directly to an img, link, script, etc. element, or even programmatically to an XHR.
              • This is only a suggestion; you are asking the browser to change the priority (either higher or lower) from its norm, but it will decide whether it should actually change the priority.

              Add Speculation Rules

              • Offers a slew of configuration and priority options to conditionally “preload” or “prefetch” documents and/or files, based on an assumption of what the user might need next.
              • Although not yet standardized, and currently only supported in Chromium browsers, Speculation Rules can provide a great performance boost.
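              As a minimal sketch (the URL is a placeholder), the rules are JSON inside a script element:

                 <script type="speculationrules">
                 {
                   "prefetch": [
                     { "source": "list", "urls": ["/next-article.html"] }
                   ]
                 }
                 </script>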

              Add Critical CSS in-page

              • A great way to benefit all three Core Web Vitals is to add the page’s “critical CSS” in-page in a style block, then load the full CSS file via a link.
              • This gives the dual benefit of getting the “above-the-scroll” CSS downloaded and ready as quickly as possible, while also caching the complete CSS for later subsequent page loads.
              • This is another technique that is best handled during a build/deployment process, and another tool that has many, many options.
              • A good primer article can be found on Web.dev, and more info and options can be found on Addy Osmani’s GitHub.
              • Jeremy Keith offers a terrific add-on to this technique by inlining the critical CSS only the first time someone visits, then relying on the cached full CSS file for repeat page views. This helps reduce page bloat for subsequent page visits.
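              As a minimal sketch of the pattern (main.css is a placeholder; the media/onload trick is explained under “Prevent blocking CSS” below):

                 <head>
                   <style>
                     /* critical, above-the-scroll rules inlined here */
                   </style>
                   <!-- full stylesheet, loaded without blocking render and cached for later visits -->
                   <link rel="stylesheet" href="main.css" media="print" onload="this.media='all'">
                 </head>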

              Conditionally load CSS

              • Use a media attribute on link tags to conditionally load them:
                 <!-- only referenced for printing -->
                 <link rel="stylesheet" href="./css/main-print.css" media="print">
                 <!-- only referenced for landscape viewing -->
                 <link rel="stylesheet" href="./css/main-landscape.css" media="(orientation: landscape)">
                 <!-- only referenced for screens at least 40em wide -->
                 <link rel="stylesheet" href="./css/main-wide.css" media="(min-width: 40em)">
              • Note that while each of the above is only applied in the scenario indicated (print, etc.), all of them are actually downloaded and parsed by the browser as the page loads. But they are all downloaded and parsed in the background… (This is foreshadowing, so remember it!)

              Split CSS into “breakpoint” files

              • Taking the above conditional-loading technique a step further, you could split your CSS based on @media breakpoints, then conditionally load them using the same media trick above.
                 <!-- all devices get this -->
                 <link rel="stylesheet" href="main-base.css">
                 <!-- only devices matching the min-width get each of these -->
                 <link rel="stylesheet" href="main-min-480.css" media="(min-width: 480px)">
                 <link rel="stylesheet" href="main-min-600.css" media="(min-width: 600px)">
                 <link rel="stylesheet" href="main-min-1080.css" media="(min-width: 1080px)">
              • Even if a file is not needed right now, it will still download in the background, and will be ready if it is needed later.

              You could obviously break this into any combination or structure that makes sense for your site, but remember that all of these files, although in the background, are still downloading, and taking bandwidth away from other file downloads.

              Split CSS into component files
              • Taking the above splitting and conditional-loading approach beyond print or min-width, you can also break your CSS into sections.
              • Create one file for all of your global CSS (header, footer, main content layout), then create a separate file for just Home Page CSS that is only loaded on that page, Contact Us CSS that is only loaded on that page, etc.
              • The practicality of this technique would depend on the size of your site and your overall CSS.
              • If your site and CSS are small, then a single file cached for all pages makes sense.
              • If you have lots of unique sections and components and widgets and layouts, then there is no need for users to download the CSS for those sections until they visit those sections.

              Prevent blocking CSS, load it “async”

              • Remember that “downloaded and parsed in the background” bit from above? Well here is where it gets interesting…
              • Because, while neither link elements nor style blocks recognize async or defer attributes, stylesheets whose media attribute does not currently match do load async, meaning they are not render-blocking…
              • This means we can kind of “gently abuse” that feature with something like this:
                 <link rel="stylesheet" href="style.css" media="print" > 

                Note that onload event at the end? Once the file has downloaded, async “in the background”, the onload event changes the link’s media value to all, meaning it will now affect the entire current page!

              • While you wouldn’t want this “async CSS” to change the visible layout, as it might harm your CLS, it can be useful for below-the-scroll content or lazy-loaded widgets/modules/components.

              Enhancing optimistically

              • The Filament Group came up with something they coined “Enhancing optimistically”.
              • This is when you want to add something to the page via JS (like a carousel, etc.), but know something could go wrong (like some JS might not load).
              • To prepare for the layout, you add a CSS class to the html that mimics how the page will look after your JS loads.
              • This helps put your layout into a “final state”, and helps prevent CLS when the JS-added component does load.
              • Ideally you would also prepare a fallback just in case the JS doesn’t finish loading, maybe a fallback image, or some content letting the user know something went sideways.
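              As a hedged sketch of the idea (the class name, flag, and timeout are placeholders, not Filament Group’s actual code):

                 <script>
                   // Optimistically lay the page out as if the carousel JS had already loaded...
                   document.documentElement.className += ' carousel-enhanced';
                   // ...but revert to the fallback layout if it never arrives.
                   setTimeout(function () {
                     if (!window.carouselHasLoaded) { // hypothetical flag set by the carousel script
                       document.documentElement.className =
                         document.documentElement.className.replace(' carousel-enhanced', '');
                     }
                   }, 8000);
                 </script>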

              Reduce JS

              Conditionally load JS

              • While you cannot conditionally load JS as easily as you can with CSS:
                 <!-- THIS DOES NOT WORK -->
                 <script src="script.js" media="(min-width: 40em)"></script>

                You can conditionally append JS files:

                 // if the screen is at least 480px...
                 if ( window.innerWidth >= 480 ) {
                   // create a new script element
                   let script = document.createElement('script');
                   script.type = 'text/javascript';
                   script.src = 'script-min-480.js';
                   // and insert it before the first existing script in the DOM
                   let script0 = document.getElementsByTagName('script')[0];
                   script0.parentNode.insertBefore(script, script0);
                 }

              • And the above script could of course be converted into an array and loop for multiple conditions and scripts.
              • There are also libraries that handle this, like require.js, modernizr.js, and others.

              Split JS into component files

              • Similarly to how we can break CSS into components and add them to the page only as and when needed, we can do the same for JS.
              • If you have some complex code for a carousel, or accordion, or filtered search, why include that with every page load when you could break it into separate files and only add it to pages that use that functionality?
              • Smaller files mean smaller downloads, and smaller JS files mean less blocking time.
              • But similarly to breaking CSS into components, there is a point where having fewer, larger files might be better for performance than having a bunch of smaller files. As always, test and measure.
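              One way to do this, as a minimal sketch (the selector, path, and init() export are placeholders), is a dynamic import() that only fetches a component’s JS when the page actually contains that component:

                 <script type="module">
                   // only fetch the carousel code if this page has a carousel
                   if (document.querySelector('.carousel')) {
                     import('/js/carousel.js')
                       .then(function (mod) { mod.init(); })
                       .catch(function (err) { console.error('carousel failed to load', err); });
                   }
                 </script>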

              Prevent blocking JS

              • When JS is encountered, it stops everything until it is downloaded and parsed, just in case it will modify something.
              • If this JS is inserted into the DOM before CSS or your content, it harms the entire render process.
              • If possible, move all JS to the end of the body.
              • If this is not possible, add a defer attribute to tell the browser “go ahead and download this now, but in the background, and wait until the DOM is completely constructed before implementing it”.
              • Deferred scripts maintain the same order in which they were encountered in the DOM; this can be quite important in cases where one deferred script depends on another.
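              As a minimal sketch (file names are placeholders):

                 <!-- independent of everything else: runs whenever it arrives -->
                 <script src="analytics.js" async></script>
                 <!-- order matters: framework.js always runs before app.js, after the DOM is built -->
                 <script src="framework.js" defer></script>
                 <script src="app.js" defer></script>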


    Related posts

    content-visibility in Safari

    Safari 18 supports `content-visibility: auto` …but there’s a very niche little bug in the implementation.

    Speedier tunes

    Improving performance with containment.

    Speedy tunes

    Improving performance on The Session.

    Move Fast and Don’t Break Things by Scott Jehl

    A presentation at An Event Apart Seattle 2019.

    Related links

    The Simplest Way to Load CSS Asynchronously | Filament Group, Inc.

    Scott re-examines the browser support for loading everything-but-the-critical-CSS asynchronously and finds that it might now be as straightforward as this one declaration:

    <link rel="stylesheet" href="/path/to/my.css" media="print" >

    I love the fact that the Filament Group are actively looking at how to deprecate their loadCSS polyfill—exactly the right attitude for polyfills in general.


    Inlining or Caching? Both Please! | Filament Group, Inc., Boston, MA

    This just blew my mind! A fiendishly clever pattern that allows you to inline resources (like critical CSS) and cache that same content for later retrieval by a service worker.

    Crazy clever!


    CSS and Network Performance – CSS Wizardry

    Harry takes a look at the performance implications of loading CSS. To be clear, this is not about the performance of CSS selectors or ordering (which really doesn’t make any difference at this point), but rather it’s about the different ways of getting rid of as much render-blocking CSS as possible.

    …a good rule of thumb to remember is that your page will only render as quickly as your slowest stylesheet.


    Smaller, Faster Websites - Bocoup

    The transcript of a great talk by Wilto, focusing on responsive images, inlining critical CSS, and webfont loading.

    When we present users with a slow website, a loading spinner, laggy webfonts—or tell them outright that they’re not using a website the right way—we’re breaking the fourth wall. We’ve gone so far as to invent an arbitrary line between “webapp” and “website” so we could justify these decisions to ourselves: “well, but, this is a web app. It… it has… JSON. The people that can’t use the thing I built? They don’t get a say.”

    We, as an industry, have nearly decided that we’re doing a great job as long as we don’t count the cases where we’re doing a terrible job.

