V7: Video Killed the Web Browser Star | Rob Weychert
Grrr… it turns out that browsers exhibit some very frustrating behaviour when it comes to the video element. Rob has the details…
If you’re a front-end developer and you don’t read Chris Ferdinandi’s blog, you should change that right now. Add that RSS feed to your feed reader of choice!
Lately he’s been posting about some of the thinking behind his Kelp UI library. That includes some great nuggets of wisdom around HTML web components.
First of all, he pointed out that web components don’t need a constructor(). This was news to me. I thought custom elements had to include this incantation at the start:
constructor () {
  super();
}
But it turns out that if all you’re doing is calling super(), you can omit the whole thing and it’ll be done for you.
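So a minimal custom element can skip the constructor entirely—the implicit default constructor calls super() on your behalf. The element name here is purely for illustration:

class GreetingText extends HTMLElement {
  // No constructor needed: the default constructor calls super() for us.
  connectedCallback () {
    // Enhance whatever markup is already inside the element.
    this.textContent = `Hello, ${this.textContent.trim()}!`;
  }
}
customElements.define('greeting-text', GreetingText);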
I immediately refactored all the web components I’m using on The Session. While I was at it, I implemented Chris’s bulletproof web component loading.
Now technically, I don’t need to do this. I’m linking to my JavaScript at the bottom of every page so I know it’s going to load after the HTML. But I don’t like having that assumption baked into my code.
For any of my custom elements that reference other elements in the DOM—using, say, document.querySelector()—I updated the connectedCallback() method to use Chris’s technique.
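The gist of the technique is to check whether the document has finished parsing before doing any DOM lookups. Something along these lines—a sketch of the pattern rather than Chris’s exact code, with a hypothetical init() method:

class FancyWidget extends HTMLElement {
  connectedCallback () {
    // If the HTML has already been parsed, enhance straight away…
    if (document.readyState !== 'loading') {
      this.init();
      return;
    }
    // …otherwise wait until the rest of the DOM is available.
    document.addEventListener('DOMContentLoaded', () => this.init());
  }
  init () {
    // Now it’s safe to reference elements elsewhere on the page.
    this.relatedForm = document.querySelector('form');
  }
}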
It turned out that there weren’t that many of my custom elements that were doing that. Because HTML web components are wrapped around existing markup, the contents of the custom element are usually what matters (rather than other elements on the same page).
I guess that’s another unexpected benefit to HTML web components. Because they’ve already got their own bit of DOM inside them, you don’t need to worry about when you load your markup and when you load your JavaScript.
And no faffing about with the dark arts of the Shadow DOM either.
What Trys describes here mirrors my experience too—it really is worth occasionally taking a little time to catch the low-hanging fruit of your site’s web performance (and accessibility):
I’ve shaved nearly half a megabyte off the page size and improved the accessibility along the way. Not bad for an evening of tinkering.
Huh! I did not know this. Good to know!
Remember when I wrote about sizes="auto"? Well, it’s coming to Chrome! Hallelujah!
As well as her personal site, wordridden.com, Jessica also has a professional site, lostintranslation.com.
Both have been online for a very long time. Jessica’s professional site pre-dates the Sofia Coppola film of the same name, which explains how she was able to get that domain name.
Thanks to the internet archive, you can see what lostintranslation.com looked like more than twenty years ago. The current iteration of the site still shares some of that original design DNA.
The most recent addition to the site is a collection of images on the front page: the covers of books that Jessica has translated during her illustrious career. It’s quite an impressive spread!
I used a combination of CSS grid and responsive images to keep the site extremely performant. That meant using the picture element, source elements, srcset attributes, and the sizes attribute.
That last part always feels weird. I have to tell the browser what sizes the images will be displayed at, which can change depending on the viewport width. But I’ve already given that information in the CSS! It feels weird to have to repeat that information in the HTML.
It’s not just about the theoretical niceties of DRY: Don’t Repeat Yourself. There’s the very practical knock-on effects of having to update the same information in two places. If I update the CSS, I need to remember to update the HTML too. Those concerns no longer feel all that separate.
But I get it. Browsers use a look-ahead parser to start downloading images as soon as possible, so I understand why I need to explicitly state what size the images will be displayed at. Still, it feels like something that a computer should be calculating, not something for a human to list out by hand.
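Here’s a simplified illustration of the duplication—not the actual markup or breakpoints from the site:

<style>
  /* The stylesheet already knows how wide the image will be… */
  .covers img { width: 50vw; }
  @media (min-width: 40em) {
    .covers img { width: 25vw; }
  }
</style>

<!-- …but the sizes attribute has to spell out the same information again. -->
<img
  src="cover-small.jpg"
  srcset="cover-small.jpg 400w, cover-medium.jpg 800w, cover-large.jpg 1200w"
  sizes="(min-width: 40em) 25vw, 50vw"
  alt="The cover of a translated book">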
But wait! Most of the images on that page also have a loading attribute with a value of “lazy”. That tells browsers they don’t have to download the images immediately. That sort of negates the look-ahead parser.
That’s why the HTML spec now includes a value for the sizes attribute of “auto”. It’s only supposed to be used in conjunction with loading="lazy" (otherwise it means 100vw).
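So for a lazily-loaded image, the markup can hand the calculation over to the browser (the filenames here are just placeholders):

<img
  src="cover-small.jpg"
  srcset="cover-small.jpg 400w, cover-medium.jpg 800w, cover-large.jpg 1200w"
  sizes="auto"
  loading="lazy"
  alt="The cover of a translated book">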
Browser makers are on board with this. You can track the implementation progress for Chromium, WebKit, and Firefox.
I would very much like to see this become a reality!
A no-nonsense checklist of good performance advice from Karolina.
This is very convincing.
Addy takes a deep dive into making sure your images are performant. There’s a lot to cover here—that’s why I ended up splitting it in two for the responsive design course: one module on responsive images and one on the picture element.
This could give a big boost to web performance!
This is a good round-up of APIs you can use to decide if and how much JavaScript to load. I might look into using storage.estimate() in service workers to figure out how much gets pre-cached.
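I haven’t tried this yet, but I imagine it would look something like the following—the cache name and file list are entirely made up:

addEventListener('install', (event) => {
  event.waitUntil((async () => {
    const cache = await caches.open('precache-v1');
    // Always pre-cache the essentials…
    await cache.addAll(['/', '/offline']);
    // …but only pre-cache the nice-to-haves if there’s plenty of room.
    const { usage, quota } = await navigator.storage.estimate();
    if (usage < quota / 2) {
      await cache.addAll(['/images/hero.jpg', '/fonts/display.woff2']);
    }
  })());
});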
Did you know there’s an imagesrcset attribute you can put on link rel="preload" as="image" (along with an imagesizes attribute)?
I didn’t. (Until Amber pointed this out.)
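It means you can preload a responsive image and let the browser pick the right candidate, using the same logic it applies to srcset and sizes on the img element itself (the URLs here are placeholders):

<link
  rel="preload"
  as="image"
  href="hero-small.jpg"
  imagesrcset="hero-small.jpg 400w, hero-medium.jpg 800w, hero-large.jpg 1200w"
  imagesizes="(min-width: 40em) 50vw, 100vw">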
There’s a new image format on the browser block and it’s very performant indeed. Jake has all the details you didn’t ask for.
Progressive Enhancement allows us to use the latest and greatest features HTML, CSS and JavaScript offer us, by providing a basic, but robust foundation for all.
Some great practical examples of progressive enhancement on one website:
type="module" to enhance a form with JavaScript,picture element to provide webp images in HTML.All of those enhancements work great in modern browsers, but the underlying functionality is still available to a browser like Opera Mini on a feature phone.
I’m constantly forgetting the difference between the async attribute and the defer attribute on script elements—this is a handy explanation.
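The short version, as I understand it (the paths are placeholders):

<!-- async: fetched in parallel, run as soon as it arrives, in no guaranteed
     order — fine for independent scripts like analytics. -->
<script async src="/js/analytics.js"></script>

<!-- defer: fetched in parallel, run after the HTML has been parsed, in
     document order — fine for scripts that work with the DOM. -->
<script defer src="/js/app.js"></script>

<!-- Module scripts behave like deferred scripts by default. -->
<script type="module" src="/js/modern.js"></script>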
This is going to be so useful for client work at Clearleft—instant snapshots of performance metrics across industries and regions!
For example, we’ve been working a lot with the travel sector, and now we can call up these benchmarks without having to generate a whole bunch of Web Page Test results ourselves.
See Tammy’s blog post for more details.
This is the transcript of a brilliant presentation by Scott—read the whole thing! It starts with a much-needed history lesson that gets to where we are now with the dismal state of performance on the web, and then gives a whole truckload of handy tips and tricks for improving performance when it comes to styles, scripts, images, fonts, and just about everything on the front end.
Essential!
Aaron outlines some sensible strategies for serving up images, including using the Cache API from your service worker script.
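A cache-first approach using the Cache API might look something like this—a sketch of the general idea rather than Aaron’s exact code, with an arbitrary cache name:

addEventListener('fetch', (event) => {
  if (event.request.destination !== 'image') return;
  event.respondWith((async () => {
    // Serve from the cache when possible…
    const cached = await caches.match(event.request);
    if (cached) return cached;
    // …otherwise go to the network and keep a copy for next time.
    const response = await fetch(event.request);
    const cache = await caches.open('images');
    cache.put(event.request, response.clone());
    return response;
  })());
});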
The way you build web pages—using IntersectionObserver, for example—can have a direct effect on the climate emergency.
Webpages can be good citizens of battery life.
It’s important to measure the battery impact in Web Inspector and drive those costs down.
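For example—a sketch of the general idea, with made-up class names—an IntersectionObserver can make sure expensive work only happens while an element is actually visible:

// Only animate the element while it’s on screen, instead of burning
// battery on work nobody can see.
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    entry.target.classList.toggle('is-animating', entry.isIntersecting);
  }
});
observer.observe(document.querySelector('.fancy-animation'));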
This is an excellent UX improvement in Chrome. For sites like The Session, where page loads are blazingly fast, this really makes them feel like single page apps.
Our goal with this work was for navigations in Chrome between two pages that are of the same origin to be seamless and thus deliver a fast default navigation experience with no flashes of white/solid-color background between old and new content.
This is exactly the kind of area where browsers can innovate and compete on the UX of the browser itself, rather than trying to compete on proprietary additions to what’s being rendered.