Journal tags: images


dConstruct optimisation

When I was helping Bevan with making the dConstruct site, I kept banging on to him about the importance of performance.

Don’t get me wrong: I wanted the site to look great, but I also very much wanted it to feel great …and nothing affects the feel of a site (the user’s experience, if you will) more than performance. As Jason wrote:

If you could only do one thing to prepare your desktop site for mobile and had to choose between employing media queries to make it look good on a mobile device or optimizing the site for performance, you would be better served by making the desktop site blazingly fast.

And yet this fundamental aspect of how performant a site is going to be is all too often left until the development phase. I’d really like to see it taken into account much earlier on, during the UX and visual design phases.

Anyway, as the dConstruct site came together, I just kept asking “What would Steve Souders do?”

For a start, that meant ripping out any boilerplate markup and CSS that was there “just in case.” I very much agree with Rachel when she says stop solving problems you don’t yet have. But one of the areas where the unfortunately-named HTML5 Boilerplate excels is in its suggestions for .htaccess rules so I made sure to rip off the best bits.

Initially jQuery was being included, but given how far browsers have come in their JavaScript support, I was able to ditch it and streamline the JavaScript a bit.

Wherever possible, I made sure that background images in CSS were Base64 encoded as data URIs; icons, textures, and the like. That helped to reduce the number of HTTP requests—one of the easy wins for improving performance.
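
If you’re wondering how that’s done, here’s a rough sketch (not the actual dConstruct build process) of how you could generate such a CSS rule with a few lines of Node; the icon.png filename and .icon selector are just stand-ins:

// A rough sketch: read an image file and print a CSS rule with the image
// inlined as a Base64-encoded data URI.
// "icon.png" and ".icon" are stand-ins for this example.
var fs = require('fs');
var base64 = fs.readFileSync('icon.png').toString('base64');
console.log('.icon { background-image: url("data:image/png;base64,' + base64 + '"); }');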

I’ve already mentioned the conditional loading that’s going on.

Then there’s the thorny issue of responsive images. The dConstruct 2012 site is similar to the dConstruct archive in that there is no correlation between browser width and image: quite often, a smaller image is required for wider screens than for narrower viewports because of the presence of a grid. So instead of trying to come up with some complex interplay of client and server cross-communication to figure out which size image would be appropriate to serve up, I instead took the same approach as I did for the archive site: optimise the hell out of images, regardless of whether they’re going to be viewed in a desktop or a mobile device.

Take a look at the original image of Kevin Slavin compared to the version that appears on the dConstruct archive.

Kevin Slavin (original); Kevin Slavin (retouched)

See how everything except the face is so much blurrier in the final version? That isn’t just an attempt to introduce some cool bokeh. It makes for much smaller JPGs with fewer jaggy artefacts. And because human beings tend to focus on other human faces, the technique isn’t really consciously noticeable (although you’ll notice it now that I’ve pointed it out to you).

The design of the 2012 dConstruct site called for monochrome images with colour filters applied.

Ben Hammersley

That turned out to be a boon for optimising the images. This time we were using PNGs rather than JPGs and we were able to get the number of colours down to 32 or even 16. Run them through ImageOptim or Smush.it and you can squeeze even more bytes out of them.

The funny thing is that sweating the file sizes of images used to be part and parcel of web development. Back in the nineties, there was something of an aesthetic that grew out of the need to optimise images with limited (web-safe!) colour palettes. That was because bandwidth was at a premium and you could be pretty sure that plenty of people were accessing your site on slow connections.

Well, here we are fifteen years later and thanks to the rise of mobile, bandwidth is once again at a premium and we can be pretty sure that plenty of people are accessing our sites on slow connections. Yet again, mobile is highlighting issues that were always there. When did we get so lazy and decide it was acceptable to send giant unoptimised images down the pipe to our long-suffering visitors?

Matthew Pennell recently wrote:

…it’s certainly true that the golden rule I grew up with – no page should ever be over 100Kb – has long since been mothballed.

But why? That seems like a perfectly good and still-relevant rule to me.

Alas, on the dConstruct site I wasn’t able to hit that target. With an unprimed cache, the home page comes in at around 300K (it’s 17K with a primed cache). By far the largest file is the CSS, weighing in at 113K, followed by the web font, Futura bold oblique, at 32K.

By the way, when it comes to analysing performance in the browser, this missing manual for the WebKit inspector is really, really handy. I also ran the site through Google Page Speed, but its user-agent seems to assume an arbitrary browser width (960? 1024?) so some of the advice about scaling images needs to be taken with a pinch of salt when applied to responsive designs.

I took a look at some other conference sites too. The beautiful site for the Build conference comes in at just under a megabyte for the homepage—it has quite a few fonts and images. It also has a monochrome aesthetic going on so I suspect quite a few of those images could be squeezed down (and some far-future expiry dates would help for repeat visitors).

Then there’s the site for this year’s Mobilism conference, which is blazingly fast. The combined file size on the homepage isn’t that different to the dConstruct site (although the CSS is significantly smaller) and I suspect there’s some server-side wizardry going on. I’ll have to corner Stephen at the conference next week and quiz him about it.

For now, server-side performance optimisation is something beyond my ken. I should really do something about that, especially as I’m expecting the dConstruct site to take a hammering the day that tickets go on sale (May 29th—save the date).

In the meantime, there’s still plenty I can do on the front end. As Bruce put it:

It seems to me that old-fashioned, oh-so-dull techniques might not be ready for retirement yet. You know: well-crafted HTML, keeping JavaScript for progressive enhancement rather than a pre-requisite for the page even displaying, and testing across browsers.

All those optimisation techniques we learned in the 90s—and even wacky ideas like lowsrc—are back in fashion. Everything old is new again.

Image-y nation

There’s a great article by Wilto in the latest edition of A List Apart. It’s called Responsive Images: How they Almost Worked and What We Need.

What I really like about the article is that it details the thought process that went into trying to work out responsive images for the Boston Globe. Don’t get me wrong: I like it when articles provide code, but I really like it when they provide an insight into how the code was created.

The Filament Group team working on the Boston Globe site were attempting to abide by the two rules of responsive images that I’ve outlined before:

  1. The small image should be the default.
  2. Don’t load images twice (in other words, don’t load both the small images and the larger images).

There are three reasons for this: performance, performance, performance. As Luke put it so succinctly:

Being a Web designer & not considering speed/performance is like being a print designer & not considering how your colors will print.

That said, I came across a situation recently where loading both images for desktop browsers could actually be a pretty good thing to do.

Wait, wait! Hear me out…

Okay, so the way that many of the responsive image techniques work is by means of a cookie. The basic challenge of responsive images is for the client to communicate with the server (and let it know the viewport size) before the server starts sending images. Because cookies can be used both by the client and the server, they offer a way to do that:

  1. As the document begins to load, set a cookie on the client side with JavaScript recording the viewport width.
  2. On the server side, when an image is requested, check for the contents of that cookie and serve up the appropriate image for the viewport size.
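
The client-side half of that needn’t be more than a couple of lines of JavaScript in the head; something along these lines would do (the cookie name here is arbitrary, and the server-side half would simply read the same cookie before deciding which image file to send):

// Run as early as possible in the head, before any image requests go out.
// The cookie name "viewport_width" is arbitrary; the server-side code would
// read this same cookie when choosing which size of image to serve.
document.cookie = 'viewport_width=' +
    (window.innerWidth || document.documentElement.clientWidth) +
    '; path=/';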

There are some variations on this: you could initially route all image requests to send back a 1x1 pixel blank .gif and then, after the page has loaded, use JavaScript to load in the appropriate image for the viewport size.

That’s the theory anyway. As Mat outlined in his article, there’s a bit of a race condition with the cookie being set by the client and the images being sent from the server. New browsers are doing some clever pre-fetching of images. That means they fetch the small images first, violating the second rule of responsive images.

But, like I said, in some situations that might not be so bad…

Josh is working on a responsive project at Clearleft right now—and doing a superb job of it—where he’s deliberately cutting the server-side aspect of responsive images out of the picture. He’s still starting with the small (mobile) images by default and then, after the page has loaded, swaps them out with JavaScript if the viewport is wide enough.

Suppose the small image is 20K and the large image is 60K. That means that desktop browsers are now loading 80K of images (instead of 60). On the face of it, this sounds like really bad news for performance… but because that extra 60K is being downloaded after the page has downloaded, the perceived performance isn’t bad at all. In fact, the experience feels quite snappy. Here’s what happens:

The markup contains the small image as well as some kind of indication where the larger size resides (either in a query string or in a data- attribute):

<img class="photo" src="basestar.jpg" alt="a spiky seed" data-fullsrc="basestar-large.jpg">

Spiky

That’s about 240 by 180 pixels. Now for the large-screen layout, we want those pictures to be more like 500 by 375 pixels:

@media screen and (min-width: 50em) {
    .photo {
        width: 500px;
        height: 375px;
    }
}

That results in a “blown up” pixely image.

Spiky

Once the page has loaded, that small image is swapped out for the larger image specified in the data- attribute.
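
I’m not claiming this is Josh’s actual code, but the swap itself can be as simple as something like this, assuming the markup above and the 50em breakpoint from the CSS:

// After the page has loaded, swap in the larger image on wide viewports.
// The 50em breakpoint in the CSS works out to 800px at the default font size.
window.onload = function () {
    var width = window.innerWidth || document.documentElement.clientWidth;
    if (width < 800) {
        return; // narrow viewport: stick with the small image
    }
    var photos = document.querySelectorAll('img[data-fullsrc]');
    for (var i = 0; i < photos.length; i++) {
        photos[i].src = photos[i].getAttribute('data-fullsrc');
    }
};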

Spiky

Large-screen browsers have now downloaded 20K more than they actually needed but the perceived performance of the page was actually pretty snappy:

  1. Blown-up pixely images act as placeholders while the page is downloading.
  2. Once the page has loaded, the full-sized images snap into place.

Does that sound familiar? This is exactly what the lowsrc attribute did.

I’m probably showing my age by even acknowledging the existence of lowsrc. It was a proprietary attribute created by Netscape back in the days of universally scarce bandwidth:

<IMG SRC=basestar.jpg LOWSRC=low-basestar.jpg ALT="a spiky seed">

(See how I’m using unquoted attributes and uppercase tags and attributes for added nostalgic value?)

The lowsrc value would usually be a monochrome version of the image in the src attribute.

a spiky seed in black and white

And we only had 256 colours to play with. You tell that to the web developers today …they wouldn’t believe you.

Seriously though, it’s funny how problems from the early days of the web have a habit of resurfacing. I remember when Ajax was getting popular, all the problems associated with frames rose from the grave: bookmarking, breaking the back button, etc. Now that we’re in a time of small-screen devices on low-bandwidth networks, we’re rediscovering a lot of the same issues we had when we were developing for 640 pixel wide screens with 28K or 56K modems.

Ultimately, I think that what the great brainstorming around fixing the problems with the img element shows is a fundamental impedance mismatch between the fluid nature of the web and the fixed pixel-based nature of bitmap images. We’ve got ems for setting type and percentages for specifying the proportions of our grids, but when it comes to photographic images, all we’ve got is the pixel—a unit that makes less and less sense every day.

Responsible responsive images

I’m in Belfast right now for this year’s Build conference, so I am. I spent yesterday leading a workshop on responsive enhancement—the marriage of responsive design with progressive enhancement; a content-first approach to web design.

I spent a chunk of time in the afternoon going over the thorny challenges of responsive images. Jason has been doing a great job of rounding up all the options available to you when it comes to implementing responsive images:

  1. Responsive IMGs, Part 1,
  2. Responsive IMGs, Part 2—an in-depth look at techniques,
  3. Responsive IMGs, Part 3—the future of the img element.

Personally, I have two golden rules in mind when it comes to choosing a responsive image technique for a particular project:

  1. The small image should be the default.
  2. Don’t load images twice (in other words, don’t load both the small images and the larger images).

That first guideline simply stems from the mobile-first approach: instead of thinking of the desktop experience as the default, I’m assuming that people are using small screen, narrow bandwidth devices until proven otherwise.

Assuming a small-screen device by default, the problem is now how to swap out the small images for larger images on wider viewports …without downloading both images.

I like Mark’s simplified version of Scott’s original responsive image technique and I also like Andy’s contextual responsive images technique. They all share a common starting point: setting a cookie with JavaScript before any images have started loading. Then the cookie can be read on the server side to send the appropriate image (and remember, because the default is to assume a smaller screen, if JavaScript isn’t available the browser is given the safer fallback of small images).

Yoav Weiss has been doing some research into preloaders, cookies and race conditions in browsers and found out that in some situations, it’s possible that images will begin to download before the JavaScript in the head of the document has a chance to set the cookie. This means that in some cases, on first visiting a page, desktop browsers like IE9 might get the small images instead of the larger images, thereby violating the second rule (though, again, mobile browsers will always get the smaller images, never the larger images).

Yoav concludes:

Different browsers act differently with regard to which resources they download before/after the head scripts are done loading and running. Furthermore, that behavior is not defined in any spec, and may change with every new release. We cannot and should not count on it.

The solution seems clear: we need to standardise on browser download behaviour …which is exactly what the HTML standard is doing (along with standardising error handling).

That’s why I was surprised by Jason’s conclusion that device detection is the future-friendly img option.

Don’t get me wrong: using a service like Sencha.io SRC (formerly TinySRC)—which relies on user-agent sniffing and a device library lookup—is a perfectly reasonable solution for responsive images …for now. But I wouldn’t call it future friendly; quite the opposite. If anything, it might be the most present-friendly technique.

One issue with relying on user-agent sniffing is the danger of false positives: a tablet may get incorrectly identified as a mobile phone, a mobile browser may get incorrectly identified as a desktop browser and so on. But those are edge cases and they’re actually few and far between …for now.

The bigger issue with relying on user-agent sniffing is that you are then entering into an arms race. You can’t just plug in a device library and forget about it. The library must be constantly maintained and kept up to date. Given the almost-exponential expansion of the device and browser landscape, that’s going to get harder and harder.

Disruption will only accelerate. The quantity and diversity of connected devices—many of which we haven’t imagined yet—will explode, as will the quantity and diversity of the people around the world who use them. Our existing standards, workflows, and infrastructure won’t hold up. Today’s onslaught of devices is already pushing them to the breaking point. They can’t withstand what’s ahead.

So while I consider user-agent sniffing to be an acceptable short-term solution, I don’t think it can scale to the future onslaught—not to mention the tricky issue of the licensing landscape around device libraries.

There’s another reason why I tend to steer clear of device libraries like WURFL and Device Atlas. When you consider the way that I’m approaching responsive images, those libraries are over-engineered. They contain a massive list of mobile user-agent strings that I’ll never need. Remember, I’m taking a mobile-first approach and assuming a mobile browser by default. So if I’m going to overturn that assumption, all I need is a list of desktop user-agent strings. That’s a much less ambitious undertaking. Such a library wouldn’t need to be kept updated quite as often as a mobile device listing.
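
Just to illustrate the shape of the thing (the handful of patterns below is purely hypothetical and nowhere near complete), it could be as small as:

// Assume a mobile browser by default; only overturn that assumption if the
// user-agent string matches a short list of desktop patterns.
// These patterns are purely illustrative, not a maintained library.
var desktopPattern = /Windows NT|Macintosh|X11; Linux/;
var isProbablyDesktop = desktopPattern.test(navigator.userAgent);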

Anybody fancy putting it together?

The good new days

I’m continually struck by a sense of web design deja vu these days. After many years of pretty dull stagnation, things are moving at a fast clip once again. It reminds me of the web standards years at the beginning of the century—and not just because HTML5 Doctor has revived Dan’s excellent Simplequiz format.

Back then, there was a great spirit of experimentation with CSS. Inevitably the experimentation started on personal sites—blogs and portfolios—but before long that spirit found its way into the mainstream with big relaunches like ESPN, Wired, Fast Company and so on. Now I’m seeing the same transition happening with responsive web design and, funnily enough, I’m seeing lots of the same questions popping up:

  • How do we convince the client?
  • How do we deal with ad providers?
  • How will the CMS cope with this new approach?

Those are tricky questions but I’m confident that they can be answered. The reason I feel so confident is that there are such smart people working on this new frontier.

Just as we once gratefully received techniques like Dave’s CSS sprites and Doug’s sliding doors, now we have new problems to solve in fiendishly clever ways. The difference is that we now have Github.

Here’s a case in point: responsive images. Scaling images is all well and good but beyond a certain point it becomes overkill. How do we ensure that we’re serving up appropriately-sized images to various screen widths?

Scott kicked things off with his original code, a clever mixture of JavaScript, cookies, .htaccess rules and the data- HTML5 attribute prefix. Crucially, this technique is using progressive enhancement: the smaller image is the default; the larger image only gets swapped in when the screen width is wide enough. Update: and Scott has just updated the code to remove the data-fullsrc usage.

Mark was able to take Scott’s code and fork it to come up with his own variation which uses less JavaScript.

Andy added his own twist on the technique by coming up with a slightly different solution: instead of looking at the width of the screen, take a look at the width of the element that contains the image. Basically, if you’re using percentages to scale your images anyway, you can compare the offsetWidth of the image to the declared width of the image and if it’s larger, swap in a larger image. He has written up this technique and you can see it in action on the holding page for this September’s Brighton Digital Festival.
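
Boiled down to a sketch (reusing the illustrative data-fullsrc convention from earlier rather than Andy’s actual code), the comparison looks something like this:

// Andy's idea in miniature: the image is sized with percentages, so compare
// the width it's actually being displayed at (offsetWidth) with its declared
// width attribute (assuming the img carries one giving its native size).
// If it's being stretched beyond that, swap in a larger version.
var images = document.querySelectorAll('img[data-fullsrc]');
for (var i = 0; i < images.length; i++) {
    var img = images[i];
    var declaredWidth = parseInt(img.getAttribute('width'), 10);
    if (img.offsetWidth > declaredWidth) {
        img.src = img.getAttribute('data-fullsrc');
    }
}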

I particularly like Andy’s Content First approach. The result is that sometimes a large screen width might mean you actually want smaller images (because the images will appear within grid columns) whereas a smaller screen, like maybe a tablet, might get the larger images (if the content is linearised, for example). So it isn’t the width of the viewport that matters; it’s the context within which the image is appearing.

All three approaches are equally valuable. The technique you choose will depend on your own content and the specific kind of problem you are trying to solve.

The Mobile Safari orientation and scale bug is another good example of a crunchy problem that smart people like Shi Chuan and Mathias Bynens can tackle using the interplay of blogs, Github and to a lesser extent, Twitter. I just love seeing the interplay of ideas that cross-pollinate between these clever-clogged geeks.