The Case for Auto-Preloading: The Anatomy of a Battle-Tested WPO Treatment


The only constant is change.
My career has featured many changes – my positions change, the seasons change, and technologies change (almost as often as the seasons). Despite these changes I like to keep learning and sharing what I’ve learned as an inventor, developer, speaker and performance-obsessed researcher. That is why I am especially pleased to share that I’m being trusted to take the helm here at Web Performance Today. Radware and Tammy Everts have given me this opportunity to share some of what I know with the community they’ve built here. I’ll do my best not to disappoint 🙂

Looking back on more than six years of implementing web performance optimization (WPO) in the field, I can say that auto-preloading is by far the single most effective performance optimization technique I have seen. It provides the most benefit: we often achieve more than 70% acceleration from this technique alone.

I’d like to tell you the story of how we came to recognize the incredible value of auto-preloading, and how this single technique doesn’t just make individual pages faster — it accelerates a user’s entire flow through a site and ultimately delivers the best possible user experience.

Overview: How Does Preloading Work?

Preloading, also known as “predictive browser caching”, is a WPO technique that uses various methods to load resources (e.g. image files) needed by the “next” page in a user’s path through the site while the user is still consuming content on the current page. The preloaded resources are sent with expires headers so they are pulled from the browser cache when the next page is viewed.
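To make the mechanics concrete, here is a minimal sketch of a preload queue — my own illustration, not FastView’s actual code. It loads next-page resources one at a time so preloading never competes heavily with the current page for bandwidth, and the loader function is injected because the right loading mechanism varies by browser.

```javascript
// Illustrative preload queue (assumed names, not FastView's implementation).
// `load(url, onDone)` is injected so the same queue logic can sit on top of
// <link rel="prefetch">, new Image(), or XHR, depending on browser support.
function preloadSequentially(urls, load, done) {
  const loaded = [];
  function next(i) {
    if (i >= urls.length) {
      done(loaded);
      return;
    }
    // One resource at a time: gentle on bandwidth, never touches the
    // current page's DOM.
    load(urls[i], function () {
      loaded.push(urls[i]);
      next(i + 1);
    });
  }
  next(0);
}
```

In a browser, you would typically kick this off from the window `load` event, e.g. with a loader such as `(url, cb) => { const img = new Image(); img.onload = img.onerror = cb; img.src = url; }`, so preloading starts only after the current page has finished rendering.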

Although auto-preloading is based on some of the most basic WPO principles (e.g. make fewer requests, leverage the browser cache), it is not simple to implement without significant infrastructure and development cycles.

Our “Aha” Moment

Consolidation is a widely utilized performance best practice that, put in its simplest terms, bundles similar page resources (e.g. images) so that fewer round trips are required to send resources from the server to the user’s browser. However, simple consolidation has a performance drawback: it doesn’t play well with another performance technique, browser caching.

With browser caching, resources are stored in the browser’s cache to be re-used on subsequent pages in a user’s flow through the site, again eliminating the need for server round-trips. So while the browser might cache a consolidated bundle of resources, that bundle might contain only some, but not all, of the resources needed for the next page in the flow. If we then create a bundle targeted at the “next” page in order to reduce round-trips, we must include all the common resources previously loaded on the first page, plus any resources unique to the “next” page.
Effectively, this is a double download of the resources that are common across the pages. This is why, in the minds of some performance experts, consolidation is considered an anti-pattern (especially on warm-cache page views): it often causes the repeated download of common resources.
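A tiny worked example (with hypothetical resource names) shows the waste. If each page in a two-page flow gets its own consolidated bundle, every resource shared by both pages crosses the wire twice:

```javascript
// Hypothetical two-page flow; each page is served one consolidated bundle.
const pageABundle = ['logo.png', 'sprite.png', 'hero-a.jpg'];
const pageBBundle = ['logo.png', 'sprite.png', 'hero-b.jpg'];

// Resources a warm-cache visitor downloads twice, because they are baked
// into both bundles and cannot be reused individually from the cache.
const redundant = pageBBundle.filter((r) => pageABundle.includes(r));
// redundant: ['logo.png', 'sprite.png']
```

With individually cached resources, only `hero-b.jpg` would need to be fetched for the second page; consolidation forces `logo.png` and `sprite.png` to be re-sent inside the second bundle.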

When we were developing advanced treatments for our FastView technology, we saw this problem as a golden opportunity to take advantage of the time that a user is spending looking at the current page to “preload” individual resources into their browser cache. In other words, while a user is scanning the page and deciding where to visit next, resources for every likely navigation choice are quietly downloading behind the scenes.

Sounds simple, doesn’t it? It’s not.

Roadblock #1: The Preloading Mechanism Itself Is Pretty Tricky

There were two critical roadblocks to making preloading work.
The goal of the preloading mechanism is to load resources into the local browser cache after the current page rendering is complete. A key requirement of this process is that the current page DOM must not be affected by any of the preloading activity.

We found that, when it comes to preloading, one size does not fit all. There is no common preloading technique that works across browsers, and sometimes the same technique does not even work across different versions of the same browser. For example:

  • Firefox supports the prefetch syntax that does everything you need, but it does not raise an event to tell you when it is done. (This makes tracking your progress difficult, but not impossible.)
  • Chrome supports the prefetch directive, but not in the Google Analytics build, so instead you must load resources as objects.
  • Most modern browsers support inlining images as base64-encoded data URIs; except Internet Explorer 8, which has a 32K limit (and that’s ~24K before encoding: get it wrong and it won’t work).
  • Internet Explorer 7 does not support data URIs, so you must use MHTML (an old holdover from Microsoft Office) or, alternately, good old image spriting, but this means converting all your image references to that pesky CSS syntax.
  • For Internet Explorer 6, you must use spriting for images, and neither dataUri nor MHTML will save you.
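The ~24K figure for IE8 in the list above is just base64 arithmetic: base64 turns every 3 raw bytes into 4 characters, so a 32,768-character budget holds roughly 32768 / 4 × 3 = 24,576 raw bytes, minus whatever the `data:` prefix consumes. A quick sanity check (hypothetical helper names, not from the original post):

```javascript
// base64 encodes each 3-byte group as 4 output characters, padded.
function base64Length(rawBytes) {
  return Math.ceil(rawBytes / 3) * 4;
}

// Will an image of `rawBytes` fit under IE8's 32K data-URI cap once the
// "data:image/png;base64," style prefix is counted?
function fitsIE8DataUri(rawBytes, prefixLength) {
  return prefixLength + base64Length(rawBytes) <= 32768;
}
```

So a 24,576-byte image encodes to exactly 32,768 characters and already busts the limit once any prefix is added, which is why the safe ceiling sits a little below 24K.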

With so many details to get right, we took our techniques to the field and added detailed analytics so we could track just how well each technique really worked and when it didn’t. We refined and debugged until we had repeatable results.

Roadblock #2: Preloading Needs to be Dynamic

On any given web page, there are potentially dozens of different navigation choices a user can make. Preloading resources for every conceivable choice is impossible to do without creating new problems related to over-preloading and bandwidth consumption, not to mention the fact that it would fill up the browser cache much too quickly, thereby negating the purpose of the browser cache.

In the early stages of developing this feature, we would map the page flows (by looking at our own analytics engine) and create the preload lists manually. This led to the much more sophisticated approach that FastView now employs: heuristic data gathering based on page transitions, which allows FastView to collect and use this data automatically in real time.

This is a good example of where WPO tooling can really help developers who would otherwise have to create a complex subsystem dedicated to WPO and preloading. Since FastView directs site traffic, it is perfectly positioned to collect and analyze the data required to create the best possible preload list based on real user behavior. As user behavior changes and new flows become more important, the lists change to reflect the new usage pattern. Just as developers don’t build compilers or linkers in house, automated WPO tools like FastView provide key mechanisms that would be very difficult and not cost-effective for site owners to build themselves.
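The transition-based heuristic can be sketched in a few lines — assumed names and structure, not FastView’s API. The idea is to count observed page-to-page transitions and preload only for the few most common next pages, which sidesteps both over-preloading and cache pollution:

```javascript
// Sketch of a transition-frequency tracker (illustrative names only).
class TransitionTracker {
  constructor() {
    this.counts = new Map(); // key "from -> to" => observed hit count
  }

  // Called for each real user navigation the proxy observes.
  record(fromPage, toPage) {
    const key = fromPage + ' -> ' + toPage;
    this.counts.set(key, (this.counts.get(key) || 0) + 1);
  }

  // The top-N most likely next pages for `fromPage`, by frequency.
  // Only these get their resources preloaded.
  preloadListFor(fromPage, n) {
    return [...this.counts]
      .filter(([key]) => key.startsWith(fromPage + ' -> '))
      .sort((a, b) => b[1] - a[1])
      .slice(0, n)
      .map(([key]) => key.split(' -> ')[1]);
  }
}
```

Because the counts are rebuilt from live traffic, the preload list drifts automatically as user behavior drifts — no manually curated flow lists to maintain.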

Bonus: Preloading Also Solves Refactoring Issues

One of the strengths of preloading is that it retains the original resource names and granularity of the origin site. This means that when common resources are created by site developers they are cached, as is, without repackaging. This makes for easier debugging and less complication in production environments.

Takeaway

Web performance optimization isn’t a per-page challenge. The only meaningful way to look at WPO is in terms of contextual acceleration (i.e. multi-page flows), and auto-preloading is the most effective technique for ensuring the performance of the entire user experience on a site.

LEARN MORE: Preloading is a feature in our FastView WPO solution, as well as the latest release of our Alteon application delivery controller.


