Blog

Optimizing Cacheability and Web App Performance

June 14, 2017 by Mario Korf

In this article, we look at some key concepts and best practices that will help you maximize web application performance. These recommendations assume you’re using a content delivery network (CDN), such as Akamai, Cloudflare, or Fastly. If you aren’t yet using a CDN, some of these recommendations will still be useful.

 

Understanding Cache Control

 

Let’s take a quick look at a couple of key concepts in caching, and then go over strategies for implementing effective cache control, including recommended settings for different types of content.

 

Edge Caching vs. Client Caching

 

There are multiple places where caching can occur; take a look at the image below for reference.

We use the term “edge caching” to refer to caching done by a CDN (depicted here as Akamai, but it could be another provider), while “client caching” refers to caching done by the end user’s browser.

 

Edge Caching

 

Edge caching can be thought of as an extension of the origin architecture, because you retain a high degree of control over your content in the edge caches. You can specify how the content is to be handled and cached, while also having the ability to quickly purge content from edge caches across the network whenever necessary.

In general, most types of content can benefit from edge caching, except for personal information that is unique to each end user, or content that must be served by the origin for specific business reasons.

 

Client Caching

 

Client caching provides tremendous performance benefits, as the content is cached right on the end-user device, but it must be done carefully as client-cached content cannot be purged or invalidated. However, when used judiciously, client caching can be a powerful complement to edge caching.

Other types of downstream caching, such as ISP and corporate proxies, can be leveraged in a way that is similar to client caching. Yet, it is important to remember that these caches lie beyond your control.

 

Managing Content Freshness: TTLs and Invalidation

 

How long you keep a piece of content fresh is up to you, and later in this article we recommend some basic settings based on content type. Whatever you decide, there are two main methods for maintaining the freshness of cached content: TTLs and invalidation.

 

Time-To-Live

 

The first method of maintaining content freshness is setting a time-to-live, or TTL, which instructs the cache to check with the origin server for newer content (using HTTP If-Modified-Since requests) once the TTL has passed.
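As a sketch of what that revalidation looks like, the conditional request below asks the origin whether anything newer exists; the URL and timestamp are placeholders, and a 304 Not Modified answer means the cached copy can keep being served:

```typescript
// Sketch: once a cached object's TTL expires, a cache revalidates it with the
// origin using a conditional GET. A 304 response means the cached copy is
// still good; a 200 response carries a fresh copy.
async function revalidate(url: string, cachedLastModified: string): Promise<void> {
  const response = await fetch(url, {
    headers: { "If-Modified-Since": cachedLastModified },
  });

  if (response.status === 304) {
    // Not Modified: keep serving the cached copy and reset its TTL.
    console.log("Cached copy is still fresh");
  } else if (response.ok) {
    // The origin returned new content; replace the cached copy.
    const body = await response.text();
    console.log(`Refreshed cache with ${body.length} bytes`);
  }
}

// Example usage with a placeholder URL and timestamp.
revalidate("https://www.example.com/styles/site.css", "Tue, 13 Jun 2017 09:00:00 GMT");
```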

 

Tip: TTL settings should not necessarily be linked to how often the content itself changes. Instead, business rules regarding the “time sensitivity” of the content — that is, how long the content can be served stale without significantly affecting the user experience — are the key factor in setting TTLs. Longer TTLs provide greater caching offload, but you can see benefits with TTLs of even just a few seconds for popular content.

Invalidation

 

The second method of controlling freshness is invalidating content, or purging it from cache. The next time a client requests the information, a new version is retrieved from the origin and served.

Note that the “cache key” (i.e., the unique index into cache) for each object is typically the URL for that object.
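How you purge depends entirely on your CDN (Akamai’s Fast Purge, for example, has its own authenticated API). The sketch below only illustrates the idea, using a hypothetical purge endpoint, token, and payload:

```typescript
// Hypothetical example only: CDNs expose purge/invalidation APIs, but the
// endpoint, payload, and credential below are placeholders, not a real API.
async function purgeFromEdge(objectUrls: string[]): Promise<void> {
  const response = await fetch("https://cdn.example.com/purge", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer YOUR_API_TOKEN", // placeholder credential
    },
    // The cache key is typically the object's URL, so purges are issued by URL.
    body: JSON.stringify({ objects: objectUrls }),
  });

  if (!response.ok) {
    throw new Error(`Purge request failed: ${response.status}`);
  }
}

// Purge a page after publishing breaking news, for example.
purgeFromEdge(["https://www.example.com/news/breaking-story.html"]);
```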

 

Caching Examples and Recommendations

 

Maximizing the time that content can remain in cache boosts cache hit rates and, in turn, end-user performance. However, longer TTLs can also lead to stale content, so the right balance needs to be struck.

Below we recommend some strategies and TTL values — for both edge and client caching — that aim to maximize caching performance for different types of content without sacrificing content freshness.

 

Content Characteristics: Static or versioned content
Examples: Image files, or other files where the URL or file name changes whenever the content changes
Caching Strategy: 1 month (or longer) Edge TTL + 1 month Client TTL
Notes: Most efficient caching scenario.

Content Characteristics: Content with low to medium time sensitivity (e.g., staleness of more than 15 minutes is acceptable)
Examples: Search results, user reviews, stylesheets, weather forecasts, social updates
Caching Strategy: 15 min to 1 day Edge TTL (based on time sensitivity) + 5 min Client TTL
Notes: Edge TTL can be set longer if using Fast Purge to invalidate content. For low time sensitivity content, client caches can use a TTL of 2x the median user session duration. Remember, client caches cannot be invalidated and thus need a relatively low TTL.

Content Characteristics: Content with high time sensitivity but low frequency changes
Examples: Breaking news, sales promotions
Caching Strategy: 1 month Edge TTL + 0 sec Client TTL + API to invalidate stale content
Notes: Use an API for programmatic invalidation.

Content Characteristics: Content with high time sensitivity and high frequency changes
Examples: Sports scores, stock prices, product availability
Caching Strategy: 1 sec to 10 min Edge TTL (based on time sensitivity) + 0 sec Client TTL
Notes: With high frequency changes, using the TTL to maintain freshness is typically most efficient.

Content Characteristics: Content with scheduled time changes
Examples: Timed promotions, product releases
Caching Strategy: Countdown TTLs -OR- 1 month Edge TTL + 0 sec Client TTL + API at change time
Notes: Countdown TTLs have values dynamically set based on the length of time until the next scheduled change.

Content Characteristics: Content containing personal information
Examples: Shopping carts, personalized recommendations, or account information
Caching Strategy: Do not cache at Edge or Client
Notes: If the personal information is strictly origin-generated content, such as personalized recommendations — and not user-generated, such as shopping cart contents — then Client caching, with a fairly low TTL, may be considered.

In the recommendations above, you probably noticed that each strategy specifies both an Edge TTL and a Client TTL. When determining how stale content can become, you must add these together: a 1 day Edge TTL plus a 5 min Client TTL means a client could see content that is up to 1 day and 5 minutes old.

In general, we recommend eliminating the time sensitivity of content by versioning content and object URLs wherever possible. This ensures content is always served fresh even with very long TTLs, giving the best cache performance. In addition, when fine-tuning performance, it is useful to check log files: a high incidence of “Not Modified” responses may indicate that longer TTLs can be used.
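To make the strategies above concrete: one generic way to express separate edge and client TTLs is the standard Cache-Control header, where s-maxage applies to shared caches such as a CDN and max-age applies to the browser (most CDNs also provide their own control headers and configuration that can override these). The Node.js handler below is only a sketch, with illustrative paths and TTL values, including a long-lived rule for versioned assets:

```typescript
import { createServer } from "node:http";

// Generic sketch: s-maxage governs shared (edge) caches, max-age the client.
// Real CDN configurations often override or supplement these headers.
const server = createServer((req, res) => {
  if (req.url?.startsWith("/assets/")) {
    // Versioned/fingerprinted assets: the URL changes when the content changes,
    // so both edge and client can hold them for a month or longer.
    res.setHeader("Cache-Control", "public, max-age=2592000, s-maxage=2592000, immutable");
  } else if (req.url?.startsWith("/search")) {
    // Medium time sensitivity: longer edge TTL, short client TTL.
    res.setHeader("Cache-Control", "public, max-age=300, s-maxage=3600");
  } else {
    // Personal or otherwise uncacheable responses.
    res.setHeader("Cache-Control", "private, no-store");
  }
  res.end("ok");
});

server.listen(8080);
```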

Finally, to improve cacheability of HTML pages, it’s best not to embed personal information directly in the HTML. It’s better to have the client fetch personal data through AJAX calls, or by reading dedicated cookies. This boosts performance by enabling the HTML page to be cached, both at the edge and at the client. The section Caching APIs and Dynamic Content below has more tips on how to improve cacheability of dynamic content.
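For example, the cached page might include a small script like the sketch below, which fetches the user-specific fragment after the page loads; the endpoint, response shape, and element IDs are hypothetical:

```typescript
// Hypothetical endpoint and element IDs: the cached HTML is identical for all
// users, and the personal fragment is fetched separately with no-store semantics.
async function loadAccountSummary(): Promise<void> {
  const response = await fetch("/api/account/summary", {
    credentials: "same-origin", // send the session cookie
    cache: "no-store",          // never cache the personal response
  });
  if (!response.ok) return;     // anonymous visitors just keep the generic page

  const summary = (await response.json()) as { displayName: string; cartCount: number };

  const greeting = document.querySelector("#greeting");
  if (greeting) greeting.textContent = `Hello, ${summary.displayName}`;

  const cartCount = document.querySelector("#cart-count");
  if (cartCount) cartCount.textContent = String(summary.cartCount);
}

document.addEventListener("DOMContentLoaded", () => { loadAccountSummary(); });
```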

 

TTL Configuration and Management Strategies

 

 

Categorizing by Content Type

 

Many content providers use content type (e.g., JPG versus HTML) as a first cut for categorizing content cacheability. Within the category of HTML pages, a useful strategy is to sort content into broad groups of cacheability, and use a marker in the URL to reflect these groups.

For example, for a retail site, individual product pages may be highly cacheable and long-lived, while category pages may be cacheable with a shorter life span. The homepage may have a shorter lifespan still, while the shopping cart may be entirely uncacheable except for the product images.

 

Structuring URLs

 

With the use of a content management system, or a little up-front planning on site organization, URLs can be structured to identify the level of cacheability for each piece of content. Continuing the retail example, /product/* pages may have a one month TTL, /category/* pages a one week TTL, and the home page a one hour TTL.

Using URL markers is one of the simplest ways to optimize cacheability and site performance, while minimizing maintenance overhead.

Since content containing personal information generally should not be cached, it is also useful to structurally separate content that contains such information, in order to easily identify it. For instance, PDFs are typically cacheable, but PDFs of personal account statements are not. In this case, you could define a rule that caches all files of type PDF except for those in /statements/*.
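As a rough sketch of how such rules might be expressed, the lookup below maps URL patterns to edge TTLs, with the /statements/* exception listed before the general PDF rule. The paths and values are just the examples from this section; real CDN configurations have their own rule syntax.

```typescript
// Illustrative rule table: first match wins, so the /statements/* exception
// comes before the general PDF rule. Paths and TTLs are examples only.
interface CacheRule {
  pattern: RegExp;
  edgeTtlSeconds: number; // 0 means do not cache
}

const rules: CacheRule[] = [
  { pattern: /^\/statements\//, edgeTtlSeconds: 0 },          // personal PDFs: no caching
  { pattern: /\.pdf$/,          edgeTtlSeconds: 30 * 86400 }, // other PDFs: 1 month
  { pattern: /^\/product\//,    edgeTtlSeconds: 30 * 86400 }, // product pages: 1 month
  { pattern: /^\/category\//,   edgeTtlSeconds: 7 * 86400 },  // category pages: 1 week
  { pattern: /^\/$/,            edgeTtlSeconds: 3600 },       // home page: 1 hour
];

function edgeTtlFor(path: string): number {
  const rule = rules.find((r) => r.pattern.test(path));
  return rule ? rule.edgeTtlSeconds : 0; // default: do not cache
}

console.log(edgeTtlFor("/product/widget-123"));     // 2592000
console.log(edgeTtlFor("/statements/2017-05.pdf")); // 0
```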

 

Caching APIs and Dynamic Content

 

Although it may be less obvious, many types of dynamic content benefit from caching as well. This includes APIs, which are frequently used in mobile apps, single-page applications, B2B applications, and machine-to-machine communications.

It’s easy to overlook the cacheability of this type of content due to its dynamic nature and relatively small payload size, but in many cases you can greatly reduce the load on your origin servers and backend databases, while improving response time for end users, by caching these responses.

Below are some ways to maximize the overall performance of APIs and other dynamic content with Akamai — including both cacheable and uncacheable content.

 

Caching Non-Personal Dynamic Content

 

Any dynamic content scenario, where the same content is shown to groups of users, presents an opportunity to leverage caching. For example, a web query or API call returning regionalized content like weather, upcoming movie show times, or store locations can be cached and served via an edge server.

This is done by using the location information as part of the “cache key” — the index (or unique identifier) into the cache. The requestor’s location can be determined in any number of ways: for example, from the query string, from a cookie, or, when using Akamai, from its location intelligence services. By adding this information to the cache key, future requests from the same location can receive the response directly from cache.

Similarly, a web query for product details, user reviews, or search results can be cached using relevant portions of the query string — containing the product ID or the search term — as part of the cache key. In these situations, it is useful to consider how to limit the number of potential cache keys in order to maximize cache hit ratios.

For example, a location-based app might limit its cache key to the requestor’s zip code or location ID, rather than using more specific GPS coordinates or more detailed location information.

Similarly, there are often multiple fields in a query string, many of which are not needed as part of the cache key. You should aim to minimize the number of distinct cache keys for each given query to maximize cache performance.
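A simple way to do this is to normalize the query string down to the few parameters that actually affect the response. The sketch below shows one approach; the parameter names are hypothetical:

```typescript
// Keep only the query parameters that actually change the response, and sort
// them, so equivalent requests map to one cache key. Parameter names are examples.
const CACHE_KEY_PARAMS = ["q", "productId", "zip"];

function buildCacheKey(requestUrl: string): string {
  const url = new URL(requestUrl);
  const kept = new URLSearchParams();

  for (const name of CACHE_KEY_PARAMS) {
    const value = url.searchParams.get(name);
    if (value !== null) kept.append(name, value);
  }
  kept.sort(); // order-insensitive keys

  return `${url.pathname}?${kept.toString()}`;
}

// Both requests below produce the same cache key: "/weather?zip=94103"
console.log(buildCacheKey("https://api.example.com/weather?zip=94103&sessionId=abc123"));
console.log(buildCacheKey("https://api.example.com/weather?utm_source=mail&zip=94103"));
```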

 

Caching Personal Content

 

As noted earlier, content containing personal information can sometimes be made cacheable with some small adjustments. For example, certain pages of a site might be personalized with a logged-in user’s name or the number of items in his shopping cart, but might otherwise be identical to the page that non-logged-in guests see.

In this case, by breaking out the personalized content as an AJAX call, the entire page (including the base HTML, CSS, JavaScript, and images) can still be served from cache for both logged-in users and guests. In some cases, by storing the personalized information, such as the user’s name, in a cookie, the page can be personalized via JavaScript without requiring an AJAX call.
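A sketch of the cookie approach is shown below, assuming a non-sensitive display-name cookie is set at login; the cookie name and element ID are illustrative:

```typescript
// Read a non-sensitive, non-HttpOnly cookie (e.g., set at login) and personalize
// the cached page with JavaScript, avoiding an extra AJAX round trip.
// The cookie name and element ID are illustrative.
function readCookie(name: string): string | null {
  const match = document.cookie
    .split("; ")
    .find((part) => part.startsWith(`${name}=`));
  return match ? decodeURIComponent(match.split("=")[1]) : null;
}

document.addEventListener("DOMContentLoaded", () => {
  const displayName = readCookie("displayName");
  const greeting = document.querySelector("#greeting");
  if (displayName && greeting) {
    greeting.textContent = `Welcome back, ${displayName}`;
  }
});
```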

 

Performance Tips

 

This is not a comprehensive list of enhancements, but these tips will give you noticeable performance gains.

 

Origin Server Configuration

 

Use the right settings on the origin server to optimize performance. Be sure to:

 

  • Use HTTP persistent connections with the correct timeouts. Ideally, the edge-side timeout should be set about a second shorter than the origin server timeout, so the edge never reuses a connection the origin has already closed.
  • Honor If-Modified-Since requests, set Last-Modified headers, and ensure the server clock time is correct. A sketch of both settings follows this list.
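A minimal sketch of these origin settings, assuming a Node.js origin; the resource path and timeout value are illustrative and should be matched to your CDN’s connection timeout:

```typescript
import { createServer } from "node:http";
import { stat } from "node:fs/promises";

// Illustrative Node.js origin: keep persistent connections open slightly longer
// than the downstream cache expects, and answer If-Modified-Since with 304s.
const server = createServer(async (req, res) => {
  const { mtime } = await stat("./public/index.html"); // example resource
  const lastModified = mtime.toUTCString();

  res.setHeader("Last-Modified", lastModified);

  const since = req.headers["if-modified-since"];
  if (since && new Date(since) >= new Date(lastModified)) {
    res.writeHead(304); // Not Modified: no body needed
    res.end();
    return;
  }

  res.writeHead(200, { "Content-Type": "text/html" });
  res.end("<html>...</html>");
});

// Persistent-connection idle timeout (milliseconds); illustrative value.
server.keepAliveTimeout = 61_000;
server.listen(8080);
```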

 

 

Minimize Use of Hostnames

 

While there are legitimate reasons for using multiple hostnames within a website, using a single hostname for all the resources on a page — including the HTML, embedded objects, and API calls — is generally better for performance, as it reduces DNS lookups and TCP connection setup overhead. This is particularly important for content served over SSL/TLS, as the overhead cost for setting up each connection is greater.

Also, whereas domain sharding (i.e., splitting page resources across multiple domains to increase the number of resources browsers can download simultaneously) may be recommended in certain situations in the HTTP/1.1 world, it can actually hurt performance as the web moves toward HTTP/2 with its multiplexing support.

 

Handling CORS OPTIONS Calls

 

Another reason to use a single hostname for all resources when possible is that requesting resources from a different hostname from within a script, such as an AJAX call, can trigger a Cross-Origin Resource Sharing (CORS) OPTIONS call. This preflight is a separate request confirming that the cross-domain resource request is allowed, adding both time lag to retrieve the resource and overhead load on the server.
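The sketch below shows the difference; the hostnames are placeholders, and the cross-origin request with a JSON body is the kind of call that typically triggers a preflight OPTIONS request:

```typescript
// Same-origin call: no CORS preflight is needed.
fetch("/api/recommendations");

// Cross-origin call with a non-simple content type: the browser first sends an
// OPTIONS preflight to api.other-domain.example, adding a round trip and extra
// load on that server. Hostnames here are placeholders.
fetch("https://api.other-domain.example/recommendations", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ userSegment: "returning" }),
});
```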

 

Mitigating Third-Party Performance Drag

 

Third-party content calls — such as ads, analytics, A/B testing platforms, and social widgets — are becoming increasingly common in web and mobile applications, often comprising the vast majority of requests on a page. Unfortunately, these third-party calls often significantly reduce page performance, reliability, and security — sometimes preventing pages from rendering at all.

To minimize problems, it’s important to first audit and understand the third-party content currently in use, then to establish a clear process for adding such code to a site. Tools such as http://requestmap.webperf.tools/ provide a nice visualization of third-party calls on a page.

Where possible, use asynchronous JavaScript to make these calls so they do not interfere with page rendering. Akamai FEO can implement this best practice on-the-fly without the need for any coding changes.

Use of DNS pre-fetch and TCP/TLS pre-connect capabilities also help to minimize delays from third-party calls. These methods allow the browser to get a head start on establishing the connections needed to retrieve third-party content. Akamai FEO can implement these directives automatically so that HTML code does not need to be modified to employ these best practices.
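As a sketch, both techniques can also be applied from script; the third-party hostname below is a placeholder, and the same hints can equally be declared directly in the HTML head:

```typescript
// Give the browser a head start on the third-party connection.
// (These hints can also be written directly into the HTML <head>.)
const preconnect = document.createElement("link");
preconnect.rel = "preconnect";
preconnect.href = "https://widgets.thirdparty.example"; // placeholder hostname
document.head.appendChild(preconnect);

// Load the third-party widget asynchronously so it cannot block page rendering.
const widgetScript = document.createElement("script");
widgetScript.src = "https://widgets.thirdparty.example/widget.js";
widgetScript.async = true;
document.head.appendChild(widgetScript);
```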

 

Conclusion

 

Maximizing cacheability is one of the most effective ways to improve website performance and scalability, while reducing management complexity. In this article we covered several things you can do on your own, with whatever CDN you’re using now.

Our recommendations apply to all types of websites and applications. However, the mobile web benefits from additional design techniques and considerations, which we’ll cover in a future article on Optimizing for Mobile Audiences.