We hope you watched the first episode of this series to learn the basics of caching. Tedd and CJ are back to go into more depth on tiered caching, caching HTTP response codes, and no-store versus zero-second TTL.
Once again we’ll transcribe the video for you, but you’re encouraged to watch the fun whiteboarding.
Hey again, I’m Tedd Smith and I’m a Solutions Engineer at Akamai. Hopefully you had a chance to watch our last video that introduced some of the concepts of caching and where caching happens.
Today I want to dig in a little deeper into some of the more advanced options available in caching and mention some best practices. Today we’ll cover:
- The concept of tiered caching
- Caching HTTP response codes
- No-store versus zero-second TTL
The first thing I want to introduce is the concept of tiered caching. One of the main benefits of caching is the ability to offload requests from your web server by responding on its behalf to those requests with our edge servers.
However, sometimes a requested file isn’t cached on an edge server. Instead of forwarding that request all the way back to your web server, we can request the file from a second tier of cache, a location we call the parent server.
So… think of it like this. You’re sitting at your desk and you want an Akamai “pop” (we call it “soda” here on the west coast, but we like to stick to our east coast roots, so we’ll go with “pop”). But when you go to the vending machine, you notice “Janice from accounting” has taken the last one. Instead of having to go all the way to the Akamai Pop factory, you can walk across the street and pick one up from the corner store.
Enabling the parent tier of caching is simple. Log into LUNA and open your delivery config in Property Manager. Once in your config, look for a green behavior called “Tiered Distribution”. If it isn’t already turned on, give it a flip of the switch. This enables the parent tier for your delivery, giving you better offload and better performance. You’ll also be presented with some options that are described in the UI, but it’s always best to reach out to your Akamai account team with your specific use case and see what they say.
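Conceptually, the lookup order is edge, then parent, then origin. Here’s a minimal sketch of that flow in Python, using plain dictionaries as stand-in caches; the function names and cache structures are illustrative only, and real edge and parent servers also handle TTLs, eviction, routing, and much more:

```python
def fetch(url, edge_cache, parent_cache, origin):
    """Look up `url` at the edge, then the parent, then the origin."""
    if url in edge_cache:                 # cache hit at the edge
        return edge_cache[url]
    if url in parent_cache:               # miss at the edge, hit at the parent
        edge_cache[url] = parent_cache[url]
        return edge_cache[url]
    body = origin(url)                    # miss at both tiers: go to origin
    parent_cache[url] = body              # populate both tiers on the way back
    edge_cache[url] = body
    return body

origin_hits = []
def origin(url):
    origin_hits.append(url)               # track how often the origin is hit
    return f"body-of-{url}"

edge, parent = {}, {}
fetch("/pop", edge, parent, origin)       # first request reaches the origin
fetch("/pop", edge, parent, origin)       # second request is served from cache
```

Even if the edge later evicts the file, the next request is answered by the parent tier, and the origin is still contacted only once.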
Caching Response Codes
So, next let’s talk about caching certain HTTP response codes, sometimes called “negative caching”. The idea here is that if the soda factory has stopped making “Akamai Pop”, we’d like to be notified at the vending machine before we drive all the way to the factory and knock on their door.
Basically, we’re going to cache the following error codes: 204, 305, 400, 404, and all 500s. This helps your end users see the error sooner, saving them time and providing a slightly better user experience, and it also saves your infrastructure from getting hammered by every single request.
It’s easy to set up: just add the “Cache HTTP Error Responses” behavior to your config. You’ll have the option to preserve stale objects and set the max age for all of the error codes.
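To make the idea concrete (this is a sketch of the concept, not Akamai’s internals), here’s a minimal negative cache in Python. The cache structure, the 10-second max age, and the function names are all assumptions for the example:

```python
import time

NEGATIVE_TTL = 10  # illustrative max age (seconds) for remembered errors
CACHEABLE_ERRORS = {204, 305, 400, 404, 500, 501, 502, 503, 504}

error_cache = {}  # url -> (status_code, expiry_timestamp)

def get(url, origin):
    """Return a status code, serving remembered errors from cache."""
    entry = error_cache.get(url)
    if entry is not None and entry[1] > time.time():
        return entry[0]  # serve the cached error without contacting the origin
    status = origin(url)
    if status in CACHEABLE_ERRORS:
        error_cache[url] = (status, time.time() + NEGATIVE_TTL)
    return status

origin_hits = []
def origin(url):
    origin_hits.append(url)       # track how often the origin is hit
    return 404                    # this page no longer exists at the origin

get("/discontinued-pop", origin)  # first request reaches the origin
get("/discontinued-pop", origin)  # second request is answered from cache
```

Within the max age, repeat requests for the missing page never reach the origin, which is exactly the offload the behavior provides.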
The same concept can be applied to redirects. If you are issuing a 302, Akamai will cache it based on your Cache-Control or Expires headers. To override this, you can enable the “Cache HTTP Redirects” behavior in Property Manager, which gives redirects the same caching settings as your HTTP 200 responses.
No-store Versus Zero-second TTL
Lastly, let’s explore the concept of No-store versus a zero-second TTL. This gets a little technical.
When your config is set to no-store, the edge server is instructed not to cache. Every request the edge server receives is forwarded to your web server, and since the edge server doesn’t have the file in cache, your web server must provide a complete response, including the body. This uses excess bandwidth and resources.
This doesn’t have to be the case. A zero-second TTL allows the edge server to cache your file and validate with your web server whether the file has changed. It does this by including an If-Modified-Since header in the request. If the file has not been modified, your web server responds with a 304 and the edge server serves the file from cache, so your web server doesn’t have to serve the full payload.
While every request is still going to your web server, the bandwidth required from your web server is much less and the performance for the end user is much improved. In times of high traffic, this payload difference can still equate to a large amount of offload.
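That revalidation exchange can be sketched as follows. The example simulates an origin handling a conditional GET using the Python standard library; the date and body are made up for illustration:

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

LAST_MODIFIED = datetime(2024, 1, 1, tzinfo=timezone.utc)

def origin_response(request_headers, body=b"<html>full page</html>"):
    """Simulate an origin server answering a (possibly conditional) GET.

    If the edge revalidates with If-Modified-Since and the resource
    hasn't changed, reply 304 with no body; otherwise send the full 200.
    """
    ims = request_headers.get("If-Modified-Since")
    if ims is not None:
        try:
            if parsedate_to_datetime(ims) >= LAST_MODIFIED:
                return 304, b""  # no body: the edge serves its cached copy
        except (TypeError, ValueError):
            pass  # malformed date header: fall through to a full response

# First request: the edge has nothing cached, so the origin sends the body.
    return 200, body

status, body = origin_response({})
# Revalidation: the edge includes If-Modified-Since; the origin answers 304.
status2, body2 = origin_response(
    {"If-Modified-Since": format_datetime(LAST_MODIFIED)}
)
```

The 304 carries headers only, which is why the bandwidth savings add up even though every request still touches the origin.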
A good use case to think of for this is real time news feeds or stock tickers where the content needs to be updated all the time, but you’d still like to gain some benefit from caching.
Certain high-demand, ‘real time’ content could benefit from taking the above concept one step further. Just as a zero-second TTL provides increased offload, a 5-10 second TTL provides even more. If you think your use case could benefit from this, please engage your Akamai account team to dive a little deeper.
I hope you enjoyed learning a little bit more about some of the best practices and advanced features of caching that we highlighted. These simple additions can make a big difference when it comes to high demand, high load situations. Please stay tuned to our video series as we continue to explore how Akamai can help make a difference in your business.