How-To: Improve Your Application-Caching Results with API Gateway

by Jeff Costa

Recently we posted about how you can improve API performance with caching.

Now let’s see how Akamai can improve our application-caching results using CDN delivery, routing, and response-caching capabilities. In the test described below, we used a test API to compare throughput rates with and without caching on Akamai, and observed that a caching strategy on Akamai let us process more transactions per second. Let’s walk through the steps we took to configure the platform with Akamai’s API Gateway to make this happen.

To get started, we create a new DNS A record of “” at our DNS provider that points to the IP address of the Digital Ocean server. This is the hostname each edge server will use to forward traffic to the API server. Let’s use the dig command to check that this edge-to-origin DNS entry is set up and working correctly:
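As a sketch, assuming the edge-to-origin hostname is `origin-api.example.com` (the actual hostname is omitted above), the check looks like this:

```shell
# Query the A record for the edge-to-origin hostname; the answer
# should be the Digital Ocean server's IP address.
dig +noall +answer origin-api.example.com A
```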

This is working as expected. Next, we create what is known as an Edge Hostname, which is used in a DNS CNAME chain to ensure requests for “” get routed to the Akamai platform. You create the edge hostname using Property Manager or the PAPI interface. Once it exists, add a CNAME record to your DNS zone that maps your hostname to the edge hostname.

Let’s run a dig command to ensure that a request for “” is properly routed to Akamai via the CNAME chain:

Here you see Akamai chasing the CNAME chain for “” and mapping the request to the edge servers closest to the API client’s geographic location (the last two IP addresses, which start with 209.x.x.x). This is the Akamai secret sauce, and the core of what we call the Akamai Intelligent Platform.

After this mapping magic takes place, we use Akamai’s configuration utility, Property Manager, to add these entries to the configuration file for the API. This can be done via the Akamai Luna Control Center. In the Luna screen capture below, I have added the origin entry in the “Origin Server Hostname” field and the “” entry in the Property Hostname section:

Now, save this configuration and push it to production on the Akamai network. That’s it! Your API is now flowing over Akamai. To validate this, let’s test at Layer 7 by making an HTTP request to ensure that Akamai is actually handling the traffic. How do we know we’re on Akamai? The easiest way is to add Akamai “Pragma” debug headers into a request. Here is what adding the Pragma headers looks like in Postman:
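If you prefer the command line to Postman, the same Pragma debug headers can be sent with curl; the hostname here is hypothetical, and the header values are the standard Akamai debug pragmas:

```shell
# Send Akamai debug Pragma headers and print only the response headers.
curl -sD - -o /dev/null \
  -H "Pragma: akamai-x-cache-on, akamai-x-check-cacheable, akamai-x-get-cache-key" \
  https://api.example.com/cats
```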

Now that we have turned on the Pragma headers, we can make a request to the API with Postman and review the response headers. In those headers you can see the information Akamai is returning about the response:

The “TCP_MISS” value means that the object was not in cache, and that the edge server fetched the JSON response object from origin. The “X-Check-Cacheable: NO” header indicates that this particular URL is not cached on Akamai. Now that we know this is working, let’s run the Siege tool against the API, which is now fronted by Akamai. We will use the same command-line invocation as before, changing only the target of the GET request to “”:
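To make the header semantics concrete, here is a minimal Python sketch (not an Akamai tool) that classifies a response from its debug headers; the sample values mirror the ones above, with a made-up edge server name:

```python
def classify(headers):
    """Classify an Akamai response from its debug headers.

    Returns a (cache_status, cacheable) tuple: cache_status is "HIT" or
    "MISS" based on the X-Cache header, and cacheable is True when
    X-Check-Cacheable is YES.
    """
    x_cache = headers.get("X-Cache", "")
    # X-Cache values look like "TCP_MISS from a1-2-3-4.deploy...";
    # any *_HIT code (TCP_HIT, TCP_MEM_HIT, ...) means served from cache.
    status = x_cache.split(" ")[0] if x_cache else ""
    cache_status = "HIT" if status.endswith("HIT") else "MISS"
    cacheable = headers.get("X-Check-Cacheable") == "YES"
    return cache_status, cacheable

# Headers resembling the uncached response above:
print(classify({"X-Cache": "TCP_MISS from a1-2-3-4.deploy.akamaitechnologies.com",
                "X-Check-Cacheable": "NO"}))  # → ('MISS', False)
```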

$ siege -c 5 --time=5m --content-type "application/json" GET

And here is the result (again, no application-caching is enabled for this run):

Note the uplift we get just by using Akamai’s routing and network capabilities: a 26% increase in hits and transactions versus the Digital Ocean VPS. For the next step, let’s see what enabling application-level caching for the API, while running it through Akamai, gets us:
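The uplift figure is simple arithmetic over the Siege transaction counts. With hypothetical numbers (the real counts are in the screenshots above), a 26% increase works out like this:

```python
def uplift(before, after):
    """Percentage increase in transactions from 'before' to 'after'."""
    return (after - before) / before * 100

# Hypothetical Siege transaction counts for a 5-minute run:
direct_to_origin = 10_000   # Digital Ocean VPS only
through_akamai = 12_600     # same API fronted by Akamai
print(f"{uplift(direct_to_origin, through_akamai):.0f}% more transactions")  # → 26% more transactions
```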

Hits go up yet again and we get more transactions per second. All of this is provided by the built-in network and routing optimizations of the Akamai Intelligent Platform.

For our last example, let’s keep application-level caching on AND enable caching of the /cats resource at Akamai. This ensures that every single edge server will begin caching the JSON response. We do this inside Akamai API Gateway by enabling caching of the /cats resource for 5 minutes:

We then push this new configuration containing the updated caching rule into production at Akamai. Once deployed, we make another request with Postman to verify the settings. Akamai returns these response headers:

Here we see a few changes in the Pragma debug headers:

  • We now have an Expires header that reveals the lifetime of the cached object at Akamai.
  • We see that Akamai now records a TCP_MEM_HIT in the X-Cache header instead of a MISS. This indicates the object was served out of memory on the edge server.
  • We see the automatically defined cache-key for the object.
  • The X-Check-Cacheable header now shows YES instead of NO, indicating the object is cacheable.
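To illustrate the first bullet, here is a short Python sketch (the sample header values are made up) that derives an object’s remaining cache lifetime from the Date and Expires response headers:

```python
from email.utils import parsedate_to_datetime

def remaining_ttl(headers):
    """Seconds of cache lifetime left, per the Date and Expires headers."""
    date = parsedate_to_datetime(headers["Date"])
    expires = parsedate_to_datetime(headers["Expires"])
    return int((expires - date).total_seconds())

# Sample headers for an object cached for 5 minutes at the edge:
headers = {"Date": "Wed, 05 Sep 2018 12:00:00 GMT",
           "Expires": "Wed, 05 Sep 2018 12:05:00 GMT"}
print(remaining_ttl(headers))  # → 300 (seconds, i.e. the 5-minute TTL)
```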

Now let’s re-run Siege for the last time and see what we get:

Transactions jump yet again, getting up to a level that cannot be matched even by enabling application-level caching at Digital Ocean. While we are still not near the baseline values we observed using localhost, we nevertheless have made substantial, real-world improvements to hits and transaction levels that directly relate to a better experience for your API consumers.

Finally, let’s recap what we have observed in tabular format to make it easier to understand how Akamai caching can help your API:

As you can see, the improvements are significant, and you can get these same benefits for your API today. Simply begin a trial of Akamai API Gateway to get started.

Jeff Costa is a senior product manager at Akamai Technologies.

Categories: APIs
