
Announcing GraphQL Caching Capability in Akamai API Gateway

June 18, 2019 · by Jeff Costa

GraphQL offers developers many advantages over a traditional RESTful API, which is a key part of its growing popularity. Among those advantages is the ability to make a targeted query against multiple resources with a single request, which can dramatically increase API performance by reducing the payload size sent to the API client.

One deterrent to wider adoption of GraphQL is the absence of a URL-like primitive that uniquely identifies an API response and so makes it cacheable. Such caching is easily accomplished today for a RESTful API, giving these APIs tremendous scalability and speed. In contrast, the complex structure of a GraphQL query makes it hard to predict what a cacheable response will look like.

To work around this limitation, GraphQL developers can reorganize data stores, spin up Redis instances, or configure caching on API clients they control. While all of these are potentially valid workarounds, each falls short:

  • You cannot tune data stores indefinitely
  • Redis servers must sit near your GraphQL servers, cost money to operate, and add complexity
  • You often do not have the luxury of controlling the API client

Akamai is addressing this limitation in a new way: we're introducing GraphQL caching capability as part of our API Gateway product. With this capability, we use DNS to determine where an API client is making its request from, then route that client to the geographically closest Akamai edge server. That edge server inspects the response from the GraphQL backend (origin) and determines how to cache it. The cached version is then served on every subsequent request.

More specifically, when a request arrives, we compute a cache key for the object that references its location in each edge server. This cache key is based on a SHA-256 hash of the query. Before hashing, we normalize and canonicalize the query so that cache keys do not needlessly differ when requests are not meaningfully different. This lets API Gateway customers position GraphQL responses geographically close to API clients while reducing the number of GraphQL servers needed to serve requests. After all, the fastest GraphQL query is the one you don't make.
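
To make this concrete, here is a minimal sketch, with assumed field names, of how two spellings of the same query collapse to one cache key (the exact normalization rules are Akamai's and are not spelled out in this post):

```graphql
# Spelling 1 (compact), shown as a comment so this block stays a
# single valid document:
#
#   { products(brand: "Apple") { name price } }
#
# Spelling 2 (expanded) normalizes to the same canonical form as
# spelling 1, so both map to the same SHA-256 cache key:
{
  products(brand: "Apple") {
    # comments and insignificant whitespace are stripped
    name
    price
  }
}
```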

Let's take a look at how Akamai does this by reviewing eight sample queries. The origin we will use is an online retail product catalog API running on Heroku.

Sample query #1

We will start with a simple query that retrieves all cell phones made by Apple. The query we send can be seen in the demo's “Post Query” window.

You can see the sample query in action in the animated GIF below:

[Animated GIF: o-GraphQL1.gif]
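
The GIF shows the exact query used in the demo; it is along these lines (the schema and field names here are illustrative guesses, not the demo's actual schema):

```graphql
# Illustrative: fetch every cell phone made by Apple
{
  products(brand: "Apple", category: "Cell Phones") {
    name
    brand
    category
    price
  }
}
```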

The first request is made to Heroku by clicking the “Send To Origin” button, and the server's response body is shown in the “Origin Response” window. The window below it shows how long the server took to return the response to the client (“Total Time”). We then send the identical request to the same Heroku API, this time fronted by API Gateway, by clicking the “Send To Edge” button.

The response body from API Gateway is shown in the “Edge Response” window. In the window below that, you will see the Total Time (round-trip time), as well as two Akamai debug headers which are emitted by API Gateway. The first header is called “Status” and reveals whether the query is a cache hit or cache miss at the gateway. The second header is the computed cache key — an alphanumeric string used to reference the object in cache. The cache key is automatically generated by API Gateway after the first request is made against it.

As you can see in the animated GIF above, Heroku answers the first query in approximately 36ms. The first request made to the Edge takes 49ms, because it is a cache miss: the edge server does not yet have the query's response object in cache and must obtain it directly from Heroku. After that first request, though, Akamai has parsed the query, computed a unique cache key for it, and stored the response at the Edge.

The very next request retrieves the object from the Edge cache in 17ms, almost 50% faster. The request after that drops to 14ms: the response body is now in memory, so the gateway can serve it quickly to every other client asking for it. The cached object now sits right next to the client requesting it. Heroku does no work generating the response, and the client gets the data much faster, making the API more responsive. This is the power of caching a GraphQL response.

Sample query #2

The second sample query uses the same query from our first example, but adds comments and reorders the brand and category fields to alter the query:

[Animated GIF: o-GraphQL2.gif]
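
Sketched against the same illustrative schema as above, the altered query would look something like this:

```graphql
# Illustrative: same selection as sample query #1, but with
# comments added and the brand and category fields swapped
{
  products(brand: "Apple", category: "Cell Phones") {
    name
    # category now precedes brand
    category
    brand
    price
  }
}
```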

Running this query against origin takes 146ms, as the response must be generated by the Heroku server. Running the same query against Akamai, however, reveals that the query is still a cache hit at the Edge, even though it is structured differently.

But is it really different? After we canonicalize and normalize the query, ignoring comments and the reordering of fields, we determine that the query has not fundamentally changed. We therefore arrive at the same cache key as for the first query and use it to look up the cached object on the edge server. No request transits to origin, Heroku does no work, and the client gets the data 52% faster than if origin had generated it.

Sample query #3

The third sample query changes things up a bit. Here we make a query using variable substitution ($bname and $cname) to retrieve only scanners made by Sony:

[Animated GIF: o-GraphQL3.gif]
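
With the same illustrative schema, a parameterized version of the query might read:

```graphql
# Illustrative: the brand and category are supplied as variables
query Products($bname: String!, $cname: String!) {
  products(brand: $bname, category: $cname) {
    name
    brand
    category
    price
  }
}

# sent with the variables JSON:
#   { "bname": "Sony", "cname": "Scanners" }
```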

The first request to origin takes 155ms, while the first request to the edge server takes 154ms and results in a cache miss, because the edge server has never seen this request before. After that first request, the cache miss turns into a cache hit, and the Edge serves the response from cache in only 78ms. This is 50% faster than obtaining the response from Heroku, and it works for every API client, regardless of where in the world it is.

Sample query #4

The fourth sample query is a revision of the previous one, with the variables reversed and renamed; for example, instead of variable $bname we have $newbname:

[Animated GIF: o-GraphQL4.gif]
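
Illustratively, the revised document differs only in the names and declaration order of its variables:

```graphql
# Illustrative: same shape as sample query #3, variables renamed
query Products($newcname: String!, $newbname: String!) {
  products(brand: $newbname, category: $newcname) {
    name
    brand
    category
    price
  }
}

# sent with the variables JSON:
#   { "newbname": "Sony", "newcname": "Scanners" }
```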

As you can see from the animated GIF above, this is still a cache hit at the Akamai edge server. That's because, from API Gateway's perspective, nothing about the query has changed that materially impacts its output. The response already in cache is therefore reused and served to the calling client. No request is sent to the Heroku server, and the response reaches the client 50% faster.

Sample query #5

The fifth sample query is a long one, intended to bring back all the product names in the catalog. This is where caching can really shine, with large data sets:

[Animated GIF: o-GraphQL5.gif]
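
A catalog-wide query along these lines would fit the description (still the illustrative schema; note the userguide field, which sample query #6 will skip):

```graphql
# Illustrative: pull every product in the catalog
{
  products {
    name
    brand
    category
    price
    description
    userguide
  }
}
```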

The first query to Heroku takes 140ms, while the first request to Akamai takes 245ms and is a cache miss. The subsequent call to Akamai takes 78ms, 55% faster. Further requests to the Edge cache hover around the 70ms mark, whereas subsequent requests to the Heroku origin vary widely: 169ms, 187ms, and 166ms.

Sample query #6

The sixth sample query is identical to the previous query, except we now instruct the GraphQL API to skip sending the userguide field in the response. The Edge ignores the @skip directive, as it does not materially change the response:

[Animated GIF: o-GraphQL6.gif]
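
Using the illustrative schema, the only change is a @skip directive on the userguide field:

```graphql
# Illustrative: identical to sample query #5 except for @skip
{
  products {
    name
    brand
    category
    price
    description
    userguide @skip(if: true)
  }
}
```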


This once again enables the Edge to serve the object quickly from cache rather than asking origin to regenerate it: a 47% speedup, with no new cache key created.

Sample query #7

The seventh sample query makes use of fragments for code reuse. API Gateway processes the query with fragment substitution and, after an initial cache miss, computes a cache key for the query and serves the cached copy on subsequent requests:

[Animated GIF: o-GraphQL7.gif]
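
A fragment-based version of the catalog query might look like this (fragment and type names are illustrative):

```graphql
# Illustrative: the selection set is factored into two fragments
{
  products {
    ...BasicInfo
    ...PurchaseInfo
  }
}

fragment BasicInfo on Product {
  name
  brand
  category
}

fragment PurchaseInfo on Product {
  price
  description
}
```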

Sample query #8

Finally, the eighth sample query is identical to the previous fragment query, but the fragment order is reversed:

[Animated GIF: o-GraphQL8.gif]
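
Illustratively, this is the same document with the two fragment definitions in the opposite order:

```graphql
# Illustrative: identical to sample query #7, fragments reversed
{
  products {
    ...BasicInfo
    ...PurchaseInfo
  }
}

fragment PurchaseInfo on Product {
  price
  description
}

fragment BasicInfo on Product {
  name
  brand
  category
}
```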

As you can see, the reordering does not materially change the response, so Akamai can serve it to clients directly from cache. No request to origin is needed to generate the response, as the identical cache key from the previous query demonstrates.

Conclusion

The eight sample queries above demonstrate the power and capability of the new GraphQL caching feature in Akamai API Gateway. Your GraphQL APIs can now enjoy the same offload and speed benefits that RESTful APIs have enjoyed for years.

Ready to try it for yourself? Register for a free trial of API Gateway to put this new caching capability to work.

Jeff Costa is a senior product manager at Akamai Technologies.