Akamai API Gateway

API Gateway: The Standalone API

The API Gateway API is an efficient and secure way to register, manage, and deliver your APIs over the Akamai network. Read the docs to learn more about what the API Gateway API does and how it works.



What to Configure

GraphQL Caching

The GraphQL caching behavior detects GraphQL errors and uncacheable content (mutations and subscriptions), and it increases the cache hit rate and performance for GraphQL-based open APIs.

If the application uses GraphQL, we can add the GraphQL caching behavior to a property to control how Akamai handles caching.

 

PROPERTY MANAGER UI

[Image: GraphQL Caching behavior in the Property Manager UI]

API JSON SNIPPET

{
  "name": "graphqlCaching",
  "options": {
      "enabled": true,
      "cacheResponsesWithErrors": false,
      "advanced": "",
"postRequestProcessingErrorHandling": "NO_STORE",      "operationsUrlQueryParameterName": "query",     "operationsJsonBodyParameterName": "query"
  }
}

Caching

Akamai can store a copy of your content so that the edge servers don’t have to continually call the application/origin servers to deliver content to your site visitors. This increases performance for your users, and reduces load and bandwidth on your origin web server. 

Things to keep in mind:

  • Always set the default caching setting to “No-Store” to avoid unnecessary caching and cache pollution

  • Only cache specific and known scenarios 

  • Cache key: By default, we include query strings (QS) in the cache key used to store the object.

Order matters: The order in which the QS appear affects the cache key. If the application does not care about QS order (meaning the content does not change when the order changes), sorting the QS maximizes offload by reducing the number of possible cache key variants for the same object.

Include only the QS that you need: This ensures the uniqueness of the object. By removing QS that aren’t relevant to the object returned, objects can be served from cache in more scenarios.

Include headers/cookies: As with QS, add to the cache key only the headers or cookies that affect the object returned by the application/origin. This maximizes offload while reducing the risk of cache pollution.
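
As a rough illustration only: the default rule might carry {"name": "caching", "options": {"behavior": "NO_STORE"}}, while a child rule that caches a specific, known endpoint limits the cache key as sketched below. The "id" query parameter, the "Accept-Language" header, and the 10-minute TTL are placeholders for whatever actually applies to your responses, and the cacheKeyQueryParams and cacheId option names should be verified against your property's rule format before use.

"behaviors": [
  {
      "name": "caching",
      "options": {
          "behavior": "MAX_AGE",
          "mustRevalidate": false,
          "ttl": "10m"
      }
  },
  {
      "name": "cacheKeyQueryParams",
      "options": {
          "behavior": "INCLUDE",
          "exactMatch": true,
          "parameters": [
              "id"
          ]
      }
  },
  {
      "name": "cacheId",
      "options": {
          "rule": "INCLUDE_HEADERS",
          "includeValue": true,
          "optional": true,
          "elements": [
              "Accept-Language"
          ]
      }
  }
]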

Zone Apex Mapping

Zone apex mapping uses the mapping data available on the Akamai platform to reduce DNS lookup times for your websites.

With zone apex mapping, name servers resolve DNS lookup requests with the IP address of an optimal edge server.

FastDNS zone apex benefits:

  • Eliminate the CNAME chains inherent in CDN solutions

  • Reduce DNS lookup times for any website on the Akamai platform

  • Deploy Akamai acceleration solutions for records at the zone apex for which a CNAME cannot otherwise be used
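
As a hedged sketch of how this is typically enabled: in Edge DNS (formerly FastDNS), zone apex mapping uses an AKAMAICDN record at the apex that points to the property’s edge hostname. The zone name, edge hostname, and TTL below are placeholder values, so confirm the exact record format against your own Edge DNS zone before relying on it.

{
  "name": "example.com",
  "type": "AKAMAICDN",
  "ttl": 20,
  "rdata": [
      "example.com.edgekey.net"
  ]
}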

Compression

As content travels from the edge server to users, the final leg of content delivery tends to be slower, particularly for users who are browsing on mobile devices. 

Enabling compression saves time in content delivery transit, which helps with performance and the total volume of data transmitted, as well as associated costs. Ensuring that we have it enabled — and that it covers all MIME/content types that the application uses — is crucial for the success of the feature.

Features to enable:

  • Last Mile Acceleration

  • Brotli from Origin (if origin supports it)

 

PROPERTY MANAGER UI

[Image: Content Compression settings in the Property Manager UI]

API JSON SNIPPET

"behaviors": [
  {
      "name": "gzipResponse",
      "options": {
          "behavior": "ALWAYS"
      }
  }
],
"criteria": [
  {
      "name": "contentType",
      "options": {
          "matchCaseSensitive": false,
          "matchOperator": "IS_ONE_OF",
          "matchWildcard": true,
          "values": [
              "text/html*",
            "text/css*",
          "application/x-javascript*"

          ]
      }
  }
]
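
The snippet above enables Last Mile Acceleration (gzip) for the listed content types. If the origin supports Brotli, a Brotli Support behavior can be added as well; the behavior name below is an assumption about the Property Manager catalog, so verify it against your rule format before use.

{
  "name": "brotli",
  "options": {
      "enabled": true
  }
}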

Monitor

Enabling features like DataStream or Cloud Monitor allows us to capture platform interactions between end users, Akamai edge servers, and the origin servers as logs and aggregated metrics on performance, events, and errors. 

[Image: monitoring overview]

These logs are then aggregated and made available to your operations teams through push or pull APIs. This is of great value to application teams because it allows them to monitor the performance and health of the API, especially when caching is enabled at the edge (since cached requests never reach their infrastructure).
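
For example, adding DataStream to the property might look roughly like the sketch below. The stream ID is a placeholder and the option names can differ by DataStream version, so treat this as an assumption to validate against your own rule tree rather than a definitive configuration.

{
  "name": "datastream",
  "options": {
      "streamType": "LOG",
      "logStreamName": "12345",
      "logEnabled": true,
      "samplingPercentage": 100
  }
}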

Disable Response Buffer

If the client does not accept chunking (that is, the request is HTTP/1.0), and the response from the origin server does not contain a Content-Length header, the edge server will buffer the response until it has received 32 KB of the response body.

<network:http.buffer-response>off</network:http.buffer-response>
<network:http.buffer-response-v2>off</network:http.buffer-response-v2>

By setting these tags to “off,” you cause the edge server to deliver content to the client without buffering the response body.


SureRoute

The goal of SureRoute is to find the fastest path to the origin. Because most API requests are dynamic in nature, SureRoute should be configured to ensure the use of optimal routes to the application origin.

 

PROPERTY MANAGER UI

[Image: SureRoute behavior in the Property Manager UI]

API JSON SNIPPET

{
  "name": "sureRoute",
  "options": {
      "enabled": true,
      "forceSslForward": false,
      "raceStatTtl": "30m",
      "testObjectUrl":"/srto.html",
      "toHostStatus": "INCOMING_HH",
      "type": "PERFORMANCE",
      "enableCustomKey": false,
      "srDownloadLinkTitle": ""
  }
}

 

Note: If the site is served over HTTPS, make sure to enable “Force SSL Protocol Races.”
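
As far as I can tell, “Force SSL Protocol Races” corresponds to the forceSslForward option in the snippet above, so the HTTPS variant of the same behavior would look like this:

{
  "name": "sureRoute",
  "options": {
      "enabled": true,
      "forceSslForward": true,
      "raceStatTtl": "30m",
      "testObjectUrl": "/srto.html",
      "toHostStatus": "INCOMING_HH",
      "type": "PERFORMANCE",
      "enableCustomKey": false,
      "srDownloadLinkTitle": ""
  }
}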


[Image: SureRoute]

Cloud Origin

When working with IaaS providers (e.g., AWS, GCP, Azure), we have to configure our edge servers to expect their dynamic behavior. In practice, this means origin IPs will change frequently, which affects features like SureRoute.

We recommend the following best practices:

SureRoute: By default, the statistics gathered when calculating the best routes are keyed on either the incoming hostname or the origin hostname. For cloud origins, the advanced metadata below instead uses the origin IP resolved at the edge as the stat key.

<forward:cache-parent.sureroute2>
    <force-origin-ip-from-edge>on</force-origin-ip-from-edge>
    <stat-key>
        <host>origin-ip</host>
    </stat-key>
</forward:cache-parent.sureroute2> 

Forward Origin SSL (FOSSL): Configure the origin server behavior with “Third Party Certificate Store” as the certificate verification method and do not pin the leaf certificate. This lets Akamai control which certificates are trusted and keep that trust up to date, without you having to update the configuration every time the IaaS provider rotates its certificate.

[Image: Origin SSL certificate verification settings in the Property Manager UI]

API JSON SNIPPET

{
  "name": "origin",
  "options": {
      "cacheKeyHostname": "REQUEST_HOST_HEADER",
      "compress": true,
      "enableTrueClientIp": false,
      "forwardHostHeader": "REQUEST_HOST_HEADER",
      "httpPort": 80,
      "httpsPort": 443,
      "originSni": true,
      "originType": "CUSTOMER",
       "verificationMode": "THIRD_PARTY",
      "hostname": "roymartinezblanco.github.io",
      "originCertificate": "",
      "ports": ""
  }
}


What Not to Configure

Because we are working with an API, some Akamai features are unnecessary and should be left disabled.

Real User Monitoring

RUM solutions monitor end-user browser performance, which is not relevant for an API. A website might use an API as part of the assets to dynamically pull some of the content shown to users. For this use case, if the API and the website are on the same domain, RUM injection should be disabled for the API endpoint URLs. This is because an API will generally respond with a JSON body rather than an HTML body, and RUM solutions generally insert themselves into most HTML responses, which is unnecessary for an API response.

Legacy RUM

PROPERTY MANAGER UI

[Image: Real User Monitoring behavior in the Property Manager UI]

API JSON SNIPPET

{
  "name": "realUserMonitoring",
  "options": {
      "enabled": false
  }
}


mPulse

PROPERTY MANAGER UI

[Image: mPulse behavior in the Property Manager UI]

API JSON SNIPPET

{
  "name": "mPulse",
  "options": {
      "enabled": false,
      "apiKey": "",
      "bufferSize": ""
  }
}

Adaptive Acceleration

Adaptive Acceleration (A2) is a feature set that automatically and continuously applies performance optimizations to a website, using Akamai’s machine learning to determine the optimizations. With A2, we gain the ability to (i) prioritize which content is visible to the user, (ii) reduce the size of critical cached resources, (iii) manage third-party content, and (iv) discover, identify, and defer unresponsive scripts on your page. This is an excellent feature for web content, but it is not necessary for API endpoints (especially Resource Optimizer).

 

PROPERTY MANAGER UI

[Image: Adaptive Acceleration behavior in the Property Manager UI]

API JSON SNIPPET

{
    "name": "adaptiveAcceleration",
    "options": {
        "source": "mPulse",
        "titleHttp2ServerPush": "",
        "enablePush": false,
        "titlePreconnect": "",
        "enablePreconnect": false,
        "titlePreload": "",
        "preloadEnable": false,
        "titleRo": "",
        "enableRo": false
    }
}

Content Prefetching

Prefetching is the mechanism by which Akamai can read the HTML response from an origin and start fetching embedded content like JavaScript, style sheets, and images before the browser starts to make the request. Since these are the resources required for loading the page in the browser, we try to reduce the round-trip times by having the object ready in the edge cache. When the browser parses HTML and requests JavaScript, style sheets, and images, we serve them without the latency of a round trip to the origin. Thus, by avoiding the processing time for the browser, we are able to load the resources faster. 

This is a key feature for websites but not for APIs, since an API will most likely be (at most) a resource within a site. When prefetching is enabled and the API responds with content type “text/html,” the edge server will needlessly parse the body of the API response.

 

PROPERTY MANAGER UI

[Image: Prefetch Objects behavior in the Property Manager UI]

API JSON SNIPPET

{
    "name": "prefetch",
    "options": {
        "enabled": false
    }
},
{
    "name": "prefetchable",
    "options": {
        "enabled": false
    }
}

Front-End Optimization

Front-End Optimization (FEO) employs several techniques that are similar to Prefetching and Resource Optimizer, in the sense that it analyzes and manipulates the response content. These types of optimizations generally aren’t needed for APIs.

 

PROPERTY MANAGER UI

[Image: Front-End Optimization behavior in the Property Manager UI]

API JSON SNIPPET

{
  "name": "frontEndOptimization",
  "options": {
      "enabled": false
  }
}

Image Optimization

Some API endpoints, like analytics pixels, hide behind image URLs and could be treated by our edge servers as normal images. To avoid this, we need to ensure that these endpoints are excluded from Image Optimization if it is present in the same Property Manager configuration.

If your API is used to serve image content, then enabling Image Optimization is very important. It’s also key to enable it only for the API endpoints that will serve this type of content.
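
One hedged way to scope this in Property Manager is to place the image optimization behaviors in a child rule whose path match includes only (or excludes) the relevant endpoints. The "/api/*" path below is a hypothetical example, and the criterion mirrors the content-type match shown earlier in this article:

"criteria": [
  {
      "name": "path",
      "options": {
          "matchOperator": "DOES_NOT_MATCH_ONE_OF",
          "values": [
              "/api/*"
          ]
      }
  }
]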
 

[Image: Image & Video Manager / Adaptive Image Compression settings]

Roy Martinez

About the Author

Roy Martinez is a photography enthusiast, but in business hours he is an enterprise architect with 10 years of industry experience. He has a strong background in full-stack web development, DevOps, web performance, cloud computing, architecture changes, and advanced edge logic implementations, which allow him to provide consulting and support for customers.