While Akamai's standard caching and retrieval models work well for most of our customers, some content is better served by customized models.
Techniques that can be applied include segregation of content, storing fewer copies of unpopular objects, popularity-based mapping, changing the object refresh frequency, adding fault tolerance to object retrieval, improving throughput in object retrieval, prefetching, and user mapping.
Segregation of content
Large content libraries can be served more efficiently through domain sharding of the user-visible hostname, which you can think of as publishing different objects on different hostnames. The same technique is often employed behind the scenes in a tiered distribution hierarchy: not only is an object served to our edge servers from a single pair of servers within a tiered distribution deployment, but it can also be served from only a single deployment (or a subset of deployments). This allows many large deployments to offload a massive library, or many tiny deployments to offload the large libraries of many different customers. It is easy to find locations for tiny deployments, or to carve them out of existing ones.
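The sharding idea can be sketched as a deterministic mapping from object path to hostname. This is a minimal illustration, not Akamai's implementation; the shard hostnames are hypothetical placeholders.

```python
import hashlib

# Hypothetical shard hostnames; a real deployment would publish a CNAME per shard.
SHARDS = ["media1.example.com", "media2.example.com", "media3.example.com"]

def shard_hostname(object_path: str) -> str:
    """Map an object path to exactly one shard hostname, so each object
    is published on (and cached by) a single subset of servers."""
    digest = hashlib.md5(object_path.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

Because the mapping is a pure function of the path, every request for the same object lands on the same shard, which is what lets each shard's caches hold a disjoint slice of the library.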
Single copies of unpopular objects
It is undesirable to evict an object of medium popularity from cache in order to make room for an object of low popularity. With long-tail content, the most popular objects are very popular, objects of medium popularity are significantly less popular, and there are thousands or millions of unpopular objects. Popular objects sit high in the LRU cache and are not eligible for eviction, so the objects that get evicted are other unpopular objects, or those of medium popularity. By default, two copies of an object are kept to provide redundancy; for long-tail content, we can reduce this to a single copy and free disk space for more unique objects.
Popularity-based mapping
Another popularity-driven technique is to redirect requests for objects of different popularity to different deployments. This serves the most popular content from deployments closer to the users, which improves throughput while still maintaining high offload of origin resources. The least popular content can be served from a more centralized deployment with very large disks and longer last-mile distances, trading throughput for high offload of long-tail content.
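The routing decision can be sketched with a simple hit counter and threshold. The deployment hostnames and the threshold value are assumptions for illustration; a production system would track popularity with decayed counters across many servers.

```python
from collections import Counter

EDGE = "edge-near-user.example.net"      # hypothetical nearby edge deployment
ARCHIVE = "archive-central.example.net"  # hypothetical big-disk central deployment
POPULARITY_THRESHOLD = 100               # assumed cutoff for "popular"

hits = Counter()

def deployment_for(object_key: str) -> str:
    """Route popular objects to a nearby edge deployment (throughput),
    long-tail objects to a centralized large-disk deployment (offload)."""
    hits[object_key] += 1
    return EDGE if hits[object_key] >= POPULARITY_THRESHOLD else ARCHIVE
```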
Object refresh frequency
To further reduce the load on your origin server, the Akamai Intelligent Platform™ may need to check whether an object has changed only once a week, once a month, or even less often. At the other end of the spectrum, some objects are updated frequently on the origin and need to be refreshed in cache every second.
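The underlying mechanism is a per-object freshness lifetime: the edge only revalidates against the origin once the configured lifetime has elapsed. A minimal sketch, with TTL values as illustrative assumptions:

```python
import time

MONTH = 30 * 24 * 3600  # assumed month-long TTL for slowly changing objects

def needs_revalidation(fetched_at: float, ttl_seconds: float, now: float = None) -> bool:
    """Return True once an object's TTL has expired, i.e. when the edge
    should check the origin for a newer version. A month-long TTL means
    at most one origin check per month for that object."""
    now = time.time() if now is None else now
    return now - fetched_at >= ttl_seconds
```

The same function covers both ends of the spectrum: pass `ttl_seconds=1` for content that must be refreshed every second.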
Fault tolerance in object retrieval
Edge servers will retry if they experience an error retrieving an object from the origin. These retries can be customized to use an alternate origin or to choose a different path to the origin.
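The failover behavior can be sketched as trying a list of origins in order. The `fetch` callable and origin names here are hypothetical stand-ins for the real retrieval path.

```python
def fetch_with_failover(object_key, origins, fetch):
    """Try each origin in turn; on an error, fail over to the next one.

    `fetch` is a hypothetical callable(origin, key) that returns the
    object or raises on failure. Only if every origin fails does the
    last error propagate to the caller."""
    last_error = None
    for origin in origins:
        try:
            return fetch(origin, object_key)
        except Exception as err:
            last_error = err
    raise last_error
```

The same shape works for path failover: instead of a list of alternate origins, pass a list of alternate routes to the same origin.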
Improving throughput in object retrieval
The edge server can choose an optimal path to the origin; if it detects that it is not getting sufficient throughput on that path, it can switch to another in real time.
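The switching decision can be sketched as: keep the current path while it meets a throughput floor, otherwise move to the fastest measured alternative. The measurements and the floor value are assumed inputs; real systems would feed this from live transfer statistics.

```python
def best_path(throughput_by_path: dict, current: str,
              floor_bytes_per_s: float) -> str:
    """Stay on the current path while it meets the throughput floor
    (avoiding needless switches); otherwise pick the fastest
    alternative from recent measurements."""
    if throughput_by_path.get(current, 0.0) >= floor_bytes_per_s:
        return current
    return max(throughput_by_path, key=throughput_by_path.get)
```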
Prefetching
By inspecting the object it is delivering, the edge server can make smart decisions about which objects the user is likely to request next, and can proactively download, or prefetch, those objects.
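For an HTML page, "inspecting the object" can mean scanning it for embedded resources the browser will request moments later. A minimal sketch using a naive regular expression (a real parser would handle far more cases):

```python
import re

def prefetch_candidates(html: str) -> list:
    """Scan a delivered HTML page for embedded resources (stylesheets,
    scripts, images) that the user's browser is about to request, so
    the edge can start fetching them from origin proactively."""
    return re.findall(r'(?:src|href)="([^"]+\.(?:css|js|png|jpg))"', html)
```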
User mapping
While choosing the edge server deployment for a request based on the location of the user's DNS server typically yields locations that are indeed near the user, those locations can be far away when the user has chosen a distant DNS server. Each edge server can look at the actual location of the user and decide to redirect the request to another deployment that is closer.
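The redirect decision can be sketched by comparing the resolver-chosen deployment against the deployment nearest the user's own location. The deployment names, coordinates, and flat-plane distance are illustrative assumptions; real mapping uses much richer network measurements than geographic distance.

```python
import math

DEPLOYMENTS = {  # hypothetical deployments with (lat, lon) coordinates
    "edge-east": (40.7, -74.0),
    "edge-west": (37.8, -122.4),
}

def distance(a, b):
    # Flat-plane approximation; sufficient to compare candidates.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def remap(user_location, assigned):
    """If the deployment chosen via the DNS resolver's location is not
    the one nearest the user's actual location, redirect the request."""
    best = min(DEPLOYMENTS, key=lambda name: distance(user_location, DEPLOYMENTS[name]))
    return best if best != assigned else assigned
```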