This is one in a series of posts following up on our announcements from the Developer Zone at Edge 2017. For a complete review of Edge, see our recap page.
Akamai’s new Application Load Balancer—one of our Cloudlet applications that was a big hit in the Developer Zone at Edge 2017—is a cloud-based load balancing solution that leverages both Application Layer (Layer 7) and DNS layer (Layer 3) logic. It gives you granular control to balance your traffic based on HTTP attributes (e.g., cookie value, URL path, query string) using weighted round-robin and performance-based routing algorithms.
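To make the weighted round-robin idea concrete, here is a minimal sketch of the selection logic in Python. The data-center names and weights are illustrative only; in practice the weights live in your Cloudlets policy, not in code you write.

```python
import itertools

def weighted_round_robin(data_centers):
    """Yield data-center names in proportion to their configured weights.

    `data_centers` maps a data-center name to an integer weight, e.g.
    {"us-east": 3, "eu-west": 1} sends roughly 3 of every 4 requests
    to us-east.
    """
    # Expand each data center into `weight` slots, then cycle forever.
    slots = [dc for dc, weight in data_centers.items() for _ in range(weight)]
    return itertools.cycle(slots)

picker = weighted_round_robin({"us-east": 3, "eu-west": 1})
first_four = [next(picker) for _ in range(4)]
# → ["us-east", "us-east", "us-east", "eu-west"]
```

Performance-based routing replaces the fixed weights with live measurements, but the round-robin skeleton is the same.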
But wait, there’s more: Application Load Balancer is platform-agnostic and lets you balance traffic across any combination of data sources, whether cloud-based or on-premises. It provides two layers of failover. The first uses instant retry on a per-request basis: in the event of an error, a user’s request is immediately retried against the next available data center without going back to the client. The second operates at the DNS layer, where Layer 3 health checks continuously verify the liveness of each data center, thereby increasing the reliability of your applications and improving user experience.
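The first failover layer, instant retry, can be sketched as a simple loop over data centers. This is an illustration of the behavior, not Akamai’s implementation; `send` here is a hypothetical stand-in for forwarding a request and getting back an HTTP status and body.

```python
def fetch_with_instant_retry(request, data_centers, send):
    """Try each data center in order; on a 5xx error, retry the same
    request against the next one, without surfacing the error to the
    client unless every data center fails."""
    last_status = None
    for dc in data_centers:
        status, body = send(request, dc)
        if status < 500:
            return status, body, dc
        last_status = status  # origin error: fail over to the next DC
    return last_status, None, None

# Simulated origins: the primary is down, the backup answers.
responses = {"primary": (503, None), "backup": (200, "ok")}
status, body, served_by = fetch_with_instant_retry(
    "GET /", ["primary", "backup"], lambda req, dc: responses[dc])
# → status 200, served from "backup"; the client never sees the 503
```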
As you can see, this is a really powerful tool. Let’s take a look at three key use cases for Application Load Balancer to see the kinds of things it can do for you:
- Implement hybrid cloud architecture by customizing incoming HTTP requests for your data centers
For any hybrid cloud architecture, it’s necessary to route traffic between on-premises data centers and data centers in the cloud. To enable these deployments, Application Load Balancer customizes incoming traffic to meet the needs of specific data centers; for instance, it can change the incoming host header or the URL path on a per-data-center basis. If your application is load-balanced between AWS S3 and on-premises data centers, you can configure Application Load Balancer to rewrite the incoming request to specify an S3 bucket directory path whenever the request is directed to the AWS data center, thus making it easy to balance traffic between any combination of data centers.
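The per-data-center rewrite described above can be sketched as follows. The rule names (`host`, `path_prefix`), the bucket hostname, and the paths are all hypothetical examples; the real overrides are defined in your Application Load Balancer policy.

```python
def rewrite_for_data_center(request, dc_rules):
    """Apply per-data-center overrides to an incoming request.

    `request` is a dict with 'host' and 'path'; `dc_rules` holds an
    optional replacement host and an optional path prefix to prepend.
    """
    out = dict(request)
    if "host" in dc_rules:
        out["host"] = dc_rules["host"]
    if "path_prefix" in dc_rules:
        out["path"] = dc_rules["path_prefix"] + out["path"]
    return out

# On-prem gets the request unchanged; the S3-backed data center gets
# the bucket hostname and a bucket directory prepended to the path.
rules = {
    "on-prem": {},
    "aws-s3": {"host": "example-bucket.s3.amazonaws.com",
               "path_prefix": "/site-assets"},
}
rewritten = rewrite_for_data_center(
    {"host": "www.example.com", "path": "/img/logo.png"}, rules["aws-s3"])
# → host "example-bucket.s3.amazonaws.com", path "/site-assets/img/logo.png"
```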
- Maximize availability with instant retry and automated failover of failed HTTP requests to backup data centers
All load balancing solutions offer liveness detection and failover mechanisms for origin servers. However, there is often a delay between the time an origin server fails and the time incoming traffic is redirected to backup data centers. Typically this delay is on the order of tens of seconds, which can be disastrous, especially during periods of peak traffic.
With Application Load Balancer’s automated failover capability, a request can be configured to immediately retry to backup data centers upon receiving an HTTP error code from the origin server. In this scenario, Application Load Balancer can also be configured to drop session stickiness to the non-responsive origin server and reestablish session stickiness with the servers in a backup data center. This not only maintains business continuity, but also improves user experience.
Application Load Balancer also allows you to specify the subset of data centers that requests should fail over to, which provides additional control. For example, you can spread Application Load Balancer across data centers in both North America and Europe; then, if a European data center goes down, its requests can be failed over only to other data centers within Europe, thus ensuring data sovereignty.
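Restricting failover to a subset of data centers boils down to filtering the candidate list, as in this illustrative sketch (the data-center names and regions are made up for the example):

```python
def failover_candidates(failed_dc, data_centers):
    """Return backup data centers in the same region as the failed one,
    so failover traffic never leaves that region (e.g., to preserve
    data sovereignty). `data_centers` maps name -> region."""
    region = data_centers[failed_dc]
    return [dc for dc, r in data_centers.items()
            if r == region and dc != failed_dc]

dcs = {"fra": "eu", "ams": "eu", "lon": "eu", "nyc": "na", "sfo": "na"}
backups = failover_candidates("fra", dcs)
# → ["ams", "lon"]: the North American data centers are never considered
```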
- Easily perform maintenance of your data centers by routing traffic to a highly available static version of your website
Many website owners create a static version of their site that can be displayed when the original website is down for maintenance. With Application Load Balancer, you can not only serve traffic to this static website during maintenance, but also use this for disaster recovery. For example, in the event that all your data centers are down due to maintenance or outages, your application continues to answer user requests from the static version of your website, thus maximizing the availability of your application.
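Conceptually, the static site acts as a last-resort origin after every data center has been tried. A minimal sketch, again with a hypothetical `send` stand-in for forwarding a request:

```python
def serve(request, data_centers, send, static_site):
    """Try each data center in order; if every one fails, answer from
    a static copy of the site instead of returning an error."""
    for dc in data_centers:
        status, body = send(request, dc)
        if status < 500:
            return status, body
    # All data centers down (maintenance or outage): serve the static copy.
    return 200, static_site.get(request, "maintenance page")

# Simulate a total outage: every data center returns a 5xx error.
responses = {"primary": (503, None), "backup": (503, None)}
status, body = serve("/", ["primary", "backup"],
                     lambda req, dc: responses[dc],
                     {"/": "<html>static home</html>"})
# → the user still gets a 200 with the static page
```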
Now let’s talk about scalability, which is always a hot topic. Application Load Balancer has you covered there, too. Here’s how: most public cloud platforms attempt to autoscale quickly based on the demand for your applications, but in reality autoscaling is not instantaneous, as it takes several minutes to spin up new instances. This delay can be disastrous during times of peak traffic or during a DDoS attack. With Application Load Balancer, you get the immediate scalability needed to meet any traffic demand, planned or unplanned. Thus, whether you’re dealing with Black Friday traffic or an unexpected DDoS attack, you can be assured that your customers will have a superior user experience.
In addition, Application Load Balancer ensures high performance by directing traffic to the best available data center, analyzing Internet traffic conditions in real time to route around congestion points and outages.
Overall, Application Load Balancer allows you to regain reliability and control over the cloud by balancing your traffic over HTTP and DNS layers. This dual-layer load balancer allows you to avoid outages and lock-in, while also providing session stickiness and instant failover. As an agnostic load balancer, Application Load Balancer can balance traffic between on-premises data centers and any cloud service provider.
Ready to take a test drive? Try out Application Load Balancer free for 60 days.