Content Delivery Networks (CDNs) make implementing a global infrastructure relatively quick and easy. The process mostly consists of defining a set of policies that tell the CDN what content can be cached (static) and what content needs to be proxied to the origin servers (dynamic). While you can use network optimizations to speed up the delivery of dynamic content as it traverses the Internet, it may still be several orders of magnitude slower than cached content.
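To make the static/dynamic split concrete, here is a minimal sketch of the kind of policy logic described above. The function name, extension list, and header values are illustrative assumptions, not any particular CDN's configuration syntax:

```javascript
// Hypothetical cache-policy sketch: static assets are served from the
// CDN cache, dynamic responses are proxied to origin uncached.
function cachePolicy(path) {
  // Common static file extensions that are safe to cache at the edge.
  const staticExt = /\.(css|js|png|jpg|svg|woff2)$/;
  if (staticExt.test(path)) {
    return { cache: true, header: "public, max-age=86400" };
  }
  // Everything else is treated as dynamic and forwarded to origin.
  return { cache: false, header: "no-store" };
}

console.log(cachePolicy("/assets/app.js")); // cached static asset
console.log(cachePolicy("/api/cart"));      // dynamic, proxied to origin
```

Real CDN configurations express the same idea declaratively (match rules plus caching behaviors) rather than in code, but the decision being made is the same.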
Here’s where Edge Computing shines: it allows you to authenticate users, perform fast geolocation lookups, deliver personalized content, and handle many other use cases right from the edge. Companies often use the term “edge” in different contexts, so let’s clarify how Edge Computing relates to Cloud Computing.
Cloud Computing is a service offered by cloud platforms (like AWS, Azure, or Google Cloud) that allows developers to run code on dedicated infrastructure hosted in a specific data center location. You usually pay for the infrastructure resources you provision, like the cloud computing servers, plus network traffic and any other resources you consume from the cloud provider. Because this is a centralized model, some users may experience faster service than others depending on how close they are to the cloud provider’s data center. You can provision additional data centers in different locations, but that comes at the expense of having to provision, monitor, and upgrade more infrastructure, plus the additional operating costs.
Wikipedia defines Edge Computing as “a distributed computing paradigm that brings computation and data storage closer to the location where it is needed to improve response times and save bandwidth.” Akamai has been pioneering Edge Computing for a long time, as Ari Weil shared in his blog post 20 Years of Edge Computing.
Edge Computing is a cloud service that allows developers to run code on a shared infrastructure that is highly distributed around the world and managed by a cloud provider. This allows developers to easily deploy code to thousands of geographically distributed edge servers that are a few milliseconds away from their end users — without the extra work of having to manage the underlying infrastructure.
Because Edge Computing runs on shared infrastructure, the applications you create need to be optimized to run within the resource limits that the platform allows. However, because the provider manages and monitors the infrastructure, developers can release code that runs on thousands of servers around the world in a matter of minutes, which is a game-changing capability.
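As a concrete illustration of the geolocation use case mentioned earlier, here is a sketch of personalization logic you might run in an edge function. This is a hypothetical standalone helper, not the actual EdgeWorkers API; it assumes the platform supplies a location object (with a `country` field) alongside each request, which is the general pattern edge platforms follow:

```javascript
// Hypothetical edge personalization sketch: choose a localized greeting
// based on a geolocation lookup the edge platform provides per request.
function localizedGreeting(userLocation) {
  const greetings = { DE: "Hallo!", ES: "¡Hola!", FR: "Bonjour!" };
  // Fall back to English when the country is unknown or unmapped.
  return greetings[userLocation.country] || "Hello!";
}

console.log(localizedGreeting({ country: "ES" })); // "¡Hola!"
console.log(localizedGreeting({}));                // "Hello!"
```

Because this decision runs at the edge server closest to the user, the personalized response never has to travel to the origin and back.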
Check out our resources to discover what you can do with EdgeWorkers and how to quickly get started.
About the author
Javier Garza is a developer evangelist at Akamai Technologies, where he helps the largest companies on the internet run fast and secure apps by leveraging web performance, security, and DevOps best practices. Javier has written many articles on HTTP/2 and web performance, and is the co-author of the O’Reilly book “Learning HTTP/2”. In 2018, Javier spoke at more than 20 developer-related events around the world, including well-known conferences like Velocity, AWS re:Invent, and PerfMatters. His life’s motto is: share what you learn, and learn what you don’t. In his free time, he enjoys challenging workouts and volunteering for non-profits focused on children and education.