
Get Started with New Relic GTM Integration

GTM New Relic Feedback Loop (diagram)

You can now integrate Akamai’s Global Traffic Management (GTM) service with New Relic to view metrics like CPU utilization and concurrent connections. 

These metrics help you better inform your traffic routing logic and create alerts. You can also use the Global Traffic Management (GTM) APIs to customize automated actions based on your metrics and traffic limits.

You can also implement webhooks to trigger those API calls. As the trigger condition subsides, you can redirect traffic back to steady data centers. You can see the feedback loop in the diagram above.

Set Up New Relic GTM Integration

Step 1: Configure Application Performance Monitoring in New Relic

  • Configure Application Performance Monitoring (APM) in New Relic to track application performance. A good performance metric example is requests per minute (RPM).
  • Use the New Relic APM documentation to get started and configure agents that monitor each application host and make that data available to the Global Traffic Manager (GTM); a minimal agent setup is sketched below.
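
As a rough sketch only, the following shows one way to instrument a Python WSGI application host with the New Relic agent. It assumes the newrelic package is installed (pip install newrelic) and that a newrelic.ini config file, containing your license key and application name, was generated with newrelic-admin generate-config; adapt it to your framework of choice.

# Minimal sketch: instrumenting a Python WSGI application host with the New Relic agent.
# Assumes `pip install newrelic` and a newrelic.ini generated with `newrelic-admin generate-config`.
import newrelic.agent

newrelic.agent.initialize('newrelic.ini')  # loads the license key and app name from the config file

@newrelic.agent.wsgi_application()
def application(environ, start_response):
    # Every request handled here is reported to APM as a web transaction and
    # rolls up into throughput metrics such as requests per minute (RPM).
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'OK']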

Step 2: Use New Relic APIs to Pull Load Feedback Data

  • Once your New Relic configuration is complete, use New Relic APIs to pull load feedback data.
  • The New Relic API Explorer is a convenient tool to observe API transactions and simplify implementation into other tools and scripts, like a Python script.
  • Use the Application Metric Data Resource as a starting point to summarize requests per minute (RPM) across an application that has many hosts; see the sketch below.
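
For example, the following sketch pulls summarized RPM for one application from the REST API v2 Application Metric Data endpoint. The API key, application ID, and metric names shown are placeholder assumptions; confirm them in the API Explorer before using them in a script.

# Sketch: pull summarized requests per minute (RPM) for one application from New Relic.
# The API key and application ID are placeholders; verify metric and field names in the API Explorer.
import requests

NEW_RELIC_API_KEY = 'your-new-relic-rest-api-key'   # placeholder
APPLICATION_ID = 12345678                           # placeholder APM application ID

def get_requests_per_minute(application_id=APPLICATION_ID):
    resp = requests.get(
        'https://api.newrelic.com/v2/applications/%d/metrics/data.json' % application_id,
        headers={'X-Api-Key': NEW_RELIC_API_KEY},
        params={'names[]': 'HttpDispatcher',
                'values[]': 'requests_per_minute',
                'summarize': 'true'},
    )
    resp.raise_for_status()
    timeslices = resp.json()['metric_data']['metrics'][0]['timeslices']
    return timeslices[0]['values']['requests_per_minute']

print(get_requests_per_minute())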

Step 3: Use Akamai Developer APIs to Post Load Feedback Data

  • Use the GTM Load Feedback API to post load conditions to GTM data centers and servers.
  • Load Feedback Resources allow a client application, such as a Python script, to pull application metric data from New Relic and post load feedback data to GTM.

The post to GTM will have a simple payload like this:

{
    "domain": "example.akadns.net",
    "datacenterId": 100,
    "resource": "requests",
    "timestamp": "2019-07-22T19:38:53.188Z",
    "current-load": 20,
    "target-load": 25,
    "max-load": 30
}

Learn more about GTM load feedback objects.

Note: A client application can simply update the current-load field with a New Relic metric, such as the RPM throughput value, on a recurring schedule, for example once a minute.
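
As a rough sketch (not a definitive implementation), a Python client could build that payload and post it with EdgeGrid-authenticated requests. The endpoint path, .edgerc section, and target/max values below are placeholders; take the real resource path from the GTM Load Feedback API reference and the credentials from your Akamai Developer .edgerc file.

# Sketch: post a load feedback object to GTM with EdgeGrid-authenticated requests.
# Requires `pip install requests edgegrid-python`. The endpoint path below is a
# placeholder; use the path documented in the GTM Load Feedback API reference.
import os
from datetime import datetime, timezone

import requests
from akamai.edgegrid import EdgeGridAuth, EdgeRc

edgerc = EdgeRc(os.path.expanduser('~/.edgerc'))
section = 'default'
baseurl = 'https://%s' % edgerc.get(section, 'host')

session = requests.Session()
session.auth = EdgeGridAuth.from_edgerc(edgerc, section)

def post_load_feedback(datacenter_id, current_load):
    payload = {
        'domain': 'example.akadns.net',
        'datacenterId': datacenter_id,
        'resource': 'requests',
        'timestamp': datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%S.000Z'),
        'current-load': current_load,    # refreshed from a New Relic metric such as RPM
        'target-load': 25,               # placeholder target
        'max-load': 30,                  # placeholder capacity
    }
    # Placeholder path: substitute the real Load Feedback API resource path here.
    resp = session.post(baseurl + '/gtm-load-feedback/v1/...', json=payload)
    resp.raise_for_status()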

Additional Global Traffic Manager Integration Applications

Load Feedback for Performance

For Performance with Load Feedback, Global Traffic Manager (GTM) continually monitors data center conditions and adjusts traffic based on those conditions and your settings.

GTM can monitor conditions by reading a load feedback object that contains current, target, and capacity metrics for a data center. A process for each data center can collect metrics using New Relic APIs and write the results into that load feedback object. Alternatively, the process can push the results to GTM via the Load Feedback API. GTM uses this load feedback to determine where best to direct traffic. The routing decision can also consider load, performance, availability, and real user monitoring (RUM) data.
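
Putting the two earlier sketches together, such a per-data-center process might simply refresh each load feedback object on a schedule. The mapping of data center IDs to New Relic application IDs below is hypothetical, and get_requests_per_minute and post_load_feedback are the helper functions sketched in Steps 2 and 3 above.

# Sketch: a recurring process that refreshes GTM load feedback for each data center.
# Reuses the get_requests_per_minute and post_load_feedback helpers sketched above;
# the mapping of GTM data center IDs to New Relic application IDs is hypothetical.
import time

DATA_CENTERS = {
    100: 11111111,   # e.g., New York data center -> its New Relic application ID
    200: 22222222,   # e.g., London data center -> its New Relic application ID
}

while True:
    for datacenter_id, application_id in DATA_CENTERS.items():
        rpm = get_requests_per_minute(application_id)   # current load from New Relic
        post_load_feedback(datacenter_id, rpm)          # push into the load feedback object
    time.sleep(60)                                      # refresh once per minute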

Load Feedback for a Multi-provider Workflow

Load Feedback can enable a multi-provider workflow that uses a default provider during steady state and additional providers during times of need. The following diagram shows GTM managing traffic across multiple providers by using load feedback from a New York and a London data center. Load feedback from each data center includes current, target, and capacity metrics. When the New York or London data center exceeds its capacity target, GTM might then direct some or all new traffic to alternative service operators.

Load feedback multi-provider workflow (diagram)
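
As a simplified illustration only (not GTM's actual algorithm), the routing decision driven by this load feedback can be pictured like this; the data center IDs are hypothetical.

# Simplified illustration of a multi-provider routing decision driven by load feedback.
# Not GTM's actual algorithm; data center IDs are hypothetical.
def eligible_data_centers(feedback, default_dcs, alternate_dcs):
    """Return the data centers eligible for new traffic."""
    over_target = [dc for dc in default_dcs
                   if feedback[dc]['current-load'] > feedback[dc]['target-load']]
    if not over_target:
        return default_dcs                    # steady state: default providers only
    return default_dcs + alternate_dcs        # over target: spill to alternate providers

feedback = {
    100: {'current-load': 28, 'target-load': 25, 'max-load': 30},   # New York, over target
    200: {'current-load': 18, 'target-load': 25, 'max-load': 30},   # London, under target
}
print(eligible_data_centers(feedback, [100, 200], [300]))   # -> [100, 200, 300]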

 

Liveness Testing

New Relic data can also contribute to GTM’s workflow for determining server liveness and filtering out overloaded servers. A custom GTM liveness test looks for a particular response from each server (e.g., “Up”) and includes only the servers that return it. A New Relic driven process can set that response to “Up” or “Down” for each server based on its current metrics, so the custom liveness test excludes servers that don’t report “Up”. You can learn more about liveness tests on our community forum.
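
For example, a small New Relic driven process could expose a status page on each server for the custom liveness test to poll. The port and RPM threshold below are hypothetical, and current_rpm() is a stand-in for the latest per-host metric pulled from New Relic.

# Sketch: a per-server status endpoint for a custom GTM liveness test that looks for "Up".
# The port and RPM threshold are hypothetical; current_rpm() stands in for the latest
# per-host metric pulled from New Relic.
from http.server import BaseHTTPRequestHandler, HTTPServer

MAX_HEALTHY_RPM = 1000   # hypothetical per-server throughput ceiling

def current_rpm():
    return 250           # placeholder: refresh this from New Relic per-host metric data

class LivenessHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'Up' if current_rpm() < MAX_HEALTHY_RPM else b'Down'
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('', 8080), LivenessHandler).serve_forever()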

Another Sample Integration - Webhooks and AWS Lambda

You can use a variety of automation and orchestration solutions to monitor KPIs and update routing logic. An alert might track KPIs (e.g., concurrent connections per second, network utilization, CPU utilization) and trigger a call to a webhook to mitigate the alert conditions. The webhook can then call an API endpoint that invokes a serverless function to perform a routing update using the Akamai GTM API.

The following diagrams illustrate these workflows using an AWS API Gateway and Lambda function.

Sample integration workflow (diagram 1 of 2)

 

Sample integration workflow (diagram 2 of 2)

The Lambda script can handle Akamai authentication and authorization, manipulate JSON to address the alert’s event, and PUT an appropriate transaction to GTM. The function might be written in Ruby with Akamai’s Ruby Edgegrid library. In the illustration above, the function removes a target from GTM’s logic.

An Akamai Edgegrid function ties into Lambda like this:

Akamai Edgegrid function in AWS Lambda (screenshot)

Akamai Lambda Edgegrid Ruby Code that Creates a New GTM Target

A Lambda Akamai Edgegrid function might perform many different operations. The following example adds a new GTM target when additional traffic requires more capacity for a location. A process can monitor for this need and trigger a call to GTM via Lambda, directing GTM to include a new target in its algorithms.

The following Ruby script shows how to do so with a few lines of code:

require 'json'
require 'akamai/edgegrid'
require 'net/http'
require 'uri'
require 'aws-sdk-ec2'  # v2: require 'aws-sdk'

def wait_for_instances(ec2, state, ids)
  begin
    ec2.wait_until(state, instance_ids: ids)
    puts "Success: #{state}."
  rescue Aws::Waiters::Errors::WaiterFailed => error
    puts "Failed: #{error.message}"
  end
end

def lambda_handler(event:, context:)
    
    # Set the base URI
    baseuri = URI('https://host-from-akamai-developer')
    http = Akamai::Edgegrid::HTTP.new(
        address=baseuri.host,
        port=baseuri.port
    )

    # Set the API client
    http.setup_edgegrid(
        :client_token => 'client-token-from-akamai-developer',
        :client_secret => 'secret-from-akamai-developer',
        :access_token => 'access-token-from-akamai-developer',
        :max_body => 128 * 1024
    )
    
    # Get the current GTM configuration
    request = Net::HTTP::Get.new URI.join(baseuri.to_s, '/config-gtm/v1/domains/paveldespot.net.akadns.net/properties/edge2019').to_s
    response = http.request(request)
    gtmJson = response.body

    # Create a new VM instance
    ec2 = Aws::EC2::Client.new(region: 'us-east-1')
    ec2resource = Aws::EC2::Resource.new(region: 'us-east-1')
    ec2.start_instances({ instance_ids: ["i-00af10369a79dc6cc"] })
    wait_for_instances(ec2, :instance_running, ["i-00af10369a79dc6cc"])
    i = ec2resource.instance('i-00af10369a79dc6cc')

    # Add the new VM to GTM’s logic
    post_request = Net::HTTP::Put.new(
        URI.join(baseuri.to_s, "/config-gtm/v1/domains/paveldespot.net.akadns.net/properties/edge2019").to_s,
        initheader = { 'Content-Type' => 'application/json' }
    )
    newGtmHash = JSON.parse gtmJson
    newGtmHash['trafficTargets'][0]['servers'][1] = i.public_ip_address.to_s
    newGtmJson = newGtmHash.to_json
    post_request.body = newGtmJson    
    puts newGtmJson.to_s
    post_response = http.request(post_request)

    # Share happiness with the calling process
    functionResponse = { "isBase64Encoded" => false, "statusCode" => 200 }
    return functionResponse

end
