
CI/CD Leveraging the Akamai Pipeline with AWS 

November 5, 2020 · by Roy Martinez

Every day we see more customers migrating to cloud infrastructure, which means companies increasingly depend on services to deploy code and manage infrastructure as code. Cloud providers like AWS offer services for exactly that. Keeping this functionality within the toolset their DevOps engineers already use removes the time and effort of adopting new tools and, therefore, shortens time to market.

This blog talks about how we can leverage AWS CloudFormation and AWS CodeBuild (among other services) to deploy and manage Akamai as code.

In the blog, I’ll provide examples of how the Akamai-related components were built, but each component can be changed to fit business needs or reused in other solutions.

[Workflow diagram]

AWS CodeBuild can run a build job using Akamai Docker images to create the necessary JSON rule sets. The artifacts are stored in AWS S3, ensuring that they are highly available and properly versioned (any Git repository can be used instead).

Once the Property Manager JSON rule sets are created, they are consumed by AWS CloudFormation. CloudFormation passes down the property name, and an AWS Lambda function acting as a custom resource fetches the JSON rules stored in S3 to deploy Akamai infrastructure seamlessly. (Note: there is no need to store the merged file; it can be consumed at execution time only.)

This is an example of how to automate creation and updates to Akamai configuration. Many parts of this solution can be replaced with other services provided by AWS or other cloud services.


Implementation

In this example, we will store Akamai Property Manager’s JSON rules in S3 with versioning configured. Versioning is a means of keeping multiple variants of an object in the same bucket: you can use it to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket (you can also use other solutions, like GitHub).
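If the bucket itself is managed as code, versioning can be enabled in the same CloudFormation template used later in this post. A minimal sketch (the logical name ArtifactBucket is illustrative):

```yaml
ArtifactBucket:
  Type: AWS::S3::Bucket
  Properties:
    VersioningConfiguration:
      Status: Enabled
```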

Prerequisites:

The artifacts used in this project first have to be generated with the Akamai CLI property-manager import function and uploaded to AWS S3.

> akamai property-manager import -p propertyname 


This not only downloads a local copy of the JSON rules from Property Manager but also decomposes the rules into individual JSON snippets that can be managed independently by multiple teams.
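Assuming a property named propertyname, the import typically produces a project directory along these lines (the exact layout may vary by CLI version; the dist file is where the merged output from a later step will land):

```
propertyname/
├── projectInfo.json
├── config-snippets/
│   ├── main.json
│   ├── default.json
│   └── ... (one snippet per rule)
└── dist/
    └── propertyname.papi.json   (created by the merge step)
```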

Once we have the output project directory and the individual artifacts we will store them in an S3 bucket with versioning enabled.

AWS CodeBuild

Once our artifacts are stored in S3, we can rely on CodeCommit (or any other source, like GitHub) to trigger a CodeBuild job on each repository update.

Akamai CLI

AWS CodeBuild pulls in the Akamai CLI property-manager module to merge the configuration snippets into a new JSON artifact version. The following command merges all snippets into a single file under the dist directory, which is then saved to our S3 bucket for later use.

> akamai property-manager merge -p propertyname

To make this process as streamlined as possible, we will use the Akamai Property Manager CLI Docker image:

akamai/property-manager:latest

This image is very lightweight, which gives our build job a quick provisioning phase. To learn more about the available Docker images, please visit https://github.com/akamai/akamai-docker.

Source Configuration

Artifact Configuration


To learn more about how to create a CodeBuild project, please see the AWS Documentation.

Build Spec

See the AWS documentation on how to use Docker images in CodeBuild.

version: 0.2

phases:
  build:
    commands:
      - touch /root/.edgerc
      - akamai property-manager merge -p $PROPERTY

artifacts:
  files:
    - '**/*.papi.json'

As seen in the buildspec, we use an environment variable for the property, making this job suitable for any property and saving its output in its own directory within S3 (the output JSON is stored in the dist directory).
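The PROPERTY variable can be set per project in the CodeBuild console, or pinned directly in the buildspec itself; for example (the value is illustrative):

```yaml
env:
  variables:
    PROPERTY: propertyname
```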

Note: this same approach can be used to manage multi-environment configurations with the Akamai CLI pipeline module:

> akamai pipeline new-pipeline -p [pipeline name] -e [template property] --variable-mode user-var-value [env1 env2]

Please see our CLI Pipeline white paper for more on how to work with this module:

https://developer.akamai.com/resource/whitepaper/akamai-pipeline-cli-framework-runbook/direct

AWS CloudFormation

AWS CloudFormation provides a common language to model and provision AWS and third-party application resources. In this case, CloudFormation manages a custom resource that invokes a Lambda function, passing down the property name so the function can update the Akamai property.

Example Template:

AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31
Description: Provisions an AWS Custom Resource to deploy Akamai configurations.

Parameters:
  ArtifactBucket:
    Description: The name of the artifacts bucket
    Type: String
  ArtifactFolder:
    Description: The folder containing the build artifacts
    Type: String
  EnvironmentName:
    Description: The environment name
    Type: String
  RevisionID:
    Description: The revision ID
    Type: String
    Default: ""
  VpcID:
    Type: AWS::EC2::VPC::Id
    Description: The VPC within your account.

Resources:
  LambdaSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: !Sub akamai-lambda-${EnvironmentName}
      VpcId: !Ref VpcID
      Tags:
        - Key: Name
          Value: !Sub akamai-lambda-${EnvironmentName}-sg
  DeployerFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: !Sub akamai-config-deployer-${EnvironmentName}
      Runtime: python3.7
      Handler: akamai_config_deployer.handler.handler
      CodeUri:
        Bucket: !Ref ArtifactBucket
        Key: !Sub ${ArtifactFolder}/main.zip
      Description: "Lambda function for activating akamai configs found in the artifact bucket."
      Timeout: 300
      MemorySize: 512
      VpcConfig:
        SecurityGroupIds:
          - !Ref LambdaSecurityGroup
        SubnetIds: !Split
          - ","
          - Fn::ImportValue: !Sub ${VpcID}:private-subnet:ids
      Environment:
        Variables:
          ENVIRONMENT_NAME: !Ref EnvironmentName
          REVISION: !Ref RevisionID
      Policies:
        - AWSLambdaVPCAccessExecutionRole
        - Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - s3:ListBucket
              Resource:
                - !Sub arn:aws:s3:::artifacts-${AWS::AccountId}-${AWS::Region}
        - Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - s3:GetObject
              Resource:
                - !Sub arn:aws:s3:::artifacts-${AWS::AccountId}-${AWS::Region}/*
        - Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - secretsmanager:GetSecretValue
              Resource:
                - !Sub arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:akamai/*
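The template above provisions the deployer function, but it does not show the custom resource that actually invokes it. Here is a hedged sketch of what that resource could look like: the Custom::AkamaiDeployment type name and the ProvisioningParameters keys simply mirror what the handler code in the next section reads, and the property id, object key, and email address are placeholders, not values from the original solution.

```yaml
AkamaiDeployment:
  Type: Custom::AkamaiDeployment
  Properties:
    ServiceToken: !GetAtt DeployerFunction.Arn
    ProvisioningParameters:
      Location:
        Bucket: !Ref ArtifactBucket
        Key: !Sub ${ArtifactFolder}/dist/propertyname.papi.json
      SecretId: !Sub akamai/${EnvironmentName}
      PropertyId: prp_12345
      ActivationNetwork: STAGING
      Revision: !Ref RevisionID
      EmailsOnActivation:
        - devops@example.com
```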

AWS Lambda

AWS Lambda lets you run code without provisioning or managing servers. Here, Lambda implements the custom resource defined in our CloudFormation template. The function, through its AkamaiClient, pulls down the JSON config from S3 (created in the CodeBuild section) and uses the Akamai Property Manager API to create a new version based on the latest rules.

Let's take a look at each of the files used for our lambda function:

handler.py

This module defines the handler function (the entry point) of an AWS Lambda function that orchestrates the activation of Akamai configurations on a specified property. The function is set up to accept input from, and be invoked by, AWS CloudFormation as a custom resource. For more information and examples, please refer to the AWS documentation on Lambda custom resource creation in Python.
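The custom-resource plumbing itself is not shown in this post. A minimal sketch of what the entry point might look like follows; the helper names and the Delete-handling behavior are assumptions of this sketch, not the author's exact code, but any Lambda-backed custom resource must PUT a response body of this shape to the pre-signed URL CloudFormation supplies, or the stack operation will hang until timeout.

```python
import json
import logging
import urllib.request

LOGGER = logging.getLogger(__name__)


def build_response(event, status, reason=""):
    # The response body CloudFormation expects back from a custom resource.
    return {
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": reason,
        "PhysicalResourceId": event.get("PhysicalResourceId", "akamai-config-deployment"),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    }


def send_response(event, body):
    # CloudFormation hands the function a pre-signed S3 URL; PUT the result back to it.
    req = urllib.request.Request(
        event["ResponseURL"], data=json.dumps(body).encode("utf-8"), method="PUT"
    )
    urllib.request.urlopen(req)


def handler(event, context):
    try:
        if event["RequestType"] in ("Create", "Update"):
            handle_config_publish(event, context)  # the function shown below
        # Deletes are acknowledged without touching the Akamai property.
        send_response(event, build_response(event, "SUCCESS"))
    except Exception as exc:
        # Report failures so the stack does not hang waiting for a response.
        LOGGER.exception("Akamai config deployment failed")
        send_response(event, build_response(event, "FAILED", reason=str(exc)))
```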

The main attraction of this module is handle_config_publish. Using our aws.py module, it fetches the stored Property Manager JSON rules (created by CodeBuild), creates a new version of the property, and activates it on the intended Akamai network.

def handle_config_publish(event, context):
    # Retrieve the rules that we want to activate in a new version by pulling from s3
    resource_arguments = event["ResourceProperties"]["ProvisioningParameters"]
    location = resource_arguments["Location"]
    new_rules = get_json_from_s3(bucket=location["Bucket"], key=location["Key"])
    LOGGER.info("Got Rules From S3")

    edge_config_secret_arn = resource_arguments["SecretId"]
    akamai_client = build_akamai_client(edge_config_secret_arn)
    LOGGER.info("Built Akamai Client")

    # Create a new version based on the most current
    target_property_id = resource_arguments["PropertyId"]
    current_version_id = akamai_client.get_current_config_version(target_property_id)
    new_version_id = akamai_client.create_new_config_version(
        target_property_id, current_version_id
    )
    LOGGER.info("Created New Version Based On Latest")

    # Update the rules from what we have in s3 and activate it on the target network.
    akamai_client.update_config_rules(target_property_id, new_version_id, new_rules)
    akamai_client.activate_config_version(
        target_property_id,
        new_version_id,
        network_name=resource_arguments["ActivationNetwork"],
        revision=resource_arguments["Revision"],
        activation_emails=resource_arguments.get("EmailsOnActivation", []),
    )
    LOGGER.info("Activated A New Config Version")

aws.py

This module defines higher-order operations that use AWS S3 and Secrets Manager to perform operations required by Akamai Configuration Deployment.

import json
import logging

import boto3

LOGGER = logging.getLogger(__name__)


def get_json_from_s3(bucket, key):
    """ Parses a file stored in s3 as a json document and returns it.
    Args:
        bucket: The name of the bucket in the account that stores the asset.
        key: the s3 object key that stores the asset.
    Returns:
        A parsed representation of the json document using python builtins (dict, list, etc.)
    """
    LOGGER.info(f"Getting Object: {key} in bucket: {bucket}")
    s3 = boto3.resource("s3")
    obj = s3.Object(bucket, key)
    return json.loads(obj.get()["Body"].read().decode("utf-8"))


def get_json_secret(key):
    """ Gets the secret stored in secrets manager as a json document.
    Args:
        key: the secret id in secrets manager of the document to fetch.
    Returns:
        A parsed representation of the json document using python builtins (dict, list, etc.)
    """
    LOGGER.info(f"Getting Secret with Id: {key}")
    secrets = boto3.client("secretsmanager")
    secret = secrets.get_secret_value(SecretId=key)
    return json.loads(secret["SecretString"])

akamai_client.py

This is the main part of the script that talks to Akamai’s API, leveraging our EdgeGrid library to handle authentication with the credentials stored in AWS Secrets Manager (passed by CloudFormation).
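build_akamai_client is assumed here to read an edgerc-style credential out of Secrets Manager (via get_json_secret) and wire it into the EdgeGrid library. The secret would then carry the same four fields as a ~/.edgerc section; the field names and values below are an assumption of this sketch:

```json
{
  "host": "akab-xxxxxxxx.luna.akamaiapis.net",
  "client_token": "akab-xxxxxxxx",
  "client_secret": "xxxxxxxx",
  "access_token": "akab-xxxxxxxx"
}
```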

Steps:

  1. Create a new version of the property: calling the following PAPI endpoint creates a new version of the configuration.
    /papi/v1/properties/{propertyId}/versions{?contractId,groupId}

This is accomplished by the function create_new_config_version.

Args:

  • property_id: The property id from akamai.

  • based_on: The numeric id of the version the new config is based on (from get_current_config_version).

Returns:

  • The numerical identifier for the new config version on that property.

    def create_new_config_version(self, property_id, based_on) -> int:
        LOGGER.info(
            f"Creating New Config Version for: {property_id} based on version: {based_on}"
        )
        api_path = f"/papi/v1/properties/{property_id}/versions"
        version_link = self._post_api_with_path(
            api_path, body={"createFromVersion": based_on}
        )["versionLink"]
        return _get_version_number_from_link(version_link)

  2. Update config rules: calling the following PAPI endpoint updates the rules on the specified version of the configuration.

/papi/v1/properties/{propertyId}/versions/{propertyVersion}/rules{?contractId,groupId,validateRules,validateMode,dryRun}

Args:

  • property_id: The property id from akamai.

  • config_version: The numeric id of the version to update.

  • rules: A dict defining the JSON rules to update to.

Returns:

  • A dictionary containing information about the updated version of the config rules.

    def update_config_rules(self, property_id, config_version, rules) -> dict:
        LOGGER.info(f"Updating Config Rules for {property_id} version {config_version}")
        api_path = f"/papi/v1/properties/{property_id}/versions/{config_version}/rules"
        return self._put_api_with_path(api_path, body=rules)

  3. Activate the configuration version: calling the following PAPI endpoint activates the newly created version of the configuration.

/papi/v1/properties/{propertyId}/activations{?contractId,groupId}

Args:

  • property_id: The property id from akamai.

  • config_version: The numeric id of the version to update.

  • network_name: "STAGING" or "PRODUCTION"; the name of the network to activate on.

  • activation_emails (optional): A list of email addresses to notify on activation.

  • revision: A message to add to the activation note.

Returns:

  • The activation link (a string) returned by PAPI for the new activation.

    def activate_config_version(
        self, property_id, config_version, network_name, revision, activation_emails=None
    ) -> str:
        LOGGER.info(f"Activating Config for {property_id} version {config_version}")
        api_path = f"/papi/v1/properties/{property_id}/activations"
        return self._post_api_with_path(
            api_path,
            body={
                "propertyVersion": config_version,
                "network": network_name,
                "note": f"Activated through akamai config deployer cloudformation. Revision: {revision}",
                "useFastFallback": network_name == "PRODUCTION",
                # Avoid a mutable default argument; fall back to an empty list here.
                "notifyEmails": activation_emails or [],
                "acknowledgeAllWarnings": True,
            },
        )["activationLink"]


About the author

Roy Martinez

 

 

Roy Martinez is a photography enthusiast, but during business hours he is an enterprise architect with 10 years of industry experience. He has a strong background in full-stack web development, DevOps, web performance, cloud computing, architecture changes, and advanced edge logic implementations, which allows him to provide consulting and support for customers.