Harness and CloudFront (UPDATED IN HARNESS EXPERTS)

UPDATE
Going forward, please refer to this post instead: Harness and Cloudfront - Custom Deployer
UPDATE

Hey all!

I’ve seen many people leverage the amazing edge capabilities of AWS to reach users all over the globe without needing a large IT team or server footprint to manage. The inevitable question that comes with this opportunity is: how do we use Harness to deploy a Cloudfront application?

Essentially, since Cloudfront serves static website files from an S3 bucket, automating that upload once the CI process finishes building the ZIP file makes the whole process quick and easy. By following the steps below, you will be able to quickly scale your Cloudfront deployments and trigger them whenever a new version of the ZIP artifact is uploaded.

Initial Setup

Starting at your Harness Dashboard, go to Setup in the bottom left:

Select Harness Delegates in the bottom right:

Create a Delegate Profile:

with the following script:

apt-get update
apt-get install -y python
# unzip is needed to extract the AWS CLI installer (and, later, the build artifact)
apt-get install -y zip unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install
aws --version

Once that is done, assign the Delegate Profile to a Delegate of choice:

This will create an Implicit Selector for you to use later:

Next, make sure that there is an AWS Cloud Provider in Harness that is associated with a Delegate that has the correct S3 permissions assigned to it.

The last piece of setup you’ll need to do before we build the workflow is to add the AWS programmatic access keys to the Harness Secrets Manager:


Unique Harness Requirements

The next part is a bit odd, so let me explain why it has to be done before you start building it out. Harness only recognizes an S3 bucket as a source for files and artifacts, not as a destination. Therefore, we will need to add some dummy items in Harness so that we can still use its built-in variables.

The first thing to do is add a dummy Cloud Provider:

Next, we need to add a dummy SSH key in Security > Secrets Manager > SSH > + Add SSH Key


The last dummy item that we will need to add is an Infrastructure Definition. To do this, go to the desired Application (or create one) then to a desired Environment (or create one) and create a new Infrastructure Definition called cloudfront. The form should have the following items selected:

Cloud Provider Type: Physical Data Center
Deployment Type: Secure Shell (SSH)
Use Already Provisioned Infrastructure: True
Cloud Provider: Physical Data Center: physical_data_center
Host Name(s): localhost
Host Connection Attributes: cloudfront_key

Harness Deployment Setup

Now that the security requirements and foundational items are set up correctly, we can move on to the deployment setup.

The first step is to create a Service that allows us to leverage the ZIP artifact for both built-in variables and also for Triggers.

You will need to create an SSH Service with ZIP as the artifact type:

Then, when you are inside the Service, delete everything in the Deployment Specification

Click + Add Command on the top and add a new command called cloudfront with Command Type Other

Then add an Exec step

with the command echo "${artifact.buildNo}" in it, and click Submit

The last part of setting up the Service is to add the Artifact Source. Click + Add Artifact Source

Then select the desired artifact repository where the ZIP file is (in this case, it is Amazon S3) and fill in the form with the correct information

The last piece to create is a Multi-Service Workflow that points to the dummy environment we created above

Next, click + Add Phase in the Deployment Phases section, then select the Cloudfront Service and the dummy Infrastructure Definition that we created in previous steps

Inside the phase, you will need to add the cloudfront Service Command to the Deploy section


Next, click + Add Step in the Enable Service section and add a Shell Script step with the following script in it:

export AWS_ACCESS_KEY_ID=${secrets.getValue("aws_access_key")}
export AWS_SECRET_ACCESS_KEY=${secrets.getValue("aws_secret_key")}

echo "making directory for zip download and cd to that folder"
mkdir ./cloudfront && cd ./cloudfront && mkdir ./"${service.name}"

echo "download the artifact"
aws s3 cp "s3://${artifact.bucketName}/${artifact.fileName}" ./"${service.name}"

echo "unzip the artifact"
unzip ./"${service.name}"/"${artifact.fileName}" -d ./"${service.name}"
cd "${service.name}"

echo "sync up to AWS S3, excluding the zip itself"
aws s3 sync . "${workflow.variables.destination}" --exclude "${artifact.fileName}"

Also make sure that you add the Delegate Selector created in the Delegate setup steps at the beginning of this post.
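If you want to see how the paths in that script line up before wiring it into Harness, here is a local dry run of the same directory layout. All names are hypothetical stand-ins for the Harness expressions, and touch stands in for the aws s3 cp download:

```shell
#!/bin/sh
# Local dry run of the Shell Script step's directory layout.
# serviceName/fileName are hypothetical stand-ins for ${service.name}
# and ${artifact.fileName}; touch stands in for "aws s3 cp".
set -e
serviceName="my-cloudfront-service"
fileName="site-1.0.0.zip"

workdir=$(mktemp -d)
cd "$workdir"

# Same layout as the real script
mkdir ./cloudfront && cd ./cloudfront && mkdir ./"$serviceName"

# "Download" the artifact into the per-service directory
touch ./"$serviceName"/"$fileName"

# The archive therefore lives at ./<service>/<file>; that full path is
# what unzip needs to be pointed at before cd-ing in to sync.
ls ./"$serviceName"/"$fileName"
```

Note that the downloaded zip sits inside the same directory tree that gets synced, which is why it is worth excluding it from the final aws s3 sync.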

The last thing to do is go back to the Workflow and add a Workflow Variable called destination, set to the S3 bucket path that the Cloudfront files should be uploaded to.
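The value of destination is passed straight to aws s3 sync, so it should be a full S3 URI. A quick illustration, with a hypothetical bucket name:

```shell
# Hypothetical value for the "destination" Workflow Variable; replace
# with your real Cloudfront origin bucket and path.
destination="s3://my-cloudfront-origin-bucket/site"

# Inside the Shell Script step, ${workflow.variables.destination}
# resolves to that value, so the final step runs the equivalent of:
echo "aws s3 sync . $destination"
```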

For links to the different pieces in the script, see AWS S3 Sync, Harness Secrets Manager, and Harness Workflow Variables.

Hope this helps!

Don’t forget to Like/Comment/Share!
