CONTINUOUS DELIVERY PIPELINES FOR APPLICATIONS RUNNING IN AWS FARGATE
Abstract: This article proposes a simple CD pipeline solution for applications running in AWS Fargate, built with AWS Developer Tools and ECS. Pipeline design considerations and architecture are discussed, and necessary changes to the application resource provisioning code are proposed. Source code is provided.
Introduction
Note: This article assumes that you have solid experience with AWS Developer Tools, AWS Fargate, Terraform and bash scripting.
In my previous articles I described solutions for:
- Provisioning infrastructure for running batch, web and L4 applications in AWS Fargate;
- Continuous integration of projects whose source code lives in SVN or Git.
Only one piece remains to complete a full CI/CD solution - continuous delivery (CD) - and I would like to devote this article to it.
Continuous Delivery Pipelines
Design Considerations and Architecture
I decided to use AWS Developer Tools to build a CD pipeline, despite the temptation to use Terraform for it. Indeed, we already have modules to provision infrastructure for running applications in Fargate, and we could simply update the image in ..cluster1/webapp1/main.tf and run Terraform in the CD pipeline. However, this simple solution has serious drawbacks. Let me name a few of them:
- if we run Terraform in a pipeline with automated approval for all changes, and the AWS account contains changes which are necessary for your applications to work but which have been made using the AWS Console, AWS CLI, AWS SDKs or CloudFormation, Terraform will overwrite them and potentially break the infrastructure;
- the previous point makes it clear that Terraform changes should be reviewed before they are applied, which defeats the purpose of having an automated pipeline;
- Terraform is not a deployment tool, and doesn’t have the functionality to handle complex deployment types (for example, linear deployments).
In summary, let’s use the right tools for the tasks they have been designed for: Terraform for provisioning infrastructure and dedicated CD pipelines for handling deployments.
The solution described in this article has been developed using the following design considerations:
- A CD pipeline should only be responsible for deployments and run automatically;
- A CD pipeline should be triggered by some kind of event, which serves as approval for deploying a new revision of the application;
- Deployment history should be developer-friendly, and easy to find and use.
Let’s think about the requirements. A natural candidate for creating pipelines in AWS is CodePipeline. A CodePipeline pipeline must have a source action, and the third requirement above gives us a clue: we can use a CodeCommit repository for it! Ok, but what about the deployment? We can use the deployment provided by ECS out of the box and supply it with revision.json from our repository. This leads us to the following solution:
- A developer commits a new version of revision.json to the deployment CodeCommit repository;
- The corresponding pipeline gets triggered;
- The pipeline checks out code from the repository and passes the corresponding revision.json to the pipeline’s deployment stage;
- ECS performs the deployment.
Note: In the code for this article, pipeline triggering functionality is omitted.
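Should you decide to add triggering yourself, a common approach is an Amazon EventBridge rule that starts the pipeline on pushes to the master branch. The sketch below is illustrative only: the rule name, the repository resource and the IAM role are assumptions, not part of this article's code, and the role must allow codepipeline:StartPipelineExecution on the target pipelines.

```hcl
# Sketch only: resource names and the IAM role below are assumptions.
resource "aws_cloudwatch_event_rule" "deployment_push" {
  name = "deployment-repo-push"

  event_pattern = jsonencode({
    source        = ["aws.codecommit"]
    "detail-type" = ["CodeCommit Repository State Change"]
    resources     = [aws_codecommit_repository.deployment.arn]
    detail = {
      event         = ["referenceCreated", "referenceUpdated"]
      referenceName = ["master"]
    }
  })
}

# One target per pipeline. Note that a branch-level rule starts every
# pipeline on any push - filtering by changed path needs extra logic
# (for example, a small Lambda function).
resource "aws_cloudwatch_event_target" "start_pipeline" {
  for_each = var.pipelines

  rule     = aws_cloudwatch_event_rule.deployment_push.name
  arn      = aws_codepipeline.pipeline[each.key].arn
  role_arn = aws_iam_role.events_invoke_codepipeline.arn
}
```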
Deployment CodeCommit Repository
You can create a brand new CodeCommit repository in your AWS account and structure it in the following way:
├── cluster1
│ ├── batchapp1
│ │ └── revision.json
│ ├── service1
│ │ └── revision.json
│ ├── tcpapp1
│ │ └── revision.json
│ └── webapp1
│ └── revision.json
└── cluster2
├── batchapp1
│ └── revision.json
├── service1
│ └── revision.json
├── tcpapp1
│ └── revision.json
└── webapp1
└── revision.json
...
The contents of revision.json should match the format of an ECS deployment revision and typically look as follows:
[{
"name": "webapp1",
"imageUri": "ecr_image_uri"
}]
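As an illustration, a CI job could regenerate this file after pushing a new image tag. Everything below (account ID, region, repository and tag) is made up for the example:

```shell
#!/bin/sh
# Illustrative only: IMAGE_URI would normally come from the CI build that
# pushed the image to ECR; the account, region and tag here are made up.
IMAGE_URI="123456789012.dkr.ecr.ca-central-1.amazonaws.com/webapp1:42"

cat > revision.json <<EOF
[{
    "name": "webapp1",
    "imageUri": "${IMAGE_URI}"
}]
EOF
```

The resulting file can then be committed to the matching cluster/application path in the repository.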
Pipeline Code
The pipeline code is simple and consists of only two stages, source and deploy:
resource "aws_codepipeline" "pipeline" {
  for_each = var.pipelines

  name     = "deployment-master-${var.cluster_name}-${each.key}"
  role_arn = aws_iam_role.codepipeline.arn

  stage {
    name = "Source"

    action {
      name             = "Source"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeCommit"
      version          = "1"
      output_artifacts = ["source_output"]

      configuration = {
        PollForSourceChanges = "false"
        RepositoryName       = "deployment"
        BranchName           = "master"
      }
    }
  }

  stage {
    name = "Deploy"

    action {
      name            = "Deploy"
      category        = "Deploy"
      owner           = "AWS"
      provider        = "ECS"
      input_artifacts = ["source_output"]
      version         = "1"

      configuration = {
        ClusterName = var.cluster_name
        ServiceName = each.key
        FileName    = "${var.cluster_name}/${each.key}/revision.json"
      }
    }
  }

  artifact_store {
    type     = "S3"
    location = local.build_bucket_name
  }
}
As you can see, on the deploy stage we pass the revision.json for the corresponding cluster and application to ECS: ${var.cluster_name}/${each.key}/revision.json.
Changes to The Application Provisioning Code
After the CD pipeline has run, a new ECS task definition revision will be created. When you next run your Terraform code for provisioning your application’s infrastructure, it will detect drift and prompt you to revert the deployment. That is not cool, so let’s fix it!
One quick way to fix it is to ignore any changes to container_definitions. You can do it in the following way:
resource "aws_ecs_task_definition" "task" {
  family                   = local.name_prefix
  container_definitions    = var.container_definition
  task_role_arn            = aws_iam_role.fargate_task_role.arn
  execution_role_arn       = aws_iam_role.fargate_task_execution_role.arn
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = var.cpu
  memory                   = var.memory

  lifecycle {
    ignore_changes = [
      container_definitions
    ]
  }
}
The other way would be to read revision.json from the deployment CodeCommit repository and supply the task definition with the correct image URI as follows:
data "external" "image_uri" {
  program = ["bash", "${path.module}/get_image_uri.sh"]

  query = {
    app_name     = var.service_name
    cluster_name = var.cluster_name
  }
}

module "ecs_template" {
  source = "../template"

  cpu                       = var.cpu
  vpc_id                    = data.terraform_remote_state.vpc.outputs.vpc_id
  memory                    = var.memory
  base_name                 = var.service_name
  cluster_name              = var.cluster_name
  app_policies              = var.app_policies
  cloudwatch_log_group_name = local.cloudwatch_log_group_name

  container_definition = templatefile("${path.module}/../container-definition.tpl", {
    cpu             = var.cpu
    image           = data.external.image_uri.result["imageUri"]
    memory          = var.memory
    aws_region      = local.aws_region
    cluster_name    = var.cluster_name
    awslogs_group   = local.cloudwatch_log_group_name
    container_name  = var.service_name
    container_ports = [var.port]
  })
}
The get_image_uri.sh script used by the external data source:

#!/bin/bash
# Exit if any of the intermediate steps fail
set -e

# Read the cluster and application names from the data source query (stdin JSON)
eval "$(jq -r -M '@sh "CLUSTER_NAME=\(.cluster_name) APP_NAME=\(.app_name)"')"

# Fetch revision.json from the deployment repository and extract the image URI
IMAGE_URI=$(aws codecommit get-file --repository-name=deployment --file-path="${CLUSTER_NAME}/${APP_NAME}/revision.json" | jq -r .fileContent | base64 -d | jq -M '.[0].imageUri')

# Return the result to the calling module as JSON
echo "{\"imageUri\":${IMAGE_URI}}"
In the latter approach, you should populate your CodeCommit repository with the corresponding structure before provisioning resources for your applications (see this for example). As a result, you can omit specifying an image in your configuration:
terraform {
  backend "local" {
    path = "../../../../../../../tf_state/prod/ca-central-1/prod/apps/cluster1/webapp1/terraform.tfstate"
  }
}

module "service-alb" {
  source = "../../../../../../modules/apps/service-alb"

  env_name                     = "prod"
  cluster_name                 = "cluster1"
  service_name                 = "webapp1"
  cpu                          = 1024
  memory                       = 2048
  alb_listener_priority_offset = 0
}
Deployment Instructions
Note: I use the ca-central-1 AWS region in the pipeline code. Please change it to your desired region.
Please find deployment instructions below:
- Our CD pipeline module needs some resources to exist before it can be deployed and work: the deployment CodeCommit repository and a build S3 bucket. It is easy to create them if you don’t have them yet.
- Prepare and push the corresponding structure to the deployment CodeCommit repository (see this for example).
- Make changes to your modules for provisioning infrastructure for running batch, web and L4 applications in AWS Fargate.
- Change the source for the CD pipelines and provision them using Terraform.
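For reference, the prerequisites from the first step could be provisioned with a few lines of Terraform. This is a minimal sketch; the resource names are assumptions, and the bucket name must be globally unique:

```hcl
# Minimal sketch of the prerequisites; names are assumptions.
resource "aws_codecommit_repository" "deployment" {
  repository_name = "deployment"
  description     = "revision.json files consumed by the CD pipelines"
}

resource "aws_s3_bucket" "build" {
  bucket = "mycompany-cd-build-artifacts" # assumed name; must be globally unique
}
```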
To test your new pipelines, commit a new revision to the deployment CodeCommit repository and manually start the corresponding CD pipeline. (A push to the repository should normally start the pipeline execution, but in this article we skipped adding this functionality for simplicity.) ECS should trigger a new deployment (you can see it in the ECS console) and will deregister the old application revision from the load balancer once the new revision passes its health checks.
Note: The CD module in this article is an example only, provided to show the working concept. For production use, pipeline triggering functionality should be added, CodeDeploy might be used should you require more complex deployment types, Terraform state should be moved to an S3 bucket to allow the team to provision resources simultaneously, etc.
Conclusion
In this article we discussed a solution for building the final piece of the puzzle - CD pipelines. (In my previous articles you can find solutions for provisioning resources for running applications in AWS Fargate, and for CI of projects whose source code lives in SVN or Git.) Using the module and the provided code, you can create a customized solution for your own real-life needs.
I hope you enjoyed this article and that you’ll find it useful for building your own CD solutions for your projects.
Happy coding!
Disclaimer: Code and article content provided ‘as-is’, without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of code or article content.
You can find sample sources for building this solution here.