This repository provides best practices and a template framework for developing AWS Cloud Development Kit (CDK)-based applications effectively, quickly, and collaboratively. It introduces practical approaches such as how to deploy to multiple environments, how to organize directories, and how to manage dependencies between stacks, and provides template code to support them. These templates will gradually be expanded to cover more DevOps scenarios.
This template framework supports both CDK v2 and CDK v1.
- AWS CDK v2 branch: `main` (default branch)
- AWS CDK v1 branch: `release_cdk_v1` (now in maintenance mode)
3-a. DevOps Collaboration: how to organize directories for collaboration
3-b. Multi-Target Deployment: how to separate configuration from code
3-c. Stack Independence: how to manage dependencies between stacks
AWS Cloud Development Kit (CDK) is an open-source software development framework for defining your cloud application resources using familiar programming languages. After modeling your application with CDK Constructs and Stacks, you run it through the CDK CLI, which synthesizes it into AWS CloudFormation templates and deploys them.
AWS CDK supports TypeScript, JavaScript, Python, Java, C#/.NET, and (in developer preview) Go. The template code in this repository is implemented in TypeScript, because its static typing provides powerful, automated guidance within the IDE.
Because AWS CDK is available in languages that support object-oriented programming (OOP), cloud resources can be configured and deployed in an abstract, modern way. This repository provides a template framework that takes full advantage of these characteristics.
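For example, a minimal CDK application in TypeScript (CDK v2) looks roughly like the following; the stack and bucket here are generic illustrations, not part of this repository's templates:

```typescript
import { App, Stack, StackProps } from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';

// A stack is a unit of deployment; the constructs inside it become CloudFormation resources.
class HelloCdkStack extends Stack {
    constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);

        // A single S3 bucket declared as a construct.
        new s3.Bucket(this, 'SampleBucket', { versioned: true });
    }
}

const app = new App();
new HelloCdkStack(app, 'HelloCdkStack');
```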
- `npm install`: install dependencies (TypeScript only)
- `cdk list`: list all stacks in the app
- `cdk deploy`: deploy a stack to your default or specified AWS account/region
- `cdk diff`: compare a deployed stack with the current state
- `cdk synth`: emit the synthesized CloudFormation template
- `cdk destroy`: destroy a stack in your default or specified AWS account/region
The `cdk.json` file in the root directory describes the entry-point file; this repository uses `infra/app-main.ts` as the entry point.
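For reference, a minimal `cdk.json` pointing at that entry point might look like the following (a sketch; the actual file may contain additional context settings):

```json
{
  "app": "npx ts-node infra/app-main.ts"
}
```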
- CDK Intro: https://docs.aws.amazon.com/cdk/latest/guide/home.html
- CDK Getting Started: https://docs.aws.amazon.com/cdk/latest/guide/hello_world.html
- API Reference: https://docs.aws.amazon.com/cdk/api/latest/docs/aws-construct-library.html
- CDK Workshop: https://cdkworkshop.com/
- CDK Examples: https://github.com/aws-samples/aws-cdk-examples
First of all, an AWS account and an IAM user are required, along with the IAM user's credential keys.
To execute this template code, the following tools must be installed:
- AWS CLI: aws --version
- Node.js: node --version
- AWS CDK: cdk --version
- jq: jq --version
Please refer to the detailed guide in the CDK Workshop.
Configure your AWS credential keys using the AWS CLI.
aws configure --profile [your-profile]
AWS Access Key ID [None]: xxxxxx
AWS Secret Access Key [None]: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
Default region name [None]: us-east-2
Default output format [None]: json
If you don't know your account number, execute the following command:
aws sts get-caller-identity --profile [optional: your-profile]
...
...
{
"UserId": ".............",
"Account": "75157*******",
"Arn": "arn:aws:iam::75157*******:user/[your IAM User ID]"
}
Several principles were selected to improve DevOps efficiency through AWS CDK. Conversely, if you are considering using AWS CDK as an IaC template for a one-time deployment, there is no need to apply these principles.
- Purpose: As an IaC tool for DevOps, it is very important to organize the project directory so that team members can collaborate by role.
- Approach: We recommend having a separate directory for each role within a project so that each member works within a clear boundary. The following figure shows the directory structure intuitively.
Although this is not a mandatory guideline, organizing the project this way enables development without friction between roles. In particular, as the development paradigm shifts to serverless, the boundary between infrastructure (infra, config, lib) and business code (app, codes, models) is disappearing.
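For reference, such a role-based layout might look like the following sketch, based on the directory names mentioned in this document (the actual repository layout may differ slightly):

```
├── config    # per-deployment configuration files (app-config-*.json)
├── infra     # CDK entry point, stacks, and constructs
├── lib       # shared base classes (template framework)
├── codes     # business code (Lambda functions, container sources, ...)
├── app       # application-side code
├── models    # model/schema definitions
└── script    # helper scripts (setup, deploy, destroy)
```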
- Purpose: Code and configuration should be separated so that deployments to various AWS accounts/regions are possible without modifying the code.
- Approach: Prepare a JSON file for each deployment target in the `config` directory in order to isolate the configuration from the code as much as possible. The following figure shows the configuration files in the `config` directory.
Because we need to maintain a single source tree, all configuration is managed in `config/app-config-[your-suffix].json` files. Several such files are created, one per environment you want to deploy to, and you choose one of them when deploying.
Each `config/app-config-[your-suffix].json` file consists of two main parts in JSON format:
{
"Project": {
},
"Stack": {
}
}
The Project part describes the project name, the stage, and where to deploy.
The project name and stage are combined to create a unique project prefix, which is used as the prefix of all stack names.
Hard-coding the AWS account number and region name in CDK source code forces us to modify the code for every release, so such information must be managed outside the source code.
The final Project part looks like this:
{
"Project": {
"Name": "HelloWorld", <----- Essential: your project name, all stacks will be prefixed with [Project.Name+Project.Stage]
"Stage": "Demo", <----- Essential: your project stage, all stacks will be prefixed with [Project.Name+Project.Stage]
"Account": "75157*******", <----- Essential: update according to your AWS Account
"Region": "eu-central-1", <----- Essential: update according to your target region
"Profile": "cdk-demo" <----- Essential: AWS Profile, keep empty string if no profile configured
},
"Stack": {
}
}
All five items above are mandatory; if there is no profile name, leave it as an empty string.
In this example configuration, all stack names start with `HelloWorldDemo`. This prefix makes it possible to deploy multiple stages to the same AWS account/region.
A CDK project is usually implemented as several stacks, which have some degree of dependency on each other. The Stack part describes the detailed configuration of each stack. There is no need to define a standardized format, because each stack requires different resource configurations, but the `Name` item must be declared because it is common to all stacks.
A sample Stack part looks like this:
{
"Project": {
},
"Stack": {
"RealtimeProcessing": {
"Name": "DataProcessingStack",
"BucketName": "my-data",
"LambdaName": "my-function1",
"LambdaMemory": 256,
"LambdaPath": "codes/lambda/function-a/src",
}
"BatchJob": {
"Name": "BatchJobStack",
"VpcId": "vpc-yyyyyyyy",
"ECSClusterName": "main-cluster",
"ECSServiceName": "main-service",
}
}
}
Set the path of this JSON configuration file through an environment variable. The variable name is `APP_CONFIG`, which can be changed in the `infra/app-main.ts` file.
export APP_CONFIG=config/app-config-demo.json
Or you can select the configuration file on the command line like this:
cdk deploy *DataProcessingStack --context APP_CONFIG=config/app-config-demo.json
Through this external configuration injection, multiple deployments (multiple accounts, regions, and stages) are possible without code modification.
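For reference, the entry point can resolve the configuration path from either the `--context` option or the `APP_CONFIG` environment variable, roughly as sketched below (illustrative only; the actual `infra/app-main.ts` may differ in detail):

```typescript
import * as fs from 'fs';
import * as cdk from 'aws-cdk-lib';

const app = new cdk.App();

// Prefer the CDK context value (--context APP_CONFIG=...), fall back to the environment variable.
const configPath = app.node.tryGetContext('APP_CONFIG') ?? process.env.APP_CONFIG;
if (!configPath) {
    throw new Error('Set APP_CONFIG (environment variable or --context) to a configuration file path.');
}

const appConfig = JSON.parse(fs.readFileSync(configPath, 'utf-8'));
const projectPrefix = `${appConfig.Project.Name}${appConfig.Project.Stage}`;

// Each stack is then created with a name like `${projectPrefix}${stackConfig.Name}`
// and with env: { account: appConfig.Project.Account, region: appConfig.Project.Region }.
```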
- Purpose: It should be possible to deploy each stack independently by removing strong coupling between stacks.
- Approach: If a CloudFormation `Output` is referenced directly between stacks, a strong dependency is created and deployment becomes difficult when there are many stacks. To solve this problem, we recommend placing a key-value registry between stacks and having them reference each other through it. This does not change the fact that the deployment order must be respected for the first deployment, but once the values are stored in the registry, independent deployment is possible afterwards. Conveniently, AWS Systems Manager provides Parameter Store, which is a good fit for this.
Because Parameter Store access is needed frequently, our base class provides helper methods for it.
For the parameter provider, use `putParameter()`:
import * as base from '../../../lib/template/stack/base/base-stack';
import { AppContext } from '../../../lib/template/app-context';
export class DataStoreStack extends base.BaseStack {
constructor(appContext: AppContext, stackConfig: any) {
super(appContext, stackConfig);
const ddbTable = this.createDdbTable();
this.putParameter('TableName', ddbTable.tableName)
}
...
}
For the parameter user, use `getParameter()`:
import * as base from '../../../lib/template/stack/base/base-stack';
import { AppContext } from '../../../lib/template/app-context';
export class DataProcessStack extends base.BaseStack {
constructor(appContext: AppContext, stackConfig: any) {
super(appContext, stackConfig);
const tableName = this.getParameter('TableName')
}
}
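Under the hood, such helpers can be built on the CDK's `aws-ssm` module. The following is a minimal sketch, assuming parameter names are namespaced with the project prefix; the actual base class in `lib/template` may be implemented differently:

```typescript
import { Stack } from 'aws-cdk-lib';
import * as ssm from 'aws-cdk-lib/aws-ssm';

// Illustrative helper; in the real framework these would be methods of the base stack class.
export class ParameterHelperSketch {
    constructor(private readonly stack: Stack, private readonly projectPrefix: string) {}

    // Publish a value to SSM Parameter Store so that other stacks can look it up later.
    putParameter(key: string, value: string): string {
        const parameterName = `${this.projectPrefix}-${key}`;
        new ssm.StringParameter(this.stack, `${key}Parameter`, {
            parameterName,
            stringValue: value,
        });
        return parameterName;
    }

    // Resolve a value that another stack has already published.
    getParameter(key: string): string {
        const parameterName = `${this.projectPrefix}-${key}`;
        return ssm.StringParameter.valueForStringParameter(this.stack, parameterName);
    }
}
```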
- Purpose: Frequently reused workloads should be abstracted and easily reused without code duplication.
- Approach: Frequently used patterns and resources are provided using OOP's template method pattern: common functionality is implemented in a parent class, and child classes inherit and reuse it. This is an essential approach when building a framework.
As shown in the figure above, the framework lets us focus on core business logic and reliably speeds up development.
The parent-class stacks and constructs will be gradually expanded to cover essential workloads and patterns. Contributions are welcome, so please contact me.
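To illustrate the template method pattern in this context, a parent stack class can implement the common flow once and delegate the workload-specific part to child classes. The sketch below uses hypothetical class names (`VpcBaseStackSketch`, `SampleChildStack`); only the hook name `onPostConstructor` mirrors the one used later in this document:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import { Construct } from 'constructs';

// Parent class: fixes the common construction flow (template method).
export abstract class VpcBaseStackSketch extends Stack {
    constructor(scope: Construct, id: string, vpcName: string, props?: StackProps) {
        super(scope, id, props);

        // Common step implemented once: look up an existing VPC by name
        // (requires account/region to be set in the stack's env).
        const vpc = ec2.Vpc.fromLookup(this, 'Vpc', { vpcName });

        // Workload-specific step delegated to the child class.
        this.onPostConstructor(vpc);
    }

    protected abstract onPostConstructor(vpc: ec2.IVpc): void;
}

// Child class: inherits the flow and only implements its own resources.
export class SampleChildStack extends VpcBaseStackSketch {
    protected onPostConstructor(vpc: ec2.IVpc): void {
        // ... create workload resources (ECS, RDS, etc.) inside the given VPC ...
    }
}
```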
This section explains how to use each base class with example code. The sample implements a typical backend service, which has a private ALB, an ECS cluster/service/task (Python container), an Aurora RDS database, and a Cloud9 instance (RDS bastion host) in a VPC. Following CDK best practices, all of this is implemented in four stacks.
The following diagram shows which parent class each stack inherits from.
This stack creates a VPC from a CloudFormation YAML file. For that reason, it inherits from `CfnIncludeStack`, which loads the YAML file for us. Write the information about the CloudFormation template in the config: if you provide the YAML path and parameters through the `onLoadTemplateProps` method, the resulting Cfn object is passed to `onPostConstructor`.
- Stack Configuration
{
"Project": {
...
},
"Stack": {
"SampleCfnVpc": {
"Name": "SampleCfnVpcStack",
"TemplatePath": "infra/stack/template/sample-cfn-vpc.yaml",
"Parameters": [
{
"Key": "VpcName",
"Value": "HelloWorldDemo-Vpc"
}
]
},
"SampleVpcRds": {
...
},
"SampleVpcCloud9": {
...
},
"SampleVpcEcs": {
...
}
}
}
- Stack Implementation
The VPC name is stored in memory via the `putVariable` method and is used later to look up the VPC object from other stacks.
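A sketch of what such a stack implementation might look like, assuming the base class exposes the stack configuration as `this.stackConfig` and provides the hooks described above (the import paths and the return shape of `onLoadTemplateProps` are illustrative; the actual code in this repository may differ):

```typescript
import { CfnInclude } from 'aws-cdk-lib/cloudformation-include';
import { AppContext } from '../../lib/template/app-context';             // hypothetical path
import * as base from '../../lib/template/stack/base/cfn-include-stack'; // hypothetical path

export class SampleCfnVpcStack extends base.CfnIncludeStack {
    constructor(appContext: AppContext, stackConfig: any) {
        super(appContext, stackConfig);
    }

    // Provide the YAML template path and its parameters to the base class.
    onLoadTemplateProps(): any {
        return {
            templatePath: this.stackConfig.TemplatePath,
            parameters: this.stackConfig.Parameters,
        };
    }

    // The loaded CfnInclude object is handed back here.
    onPostConstructor(cfnTemplate: CfnInclude): void {
        // Store the VPC name in memory so that later stacks can look the VPC up by name.
        this.putVariable('VpcName', this.stackConfig.Parameters[0].Value);
    }
}
```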
This stack creates an Aurora Serverless database in the VPC that was created by `SampleCfnVpcStack`. Since it inherits from `VpcBaseStack`, if you pass the VPC name to the `onLookupLegacyVpc` method, the VPC object is passed to `onPostConstructor`.
Database parameters are saved through the `putParameter` method, which stores them in `Parameter Store` for us, while the database credentials are managed in `Secrets Manager`.
- Stack Configuration
{
"Project": {
...
},
"Stack": {
"SampleCfnVpc": {
...
},
"SampleVpcRds": {
"Name": "SampleVpcRdsStack",
"ClusterIdentifier": "SampeDatabase",
"DatabaseName": "helloworld"
},
"SampleVpcCloud9": {
...
},
"SampleVpcEcs": {
...
}
}
}
- Stack Implementation
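A sketch of what this stack's implementation might look like, assuming the `VpcBaseStack` hooks described above, a `getVariable` counterpart to `putVariable`, and access to the stack configuration as `this.stackConfig` (all of which are assumptions; the actual code may differ):

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as rds from 'aws-cdk-lib/aws-rds';
import { AppContext } from '../../lib/template/app-context';            // hypothetical path
import * as base from '../../lib/template/stack/base/vpc-base-stack';   // hypothetical path

export class SampleVpcRdsStack extends base.VpcBaseStack {
    constructor(appContext: AppContext, stackConfig: any) {
        super(appContext, stackConfig);
    }

    // Tell the base class which VPC to look up; the name was stored by SampleCfnVpcStack.
    onLookupLegacyVpc(): string {
        return this.getVariable('VpcName');
    }

    // The looked-up VPC object is handed back here.
    onPostConstructor(vpc: ec2.IVpc): void {
        const cluster = new rds.ServerlessCluster(this, 'AuroraServerless', {
            engine: rds.DatabaseClusterEngine.AURORA_MYSQL,
            vpc,
            clusterIdentifier: this.stackConfig.ClusterIdentifier,
            defaultDatabaseName: this.stackConfig.DatabaseName,
        });

        // Share the endpoint with other stacks through Parameter Store.
        this.putParameter('DatabaseHostName', cluster.clusterEndpoint.hostname);
    }
}
```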
This stack just creates a Cloud9 EC2 instance in the first public subnet of the VPC, so there is almost no special configuration in the config file. Since it inherits from `VpcBaseStack`, if you pass the VPC name to the `onLookupLegacyVpc` method, the VPC object is passed to `onPostConstructor`.
- Stack Configuration
{
"Project": {
...
},
"Stack": {
"SampleCfnVpc": {
...
},
"SampleVpcRds": {
...
},
"SampleVpcCloud9": {
"Name": "SampleVpcCloud9Stack",
"InstanceType": "t3.large",
"IamUser": "your-iam-user-id"
},
"SampleVpcEcs": {
...
}
}
}
Please change `IamUser` in the above configuration before deployment.
- Stack Implementation
This stack creates an ECS cluster/service/task. Since it inherits from `VpcBaseStack`, if you pass the VPC name to the `onLookupLegacyVpc` method, the VPC object is passed to `onPostConstructor`.
- Stack Configuration
{
"Project": {
...
},
"Stack": {
"SampleCfnVpc": {
...
},
"SampleVpcRds": {
...
},
"SampleVpcEcs": {
"Name": "SampleVpcEcsStack",
"ClusterName": "SampleCluster",
"FilePath": "codes/sample-backend-fastapi",
"Memory": 1024,
"Cpu": 512,
"DesiredCount": 1
}
}
}
- Stack Implementation
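Similarly, a sketch of this stack's implementation, under the same assumptions about the `VpcBaseStack` hooks and `this.stackConfig` (the container image is built from the local sources in `FilePath`; the actual code may differ):

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecs_patterns from 'aws-cdk-lib/aws-ecs-patterns';
import { AppContext } from '../../lib/template/app-context';            // hypothetical path
import * as base from '../../lib/template/stack/base/vpc-base-stack';   // hypothetical path

export class SampleVpcEcsStack extends base.VpcBaseStack {
    constructor(appContext: AppContext, stackConfig: any) {
        super(appContext, stackConfig);
    }

    onLookupLegacyVpc(): string {
        return this.getVariable('VpcName');  // assumed counterpart of putVariable
    }

    onPostConstructor(vpc: ec2.IVpc): void {
        const cluster = new ecs.Cluster(this, 'Cluster', {
            vpc,
            clusterName: this.stackConfig.ClusterName,
        });

        // Internal (private) ALB in front of a Fargate service; the image is built from local sources.
        new ecs_patterns.ApplicationLoadBalancedFargateService(this, 'Service', {
            cluster,
            memoryLimitMiB: this.stackConfig.Memory,
            cpu: this.stackConfig.Cpu,
            desiredCount: this.stackConfig.DesiredCount,
            publicLoadBalancer: false,
            taskImageOptions: {
                image: ecs.ContainerImage.fromAsset(this.stackConfig.FilePath),
            },
        });
    }
}
```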
Execute the following command:
sh script/setup_initial.sh config/app-config-demo.json
For more details, open the `script/setup_initial.sh` file.
Caution: this solution contains AWS services that are not covered by the free tier, so be careful about the possible costs.
Since the stacks refer to each other through `Parameter Store`, they must be deployed sequentially during the first deployment only. After that, independent deployment is possible.
You can deploy all the stacks by running:
sh script/deploy_stacks.sh config/app-config-demo.json
You can find the deployment results in AWS CloudFormation.
After `Cloud9` is deployed, connect to it and run the following SQL script. The database access information (host name, user name, password) can be checked in `Secrets Manager`.
Database Connection
mysql -h host_name -u admin -p
SQL Commands
CREATE DATABASE IF NOT EXISTS helloworld;
USE helloworld;
CREATE TABLE IF NOT EXISTS Items (
ID int NOT NULL AUTO_INCREMENT,
NAME varchar(255) NOT NULL UNIQUE,
PRIMARY KEY (ID)
);
INSERT INTO Items (NAME) VALUES ("name-001");
SELECT * FROM Items;
Finally, we can call the internal REST APIs from `Cloud9`. Execute the following commands in `Cloud9`:
export ALB_DNS_NAME=xxxxxxxxxxxxxxxx.region.elb.amazonaws.com
curl $ALB_DNS_NAME
...
...
{"Health": "Good"}
Or
curl $ALB_DNS_NAME/items
...
...
{"items": [[1,"name-001"]]}
`ALB_DNS_NAME` can be found as the DNS name of the load balancer in the `EC2` web console (Load Balancers).
Execute the following command, which will destroy all resources except the RDS database. Destroy that resource manually in the AWS web console (go to RDS -> select the database -> Modify -> disable deletion protection -> select it again -> Delete).
sh ./script/destroy_stacks.sh config/app-config-demo.json
- AWS CDK Deploy Pipeline using AWS CodePipeline
- AWS ECS DevOps using AWS CDK
- AWS IoT Greengrass Ver2 using AWS CDK
- Amazon SageMaker Built-in Algorithm MLOps Pipeline using AWS CDK
- Amazon Cognito and API Gateway based machine to machine authorization using AWS CDK
See CONTRIBUTING for more information.
This library is licensed under the MIT-0 License. See the LICENSE file.