Cloudlift is built by Simpl developers to make it easier to launch dockerized services in AWS ECS.
Cloudlift is a command-line tool for deploying dockerized services to AWS ECS. It's simple to use because it's heavily opinionated. Under the hood, it is a wrapper around AWS CloudFormation templates: creating or updating a service or a cluster creates or updates a CloudFormation stack in AWS.
- pip
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && python get-pip.py
pip install cloudlift
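To verify the installation, cloudlift --help should list the available commands:
cloudlift --help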
aws configure
Enter the AWS Access Key ID and AWS Secret Access Key when prompted. Instructions for obtaining an Access Key ID and Secret Access Key are at http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
If you are using AWS profiles, set the desired profile name in the environment before invoking Cloudlift.
AWS_DEFAULT_PROFILE=<profile name> cloudlift <command>
OR
export AWS_DEFAULT_PROFILE=<profile name>
cloudlift <command>
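For example, assuming a hypothetical profile named simpl-staging set up via aws configure --profile simpl-staging:
AWS_DEFAULT_PROFILE=simpl-staging cloudlift create_environment -e staging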
Create a new environment for services to be deployed. Cloudlift creates a new VPC for the given CIDR and sets up the required networking infrastructure for services to run in ECS.
cloudlift create_environment -e <environment-name>
This starts a prompt for the details required to create an environment, which include -
- AWS region for the environment
- VPC CIDR
- NAT Elastic IP allocation ID
- 2 Public Subnet CIDRs
- 2 Private Subnet CIDRs
- Minimum instances for cluster
- Maximum instances for cluster
- SSH key name
- SNS ARN for notifications
- AWS ACM ARN for SSL certificate
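For illustration, a hypothetical staging environment could be created with answers along these lines (none of these values are defaults; substitute CIDRs, key names and ARNs from your own account):
cloudlift create_environment -e staging
AWS region: ap-south-1
VPC CIDR: 10.30.0.0/16
Public Subnet CIDRs: 10.30.0.0/22, 10.30.4.0/22
Private Subnet CIDRs: 10.30.8.0/22, 10.30.12.0/22
Minimum instances: 1
Maximum instances: 4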
Once the configuration is saved, it is opened in the default VISUAL editor, where it can be changed if required.
cloudlift update_environment -e <environment-name>
This opens the environment configuration in the VISUAL editor. Update it to make changes to the environment.
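Since Cloudlift opens the configuration in whichever editor the VISUAL environment variable points to, you can choose the editor per invocation; for example (environment name illustrative):
VISUAL=vim cloudlift update_environment -e staging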
During create_service and deployments, cloudlift pulls the configuration from AWS Parameter Store and applies it to the task definition. Configuration is stored at paths following the convention /<environment>/<service>/<key>
cloudlift edit_config -e <environment-name>
NOTE: This is not required for every deployment. It's required only when config needs to be changed.
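For example, configuration for a hypothetical service dummy-api in a staging environment would live under /staging/dummy-api/ and can be inspected with the AWS CLI:
aws ssm get-parameters-by-path --path "/staging/dummy-api/" --with-decryption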
In the repository for the application, run -
cloudlift create_service -e <environment-name>
This opens the VISUAL editor with a default config similar to -
{
    "services": {
        "Test123": {
            "command": null,
            "custom_metrics": {
                "metrics_port": "8005",
                "metrics_path": "/metrics"
            },
            "http_interface": {
                "container_port": 80,
                "internal": false,
                "restrict_access_to": [
                    "0.0.0.0/0"
                ]
            },
            "memory_reservation": 100
        }
    }
}
Definitions -
services
: Map of all ECS services with configuration for the current application
command
: Override for the command in the Dockerfile
custom_metrics
: Configuration for custom metrics if required; do not include this if the service does not write/export custom metrics
NOTE: If you use custom metrics, your ECS container network mode will be awsvpc.
⚠ WARNING: If you are adding custom metrics to an existing service, there will be downtime.
http_interface
: Configuration for the HTTP interface if required; do not include this if the service does not require an HTTP interface
container_port
: Port on which the process is exposed inside the container
internal
: Scheme of the load balancer. If internal, the load balancer is accessible only within the VPC
restrict_access_to
: List of CIDRs to which the HTTP interface is restricted
memory_reservation
: Memory reserved for each task, in MB. This is a soft limit, i.e. at least this much memory will be available, and the task can use up to whatever memory is free on the running container instance. Minimum: 10 MB, Maximum: 8000 MB
volume
: Configuration for an EFS volume mount if required; do not include this if the service does not require a volume mount
- Service configuration with custom metrics:
{
    "services": {
        "Test123": {
            "command": null,
            "custom_metrics": {
                "metrics_port": "8005",
                "metrics_path": "/metrics"
            },
            "http_interface": {
                "container_port": 80,
                "internal": false,
                "restrict_access_to": [
                    "0.0.0.0/0"
                ]
            },
            "memory_reservation": 100
        }
    }
}
- Service configuration with volume mount:
{
    "services": {
        "Test123": {
            "command": null,
            "volume": {
                "efs_id": "fs-XXXXXXX",
                "efs_directory_path": "/",
                "container_path": "/"
            },
            "http_interface": {
                "container_port": 80,
                "internal": false,
                "restrict_access_to": [
                    "0.0.0.0/0"
                ]
            },
            "memory_reservation": 100
        }
    }
}
- Service configuration with http interface only:
{
    "services": {
        "Test123": {
            "command": null,
            "http_interface": {
                "container_port": 80,
                "internal": false,
                "restrict_access_to": [
                    "0.0.0.0/0"
                ]
            },
            "memory_reservation": 100
        }
    }
}
This command builds the image (only if the version is unavailable in ECR), pushes it to ECR, and updates the ECS service's task definition. It also supports the --build-arg argument of docker build to pass custom build-time arguments.
cloudlift deploy_service -e <environment-name>
For example, you can pass your SSH key as a build argument to docker build
cloudlift deploy_service --build-arg SSH_KEY "\"`cat ~/.ssh/id_rsa`\"" -e <environment-name>
This example is a bit elaborate to show that -
- it can execute shell commands with "`".
- it's wrapped in double quotes so that line breaks in SSH keys don't break the command.
You can start a shell on a container instance that is running a task for a given application using the start_session command. One prerequisite for this is installing the Session Manager plugin for the AWS CLI; to install it, follow the guide.
cloudlift start_session -e <environment-name>
The MFA code can be passed with the --mfa parameter, or you will be prompted to enter it.
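For example, to start a session in a hypothetical staging environment with the MFA code passed inline:
cloudlift start_session -e staging --mfa 123456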
git clone [email protected]:GetSimpl/cloudlift.git
cd cloudlift
./install-cloudlift.sh
To ensure the tests use the development version and not the installed version, install cloudlift in editable mode:
pip install -e .
A first level of tests asserts that the generated CloudFormation template matches the expected one.
py.test test/deployment/
To run high-level integration tests:
pytest -s test/test_cloudlift.py
These tests expect access to AWS. Since test coverage isn't extensive, it's better to manually test the impacted areas whenever the code changes.