Trim down terraform config for the new Electric (#1)
* Define multiple public subnets in the vpc module

* Remove non-existent env variables from the ECS task definition

* Remove any mentions of the migrations proxy from the ECS service module

* Update ECS service definition to use the new Electric port and updated VPC module

* Remove any mentions of the migrations proxy from the load balancer module

* Switch to using an application load balancer

* Bring the main config up to date with the sub-module changes

* Remove frontend modules and variables

* Add terraform.tfvars to .gitignore

* Fix the certificate and HTTPS listener config

* Update the top-level README

* docs: README tweaks from a clean pass through.

---------

Co-authored-by: James Arthur <[email protected]>
alco and thruflo authored Nov 14, 2024
1 parent 6284eea commit 21aa57c
Showing 28 changed files with 121 additions and 470 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -1,3 +1,4 @@
.terraform
terraform.tfstate
terraform.tfstate.backup
terraform.tfvars
96 changes: 50 additions & 46 deletions README.md
@@ -1,87 +1,93 @@
ElectricSQL on Amazon ECS
=========================

Terraform configuration for provisioning an ECS cluster to run [ElectricSQL](https://electric-sql.com/) behind a Network Load Balancer, connected to an instance of RDS for PostgreSQL, together with a CloudFront distribution backed by an S3 bucket for hosting assets of the local-first web app.
Terraform configuration for provisioning an ECS cluster to run [ElectricSQL](https://electric-sql.com/) behind an Application Load Balancer, connected to an instance of RDS for PostgreSQL.

> [!WARNING]
> This Terraform configuration is a **work in progress**. We don't recommend using it
> in a production setting just yet.
> This Terraform configuration is a **work in progress**. You should review it carefully
> before using it in a production setting.
>
> Please let us know if you notice any bugs, missing configuration or poorly chosen
> defaults. See "Contributing" and "Support" sections at the bottom.
## Overview

The top-level configuration comprises logical modules, providing a concise, high-level overview of the whole setup. Each module is defined in a subdirectory of the top-level `modules/` directory. The only external dependency is the [hashicorp/aws](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) provider.

This is meant to be used as a starting point for a production deployment of your electrified local-first app to AWS. Feel free to make changes to it and adapt the included modules to your needs.

Running `terraform apply` for this configuration without any modifications will provision the following infrastructure:

- a new VPC with two private subnets and one public subnet
- a new VPC with two private and two public subnets
- an instance of RDS for PostgreSQL that has logical replication enabled
- Electric sync service running the `electricsql/electric:latest` image [from Docker Hub](https://hub.docker.com/r/electricsql/electric) as a Fargate task on ECS
- a new S3 bucket for hosting your web app's assets
- a CloudFront distribution to serve the web app
- an Application Load Balancer with an HTTP and an HTTPS listener, both routing to the default port of the Electric sync service container

Things you can customize with input variables:

- the name of each logical component
- database credentials
- domain names used for the CloudFront distribution and the NLB
- Electric's Docker image tag, etc.
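
For illustration, a minimal `terraform.tfvars` along these lines should work (all values are placeholders and `electric_image_tag` is an assumed variable name; `terraform.tfvars.example` is the authoritative reference):

```hcl
# Placeholder values — adapt to your AWS account and network layout.
profile                  = "my-aws-profile"
vpc_cidr_block           = "10.0.0.0/16"
vpc_public_subnet_cidrs  = ["10.0.1.0/24", "10.0.2.0/24"]
vpc_private_subnet_cidrs = ["10.0.101.0/24", "10.0.102.0/24"]
electric_image_tag       = "latest"  # assumed variable name
```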

**NOTE:** when building a new infrastructure from scratch for the first time, a few manual steps will be required, such as initializing the remote state for Terraform, validating a TLS certificate request for your custom domain, etc. See the next section for a complete walkthrough.
> [!NOTE]
> When building this infrastructure from scratch for the first time, you will need to perform some manual steps, including initializing the remote state for Terraform and requesting a TLS certificate from AWS Certificate Manager. See the next section for a complete walkthrough.
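
For the remote-state step, here is a sketch of an S3 backend block (assuming you pre-create the bucket and lock table yourself; all names below are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"  # placeholder, must already exist
    key            = "electric-ecs/terraform.tfstate"
    region         = "us-east-1"           # placeholder
    dynamodb_table = "terraform-locks"     # placeholder, enables state locking
  }
}
```
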
## Usage

### Initial setup

To set up a new infra from scratch, follow these steps:

1. Sign in to AWS CLI and input your access key id, secret key and region.

   ```shell
   aws configure --profile '<profile-name>'
   ```

2. Initialize the provider and local modules.

   ```shell
   terraform init
   ```

3. Copy the `terraform.tfvars.example` file and edit the variable values in it to match your
   preferences. Use the same `<profile-name>` you specified above for the `profile` variable in
   your `terraform.tfvars` file.

   ```shell
   cp terraform.tfvars.example terraform.tfvars
   ```

4. Finally, provision the infrastructure.

   ```shell
   terraform apply
   ```

4. Request a TLS certificate from AWS Certificate Manager, e.g. via the AWS console
   (https://console.aws.amazon.com/acm/home). You will need to provide a domain name, such as `my-electric-sync-service.example.com`. Keep a note of this as you'll create a CNAME for it below once you know the load balancer's hostname. (This is *different* from the validation CNAME you add in the next step.)
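
   If you prefer the CLI, the same request can be made along these lines (a sketch; the domain is a placeholder):

   ```shell
   aws acm request-certificate \
     --profile '<profile-name>' \
     --domain-name my-electric-sync-service.example.com \
     --validation-method DNS
   ```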

5. TODO: Validate the certificate request.
5. TODO: Add a CNAME to your domain pointing at the Load Balancer endpoint.
5. TODO: Add a CNAME to your domain pointing at the Cloudfront distribution endpoint.
5. Verify your ownership of the domain by adding a validation CNAME record to your domain on the website you use to manage your DNS records. This is so that AWS can validate the certificate request and issue the certificate. You can find the "CNAME name" and "CNAME value" to use in the "Domains" section of the certificate page once you've created it. (If you don't see the information in the table, scroll right!).
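
   The validation record is also available from the CLI once the certificate exists (the ARN below is a placeholder):

   ```shell
   aws acm describe-certificate \
     --certificate-arn 'arn:aws:acm:us-east-1:123456789012:certificate/example' \
     --query 'Certificate.DomainValidationOptions'
   ```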

6. Use the ARN of the newly issued certificate as the value for the top-level `tls_certificate_arn` variable in your `terraform.tfvars` file.

7. Provision the infrastructure.

   ```shell
   terraform apply
   ```

8. Once the load balancer is up and running, create another CNAME record on your domain, using the domain you chose for your certificate as the name and the load balancer's generated domain name as the value. Here's how it might look in Namecheap's advanced DNS management view:

![CNAME in Namecheap](img/namecheap_cname.png)
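
   To find the load balancer's generated domain name, one option is the AWS CLI (a sketch, assuming the profile configured in step 1):

   ```shell
   aws elbv2 describe-load-balancers \
     --profile '<profile-name>' \
     --query 'LoadBalancers[*].DNSName' --output text
   ```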

9. Try sending an HTTP request to your custom domain to verify that it's working:

   ```sh
   $ curl -i https://sync.aws-testing.example.com/v1/health
   HTTP/2 200
   date: Thu, 14 Nov 2024 11:28:57 GMT
   content-type: application/json
   content-length: 19
   vary: accept-encoding
   cache-control: no-cache, no-store, must-revalidate
   x-request-id: GAfSPDjAhfDWy3QAAAXy
   server: ElectricSQL/0.8.1
   access-control-allow-origin: *
   access-control-expose-headers: *
   access-control-allow-methods: GET, HEAD

   {"status":"active"}
   ```

### Deploying the web app

With the S3-backed CloudFront setup created by the configuration in this repo, all you need to do to release a new version of your web app is upload its latest assets to the S3 bucket.

```shell
# Change directory to your web app. For example,
cd examples/web-wa-sqlite

# Build app assets
npm run build

# Upload the assets to the S3 bucket, deleting previous versions of the bundles
# and other remote files that are no longer included in the local build.
aws s3 sync dist/ s3://electric-aws-example-app-bucket --delete
```

### Updating the sync service
@@ -115,9 +121,7 @@ Included modules:
- [rds](./modules/rds) - instance of RDS for Postgres with logical replication enabled
- [ecs_task_definition](./modules/ecs_task_definition) - Fargate task for the Electric sync service based on the [Docker Hub image](https://hub.docker.com/r/electricsql/electric)
- [ecs_service](./modules/ecs_service) - custom ECS cluster with one Fargate service that uses the task definition from above
- [load_balancer](./modules/load_balancer) - Network Load Balancer for SSL termination and routing traffic to the sync service's HTTP and TCP ports
- [s3](./modules/s3) - S3 bucket to host the web app's assets
- [cloudfront](./modules/cloudfront) - CloudFront distribution for serving the web app
- [load_balancer](./modules/load_balancer) - Application Load Balancer for SSL termination and routing traffic to the sync service's HTTP port
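
For orientation, the HTTPS side of such a module boils down to an `aws_lb` plus an `aws_lb_listener` roughly like this sketch (resource names are illustrative, not the module's actual contents, though the `var.*` inputs match the ones passed in `main.tf`):

```hcl
resource "aws_lb" "this" {
  name               = "electric-alb"  # placeholder name
  load_balancer_type = "application"
  subnets            = var.subnet_ids
}

resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.this.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = var.ssl_policy
  certificate_arn   = var.tls_certificate_arn

  # Forward everything to the Electric sync service's target group.
  default_action {
    type             = "forward"
    target_group_arn = var.lb_target_group_main.arn
  }
}
```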

## Input variables

Binary file added img/namecheap_cname.png
59 changes: 13 additions & 46 deletions main.tf
@@ -9,7 +9,7 @@ module "vpc" {
source = "./modules/vpc"

cidr_block = var.vpc_cidr_block
public_subnet_cidr = var.vpc_public_subnet_cidr
public_subnet_cidrs = var.vpc_public_subnet_cidrs
private_subnet_cidrs = var.vpc_private_subnet_cidrs
}
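
The `public_subnet_cidrs` change above implies a list-typed input variable; its declaration in `variables.tf` presumably looks something like this sketch (description and defaults are illustrative):

```hcl
variable "vpc_public_subnet_cidrs" {
  description = "CIDR blocks for the public subnets, one per availability zone"
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24"]  # placeholder CIDRs
}
```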

@@ -38,20 +38,20 @@ module "ecs_task_definition" {

container_environment = [
{
name = "AUTH_MODE"
value = "insecure"
name = "LOG_LEVEL"
value = "info"
},
{
name = "DATABASE_URL"
value = module.rds.connection_uri
},
{
name = "ELECTRIC_WRITE_TO_PG_MODE"
value = "direct_writes"
name = "ELECTRIC_INSTANCE_ID"
value = "terraform-aws-test-instance"
},
{
name = "PG_PROXY_PASSWORD"
value = var.pg_proxy_password
name = "PROMETHEUS_PORT"
value = "4000"
}
]
}
@@ -60,51 +60,18 @@ module "ecs_service" {
source = "./modules/ecs_service"

vpc_id = module.vpc.id
public_subnet_cidr = var.vpc_public_subnet_cidr
public_subnet_id = module.vpc.public_subnet_id
public_subnet_cidrs = var.vpc_public_subnet_cidrs
public_subnet_ids = module.vpc.public_subnet_ids
task_definition = module.ecs_task_definition
task_container_name = var.ecs_task_container_name
}

# You'll have to manually copy the CNAME value from the
# request in AWS console and update your custom domain name's records
# with it to pass the DNS validation.
resource "aws_acm_certificate" "tls_cert" {
domain_name = var.tls_cert_domain
subject_alternative_names = var.tls_cert_aliases
key_algorithm = var.tls_cert_key_algorithm
validation_method = "DNS"

lifecycle {
create_before_destroy = true
}
}


module "load_balancer" {
source = "./modules/load_balancer"

vpc_id = module.vpc.id
public_subnet_id = module.vpc.public_subnet_id
tls_certificate = aws_acm_certificate.tls_cert
ssl_policy = var.load_balancer_ssl_policy

lb_target_group_main = module.ecs_service.lb_target_group_main
lb_target_group_proxy = module.ecs_service.lb_target_group_proxy
}

### Frontend

module "s3" {
source = "./modules/s3"

app_bucket_name = var.s3_bucket_name
}

module "cloudfront" {
source = "./modules/cloudfront"
vpc_id = module.vpc.id
subnet_ids = module.vpc.public_subnet_ids
tls_certificate_arn = var.tls_certificate_arn

web_app_bucket = module.s3.bucket
tls_certificate = aws_acm_certificate.tls_cert
distribution_domain_alias = var.cloudfront_domain
lb_target_group_main = module.ecs_service.lb_target_group_main
}
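
The `tls_certificate_arn` passed to the load_balancer module here implies a top-level declaration roughly like this sketch (the actual one lives in `variables.tf`):

```hcl
variable "tls_certificate_arn" {
  description = "ARN of an issued and validated ACM certificate for the HTTPS listener"
  type        = string
}
```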
31 changes: 0 additions & 31 deletions modules/cloudfront/README.md

This file was deleted.

82 changes: 0 additions & 82 deletions modules/cloudfront/main.tf

This file was deleted.

4 changes: 0 additions & 4 deletions modules/cloudfront/outputs.tf

This file was deleted.

21 changes: 0 additions & 21 deletions modules/cloudfront/variables.tf

This file was deleted.

