Telemetry App

A Python API that receives temperature readings from remote sensors and stores them in AWS DynamoDB. The API also provides temperature statistics, such as the maximum, minimum, and average temperatures, based on the readings received.

Contents

API Reference

A Swagger documentation page for the API is currently under development, but in the meantime the sections below should provide enough guidance on how to interact with it.

Public URL

The API is available at: https://prod.lucastelemetry3m.com

The API running on the staging environment is also available at: https://staging.lucastelemetry3m.com

The staging environment is exposed only to demonstrate the multi-environment development strategy in place. More about that in the CI/CD Pipeline section.

Endpoints

The API exposes two endpoints:

  • GET /api/stats
    Used to retrieve the current temperature statistics based on the readings received. No authentication required.

    Response Sample:

    200 OK
    {
      "Maximum": 28,
      "Minimum": -6,
      "Average": 10
    }
    
  • PUT /api/temperature
    Used to store a new reading. No authentication required.

    Expected payload sample:

    {
      "sensorId": "202",
      "temperature": 18,
      "timestamp": "YYYY-MM-DDTHH:MM:SS"
    }
    

    Response Sample:

    200 OK
    {
      "message": "Temperature reading recorded successfully"
    }
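
For quick manual testing, both endpoints can be exercised with a short Python script. The following is a minimal sketch using the requests library (not a project dependency, so install it separately) and the public production URL above; the field names match the samples shown.

import datetime
import requests

BASE_URL = "https://prod.lucastelemetry3m.com"  # public URL listed above

# Submit a new temperature reading (PUT /api/temperature).
reading = {
    "sensorId": "202",
    "temperature": 18,
    "timestamp": datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S"),
}
response = requests.put(f"{BASE_URL}/api/temperature", json=reading, timeout=10)
print(response.status_code, response.json())

# Retrieve the current statistics (GET /api/stats).
stats = requests.get(f"{BASE_URL}/api/stats", timeout=10).json()
print(stats["Maximum"], stats["Minimum"], stats["Average"])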
    

Built With

Python Application

  • Python 3.8
  • Docker v20.10.10
  • Flask Web Framework v2.0.2
  • Flask-RESTful 0.3.9
  • Boto3 v1.20.14 (AWS SDK)
  • uWSGI v2.0.18 (Web server)
  • AutoPEP8 v1.6.0 (Python code formatter)

Infrastructure as Code (IaC)

  • Terraform v2.4

Running the API locally

Prerequisites

Terraform Prerequisites

The following are required to create this stack in your AWS Account.

  1. AWS IAM user with at least the permissions listed in this Sample IAM Policy
  2. Custom Domain registered in AWS Route53
    After registering your domain, update the "dns_zone_name" variable in deploy/variables.tf with your domain name.

The Terraform state and lock are stored remotely, following best practice when working as part of a team. Remote state/locking requires the following AWS resources to be set up:

  • S3 Bucket (Used to store the TF State)
  • DynamoDB Table (Used to store the TF Lock)

To replicate this, create the resources above and update the terraform/main.tf file to match your S3 bucket name and DynamoDB lock table. More information about setting up Terraform.
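
If you prefer to create these remote-state resources programmatically, below is a minimal sketch using boto3 (already a project dependency). The region, bucket name, and table name are placeholders; use whatever values you reference in terraform/main.tf.

import boto3

REGION = "eu-west-1"                      # placeholder: your deployment region
STATE_BUCKET = "my-telemetry-tf-state"    # placeholder bucket name
LOCK_TABLE = "my-telemetry-tf-lock"       # placeholder table name

s3 = boto3.client("s3", region_name=REGION)
# Note: omit CreateBucketConfiguration when the region is us-east-1.
s3.create_bucket(
    Bucket=STATE_BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)
# Versioning lets you recover earlier state files if something goes wrong.
s3.put_bucket_versioning(
    Bucket=STATE_BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

dynamodb = boto3.client("dynamodb", region_name=REGION)
# Terraform's S3 backend expects a string hash key named "LockID" on the lock table.
dynamodb.create_table(
    TableName=LOCK_TABLE,
    AttributeDefinitions=[{"AttributeName": "LockID", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "LockID", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)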

Setting up Dev environment

  1. Once all the prerequisites above are in place, execute the commands below:
git clone https://github.com/lucasfdsilva/telemetry-app
cd telemetry-app/
pip install -r requirements.txt
export FLASK_APP=wsgi.py
export FLASK_ENV=development
export PREFIX=telemetry-dev

Creating Terraform Stack in your AWS Account

IMPORTANT
Terraform will use your AWS account to build all the resources required. This will in turn generate costs in your AWS account.

Estimated costs for running the infrastructure required (per environment)
Monthly: $125.19
Daily: $4.04
Hourly: $0.17

Please refer to the Architecture Diagram to understand which resources are part of this Terraform Stack.

  1. Once your AWS credentials have been configured locally, use Docker Compose to run Terraform:
docker-compose -f terraform/docker-compose.yml run --rm terraform init
docker-compose -f terraform/docker-compose.yml run --rm terraform workspace select dev || docker-compose -f terraform/docker-compose.yml run --rm terraform workspace new dev
docker-compose -f terraform/docker-compose.yml run --rm terraform plan
docker-compose -f terraform/docker-compose.yml run --rm terraform apply
  2. Run the following command to seed the Aggregations DynamoDB table (a boto3 alternative is sketched after these steps).
    Remember to replace "dev" with your workspace name if you used a different one.
aws dynamodb put-item \
--table-name telemetry-dev-temperature-readings-aggregation \
--item file://terraform/templates/dynamodb/seed.json \
--condition-expression "attribute_not_exists(total_readings_count)" \
|| true
  3. Now that Terraform has been initialized and the AWS resources have been provisioned, run the application:
cd /app
flask run
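
As an alternative to the AWS CLI command in step 2 above, the seeding can also be done with boto3. This is a minimal sketch under the same assumptions as the CLI command: the "dev" workspace table name and the seed file shipped in the repository.

import json
import boto3

# Table name assumes the "dev" workspace; adjust to match your workspace.
TABLE_NAME = "telemetry-dev-temperature-readings-aggregation"

# The seed file is already in DynamoDB JSON format, as used by the CLI command.
with open("terraform/templates/dynamodb/seed.json") as f:
    item = json.load(f)

client = boto3.client("dynamodb")
try:
    client.put_item(
        TableName=TABLE_NAME,
        Item=item,
        ConditionExpression="attribute_not_exists(total_readings_count)",
    )
except client.exceptions.ConditionalCheckFailedException:
    # The table is already seeded; ignore (mirrors the "|| true" in the CLI command).
    pass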

Deploying the API

We will use Docker to build a new image and push it to the ECR repository in AWS so that ECS can pull and use it.

  1. Before deploying, make sure your Terraform configuration is valid.
docker-compose -f terraform/docker-compose.yml run --rm terraform init
docker-compose -f terraform/docker-compose.yml run --rm terraform fmt
docker-compose -f terraform/docker-compose.yml run --rm terraform validate
  2. Now run the following at the project root. Make sure you replace the variables where applicable to match your ECR repository and image tag.
docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
  3. We can now apply the Terraform stack so that ECS uses the newest version of the API.
docker-compose -f terraform/docker-compose.yml run --rm terraform plan
docker-compose -f terraform/docker-compose.yml run --rm terraform apply
  4. After the apply job completes, Terraform will output the URL you can use to access the application.

Tearing down the Terraform stack

Since Terraform manages our entire stack, destroying and re-creating it can be done very quickly. The stack created for this application takes approx. 6 minutes to be created from scratch.

  1. Ensure you're in the correct Terraform workspace.
  2. Destroy your Terraform stack
docker-compose -f terraform/docker-compose.yml run --rm terraform workspace select dev
docker-compose -f terraform/docker-compose.yml run --rm terraform destroy

CI/CD Pipeline

In this repository you will find GitHub Actions workflows that automate the continuous integration and continuous deployment of this application.

The available workflows make it possible for the "staging" and "prod" environments to be continuously and seamlessly tested, created, and updated.

The staging environment is built following changes to the "main" branch, while "prod" is updated when new commits and pull requests are made to the "prod" branch.

For more information on the configuration of these workflows, please refer to the following:

Architecture

The Terraform stack was developed following the 5 pillars of the AWS Well-Architected framework.

Please refer to the Architecture Diagram to understand the resources used and the relationship between these resources.

Architecture Diagram

Database

This application uses only DynamoDB to persist and access the required data. Due to the nature of NoSQL databases, the schemas are very simple.

Please refer to the Database Diagram to understand how the DynamoDB tables are set up.
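
To illustrate how the GET /api/stats response could be derived from the aggregation table, here is a minimal sketch using boto3. Only the table name pattern and the total_readings_count attribute are confirmed by the seed step above; the other attribute names are hypothetical placeholders, so check the Database Diagram for the real schema.

import os
import boto3

# Table name follows the pattern used in the seed step; PREFIX comes from the
# dev environment setup (e.g. "telemetry-dev").
PREFIX = os.environ.get("PREFIX", "telemetry-dev")
table = boto3.resource("dynamodb").Table(f"{PREFIX}-temperature-readings-aggregation")

# The aggregation table holds a small number of running totals, so a scan is cheap.
item = table.scan(Limit=1)["Items"][0]

count = int(item["total_readings_count"])
stats = {
    "Maximum": int(item["max_temperature"]),   # hypothetical attribute name
    "Minimum": int(item["min_temperature"]),   # hypothetical attribute name
    "Average": round(int(item["temperature_sum"]) / count) if count else 0,  # hypothetical attribute name
}
print(stats)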