This repository has been archived by the owner on Jan 25, 2023. It is now read-only.

examples: add raft example
Pondidum committed Jan 21, 2021
1 parent 37aa020 commit b5b295c
Showing 5 changed files with 251 additions and 0 deletions.
45 changes: 45 additions & 0 deletions examples/vault-raft-cluster/README.md
@@ -0,0 +1,45 @@
# Vault Cluster with Raft storage backend example

This folder shows an example of Terraform code to deploy a [Vault](https://www.vaultproject.io/) cluster in
[AWS](https://aws.amazon.com/) using the [vault-cluster module](https://github.com/hashicorp/terraform-aws-vault/tree/master/modules/vault-cluster).
The Vault cluster uses Vault's integrated Raft storage backend for high availability and [S3](https://aws.amazon.com/s3/)
for durable storage, so this example also creates an S3 bucket.

This example creates a Vault cluster spread across the subnets in the default VPC of the AWS account. For an example of a Vault cluster
that is publicly accessible, see [the root example](https://github.com/hashicorp/terraform-aws-vault/tree/master/examples/root-example).

![Vault architecture](https://github.com/hashicorp/terraform-aws-vault/blob/master/_docs/architecture-with-dynamodb.png?raw=true)

You will need to create an [Amazon Machine Image (AMI)](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)
that has Vault installed, which you can do using the [vault-consul-ami example](https://github.com/hashicorp/terraform-aws-vault/tree/master/examples/vault-consul-ami).

For more info on how the Vault cluster works, check out the [vault-cluster](https://github.com/hashicorp/terraform-aws-vault/tree/master/modules/vault-cluster) documentation.

**Note**: To keep this example as simple to deploy and test as possible, it deploys the Vault cluster into your default
VPC and default subnets, some of which might be publicly accessible. This is OK for learning and experimenting, but for
production usage, we strongly recommend deploying the Vault cluster into the private subnets of a custom VPC.

## Quick start

To deploy a Vault cluster (the full command sequence is sketched after these steps):

1. `git clone` this repo to your computer.
1. Optional: build a Vault and Consul AMI. See the [vault-consul-ami
example](https://github.com/hashicorp/terraform-aws-vault/tree/master/examples/vault-consul-ami) documentation for
instructions. Make sure to note down the ID of the AMI.
1. Install [Terraform](https://www.terraform.io/).
1. Open `variables.tf`, set the environment variables specified at the top of the file, and fill in any other variables that
don't have a default. If you built a custom AMI, put the AMI ID into the `ami_id` variable. Otherwise, one of our
public example AMIs will be used by default. These AMIs are great for learning/experimenting, but are NOT
recommended for production use.
1. Run `terraform init`.
1. Run `terraform apply`.
1. Run the [vault-examples-helper.sh script](https://github.com/hashicorp/terraform-aws-vault/tree/master/examples/vault-examples-helper/vault-examples-helper.sh) to
print out the IP addresses of the Vault servers and some example commands you can run to interact with the cluster:
`../vault-examples-helper/vault-examples-helper.sh`.
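
For reference, here is a sketch of the command sequence from the steps above; the clone path is illustrative, and any variables without defaults must be filled in first:

```bash
# Clone the repo and switch to this example
git clone https://github.com/hashicorp/terraform-aws-vault.git
cd terraform-aws-vault/examples/vault-raft-cluster

# Review variables.tf, fill in anything without a default, then deploy
terraform init
terraform apply

# Print the server IPs and some example commands for interacting with the cluster
../vault-examples-helper/vault-examples-helper.sh
```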

To see how to connect to the Vault cluster, initialize it, and start reading and writing secrets, head over to the
[How do you use the Vault cluster?](https://github.com/hashicorp/terraform-aws-vault/tree/master/modules/vault-cluster#how-do-you-use-the-vault-cluster) docs.
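
As a minimal sketch of what first contact with the cluster might look like (this assumes the Vault CLI is installed locally and `VAULT_ADDR` points at one of the server IPs printed by the helper script; it is not a substitute for the linked docs):

```bash
# Check whether the cluster is reachable and initialized
vault status

# Initialize Vault once for the whole cluster; store the unseal keys and root token somewhere safe
vault operator init

# After unsealing and logging in, confirm that all nodes have joined the Raft cluster
vault operator raft list-peers
```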
77 changes: 77 additions & 0 deletions examples/vault-raft-cluster/main.tf
@@ -0,0 +1,77 @@
# ----------------------------------------------------------------------------------------------------------------------
# REQUIRE A SPECIFIC TERRAFORM VERSION OR HIGHER
# ----------------------------------------------------------------------------------------------------------------------
terraform {
  # This module is now only being tested with Terraform 0.13.x. However, to make upgrading easier, we are setting
  # 0.12.26 as the minimum version, as that version added support for required_providers with source URLs, making it
  # forwards compatible with 0.13.x code.
  required_version = ">= 0.12.26"
}

# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY THE VAULT SERVER CLUSTER
# ---------------------------------------------------------------------------------------------------------------------

module "vault_cluster" {
# When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
# to a specific version of the modules, such as the following example:
# source = "github.com/hashicorp/terraform-aws-vault.git/modules/vault-cluster?ref=v0.0.1"
source = "../../modules/vault-cluster"

cluster_name = var.vault_cluster_name
cluster_size = var.vault_cluster_size
instance_type = var.vault_instance_type

ami_id = var.ami_id
user_data = data.template_file.user_data_vault_cluster.rendered

enable_s3_backend = true
s3_bucket_name = var.s3_bucket_name
force_destroy_s3_bucket = var.force_destroy_s3_bucket

vpc_id = data.aws_vpc.default.id
subnet_ids = data.aws_subnet_ids.default.ids

# To make testing easier, we allow requests from any IP address here but in a production deployment, we *strongly*
# recommend you limit this to the IP address ranges of known, trusted servers inside your VPC.

allowed_ssh_cidr_blocks = ["0.0.0.0/0"]
allowed_inbound_cidr_blocks = ["0.0.0.0/0"]
allowed_inbound_security_group_ids = []
allowed_inbound_security_group_count = 0
ssh_key_name = var.ssh_key_name
}

# ---------------------------------------------------------------------------------------------------------------------
# THE USER DATA SCRIPT THAT WILL RUN ON EACH VAULT SERVER WHEN IT'S BOOTING
# This script will configure and start Vault
# ---------------------------------------------------------------------------------------------------------------------

data "template_file" "user_data_vault_cluster" {
template = file("${path.module}/user-data-vault.sh")

vars = {
aws_region = data.aws_region.current.name
s3_bucket_name = var.s3_bucket_name
}
}
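
# ---------------------------------------------------------------------------------------------------------------------
# NOTE: on Terraform 0.12.26+ (the minimum version required above), the built-in templatefile() function can render
# the same script without the external template provider. A hypothetical equivalent, shown here only as a sketch:
#
#   locals {
#     user_data_vault_cluster = templatefile("${path.module}/user-data-vault.sh", {
#       aws_region     = data.aws_region.current.name
#       s3_bucket_name = var.s3_bucket_name
#     })
#   }
#
# The module above would then take user_data = local.user_data_vault_cluster instead of
# data.template_file.user_data_vault_cluster.rendered.
# ---------------------------------------------------------------------------------------------------------------------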

# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY THE CLUSTER IN THE DEFAULT VPC AND AVAILABILITY ZONES
# Using the default VPC and subnets makes this example easy to run and test, but it means Vault is accessible from the
# public Internet. In a production deployment, we strongly recommend deploying into a custom VPC and private subnets.
# ---------------------------------------------------------------------------------------------------------------------

data "aws_vpc" "default" {
default = var.vpc_id == null ? true : false
id = var.vpc_id
}

data "aws_subnet_ids" "default" {
vpc_id = data.aws_vpc.default.id
}

data "aws_region" "current" {
}

39 changes: 39 additions & 0 deletions examples/vault-raft-cluster/outputs.tf
@@ -0,0 +1,39 @@
output "asg_name_vault_cluster" {
value = module.vault_cluster.asg_name
}

output "launch_config_name_vault_cluster" {
value = module.vault_cluster.launch_config_name
}

output "iam_role_arn_vault_cluster" {
value = module.vault_cluster.iam_role_arn
}

output "iam_role_id_vault_cluster" {
value = module.vault_cluster.iam_role_id
}

output "security_group_id_vault_cluster" {
value = module.vault_cluster.security_group_id
}

output "aws_region" {
value = data.aws_region.current.name
}

output "vault_servers_cluster_tag_key" {
value = module.vault_cluster.cluster_tag_key
}

output "vault_servers_cluster_tag_value" {
value = module.vault_cluster.cluster_tag_value
}

output "ssh_key_name" {
value = var.ssh_key_name
}

output "vault_cluster_size" {
value = var.vault_cluster_size
}
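
Once `terraform apply` has finished, any of these values can be read back with `terraform output`; a couple of illustrative invocations:

```bash
# Print a single output value
terraform output asg_name_vault_cluster

# Print all outputs for the example
terraform output
```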
25 changes: 25 additions & 0 deletions examples/vault-raft-cluster/user-data-vault.sh
@@ -0,0 +1,25 @@
#!/bin/bash
# This script is meant to be run in the User Data of each EC2 Instance while it's booting. The script uses the
# run-vault script to configure and start Vault in server mode with the Raft storage backend enabled. Note that this
# script assumes it's running in an AMI built from the Packer template in examples/vault-consul-ami/vault-consul.json.

set -e

# Send the log output from this script to user-data.log, syslog, and the console
# From: https://alestic.com/2010/12/ec2-user-data-output/
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1

# The Packer template puts the TLS certs in these file paths
readonly VAULT_TLS_CERT_FILE="/opt/vault/tls/vault.crt.pem"
readonly VAULT_TLS_KEY_FILE="/opt/vault/tls/vault.key.pem"

# The variables below are filled in via Terraform interpolation
/opt/vault/bin/run-vault \
  --tls-cert-file "$VAULT_TLS_CERT_FILE" \
  --tls-key-file "$VAULT_TLS_KEY_FILE" \
  --enable-raft-backend \
  --enable-s3-backend \
  --s3-bucket "${s3_bucket_name}" \
  --s3-bucket-region "${aws_region}"

65 changes: 65 additions & 0 deletions examples/vault-raft-cluster/variables.tf
@@ -0,0 +1,65 @@
# ---------------------------------------------------------------------------------------------------------------------
# ENVIRONMENT VARIABLES
# Define these secrets as environment variables
# ---------------------------------------------------------------------------------------------------------------------

# AWS_ACCESS_KEY_ID
# AWS_SECRET_ACCESS_KEY
# AWS_DEFAULT_REGION

# ---------------------------------------------------------------------------------------------------------------------
# REQUIRED PARAMETERS
# You must provide a value for each of these parameters.
# ---------------------------------------------------------------------------------------------------------------------

variable "ami_id" {
description = "The ID of the AMI to run in the cluster. This should be an AMI built from the Packer template under examples/vault-consul-ami/vault-consul.json."
type = string
}

variable "ssh_key_name" {
description = "The name of an EC2 Key Pair that can be used to SSH to the EC2 Instances in this cluster. Set to an empty string to not associate a Key Pair."
type = string
}

# ---------------------------------------------------------------------------------------------------------------------
# OPTIONAL PARAMETERS
# These parameters have reasonable defaults.
# ---------------------------------------------------------------------------------------------------------------------

variable "vault_cluster_name" {
description = "What to name the Vault server cluster and all of its associated resources"
type = string
default = "vault-example"
}

variable "vault_cluster_size" {
description = "The number of Vault server nodes to deploy. We strongly recommend using 3 or 5."
type = number
default = 3
}

variable "vault_instance_type" {
description = "The type of EC2 Instance to run in the Vault ASG"
type = string
default = "t2.micro"
}

variable "vpc_id" {
description = "The ID of the VPC to deploy into. Leave an empty string to use the Default VPC in this region."
type = string
default = null
}

variable "s3_bucket_name" {
description = "The name of an S3 bucket to create and use as a storage backend (if configured). Note: S3 bucket names must be *globally* unique."
type = string
default = "my-vault-bucket"
}

variable "force_destroy_s3_bucket" {
description = "If you set this to true, when you run terraform destroy, this tells Terraform to delete all the objects in the S3 bucket used for backend storage (if configured). You should NOT set this to true in production or you risk losing all your data! This property is only here so automated tests of this module can clean up after themselves."
type = bool
default = false
}
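
A `terraform.tfvars` sketch showing one way to supply these values; the AMI ID, key pair name, and bucket name below are placeholders, not values from this example:

```hcl
# terraform.tfvars -- example values only, replace with your own
ami_id             = "ami-0123456789abcdef0"  # AMI built from examples/vault-consul-ami
ssh_key_name       = "my-keypair"             # existing EC2 Key Pair in the target region
vault_cluster_name = "vault-example"
vault_cluster_size = 3
s3_bucket_name     = "my-unique-vault-bucket" # S3 bucket names must be globally unique
```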
