This repository has been archived by the owner on Jan 25, 2023. It is now read-only.

Support raft for HA storage #205

Open · wants to merge 2 commits into master
Changes from all commits
45 changes: 45 additions & 0 deletions examples/vault-raft-cluster/README.md
@@ -0,0 +1,45 @@
# Vault Cluster with Raft HA storage example

This folder shows an example of Terraform code to deploy a [Vault](https://www.vaultproject.io/) cluster in
[AWS](https://aws.amazon.com/) using the [vault-cluster module](https://github.com/hashicorp/terraform-aws-vault/tree/master/modules/vault-cluster).
The Vault cluster uses Vault's Integrated Storage (Raft) as a high-availability backend and [S3](https://aws.amazon.com/s3/)
for durable storage, so this example also creates an S3 bucket.
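
With this setup, the `run-vault` script renders a Vault configuration roughly like the following. This is a
sketch based on the templates in this repo; the bucket name, IP address, and ports are illustrative placeholders
that depend on your variables:

```hcl
# Durable storage lives in S3
storage "s3" {
  bucket = "my-vault-bucket"
  region = "us-east-1"
}

# Raft (Integrated Storage) handles HA coordination only
ha_storage "raft" {
  path    = "/opt/vault/raft"
  node_id = "10.0.1.25"   # each node uses its own IP as the node ID
}

# HA settings
cluster_addr = "https://10.0.1.25:8201"
api_addr     = "https://10.0.1.25:8200"
```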

This example creates a Vault cluster spread across the subnets in the default VPC of the AWS account. For an example of a Vault cluster
that is publicly accessible, see [the root example](https://github.com/hashicorp/terraform-aws-vault/tree/master/examples/root-example).

![Vault architecture](https://github.com/hashicorp/terraform-aws-vault/blob/master/_docs/architecture-with-dynamodb.png?raw=true)

You will need to create an [Amazon Machine Image (AMI)](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)
that has Vault installed, which you can do using the [vault-consul-ami example](https://github.com/hashicorp/terraform-aws-vault/tree/master/examples/vault-consul-ami).
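
If you do build your own AMI, the build itself boils down to a single Packer command along these lines (a sketch;
the variable names here are illustrative, so check the vault-consul-ami docs for the real ones):

```bash
cd examples/vault-consul-ami

# Build the Vault + Consul AMI from the Packer template used throughout this repo
packer build -var aws_region=us-east-1 vault-consul.json
```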

For more info on how the Vault cluster works, check out the [vault-cluster](https://github.com/hashicorp/terraform-aws-vault/tree/master/modules/vault-cluster) documentation.

**Note**: To keep this example as simple to deploy and test as possible, it deploys the Vault cluster into your default
VPC and default subnets, some of which might be publicly accessible. This is OK for learning and experimenting, but for
production usage, we strongly recommend deploying the Vault cluster into the private subnets of a custom VPC.

## Quick start

To deploy a Vault Cluster:

1. `git clone` this repo to your computer.
1. Optional: build a Vault and Consul AMI. See the [vault-consul-ami
example](https://github.com/hashicorp/terraform-aws-vault/tree/master/examples/vault-consul-ami) documentation for
instructions. Make sure to note down the ID of the AMI.
1. Install [Terraform](https://www.terraform.io/).
1. Open `variables.tf`, set the environment variables specified at the top of the file, and fill in any other variables that
don't have a default. If you built a custom AMI, put the AMI ID into the `ami_id` variable. Otherwise, one of our
public example AMIs will be used by default. These AMIs are great for learning/experimenting, but are NOT
recommended for production use.
1. Run `terraform init`.
1. Run `terraform apply` (see the example command after this list).
1. Run the [vault-examples-helper.sh script](https://github.com/hashicorp/terraform-aws-vault/tree/master/examples/vault-examples-helper/vault-examples-helper.sh) to
print out the IP addresses of the Vault servers and some example commands you can run to interact with the cluster:
`../vault-examples-helper/vault-examples-helper.sh`.
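
Putting steps 5-7 together, a full run might look like this (the AMI ID, key pair, and bucket name below are
placeholders):

```bash
terraform init

# ami_id is only needed if you built a custom AMI
terraform apply \
  -var ami_id=ami-0123456789abcdef0 \
  -var ssh_key_name=my-key-pair \
  -var s3_bucket_name=my-globally-unique-vault-bucket

# Print the server IPs and some example commands for talking to the cluster
../vault-examples-helper/vault-examples-helper.sh
```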

To see how to connect to the Vault cluster, initialize it, and start reading and writing secrets, head over to the
[How do you use the Vault cluster?](https://github.com/hashicorp/terraform-aws-vault/tree/master/modules/vault-cluster#how-do-you-use-the-vault-cluster) docs.
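
As a quick orientation, initializing the cluster and verifying Raft HA membership looks roughly like this
(a sketch; `<server-ip>` is a placeholder, and the example AMIs use self-signed TLS certs):

```bash
export VAULT_ADDR="https://<server-ip>:8200"
export VAULT_SKIP_VERIFY=true   # only because the example AMIs use self-signed certs

# Initialize Vault on one server; store the unseal keys and root token somewhere safe
vault operator init

# After unsealing and logging in, confirm the servers joined the Raft cluster
vault operator raft list-peers
```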
77 changes: 77 additions & 0 deletions examples/vault-raft-cluster/main.tf
@@ -0,0 +1,77 @@
# ----------------------------------------------------------------------------------------------------------------------
# REQUIRE A SPECIFIC TERRAFORM VERSION OR HIGHER
# ----------------------------------------------------------------------------------------------------------------------
terraform {
  # This module is now only being tested with Terraform 0.13.x. However, to make upgrading easier, we are setting
  # 0.12.26 as the minimum version, as that version added support for required_providers with source URLs, making it
  # forwards compatible with 0.13.x code.
  required_version = ">= 0.12.26"
}

# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY THE VAULT SERVER CLUSTER
# ---------------------------------------------------------------------------------------------------------------------

module "vault_cluster" {
# When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
# to a specific version of the modules, such as the following example:
# source = "github.com/hashicorp/terraform-aws-vault.git/modules/vault-cluster?ref=v0.0.1"
source = "../../modules/vault-cluster"

cluster_name = var.vault_cluster_name
cluster_size = var.vault_cluster_size
instance_type = var.vault_instance_type

ami_id = var.ami_id
user_data = data.template_file.user_data_vault_cluster.rendered

enable_s3_backend = true
s3_bucket_name = var.s3_bucket_name
force_destroy_s3_bucket = var.force_destroy_s3_bucket

vpc_id = data.aws_vpc.default.id
subnet_ids = data.aws_subnet_ids.default.ids

# To make testing easier, we allow requests from any IP address here but in a production deployment, we *strongly*
# recommend you limit this to the IP address ranges of known, trusted servers inside your VPC.

allowed_ssh_cidr_blocks = ["0.0.0.0/0"]
allowed_inbound_cidr_blocks = ["0.0.0.0/0"]
allowed_inbound_security_group_ids = []
allowed_inbound_security_group_count = 0
ssh_key_name = var.ssh_key_name
}

# ---------------------------------------------------------------------------------------------------------------------
# THE USER DATA SCRIPT THAT WILL RUN ON EACH VAULT SERVER WHEN IT'S BOOTING
# This script will configure and start Vault
# ---------------------------------------------------------------------------------------------------------------------

data "template_file" "user_data_vault_cluster" {
template = file("${path.module}/user-data-vault.sh")

vars = {
aws_region = data.aws_region.current.name
s3_bucket_name = var.s3_bucket_name
}
}

# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY THE CLUSTER IN THE DEFAULT VPC AND AVAILABILITY ZONES
# Using the default VPC and subnets makes this example easy to run and test, but it means Vault is accessible
# from the public Internet. In a production deployment, we strongly recommend deploying into a custom VPC
# and private subnets.
# ---------------------------------------------------------------------------------------------------------------------

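# If var.vpc_id is null, this looks up the Default VPC; otherwise it looks up the VPC with the given ID.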
data "aws_vpc" "default" {
default = var.vpc_id == null ? true : false
id = var.vpc_id
}

data "aws_subnet_ids" "default" {
vpc_id = data.aws_vpc.default.id
}

data "aws_region" "current" {
}

39 changes: 39 additions & 0 deletions examples/vault-raft-cluster/outputs.tf
@@ -0,0 +1,39 @@
output "asg_name_vault_cluster" {
value = module.vault_cluster.asg_name
}

output "launch_config_name_vault_cluster" {
value = module.vault_cluster.launch_config_name
}

output "iam_role_arn_vault_cluster" {
value = module.vault_cluster.iam_role_arn
}

output "iam_role_id_vault_cluster" {
value = module.vault_cluster.iam_role_id
}

output "security_group_id_vault_cluster" {
value = module.vault_cluster.security_group_id
}

output "aws_region" {
value = data.aws_region.current.name
}

output "vault_servers_cluster_tag_key" {
value = module.vault_cluster.cluster_tag_key
}

output "vault_servers_cluster_tag_value" {
value = module.vault_cluster.cluster_tag_value
}

output "ssh_key_name" {
value = var.ssh_key_name
}

output "vault_cluster_size" {
value = var.vault_cluster_size
}
25 changes: 25 additions & 0 deletions examples/vault-raft-cluster/user-data-vault.sh
@@ -0,0 +1,25 @@
#!/bin/bash
# This script is meant to be run in the User Data of each EC2 Instance while it's booting. The script uses the
# run-vault script to configure and start Vault in server mode, with Vault's Integrated Storage (Raft) as the HA
# backend and S3 as the storage backend. Note that this script assumes it's running in an AMI built from the Packer
# template in examples/vault-consul-ami/vault-consul.json.

set -e

# Send the log output from this script to user-data.log, syslog, and the console
# From: https://alestic.com/2010/12/ec2-user-data-output/
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1

# The Packer template puts the TLS certs in these file paths
readonly VAULT_TLS_CERT_FILE="/opt/vault/tls/vault.crt.pem"
readonly VAULT_TLS_KEY_FILE="/opt/vault/tls/vault.key.pem"

# The variables below are filled in via Terraform interpolation
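# Note: --raft-dir is not passed here, so run-vault falls back to its default. Assuming the standard /opt/vault
# install path, that default resolves to /opt/vault/raft, the directory that install-vault creates.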
/opt/vault/bin/run-vault \
  --tls-cert-file "$VAULT_TLS_CERT_FILE" \
  --tls-key-file "$VAULT_TLS_KEY_FILE" \
  --enable-raft-backend \
  --enable-s3-backend \
  --s3-bucket "${s3_bucket_name}" \
  --s3-bucket-region "${aws_region}"

65 changes: 65 additions & 0 deletions examples/vault-raft-cluster/variables.tf
@@ -0,0 +1,65 @@
# ---------------------------------------------------------------------------------------------------------------------
# ENVIRONMENT VARIABLES
# Define these secrets as environment variables
# ---------------------------------------------------------------------------------------------------------------------

# AWS_ACCESS_KEY_ID
# AWS_SECRET_ACCESS_KEY
# AWS_DEFAULT_REGION

# ---------------------------------------------------------------------------------------------------------------------
# REQUIRED PARAMETERS
# You must provide a value for each of these parameters.
# ---------------------------------------------------------------------------------------------------------------------

variable "ami_id" {
description = "The ID of the AMI to run in the cluster. This should be an AMI built from the Packer template under examples/vault-consul-ami/vault-consul.json."
type = string
}

variable "ssh_key_name" {
description = "The name of an EC2 Key Pair that can be used to SSH to the EC2 Instances in this cluster. Set to an empty string to not associate a Key Pair."
type = string
}

# ---------------------------------------------------------------------------------------------------------------------
# OPTIONAL PARAMETERS
# These parameters have reasonable defaults.
# ---------------------------------------------------------------------------------------------------------------------

variable "vault_cluster_name" {
description = "What to name the Vault server cluster and all of its associated resources"
type = string
default = "vault-example"
}

variable "vault_cluster_size" {
description = "The number of Vault server nodes to deploy. We strongly recommend using 3 or 5."
type = number
default = 3
}

variable "vault_instance_type" {
description = "The type of EC2 Instance to run in the Vault ASG"
type = string
default = "t2.micro"
}

variable "vpc_id" {
description = "The ID of the VPC to deploy into. Leave an empty string to use the Default VPC in this region."
type = string
default = null
}

variable "s3_bucket_name" {
description = "The name of an S3 bucket to create and use as a storage backend (if configured). Note: S3 bucket names must be *globally* unique."
type = string
default = "my-vault-bucket"
}

variable "force_destroy_s3_bucket" {
description = "If you set this to true, when you run terraform destroy, this tells Terraform to delete all the objects in the S3 bucket used for backend storage (if configured). You should NOT set this to true in production or you risk losing all your data! This property is only here so automated tests of this module can clean up after themselves."
type = bool
default = false
}

1 change: 1 addition & 0 deletions modules/install-vault/install-vault
@@ -156,6 +156,7 @@ function create_vault_install_paths {
sudo mkdir -p "$path/data"
sudo mkdir -p "$path/tls"
sudo mkdir -p "$path/scripts"
sudo mkdir -p "$path/raft"
sudo chmod 755 "$path"
sudo chmod 755 "$path/bin"
sudo chmod 755 "$path/data"
37 changes: 34 additions & 3 deletions modules/run-vault/run-vault
@@ -47,6 +47,8 @@ function print_usage {
  echo -e "  --enable-dynamo-backend\tIf this flag is set, DynamoDB will be enabled as the backend storage (HA)"
  echo -e "  --dynamo-region\tSpecifies the AWS region where --dynamo-table lives. Only used if '--enable-dynamo-backend' is on"
  echo -e "  --dynamo-table\tSpecifies the DynamoDB table to use for HA Storage. Only used if '--enable-dynamo-backend' is on"
  echo -e "  --enable-raft-backend\tIf this flag is set, Vault's Integrated Storage will be enabled as the backend storage (HA)"
  echo -e "  --raft-dir\t\tSpecifies the path to store Vault's Integrated Storage data. Optional. Default is the absolute path of '../raft', relative to this script."
  echo
  echo "Options for Vault Agent:"
  echo
@@ -244,6 +246,8 @@ function generate_vault_config {
  local -r auto_unseal_kms_key_id="${16}"
  local -r auto_unseal_kms_key_region="${17}"
  local -r auto_unseal_endpoint="${18}"
  local -r enable_raft_backend="${19}"
  local -r raft_dir="${20}"
  local -r config_path="$config_dir/$VAULT_CONFIG_FILE"

  local instance_ip_address
@@ -301,8 +305,19 @@ EOF
    dynamodb_storage_type="ha_storage"
  fi

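  # When raft is enabled alongside another storage backend (such as S3), it is rendered as "ha_storage":
  # raft then handles HA coordination only, while the other backend stores Vault's data. Note that using
  # raft for ha_storage requires a Vault version that supports it (Vault 1.5 or later).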
if [[ "$enable_raft_backend" == "true" ]]; then
vault_storage_backend=$(cat <<EOF
ha_storage "raft" {
path = "$raft_dir"
node_id = "$instance_ip_address"
}
# HA settings
cluster_addr = "https://$instance_ip_address:$cluster_port"
api_addr = "$api_addr"
EOF
)

if [[ "$enable_dynamo_backend" == "true" ]]; then
elif [[ "$enable_dynamo_backend" == "true" ]]; then
    vault_storage_backend=$(cat <<EOF
$dynamodb_storage_type "dynamodb" {
  ha_enabled = "true"
@@ -438,6 +453,7 @@ function run {
  local cluster_port=""
  local api_addr=""
  local config_dir=""
  local raft_dir=""
  local bin_dir=""
  local data_dir=""
  local log_level="$DEFAULT_LOG_LEVEL"
@@ -452,6 +468,7 @@
  local enable_dynamo_backend="false"
  local dynamo_region=""
  local dynamo_table=""
  local enable_raft_backend="false"
  local agent="false"
  local agent_vault_address="$DEFAULT_AGENT_VAULT_ADDRESS"
  local agent_vault_port="$DEFAULT_PORT"
@@ -558,6 +575,14 @@
dynamo_table="$2"
shift
;;
--enable-raft-backend)
enable_raft_backend="true"
;;
--raft-dir)
assert_not_empty "$key" "$2"
raft_dir="$2"
shift
;;
--agent)
agent="true"
;;
@@ -641,7 +666,7 @@ function run {
assert_not_empty "--s3-bucket-region" "$s3_bucket_region"
fi
fi

if [[ "$enable_dynamo_backend" == "true" ]]; then
assert_not_empty "--dynamo-table" "$dynamo_table"
assert_not_empty "--dynamo-region" "$dynamo_region"
@@ -666,6 +691,10 @@
    data_dir=$(cd "$SCRIPT_DIR/../data" && pwd)
  fi

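  # Default to the raft/ directory that install-vault creates alongside bin/ and data/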
if [[ -z "$raft_dir" ]]; then
raft_dir=$(cd "$SCRIPT_DIR/../raft" && pwd)
fi

if [[ -z "$user" ]]; then
user=$(get_owner_of_path "$config_dir")
fi
@@ -720,7 +749,9 @@ function run {
"$enable_auto_unseal" \
"$auto_unseal_kms_key_id" \
"$auto_unseal_kms_key_region" \
"$auto_unseal_endpoint"
"$auto_unseal_endpoint" \
"$enable_raft_backend" \
"$raft_dir"
fi
fi
