---
title: Extending Terraform OKE with a helm chart
parent:
tags:
date: 2021-10-28 12:00
description: Extend a sample repo with your own extensions to make reusable provisioning scripts.
mrm: WWMK211125P00024
xredirect:
slug: extending-terraform-oke-helm-chart
---
{% slides %} When designing the Terraform OKE provisioning scripts, one of our goals was to make them reusable. What does that translate to here? In this context, it means extending the base sample repo and adding our own extensions on top of it.
In this tutorial, we'll deploy a Redis Cluster to OKE using helm charts. Terraform conveniently provides a helm provider, so we'll use that for our purposes.
Topics covered in this tutorial:
- Adding the helm provider and repository
- Adding Redis with a helm release
- Interacting with the Redis Cluster
- Inspecting the new Redis Cluster
- Updating your release after deployment
For additional information, see:
- Signing Up for Oracle Cloud Infrastructure
- Getting started with Terraform
- Getting started with OCI Cloud Shell
To successfully complete this tutorial, you'll need the following:
- An Oracle Cloud Infrastructure Free Tier account. [Start for Free]({{ site.urls.always_free }}).
- A MacOS, Linux, or Windows computer with `ssh` support installed.
- OCI Cloud Shell - It provides a great platform for quickly working with Terraform, as well as a host of other OCI interfaces and tools.
- Access to Terraform.
First, clone the repo as we did before:

```
git clone https://github.com/oracle/sample-oke-for-terraform.git tfoke
```

Then, navigate into the `tfoke` directory:

```
cd tfoke
```

And finally, follow the instructions to create your Terraform variable file.
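What goes into the variable file depends on the repo's instructions. Purely as a hypothetical sketch (the variable names here are illustrative and may differ in your version of the repo), a `terraform.tfvars` could look something like:

```terraform
# Illustrative only -- use the variable names from the repo's instructions.
api_fingerprint      = "aa:bb:cc:dd:..."
api_private_key_path = "~/.oci/oci_api_key.pem"
tenancy_ocid         = "ocid1.tenancy.oc1..aaaa..."
user_ocid            = "ocid1.user.oc1..aaaa..."
compartment_ocid     = "ocid1.compartment.oc1..aaaa..."
region               = "us-phoenix-1"
```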
In the OKE module, create a `redis.tf` file. First, we need to configure the helm provider. Since we already have the `kubeconfig` file, we'll use the File Config method:

- Add the following to `redis.tf`:

  ```terraform
  provider "helm" {
    kubernetes {
      config_path = "${path.root}/generated/kubeconfig"
    }
  }
  ```
- Add a helm repository:

  ```terraform
  data "helm_repository" "stable" {
    name = "stable"
    url  = "https://kubernetes-charts.storage.googleapis.com"
  }
  ```
In this section, we'll use the Redis helm chart to create a helm release. However, we want helm to deploy only after the worker nodes become active, so we need a way to check their status before proceeding.
Let's get started setting up our release. In the sample repo, there's a `null_resource` called `is_worker_active` that you can use to set an explicit dependency.
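The repo's actual implementation may differ, but a readiness gate like this is typically a `null_resource` whose provisioner polls the cluster until a worker reports `Ready`; a hypothetical sketch:

```terraform
# Hypothetical sketch only -- see the sample repo for the real is_worker_active.
resource "null_resource" "is_worker_active" {
  provisioner "local-exec" {
    command = "until kubectl --kubeconfig=${path.root}/generated/kubeconfig get nodes | grep -qw Ready; do sleep 10; done"
  }
}
```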
To make use of this dependency, add the following to `redis.tf`:
resource "helm_release" "redis" { depends_on = ["null_resource.is_worker_active", "local_file.kube_config_file"] provider = "helm"
name = "oke-redis"
repository = "${data.helm_repository.stable.metadata.0.name}"
chart = "redis"
version = "6.4.5" set {
name = "cluster.enabled"
value = "true"
} set {
name = "cluster.slaveCount"
value = "3"
}
set {
name = "master.persistence.size"
value = "50Gi"
}
}
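Optionally, `helm_release` also exposes `wait` and `timeout` arguments, which can be handy here since the apply blocks while the chart's resources come up; a small sketch:

```terraform
resource "helm_release" "redis" {
  # ... arguments as above ...

  wait    = true # block until the release's resources are ready (the default)
  timeout = 600  # seconds to wait before marking the release as failed (default 300)
}
```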
If you prefer to customize your helm release using a yaml file, we'll quickly walk through setting that up here:
- Create a folder called `resources` under the oke module.

- Copy the file `values.yaml` from the redis chart repo to `redis_values.yaml`:

  ```
  curl -o modules/oke/resources/redis_values.yaml https://raw.githubusercontent.com/helm/charts/master/stable/redis/values.yaml
  ```

- Remove the individual settings in the redis release from the terraform code and add the following instead:

  ```terraform
  values = [
    "${file("${path.module}/resources/redis_values.yaml")}"
  ]
  ```
Your release should then look like this:

```terraform
resource "helm_release" "redis" {
  depends_on = ["null_resource.is_worker_active", "local_file.kube_config_file"]
  provider   = "helm"

  name       = "my-redis-release"
  repository = "${data.helm_repository.stable.metadata.0.name}"
  chart      = "redis"
  version    = "6.4.5"

  values = [
    "${file("${path.module}/resources/redis_values.yaml")}"
  ]
}
```
Note: You can also combine the two approaches above, but in general it's not a bad idea to keep the configuration in a single location for easy updating.

Also, you can change the values in the yaml file if you want to. For example, a good working pair of settings might be:

- `cluster.slaveCount` = 3 (the default)
- `master.persistence.size` = 50Gi
{:.notice}
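If you do choose to combine the two approaches, note that `set` blocks take precedence over the same keys coming from `values`, so the yaml file can serve as the baseline with one-off overrides inline; a minimal sketch using one of the settings above:

```terraform
resource "helm_release" "redis" {
  # ... name, repository, chart, and version as above ...

  values = [
    "${file("${path.module}/resources/redis_values.yaml")}"
  ]

  # Overrides the same key from redis_values.yaml
  set {
    name  = "master.persistence.size"
    value = "50Gi"
  }
}
```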
- Run `terraform init` to download the helm provider and then apply again:

  ```
  terraform init
  terraform apply -auto-approve
  ```
- Log in to the bastion and do a helm list:

  ```
  helm list
  NAME       REVISION  UPDATED                   STATUS    CHART        APP VERSION  NAMESPACE
  oke-redis  1         Wed Apr 24 12:05:40 2019  DEPLOYED  redis-6.4.5  4.0.14       default
  ```
- Get the notes provided by the redis chart:

  ```
  helm status oke-redis
  ```
After you've run `helm status` (see previous section), the following are available to you:

- Get the Redis password (a Terraform-native way to read this secret is sketched after this list):

  ```
  export REDIS_PASSWORD=$(kubectl get secret --namespace default oke-redis -o jsonpath="{.data.redis-password}" | base64 --decode)
  ```
- Run a Redis pod:

  ```
  kubectl run --namespace default oke-redis-client --rm --tty -i --restart='Never' \
    --env REDIS_PASSWORD=$REDIS_PASSWORD \
    --image docker.io/bitnami/redis:4.0.14 -- bash
  ```
- Connect using the Redis cli:

  ```
  redis-cli -h oke-redis-master -a $REDIS_PASSWORD
  ```
- Type a redis command:

  ```
  oke-redis-master:6379> ping
  PONG
  ```
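As an aside, if you'd rather read that password without leaving Terraform, the kubernetes provider's `kubernetes_secret` data source can fetch the same secret the chart created; a sketch, assuming the kubernetes provider is pointed at the same generated kubeconfig:

```terraform
provider "kubernetes" {
  config_path = "${path.root}/generated/kubeconfig"
}

# The provider base64-decodes the data attribute for us.
data "kubernetes_secret" "redis" {
  metadata {
    name      = "oke-redis"
    namespace = "default"
  }
}

output "redis_password" {
  sensitive = true
  value     = "${lookup(data.kubernetes_secret.redis.data, "redis-password")}"
}
```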
Recall that in the yaml file, we set the number of redis slaves to 3. Let's verify that this is still the case:

```
kubectl get pods
```
Your output should look something like this:

```
NAME                               READY   STATUS    RESTARTS   AGE
oke-redis-master-0                 1/1     Running   0          42m
oke-redis-slave-79c45c57d8-67bxj   1/1     Running   1          42m
oke-redis-slave-79c45c57d8-s6znq   1/1     Running   0          42m
oke-redis-slave-79c45c57d8-wnfrh   1/1     Running   0          42m
```
From this, you can see that there are 3 pods running the redis slaves.
Let's consider a real-world example: say we need to update the helm release to change some settings, such as reducing the number of slaves from 3 to 2. We have a couple of different ways to do this.
- Change settings (2 methods):

  - helm cli - Perform the setting change manually using the helm cli:

    ```
    helm upgrade oke-redis stable/redis --set cluster.slaveCount=2
    ```

  - yaml file - Or, change the settings in `redis_values.yaml` and then run `terraform apply` again.

  In the case where we reduced the number of slaves from 3 to 2, the output of the `terraform apply` command should be something like:

  ```
  ...
  module.oke.helm_release.redis: Still modifying... (ID: oke-redis, 10s elapsed)
  module.oke.helm_release.redis: Still modifying... (ID: oke-redis, 20s elapsed)
  module.oke.helm_release.redis: Still modifying... (ID: oke-redis, 30s elapsed)
  module.oke.helm_release.redis: Still modifying... (ID: oke-redis, 40s elapsed)
  module.oke.helm_release.redis: Still modifying... (ID: oke-redis, 50s elapsed)
  module.oke.helm_release.redis: Still modifying... (ID: oke-redis, 1m1s elapsed)
  module.oke.helm_release.redis: Modifications complete after 1m9s (ID: oke-redis)

  Apply complete! Resources: 1 added, 1 changed, 1 destroyed.
  ```
- In the meantime, from another terminal, we can watch the number of pods being updated:

  ```
  kubectl get pods -w
  ```

  Your output should be something like:

  ```
  oke-redis-master-0                 0/1   Terminating   0   61s
  oke-redis-slave-6bd9dc8d89-jdrs2   0/1   Running       0   3s
  oke-redis-slave-6bd9dc8d89-kvc8r   0/1   Running       0   3s
  oke-redis-slave-6fdd8c4b56-44qpb   0/1   Terminating   0   63s
  ```
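For reference, if you kept the individual `set` blocks rather than the yaml file, the Terraform route is the same: edit the value and re-apply. A sketch of the single change involved:

```terraform
set {
  name  = "cluster.slaveCount"
  value = "2" # reduced from "3"
}
```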
In future articles, we'll look at other ways to extend the terraform-oci-oke module to deploy software on OKE.
Check out these sites to explore more information about development with Oracle products:
- Oracle Developers Portal
- Oracle Cloud Infrastructure

{% endslides %}