Deploying a WordPress image on a Kubernetes cluster backed by a MySQL database and AWS EFS for persistent volumes
These instructions will get you a copy of the project up and running on your AWS account and local machine for development purposes.
The following software and tools need to be set up before proceeding with the project setup:
- The project runs well on Linux/macOS machines. If using Windows, you will need to create a virtual machine. See the setup links below:
- Kubernetes needs to be installed on your machine. Installation can be done in the following ways:
- Local-Setup: Minikube Setup
- Docker-Client: Windows Installation
- Docker-Client: MacOS Installation
- AWS KOPS Installation (Used in this project)
- A domain name can be set up for running the cluster behind a single address. For a cheaper option, use Minikube
- Namecheap was used to purchase the domain name
- AWS-Route53 : DNS for routing the requests to AWS resources
- Git installation for cloning the project.
- Debian-based operating systems: sudo apt-get install git
- macOS has a built-in Git installation.
After checking/setting up the prerequisites, we set up the project by following the steps below in the same order:
- A Kubernetes cluster needs to be created. For this, we need kubectl to be set up first:
macOS
brew install kubernetes-cli
Ubuntu/Debian
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
- Install the AWS CLI (command line tools) for interacting with the AWS cluster:
pip install awscli --upgrade --user
- Log in to the AWS account and create an S3 bucket from the console, or simply use the AWS CLI (e.g. aws s3 mb s3://<your-bucket-name>). Note down the name of the bucket.
- Open a command terminal on your local host OS, generate SSH keys, and move the KOPS binary to the bin folder:
ssh-keygen -f .ssh/id_rsa
cat .ssh/id_rsa.pub
sudo mv /usr/local/bin/kops-xx-xxxx /usr/local/bin/kops
- Create the Kubernetes cluster:
kops create cluster --name=kubernetes.<your cluster name> --state=s3://<your-bucket-name> --zones=<zone for awscli> --node-count=2 --node-size=t2.micro --master-size=t2.micro --dns-zone=<name configured in namecheap/route53>
kops update cluster kubernetes.<your cluster name> --state=s3://<your-bucket-name> --yes
- Check the state of the cluster using:
kubectl get nodes
The EFS volume is set up using the AWS CLI. Its name will be added to the wordpress-web file, which is explained in the next section. The steps required are:
- Create an EFS volume using the AWS CLI tools. Issue the following command and copy the FileSystemId value from the output:
aws efs create-file-system --creation-token 1
- Run the following command to get the subnet-id of the Kubernetes cluster which was launched before:
aws ec2 describe-instances
- Get the subnet ID and security groups of either the master or a worker node from the output of that command. Now issue the following:
aws efs create-mount-target --file-system-id <id_from step_1> --subnet-id <id_from_step_2> --security-groups <id_from_step_2>
We use a set of YAML files to set up our WordPress application, which has MySQL as the backend database and AWS Elastic File System (EFS) as a volume mount for persistent image storage (for things we upload to our WordPress application). The files are:
This file is used to create auto-provisioned volumes in the given region with the chosen volume type (gp2 in our case). Its key fields are:
kind: StorageClass
provisioner: kubernetes.io/aws-ebs
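Filled out, a minimal storage.yml might look like the sketch below; the class name standard is an assumption, and the persistent volume claim must reference whatever name is actually used:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard                  # assumed class name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2                       # gp2 EBS volumes, as described above
```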
This file claims the 8 GB of storage specified in its spec. Its key fields are:
kind: PersistentVolumeClaim
spec:
resources:
requests:
storage: 8Gi
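A complete pv-claim.yml might look like the following sketch; the claim name db-storage comes from the database file described below, while the accessModes and storageClassName values are assumptions:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: db-storage                # referenced as claimName by wordpress-db.yml
spec:
  accessModes:
    - ReadWriteOnce               # assumed: a single node mounts the volume
  storageClassName: standard      # assumed to match the StorageClass name
  resources:
    requests:
      storage: 8Gi                # the 8 GB claim described above
```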
- This file is used to deploy a MySQL image in a pod, with a replication controller configured to maintain exactly 1 replica.
- Moreover, a selector configuration is used to map it to our WordPress image.
- The password for logging in to the WordPress image comes from the secrets file and is stored in the MySQL database.
- Finally, the 8 GB persistent volume claim is mapped in via persistentVolumeClaim.
kind: ReplicationController
selector:
app: wordpress-db
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: wordpress-secrets
key: db-password
volumes:
- name: mysql-storage
persistentVolumeClaim:
claimName: db-storage
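Putting those pieces together, a wordpress-db.yml sketch might look like this; the MySQL image tag, container port, and data mount path are assumptions:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: wordpress-db
spec:
  replicas: 1                     # a single replica, as described above
  selector:
    app: wordpress-db
  template:
    metadata:
      labels:
        app: wordpress-db
    spec:
      containers:
        - name: mysql
          image: mysql:5.7        # assumed image tag
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: wordpress-secrets
                  key: db-password
          volumeMounts:
            - name: mysql-storage
              mountPath: /var/lib/mysql   # assumed MySQL data directory
      volumes:
        - name: mysql-storage
          persistentVolumeClaim:
            claimName: db-storage
```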
- Service definition file for database service discovery for wordpress-mysql.
- It maps the service onto the pods from the wordpress-db yml file and exposes the WordPress database for DNS-based service discovery.
kind: Service
spec:
selector:
app: wordpress-db
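A full wordpress-db-service.yml sketch, assuming the service is named wordpress-mysql (the web deployment must use the same name as its database host) and MySQL's default port 3306:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql           # assumed name; used as the DB host by the web pods
spec:
  ports:
    - port: 3306                  # MySQL default port
  selector:
    app: wordpress-db
```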
- This file contains the actual WordPress image to be set up on the pods.
- It refers to the secrets file for password matching.
- Additionally, it launches the containers as a Deployment, which enables rolling updates to the cluster.
- Finally, we also create an EFS persistent volume to hold the static images in our blog posts, mounted on the WordPress image at /var/www/html/wp-content/uploads
kind: Deployment
env:
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: wordpress-secrets
key: db-password
volumes:
- name: uploads
nfs:
server: us-west-1b.<efs_vol_name>.efs.us-west-1.amazonaws.com
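A wordpress-web.yml sketch combining the pieces above; the replica count, image tag, and the WORDPRESS_DB_HOST value (which must match the database service name) are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-deployment
spec:
  replicas: 2                     # assumed replica count
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress        # assumed image tag
          ports:
            - containerPort: 80
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql      # assumed database service name
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: wordpress-secrets
                  key: db-password
          volumeMounts:
            - name: uploads
              mountPath: /var/www/html/wp-content/uploads
      volumes:
        - name: uploads
          nfs:
            server: us-west-1b.<efs_vol_name>.efs.us-west-1.amazonaws.com
            path: /               # mount the EFS root
```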
- Service definition file for WordPress application service discovery for wordpress-web.
- It maps the service onto the pods from the wordpress-web yml file and exposes the WordPress application for DNS-based service discovery.
- Additionally, a classic load balancer configuration is also provided for effective load balancing using AWS ELB (Elastic Load Balancer).
kind: Service
selector:
app: wordpress
type: LoadBalancer
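A complete wordpress-web-service.yml sketch, assuming the service name wordpress and port 80:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress                 # assumed service name
spec:
  type: LoadBalancer              # provisions an AWS classic ELB
  ports:
    - port: 80
  selector:
    app: wordpress
```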
- This file is used for passing secrets (passwords and other application information) to other configuration files.
kind: Secret
data:
db-password: cGFzc3dvcmQ=
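The db-password value is simply the base64 encoding of the plaintext password (cGFzc3dvcmQ= decodes to password); it can be generated with:

```shell
# Base64-encode the secret value; -n keeps a trailing newline out of the encoding
echo -n password | base64
# → cGFzc3dvcmQ=
```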
After completing the above steps, run the following commands to deploy the pods in the Kubernetes cluster (the secrets are created before the database and web pods, since those pods reference them):
kubectl create -f storage.yml
kubectl create -f pv-claim.yml
kubectl create -f wordpress-secrets.yml
kubectl create -f wordpress-db.yml
kubectl create -f wordpress-db-service.yml
kubectl create -f wordpress-web.yml
kubectl create -f wordpress-web-service.yml
Stage 6: WordPress Access via URL (http://wordpress.kubernetes.kubetest231.site)
- Testing Fault Tolerance
First, we test fault tolerance and high availability by issuing pod delete commands. In this case, all running containers are terminated; however, new containers take their place. To get the information on all pods, we issue:
kubectl get pods
After getting the information, we issue delete commands on all Pods to check fault tolerance.
kubectl delete pods/wordpress-db-<unique-id>
kubectl delete pods/wordpress-deployment-<unique-id>
kubectl delete pods/wordpress-deployment-<unique-id>
When we issue the above commands, the old containers are terminated and new ones automatically pop up, with high availability and fault tolerance handled by the Kubernetes cluster itself.
Additionally, when we check the logs on the pods, we see messages showing that WordPress is still present on them. The command issued is:
kubectl logs wordpress-deployment-<unique-id>
- Testing Persistent Volume - AWS EFS. Now we log in to one of the deployment pods to check whether our static image is still present (after the destroyed pods were automatically recreated). We issue the following commands to log in to the pod and run bash commands:
kubectl exec -it wordpress-deployment-<unique-id> -- /bin/bash
ls wp-content/uploads/2018/08
For this project, a music-mixer image was added to the blog post. It still persists, even though the pods were destroyed!
- Kubernetes - Managing containers in a cluster for high availability
- WordPress - Application deployed on pods
- KOPS - Provisioning the Kubernetes cluster on Amazon Web Services
- kubectl - Issuing commands to the Kubernetes cluster
- Amazon Web Services (AWS) - Cloud platform for deploying the Kubernetes cluster
- AWS-EC2 - Servers used for master/worker nodes
- AWS-Route53 - DNS service for AWS
- AWS-ElasticFileSystem (EFS) - Service used for deploying persistent volumes
- Namecheap - Domain registrar for the cluster's domain