
Final ITI project intake 42

This project provisions an EC2 instance on AWS and configures it to run Minikube as a single-node Kubernetes cluster, deploys a Node.js app, and installs NGINX as a reverse proxy. Follow the Steps section to run this project.

🛠️ Languages and Tools:

nodejs  Jenkins  Terraform  Ansible  Bash  docker  kubernetes  mysql  git  github  linux 

Prerequisites

  • Create an AWS IAM user with programmatic access, then run the following command to supply its credentials so Terraform can create AWS resources:
   aws configure
  • Create an S3 bucket with versioning enabled to store the Terraform state file. Add the bucket name to the main.tf file in the Minikube-Infra directory.
  • Create a DynamoDB table named iti-final-task with a partition key named LockID of type String (used for state locking).
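The bucket and table above plug into Terraform's S3 backend. A minimal sketch of the matching backend block follows — the bucket name, key, and region are placeholders (substitute your own); only the iti-final-task table name comes from this README:

```shell
# Sketch: write the S3 backend block that Minikube-Infra/main.tf expects.
# "my-tf-state-bucket", the key, and the region are placeholders.
cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket         = "my-tf-state-bucket"            # your versioned S3 bucket
    key            = "minikube-infra/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "iti-final-task"                # partition key LockID (String)
  }
}
EOF
grep -E 'bucket|dynamodb_table' backend.tf
```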

Steps to Deploy the application

  • Clone the repo
   git clone [email protected]:sambo2021/ITI-Final-Task.git
  • Build the infrastructure, run the provisioner on the EC2 instance to start Minikube and NGINX as a reverse proxy, and deploy the Jenkins and Nexus resources on the Kubernetes cluster
  ./Build.sh
  • Open Jenkins at the EC2 IP and set a username and password, then install the Kubernetes plugin and configure a Kubernetes cloud node
  • Add the kubeconfig file that the script downloaded locally as a secret-file credential in Jenkins, with the id mykubeconfig
  • Don't forget to restart Jenkins
  • Open Nexus at ec2-ip/nexus, set the same username and password as in secret.tf in ./Kubernetes-Resources, and create a Docker hosted repository on HTTP port 8082
  • Back in Jenkins, create a pipeline job pointing at this repo and its Jenkinsfile
  • You can then access your application at ec2-ip/app
  • To destroy the whole infrastructure, run ./Destroy.sh
  • To see how the Docker image is built inside the Kaniko container, see link 2 in the Links section
  • For the prerequisites we set up in Jenkins (the kubeconfig secret file and the Kubernetes cloud node), see link 3
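Build.sh drives git, Terraform, Ansible, and the AWS CLI. A quick preflight check like the following (a local convenience, not part of the repo) can catch a missing tool before the build starts:

```shell
# Preflight: report whether each CLI that Build.sh depends on is installed.
for tool in git terraform ansible-playbook aws kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done | tee preflight.txt
```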

What's behind Build.sh

  • Creating an empty key file and two empty inventory files:

      touch ./Minikube-Infra/TF_key.pem
      touch ./Ansible-Credentials/inventory
      touch ./Get-Passwords/inventory
  • Building the Minikube cluster remotely on the EC2 instance; a local-exec provisioner writes the EC2 IP into the inventories ../Ansible-Credentials/inventory and ../Get-Passwords/inventory and into the Kubernetes provider in ../Kubernetes-Resources/main.tf:

     cd ./Minikube-Infra
     terraform init
     terraform apply -auto-approve
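The effect of that local-exec step can be pictured with the hypothetical snippet below — the IP is a placeholder (Terraform injects the instance's real public_ip), the directories stand in for the repo's relative paths, and the one-IP-per-line inventory format is an assumption:

```shell
# Hypothetical equivalent of the local-exec provisioner: write the fresh
# EC2 public IP into both Ansible inventories. Scratch dirs stand in for
# ../Ansible-Credentials and ../Get-Passwords.
EC2_IP="203.0.113.10"            # placeholder; Terraform uses aws_instance.<name>.public_ip
mkdir -p Ansible-Credentials Get-Passwords
echo "$EC2_IP" > Ansible-Credentials/inventory
echo "$EC2_IP" > Get-Passwords/inventory
cat Ansible-Credentials/inventory
```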
  • Running the Ansible playbook that fetches all cluster certificates and the kubeconfig into the local ../Kubernetes-Resources directory:

     cd ../Ansible-Credentials
     ansible-playbook  playbook.yaml --private-key ../Minikube-Infra/TF_key.pem -u ubuntu --ssh-common-args='-o StrictHostKeyChecking=no' --verbose
  • After fetching the kubeconfig, replacing its certificate file paths with the base64-encoded contents of the downloaded certificates:

     cd ../Kubernetes-Resources
     var1=$(cat ca.crt | base64 -w 0 ; echo )
     sed -i -e "/certificate-authority:/ s/certificate-authority:[^/\n]*/certificate-authority-data: ${var1}/g"  ./config
     var2=$(cat client.crt | base64 -w 0 ; echo )
     sed -i -e "/client-certificate:/ s/client-certificate:[^/\n]*/client-certificate-data: ${var2}/g"  ./config
     var3=$(cat client.key | base64 -w 0 ; echo )
     sed -i -e "/client-key:/ s/client-key:[^/\n]*/client-key-data: ${var3}/g"  ./config  
     sed -i "s|/root/.minikube/ca.crt||g" ./config
     sed -i "s|/root/.minikube/profiles/minikube/client.crt||g" ./config
     sed -i "s|/root/.minikube/profiles/minikube/client.key||g" ./config
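The substitutions above can be rehearsed locally on a throwaway kubeconfig. This sketch uses placeholder certificate contents and a simplified sed pattern (it replaces the whole field rather than using the script's exact regex):

```shell
# Self-contained rehearsal of the certificate inlining, with fake cert files
# and a three-line sample kubeconfig.
printf 'demo-ca'   > ca.crt
printf 'demo-cert' > client.crt
printf 'demo-key'  > client.key
cat > config <<'EOF'
certificate-authority: /root/.minikube/ca.crt
client-certificate: /root/.minikube/profiles/minikube/client.crt
client-key: /root/.minikube/profiles/minikube/client.key
EOF
var1=$(base64 -w 0 < ca.crt)
sed -i "s|certificate-authority:.*|certificate-authority-data: ${var1}|" ./config
var2=$(base64 -w 0 < client.crt)
sed -i "s|client-certificate:.*|client-certificate-data: ${var2}|" ./config
var3=$(base64 -w 0 < client.key)
sed -i "s|client-key:.*|client-key-data: ${var3}|" ./config
cat config   # every path field is now an inline -data field
```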
  • At this point the Kubernetes provider has the EC2 IP that the API server endpoint listens on, has all its certificates, and has a kubeconfig that the Jenkins secret-file credential will later use to deploy to Minikube from the pipeline.

  • Applying all Kubernetes resources to deploy Jenkins and Nexus:

     terraform init
     terraform apply -auto-approve
  • Fetching the Jenkins and Nexus passwords, and writing the new Nexus ClusterIP service address into ../CI-CD/app.yaml and ../Kubernetes-Resources/secret.tf:

     cd ../Get-Passwords
     ansible-playbook -i inventory playbook.yaml --private-key ../Minikube-Infra/TF_key.pem -u ubuntu --ssh-common-args='-oStrictHostKeyChecking=no' 
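The ClusterIP substitution can be sketched as below. This is hypothetical: the manifest line format, image name, and IPs are illustrative stand-ins, not the repo's exact contents — only the 8082 registry port comes from the steps above:

```shell
# Hypothetical sketch: splice a freshly discovered Nexus ClusterIP into a
# deployment manifest. The image reference format is illustrative only.
NEXUS_IP="10.96.0.42"   # placeholder; the playbook reads the real ClusterIP
cat > app.yaml <<'EOF'
        image: 10.96.0.17:8082/node-app:latest
EOF
sed -i -E "s|image: [0-9.]+:8082/|image: ${NEXUS_IP}:8082/|" app.yaml
cat app.yaml
```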
  • Applying the Kubernetes resources again to pick up the new service IP, since the old one is left over from the previous build:

     cd ../Kubernetes-Resources
     terraform apply -auto-approve
  • Pushing the new changes to GitHub:

    cd ..
    git add . 
    git commit -m "update nexus service ip to new one"
    git push -u origin master
  • Printing the Jenkins and Nexus passwords to the terminal:

     echo "Infrastructure has been built Successfully "
     echo "-------------------"
     echo "Jenkins-Password : "
     awk ' {print $1}' ./Get-Passwords/file.txt
     echo "-------------------"
     echo "nexus-Password : "
     awk ' {print $2}' ./Get-Passwords/file.txt 
     echo "--------------------"

To access the cluster from your machine, do the following:

  • Replace each certificate path with its actual base64 data and append -data to the field name; for example, certificate-authority: becomes certificate-authority-data:. Populate each field with the output of the corresponding command: cat ca.crt | base64 -w 0, cat client.crt | base64 -w 0, and cat client.key | base64 -w 0
  • Change the value of server field to "https://public_IP_of_ec2:49154"
  • Replace your local kubeconfig file with this file
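The server change can be scripted the same way the build patches the kubeconfig. The IPs below are placeholders (substitute your EC2 public IP); the sample server line mimics Minikube's internal endpoint:

```shell
# Point the kubeconfig's server field at the EC2 public IP.
EC2_IP="203.0.113.10"                       # placeholder for your EC2 public IP
cat > config <<'EOF'
    server: https://192.168.49.2:8443
EOF
sed -i "s|server: .*|server: https://${EC2_IP}:49154|" ./config
cat config
```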

Links: