- I used Terraform to deploy the EKS cluster; kindly find my GitHub repo.
- Install the kubectl CLI
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
kubectl version --short --client
Client Version: v1.21.2-13+d2965f0db10712
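Before kubectl can reach the cluster, your kubeconfig must point at the EKS cluster that Terraform created. A minimal sketch, assuming the AWS CLI is already configured and using placeholder region and cluster names:
aws eks update-kubeconfig --region <your-region> --name <your-cluster-name>
kubectl get nodes
The second command should list the worker nodes if the connection works.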
- Create a nginx-pod.yaml Pod manifest on the machine from which you run kubectl:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
      protocol: TCP
- Apply the manifest with kubectl:
kubectl apply -f nginx-pod.yaml
- Get the Pods running in the cluster:
kubectl get pods
- To check the additional fields Kubernetes populated after the resource was deployed, run one of the commands below:
kubectl get pod nginx-pod -o yaml
or
kubectl describe pod nginx-pod
Now you have a running Pod. What’s next?
- We need another Kubernetes object, called a Service, to accept our requests and pass them on to the Pod so that we can access it through the browser.
kubectl get pod nginx-pod -o wide
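If you only need the Pod's IP address, a jsonpath query pulls it straight out of the standard status.podIP field; this is just an optional shortcut:
kubectl get pod nginx-pod -o jsonpath='{.status.podIP}'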
Let us try to access the Pod through its IP address from within the K8s cluster. To do this:
- We need an image that already has curl installed. You can use dareyregistry/curl.
- Run kubectl to start the container and connect to it interactively:
kubectl run curl --image=dareyregistry/curl -i --tty
- Run curl and point to the IP address of the Nginx Pod (Use the IP address of your own Pod)
curl -v 10.0.0.163:80
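When you are done, type exit to leave the session. The curl Pod object remains in the cluster afterwards, so you can clean it up once you no longer need it:
kubectl delete pod curl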
- Create a Service YAML manifest file named nginx-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx-pod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
- Create a nginx-service resource by applying your manifest
kubectl apply -f nginx-service.yaml
- Check the created Service
kubectl get service
- Since there is no public IP address to access the app with, we can leverage kubectl's port-forward functionality.
kubectl port-forward svc/nginx-service 8089:80
Unfortunately, this will not work quite yet, because there is no way for the Service to select the actual Pod it is meant to route traffic to. If there are hundreds of Pods running, there must be a way to ensure that the Service only forwards requests to the specific Pod it is intended for.
- To make this work, you must reconfigure the Pod manifest and introduce labels that match the selector in the spec section of the Service manifest.
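A quick way to confirm whether the Service has found matching Pods is to inspect its endpoints; an empty ENDPOINTS column means the selector matches nothing:
kubectl get endpoints nginx-service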
- Update the Pod manifest with the below and apply the manifest:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx-pod
spec:
  containers:
  - image: nginx:latest
    name: nginx-pod
    ports:
    - containerPort: 80
      protocol: TCP
Apply the manifest with kubectl apply -f nginx-pod.yaml
- Run kubectl port-forward command again
kubectl port-forward svc/nginx-service 8089:80
- Then go to your web browser and enter localhost:8089 – You should now be able to see the nginx page in the browser.
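You can also sanity-check the forwarded port from another terminal session while the port-forward is running, for example with curl:
curl -I localhost:8089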
A NodePort Service type exposes the Service on a static port on the node's IP address. NodePorts are in the 30000-32767 range by default, which means a NodePort is unlikely to match a Service's intended port (for example, port 80 may be exposed as 30080).
Update the nginx-service yaml to use a NodePort Service.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx-pod
  ports:
  - protocol: TCP
    port: 80
    nodePort: 30080
- To access the service, you must:
- Allow inbound traffic in your EC2 Security Group to the NodePort range 30000-32767.
- Get the public IP address of the node the Pod is running on, append the NodePort, and access the app through the browser (see the sketch below).
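One way to find a node's public IP is kubectl's wide node listing, which includes an EXTERNAL-IP column (a sketch; the NodePort 30080 comes from the manifest above):
kubectl get nodes -o wide
Then browse to http://<node-external-ip>:30080.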
- To experience this Service type, update your Service manifest to use the LoadBalancer type. Also, ensure that the selector references the Pods in the ReplicaSet.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    tier: frontend
  ports:
  - protocol: TCP
    port: 80 # This is the port the Loadbalancer is listening at
    targetPort: 80 # This is the port the container is listening at
- Apply the configuration
kubectl apply -f nginx-service.yaml
- Get the newly created service
kubectl get service nginx-service
- An ELB resource will be created in your AWS console.
- Get the output of the entire YAML for the Service. You will see some additional information about this Service that you did not define in the YAML manifest; Kubernetes added it for you.
kubectl get service nginx-service -o yaml
- Copy and paste the load balancer's address into the browser, and you will access the Nginx service.
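If you only want the load balancer's DNS name rather than the full YAML, a jsonpath query against the Service status works too; on AWS the address appears under status.loadBalancer.ingress as a hostname:
kubectl get service nginx-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'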
- Let us create a rs.yaml manifest for a ReplicaSet object
# Part 1
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
# Part 2
  template:
    metadata:
      name: nginx-pod
      labels:
        app: nginx-pod
    spec:
      containers:
      - image: nginx:latest
        name: nginx-pod
        ports:
        - containerPort: 80
          protocol: TCP
kubectl apply -f rs.yaml
kubectl get pods
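To see the ReplicaSet's self-healing in action, you can delete one of the Pods it created and list the Pods again; the controller will create a replacement to keep the desired count of 3 (the Pod name below is a placeholder, use one from your own kubectl get pods output):
kubectl delete pod <one-of-the-nginx-rs-pods>
kubectl get pods
kubectl get rs nginx-rs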
Scaling the ReplicaSet up and down:
Imperative:
- We can now easily scale our ReplicaSet up by specifying the desired number of replicas in an imperative command, like this:
kubectl scale rs nginx-rs --replicas=5
Declarative:
- The declarative way would be to open the rs.yaml manifest, change the desired number of replicas in the replicas field, and re-apply the manifest (see the sketch below).
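For example, a sketch of the declarative change, assuming we want 5 replicas, edit the field in rs.yaml:
  replicas: 5
and re-apply the manifest:
kubectl apply -f rs.yaml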
Do not Use Replication Controllers – Use Deployment Controllers Instead
Officially, it is highly recommended to use Deployments to manage ReplicaSets rather than using ReplicaSets directly.
Let us see Deployment in action.
- Delete the ReplicaSet
kubectl delete rs nginx-rs
- Create a deployment.yaml manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    tier: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
kubectl apply -f deployment.yaml
Run commands to get the following
- Get the Deployment
kubectl get deployment nginx-deployment
- Get the ReplicaSet
kubectl get rs <name of ReplicaSet>
- Get the Pods
kubectl get pods
- Scale the replicas in the Deployment to 15 Pods
kubectl scale deployment nginx-deployment --replicas=15
- Exec into one of the Pods' containers to run Linux commands
kubectl exec -it <name of pod> -- bash
- List the files and folders in the Nginx directory
ls -ltr /etc/nginx/
- Check the content of the default Nginx configuration file
cat /etc/nginx/conf.d/default.conf
If you were to update the content of the index.html file inside the container and the Pod dies, that content will be lost, since a new Pod will replace the dead one.
Let us try that:
- Scale the Pods down to 1 replica.
- Exec into the running container
- Install vim so that you can edit the file
- Update the content of the file at /usr/share/nginx/html/index.html with some custom HTML (see the sketch after this list).
- Check the browser
- Now, delete the only running Pod
- Refresh the web page. You will see that the content you saved in the container is no longer there, because Pods do not persist data when they are recreated; that is why they are called ephemeral or stateless.
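For reference, a minimal sketch of the exec-and-edit steps in the list above; the nginx image is Debian-based, so apt-get is available, and the HTML you write into index.html is just placeholder content of your choice:
kubectl exec -it <name of pod> -- bash
apt-get update && apt-get install -y vim
vim /usr/share/nginx/html/index.html   # replace the body with any custom HTML
exit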