In this lab you will provision a PKI infrastructure using CloudFlare's PKI toolkit, cfssl, then use it to bootstrap a Certificate Authority and generate TLS certificates for the following components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.
In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.
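Before you begin, confirm that the cfssl and cfssljson binaries are installed and on your PATH (this assumes you have already completed the client tools installation):
$ cfssl version
$ cfssljson --version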
Generate the CA configuration file, certificate, and private key:
$ cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF
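The 8760h expiry corresponds to one year (365 days × 24 hours). As an aside, if you would rather start from a template than type the JSON by hand, cfssl can print default skeletons for both the config and CSR files, which you can then edit:
$ cfssl print-defaults config
$ cfssl print-defaults csr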
$ cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Oregon"
    }
  ]
}
EOF
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Results:
ca-key.pem
ca.pem
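Optionally, inspect the CA certificate to confirm that the subject and validity period match the configuration above (assuming OpenSSL is installed):
$ openssl x509 -noout -subject -dates -in ca.pem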
In this section you will generate client and server certificates for each Kubernetes component and a client certificate for the Kubernetes admin user.
Generate the admin client certificate and private key:
$ cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:masters",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF
$ cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
Results:
admin-key.pem
admin.pem
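As a quick sanity check, verify that the admin certificate chains back to the CA:
$ openssl verify -CAfile ca.pem admin.pem
admin.pem: OK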
Kubernetes uses a special-purpose authorization mode called Node Authorizer that specifically authorizes API requests made by Kubelets. In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the system:nodes group, with a username of system:node:<nodeName>. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.
Generate a certificate and private key for each Kubernetes worker node:
$ for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{
  "CN": "system:node:${instance}",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:nodes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF
EXTERNAL_IP=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=${instance}" "Name=instance-state-name,Values=running" \
--query 'Reservations[0].Instances[0].PublicIpAddress' --output text)
INTERNAL_IP=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=${instance}" "Name=instance-state-name,Values=running" \
--query 'Reservations[0].Instances[0].PrivateIpAddress' --output text)
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
-profile=kubernetes \
${instance}-csr.json | cfssljson -bare ${instance}
done
Results:
worker-0-key.pem
worker-0.pem
worker-1-key.pem
worker-1.pem
worker-2-key.pem
worker-2.pem
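Each worker certificate must carry the identity the Node Authorizer expects. You can spot-check one of them: the subject should contain O = system:nodes and CN = system:node:worker-0, and the subject alternative names should include the instance name and both of its IP addresses (the -ext flag requires OpenSSL 1.1.1 or newer):
$ openssl x509 -noout -subject -ext subjectAltName -in worker-0.pem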
Generate the kube-controller-manager client certificate and private key:
$ cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF
$ cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
Results:
kube-controller-manager-key.pem
kube-controller-manager.pem
Generate the kube-proxy client certificate and private key:
$ cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:node-proxier",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF
$ cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
Results:
kube-proxy-key.pem
kube-proxy.pem
Generate the kube-scheduler client certificate and private key:
$ cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF
$ cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
Results:
kube-scheduler-key.pem
kube-scheduler.pem
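All of the client certificates now exist, so a short loop can confirm that each one was signed by the CA generated earlier:
$ for cert in admin kube-controller-manager kube-proxy kube-scheduler worker-0 worker-1 worker-2; do
openssl verify -CAfile ca.pem ${cert}.pem
done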
In the previous section we created the Kubernetes public IP address using an Elastic IP address (EIP). The EIP will be included in the list of subject alternative names for the Kubernetes API Server certificate. This ensures the certificate can be validated by remote clients.
You can retrieve the EIP named eip-kubernetes-the-hard-way created in the previous section by executing the following command:
$ aws ec2 describe-addresses \
--filters "Name=tag:Name,Values=eip-kubernetes-the-hard-way" \
--query 'Addresses[0].PublicIp' --output text
xxx.xxx.xxx.xx
$ KUBERNETES_PUBLIC_ADDRESS=$(aws ec2 describe-addresses \
--filters "Name=tag:Name,Values=eip-kubernetes-the-hard-way" \
--query 'Addresses[0].PublicIp' --output text)
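Note that with --output text the aws CLI prints None rather than failing when the filter matches nothing, so it is worth confirming the variable actually holds an IP address:
$ echo ${KUBERNETES_PUBLIC_ADDRESS}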
Then, using this environment variable, generate the Kubernetes API Server certificate and private key:
$ KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local
$ cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF
$ cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,${KUBERNETES_HOSTNAMES} \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
The Kubernetes API server is automatically assigned the kubernetes internal DNS name, which will be linked to the first IP address (10.32.0.1) from the address range (10.32.0.0/24) reserved for internal cluster services during the control plane bootstrapping lab.
Results:
kubernetes-key.pem
kubernetes.pem
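Before moving on, confirm that the subject alternative names on the API server certificate include the EIP, the internal service IP 10.32.0.1, and the kubernetes.* DNS names; a missing entry here surfaces later as TLS validation failures (as before, -ext requires OpenSSL 1.1.1 or newer):
$ openssl x509 -noout -ext subjectAltName -in kubernetes.pem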
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the managing service accounts documentation.
Generate the service-account certificate and private key:
$ cat > service-account-csr.json <<EOF
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF
$ cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account
Results:
service-account-key.pem
service-account.pem
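For orientation only, since the exact flags are configured in the control plane bootstrapping lab: kube-apiserver typically receives the public key to verify service account tokens (--service-account-key-file=service-account.pem), while kube-controller-manager receives the private key to sign them (--service-account-private-key-file=service-account-key.pem).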
Copy the appropriate certificates and private keys to each worker EC2 instance:
$ aws ec2 describe-instances --filters Name=vpc-id,Values=vpc-xxxxxxxxxxxxxxxxx \
--query 'Reservations[].Instances[].[Tags[?Key==`Name`].Value | [0],InstanceId,Placement.AvailabilityZone,PrivateIpAddress,PublicIpAddress,State.Name]' \
--output text | sort | grep worker
worker-0 i-aaaaaaaaaaaaaaaaa ap-northeast-1c 10.240.0.20 aa.aaa.aaa.aaa running
worker-1 i-bbbbbbbbbbbbbbbbb ap-northeast-1c 10.240.0.21 b.bbb.b.bbb running
worker-2 i-ccccccccccccccccc ap-northeast-1c 10.240.0.22 cc.ccc.cc.ccc running
$ scp -i ~/.ssh/your_ssh_key worker-0-key.pem worker-0.pem ca.pem ubuntu@aa.aaa.aaa.aaa:~/
$ scp -i ~/.ssh/your_ssh_key worker-1-key.pem worker-1.pem ca.pem ubuntu@b.bbb.b.bbb:~/
$ scp -i ~/.ssh/your_ssh_key worker-2-key.pem worker-2.pem ca.pem ubuntu@cc.ccc.cc.ccc:~/
Copy the appropriate certificates and private keys to each master EC2 instance:
$ aws ec2 describe-instances --filters Name=vpc-id,Values=vpc-xxxxxxxxxxxxxxxxx \
--query 'Reservations[].Instances[].[Tags[?Key==`Name`].Value | [0],InstanceId,Placement.AvailabilityZone,PrivateIpAddress,PublicIpAddress,State.Name]' \
--output text | sort | grep master
master-0 i-xxxxxxxxxxxxxxxxx ap-northeast-1c 10.240.0.10 xx.xxx.xxx.xxx running
master-1 i-yyyyyyyyyyyyyyyyy ap-northeast-1c 10.240.0.11 yy.yyy.yyy.yy running
master-2 i-zzzzzzzzzzzzzzzzz ap-northeast-1c 10.240.0.12 zz.zzz.z.zzz running
$ for masternode in xx.xxx.xxx.xxx yy.yyy.yyy.yy zz.zzz.z.zzz; do
scp -i ~/.ssh/your_ssh_key \
ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem \
ubuntu@${masternode}:~/
done
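After copying, you can confirm the files landed, for example on master-0 (substituting your SSH key and the instance's actual public IP for the placeholder):
$ ssh -i ~/.ssh/your_ssh_key ubuntu@xx.xxx.xxx.xxx 'ls -l ~/*.pem'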
The kube-proxy, kube-controller-manager, kube-scheduler, and kubelet client certificates will be used to generate client authentication configuration files in the next lab.
Next: Generating Kubernetes Configuration Files for Authentication