This repository provides a set of Ansible playbooks demonstrating the automated installation of a minimal development/testing OpenShift cluster on the z/VM platform.
Please see the official OpenShift GitHub repository for a more thorough description of the hardware and networking infrastructure prerequisites.
- 3+ z/VM guests with master node specs
  - 4+ vCPUs
  - 16GB+ RAM
- 2+ z/VM guests with worker node specs
  - 2+ vCPUs
  - 16GB+ RAM
- 1 z/VM guest with bootstrap node specs
  - 2+ vCPUs
  - 16GB+ RAM
This host is not considered part of the cluster, but provides the network infrastructure services in the example environment. A single host provides all of these services purely to keep the example simple: the environment is intended to demonstrate how the user-provided infrastructure components fit together, and it is not a suitable production configuration.
In the automation, this host is referred to as the bastion workstation.
- 1 z/VM guest with RHEL 8 installed
  - 2+ vCPUs
  - 8GB+ RAM
The example Ansible playbooks in this repository are broken down by role. The network infrastructure services playbooks are located in `playbooks/examples/`. These playbooks are individually optional, and each can be replaced with a preferred equivalent network service.
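For orientation, the pieces referenced throughout this README map to the repository roughly as follows (comments are descriptive only; the repository contains additional roles and supporting files not listed here):

```
group_vars/all.yml                                # central configuration file
playbooks/create-cluster.yml                      # deploys the cluster
playbooks/examples/configure-dns.yml              # example DNS service
playbooks/examples/configure-haproxy.yml          # example load balancer
playbooks/examples/configure-apache-ignition.yml  # example HTTP server for ignition configs
```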
After the network infrastructure is set up, whether through the `playbooks/examples` playbooks or via user-chosen alternatives, the cluster can be deployed by running the `playbooks/create-cluster.yml` playbook and then performing a manual IPL of Red Hat CoreOS on each OpenShift node.
The central configuration file for the automation is located at `group_vars/all.yml`. This file defines the IP and MAC addresses of the OpenShift nodes and the bastion services host, as well as the download URLs for the OpenShift installer and CoreOS images. Most of the variables will need to be modified by the user to match their existing network setup. The variables are documented directly in the config file, so this README does not discuss all of them. A few key things to note:
The `dns_nameserver` value should be the IP address of a lab or public DNS server, which will be set as the forward server for the example DNS service.
The `bastion_public_ip_address` and `bastion_private_ip_address` fields are normally the same value. They will only differ if the bastion is set up to act as the network gateway for the other nodes.
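As a rough sketch, the relevant portion of `group_vars/all.yml` might look like the following. The addresses are placeholders, and the file contains many more variables than shown here; the comments in the file itself are the authoritative documentation.

```yaml
dns_nameserver: 203.0.113.53            # existing lab or public DNS forwarder
bastion_public_ip_address: 192.0.2.10   # normally identical to the private address
bastion_private_ip_address: 192.0.2.10  # differs only when the bastion acts as the gateway
```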
By default, we create services on the bastion services host for DNS, load balancing, and file serving. The DHCP and masquerade playbooks are not used in the default example.
Configure your Ansible inventory to point at the bastion services host.
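A minimal INI-style inventory might look like the following; the group name, host alias, and address are placeholders, so match the group name to whatever host group the playbooks in this repository target.

```
[bastion]
bastion-host ansible_host=192.0.2.10 ansible_user=root
```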
- Configure the DNS service
$ ansible-playbook -i inventory playbooks/examples/configure-dns.yml
- Configure the HAProxy load balancer service
$ ansible-playbook -i inventory playbooks/examples/configure-haproxy.yml
- Configure Apache to serve Ignition configs
$ ansible-playbook -i inventory playbooks/examples/configure-apache-ignition.yml
- Run the create-cluster playbook
$ ansible-playbook -i inventory playbooks/create-cluster.yml
- Monitor until the automation reaches the following step
TASK [/<pwd>/multiarch-upi-playbooks/playbooks/examples/roles/create-cluster : wait for bootstrap node accessibility] ***
ok: [bastion-host] => (item=bootstrap-0)
- IPL the Red Hat CoreOS nodes. Kernel parameter files generated by the automation will be located on the bastion host in `/root/rhcos-bootfiles` (an illustrative parameter file is sketched after this list).
- Configure the image registry storage provider
  - For devel/testing environments only, patch the image registry to use local storage (see the example command after this list).
  - For a more robust setup, use NFS instead. TODO: document storage.
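For the devel/testing patch above, the standard OpenShift approach (not specific to this repository) is to set the image registry's storage to an ephemeral `emptyDir` once the cluster is up, for example:

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

Note that `emptyDir` storage is lost if the registry pod is rescheduled, which is why this is only suitable for development and testing.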
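For reference, the generated kernel parameter files follow the standard Red Hat CoreOS s390x boot parameter format. A purely illustrative bootstrap example is shown below; every device number, address, and URL here is a placeholder, and the real values come from the files the automation writes to `/root/rhcos-bootfiles`.

```
rd.neednet=1 coreos.inst.install_dev=dasda
coreos.inst.image_url=http://192.0.2.10:8080/rhcos-dasd.img
coreos.inst.ignition_url=http://192.0.2.10:8080/bootstrap.ign
ip=192.0.2.20::192.0.2.1:255.255.255.0:bootstrap-0::none nameserver=203.0.113.53
rd.znet=qeth,0.0.0bdf,0.0.0be0,0.0.0be1,layer2=1,portno=0 rd.dasd=0.0.0120
```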