Install OKD 3.11 (openshift) on oVirt


DRAFT - WIP
Installing OKD on oVirt has many advantages, and it's also a lot easier these days. Admins and users who would like to take container platform management for a spin on oVirt will find this encouraging.

The installation uses openshift-ansible, and specifically the openshift_ovirt Ansible role inside it. The integration between OpenShift and oVirt is tighter these days and includes storage: if you need persistent volumes for your containers, you can get them directly from oVirt using the ovirt-volume-provisioner and ovirt-flexvolume-driver.

For the sake of simplicity, this installation will be an all-in-one OpenShift cluster on a single VM. On top of it we will run a classic web stack, Node.js + PostgreSQL. PostgreSQL will get a persistent volume from oVirt using its flexvolume driver.

Single shell file installation! \o/

Dropping to shell (climbing up, for some of us): install.sh is a wrapper for the installation of the ovirt-openshift-installer container. It uses ansible-playbook and has two main playbooks, install_okd.yaml and install_extensions.yaml. The latter is mainly for installing the oVirt storage plugins.

install.sh has one dependency: it needs 'podman' installed; all the rest runs inside a container. dnf install podman will do; for other ways to install podman consult the readme.
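
Under the hood, the wrapper essentially runs the installer image with podman and invokes ansible-playbook inside it. The sketch below is illustrative only: the image name placeholder, mount path and playbook arguments are assumptions; the authoritative invocation is the one inside install.sh itself.

# rough equivalent of what install.sh does (image name and paths are assumptions)
podman run --rm -it \
  -v "$PWD/customization.yaml:/customization.yaml:Z" \
  <ovirt-openshift-installer-image> \
  ansible-playbook install_okd.yaml -e @/customization.yaml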

Get the install.sh and customize

curl -O "https://raw.githubusercontent.com/oVirt/ovirt-openshift-extensions/master/automation/ci/{install.sh,customization.yaml}"

Edit the customization.yaml:

  • Put the engine details in engine_url

    engine_url: https://ovirt-engine-fqdn/ovirt-engine/api
  • Choose the oVirt cluster and data domain you want, if you don't want 'Default'

    openshift_ovirt_cluster: yours
    openshift_ovirt_data_store: yours
  • Disable the memory check if you don't change the VM's default memory (the default for this install is 8 GB)

    openshift_disable_check: memory_availability,disk_availability,docker_image_availability
  • Set the domain name of the setup. The setup will create a VM named master0.$public_hosted_zone, and this domain will be used for all the components of the setup (a combined example customization.yaml is shown right after this list)

    public_hosted_zone: example.com
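
Putting the items above together, a minimal customization.yaml could look like the sketch below. The values are placeholders for your environment; engine credentials and any other required keys are not shown here, so check vars.yaml for the exact key names.

engine_url: https://ovirt-engine-fqdn/ovirt-engine/api
openshift_ovirt_cluster: Default
openshift_ovirt_data_store: Default
openshift_disable_check: memory_availability,disk_availability,docker_image_availability
public_hosted_zone: example.com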

For a more complete list of customization options, please refer to the vars.yaml packed into the container: https://github.com/oVirt/ovirt-openshift-extensions/blob/master/automation/ci/vars.yaml

Install

Now run install.sh:

bash install.sh

It will do the following:

  1. Pull the ovirt-openshift-installer container and run it
  2. Download the CentOS cloud image and import it into oVirt (set by qcow_url)
  3. Create a VM named master0.example.com based on the template above (set by public_hosted_zone)
  4. Cloud-init will configure repositories, network, ovirt-guest-agent, etc. (set by cloud_init_script_master)
  5. The VM will be dynamically inserted into an Ansible inventory, under the masters, compute, and etcd groups (a rough sketch of such an inventory follows this list)
  6. The openshift-ansible main playbooks are called: prerequisites.yml and deploy_cluster.yml
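
For reference, an all-in-one inventory for such a setup roughly resembles the sketch below, following openshift-ansible 3.11 conventions. The exact inventory the installer generates may differ, so treat this as an illustration only.

[OSEv3:children]
masters
nodes
etcd

[masters]
master0.example.com

[etcd]
master0.example.com

[nodes]
master0.example.com openshift_node_group_name='node-config-all-in-one'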

In the end there is a running all-in-one cluster. Let's check it:

[root@master0 ~]# oc get nodes
NAME                         STATUS    ROLES                  AGE       VERSION
master0.example.com   Ready     compute,infra,master   1h        v1.11.0+d4cacc0

Check oVirt's extensions

[root@master0 ~]# oc get deploy/ovirt-volume-provisioner
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ovirt-volume-provisioner   1         1         1            1           57m

[root@master0 ~]# oc get ds/ovirt-flexvolume-driver
NAME                      DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ovirt-flexvolume-driver   1         1         1         1            1           <none>          59m

In case the router is not scheduled, label the node with this:

 oc label node master0.example.com  "node-router=true"
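
To confirm the router got scheduled, check the default project; in OKD 3.11 the router runs there as a deployment config named router (adjust the names if your setup differs):

oc -n default get dc/router
oc -n default get pods -o wide | grep router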

Make sure that the oVirt storage class is the default, so all future claims will be served by its provisioner:

oc patch sc/ovirt -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
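
To verify the annotation took effect, list the storage classes; the ovirt class should now show "(default)" next to its name:

oc get sc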

Ready to go! Let's deploy Node.js + PostgreSQL with a persistent disk from oVirt.

Deploy a persistent PostgreSQL container with its storage from oVirt

A persistent deployment means that /var/lib/pgsql/data, where the data is saved, will be kept on a persistent volume disk. First let's pull a template of a persistent PostgreSQL deployment:

curl -O https://raw.githubusercontent.com/openshift/library/master/arch/x86_64/official/postgresql/templates/postgresql-persistent.json

Create a new app based on this template. The parameters will create the proper persistent volume claim:

oc new-app postgresql-persistent.json \
  -e DATABASE_SERVICE_NAME=postgresql \
  -e POSTGRESQL_DATABASE=testdb \
  -e POSTGRESQL_PASSWORD=testdb \
  -e POSTGRESQL_USER=testdb \
  -e VOLUME_CAPACITY=10Gi \
  -e MEMORY_LIMIT=512M \
  -e POSTGRESQL_VERSION=10 \
  -e NAMESPACE=default \
  centos/postgresql-10-centos7
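
For reference, the claim the template produces is roughly equivalent to the following PersistentVolumeClaim; this is a sketch assuming the ovirt storage class set earlier is the cluster default.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # no storageClassName set, so the default storage class (ovirt) is used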

The disk is created in oVirt for us by the persistent volume claim:

[root@master0 ~]# oc get pvc/postgresql
NAME         STATUS    VOLUME                                     CAPACITY      ACCESS MODES   STORAGECLASS   AGE
postgresql   Bound     pvc-70a8ea75-0e03-11e9-8188-001a4a160100   10737418240   RWO            ovirt          5m

To demonstrate that oVirt created the disk, let's look for a disk whose name matches the VOLUME name of the claim:

[root@master0 ~]# curl -k -u admin@internal 'https://ovirt-engine-fqdn/ovirt-engine/api/disks?search=name=pvc-70a8ea75-0e03-11e9-8188-001a4a160100'  | grep status
<status>ok</status>

PostgreSQL is ready with a persistent disk for its data!

oc rsh postgresql-10-centos7-1-89ldp
psql testdb testdb -c "\l"
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 testdb    | testdb   | UTF8     | en_US.utf8 | en_US.utf8 | 
(4 rows)