diff --git a/docs/guides/configuration-guide/ceph.md b/docs/guides/configuration-guide/ceph.md
index c07edd032d..c107125947 100644
--- a/docs/guides/configuration-guide/ceph.md
+++ b/docs/guides/configuration-guide/ceph.md
@@ -3,6 +3,9 @@ sidebar_label: Ceph
 sidebar_position: 30
 ---
 
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
 # Ceph
 
 The official Ceph documentation is located on https://docs.ceph.com/en/latest/rados/configuration/
@@ -13,6 +16,9 @@ It is **strongly advised** to use the documentation for the version being used.
 * Quincy - https://docs.ceph.com/en/quincy/rados/configuration/
 * Reef - https://docs.ceph.com/en/reef/rados/configuration/
 
+It's a good idea to review all options in the following list.
+
+
 ## Unique Identifier
 
 The File System ID is a unique identifier for the cluster.
@@ -23,11 +29,17 @@ and must be unique. It can be generated with `uuidgen`.
 fsid: c2120a4a-669c-4769-a32c-b7e9d7b848f4
 ```
 
+## Configure the mon address on the mon nodes
+
+Set the variable `monitor_address` in the inventory files of the mon hosts
+(`inventory/host_vars/.yml`) to tell ceph-ansible which IP address should be
+used to reach the monitor instances.
+
 ## Client
 
 The `client.admin` keyring is placed in the file `environments/infrastructure/files/ceph/ceph.client.admin.keyring`.
 
-## Swappiness
+## SysCtl Parameters, Swappiness and Friends
 
 The swappiness is set via the `os_tuning_params` dictionary. The dictionary can
 only be completely overwritten via an entry in the file `environments/ceph/configuration.yml`.
@@ -125,7 +137,7 @@ vm.min_free_kbytes=4194303
 ceph-control
 ```
 
-## Extra pools
+## Configuration of custom Ceph pools
 
 Extra pools can be defined via the `openstack_pools_extra` parameter.
 
@@ -154,136 +166,183 @@ pools are to be created is `ceph.rbd`, then the parameters would be stored in
 | `openstack_pool_default_pg_num` | 64 |
 | `openstack_pool_default_min_size` | 0 |
 
+It is important to set up the [placement groups](https://docs.ceph.com/en/nautilus/rados/operations/placement-groups/) appropriately so that
+Ceph does not fall short of its performance potential.
+Tools such as [PG Calc](https://docs.ceph.com/en/latest/rados/operations/pgcalc/) can help with this.
+
 ## OSD devices
 
-1. For each Ceph storage node edit the file `inventory/host_vars/.yml`
-   add a configuration like the following to it. Ensure that no `devices` parameter
-   is present in the file.
-
-   1. Parameters
-
-      * With the optional parmaeter `ceph_osd_db_wal_devices_buffer_space_percent` it is possible to
-        set the percentage of VGs to leave free. The parameter is not set by default. Can be helpful
-        for SSD performance of some older SSD models or to extend lifetime of SSDs in general.
+For more advanced OSD layout requirements leave out the `devices` key
+and instead use `lvm_volumes`. Details for this can be found in the
+[OSD Scenario](https://docs.ceph.com/projects/ceph-ansible/en/latest/osds/scenarios.html) documentation.
+
+To aid in creating the `lvm_volumes` config entries and in provisioning the LVM devices for them,
+OSISM provides the two playbooks `ceph-configure-lvm-volumes` and `ceph-create-lvm-devices`.
+
+### Configure the device layout
+
+For each Ceph storage node, edit the file `inventory/host_vars/.yml` and
+add a configuration like the following to it. Ensure that no `devices` parameter
+is present in the file.
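+
+To give a first orientation, here is a minimal sketch of such a host vars file. The
+device names used here are illustrative placeholders; the individual parameters and
+the possible layout variants are explained in detail below.
+
+```yaml
+# Minimal sketch with illustrative device names: one NVMe device provides
+# the DB volumes for two HDD-backed OSDs.
+ceph_db_devices:
+  nvme0n1:
+    num_osds: 2      # this DB device may back up to two OSDs
+    db_size: 30 GB   # size of each DB logical volume
+
+ceph_osd_devices:
+  sda:
+    db_pv: nvme0n1   # place the DB of this OSD on nvme0n1
+  sdb:
+    db_pv: nvme0n1
+```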
+
+**General information about the parameters**
+
+* With the optional parameter `ceph_osd_db_wal_devices_buffer_space_percent` it is possible to
+  set the percentage of VGs to leave free. The parameter is not set by default. This can be helpful
+  for the performance of some older SSD models or to extend the lifetime of SSDs in general.
+
+  ```yaml
+  ceph_osd_db_wal_devices_buffer_space_percent: 10
+  ```
+* It is possible to configure the devices to be used with the parameters `ceph_osd_devices`,
+  `ceph_db_devices`, `ceph_wal_devices`, and `ceph_db_wal_devices`. This is described below.
+* It is always possible to use device names such as `sda` or device IDs such as
+  `disk/by-id/wwn-` or `disk/by-id/nvme-eui.`.
+  The top-level directory `/dev/` is not prefixed; it is added automatically.
+* The `db_size` parameter is optional and defaults to `(VG size - buffer space (if enabled)) / num_osds`.
+* The `wal_size` parameter is optional and defaults to `2 GB`.
+* The `num_osds` parameter specifies the maximum number of OSDs that can be assigned to a WAL device or DB device.
+* The optional parameter `wal_pv` can be used to set the device that is to be used as the WAL device.
+* The optional parameter `db_pv` can be used to set the device that is to be used as the DB device.
+
+**Layout variants**
+
+OSISM utilizes LVM volumes for all OSD setup variants.
+
+<Tabs>
+<TabItem value="osd-only" label="OSD only">
 
-        ```yaml
-        ceph_osd_db_wal_devices_buffer_space_percent: 10
-        ```
-      * It is possible to configure the devices to be used with the parameters `ceph_osd_devices`,
-        `ceph_db_devices`, `ceph_wal_devices`, and `ceph_db_wal_devices`. This is described below.
-      * It is always possible to use device names such as `sda` or device IDs such as
-        `disk/by-id/wwn-` or `disk/by-id/nvme-eui.`. `/dev/` is not
-        prefixed and is added automatically.
-      * The `db_size` parameter is optional and defaults to `(VG size - buffer space (if enabled)) / num_osds`.
-      * The `wal_size` parameter is optional and defaults to `2 GB`.
-      * The `num_osds` parameter specifies the maximum number of OSDs that can be assigned to a WAL device or DB device.
-      * The optional parameter `wal_pv` can be used to set the device that is to be used as the WAL device.
-      * The optional parameter `db_pv` can be used to set the device that is to be used as the DB device.
-
-   2. OSD only
-
-      The `sda` device will be used as an OSD device without WAL and DB device.
-
-      ```yaml
-      ceph_osd_devices:
-        sda:
-      ```
-
-   3. OSD + DB device
-
-      The `nvme0n1` device will be used as an DB device. It is possible to use this DB device for up to 6 OSDs. Each
-      OSD is provided with 30 GB.
-
-      ```yaml
-      ceph_db_devices:
-        nvme0n1:
-          num_osds: 6
-          db_size: 30 GB
-      ```
-
-      The `sda` device will be used as an OSD device with `nvme0n1` as DB device.
-
-      ```yaml
-      ceph_osd_devices:
-        sda:
-          db_pv: nvme0n1
-      ```
-
-   4. OSD + WAL device
-
-      The `nvme0n1` device will be used as an WAL device. It is possible to use this WAL device for up to 6 OSDs. Each
-      OSD is provided with 2 GB.
-
-      ```yaml
-      ceph_wal_devices:
-        nvme0n1:
-          num_osds: 6
-          wal_size: 2 GB
-      ```
-
-      The `sda` device will be used as an OSD device with `nvme0n1` as WAL device.
-
-      ```yaml
-      ceph_osd_devices:
-        sda:
-          wal_pv: nvme0n1
-      ```
-
-   5. OSD + DB device + WAL device (same device for DB + WAL)
-
-      The `nvme0n1` device will be used as an DB device and a WAL device. It is possible to use those devices for up
-      to 6 OSDs.
-
-      ```yaml
-      ceph_db_wal_devices:
-        nvme0n1:
-          num_osds: 6
-          db_size: 30 GB
-          wal_size: 2 GB
-      ```
-
-      The `sda` device will be used as an OSD device with `nvme0n1` as DB device and `nvme0n1` as WAL device.
-
-      ```yaml
-      ceph_osd_devices:
-        sda:
-          db_pv: nvme0n1
-          wal_pv: nvme0n1
-      ```
-
-   6. OSD + DB device + WAL device (different device for DB + WAL)
-
-      The `nvme0n1` device will be used as an DB device. It is possible to use this DB device for up to 6 OSDs. Each
-      OSD is provided with 30 GB.
-
-      ```yaml
-      ceph_db_devices:
-        nvme0n1:
-          num_osds: 6
-          db_size: 30 GB
-      ```
-
-      The `nvme1n1` device will be used as an WAL device. It is possible to use this WAL device for up to 6 OSDs. Each
-      OSD is provided with 2 GB.
-
-      ```yaml
-      ceph_wal_devices:
-        nvme1n1:
-          num_osds: 6
-          wal_size: 2 GB
-      ```
-
-      The `sda` device will be used as an OSD device with `nvme0n1` as DB device and `nvme1n1` as WAL device.
-
-      ```yaml
-      ceph_osd_devices:
-        sda:
-          db_pv: nvme0n1
-          wal_pv: nvme1n1
-      ```
-
-2. Push the configuration to your configuration repository and after that do the following
+
+This variant does not use a dedicated WAL or DB device.
+It is the simplest variant and is suitable for an all-flash setup with NVMe devices.
+
+The `sda` device will be used as an OSD device without a WAL or DB volume.
+
+```yaml
+ceph_osd_devices:
+  sda:
+```
+
+</TabItem>
+<TabItem value="osd-db" label="OSD + DB device">
+
+The `nvme0n1` device will be used as a source for DB volumes.
+With the configured values, the provisioning mechanism creates six logical volumes of 30 GB each on the NVMe device, which can
+be used for six OSD instances.
+
+```yaml
+ceph_db_devices:
+  nvme0n1:
+    num_osds: 6
+    db_size: 30 GB
+```
+
+The devices `sda` up to `sdf` will use the previously defined DB volumes from `nvme0n1` for the listed OSD instances.
+
+```yaml
+ceph_osd_devices:
+  sda:
+    db_pv: nvme0n1
+  ...
+  sdf:
+    db_pv: nvme0n1
+```
+
+</TabItem>
+<TabItem value="osd-wal" label="OSD + WAL device">
+
+The `nvme0n1` device will be used as a source for WAL volumes.
+With the configured values, the provisioning mechanism creates six logical volumes of 2 GB each on `nvme0n1`, which can
+be used for six OSD instances.
+
+```yaml
+ceph_wal_devices:
+  nvme0n1:
+    num_osds: 6
+    wal_size: 2 GB
+```
+
+The devices `sda` up to `sdf` will use the previously defined WAL volumes from `nvme0n1` for the listed OSD instances.
+
+```yaml
+ceph_osd_devices:
+  sda:
+    wal_pv: nvme0n1
+```
+
+</TabItem>
+<TabItem value="osd-db-wal-same" label="OSD + DB + WAL device (same device)">
+
+The `nvme0n1` device will be used as a source for both DB and WAL volumes.
+With the configured values, the provisioning mechanism creates six logical DB volumes of 30 GB and six logical WAL volumes of 2 GB each
+on `nvme0n1`, which can be used for six OSD instances.
+
+```yaml
+ceph_db_wal_devices:
+  nvme0n1:
+    num_osds: 6
+    db_size: 30 GB
+    wal_size: 2 GB
+```
+
+The `sda` device will be used as an OSD device with `nvme0n1` as DB device and `nvme0n1` as WAL device.
+
+```yaml
+ceph_osd_devices:
+  sda:
+    db_pv: nvme0n1
+    wal_pv: nvme0n1
+```
+
+In the example shown here, both the data structures for the RocksDB and for the write-ahead log are placed on the faster NVMe device.
+(This is described in the [Ceph documentation](https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/) as "...whenever a DB device is specified but an explicit WAL device is not, the WAL will be implicitly colocated with the DB on the faster device...").
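+
+As a side note: following the BlueStore behaviour quoted above, setting only `db_pv`
+may already be enough when DB and WAL are meant to share the same device; the explicit
+`wal_pv` entry mainly makes the colocation visible. Whether the OSISM provisioning
+playbooks accept this shorter form is an assumption that should be verified. A sketch:
+
+```yaml
+ceph_osd_devices:
+  sda:
+    db_pv: nvme0n1   # no wal_pv set: the WAL would be implicitly colocated with the DB
+```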
+
+</TabItem>
+<TabItem value="osd-db-wal-separate" label="OSD + DB + WAL device (different devices)">
+
+The `nvme0n1` device will be used as a source for DB volumes and `nvme1n1` as a source for WAL volumes.
+With the configured values, the provisioning mechanism creates six logical DB volumes of 30 GB and six logical WAL volumes of 2 GB each.
+
+```yaml
+ceph_db_devices:
+  nvme0n1:
+    num_osds: 6
+    db_size: 30 GB
+```
+
+The `nvme1n1` device will be used as a WAL device. It is possible to use this WAL device for up to 6 OSDs. Each
+OSD is provided with 2 GB.
+
+```yaml
+ceph_wal_devices:
+  nvme1n1:
+    num_osds: 6
+    wal_size: 2 GB
+```
+
+The `sda` device will be used as an OSD device with `nvme0n1` as DB device and `nvme1n1` as WAL device.
+
+```yaml
+ceph_osd_devices:
+  sda:
+    db_pv: nvme0n1
+    wal_pv: nvme1n1
+```
+
+</TabItem>
+</Tabs>
+
+### Provision the configured layout
+
+1. Commit and push the configuration to your configuration repository.
+2. Establish the changed configuration.
+   Make sure that you do not have any open changes on the manager node either, as these will be discarded during this step.
 
   ```
   $ osism apply configuration
   $ osism reconciler sync
@@ -331,14 +390,15 @@ pools are to be created is `ceph.rbd`, then the parameters would be stored in
 
   This content from the file in the `/tmp` directory is added in the host vars file.
   The previous `ceph_osd_devices` is replaced with the new content.
 
-5. Push the updated configuration **again** to your configuration repository and re-run:
-
+5. Commit and push the configuration to your configuration repository **again**.
+6. Establish the changed configuration.
+   Make sure that you do not have any open changes on the manager node either, as these will be discarded during this step.
 
   ```
   $ osism apply configuration
   $ osism reconciler sync
   ```
 
-6. Finally create the LVM devices.
+7. Finally create the LVM devices.
 
   ```
   $ osism apply ceph-create-lvm-devices
diff --git a/docs/guides/deploy-guide/services/ceph.mdx b/docs/guides/deploy-guide/services/ceph.mdx
index 6a33607ab9..c4c19e7451 100644
--- a/docs/guides/deploy-guide/services/ceph.mdx
+++ b/docs/guides/deploy-guide/services/ceph.mdx
@@ -11,6 +11,7 @@ import TabItem from '@theme/TabItem';
 In OSISM it is also possible to integrate and use existing Ceph clusters. It is
 not necessary to deploy Ceph with OSISM. If Ceph is deployed with OSISM, it should
 be noted that OSISM does not claim to provide all possible features of Ceph.
+
 Ceph provided with OSISM is intended to provide the storage for Glance, Nova,
 Cinder and Manila. In a specific way that has been implemented by OSISM for years.
 It should be checked in advance whether the way in OSISM the Ceph deployment and the
@@ -22,80 +23,100 @@ open source projects, please refer to
 
 :::warning
 
-Before starting the Ceph deployment, the configuration and preparation of the
-OSD devices must be completed. The steps that are required for this can be found in the
-[Ceph Configuration Guide](../../configuration-guide/ceph.md#osd-devices).
+Before starting the Ceph deployment, it is recommended to perform the general Ceph configuration.
+All the preparatory steps are listed in the [Ceph Configuration Guide](../../configuration-guide/ceph).
+
+At least the [preparation](../../configuration-guide/ceph.md#osd-devices) of the necessary LVM2 volumes for the OSD devices must be completed.
 
 :::
 
-1. Deploy services.
-
-   * Deploy [ceph-mon](https://docs.ceph.com/en/quincy/man/8/ceph-mon/) services
+## Deploy Ceph services
-
-     ```
-     osism apply ceph-mons
-     ```
+* Deploy [ceph-mon](https://docs.ceph.com/en/quincy/man/8/ceph-mon/) services
 
-   * Deploy ceph-mgr services
+  ```
+  osism apply ceph-mons
+  ```
 
-     ```
-     osism apply ceph-mgrs
-     ```
+* Deploy ceph-mgr services
 
-   * Deploy [ceph-osd](https://docs.ceph.com/en/quincy/man/8/ceph-osd/) services
+  ```
+  osism apply ceph-mgrs
+  ```
 
-     ```
-     osism apply ceph-osds
-     ```
+* Prepare OSD devices [as described](../../configuration-guide/ceph#osd-devices) in the configuration guide
 
-   * Generate pools and keys. This step is only necessary for OSISM >= 7.0.0.
+* Deploy [ceph-osd](https://docs.ceph.com/en/quincy/man/8/ceph-osd/) services
 
-     ```
-     osism apply ceph-pools
-     ```
+  ```
+  osism apply ceph-osds
+  ```
 
-   * Deploy ceph-crash services
+* Configure custom pools [as described](../../configuration-guide/ceph#configuration-of-custom-ceph-pools) in the configuration guide
 
-     ```
-     osism apply ceph-crash
-     ```
+* Generate pools and the related keys. This step is only necessary for OSISM >= 7.0.0.
 
-   :::info
+  ```
+  osism apply ceph-pools
+  ```
 
-   It's all done step by step here. It is also possible to do this in a single step.
-   This speeds up the entire process and avoids unnecessary restarts of individual
-   services.
+* Deploy ceph-crash services
 
-
-
-   ```
-   osism apply ceph
-   ```
+  ```
+  osism apply ceph-crash
+  ```
+
+:::info
+
+It's all done step by step here. It is also possible to do this in a single step.
+This speeds up the entire process and avoids unnecessary restarts of individual
+services.
+
+<Tabs>
+<TabItem value="ceph" label="ceph">
+
+```
+osism apply ceph
+```
+
+Generate pools and keys.
+
+```
+osism apply ceph-pools
+```
+
+</TabItem>
+<TabItem value="ceph-base" label="ceph-base">
+
+```
+osism apply ceph-base
+```
+
+</TabItem>
+</Tabs>
 
-   Generate pools and keys.
+:::
+
+## Install Ceph Clients
+
+1. Get the Ceph keys. This places the necessary keys in `/opt/configuration`.
 
    ```
-   osism apply ceph-pools
+   osism apply copy-ceph-keys
   ```
-
-
+
+2. Encrypt the fetched keys.
+   It is highly recommended to store the Ceph keys encrypted in the Git repository.
 
    ```
-   osism apply ceph-base
+   cd /opt/configuration
+   make ansible_vault_encrypt_ceph_keys
   ```
-
-
-
-   :::
-
-2. Get ceph keys. This places the necessary keys in `/opt/configuration`.
+3. Add the keys permanently to the repository.
 
    ```
-   osism apply copy-ceph-keys
+   git add **/ceph.*.keyring
+   git commit -m "Add the downloaded keys to the repository"
   ```
 
-   After run, these keys must be permanently added to the configuration repository
-   via Git.
-
+   Here is an overview of the individual keys:
 
    ```
    environments/infrastructure/files/ceph/ceph.client.admin.keyring
    environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring
@@ -108,6 +129,8 @@ OSD devices must be completed. The steps that are required for this can be found
    environments/kolla/files/overlays/glance/ceph.client.glance.keyring
   ```
 
+   :::info
+
   If the `osism apply copy-ceph-keys` fails because the keys are not found in the
   `/share` directory, this can be ignored. The keys of the predefined keys (e.g.
   for Manila) were then not created as they are not used. If you only use Ceph and
   do not need the predefined
 
   ```yaml title="environments/ceph/configuration.yml"
   ceph_kolla_keys: []
   ```
+   :::
 
-3. After the Ceph keys have been persisted in the configuration repository, the Ceph
+4. After the Ceph keys have been persisted in the configuration repository, the Ceph
   client can be deployed.
 
   ```
   osism apply cephclient
   ```
 
-4. Enable and prepare the use of the Ceph dashboard.
+## Enable Ceph Dashboard - ``` - osism apply ceph-bootstrap-dashboard - ``` +Enable and prepare the use of the Ceph dashboard. + +``` +osism apply ceph-bootstrap-dashboard +``` ## RGW service diff --git a/docs/guides/upgrade-guide/ceph.md b/docs/guides/upgrade-guide/ceph.md index dd5236366e..9e06a952f9 100644 --- a/docs/guides/upgrade-guide/ceph.md +++ b/docs/guides/upgrade-guide/ceph.md @@ -1,6 +1,6 @@ --- sidebar_label: Ceph -sidebar_position: 20 +sidebar_position: 30 --- # Ceph diff --git a/docs/guides/upgrade-guide/docker.md b/docs/guides/upgrade-guide/docker.md index d2ea040114..533efb17a4 100644 --- a/docs/guides/upgrade-guide/docker.md +++ b/docs/guides/upgrade-guide/docker.md @@ -1,6 +1,6 @@ --- sidebar_label: Docker -sidebar_position: 20 +sidebar_position: 40 --- # Docker @@ -18,28 +18,12 @@ All installable versions can be displayed with `apt-cache madison docker-ce`. $ apt-cache madison docker-ce docker-ce | 5:24.0.6-1~ubuntu.22.04~jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages docker-ce | 5:24.0.5-1~ubuntu.22.04~jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages - docker-ce | 5:24.0.4-1~ubuntu.22.04~jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages - docker-ce | 5:24.0.3-1~ubuntu.22.04~jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages - docker-ce | 5:24.0.2-1~ubuntu.22.04~jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages - docker-ce | 5:24.0.1-1~ubuntu.22.04~jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages - docker-ce | 5:24.0.0-1~ubuntu.22.04~jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages - docker-ce | 5:23.0.6-1~ubuntu.22.04~jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages - docker-ce | 5:23.0.5-1~ubuntu.22.04~jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages - docker-ce | 5:23.0.4-1~ubuntu.22.04~jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages - docker-ce | 5:23.0.3-1~ubuntu.22.04~jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages - docker-ce | 5:23.0.2-1~ubuntu.22.04~jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages + ... 
docker-ce | 5:23.0.1-1~ubuntu.22.04~jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
    docker-ce | 5:23.0.0-1~ubuntu.22.04~jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
    docker-ce | 5:20.10.24~3-0~ubuntu-jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
    docker-ce | 5:20.10.23~3-0~ubuntu-jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
-   docker-ce | 5:20.10.22~3-0~ubuntu-jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
-   docker-ce | 5:20.10.21~3-0~ubuntu-jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
-   docker-ce | 5:20.10.20~3-0~ubuntu-jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
-   docker-ce | 5:20.10.19~3-0~ubuntu-jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
-   docker-ce | 5:20.10.18~3-0~ubuntu-jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
-   docker-ce | 5:20.10.17~3-0~ubuntu-jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
-   docker-ce | 5:20.10.16~3-0~ubuntu-jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
-   docker-ce | 5:20.10.15~3-0~ubuntu-jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
+   ...
    docker-ce | 5:20.10.14~3-0~ubuntu-jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
    docker-ce | 5:20.10.13~3-0~ubuntu-jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
    ```
diff --git a/docs/guides/upgrade-guide/index.md b/docs/guides/upgrade-guide/index.md
index d59c023135..e3e4f0bdb2 100644
--- a/docs/guides/upgrade-guide/index.md
+++ b/docs/guides/upgrade-guide/index.md
@@ -5,7 +5,52 @@ sidebar_position: 20
 
 # Upgrade Guide
 
+## A dedicated test environment
+
+Using a dedicated test environment to upgrade the complex technical system landscape
+consisting of Kubernetes, Ceph and OpenStack is beneficial for several reasons:
+
+1. **Developing and Testing Upgrade Procedures**:
+   - **Validation of Runbooks**: Allows you to develop, validate and refine upgrade procedures documented in runbooks, ensuring accuracy and completeness.
+   - **Procedure Testing**: Ensures that each step of the upgrade process is thoroughly tested for your specific conditions, reducing the risk of errors during the actual upgrade.
+
+2. **Testing New Technical Changes**:
+   - **Risk-Free Testing**: Provides a safe space to implement and test new technical changes without impacting the production environment.
+   - **Issue Identification**: Helps identify and resolve potential issues with new features or configurations before they go live.
+
+3. **Practicing Critical Steps for Training**:
+   - **Staff Training**: Enables IT staff to practice critical steps and gain hands-on experience with the upgrade process, enhancing their preparedness.
+   - **Engineer Familiarization**: Allows new engineers to become familiar with the system itself, its features and workflows, reducing the learning curve.
+
+4. **Developing New Concepts**:
+   - **Concept Testing**: Facilitates the development and testing of new concepts, ensuring they are viable and effective before implementation in production.
+   - **Innovation Space**: Provides a controlled environment to experiment with innovative ideas and configurations without disrupting existing services.
+
+Overall, a dedicated test environment ensures a smoother, safer upgrade and operation process, minimizes risks, and enhances staff training and innovation.
+
+## General hints for the Upgrade documentation
+
 In the examples, the pull of images (if supported by a role) is always run first. While this
 is optional, it is recommended to speed up the execution of the upgrade action in the second
 step. This significantly reduces the time required to restart a service.
+
+## The order of the upgrade steps
+
+:::warning
+
+Always read the [release notes](https://osism.tech/docs/release-notes/) first to learn what has changed and what
+adjustments are necessary. Read the release notes even if you are only updating between minor releases, e.g. from 7.0.2 to 7.0.5.
+
+:::
+
+* Step 1: [Upgrade the Manager](./manager)
+* Step 2: [Upgrade the Network](./network)
+* Step 3: [Upgrade Ceph](./ceph)
+* Step 4: [Upgrade Docker](./docker)
+* Step 5: [Upgrade Kubernetes](./kubernetes)
+* Step 6: [Upgrade Logging & Monitoring](./logging-monitoring)
+* Step 7: [Upgrade the Infrastructure](./infrastructure)
+* Step 8: [Upgrade OpenStack](./openstack)
+
diff --git a/docs/guides/upgrade-guide/infrastructure.md b/docs/guides/upgrade-guide/infrastructure.md
index 08ee85805c..164377c644 100644
--- a/docs/guides/upgrade-guide/infrastructure.md
+++ b/docs/guides/upgrade-guide/infrastructure.md
@@ -1,17 +1,20 @@
 ---
 sidebar_label: Infrastructure
-sidebar_position: 30
+sidebar_position: 70
 ---
 
 # Infrastructure
 
-1. Kubernetes
-
-   This is only necessary if the internal Kubernetes cluster has also been deployed.
-   This can be checked by executing `kubectl get nodes` on the manager node.
+1. **Optional:** Pull containers
 
   ```
-   osism apply k3s-upgrade
+   osism apply -a pull common
+   osism apply -a pull loadbalancer
+   osism apply -a pull redis
+   osism apply -a pull memcached
+   osism apply -a pull rabbitmq
+   osism apply -a pull mariadb
   ```
 
 2. Cron, Fluentd & Kolla Toolbox
@@ -19,46 +22,40 @@ sidebar_position: 30
 
   The common role of Kolla is used to manage the services `cron`, `fluentd` and
   `kolla-toolbox`.
 
-   It is important to do this upgrade before any other upgrades in the Kolla
+   It is important to do this upgrade **before any other upgrades** in the Kolla
   environment, as parts of the other upgrades depend on the `kolla-toolbox`
   service.
 
   ```
-   osism apply -a pull common
   osism apply -a upgrade common
   ```
 
 3. Loadbalancer
 
   ```
-   osism apply -a pull loadbalancer
   osism apply -a upgrade loadbalancer
   ```
 
 4. Redis
 
   ```
-   osism apply -a pull redis
   osism apply -a upgrade redis
   ```
 
 5. Memcached
 
   ```
-   osism apply -a pull memcached
   osism apply -a upgrade memcached
   ```
 
 6. RabbitMQ
 
   ```
-   osism apply -a pull rabbitmq
   osism apply -a upgrade rabbitmq
   ```
 
 7. MariaDB
 
   ```
-   osism apply -a pull mariadb
   osism apply -a upgrade mariadb
   ```
diff --git a/docs/guides/upgrade-guide/kubernetes.md b/docs/guides/upgrade-guide/kubernetes.md
new file mode 100644
index 0000000000..5b45da2e93
--- /dev/null
+++ b/docs/guides/upgrade-guide/kubernetes.md
@@ -0,0 +1,14 @@
+---
+sidebar_label: Kubernetes
+sidebar_position: 50
+---
+
+# Kubernetes
+
+This is an optional procedure.
+This step is only necessary if you have enabled the k3s cluster with `enable_osism_kubernetes: yes`.
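+
+A sketch of how this flag could look in the configuration repository; the exact
+file in which it is set depends on your environment layout and is an assumption here:
+
+```yaml
+# location is an assumption, e.g. in the environment configuration
+enable_osism_kubernetes: yes
+```
+
+If the flag is enabled, run the upgrade: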
+
+```
+osism apply k3s-upgrade
+```
+
diff --git a/docs/guides/upgrade-guide/logging-monitoring.md b/docs/guides/upgrade-guide/logging-monitoring.md
index c58148b048..fd0250b197 100644
--- a/docs/guides/upgrade-guide/logging-monitoring.md
+++ b/docs/guides/upgrade-guide/logging-monitoring.md
@@ -1,29 +1,34 @@
 ---
 sidebar_label: Logging & Monitoring
-sidebar_position: 40
+sidebar_position: 60
 ---
 
 # Logging & Monitoring
 
-1. OpenSearch
+1. **Optional:** Pull containers
+
+   ```
+   osism apply -a pull opensearch
+   osism apply -a pull prometheus
+   osism apply -a pull grafana
+   ```
+
+2. OpenSearch
 
   OpenSearch dashboards is also upgraded with the `opensearch` role.
 
   ```
-   osism apply -a pull opensearch
   osism apply -a upgrade opensearch
   ```
 
-2. Prometheus
+3. Prometheus
 
   ```
-   osism apply -a pull prometheus
   osism apply prometheus
   ```
 
-3. Grafana
+4. Grafana
 
   ```
-   osism apply -a pull grafana
   osism apply -a upgrade grafana
   ```
diff --git a/docs/guides/upgrade-guide/manager.mdx b/docs/guides/upgrade-guide/manager.mdx
index 4337ec2fcb..8eb84349e4 100644
--- a/docs/guides/upgrade-guide/manager.mdx
+++ b/docs/guides/upgrade-guide/manager.mdx
@@ -11,20 +11,20 @@ import TabItem from '@theme/TabItem';
 :::warning
 
 Always read the [release notes](https://osism.tech/docs/release-notes/) first to learn what has changed and what
-adjustments are necessary. Read the release notes even if you are only updating from e.g. 7.0.2 to 7.0.5.
+adjustments are necessary. Read the release notes even if you are only updating from e.g. 7.0.2 to 7.1.2.
 
 :::
 
 The update of a manager service with a stable release of OSISM is described here.
-In the example, OSISM release 7.0.5 is used.
+In the example, OSISM release 7.1.2 is used.
 
 1. Change the OSISM release in the configuration repository.
 
   1.1. Set the new OSISM version in the configuration repository.
 
   ```
-   MANAGER_VERSION=7.0.5
-   sed -i -e "s/manager_version: .*/manager_version: ${MANAGER_VERSION}/g" environments/manager/configuration.yml
+   MANAGER_VERSION="7.1.2"
+   sed -i -e "s/manager_version: .*/manager_version: ${MANAGER_VERSION:?}/g" environments/manager/configuration.yml
   ```
 
   1.2. If `openstack_version` or `ceph_version` are set in `environments/manager/configuration.yml`
@@ -40,6 +40,7 @@ In the example, OSISM release 7.0.5 is used.
   make sync
   ```
 
+
   If Gilt is not installed via the `requirements.txt` of the manager environment
   it is important to use a version smaller than v2. The v2 of Gilt is not yet usable.
 
@@ -65,17 +66,24 @@ In the example, OSISM release 7.0.5 is used.
   workflows for changes to the configuration repository, only a generic example
   for Git.
 
   ```
-   git commit -a -s -m "manager: use OSISM version 7.0.5"
+   git commit -a -s -m "manager: use OSISM version ${MANAGER_VERSION:?}"
   git push
   ```
 
-2. Update the configuration repository on the manager node.
+2. Log in to the manager as the "dragon" user
+
+   ```
+   ssh dragon@
+   ```
+
+3. Update the configuration repository on the manager node.
 
   ```
+   git pull
+   git checkout
   osism apply configuration
   ```
 
-3. Update the manager service on the manager node.
+4. Update the manager service on the manager node.
 
   ```
   osism update manager
   ```
@@ -86,21 +94,27 @@ In the example, OSISM release 7.0.5 is used.
 
   * If `osism update manager` does not work yet, use `osism-update-manager` instead.
 
-4. Refresh the facts cache.
+5. Refresh the facts cache.
 
   ```
   osism apply facts
   ```
 
-5. If Traefik is used on the manager node (`traefik_enable: true` in `environments/infrastructure/configuration.yml`)
+6. Finally, the Ansible vault password must be made known again.
+
+   ```
+   osism set vault password
+   ```
+
+7. If Traefik is used on the manager node (`traefik_enable: true` in `environments/infrastructure/configuration.yml`)
+   then Traefik should also be upgraded.
 
   ```
   osism apply traefik
   ```
 
-6. Finally, the Ansible vault password must be made known again.
-
+
+8. **Optional:** Tag the upgrade in Git
+
   ```
-   osism set vault password
+   git tag "$(grep "manager_version:" environments/manager/configuration.yml | awk '{print $2}')"
+   git push --tags
   ```
diff --git a/docs/guides/upgrade-guide/network.md b/docs/guides/upgrade-guide/network.md
index 83fcf56bb1..e0ca6ff315 100644
--- a/docs/guides/upgrade-guide/network.md
+++ b/docs/guides/upgrade-guide/network.md
@@ -1,6 +1,6 @@
 ---
 sidebar_label: Network
-sidebar_position: 15
+sidebar_position: 20
 ---
 
 # Network
diff --git a/docs/guides/upgrade-guide/openstack.md b/docs/guides/upgrade-guide/openstack.md
index 6505bc2755..1570eb9268 100644
--- a/docs/guides/upgrade-guide/openstack.md
+++ b/docs/guides/upgrade-guide/openstack.md
@@ -1,6 +1,6 @@
 ---
 sidebar_label: OpenStack
-sidebar_position: 40
+sidebar_position: 80
 ---
 
 # OpenStack
@@ -13,6 +13,20 @@ of the APIs. This downtime is usually less than 1 minute.
 
 :::
 
+1. **Optional:** Pull containers
+
+   ```
+   osism apply -a pull keystone
+   osism apply -a pull glance
+   osism apply -a pull designate
+   osism apply -a pull placement
+   osism apply -a pull cinder
+   osism apply -a pull neutron
+   osism apply -a pull nova
+   osism apply -a pull octavia
+   osism apply -a pull horizon
+   ```
+
 1. OpenStack client
 
   ```
@@ -22,56 +36,48 @@ of the APIs. This downtime is usually less than 1 minute.
 2. Keystone
 
   ```
-   osism apply -a pull keystone
   osism apply -a upgrade keystone
   ```
 
 3. Glance
 
   ```
-   osism apply -a pull glance
   osism apply -a upgrade glance
   ```
 
 4. Designate
 
   ```
-   osism apply -a pull designate
   osism apply -a upgrade designate
   ```
 
 5. Placement
 
   ```
-   osism apply -a pull placement
   osism apply -a upgrade placement
   ```
 
 6. Cinder
 
   ```
-   osism apply -a pull cinder
   osism apply -a upgrade cinder
   ```
 
 7. Neutron
 
   ```
-   osism apply -a pull neutron
   osism apply -a upgrade neutron
   ```
 
 8. Nova
 
   ```
-   osism apply -a pull nova
   osism apply -a upgrade nova
   ```
 
 9. Octavia
 
   ```
-   osism apply -a pull octavia
   osism apply -a upgrade octavia
   ```
 
@@ -97,6 +103,5 @@ of the APIs. This downtime is usually less than 1 minute.
 10. Horizon
 
   ```
-   osism apply -a pull horizon
   osism apply -a upgrade horizon
   ```