New feature: ceph configure lvm volumes (#323)
Add playbooks to configure/populate lvm_volumes: one playbook generates the configuration and another creates the LVM devices according to it.

Relates to osism/issues#595

Example config:
```yaml
# optional percentage of each VG to leave free,
# defaults to false (no buffer space)
# Can be helpful for the performance of some older SSD models
# or to extend the lifetime of SSDs in general
ceph_osd_db_wal_devices_buffer_space_percent: 10

ceph_db_devices:
  nvme0n1: # required, PV for a DB VG
           # Will be prefixed by /dev/ and can also be specified
           # like "by-path/foo" or other things under /dev/
    num_osds: 6 # required, number of OSDs that shall be
                # maximum deployed to this device
    db_size: 30 GB # optional, if not set, defaults to
                   # (VG size - buffer space (if enabled)) / num_osds
ceph_wal_devices:
  nvme1n1: # See above, PV for a WAL VG
    num_osds: 6 # See above
    wal_size: 2 GB # optional, if not set, defaults to 2 GiB

ceph_db_wal_devices:
  nvme2n1: # See above, PV for combined WAL+DB VG
    num_osds: 3 # See above
    db_size: 30 GB # See above, except that it also considers
                   # total WAL size when calculating LV sizes
    wal_size: 2 GB # See above

ceph_osd_devices:
  sda: # Device name, will be prefixed by /dev/, see above conventions
       # This would create a "block only" OSD without DB/WAL
  sdb: # Create an OSD with dedicated DB
    db_pv: nvme0n1 # Must be one device configured in ceph_db_devices
                   # or ceph_db_wal_devices
  sdc: # Create an OSD with dedicated WAL
    wal_pv: nvme1n1 # Must be one device configured in ceph_wal_devices
                    # or ceph_db_wal_devices
  sdd: # Create an OSD with dedicated DB/WAL residing on different devices
    db_pv: nvme0n1 # See above
    wal_pv: nvme1n1 # See above
  sde: # Create an OSD with dedicated DB/WAL residing on the same VG/PV
    db_pv: nvme2n1 # Must be one device configured in ceph_db_wal_devices
    wal_pv: nvme2n1 # Must be the same device configured in ceph_db_wal_devices

# Be warned that it is possible to mix things up here. There is some
# logic that tries to catch common errors, but /dev/brain should still
# be put to good use when considering this deployment method.

# For all-flash clusters without special requirements
# the simpler "devices" method is probably sufficient;
# the same applies to all-HDD clusters.

# This method is, however, very flexible for complex OSD scenarios
# and spares the operator from filling the lvm_volumes key manually.
# In addition, with the help of the ceph-configure-lvm-volumes playbook,
# it creates the required PVs, VGs and LVs for you, which ceph-ansible
# will not do when lvm_volumes is used.
```
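The default DB sizing rule quoted above, `(VG size - buffer space (if enabled)) / num_osds`, can be sketched as a small calculation. This is an illustration only; the playbook's actual arithmetic (rounding, unit handling) may differ:

```python
def default_db_size(vg_size_gb: float, num_osds: int,
                    buffer_percent: float = 0.0) -> float:
    """Default DB LV size per OSD: (VG size - buffer space) / num_osds.

    buffer_percent mirrors ceph_osd_db_wal_devices_buffer_space_percent;
    0.0 means no buffer space is reserved.
    """
    usable_gb = vg_size_gb * (1.0 - buffer_percent / 100.0)
    return usable_gb / num_osds

# A 300 GB DB VG with a 10% buffer and num_osds: 6
# leaves 270 GB usable, i.e. 45 GB per DB LV.
print(default_db_size(300, 6, 10))  # → 45.0
```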
# Usage:

1. Provide config stanza like above either in group_vars or host_vars
2. Run `osism reconciler sync` and `osism apply facts`. If you use an external configuration repository, add the stanza to your repository, run `osism apply configuration`, and then run the two commands above.
3. Run this configuration playbook for the hosts you wish to configure: `osism apply ceph-configure-lvm-volumes -e ireallymeanit=yes`
4. The configuration generated for the hosts can be found on the first manager node of your setup in `/tmp/<inventory_hostname>-ceph-lvm-configuration.yml`
5. Add this configuration to your host_vars for the nodes (see step 2)
6. Note that the old config stanza has now been expanded with UUIDs. If you provided the stanza via group_vars, leave the group_vars untouched and integrate the entire generated configuration into the host_vars of the nodes, because UUIDs are generated _for each host_ individually.
7. After making sure that configuration is okay and synced + applied, you can run the ceph-create-lvm-devices playbook: `osism apply ceph-create-lvm-devices -e ireallymeanit=yes`
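For orientation, the generated configuration ultimately populates ceph-ansible's `lvm_volumes` key. The sketch below uses ceph-ansible's documented `lvm_volumes` fields; the VG/LV names are purely illustrative placeholders, not the names this playbook actually generates:

```yaml
lvm_volumes:
  - data: /dev/sda          # "block only" OSD without DB/WAL
  - data: /dev/sdb          # OSD with a dedicated DB LV
    db: db-lv-example       # illustrative LV name
    db_vg: db-vg-example    # illustrative VG name, backed by nvme0n1
```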

Signed-off-by: Paul-Philipp Kuschy <[email protected]>
ppkuschy authored Sep 17, 2023
1 parent 85a86e7 commit 2f6bf1b
Showing 5 changed files with 1,237 additions and 0 deletions, including:

- playbooks/ceph-configure-lvm-volumes.yml (1 addition, 0 deletions)
- playbooks/ceph-create-lvm-devices.yml (1 addition, 0 deletions)
