feat: crmsh workflow and SUSE support #186

Merged
merged 6 commits into from
Feb 9, 2024
Merged
Show file tree
Hide file tree
Changes from 3 commits
6 changes: 1 addition & 5 deletions defaults/main.yml
@@ -13,11 +13,7 @@ ha_cluster_start_on_boot: true
 
 ha_cluster_extra_packages: []
 
-ha_cluster_fence_agent_packages: "{{
-    ['fence-agents-all']
-    +
-    (['fence-virt'] if ansible_architecture == 'x86_64' else [])
-  }}"
+ha_cluster_fence_agent_packages: []
Contributor:

Suggested change:
-ha_cluster_fence_agent_packages: []
+ha_cluster_fence_agent_packages: "{{ __ha_cluster_fence_agent_packages }}"

Contributor Author:

I don't think that is the correct place, because it would expose a private variable to potential user input.

I added a conditional into main.yml yesterday to do the switch, depending on whether users added anything to it.

+
      ha_cluster_fence_agent_packages
        if ha_cluster_fence_agent_packages | length > 0
        else __ha_cluster_fence_agent_packages }}"
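For context, a sketch of how that conditional reads in full inside the package list in tasks/main.yml, abridged from the hunk later in this diff; the surrounding task fields are omitted and parentheses are added here only to make the grouping explicit:

    # Sketch, not verbatim from the PR: use the user-supplied fence agent
    # packages when any were given, otherwise fall back to the private
    # platform-specific default list.
    name: "{{
        ha_cluster_sbd_enabled | ternary(__ha_cluster_sbd_packages, [])
        +
        (ha_cluster_fence_agent_packages
         if ha_cluster_fence_agent_packages | length > 0
         else __ha_cluster_fence_agent_packages) }}"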

Contributor:

> I don't think that is the correct place, because it would expose a private variable to potential user input.

I'm not sure what you mean. I believe my proposal encapsulates the desired behavior, which is to put this in defaults/main.yml:

ha_cluster_fence_agent_packages: "{{ __ha_cluster_fence_agent_packages }}"

This allows users to provide their own list of packages for ha_cluster_fence_agent_packages (which is the desired behavior; @tomjelinek, please correct me if I'm wrong). If the user does not specify ha_cluster_fence_agent_packages, it is set to the default value __ha_cluster_fence_agent_packages. And since __ha_cluster_fence_agent_packages is defined with different values depending on the platform/version, we get the correct value of ha_cluster_fence_agent_packages for all platforms/versions.

> I added a conditional into main.yml yesterday to do the switch, depending on whether users added anything to it.
>
> +
>       ha_cluster_fence_agent_packages
>         if ha_cluster_fence_agent_packages | length > 0
>         else __ha_cluster_fence_agent_packages }}"

I think this is not the "Ansible way" to do it. I believe the correct way is the one I outlined above.
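For illustration, a minimal sketch of the proposed pattern, assuming the private per-platform values live in vars files loaded by the role; the file names and the per-platform lists below are illustrative assumptions, not taken from this PR:

    # defaults/main.yml -- public, user-overridable; resolves to the private
    # per-platform default unless the user supplies a value:
    ha_cluster_fence_agent_packages: "{{ __ha_cluster_fence_agent_packages }}"

    # vars/RedHat.yml (illustrative) -- private per-platform default, matching
    # the expression removed from defaults/main.yml above:
    __ha_cluster_fence_agent_packages: "{{ ['fence-agents-all']
      + (['fence-virt'] if ansible_architecture == 'x86_64' else []) }}"

    # vars/Suse.yml (illustrative) -- crmsh-based platforms may default to an
    # empty list:
    __ha_cluster_fence_agent_packages: []

Because role defaults have the lowest variable precedence, a user-supplied ha_cluster_fence_agent_packages simply replaces the templated default, and the private variable itself is never exposed to user input.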

Contributor Author:

@richm I have changed it in the latest commit after a conversation with @tomjelinek.

Contributor (@richm, Feb 9, 2024):

Looks like the default variable was renamed to __ha_cluster_fence_agent_packages_default, so this should be

ha_cluster_fence_agent_packages: "{{ __ha_cluster_fence_agent_packages_default }}"

Contributor Author:

@richm Changes are completed and pushed.

Contributor:

@richm, can this be marked as resolved?


 ha_cluster_hacluster_password: ""
 ha_cluster_regenerate_keys: false
4 changes: 2 additions & 2 deletions tasks/distribute-fence-virt-key.yml
@@ -4,7 +4,7 @@
   file:
     path: /etc/cluster
     state: directory
-    mode: 0755
+    mode: '0755'
 
 - name: Get fence_xvm.key
   include_tasks: presharedkey.yml
@@ -20,4 +20,4 @@
     dest: /etc/cluster/fence_xvm.key
     owner: root
     group: root
-    mode: 0600
+    mode: '0600'
3 changes: 3 additions & 0 deletions tasks/enable-repositories/Suse.yml
@@ -0,0 +1,3 @@
# SPDX-License-Identifier: MIT
---
# All required repositories are already part of SLES for SAP 15 SP5+.
14 changes: 9 additions & 5 deletions tasks/main.yml
@@ -32,8 +32,8 @@
   when:
     - ha_cluster_hacluster_password | string | length > 0
 
-- name: Configure pcs / pcsd
-  include_tasks: shell_{{ ha_cluster_pacemaker_shell }}/pcs-configure-pcs-pcsd.yml # yamllint disable-line rule:line-length
+- name: Configure shell
+  include_tasks: shell_{{ ha_cluster_pacemaker_shell }}/configure-shell.yml # yamllint disable-line rule:line-length
 
 - name: Configure firewall and selinux
   when: ha_cluster_cluster_present | bool or ha_cluster_qnetd.present | d(false)
@@ -58,7 +58,8 @@
         ha_cluster_sbd_enabled | ternary(__ha_cluster_sbd_packages, [])
         +
         ha_cluster_fence_agent_packages
-      }}"
+        if ha_cluster_fence_agent_packages | length > 0
+        else __ha_cluster_fence_agent_packages }}"
     state: present
     use: "{{ (__ha_cluster_is_ostree | d(false)) |
       ternary('ansible.posix.rhel_rpm_ostree', omit) }}"
@@ -74,10 +75,10 @@
 - name: Configure corosync
   include_tasks: shell_{{ ha_cluster_pacemaker_shell }}/cluster-setup-corosync.yml # yamllint disable-line rule:line-length
 
-- name: Pcs auth
+- name: Cluster auth
   # Auth is run after corosync.conf has been distributed so that pcs
   # distributes pcs tokens in the cluster automatically.
-  include_tasks: shell_{{ ha_cluster_pacemaker_shell }}/pcs-auth.yml
+  include_tasks: shell_{{ ha_cluster_pacemaker_shell }}/cluster-auth.yml
 
 - name: Distribute cluster shared keys
   # This is run after pcs auth, so that the nodes are authenticated against
@@ -93,6 +94,9 @@
 
 - name: Create and push CIB
   include_tasks: shell_{{ ha_cluster_pacemaker_shell }}/create-and-push-cib.yml # yamllint disable-line rule:line-length
+  # CIB changes should be done only on one of the cluster nodes to avoid
+  # corruption and inconsistency of the resulting cibadmin patch file.
+  run_once: true
 
 - name: Remove cluster configuration
   when: not ha_cluster_cluster_present
Empty file removed tasks/shell_crmsh/.gitkeep
Empty file.
127 changes: 127 additions & 0 deletions tasks/shell_crmsh/check-and-prepare-role-variables.yml
@@ -0,0 +1,127 @@
# SPDX-License-Identifier: MIT
---
- name: Check cluster configuration variables
block:
- name: Fail if passwords are not specified
ansible.builtin.fail:
msg: "{{ item }} must be specified"
when:
- lookup("vars", item, default="") | string | length < 1
- ha_cluster_cluster_present | bool
loop:
- ha_cluster_hacluster_password
run_once: true

- name: Fail if nodes do not have the same number of SBD devices specified
ansible.builtin.fail:
msg: All nodes must have the same number of SBD devices specified
when:
- ha_cluster_cluster_present | bool
- ha_cluster_sbd_enabled | bool
- >
ansible_play_hosts
| map('extract', hostvars, ['ha_cluster', 'sbd_devices'])
| map('default', [], true)
| map('length') | unique | length > 1
run_once: true

# Running a qnetd on a cluster node doesn't make sense; fencing would make
# the qnetd unavailable, even if temporarily.
- name: Fail if configuring qnetd on a cluster node
ansible.builtin.fail:
msg: >
Qnetd cannot be configured on a cluster node -
'ha_cluster_cluster_present' and 'ha_cluster_qnetd.present' cannot
be both set to true
when:
- ha_cluster_cluster_present | bool
- ha_cluster_qnetd.present | d(false)

- name: Fail if no valid level is specified for a fencing level
ansible.builtin.fail:
msg: Specify 'level' 1..9 for each fencing level
when:
- not((item.level | d() | int) > 0 and (item.level | d() | int) < 10)
loop: "{{ ha_cluster_stonith_levels }}"
run_once: true

- name: Fail if no target is specified for a fencing level
ansible.builtin.fail:
msg: >
Specify exactly one of 'target', 'target_pattern', 'target_attribute'
for each fencing level
when:
- >
[item.target is defined,
item.target_pattern is defined,
item.target_attribute is defined]
| select | list | length != 1
loop: "{{ ha_cluster_stonith_levels }}"
run_once: true

- name: Collect service information
ansible.builtin.service_facts:

- name: Assert that required services are available
ansible.builtin.assert:
that: "'{{ item }}' in ansible_facts.services"
fail_msg: >-
The service '{{ item }}' was not found on this system. Ensure that this
service is available before running this role.
success_msg: >-
The service '{{ item }}' was discovered on this system.
loop:
- 'logd.service'

- name: Discover cluster node names
ansible.builtin.set_fact:
__ha_cluster_node_name: "{{ ha_cluster.node_name | d(inventory_hostname) }}"

- name: Collect cluster node names
ansible.builtin.set_fact:
__ha_cluster_all_node_names: "{{
ansible_play_hosts
| map('extract', hostvars, '__ha_cluster_node_name')
| list
}}"

- name: Extract qdevice settings
ansible.builtin.set_fact:
__ha_cluster_qdevice_in_use: "{{ 'device' in ha_cluster_quorum }}"
__ha_cluster_qdevice_model: "{{ ha_cluster_quorum.device.model | d('') }}"
# This may set an empty value if it is not defined. Such a value is not valid.
# It will be caught by crm validation before we try using it in the role.
__ha_cluster_qdevice_host: "{{
ha_cluster_quorum.device.model_options | d([])
| selectattr('name', 'match', '^host$')
| map(attribute='value') | list | last | d('')
}}"
__ha_cluster_qdevice_crm_address: "{{
ha_cluster_quorum.device.model_options | d([])
| selectattr('name', 'match', '^crm-address$')
| map(attribute='value') | list | last | d('')
}}"

- name: Figure out if ATB needs to be enabled for SBD
ansible.builtin.set_fact:
# SBD needs ATB enabled if all of these are true:
# - sbd does not use devices (In check-and-prepare-role-variables.yml it
# is verified that all nodes have the same number of devices defined.
# Therefore it is enough to check devices of any single node.)
# - number of nodes is even
# - qdevice is not used
__ha_cluster_sbd_needs_atb: "{{
ha_cluster_sbd_enabled
and not ha_cluster.sbd_devices | d([])
and __ha_cluster_all_node_names | length is even
and not __ha_cluster_qdevice_in_use
}}"

- name: Fail if SBD needs ATB enabled and the user configured ATB to be disabled
ansible.builtin.fail:
msg: Cannot set auto_tie_breaker to disabled when SBD needs it to be enabled
when:
- __ha_cluster_sbd_needs_atb | bool
- ha_cluster_quorum.options | d([])
| selectattr('name', 'match', '^auto_tie_breaker$')
| map(attribute='value') | select('in', ['0', 0]) | list | length > 0
4 changes: 4 additions & 0 deletions tasks/shell_crmsh/cluster-auth.yml
@@ -0,0 +1,4 @@
# SPDX-License-Identifier: MIT
---
# Placeholder for potential auth tasks for crmsh
# There are no authentication steps for crmsh currently.
63 changes: 63 additions & 0 deletions tasks/shell_crmsh/cluster-destroy-crm.yml
@@ -0,0 +1,63 @@
# SPDX-License-Identifier: MIT
---
- name: Get stat of cluster configuration files
ansible.builtin.stat:
path: "{{ item }}"
loop:
- /etc/corosync/corosync.conf
- /var/lib/pacemaker/cib/cib.xml
register: __ha_cluster_config_files_stat

- name: Stop cluster
ansible.builtin.command:
cmd: crm cluster stop --all
when: not __ha_cluster_config_files_stat.results |
selectattr('stat.exists', 'equalto', false) | list | length > 0
changed_when: true

- name: Stop cluster daemons
ansible.builtin.service:
name: "{{ item }}"
state: stopped # noqa no-handler
loop:
- pacemaker
- corosync
- corosync-qdevice

- name: Back up configuration files by copying them to /root with a _backup suffix
ansible.builtin.copy:
src: "{{ config_file.item }}"
dest: "/root/{{ config_file.stat.path | basename }}_backup"
owner: root
group: root
mode: '0600'
remote_src: true
backup: true
loop: "{{ __ha_cluster_config_files_stat.results }}"
loop_control:
loop_var: config_file
when: config_file.stat.exists

- name: Remove cluster configuration files
ansible.builtin.file:
path: "{{ config_file.item }}"
state: absent
loop: "{{ __ha_cluster_config_files_stat.results }}"
loop_control:
loop_var: config_file
when: config_file.stat.exists

- name: Find all files in /var/lib/pacemaker/cib/
ansible.builtin.find:
paths: /var/lib/pacemaker/cib
recurse: true
patterns:
- 'cib*'
- 'shadow*'
register: __ha_cluster_cib_files

- name: Remove all files in /var/lib/pacemaker/cib/
ansible.builtin.file:
path: "{{ item.path }}"
state: absent
loop: "{{ __ha_cluster_cib_files.files }}"
53 changes: 53 additions & 0 deletions tasks/shell_crmsh/cluster-setup-corosync.yml
@@ -0,0 +1,53 @@
# SPDX-License-Identifier: MIT
---
- name: Create a corosync.conf tempfile
ansible.builtin.tempfile:
state: file
suffix: _ha_cluster_corosync_conf
register: __ha_cluster_tempfile_corosync_conf
run_once: true # noqa: run_once[task]
# We always need to create corosync.conf file to see whether it's the same as
# what is already present on the cluster nodes. However, we don't want to
# report it as a change since the only thing which matters is copying the
# resulting corosync.conf to cluster nodes.
check_mode: false
changed_when: not ansible_check_mode

- name: Generate corosync.conf using template
ansible.builtin.template:
src: crmsh_corosync.j2
dest: "{{ __ha_cluster_tempfile_corosync_conf.path }}"
owner: root
group: root
mode: '0644'
run_once: true # noqa: run_once[task]

- name: Fetch created corosync.conf file
ansible.builtin.slurp:
src: "{{ __ha_cluster_tempfile_corosync_conf.path }}"
register: __ha_cluster_data_corosync_conf
run_once: true # noqa: run_once[task]
when: __ha_cluster_tempfile_corosync_conf.path is defined

- name: Distribute corosync.conf file
ansible.builtin.copy:
content: "{{ __ha_cluster_data_corosync_conf['content'] | b64decode }}"
dest: /etc/corosync/corosync.conf
owner: root
group: root
mode: '0644'
register: __ha_cluster_distribute_corosync_conf
when: __ha_cluster_data_corosync_conf is defined

- name: Remove a corosync.conf tempfile
ansible.builtin.file:
path: "{{ __ha_cluster_tempfile_corosync_conf.path }}"
state: absent
when: __ha_cluster_tempfile_corosync_conf.path is defined
run_once: true # noqa: run_once[task]
# We always need to create corosync.conf file to see whether it's the same as
# what is already present on the cluster nodes. However, we don't want to
# report it as a change since the only thing which matters is copying the
# resulting corosync.conf to cluster nodes.
check_mode: false
changed_when: not ansible_check_mode
37 changes: 37 additions & 0 deletions tasks/shell_crmsh/cluster-setup-keys.yml
@@ -0,0 +1,37 @@
# SPDX-License-Identifier: MIT
---
- name: Get corosync authkey
ansible.builtin.include_tasks: ../presharedkey.yml
vars:
preshared_key_label: corosync authkey
preshared_key_src: "{{ ha_cluster_corosync_key_src }}"
preshared_key_dest: /etc/corosync/authkey
preshared_key_length: 256

- name: Distribute corosync authkey
ansible.builtin.copy:
content: "{{ __ha_cluster_some_preshared_key | b64decode }}"
dest: /etc/corosync/authkey
owner: root
group: root
mode: '0400'
register: __ha_cluster_distribute_corosync_authkey
no_log: true

- name: Get pacemaker authkey
ansible.builtin.include_tasks: ../presharedkey.yml
vars:
preshared_key_label: pacemaker authkey
preshared_key_src: "{{ ha_cluster_pacemaker_key_src }}"
preshared_key_dest: /etc/pacemaker/authkey
preshared_key_length: 256

- name: Distribute pacemaker authkey
ansible.builtin.copy:
content: "{{ __ha_cluster_some_preshared_key | b64decode }}"
dest: /etc/pacemaker/authkey
owner: hacluster
group: haclient
mode: '0400'
register: __ha_cluster_distribute_pacemaker_authkey
no_log: true