diff --git a/.README.html b/.README.html
index 1d549e16..a60bca30 100644
--- a/.README.html
+++ b/.README.html
@@ -207,6 +207,8 @@
ha_cluster_resource_defaults
ha_cluster_resource_operation_defaults
ha_cluster_stonith_levels
ha_cluster_constraints_location
ha_cluster_sbd_options
See the resource op defaults set create section of the pcs(8) man page.
ha_cluster_stonith_levels
structure, default: no stonith levels
+ha_cluster_stonith_levels:
+ - level: 1..9
+ target: node_name
+ target_pattern: node_name_regular_expression
+ target_attribute: node_attribute_name
+ target_value: node_attribute_value
+ resource_ids:
+ - fence_device_1
+ - fence_device_2
+ - level: 1..9
+ target: node_name
+ target_pattern: node_name_regular_expression
+ target_attribute: node_attribute_name
+ target_value: node_attribute_value
+ resource_ids:
+ - fence_device_1
+ - fence_device_2
This variable defines stonith levels, also known as fencing topology. They configure the cluster to use multiple devices to fence nodes. You may define alternative devices in case one fails, or require multiple devices to all be executed successfully in order to consider a node successfully fenced, or even a combination of the two.
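A minimal sketch of the alternative-devices case described above. The fence device IDs and node name here are hypothetical, assuming the corresponding stonith resources are defined elsewhere:

```yaml
# Hypothetical devices fence_kdump_1 and fence_ipmi_1: try the kdump device
# first; only if it fails, fall back to the IPMI device at the next level.
ha_cluster_stonith_levels:
  - level: 1
    target: node1
    resource_ids:
      - fence_kdump_1
  - level: 2
    target: node1
    resource_ids:
      - fence_ipmi_1
```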
The items are as follows:
level (mandatory) - Order in which to attempt the levels. Levels are attempted in ascending order until one succeeds.
target (optional) - Name of a node this level applies to.
target_pattern (optional) - Regular expression (as defined in POSIX) matching names of nodes this level applies to.
target_attribute and target_value (optional) - Name and value of a node attribute that is set for nodes this level applies to.
Precisely one of target, target_pattern, target_attribute must be specified.
resource_ids (mandatory) - List of stonith resources that must all be tried for this level.
ha_cluster_constraints_location
structure, default: no constraints
This variable defines resource location constraints. They tell the cluster which nodes a resource can run on.
@@ -1016,17 +1065,17 @@
Structure for constraints with resource ID and node name:
-ha_cluster_constraints_location:
- - resource:
- id: resource-id
- node: node-name
- id: constraint-id
- options:
- - name: score
- value: score-value
- - name: option-name
- value: option-value
ha_cluster_constraints_location:
+ - resource:
+ id: resource-id
+ node: node-name
+ id: constraint-id
+ options:
+ - name: score
+ value: score-value
+ - name: option-name
+ value: option-value
resource (mandatory) - Specification of a resource the constraint applies to.
You may take a look at an example.
Structure for constraints with resource pattern and node name:
-ha_cluster_constraints_location:
- - resource:
- pattern: resource-pattern
- node: node-name
- id: constraint-id
- options:
- - name: score
- value: score-value
- - name: resource-discovery
- value: resource-discovery-value
ha_cluster_constraints_location:
+ - resource:
+ pattern: resource-pattern
+ node: node-name
+ id: constraint-id
+ options:
+ - name: score
+ value: score-value
+ - name: resource-discovery
+ value: resource-discovery-value
You may take a look at an example.
Structure for constraints with resource ID and a rule:
-ha_cluster_constraints_location:
- - resource:
- id: resource-id
- role: resource-role
- rule: rule-string
- id: constraint-id
- options:
- - name: score
- value: score-value
- - name: resource-discovery
- value: resource-discovery-value
ha_cluster_constraints_location:
+ - resource:
+ id: resource-id
+ role: resource-role
+ rule: rule-string
+ id: constraint-id
+ options:
+ - name: score
+ value: score-value
+ - name: resource-discovery
+ value: resource-discovery-value
resource (mandatory) - Specification of a resource the constraint applies to.
@@ -1100,18 +1149,18 @@ ha_cluster_sbd_options
You may take a look at an example.
Structure for constraints with resource pattern and a rule:
-ha_cluster_constraints_location:
- - resource:
- pattern: resource-pattern
- role: resource-role
- rule: rule-string
- id: constraint-id
- options:
- - name: score
- value: score-value
- - name: resource-discovery
- value: resource-discovery-value
ha_cluster_constraints_location:
+ - resource:
+ pattern: resource-pattern
+ role: resource-role
+ rule: rule-string
+ id: constraint-id
+ options:
+ - name: score
+ value: score-value
+ - name: resource-discovery
+ value: resource-discovery-value
Structure for simple constraints:
-ha_cluster_constraints_colocation:
- - resource_follower:
- id: resource-id1
- role: resource-role1
- resource_leader:
- id: resource-id2
- role: resource-role2
- id: constraint-id
- options:
- - name: score
- value: score-value
- - name: option-name
- value: option-value
ha_cluster_constraints_colocation:
+ - resource_follower:
+ id: resource-id1
+ role: resource-role1
+ resource_leader:
+ id: resource-id2
+ role: resource-role2
+ id: constraint-id
+ options:
+ - name: score
+ value: score-value
+ - name: option-name
+ value: option-value
resource_follower (mandatory) - A resource that should be located relative to resource_leader.
@@ -1180,21 +1229,21 @@ ha_cluster_sbd_options
You may take a look at an example.
Structure for set constraints:
-ha_cluster_constraints_colocation:
- - resource_sets:
- - resource_ids:
- - resource-id1
- - resource-id2
- options:
- - name: option-name
- value: option-value
- id: constraint-id
- options:
- - name: score
- value: score-value
- - name: option-name
- value: option-value
ha_cluster_constraints_colocation:
+ - resource_sets:
+ - resource_ids:
+ - resource-id1
+ - resource-id2
+ options:
+ - name: option-name
+ value: option-value
+ id: constraint-id
+ options:
+ - name: score
+ value: score-value
+ - name: option-name
+ value: option-value
resource_sets (mandatory) - List of resource sets.
Structure for simple constraints:
-ha_cluster_constraints_order:
- - resource_first:
- id: resource-id1
- action: resource-action1
- resource_then:
- id: resource-id2
- action: resource-action2
- id: constraint-id
- options:
- - name: score
- value: score-value
- - name: option-name
- value: option-value
ha_cluster_constraints_order:
+ - resource_first:
+ id: resource-id1
+ action: resource-action1
+ resource_then:
+ id: resource-id2
+ action: resource-action2
+ id: constraint-id
+ options:
+ - name: score
+ value: score-value
+ - name: option-name
+ value: option-value
resource_first (mandatory) - Resource that the resource_then depends on.
@@ -1257,21 +1306,21 @@ ha_cluster_sbd_options
You may take a look at an example.
Structure for set constraints:
-ha_cluster_constraints_order:
- - resource_sets:
- - resource_ids:
- - resource-id1
- - resource-id2
- options:
- - name: option-name
- value: option-value
- id: constraint-id
- options:
- - name: score
- value: score-value
- - name: option-name
- value: option-value
ha_cluster_constraints_order:
+ - resource_sets:
+ - resource_ids:
+ - resource-id1
+ - resource-id2
+ options:
+ - name: option-name
+ value: option-value
+ id: constraint-id
+ options:
+ - name: score
+ value: score-value
+ - name: option-name
+ value: option-value
resource_sets (mandatory) - List of resource sets.
Structure for simple constraints:
-ha_cluster_constraints_ticket:
- - resource:
- id: resource-id
- role: resource-role
- ticket: ticket-name
- id: constraint-id
- options:
- - name: loss-policy
- value: loss-policy-value
- - name: option-name
- value: option-value
ha_cluster_constraints_ticket:
+ - resource:
+ id: resource-id
+ role: resource-role
+ ticket: ticket-name
+ id: constraint-id
+ options:
+ - name: loss-policy
+ value: loss-policy-value
+ - name: option-name
+ value: option-value
resource (mandatory) - Specification of a resource the constraint applies to.
@@ -1328,20 +1377,20 @@ ha_cluster_sbd_options
You may take a look at an example.
Structure for set constraints:
-ha_cluster_constraints_ticket:
- - resource_sets:
- - resource_ids:
- - resource-id1
- - resource-id2
- options:
- - name: option-name
- value: option-value
- ticket: ticket-name
- id: constraint-id
- options:
- - name: option-name
- value: option-value
ha_cluster_constraints_ticket:
+ - resource_sets:
+ - resource_ids:
+ - resource-id1
+ - resource-id2
+ options:
+ - name: option-name
+ value: option-value
+ ticket: ticket-name
+ id: constraint-id
+ options:
+ - name: option-name
+ value: option-value
resource_sets (mandatory) - List of resource sets.
ha_cluster_qnetd
structure and default value:
-ha_cluster_qnetd:
- present: boolean
- start_on_boot: boolean
- regenerate_keys: boolean
ha_cluster_qnetd:
+ present: boolean
+ start_on_boot: boolean
+ regenerate_keys: boolean
This configures a qnetd host which can then serve as an external quorum device for clusters. The items are as follows:
present (optional) - If true, set up a qnetd instance on the host; if false, remove it.
start_on_boot (optional) - Whether the qnetd instance should start automatically on boot.
regenerate_keys (optional) - If true, regenerate the qnetd TLS certificate and key.
Example inventory with targets node1 and node2:
all:
- hosts:
- node1:
- ha_cluster:
- node_name: node-A
- pcs_address: node1-address
- corosync_addresses:
- - 192.168.1.11
- - 192.168.2.11
- node2:
- ha_cluster:
- node_name: node-B
- pcs_address: node2-address:2224
- corosync_addresses:
- - 192.168.1.12
- - 192.168.2.12
all:
+ hosts:
+ node1:
+ ha_cluster:
+ node_name: node-A
+ pcs_address: node1-address
+ corosync_addresses:
+ - 192.168.1.11
+ - 192.168.2.11
+ node2:
+ ha_cluster:
+ node_name: node-B
+ pcs_address: node2-address:2224
+ corosync_addresses:
+ - 192.168.1.12
+ - 192.168.2.12
node_name - the name of a node in a cluster
pcs_address - an address used by pcs to communicate with the node
@@ -1426,28 +1475,28 @@
Example inventory with targets node1 and node2:
all:
- hosts:
- node1:
- ha_cluster:
- sbd_watchdog_modules:
- - module1
- - module2
- sbd_watchdog: /dev/watchdog2
- sbd_devices:
- - /dev/vdx
- - /dev/vdy
- node2:
- ha_cluster:
- sbd_watchdog_modules:
- - module1
- sbd_watchdog_modules_blocklist:
- - module2
- sbd_watchdog: /dev/watchdog1
- sbd_devices:
- - /dev/vdw
- - /dev/vdz
all:
+ hosts:
+ node1:
+ ha_cluster:
+ sbd_watchdog_modules:
+ - module1
+ - module2
+ sbd_watchdog: /dev/watchdog2
+ sbd_devices:
+ - /dev/vdx
+ - /dev/vdy
+ node2:
+ ha_cluster:
+ sbd_watchdog_modules:
+ - module1
+ sbd_watchdog_modules_blocklist:
+ - module2
+ sbd_watchdog: /dev/watchdog1
+ sbd_devices:
+ - /dev/vdw
+ - /dev/vdz
sbd_watchdog_modules (optional) - Watchdog kernel modules to be loaded (creates /dev/watchdog* devices).
@@ -1472,476 +1521,524 @@
true in your playbooks using the ha_cluster role.
-- name: Manage HA cluster and firewall and selinux
- hosts: node1 node2
- vars:
- ha_cluster_manage_firewall: true
- ha_cluster_manage_selinux: true
-
- roles:
- - linux-system-roles.ha_cluster
- name: Manage HA cluster and firewall and selinux
+ hosts: node1 node2
+ vars:
+ ha_cluster_manage_firewall: true
+ ha_cluster_manage_selinux: true
+
+ roles:
+ - linux-system-roles.ha_cluster
certificate role. This example creates self-signed pcsd certificate and private key files in /var/lib/pcsd with the file names FILENAME.crt and FILENAME.key, respectively.
-- name: Manage HA cluster with certificates
- hosts: node1 node2
- vars:
- ha_cluster_pcsd_certificates:
- - name: FILENAME
- common_name: "{{ ansible_hostname }}"
- ca: self-sign
- roles:
- - linux-system-roles.ha_cluster
- name: Manage HA cluster with no resources
+- name: Manage HA cluster with certificates
hosts: node1 node2
vars:
- ha_cluster_cluster_name: my-new-cluster
- ha_cluster_hacluster_password: password
-
- roles:
- - linux-system-roles.ha_cluster
- name: Manage HA cluster with Corosync options
+- name: Manage HA cluster with no resources
hosts: node1 node2
vars:
ha_cluster_cluster_name: my-new-cluster
ha_cluster_hacluster_password: password
- ha_cluster_transport:
- type: knet
- options:
- - name: ip_version
- value: ipv4-6
- - name: link_mode
- value: active
- links:
- -
- - name: linknumber
- value: 1
- - name: link_priority
- value: 5
- -
- - name: linknumber
- value: 0
- - name: link_priority
- value: 10
- compression:
- - name: level
- value: 5
- - name: model
- value: zlib
- crypto:
- - name: cipher
- value: none
- - name: hash
- value: none
- ha_cluster_totem:
- options:
- - name: block_unlisted_ips
- value: 'yes'
- - name: send_join
- value: 0
- ha_cluster_quorum:
- options:
- - name: auto_tie_breaker
- value: 1
- - name: wait_for_all
- value: 1
-
- roles:
- - linux-system-roles.ha_cluster
- name: Manage HA cluster with Corosync options
+ hosts: node1 node2
+ vars:
+ ha_cluster_cluster_name: my-new-cluster
+ ha_cluster_hacluster_password: password
+ ha_cluster_transport:
+ type: knet
+ options:
+ - name: ip_version
+ value: ipv4-6
+ - name: link_mode
+ value: active
+ links:
+ -
+ - name: linknumber
+ value: 1
+ - name: link_priority
+ value: 5
+ -
+ - name: linknumber
+ value: 0
+ - name: link_priority
+ value: 10
+ compression:
+ - name: level
+ value: 5
+ - name: model
+ value: zlib
+ crypto:
+ - name: cipher
+ value: none
+ - name: hash
+ value: none
+ ha_cluster_totem:
+ options:
+ - name: block_unlisted_ips
+ value: 'yes'
+ - name: send_join
+ value: 0
+ ha_cluster_quorum:
+ options:
+ - name: auto_tie_breaker
+ value: 1
+ - name: wait_for_all
+ value: 1
+
+ roles:
+ - linux-system-roles.ha_cluster
These variables need to be set in inventory or via host_vars. Of course, the SBD kernel modules and device path might differ depending on your setup.
all:
- hosts:
- node1:
- ha_cluster:
- sbd_watchdog_modules:
- - iTCO_wdt
- sbd_watchdog_modules_blocklist:
- - ipmi_watchdog
- sbd_watchdog: /dev/watchdog1
- sbd_devices:
- - /dev/vdx
- - /dev/vdy
- - /dev/vdz
- node2:
- ha_cluster:
- sbd_watchdog_modules:
- - iTCO_wdt
- sbd_watchdog_modules_blocklist:
- - ipmi_watchdog
- sbd_watchdog: /dev/watchdog1
- sbd_devices:
- - /dev/vdx
- - /dev/vdy
- - /dev/vdz
all:
+ hosts:
+ node1:
+ ha_cluster:
+ sbd_watchdog_modules:
+ - iTCO_wdt
+ sbd_watchdog_modules_blocklist:
+ - ipmi_watchdog
+ sbd_watchdog: /dev/watchdog1
+ sbd_devices:
+ - /dev/vdx
+ - /dev/vdy
+ - /dev/vdz
+ node2:
+ ha_cluster:
+ sbd_watchdog_modules:
+ - iTCO_wdt
+ sbd_watchdog_modules_blocklist:
+ - ipmi_watchdog
+ sbd_watchdog: /dev/watchdog1
+ sbd_devices:
+ - /dev/vdx
+ - /dev/vdy
+ - /dev/vdz
After setting the inventory correctly, use this playbook to configure a complete SBD setup including loading watchdog modules and creating the SBD stonith resource.
-- hosts: node1 node2
- vars:
- ha_cluster_cluster_name: my-new-cluster
- ha_cluster_hacluster_password: password
- ha_cluster_sbd_enabled: true
- ha_cluster_sbd_options:
- - name: delay-start
- value: 'no'
- - name: startmode
- value: always
- - name: timeout-action
- value: 'flush,reboot'
- - name: watchdog-timeout
- value: 30
- # Best practice for setting SBD timeouts:
- # watchdog-timeout * 2 = msgwait-timeout (set automatically)
- # msgwait-timeout * 1.2 = stonith-timeout
- ha_cluster_cluster_properties:
- - attrs:
- - name: stonith-timeout
- value: 72
- ha_cluster_resource_primitives:
- - id: fence_sbd
- agent: 'stonith:fence_sbd'
- instance_attrs:
- - attrs:
- # taken from host_vars
- - name: devices
- value: "{{ ha_cluster.sbd_devices | join(',') }}"
- - name: pcmk_delay_base
- value: 30
-
- roles:
- - linux-system-roles.ha_cluster
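Plugging the values from this example into the best-practice rule quoted in the playbook comments (a worked sketch, using the watchdog-timeout of 30 set above):

```yaml
# watchdog-timeout = 30
# msgwait-timeout  = 30 * 2   = 60   # set automatically by SBD
# stonith-timeout  = 60 * 1.2 = 72   # the value set in ha_cluster_cluster_properties above
```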
- hosts: node1 node2
vars:
ha_cluster_cluster_name: my-new-cluster
ha_cluster_hacluster_password: password
- ha_cluster_cluster_properties:
- - attrs:
- - name: stonith-enabled
- value: 'true'
- - name: no-quorum-policy
- value: stop
-
- roles:
- - linux-system-roles.ha_cluster
- hosts: node1 node2
vars:
ha_cluster_cluster_name: my-new-cluster
ha_cluster_hacluster_password: password
- ha_cluster_resource_primitives:
- - id: xvm-fencing
- agent: 'stonith:fence_xvm'
- instance_attrs:
- - attrs:
- - name: pcmk_host_list
- value: node1 node2
- - id: simple-resource
- # wokeignore:rule=dummy
- agent: 'ocf:pacemaker:Dummy'
- - id: resource-with-options
- # wokeignore:rule=dummy
- agent: 'ocf:pacemaker:Dummy'
- instance_attrs:
- - attrs:
- - name: fake
- value: fake-value
- - name: passwd
- value: passwd-value
- meta_attrs:
- - attrs:
- - name: target-role
- value: Started
- - name: is-managed
- value: 'true'
- operations:
- - action: start
- attrs:
- - name: timeout
- value: '30s'
- - action: monitor
- attrs:
- - name: timeout
- value: '5'
- - name: interval
- value: '1min'
- - id: example-1
- # wokeignore:rule=dummy
- agent: 'ocf:pacemaker:Dummy'
- - id: example-2
- # wokeignore:rule=dummy
- agent: 'ocf:pacemaker:Dummy'
- - id: example-3
- # wokeignore:rule=dummy
- agent: 'ocf:pacemaker:Dummy'
- - id: simple-clone
- # wokeignore:rule=dummy
- agent: 'ocf:pacemaker:Dummy'
- - id: clone-with-options
- # wokeignore:rule=dummy
- agent: 'ocf:pacemaker:Dummy'
- - id: bundled-resource
- # wokeignore:rule=dummy
- agent: 'ocf:pacemaker:Dummy'
- ha_cluster_resource_groups:
- - id: simple-group
- resource_ids:
- - example-1
- - example-2
- meta_attrs:
- - attrs:
- - name: target-role
- value: Started
- - name: is-managed
- value: 'true'
- - id: cloned-group
- resource_ids:
- - example-3
- ha_cluster_resource_clones:
- - resource_id: simple-clone
- - resource_id: clone-with-options
- promotable: true
- id: custom-clone-id
- meta_attrs:
- - attrs:
- - name: clone-max
- value: '2'
- - name: clone-node-max
- value: '1'
- - resource_id: cloned-group
- promotable: true
- ha_cluster_resource_bundles:
- - id: bundle-with-resource
- resource-id: bundled-resource
- container:
- type: podman
- options:
- - name: image
- value: my:image
- network_options:
- - name: control-port
- value: 3121
- port_map:
- -
- - name: port
- value: 10001
- -
- - name: port
- value: 10002
- - name: internal-port
- value: 10003
- storage_map:
- -
- - name: source-dir
- value: /srv/daemon-data
- - name: target-dir
- value: /var/daemon/data
- -
- - name: source-dir-root
- value: /var/log/pacemaker/bundles
- - name: target-dir
- value: /var/log/daemon
- meta_attrs:
- - attrs:
- - name: target-role
- value: Started
- - name: is-managed
- value: 'true'
-
- roles:
- - linux-system-roles.ha_cluster
- hosts: node1 node2
vars:
ha_cluster_cluster_name: my-new-cluster
ha_cluster_hacluster_password: password
- # Set a different `resource-stickiness` value during and outside work
- # hours. This allows resources to automatically move back to their most
- # preferred hosts, but at a time that (in theory) does not interfere with
- # business activities.
- ha_cluster_resource_defaults:
- meta_attrs:
- - id: core-hours
- rule: date-spec hours=9-16 weekdays=1-5
- score: 2
- attrs:
- - name: resource-stickiness
- value: INFINITY
- - id: after-hours
- score: 1
- attrs:
- - name: resource-stickiness
- value: 0
- # Default the timeout on all 10-second-interval monitor actions on IPaddr2
- # resources to 8 seconds.
- ha_cluster_resource_operation_defaults:
- meta_attrs:
- - rule: resource ::IPaddr2 and op monitor interval=10s
- score: INFINITY
- attrs:
- - name: timeout
- value: 8s
-
- roles:
- - linux-system-roles.ha_cluster
- hosts: node1 node2
vars:
ha_cluster_cluster_name: my-new-cluster
ha_cluster_hacluster_password: password
- # In order to use constraints, we need resources the constraints will apply
- # to.
- ha_cluster_resource_primitives:
- - id: xvm-fencing
- agent: 'stonith:fence_xvm'
- instance_attrs:
- - attrs:
- - name: pcmk_host_list
- value: node1 node2
- - id: example-1
- # wokeignore:rule=dummy
- agent: 'ocf:pacemaker:Dummy'
- - id: example-2
- # wokeignore:rule=dummy
- agent: 'ocf:pacemaker:Dummy'
- - id: example-3
- # wokeignore:rule=dummy
- agent: 'ocf:pacemaker:Dummy'
- - id: example-4
- # wokeignore:rule=dummy
- agent: 'ocf:pacemaker:Dummy'
- - id: example-5
- # wokeignore:rule=dummy
- agent: 'ocf:pacemaker:Dummy'
- - id: example-6
- # wokeignore:rule=dummy
- agent: 'ocf:pacemaker:Dummy'
- # location constraints
- ha_cluster_constraints_location:
- # resource ID and node name
- - resource:
- id: example-1
- node: node1
- options:
- - name: score
- value: 20
- # resource pattern and node name
- - resource:
- pattern: example-\d+
- node: node1
- options:
- - name: score
- value: 10
- # resource ID and rule
- - resource:
- id: example-2
- rule: '#uname eq node2 and date in_range 2022-01-01 to 2022-02-28'
- # resource pattern and rule
- - resource:
- pattern: example-\d+
- rule: node-type eq weekend and date-spec weekdays=6-7
- # colocation constraints
- ha_cluster_constraints_colocation:
- # simple constraint
- - resource_leader:
- id: example-3
- resource_follower:
- id: example-4
- options:
- - name: score
- value: -5
- # set constraint
- - resource_sets:
- - resource_ids:
- - example-1
- - example-2
- - resource_ids:
- - example-5
- - example-6
- options:
- - name: sequential
- value: "false"
- options:
- - name: score
- value: 20
- # order constraints
- ha_cluster_constraints_order:
- # simple constraint
- - resource_first:
- id: example-1
- resource_then:
- id: example-6
- options:
- - name: symmetrical
- value: "false"
- # set constraint
- - resource_sets:
- - resource_ids:
- - example-1
- - example-2
- options:
- - name: require-all
- value: "false"
- - name: sequential
- value: "false"
- - resource_ids:
- - example-3
- - resource_ids:
- - example-4
- - example-5
- options:
- - name: sequential
- value: "false"
- # ticket constraints
- ha_cluster_constraints_ticket:
- # simple constraint
- - resource:
- id: example-1
- ticket: ticket1
- options:
- - name: loss-policy
- value: stop
- # set constraint
- - resource_sets:
- - resource_ids:
- - example-3
- - example-4
- - example-5
- ticket: ticket2
- options:
- - name: loss-policy
- value: fence
-
- roles:
- - linux-system-roles.ha_cluster
- hosts: node1 node2
+ vars:
+ ha_cluster_cluster_name: my-new-cluster
+ ha_cluster_hacluster_password: password
+ ha_cluster_resource_primitives:
+ - id: apc1
+ agent: 'stonith:fence_apc_snmp'
+ instance_attrs:
+ - attrs:
+ - name: ip
+ value: apc1.example.com
+ - name: username
+ value: user
+ - name: password
+ value: secret
+ - name: pcmk_host_map
+ value: node1:1;node2:2
+ - id: apc2
+ agent: 'stonith:fence_apc_snmp'
+ instance_attrs:
+ - attrs:
+ - name: ip
+ value: apc2.example.com
+ - name: username
+ value: user
+ - name: password
+ value: secret
+ - name: pcmk_host_map
+ value: node1:1;node2:2
+ # Nodes have redundant power supplies, apc1 and apc2. Cluster must ensure
+ # that when attempting to reboot a node, both power supplies are turned off
+ # before either power supply is turned back on.
+ ha_cluster_stonith_levels:
+ - level: 1
+ target: node1
+ resource_ids:
+ - apc1
+ - apc2
+ - level: 1
+ target: node2
+ resource_ids:
+ - apc1
+ - apc2
+
+ roles:
+ - linux-system-roles.ha_cluster
- hosts: node1 node2
+ vars:
+ ha_cluster_cluster_name: my-new-cluster
+ ha_cluster_hacluster_password: password
+ # In order to use constraints, we need resources the constraints will apply
+ # to.
+ ha_cluster_resource_primitives:
+ - id: xvm-fencing
+ agent: 'stonith:fence_xvm'
+ instance_attrs:
+ - attrs:
+ - name: pcmk_host_list
+ value: node1 node2
+ - id: example-1
+ # wokeignore:rule=dummy
+ agent: 'ocf:pacemaker:Dummy'
+ - id: example-2
+ # wokeignore:rule=dummy
+ agent: 'ocf:pacemaker:Dummy'
+ - id: example-3
+ # wokeignore:rule=dummy
+ agent: 'ocf:pacemaker:Dummy'
+ - id: example-4
+ # wokeignore:rule=dummy
+ agent: 'ocf:pacemaker:Dummy'
+ - id: example-5
+ # wokeignore:rule=dummy
+ agent: 'ocf:pacemaker:Dummy'
+ - id: example-6
+ # wokeignore:rule=dummy
+ agent: 'ocf:pacemaker:Dummy'
+ # location constraints
+ ha_cluster_constraints_location:
+ # resource ID and node name
+ - resource:
+ id: example-1
+ node: node1
+ options:
+ - name: score
+ value: 20
+ # resource pattern and node name
+ - resource:
+ pattern: example-\d+
+ node: node1
+ options:
+ - name: score
+ value: 10
+ # resource ID and rule
+ - resource:
+ id: example-2
+ rule: '#uname eq node2 and date in_range 2022-01-01 to 2022-02-28'
+ # resource pattern and rule
+ - resource:
+ pattern: example-\d+
+ rule: node-type eq weekend and date-spec weekdays=6-7
+ # colocation constraints
+ ha_cluster_constraints_colocation:
+ # simple constraint
+ - resource_leader:
+ id: example-3
+ resource_follower:
+ id: example-4
+ options:
+ - name: score
+ value: -5
+ # set constraint
+ - resource_sets:
+ - resource_ids:
+ - example-1
+ - example-2
+ - resource_ids:
+ - example-5
+ - example-6
+ options:
+ - name: sequential
+ value: "false"
+ options:
+ - name: score
+ value: 20
+ # order constraints
+ ha_cluster_constraints_order:
+ # simple constraint
+ - resource_first:
+ id: example-1
+ resource_then:
+ id: example-6
+ options:
+ - name: symmetrical
+ value: "false"
+ # set constraint
+ - resource_sets:
+ - resource_ids:
+ - example-1
+ - example-2
+ options:
+ - name: require-all
+ value: "false"
+ - name: sequential
+ value: "false"
+ - resource_ids:
+ - example-3
+ - resource_ids:
+ - example-4
+ - example-5
+ options:
+ - name: sequential
+ value: "false"
+ # ticket constraints
+ ha_cluster_constraints_ticket:
+ # simple constraint
+ - resource:
+ id: example-1
+ ticket: ticket1
+ options:
+ - name: loss-policy
+ value: stop
+ # set constraint
+ - resource_sets:
+ - resource_ids:
+ - example-3
+ - example-4
+ - example-5
+ ticket: ticket2
+ options:
+ - name: loss-policy
+ value: fence
+
+ roles:
+ - linux-system-roles.ha_cluster
Note that you cannot run a quorum device on a cluster node.
-- hosts: nodeQ
- vars:
- ha_cluster_cluster_present: false
- ha_cluster_hacluster_password: password
- ha_cluster_qnetd:
- present: true
-
- roles:
- - linux-system-roles.ha_cluster
- hosts: nodeQ
+ vars:
+ ha_cluster_cluster_present: false
+ ha_cluster_hacluster_password: password
+ ha_cluster_qnetd:
+ present: true
+
+ roles:
+ - linux-system-roles.ha_cluster
- hosts: node1 node2
- vars:
- ha_cluster_cluster_name: my-new-cluster
- ha_cluster_hacluster_password: password
- ha_cluster_quorum:
- device:
- model: net
- model_options:
- - name: host
- value: nodeQ
- - name: algorithm
- value: lms
-
- roles:
- - linux-system-roles.ha_cluster
- hosts: node1 node2
+ vars:
+ ha_cluster_cluster_name: my-new-cluster
+ ha_cluster_hacluster_password: password
+ ha_cluster_quorum:
+ device:
+ model: net
+ model_options:
+ - name: host
+ value: nodeQ
+ - name: algorithm
+ value: lms
+
+ roles:
+ - linux-system-roles.ha_cluster
- hosts: node1 node2
- vars:
- ha_cluster_cluster_present: false
-
- roles:
- - linux-system-roles.ha_cluster
- hosts: node1 node2
+ vars:
+ ha_cluster_cluster_present: false
+
+ roles:
+ - linux-system-roles.ha_cluster
MIT