Pacemaker Notes

Show pacemaker config

$ cibadmin -Q

Show status of resources

$ crm_mon -1r

Show cluster status

$ crm status

$ crm configure show

Make a node a standby server

$ crm node standby X

Make a node active

$ crm node online X

Freeze a resource

$ crm resource meta RESOURCE-NAME set is-managed false

Unfreeze a resource

$ crm resource meta RESOURCE-NAME set is-managed true
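The freeze/unfreeze pair above is typically used as a maintenance window for one resource. A sketch of the full workflow (RESOURCE-NAME and the maintenance step are placeholders):

```shell
# Sketch: take one resource out of Pacemaker's control for maintenance,
# then hand it back. RESOURCE-NAME is a placeholder.
crm resource meta RESOURCE-NAME set is-managed false   # freeze: cluster stops managing it

# ... perform maintenance on the underlying service by hand ...

crm resource meta RESOURCE-NAME set is-managed true    # unfreeze: cluster resumes management
```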

Back up the cluster config

$ crm configure save /tmp/cluster-config.xml

Load the backed up config

$ crm configure load replace /tmp/cluster-config.xml

Discourage auto failback by setting a default resource stickiness (a value of 1 is a weak preference; raise it for a stronger preference to stay put)

$ crm configure property default-resource-stickiness=1

Remove node from cluster

$ pcs cluster node remove node_to_remove

Create a new cluster node

$ yum install -y pcs fence-agents-all

$ passwd hacluster

$ systemctl start pcsd.service

$ systemctl enable pcsd.service
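The four preparation steps above can be combined into a single bootstrap script to run as root on the new node (package and service names are as in the commands above, RHEL/CentOS style):

```shell
#!/bin/sh
# Sketch: prepare a machine to join a Pacemaker cluster.
set -e

yum install -y pcs fence-agents-all   # cluster shell and fence agents
passwd hacluster                      # set the password that pcs auth will use
systemctl start pcsd.service          # start the pcs daemon now
systemctl enable pcsd.service         # ...and at every boot
```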

Add node to an existing cluster

$ pcs cluster auth new_node

$ pcs cluster node add new_node

Start and enable cluster services

$ pcs cluster start

$ pcs cluster enable

Prevent resources from failing back (should be adjusted for each cluster)

$ pcs resource update myResource meta resource-stickiness=200

Enable cluster resources on all nodes

$ pcs cluster enable --all

Set up a new cluster

$ pcs cluster setup --name mycluster node1 node2
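Putting the pieces together, a minimal two-node bootstrap might look like the sketch below, run from one node after pcsd is up on both (node1, node2, and the cluster name are placeholders):

```shell
#!/bin/sh
# Sketch: two-node cluster bootstrap using the pcs commands from these notes.
# node1/node2 and "mycluster" are placeholder names.
set -e

pcs cluster auth node1 node2                      # authenticate as hacluster on both nodes
pcs cluster setup --name mycluster node1 node2    # generate and push the cluster config
pcs cluster start --all                           # start cluster services on every node
pcs cluster enable --all                          # ...and enable them at boot
```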

Show cluster communications

$ corosync-cfgtool -s

Check cluster members

$ corosync-cmapctl | grep members

View cluster config in XML

$ pcs cluster cib

Check the cluster configuration integrity

$ crm_verify -L -V

Create a shared IP resource

$ pcs resource create ClusterIP ocf:heartbeat:IPaddr2 \
    ip=10.10.10.1 cidr_netmask=32 op monitor interval=30s

Get status of fence agent

$ fence_ipmilan -P -a <drac_IP> -l fenceUser -p <password> -o status -v

Fence config for IPMI and iDRACs

$ fence_drac5 -x -l user -p pass -a 10.14.0.12 -o status -n 1 -v -c "admin1->"

$ fence_ipmilan -P -a <drac_IP> -l fencer -p <password> -o status -v

Adjust the monitoring timeout for a resource

$ pcs resource update <resource> op monitor timeout=60

Stop cluster services from starting at boot time

$ pcs cluster disable foo.prefetch.net

Enable cluster services that were disabled with disable

$ pcs cluster enable foo.prefetch.net

Put a node into standby mode (when this mode is active the node can't host resources)

$ pcs cluster standby foo.prefetch.net

Take a node out of standby mode so it can host resources

$ pcs cluster unstandby foo.prefetch.net

Start cluster services to allow the node to join an existing cluster

$ pcs cluster start foo.prefetch.net

This information is provided under a GPLv2 license.