20 Red Hat Clustering (Pacemaker) Interview Questions and Answers

Deploying services in high availability is one of the most important and complex tasks for Linux administrators. Red Hat clustering, a.k.a. Pacemaker, is used to configure services such as NFS, Apache, Squid and SMB in high availability; here high availability means the services are available in active-passive mode.


In this article I will try to cover all the important Red Hat Clustering or Pacemaker interview questions. These questions will help you prepare for your interview.

Q:1 What is the role of Corosync?

Ans: It is one of the important components of Pacemaker, used for handling the communication between cluster nodes. Apart from this, Pacemaker also uses it to check cluster membership and quorum data.
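
For example, the health of the Corosync communication layer on a node can be checked with the commands below (a minimal illustration; the output depends on your ring configuration and current membership):

~]# corosync-cfgtool -s              # show the status of the local Corosync rings/links
~]# corosync-cmapctl | grep members  # list the current cluster members known to Corosync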

Q:2 What is the use of Quorum in Red Hat Clustering?

Ans: A healthy cluster requires quorum to operate. If the cluster loses quorum, it will stop or terminate resources and resource groups in order to maintain data integrity.

So quorum can be defined as a voting system required to maintain cluster integrity. Each cluster node or member has one vote, and the cluster has quorum when more than half of the expected votes are present.
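
For example, in a three-node cluster each node contributes one vote, so at least two votes are needed for quorum. On recent pcs versions the current vote count and quorum state can be checked with the command below (a quick sketch; see also Q:9 for corosync-quorumtool):

~]# pcs quorum status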

Q:3 What are the different types of fencing supported by Red Hat cluster?

Ans: Red Hat Cluster supports two types of fencing,

a) Power Fencing

b) Storage or Fabric Fencing

Q:4 How to open firewall ports for cluster communication?

Ans: Let's assume you have a two-node cluster; run the following commands on each node to open the firewall ports related to Red Hat clustering,

~]# firewall-cmd --permanent --add-service=high-availability
~]# firewall-cmd --reload
Q:5 What is the use of the pcs command?

Ans: pcs is a command line utility used to configure and manage cluster nodes. In other words, pcs manages every aspect of a Pacemaker cluster.
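
A few illustrative pcs invocations (the node and resource names are only placeholders, and exact sub-command syntax can differ slightly between pcs versions):

~]# pcs config                           # dump the full cluster and resource configuration
~]# pcs cluster stop node1.example.com   # stop cluster services on a single node
~]# pcs resource restart my_fs           # restart a single cluster resource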

Q:6 How to check your cluster status?

Ans: Cluster status can be displayed using either of the following two commands,

~]# pcs cluster status
~]# pcs status
Q:7 How to enable automatic startup of the cluster on all the configured cluster nodes?

Ans: Let's assume you have a three-node cluster and want the cluster services to start and the nodes to join the cluster automatically after a reboot. To achieve this, run the below command from any of the cluster nodes,

~]# pcs cluster enable --all
Q:8 How to prohibit a cluster node from hosting the services?

Ans: There are situations where an admin needs to temporarily suspend a cluster node without affecting cluster operation; this can easily be achieved by marking that cluster node as “standby”.

Run the below command from one cluster node,

~]# pcs cluster standby {Cluster_Node_Name}

To resume the cluster services on this node, run the below command,

~]# pcs cluster unstandby {Cluster_Node_Name}
Q:9 How to check the status of Quorum?

Ans: The current quorum state of your cluster can be viewed with the command line utility “corosync-quorumtool”.

Execute the below command from any of the cluster nodes.

~]# corosync-quorumtool

The output of the above command displays quorum information such as the number of nodes, quorate status, total votes, expected votes and quorum.

To keep “corosync-quorumtool” running and continuously monitor quorum changes, use the “-m” flag.
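
For instance, to watch the quorum state update in real time until interrupted:

~]# corosync-quorumtool -m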

Q:10 What is fencing and how is it configured in Red Hat Cluster / Pacemaker?

Ans: Fencing is a technique or method to power off or cut off a faulty node from the cluster. Fencing is a very important component of a cluster; Red Hat Cluster will not start resource and service recovery for a non-responsive node until that node has been fenced.

In Red Hat Clustering, fencing is configured via “pcs stonith”, where stonith stands for “Shoot The Other Node In The Head”.

~]# pcs stonith create {fence_device_name} {fencing_agent} {parameters}
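
As a concrete sketch, a power fencing device could be created with the fence_ipmilan agent as shown below (the node name, IPMI address and credentials are only placeholders, and parameter names can vary slightly between fence-agent versions):

~]# pcs stonith create fence_node1 fence_ipmilan pcmk_host_list="node1.example.com" ipaddr="192.168.1.100" login="admin" passwd="secret" lanplus=1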
Q:11 How to view fencing configuration and how to fence a cluster node?

Ans: To view the complete fencing configuration, execute the following command from any of the nodes,

~]# pcs stonith show --full

To fence a node manually, use the following command,

~]# pcs stonith fence nodeb.example.com
Q:12 What is a storage-based fencing device and how to create one?

Ans: As the name suggests, a storage-based fence device cuts off the faulty cluster node from storage access; it does not power off or terminate the cluster node.

Let's assume shared storage like “/dev/sda” is assigned to all the cluster nodes; you can then create the storage-based fencing device using the below command,

~]# pcs stonith create {Name_Of_Fence_Device} fence_scsi devices=/dev/sda meta provides=unfencing

Use the following command to fence any cluster node for fence testing,

~]# pcs stonith fence {Cluster_Node_Name}
Q:13 How to display useful information about a cluster resource?

Ans: To display information about any cluster resource, use the following command from any of the cluster nodes,

~]#  pcs resource describe {resource_name}

Example:

~]# pcs resource describe Filesystem

To display the list of all available resource agents, use the below command,

~]# pcs resource list
Q:14 What is the syntax to create a resource in a Red Hat cluster?

Ans: Use the below syntax to create a resource in Red Hat Cluster / Pacemaker,

~]# pcs resource create {resource_name} {resource_provider} {resource_parameters} --group {group_name}

Let's assume we want to create a Filesystem resource,

~]# pcs resource create my_fs Filesystem device=/dev/sdb1 directory=/var/www/html fstype=xfs --group my_group
Q:15 How to list and clear the fail count of a cluster resource?

Ans: The fail count of a cluster resource can be displayed by using the following command,

~]# pcs resource failcount show

To clear or reset the failcount of a cluster resource, use the below pcs command,

~]# pcs resource failcount reset {resource_name} {cluster_node_name}
Q:16 How to move a cluster resource from one node to another?

Ans: Cluster resources and resource groups can be moved away from a cluster node using the below command,

~]# pcs resource move {resource_or_resources_group}  {cluster_node_name}

When a cluster resource or resource group is moved away from a cluster node, a temporary constraint rule is created on the cluster for that node, which means the resource or resource group can no longer run on that node. To remove that constraint, use the following command,

~]# pcs resource clear {resource_or_resource_group} {cluster_node_name}
Q:17 What is the default log file for pacemaker and corosync?

Ans: The default log file for pacemaker is “/var/log/pacemaker.log”, and corosync messages are logged to “/var/log/messages”.
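
For example, to follow these logs while troubleshooting a node:

~]# tail -f /var/log/pacemaker.log       # watch pacemaker activity in real time
~]# grep -i corosync /var/log/messages   # filter corosync messages from the system log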

Q:18 What are constraints and what are their types?

Ans: Constraints can be defined as restrictions or rules that determine where and in what order cluster resources are started and stopped. Constraints are classified into three types (see the example commands after the list),

  • Order constraints – decide the order in which resources or resource groups are started and stopped.
  • Location constraints – decide on which nodes a resource or resource group may run.
  • Colocation constraints – decide whether two resources or resource groups may run on the same node.
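
A few illustrative pcs commands, one for each constraint type (the resource names my_fs and my_web and the node name are only placeholders):

~]# pcs constraint order start my_fs then my_web            # start my_fs before my_web
~]# pcs constraint location my_web prefers node1.example.com   # prefer running my_web on node1
~]# pcs constraint colocation add my_web with my_fs INFINITY   # keep my_web on the same node as my_fs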
Q:19 How to use LVM (Logical Volume Manager) on shared storage in Red Hat clustering / Pacemaker?

Ans: There are two different ways to use LVM on shared storage in a cluster,

  • HA-LVM (the volume group and its logical volumes can be accessed by only one node at a time; it can be used with traditional file systems such as ext4 and XFS)
  • Clustered LVM (commonly used while working with shared file systems like GFS2)
Q:20 What are the logical steps to configure HA-LVM in Red Hat Cluster?

Ans: Below are the logical steps to configure HA-LVM,

Let’s assume shared storage is provisioned on all the cluster nodes,

a) On any of the cluster nodes, run pvcreate, vgcreate and lvcreate on the shared storage disk,
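
For example, assuming the shared disk is /dev/sdb (the disk, volume group and logical volume names below are only placeholders; the same volume group name is reused in step f):

~]# pvcreate /dev/sdb                          # initialize the shared disk as a physical volume
~]# vgcreate cluster_vg /dev/sdb               # create the shared volume group
~]# lvcreate -n cluster_lv -l 100%FREE cluster_vg   # create a logical volume using all free space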

b) Format the logical volume on the shared storage disk,
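
For instance, to create an XFS file system on the logical volume from the previous step (names are placeholders):

~]# mkfs.xfs /dev/cluster_vg/cluster_lv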

c) On each cluster node, enable HA-LVM tagging in the file “/etc/lvm/lvm.conf”,

locking_type = 1

Also define the volume groups that are not shared in the cluster,

volume_list = [ "rootvg", "logvg" ]

Here rootvg and logvg are OS volume groups and are not shared among the cluster nodes.

d) On each cluster node, rebuild initramfs using the following command,

~]# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r) ; reboot

e) Once all the cluster nodes are rebooted, verify the cluster status using the “pcs status” command.

f) On any of the cluster nodes, create the LVM resource using the below command,

~]# pcs resource create ha_lvm LVM volumegroup=cluster_vg exclusive=true --group halvm_fs

g) Now create the Filesystem resource from any of the cluster nodes,

~]# pcs resource create xfs_fs Filesystem device="/dev/{volume-grp}/{logical_volume}" directory="/mnt" fstype="xfs" --group halvm_fs
