Configure High Availability Cluster in CentOS 7 (Step by Step Guide)



In my last article I explained the different kinds of clustering and their architecture. Before you start with the configuration of a High Availability Cluster, you must be aware of the basic terminologies related to clustering. In this article I will share a step by step guide to configure a high availability cluster on CentOS Linux 7 using 3 virtual machines. These virtual machines are running on Oracle VirtualBox, which is installed on my Linux server.


NOTE:
The steps to configure a High Availability Cluster on Red Hat 7 are the same as on CentOS 7. On a RHEL system you must have an active subscription to RHN, or you can configure a local offline repository from which the "yum" package manager can install the provided rpms and their dependencies.

 

Features of Highly Available Clusters

The ClusterLabs stack, incorporating Corosync and Pacemaker, defines an open source, high availability cluster offering suitable for both small and large deployments.

  • Detection and recovery of machine and application-level failures
  • Supports practically any redundancy configuration
  • Supports both quorate and resource-driven clusters
  • Configurable strategies for dealing with quorum loss (when multiple machines fail)
  • Supports application startup/shutdown ordering, regardless of which machine(s) the applications are on
  • Supports applications that must/must-not run on the same machine
  • Supports applications which need to be active on multiple machines
  • Supports applications with multiple modes (e.g. master/slave)

 

What Is Pacemaker?

We will use Pacemaker and Corosync to configure the High Availability Cluster. Pacemaker is a cluster resource manager, that is, the logic responsible for the life-cycle of deployed software (indirectly perhaps even whole systems or their interconnections) under its control, within a set of computers (a.k.a. nodes) and driven by prescribed rules.

It achieves maximum availability for your cluster services (a.k.a. resources) by detecting and recovering from node- and resource-level failures. It does this by making use of the messaging and membership capabilities provided by your preferred cluster infrastructure (either Corosync or Heartbeat), and possibly by utilizing other parts of the overall cluster stack.

 

Bring up Environment

First of all, before we start to configure the High Availability Cluster, let us bring up our virtual machines with CentOS 7. I am using Oracle VirtualBox; you can also install Oracle VirtualBox on a Linux environment. Below are my VMs' configuration details:

Properties               node1                 node2                 node3
OS                       CentOS 7              CentOS 7              CentOS 7
vCPU                     2                     2                     2
Memory                   2GB                   2GB                   2GB
Disk                     10GB                  10GB                  10GB
FQDN                     node1.example.com     node2.example.com     node3.example.com
Hostname                 node1                 node2                 node3
IP Address (Internal)    10.0.2.20             10.0.2.21             10.0.2.22
IP Address (External)    DHCP                  DHCP                  DHCP
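
If the hostnames are not already set on the VMs, one quick way to assign the FQDN is hostnamectl. This is only a sketch for node1; run it with the matching name on node2 and node3:

[root@node1 ~]# hostnamectl set-hostname node1.example.com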

 

Edit the /etc/hosts file on every node and add the IP address, followed by the FQDN and the short node name, for each cluster node.

[root@node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.2.20 node1.example.com node1
10.0.2.21 node2.example.com node2
10.0.2.22 node3.example.com node3

[root@node2 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.2.20 node1.example.com node1
10.0.2.21 node2.example.com node2
10.0.2.22 node3.example.com node3

[root@node3 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.2.20 node1.example.com node1
10.0.2.21 node2.example.com node2
10.0.2.22 node3.example.com node3

To finish, you must check and confirm connectivity among the cluster nodes. You can do this by simply running a ping command against every cluster node, as shown below.
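
Here is one quick way to do that from node1 (repeat from the other nodes as well); this is only a sketch and assumes the short names resolve through the /etc/hosts entries above:

[root@node1 ~]# for host in node1 node2 node3; do ping -c 2 $host; done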

 

Stop and disable NetworkManager on all the nodes

[root@node1 ~]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
NOTE:
You must remove or disable the NetworkManager service because you will want to avoid any automated configuration of network interfaces on your cluster nodes.
After removing or disabling the NetworkManager service, you must restart the networking service.
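
For example, stopping the running NetworkManager instance and restarting the legacy networking service on node1 would look like this (repeat on node2 and node3):

[root@node1 ~]# systemctl stop NetworkManager
[root@node1 ~]# systemctl restart network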

 

Configure NTP

To configure a High Availability Cluster it is important that all the nodes in the cluster are connected and synced to an NTP server. Since my machines are in the IST timezone, I will use the India pool of NTP servers.
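
If the ntp package is not installed yet, install it and point /etc/ntp.conf at the India pool. The sketch below assumes the public in.pool.ntp.org pool names; adjust them for your own region:

[root@node1 ~]# yum install ntp -y
[root@node1 ~]# vi /etc/ntp.conf
# replace the default "server" lines with the India pool, for example:
server 0.in.pool.ntp.org iburst
server 1.in.pool.ntp.org iburst
server 2.in.pool.ntp.org iburst
server 3.in.pool.ntp.org iburst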

[root@node1 ~]# systemctl start ntpd
[root@node1 ~]# systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
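
Once ntpd is running, you can confirm that the node is actually talking to the configured pool servers with ntpq; after a minute or two one of the peers should be selected (marked with an asterisk):

[root@node1 ~]# ntpq -p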

 

Install pre-requisite rpms

The high availability packages are not part of the default CentOS repository, so you will need the epel-release repo.

[root@node1 ~]# yum install epel-release -y

pcs is the pacemaker/corosync configuration tool, and installing it will pull in all of its dependencies. The fence-agents-all package will install all the default fencing agents available for a Red Hat cluster.

[root@node1 ~]# yum install pcs fence-agents-all -y

Add firewall rules

[root@node1 ~]# firewall-cmd --permanent --add-service=high-availability; firewall-cmd --reload
success
success
NOTE:
If you are using iptables directly, or some other firewall solution besides firewalld, simply open the following ports: TCP ports 2224, 3121, and 21064, and UDP ports 5404 and 5405.
If you run into any problems during testing, you might want to disable the firewall and SELinux entirely until you have everything working. This may create significant security issues and should not be performed on machines that will be exposed to the outside world, but may be appropriate during development and testing on a protected host.
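
For reference, if you do manage iptables directly, opening those ports could look roughly like the following. This is only a sketch; adapt it to your existing ruleset and to however you persist your rules:

[root@node1 ~]# iptables -I INPUT -p tcp -m multiport --dports 2224,3121,21064 -j ACCEPT
[root@node1 ~]# iptables -I INPUT -p udp -m multiport --dports 5404,5405 -j ACCEPT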

 

Configure High Availability Cluster

The installed packages will create a hacluster user with a disabled password. While this is fine for running pcs commands locally, the account needs a login password in order to perform such tasks as syncing the corosync configuration, or starting and stopping the cluster on other nodes.

Set the password for the hacluster user on each cluster node using the following command. Here my password is password.

[root@node1 ~]# echo password | passwd --stdin hacluster
Changing password for user hacluster.
passwd: all authentication tokens updated successfully.
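
If passwordless root SSH between the nodes is already in place, one way to set the same password on all three nodes in one shot is a small loop. This is only a convenience sketch; otherwise simply run the passwd command above on every node:

[root@node1 ~]# for host in node1 node2 node3; do ssh root@$host 'echo password | passwd --stdin hacluster'; done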

Start and enable the pcsd daemon on each node so that pcs commands can communicate between the nodes:

[root@node1 ~]# systemctl enable --now pcsd
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
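
Optionally, confirm that pcsd is running and listening on its default TCP port 2224 on every node:

[root@node1 ~]# systemctl status pcsd
[root@node1 ~]# ss -tlnp | grep 2224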

 

Configure Corosync

To configure Corosync, we only need to run the commands from any one of the nodes. Use pcs cluster auth to authenticate as the hacluster user:

[root@node1 ~]# pcs cluster auth node1.example.com node2.example.com node3.example.com
Username: hacluster
Password:
node2.example.com: Authorized
node1.example.com: Authorized
node3.example.com: Authorized
NOTE:
If you face any issues at this step, check your firewalld/iptables rules or SELinux policy

Finally, run the following command on the first node to create the cluster and start it. Here our cluster name will be mycluster.

[root@node1 ~]# pcs cluster setup --start --name mycluster node1.example.com node2.example.com node3.example.com
Destroying cluster on nodes: node1.example.com, node2.example.com, node3.example.com...
node3.example.com: Stopping Cluster (pacemaker)...
node2.example.com: Stopping Cluster (pacemaker)...
node1.example.com: Stopping Cluster (pacemaker)...
node1.example.com: Successfully destroyed cluster
node2.example.com: Successfully destroyed cluster
node3.example.com: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to 'node1.example.com', 'node2.example.com', 'node3.example.com'
node1.example.com: successful distribution of the file 'pacemaker_remote authkey'
node2.example.com: successful distribution of the file 'pacemaker_remote authkey'
node3.example.com: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
node1.example.com: Succeeded
node2.example.com: Succeeded
node3.example.com: Succeeded

Starting cluster on nodes: node1.example.com, node2.example.com, node3.example.com...
node2.example.com: Starting Cluster...
node1.example.com: Starting Cluster...
node3.example.com: Starting Cluster...

Synchronizing pcsd certificates on nodes node1.example.com, node2.example.com, node3.example.com...
node2.example.com: Success
node1.example.com: Success
node3.example.com: Success
Restarting pcsd on the nodes in order to reload the certificates...
node1.example.com: Success
node3.example.com: Success
node2.example.com: Success

Enable the cluster services, i.e. pacemaker and corosync, so that they start automatically on boot:

[root@node1 ~]# pcs cluster enable --all
node1.example.com: Cluster Enabled
node2.example.com: Cluster Enabled
node3.example.com: Cluster Enabled

Lastly, check the cluster status:

[root@node1 ~]# pcs cluster status
Cluster Status:
 Stack: corosync
 Current DC: node2.example.com (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
 Last updated: Sat Oct 27 08:41:52 2018
 Last change: Sat Oct 27 08:41:18 2018 by hacluster via crmd on node2.example.com
 3 nodes configured
 0 resources configured

PCSD Status:
  node3.example.com: Online
  node1.example.com: Online
  node2.example.com: Online

To check the cluster's quorum status, use the corosync-quorumtool command.

[root@node1 ~]# corosync-quorumtool
Quorum information
------------------
Date:             Sat Oct 27 08:43:22 2018
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          1
Ring ID:          1/8
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
         1          1 node1.example.com (local)
         2          1 node2.example.com
         3          1 node3.example.com

To get the live status of the cluster, use crm_mon:

[root@node1 ~]# crm_mon
Connection to the CIB terminated
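
If you prefer a one-shot snapshot instead of the continuously refreshing view, crm_mon can print the status once and exit:

[root@node1 ~]# crm_mon -1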

 

Verify the cluster configuration

Before we make any changes, it’s a good idea to check the validity of the configuration.

[root@node1 ~]#  crm_verify -L -V
   error: unpack_resources:     Resource start-up disabled since no STONITH resources have been defined
   error: unpack_resources:     Either configure some or disable STONITH with the stonith-enabled option
   error: unpack_resources:     NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid

As you can see, the tool has found some errors.

In order to guarantee the safety of your data, fencing (also called STONITH) is enabled by default. However, the cluster also knows when no STONITH configuration has been supplied and reports this as a problem (since the cluster will not be able to make progress if a situation requiring node fencing arises).

We will disable this feature for now and configure it later. To disable STONITH, set the stonith-enabled cluster option to false. This is a cluster-wide property, so it only needs to be set from one node:

WARNING:
The use of stonith-enabled=false is completely inappropriate for a production cluster. It tells the cluster to simply pretend that failed nodes are safely powered off. Some vendors will refuse to support clusters that have STONITH disabled.
[root@node1 ~]# pcs property set stonith-enabled=false
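
You can confirm the change with pcs property list, which should now report stonith-enabled: false:

[root@node1 ~]# pcs property list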

Next, re-validate the cluster configuration. This time crm_verify should return no errors:

[root@node1 ~]# crm_verify -L -V

 

This was all about configuring a High Availability Cluster on Linux. You can refer to my other articles on clustering to understand cluster architecture, resource groups, resource constraints, etc.

 

 

Lastly, I hope the steps from this article to configure a high availability cluster on Linux were helpful. Let me know your suggestions and feedback using the comment section.

 

