In my last article I explained the High Availability Cluster architecture and the steps to configure an HA cluster with Pacemaker and Corosync on CentOS 7 Linux using three virtual machines. The next step is to create a cluster resource and add it to our cluster nodes. But before we start with the commands and examples, let us understand some basic terminology.

How to create cluster resource in HA Cluster (with examples)

 

What is a cluster resource?

A cluster resource is anything that is managed by the cluster. A cluster resource can be an IP address or a web service, and it can also be a task automated by the cluster, such as sending an email to a user when something happens.

 

What Makes Clustered Resources Different?

The purpose of high availability is to make sure your vital resources are available at all times. To accomplish that goal, you have to make sure that the resources are not started by the node's init system but are managed by the cluster instead. That means you must remove the resources from the systemd-enabled services, so that the cluster is the only software taking care of starting them.
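As a concrete example, if the Apache web server is going to be cluster-managed, you would first take it out of systemd's control (a minimal sketch, assuming the httpd package on CentOS 7):

```shell
# Stop httpd and prevent systemd from starting it at boot,
# so that the cluster is the only software managing it
systemctl stop httpd
systemctl disable httpd

# Confirm that systemd will no longer start the service
systemctl is-enabled httpd
```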

To create resources, you need resource agents (RAs). A resource agent is similar to a service init script as used in System-V runlevels, but one that also understands cluster parameters. The resource agents are installed with the cluster, and they are subdivided into different classes.

  • LSB (Linux Standard Base)
  • OCF (Open Cluster Framework)
  • systemd
  • Heartbeat
  • Stonith

Resource configurations are stored in the CIB (Cluster Information Base).
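You can inspect the CIB at any time; for example:

```shell
# Dump the current cluster configuration (CIB) as XML
pcs cluster cib

# The lower-level cibadmin tool queries the same information
cibadmin --query
```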

 

Before you start configuring a resource, you should also know about the different types of resources:

  • Primitive ⇒ A singular resource that can be managed by the cluster, i.e. the resource can be started only once. An IP address, for example, can be a primitive, and this IP address should be running once and only once in the cluster.
  • Clone ⇒ A resource that runs on multiple nodes at the same time and is managed by the cluster.
  • Multi State (aka master/slave) ⇒ This applies to some specific resources only, where one instance runs as the master and the other instance runs as the slave, and the cluster manages which is which. For example: DRBD (Distributed Replicated Block Device).
  • Group ⇒ As the name suggests, a group of primitives (or a group of clones). Here we put multiple resources that belong together, such as a highly available web server, which includes an HA IP address, a shared file system and the web service itself, so you can manage all of these resources as a single group. By putting them in a group we define that the resources are started in the order listed, and the group makes sure certain constraints, such as ordering and colocation constraints, are applied automatically.
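As a quick illustration of the last type, resources that belong together can be combined with a single command (the resource names here are hypothetical, and the resources must already exist):

```shell
# Put an HA IP address, a shared file system and a web service
# into one group; members start in the listed order and stop in
# reverse order
pcs resource group add webserver-group ha-ip shared-fs web-service
```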

 

Resource Stickiness

The default resource stickiness defines where a resource should go after the original situation is restored.

Now imagine we have a three-node cluster where node1 is currently running the database. node1 goes down, and since the database is configured as a cluster resource, it fails over to another node, say node2. Once the database is started on node2, users connect to it there, and eventually node1 comes back.

Here resource stickiness defines what you want to happen to the database resource after node1 has fully restored its operation. You can define that the resource migrates back to node1, or you can define that the resource stays where it is currently running.

The latter is recommended, as in a cluster you should avoid migration of resources as much as possible.
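To make resources stay where they are after a failover, you can set a cluster-wide default stickiness; a positive score makes a resource prefer the node it is currently running on:

```shell
# Set a default resource stickiness score of 100 for all resources
pcs resource defaults resource-stickiness=100

# Verify the configured resource defaults
pcs resource defaults
```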

 

The syntax used to add a new cluster resource is as follows:

pcs resource create <resource_name> <resource_type> <resource_options>
NOTE:
The resource_name parameter is a unique cluster resource name.
The resource_type parameter is the full name of the resource with resource class and resource ID.
The resource_options parameter contains available options to be used with the resource.

The following command provides more information on resource creation:

# pcs resource create --help

 

There are seven resource classes supported by Pacemaker, as follows:

  • OCF (Open Cluster Framework): This is an extension of the LSB conventions for init scripts and is the most preferred resource class for use in the cluster
  • LSB (Linux Standard Base): These are the standard Linux init scripts found in the /etc/init.d directory
  • Upstart: This is the resource class for distributions that use upstart
  • Systemd: This is the resource class for distributions that use systemd
  • Fencing: This is the resource class used exclusively for fencing-related resources
  • Service: This is the resource class to be used in mixed cluster environments where cluster nodes use a combination of systemd, upstart, and LSB services
  • Nagios: This is the resource class used exclusively for Nagios plugins
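You can list the classes, providers, and agents known to your own cluster with the following commands:

```shell
# Show the supported resource classes (standards)
pcs resource standards

# Show the available providers for the OCF class
pcs resource providers

# List all resource agents for a given class and provider
pcs resource agents ocf:heartbeat
```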

 

NOTE:
The OCF resource class is designed with strict definitions of the exit codes that actions must return, and is therefore the best and the most commonly used resource class in cluster environments.

We can get a full list of the OCF resource agents from the heartbeat provider with the following command:

# pcs resource list heartbeat

Suppose we want to configure an IPv4 cluster IP address. We can see from the OCF resource list that there are two IP address OCF resource agents to choose from, as follows:

  • ocf:heartbeat:IPaddr: This manages virtual IPv4 and IPv6 addresses (portable version)
  • ocf:heartbeat:IPaddr2: This manages virtual IPv4 and IPv6 addresses (Linux-specific version)

 

NOTE:
The difference between the two is that the IPaddr resource agent uses the ifconfig command to configure the address, while IPaddr2 uses the ip command.
When running on CentOS 7, IPaddr2 is the preferred resource agent.

All the resource options are well documented and explained. Reading the resource documentation will give you an idea about the resource options you might prefer to use. We can get more detailed information about the ocf:heartbeat:IPaddr2 resource and options with the following command:

# pcs resource describe ocf:heartbeat:IPaddr2

We can add an IPv4 cluster IP resource and bind it to the network interface enp0s8 with the following command:

[root@node1 ~]# pcs resource create apache-ip ocf:heartbeat:IPaddr2 ip=10.0.2.50 cidr_netmask=24 nic=enp0s8

To check the resource:

[root@node1 ~]# pcs resource show
 apache-ip      (ocf::heartbeat:IPaddr2):       Started node1.example.com

 

NOTE:
pcs resource create: This tells the cluster we are creating a new cluster resource
apache-ip: This is a unique cluster resource name
ocf:heartbeat:IPaddr2: This is the OCF cluster resource agent
ip=10.0.2.50: This is the cluster IP address. Make sure this is a free IP address, as it will act as the VIP
cidr_netmask=24: This is the IP address network mask
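Once the resource is started, you can confirm on the active node that the virtual IP is actually configured (the interface name enp0s8 matches our lab setup; adjust it for your environment):

```shell
# The VIP shows up as a secondary address on the interface
ip addr show enp0s8 | grep 10.0.2.50
```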

In the following snippet, you can see the output of the pcs status command.

[root@node1 ~]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: node2.example.com (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Sat Oct 27 09:11:27 2018
Last change: Sat Oct 27 09:11:15 2018 by root via cibadmin on node1.example.com

3 nodes configured
2 resources configured

Online: [ node1.example.com node2.example.com node3.example.com ]

Full list of resources:

 apache-ip      (ocf::heartbeat:IPaddr2):       Started node1.example.com

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

 

NOTE:
As you can see in the previous output, under the Full list of resources section, the cluster resource called apache-ip with the ocf:heartbeat:IPaddr2 resource agent was added to the cluster and started on the node1.example.com cluster node.
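To watch the failover behaviour in action, you can put the active node into standby and check where the resource moves (node names match our example cluster):

```shell
# Temporarily remove node1 from the cluster
pcs cluster standby node1.example.com

# The resource should now be reported as Started on another node
pcs status resources

# Bring node1 back; with resource stickiness set, the resource
# stays where it is instead of migrating back
pcs cluster unstandby node1.example.com
```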

We can continue by adding an Apache web server cluster resource. Looking at the OCF resource agent list you can find the following Apache OCF resource agent:

  • ocf:heartbeat:apache: This manages an Apache web server instance

 
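One caveat worth knowing before creating the resource: the ocf:heartbeat:apache agent monitors Apache through its server-status page, so you may need to enable mod_status in the Apache configuration on every node (a minimal sketch; the file name status.conf is an arbitrary choice):

```shell
# Enable the server-status handler so the resource agent's
# monitor action can check Apache's health
cat > /etc/httpd/conf.d/status.conf <<'EOF'
<Location /server-status>
    SetHandler server-status
    Require local
</Location>
EOF
```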

We can add an Apache web server cluster resource with the following command:

[root@node1 ~]# pcs resource create WebServer ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf
NOTE:
pcs resource create: This tells the cluster we are creating a new cluster resource
WebServer: This is a unique cluster resource name
ocf:heartbeat:apache: This is the OCF cluster resource agent
configfile=/etc/httpd/conf/httpd.conf: This is the Apache configuration file to use
[root@node1 ~]# pcs resource show
 apache-ip      (ocf::heartbeat:IPaddr2):       Started node1.example.com
 WebServer      (ocf::heartbeat:apache):        Started node1.example.com

The result of the previous command is a new Apache web server cluster resource, as shown above. In the following output, we can see the result of the pcs status command.

[root@node1 ~]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: node2.example.com (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Sat Oct 27 09:11:27 2018
Last change: Sat Oct 27 09:11:15 2018 by root via cibadmin on node1.example.com

3 nodes configured
2 resources configured

Online: [ node1.example.com node2.example.com node3.example.com ]

Full list of resources:

 apache-ip      (ocf::heartbeat:IPaddr2):       Started node1.example.com
 WebServer      (ocf::heartbeat:apache):        Started node1.example.com

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

 

NOTE:
As you can see in the previous output, under the Full list of resources section, the cluster resource called WebServer with the ocf:heartbeat:apache resource agent was added to the cluster and started on the node1.example.com cluster node.
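Finally, a quick way to verify the highly available web server from any machine on the network is to request a page through the cluster IP (this assumes some content is already in place under /var/www/html on the active node):

```shell
# The request should succeed regardless of which node runs Apache
curl -s http://10.0.2.50/
```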

 

In my next article I will share the steps to configure resource constraints and resource groups in a cluster. Lastly, I hope the steps from this article to create a cluster resource and add it to a cluster were helpful. Let me know your suggestions and feedback in the comment section.
