This is the second part of my previous article, in which I shared the steps to configure an OpenStack HA cluster using pacemaker and corosync. In this article I will share the steps to configure HAProxy in OpenStack and move our Keystone endpoints behind the load balancer using a virtual IP (VIP).

How to configure HAProxy in OpenStack (High Availability)

 

Configure HAProxy in OpenStack

In this lab deployment we will use HAProxy to load-balance our control plane services. Some deployments also implement Keepalived and run HAProxy in an Active/Active configuration. For this deployment, we will run HAProxy Active/Passive and manage it, along with our VIP, as a resource in Pacemaker.

To start, install HAProxy on both nodes using the following command:

[root@controller1 ~]# yum install -y haproxy
[root@controller2 ~]# yum install -y haproxy

Verify installation with the following command:

[root@controller1 ~]# rpm -q haproxy
haproxy-1.5.18-7.el7.x86_64

[root@controller2 ~]# rpm -q haproxy
haproxy-1.5.18-7.el7.x86_64

Next, we will create a configuration file for HAProxy which load-balances the API services installed on the two controllers. Use the following example as a template, replacing the IP addresses in the example with the IP addresses of the two controllers and the IP address of the VIP that you’ll be using to load-balance the API services.

NOTE:
The IP address you plan to use for the VIP must be free (unused on your network).

Take a backup of the existing configuration file on both controller nodes:

[root@controller1 ~]# mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bkp
[root@controller2 ~]# mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bkp

The following example /etc/haproxy/haproxy.cfg will load-balance Horizon in our environment:

[root@controller1 haproxy]# cat haproxy.cfg
global
  daemon
  group  haproxy
  maxconn  40000
  pidfile  /var/run/haproxy.pid
  user  haproxy

defaults
  log  127.0.0.1 local2 warning
  mode  tcp
  option  tcplog
  option  redispatch
  retries  3
  timeout  connect 10s
  timeout  client 60s
  timeout  server 60s
  timeout  check 10s

listen horizon
  bind 192.168.122.30:80
  mode http
  cookie SERVERID insert indirect nocache
  option tcplog
  timeout client 180s
  server controller1 192.168.122.20:80 cookie controller1 check inter 1s
  server controller2 192.168.122.22:80 cookie controller2 check inter 1s

In this example, controller1 has an IP address of 192.168.122.20 and controller2 has an IP address of 192.168.122.22. The VIP that we’ve chosen to use is 192.168.122.30. Copy this file, replacing the IP addresses with the addresses in your lab, to /etc/haproxy/haproxy.cfg on each of the controllers.

 

To configure HAProxy in OpenStack on both nodes, copy this haproxy.cfg file to the second controller:

[root@controller1 ~]# scp /etc/haproxy/haproxy.cfg controller2:/etc/haproxy/haproxy.cfg
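
Optionally (this check is not part of the original steps), you can ask HAProxy to validate the configuration syntax on each node before starting it; the -c and -f flags parse the file and report whether it is valid:

[root@controller1 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid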

In order for Horizon to respond to requests on the VIP, we’ll need to add the VIP as a ServerAlias in the Apache virtual host configuration. This is found at /etc/httpd/conf.d/15-horizon_vhost.conf in our lab installation. Look for the following line on controller1:

ServerAlias 192.168.122.20

and the following line on controller2:

ServerAlias 192.168.122.22

Add an additional ServerAlias line with the VIP on both controllers:

ServerAlias 192.168.122.30
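
After the edit, the relevant part of the virtual host file on controller1 should look roughly like the excerpt below (the full VirtualHost block is generated by Packstack/Puppet, so the ServerName and the other directives in your file will differ; this is only an illustrative sketch):

<VirtualHost *:80>
  ServerName controller1
  ServerAlias 192.168.122.20
  ServerAlias 192.168.122.30
  # remaining Puppet-generated directives stay unchanged
</VirtualHost>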

You’ll also need to tell Apache not to listen on the VIP so that HAProxy can bind to the address. To do this, modify /etc/httpd/conf/ports.conf and specify the IP address of the controller in addition to the port numbers. The following is an example:

[root@controller1 ~]# cat /etc/httpd/conf/ports.conf
# ************************************
# Listen & NameVirtualHost resources in module puppetlabs-apache
# Managed by Puppet
# ************************************

Listen 0.0.0.0:8778
#Listen 35357
#Listen 5000
#Listen 80
Listen 8041
Listen 8042
Listen 8777
Listen 192.168.122.20:35357
Listen 192.168.122.20:5000
Listen 192.168.122.20:80

Here, 192.168.122.20 is the IP address of controller1.

On controller2, repeat the same change using that controller's IP address:

[root@controller2 ~(keystone_admin)]# cat /etc/httpd/conf/ports.conf
# ************************************
# Listen & NameVirtualHost resources in module puppetlabs-apache
# Managed by Puppet
# ************************************

Listen 0.0.0.0:8778
#Listen 35357
#Listen 5000
#Listen 80
Listen 8041
Listen 8042
Listen 8777
Listen 192.168.122.22:35357
Listen 192.168.122.22:5000
Listen 192.168.122.22:80

Restart Apache to pick up the new alias:

[root@controller1 ~]# systemctl restart httpd
[root@controller2 ~]# systemctl restart httpd
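
As an optional sanity check (not part of the original steps), you can confirm that Apache is now bound only to the controller's own IP on port 80; the exact ss output columns will vary:

[root@controller1 ~]# ss -ltn | grep 192.168.122.20:80
LISTEN     0      128    192.168.122.20:80                     *:*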

Next, add the VIP and the HAProxy service to the Pacemaker cluster as resources. These commands should only be run on the first controller node. The resource agent specification (for example, ocf:heartbeat:IPaddr2) tells Pacemaker three things about the resource you want to add:

  • The first field (ocf in this case) is the standard to which the resource script conforms and where to find it.
  • The second field (heartbeat in this case) is standard-specific; for OCF resources, it tells the cluster which OCF namespace the resource script is in.
  • The third field (IPaddr2 in this case) is the name of the resource script.

 

[root@controller1 ~]# pcs resource create VirtualIP IPaddr2 ip=192.168.122.30 cidr_netmask=24
Assumed agent name 'ocf:heartbeat:IPaddr2' (deduced from 'IPaddr2')

[root@controller1 ~]# pcs resource create HAProxy systemd:haproxy

Co-locate the HAProxy service with the VirtualIP to ensure that the two run together:

[root@controller1 ~]# pcs constraint colocation add VirtualIP with HAProxy score=INFINITY
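
You can confirm that the constraint has been recorded with pcs constraint; the output should look roughly like this:

[root@controller1 ~]# pcs constraint
Location Constraints:
Ordering Constraints:
Colocation Constraints:
  VirtualIP with HAProxy (score:INFINITY)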

Verify that the resources have started in the cluster:

[root@controller1 ~]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller2 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 12:44:27 2018
Last change: Tue Oct 16 12:44:23 2018 by root via cibadmin on controller1

2 nodes configured
2 resources configured

Online: [ controller1 controller2 ]

Full list of resources:

 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):      Started controller1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

At this point, you should be able to access Horizon using the VIP you specified. Traffic will flow from your client to HAProxy on the VIP to Apache on one of the two nodes.
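
If you want to confirm on which node the VIP is currently plumbed (an optional check, not part of the original steps), look for the address on the active node; the interface name in this sketch is an assumption and will differ in your environment:

[root@controller1 ~]# ip addr show | grep 192.168.122.30
    inet 192.168.122.30/24 brd 192.168.122.255 scope global secondary eth0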

 

Additional API service configuration

Now that the base HAProxy configuration in OpenStack is complete, the final configuration step is to move each of the OpenStack API endpoints behind the load balancer. There are three steps in this process, which are as follows:

  • Update the HAProxy configuration to include the service.
  • Move the endpoint in the Keystone service catalog to the VIP.
  • Reconfigure services to point to the VIP instead of the IP of the first controller.

 

In the following example, we will move the Keystone service behind the load balancer. This process can be followed for each of the API services.

First, add sections to the HAProxy configuration file for the public and admin endpoints of Keystone. Add the following template to the existing haproxy.cfg file on both controllers:

[root@controller1 ~]# vim /etc/haproxy/haproxy.cfg
listen keystone-admin
  bind 192.168.122.30:35357
  mode tcp
  option tcplog
  server controller1 192.168.122.20:35357 check inter 1s
  server controller2 192.168.122.22:35357 check inter 1s

listen keystone-public
  bind 192.168.122.30:5000
  mode tcp
  option tcplog
  server controller1 192.168.122.20:5000 check inter 1s
  server controller2 192.168.122.22:5000 check inter 1s

Restart the haproxy service on the active node:

[root@controller1 ~]# systemctl restart haproxy.service

You can determine the active node with the output from pcs status. Check to make sure that HAProxy is now listening on ports 5000 and 35357 using the following commands on both the controllers:

[root@controller1 ~]# curl http://192.168.122.30:5000
{"versions": {"values": [{"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:5000/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.168.122.30:5000/v2.0/", "rel": "self"}, {"href": "htt

[root@controller1 ~]# curl http://192.168.122.30:5000/v3
{"version": {"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:5000/v3/", "rel": "self"}]}}

[root@controller1 ~]# curl http://192.168.122.30:35357/v3
{"version": {"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:35357/v3/", "rel": "self"}]}}

[root@controller1 ~]# curl http://192.168.122.30:35357
{"versions": {"values": [{"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:35357/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.168.122.30:35357/v2.0/", "rel": "self"}, {"href": "https://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}}

All of the above commands should return JSON describing the Keystone service, which confirms that the respective ports are in a listening state.
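
You can also check locally on the active node that it is HAProxy (and not Apache) holding the VIP ports; the output below is trimmed and only a rough sketch of what ss typically prints:

[root@controller1 ~]# ss -ltnp | grep haproxy
LISTEN ... 192.168.122.30:80    ... users:(("haproxy",...))
LISTEN ... 192.168.122.30:5000  ... users:(("haproxy",...))
LISTEN ... 192.168.122.30:35357 ... users:(("haproxy",...))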

 

Next, update the endpoint for the identity service in the Keystone service catalog by creating new endpoints and deleting the old ones. Start by sourcing your existing keystonerc_admin file:

[root@controller1 ~(keystone_admin)]# source keystonerc_admin

Below is the content from my keystonerc_admin

[root@controller1 ~(keystone_admin)]# cat keystonerc_admin
unset OS_SERVICE_TOKEN
    export OS_USERNAME=admin
    export OS_PASSWORD='redhat'
    export OS_AUTH_URL=http://192.168.122.20:5000/v3
    export PS1='[\u@\h \W(keystone_admin)]\$ '

export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3

As you can see, OS_AUTH_URL currently points to the existing endpoint on controller1. We will update this shortly.

Get the list of current Keystone endpoints on your active controller:

[root@controller1 ~(keystone_admin)]# openstack endpoint list | grep keystone
| 3ded2a2faffe4fd485f6c3c58b1990d6 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.122.20:5000/v3                 |
| b0f5b7887cd346b3aec747e5b9fafcd3 | RegionOne | keystone     | identity     | True    | admin     | http://192.168.122.20:35357/v3                |
| c1380d643f734cc1b585048b2e7a7d47 | RegionOne | keystone     | identity     | True    | public    | http://192.168.122.20:5000/v3                 |

Since we want to move the Keystone endpoints to the VIP, create new admin, public and internal endpoints with the VIP URL, as shown below:

[root@controller1 ~(keystone_admin)]# openstack endpoint create --region RegionOne identity public http://192.168.122.30:5000/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 08a26ace08884b85a0ff869ddb20bea3 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 555154c5facf4e96a8677362c62b2ac9 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.122.30:5000/v3    |
+--------------+----------------------------------+

[root@controller1 ~(keystone_admin)]# openstack endpoint create --region RegionOne identity admin http://192.168.122.30:35357/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ef210afef1da4558abdc00cc13b75185 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 555154c5facf4e96a8677362c62b2ac9 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.122.30:35357/v3   |
+--------------+----------------------------------+

[root@controller1 ~(keystone_admin)]# openstack endpoint create --region RegionOne identity internal http://192.168.122.30:5000/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 5205be865e2a4cb9b4ab2119b93c7461 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 555154c5facf4e96a8677362c62b2ac9 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.122.30:5000/v3    |
+--------------+----------------------------------+

Last, update the auth_uri, auth_url and identity_uri parameters in each of the OpenStack services to point to the new IP address (see the sketch after this list for an example). The following configuration files will need to be edited:

/etc/ceilometer/ceilometer.conf
/etc/cinder/api-paste.ini
/etc/glance/glance-api.conf
/etc/glance/glance-registry.conf
/etc/neutron/neutron.conf
/etc/neutron/api-paste.ini
/etc/nova/nova.conf
/etc/swift/proxy-server.conf
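
As an illustration only (the exact option names differ between OpenStack releases and between services, so treat this as a sketch rather than a drop-in change), the [keystone_authtoken] section in a file such as /etc/nova/nova.conf would be pointed at the VIP along these lines:

[keystone_authtoken]
auth_uri = http://192.168.122.30:5000
auth_url = http://192.168.122.30:35357
# some services/releases use identity_uri instead:
# identity_uri = http://192.168.122.30:35357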

Next, install openstack-utils, which provides tools that can restart all of the OpenStack services at once rather than restarting each service manually:

[root@controller1 ~(keystone_admin)]# yum -y install openstack-utils

After editing each of the files, restart the OpenStack services on all of the nodes in the lab deployment using the following command:

[root@controller1 ~(keystone_admin)]# openstack-service restart
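
If you prefer to restart only the services related to the file you just edited, openstack-service also accepts a service name pattern (a hedged example, assuming the tool's pattern matching behaves as on a typical Packstack install):

[root@controller1 ~(keystone_admin)]# openstack-service restart nova
[root@controller1 ~(keystone_admin)]# openstack-service restart neutron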

Next, update your keystonerc_admin file so that OS_AUTH_URL points to the VIP, i.e. http://192.168.122.30:5000/v3, as shown below:

[root@controller1 ~(keystone_admin)]# cat keystonerc_admin
unset OS_SERVICE_TOKEN
    export OS_USERNAME=admin
    export OS_PASSWORD='redhat'
    export OS_AUTH_URL=http://192.168.122.30:5000/v3
    export PS1='[\u@\h \W(keystone_admin)]\$ '

export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3

Now re-source the updated keystonerc_admin file

[root@controller1 ~(keystone_admin)]# source keystonerc_admin

Validate that OS_AUTH_URL now points to the new VIP:

[root@controller1 ~(keystone_admin)]# echo $OS_AUTH_URL
http://192.168.122.30:5000/v3

Once the OpenStack services have restarted, delete the old endpoints for the Keystone service:

[root@controller1 ~(keystone_admin)]# openstack endpoint delete b0f5b7887cd346b3aec747e5b9fafcd3
[root@controller1 ~(keystone_admin)]# openstack endpoint delete c1380d643f734cc1b585048b2e7a7d47

 

NOTE:
You may get the error below while attempting to delete the old endpoints. This is most likely because Keystone has not yet fully refreshed its view of the endpoints, so perform another round of "openstack-service restart" and then re-attempt the delete:
[root@controller1 ~(keystone_admin)]# openstack endpoint delete 3ded2a2faffe4fd485f6c3c58b1990d6
Failed to delete endpoint with ID '3ded2a2faffe4fd485f6c3c58b1990d6': More than one endpoint exists with the name '3ded2a2faffe4fd485f6c3c58b1990d6'.
1 of 1 endpoints failed to delete.

[root@controller1 ~(keystone_admin)]# openstack endpoint list | grep 3ded2a2faffe4fd485f6c3c58b1990d6
| 3ded2a2faffe4fd485f6c3c58b1990d6 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.122.20:5000/v3                 |

[root@controller1 ~(keystone_admin)]# openstack-service restart

[root@controller1 ~(keystone_admin)]# openstack endpoint delete 3ded2a2faffe4fd485f6c3c58b1990d6

Repeat the same set of steps on controller2.

 

After deleting the old endpoints and creating the new ones, below is the updated list of Keystone endpoints as seen from controller2:

[root@controller2 ~(keystone_admin)]# openstack endpoint list | grep keystone
| 07fca3f48dba47cdbf6528909bd2a8e3 | RegionOne | keystone     | identity     | True    | public    | http://192.168.122.30:5000/v3                 |
| 37db43efa2934ce3ab93ea19df8adcc7 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.122.30:5000/v3                 |
| e9da6923b7ff418ab7e30ef65af5c152 | RegionOne | keystone     | identity     | True    | admin     | http://192.168.122.30:35357/v3                |

The OpenStack services will now be using the Keystone API endpoint provided by the VIP and the service will be highly available.

 

Perform a Cluster Failover

Since our ultimate goal is high availability, we should test failover of our new resource.

Before performing a failover, let us make sure our cluster is up and running properly:

[root@controller1 ~(keystone_admin)]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 14:54:45 2018
Last change: Tue Oct 16 12:44:23 2018 by root via cibadmin on controller1

2 nodes configured
2 resources configured

Online: [ controller1 controller2 ]

Full list of resources:

 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):      Started controller1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

As we can see, both controllers are online, so let us stop the cluster on the second controller:

[root@controller1 ~(keystone_admin)]# pcs cluster stop controller2
Stopping Cluster (pacemaker)...
Stopping Cluster (corosync)...

Now let us try to check the pacemaker status from controller2

[root@controller2 ~(keystone_admin)]# pcs status
Error: cluster is not currently running on this node

Since the cluster service is not running on controller2, we cannot check the status there. Let us get the status from controller1 instead:

[root@controller1 ~(keystone_admin)]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 13:21:32 2018
Last change: Tue Oct 16 12:44:23 2018 by root via cibadmin on controller1

2 nodes configured
2 resources configured

Online: [ controller1 ]
OFFLINE: [ controller2 ]

Full list of resources:

 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):      Started controller1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

As expected, controller2 is shown as offline. Now let us check whether the Keystone endpoints are still readable:

[root@controller1 ~(keystone_admin)]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                           |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+
| 06473a06f4a04edc94314a97b29d5395 | RegionOne | cinderv3     | volumev3     | True    | internal  | http://192.168.122.20:8776/v3/%(tenant_id)s   |
| 07ad2939b59b4f4892d6a470a25daaf9 | RegionOne | aodh         | alarming     | True    | public    | http://192.168.122.20:8042                    |
| 07fca3f48dba47cdbf6528909bd2a8e3 | RegionOne | keystone     | identity     | True    | public    | http://192.168.122.30:5000/v3                 |
| 0856cd4b276f490ca48c772af2be49a3 | RegionOne | gnocchi      | metric       | True    | internal  | http://192.168.122.20:8041                    |
| 08ff114d526e4917b5849c0080cfa8f2 | RegionOne | aodh         | alarming     | True    | admin     | http://192.168.122.20:8042                    |
| 1e6cf514c885436fb14ffec0d55286c6 | RegionOne | aodh         | alarming     | True    | internal  | http://192.168.122.20:8042                    |
| 20178fdd0a064b5fa91b869ab492d2d1 | RegionOne | cinderv2     | volumev2     | True    | internal  | http://192.168.122.20:8776/v2/%(tenant_id)s   |
| 3524908122a44d7f855fd09dd2859d4e | RegionOne | nova         | compute      | True    | public    | http://192.168.122.20:8774/v2.1/%(tenant_id)s |
| 37db43efa2934ce3ab93ea19df8adcc7 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.122.30:5000/v3                 |
| 3a896bde051f4ae4bfa3694a1eb05321 | RegionOne | cinderv2     | volumev2     | True    | admin     | http://192.168.122.20:8776/v2/%(tenant_id)s   |
| 3ef1f30aab8646bc96c274a116120e66 | RegionOne | nova         | compute      | True    | admin     | http://192.168.122.20:8774/v2.1/%(tenant_id)s |
| 42a690ef05aa42adbf9ac21056a9d4f3 | RegionOne | nova         | compute      | True    | internal  | http://192.168.122.20:8774/v2.1/%(tenant_id)s |
| 45fea850b0b34f7ca2443da17e82ca13 | RegionOne | glance       | image        | True    | admin     | http://192.168.122.20:9292                    |
| 46cbd1e0a79545dfac83eeb429e24a6c | RegionOne | cinderv2     | volumev2     | True    | public    | http://192.168.122.20:8776/v2/%(tenant_id)s   |
| 49f82b77105e4614b7cf57fe1785bdc3 | RegionOne | cinder       | volume       | True    | internal  | http://192.168.122.20:8776/v1/%(tenant_id)s   |
| 4aced9a3c17741608b2491a8a8fb7503 | RegionOne | cinder       | volume       | True    | public    | http://192.168.122.20:8776/v1/%(tenant_id)s   |
| 63eeaa5246f54c289881ade0686dc9bb | RegionOne | ceilometer   | metering     | True    | admin     | http://192.168.122.20:8777                    |
| 6e2fd583487846e6aab7cac4c001064c | RegionOne | gnocchi      | metric       | True    | public    | http://192.168.122.20:8041                    |
| 79f2fcdff7d740549846a9328f8aa993 | RegionOne | cinderv3     | volumev3     | True    | public    | http://192.168.122.20:8776/v3/%(tenant_id)s   |
| 9730a44676b042e1a9f087137ea52d04 | RegionOne | glance       | image        | True    | public    | http://192.168.122.20:9292                    |
| a028329f053841dfb115e93c7740d65c | RegionOne | neutron      | network      | True    | internal  | http://192.168.122.20:9696                    |
| acc7ff6d8f1941318ab4f456cac5e316 | RegionOne | placement    | placement    | True    | public    | http://192.168.122.20:8778/placement          |
| afecd931e6dc42e8aa1abdba44fec622 | RegionOne | glance       | image        | True    | internal  | http://192.168.122.20:9292                    |
| c08c1cfb0f524944abba81c42e606678 | RegionOne | placement    | placement    | True    | admin     | http://192.168.122.20:8778/placement          |
| c0c0c4e8265e4592942bcfa409068721 | RegionOne | placement    | placement    | True    | internal  | http://192.168.122.20:8778/placement          |
| d9f34d36bd2541b98caa0d6ab74ba336 | RegionOne | cinder       | volume       | True    | admin     | http://192.168.122.20:8776/v1/%(tenant_id)s   |
| e051cee0d06e45d48498b0af24eb08b5 | RegionOne | ceilometer   | metering     | True    | public    | http://192.168.122.20:8777                    |
| e9da6923b7ff418ab7e30ef65af5c152 | RegionOne | keystone     | identity     | True    | admin     | http://192.168.122.30:35357/v3                |
| ea6f1493aa134b6f9822eca447dfd1df | RegionOne | neutron      | network      | True    | admin     | http://192.168.122.20:9696                    |
| ed97856952bb4a3f953ff467d61e9c6a | RegionOne | gnocchi      | metric       | True    | admin     | http://192.168.122.20:8041                    |
| f989d76263364f07becb638fdb5fea6c | RegionOne | neutron      | network      | True    | public    | http://192.168.122.20:9696                    |
| fe32d323287c4a0cb221faafb35141f8 | RegionOne | ceilometer   | metering     | True    | internal  | http://192.168.122.20:8777                    |
| fef852af4f0d4f0cacd4620e5d5245c2 | RegionOne | cinderv3     | volumev3     | True    | admin     | http://192.168.122.20:8776/v3/%(tenant_id)s   |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+

Yes, we are still able to read the endpoint list for Keystone, so everything looks fine.

 

Let us start the cluster again on controller2:

[root@controller2 ~(keystone_admin)]# pcs cluster start
Starting Cluster...

And check the status

[root@controller2 ~(keystone_admin)]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 13:23:17 2018
Last change: Tue Oct 16 12:44:23 2018 by root via cibadmin on controller1

2 nodes configured
2 resources configured

Online: [ controller1 controller2 ]

Full list of resources:

 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):      Started controller1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

So everything is back to green, and we have successfully configured HAProxy in OpenStack.
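
If you also want to exercise failover of the resources themselves without stopping a whole node, one option (not part of the original steps) is to move the HAProxy resource with pcs; because of the colocation constraint, the VIP follows it. Remember to clear the temporary location constraint afterwards:

[root@controller1 ~(keystone_admin)]# pcs resource move HAProxy controller2
[root@controller1 ~(keystone_admin)]# pcs status | grep -E 'VirtualIP|HAProxy'
[root@controller1 ~(keystone_admin)]# pcs resource clear HAProxy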

 

Lastly, I hope the steps in this article to configure HAProxy in OpenStack (high availability between controllers) were helpful. Let me know your suggestions and feedback in the comment section.

 
