This is the second part of my previous article, where I shared the steps to configure an OpenStack HA cluster using Pacemaker and Corosync. In this article I will share the steps to configure HAProxy in OpenStack and move our Keystone endpoints to the load balancer using a virtual IP (VIP).
Configure HAProxy in OpenStack
In this lab deployment we will use HAProxy to load-balance our control plane services. Some deployments also implement Keepalived and run HAProxy in an Active/Active configuration. For this deployment, we will run HAProxy Active/Passive and manage it, together with our VIP, as a resource in Pacemaker.
To start, install HAProxy on both nodes using the following command:
[root@controller1 ~]# yum install -y haproxy
[root@controller2 ~]# yum install -y haproxy
Verify installation with the following command:
[root@controller1 ~]# rpm -q haproxy
haproxy-1.5.18-7.el7.x86_64
[root@controller2 ~]# rpm -q haproxy
haproxy-1.5.18-7.el7.x86_64
Next, we will create a configuration file for HAProxy which load-balances the API services installed on the two controllers. Use the following example as a template, replacing the IP addresses in the example with the IP addresses of the two controllers and the IP address of the VIP that you'll be using to load-balance the API services.
Take a backup of the existing config file on both the controller nodes
[root@controller1 ~]# mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bkp
[root@controller2 ~]# mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bkp
The following example /etc/haproxy/haproxy.cfg will load-balance Horizon in our environment:
[root@controller1 haproxy]# cat haproxy.cfg
global
  daemon
  group haproxy
  maxconn 40000
  pidfile /var/run/haproxy.pid
  user haproxy

defaults
  log 127.0.0.1 local2 warning
  mode tcp
  option tcplog
  option redispatch
  retries 3
  timeout connect 10s
  timeout client 60s
  timeout server 60s
  timeout check 10s

listen horizon
  bind 192.168.122.30:80
  mode http
  cookie SERVERID insert indirect nocache
  option tcplog
  timeout client 180s
  server controller1 192.168.122.20:80 cookie controller1 check inter 1s
  server controller2 192.168.122.22:80 cookie controller2 check inter 1s
In this example, controller1 has an IP address of 192.168.122.20 and controller2 has an IP address of 192.168.122.22. The VIP that we've chosen to use is 192.168.122.30. Copy this file, replacing the IP addresses with the addresses in your lab, to /etc/haproxy/haproxy.cfg on each of the controllers.
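Before copying the file, you can optionally ask HAProxy to validate it. This is just a sanity check and not a required step; haproxy should report that the configuration is valid:
[root@controller1 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg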
To complete the HAProxy configuration, copy this haproxy.cfg file to the second controller:
[root@controller1 ~]# scp /etc/haproxy/haproxy.cfg controller2:/etc/haproxy/haproxy.cfg
In order for Horizon to respond to requests on the VIP, we'll need to add the VIP as a ServerAlias in the Apache virtual host configuration. This is found at /etc/httpd/conf.d/15-horizon_vhost.conf in our lab installation. Look for the following line on controller1:
ServerAlias 192.168.122.20
and the following line on controller2:
ServerAlias 192.168.122.22
Add an additional ServerAlias line with the VIP on both controllers:
ServerAlias 192.168.122.30
You'll also need to tell Apache not to listen on the VIP so that HAProxy can bind to the address. To do this, modify /etc/httpd/conf/ports.conf and specify the IP address of the controller in addition to the port numbers. The following is an example:
[root@controller1 ~]# cat /etc/httpd/conf/ports.conf
# ************************************
# Listen & NameVirtualHost resources in module puppetlabs-apache
# Managed by Puppet
# ************************************
Listen 0.0.0.0:8778
#Listen 35357
#Listen 5000
#Listen 80
Listen 8041
Listen 8042
Listen 8777
Listen 192.168.122.20:35357
Listen 192.168.122.20:5000
Listen 192.168.122.20:80
Here 192.168.122.20 is the IP of controller1.
On controller2, repeat the same with the IP of that controller node:
[root@controller2 ~(keystone_admin)]# cat /etc/httpd/conf/ports.conf
# ************************************
# Listen & NameVirtualHost resources in module puppetlabs-apache
# Managed by Puppet
# ************************************
Listen 0.0.0.0:8778
#Listen 35357
#Listen 5000
#Listen 80
Listen 8041
Listen 8042
Listen 8777
Listen 192.168.122.22:35357
Listen 192.168.122.22:5000
Listen 192.168.122.22:80
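Before restarting Apache, it can be worth checking the configuration syntax on each controller. This is an optional sanity check; httpd should report that the syntax is OK if the edits are clean:
[root@controller1 ~]# httpd -t
[root@controller2 ~]# httpd -t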
Restart Apache to pick up the new alias:
[root@controller1 ~]# systemctl restart httpd
[root@controller2 ~]# systemctl restart httpd
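After the restart you can confirm that Apache is now bound to the controller's own address on ports 80, 5000 and 35357, leaving the VIP free for HAProxy to bind later. A minimal check, assuming ss is available (it is on CentOS/RHEL 7):
[root@controller1 ~]# ss -tlnp | grep httpd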
Next, add the VIP and the HAProxy service to the Pacemaker cluster as resources. These commands should only be run on the first controller node. This tells Pacemaker three things about the resource you want to add:
- The first field (ocf in this case) is the standard to which the resource script conforms and where to find it.
- The second field (heartbeat in this case) is standard-specific; for OCF resources, it tells the cluster which OCF namespace the resource script is in.
- The third field (IPaddr2 in this case) is the name of the resource script.
[root@controller1 ~]# pcs resource create VirtualIP IPaddr2 ip=192.168.122.30 cidr_netmask=24
Assumed agent name 'ocf:heartbeat:IPaddr2' (deduced from 'IPaddr2')
[root@controller1 ~]# pcs resource create HAProxy systemd:haproxy
Co-locate the HAProxy service with the VirtualIP to ensure that the two run together:
[root@controller1 ~]# pcs constraint colocation add VirtualIP with HAProxy score=INFINITY
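Optionally, you can also add an ordering constraint so that Pacemaker always brings the VIP up before it starts HAProxy; otherwise HAProxy could briefly fail to bind the address during a failover. This is a suggested addition rather than part of the original configuration:
[root@controller1 ~]# pcs constraint order VirtualIP then HAProxy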
Verify that the resources have started; you can check the status from either controller:
[root@controller1 ~]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller2 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 12:44:27 2018
Last change: Tue Oct 16 12:44:23 2018 by root via cibadmin on controller1

2 nodes configured
2 resources configured

Online: [ controller1 controller2 ]

Full list of resources:

 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):              Started controller1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
At this point, you should be able to access Horizon using the VIP you specified. Traffic will flow from your client to HAProxy on the VIP to Apache on one of the two nodes.
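A quick way to confirm this from any machine that can reach the VIP is to request the dashboard over HTTP. The /dashboard path below is what a Packstack-based installation typically uses, so adjust it to match your Horizon URL; you should see an HTTP redirect or a 200 response served through HAProxy:
[root@controller1 ~]# curl -I http://192.168.122.30/dashboard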
Additional API service configuration
Now that the HAProxy configuration in OpenStack is complete, the final configuration step is to move each of the OpenStack API endpoints behind the load balancer. There are three steps in this process, which are as follows:
- Update the HAProxy configuration to include the service.
- Move the endpoint in the Keystone service catalog to the VIP.
- Reconfigure services to point to the VIP instead of the IP of the first controller.
In the following example, we will move the Keystone service behind the load balancer. This process can be followed for each of the API services.
First, add a section to the HAProxy configuration file for the public (authentication) and admin endpoints of Keystone. Add the following template to the existing haproxy.cfg file on both controllers:
[root@controller1 ~]# vim /etc/haproxy/haproxy.cfg
listen keystone-admin
  bind 192.168.122.30:35357
  mode tcp
  option tcplog
  server controller1 192.168.122.20:35357 check inter 1s
  server controller2 192.168.122.22:35357 check inter 1s

listen keystone-public
  bind 192.168.122.30:5000
  mode tcp
  option tcplog
  server controller1 192.168.122.20:5000 check inter 1s
  server controller2 192.168.122.22:5000 check inter 1s
Restart the haproxy service on the active node:
[root@controller1 ~]# systemctl restart haproxy.service
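Before running the curl tests below, you can optionally confirm that the restarted HAProxy has bound the new Keystone frontends on the VIP. A minimal check, assuming ss is available:
[root@controller1 ~]# ss -tlnp | grep haproxy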
You can determine the active node from the output of pcs status. Check to make sure that HAProxy is now listening on ports 5000 and 35357 using the following commands on both controllers:
[root@controller1 ~]# curl http://192.168.122.30:5000
{"versions": {"values": [{"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:5000/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.168.122.30:5000/v2.0/", "rel": "self"}, {"href": "htt

[root@controller1 ~]# curl http://192.168.122.30:5000/v3
{"version": {"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:5000/v3/", "rel": "self"}]}}

[root@controller1 ~]# curl http://192.168.122.30:35357/v3
{"version": {"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:35357/v3/", "rel": "self"}]}}

[root@controller1 ~]# curl http://192.168.122.30:35357
{"versions": {"values": [{"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:35357/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.168.122.30:35357/v2.0/", "rel": "self"}, {"href": "https://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}}
All of the above commands should return JSON describing the status of the Keystone service, confirming that HAProxy is listening on the respective ports.
Next, update the endpoint for the identity service in the Keystone service catalog by creating a new endpoint and deleting the old one. Start by sourcing your existing keystonerc_admin file:
[root@controller1 ~(keystone_admin)]# source keystonerc_admin
Below is the content of my keystonerc_admin file:
[root@controller1 ~(keystone_admin)]# cat keystonerc_admin
unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD='redhat'
export OS_AUTH_URL=http://192.168.122.20:5000/v3
export PS1='[\u@\h \W(keystone_admin)]\$ '
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3
As you can see, OS_AUTH_URL currently points to the existing endpoint on controller1. We will update this shortly.
Get the list of current Keystone endpoints on your active controller:
[root@controller1 ~(keystone_admin)]# openstack endpoint list | grep keystone
| 3ded2a2faffe4fd485f6c3c58b1990d6 | RegionOne | keystone | identity | True | internal | http://192.168.122.20:5000/v3 |
| b0f5b7887cd346b3aec747e5b9fafcd3 | RegionOne | keystone | identity | True | admin | http://192.168.122.20:35357/v3 |
| c1380d643f734cc1b585048b2e7a7d47 | RegionOne | keystone | identity | True | public | http://192.168.122.20:5000/v3 |
Since we want to move the Keystone service endpoints to the VIP, create new endpoints with the VIP URL for the public, admin and internal interfaces:
[root@controller1 ~(keystone_admin)]# openstack endpoint create --region RegionOne identity public http://192.168.122.30:5000/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 08a26ace08884b85a0ff869ddb20bea3 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 555154c5facf4e96a8677362c62b2ac9 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.122.30:5000/v3    |
+--------------+----------------------------------+

[root@controller1 ~(keystone_admin)]# openstack endpoint create --region RegionOne identity admin http://192.168.122.30:35357/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ef210afef1da4558abdc00cc13b75185 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 555154c5facf4e96a8677362c62b2ac9 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.122.30:35357/v3   |
+--------------+----------------------------------+

[root@controller1 ~(keystone_admin)]# openstack endpoint create --region RegionOne identity internal http://192.168.122.30:5000/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 5205be865e2a4cb9b4ab2119b93c7461 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 555154c5facf4e96a8677362c62b2ac9 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.122.30:5000/v3    |
+--------------+----------------------------------+
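You can confirm that the three new endpoints exist alongside the old ones by filtering the endpoint list for the identity service; the old 192.168.122.20 entries will still appear until we delete them later:
[root@controller1 ~(keystone_admin)]# openstack endpoint list --service identity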
Last, update the auth_uri, auth_url and identity_uri parameters in each of the OpenStack services to point to the new IP address. The following configuration files will need to be edited:
/etc/ceilometer/ceilometer.conf
/etc/cinder/api-paste.ini
/etc/glance/glance-api.conf
/etc/glance/glance-registry.conf
/etc/neutron/neutron.conf
/etc/neutron/api-paste.ini
/etc/nova/nova.conf
/etc/swift/proxy-server.conf
Next, install openstack-utils so that we can restart all of the OpenStack services at once rather than restarting each one manually:
[root@controller1 ~(keystone_admin)]# yum -y install openstack-utils
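The openstack-utils package also provides openstack-config, which can update these values in place instead of editing each file by hand. The commands below are only an illustration: the section and parameter names shown ([keystone_authtoken], auth_uri, auth_url) are typical for this release but vary between services, so verify them against your own configuration files before scripting the change:
[root@controller1 ~(keystone_admin)]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://192.168.122.30:5000/v3
[root@controller1 ~(keystone_admin)]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://192.168.122.30:35357/v3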
After editing each of the files, restart the OpenStack services on all of the nodes in the lab deployment using the following command:
[root@controller1 ~(keystone_admin)]# openstack-service restart
Next, update your keystonerc_admin file so that OS_AUTH_URL points to the VIP, i.e. http://192.168.122.30:5000/v3, as shown below:
[root@controller1 ~(keystone_admin)]# cat keystonerc_admin
unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD='redhat'
export OS_AUTH_URL=http://192.168.122.30:5000/v3
export PS1='[\u@\h \W(keystone_admin)]\$ '
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3
Now re-source the updated keystonerc_admin file:
[root@controller1 ~(keystone_admin)]# source keystonerc_admin
Validate that OS_AUTH_URL now points to the new VIP:
[root@controller1 ~(keystone_admin)]# echo $OS_AUTH_URL
http://192.168.122.30:5000/v3
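As an additional check, you can request a token through the new URL; if openstack token issue returns a token, authentication is flowing through HAProxy on the VIP:
[root@controller1 ~(keystone_admin)]# openstack token issue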
Once the OpenStack services have been restarted, delete the old endpoints for the Keystone service:
[root@controller1 ~(keystone_admin)]# openstack endpoint delete b0f5b7887cd346b3aec747e5b9fafcd3
[root@controller1 ~(keystone_admin)]# openstack endpoint delete c1380d643f734cc1b585048b2e7a7d47
[root@controller1 ~(keystone_admin)]# openstack endpoint delete 3ded2a2faffe4fd485f6c3c58b1990d6
Failed to delete endpoint with ID '3ded2a2faffe4fd485f6c3c58b1990d6': More than one endpoint exists with the name '3ded2a2faffe4fd485f6c3c58b1990d6'.
1 of 1 endpoints failed to delete.
[root@controller1 ~(keystone_admin)]# openstack endpoint list | grep 3ded2a2faffe4fd485f6c3c58b1990d6
| 3ded2a2faffe4fd485f6c3c58b1990d6 | RegionOne | keystone | identity | True | internal | http://192.168.122.20:5000/v3 |
[root@controller1 ~(keystone_admin)]# openstack-service restart
[root@controller1 ~(keystone_admin)]# openstack endpoint delete 3ded2a2faffe4fd485f6c3c58b1990d6
As shown above, the first attempt to delete the internal endpoint failed; after another openstack-service restart the delete went through.
Repeat the same set of steps on controller2.
After deleting the old endpoints and creating the new ones, below is the updated list of Keystone endpoints as seen from controller2:
[root@controller2 ~(keystone_admin)]# openstack endpoint list | grep keystone
| 07fca3f48dba47cdbf6528909bd2a8e3 | RegionOne | keystone | identity | True | public   | http://192.168.122.30:5000/v3  |
| 37db43efa2934ce3ab93ea19df8adcc7 | RegionOne | keystone | identity | True | internal | http://192.168.122.30:5000/v3  |
| e9da6923b7ff418ab7e30ef65af5c152 | RegionOne | keystone | identity | True | admin    | http://192.168.122.30:35357/v3 |
The OpenStack services will now be using the Keystone API endpoint provided by the VIP and the service will be highly available.
Perform a Cluster Failover
Since our ultimate goal is high availability, we should test failover of our new resource.
Before performing a failover, let us make sure our cluster is up and running properly:
[root@controller2 ~(keystone_admin)]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 14:54:45 2018
Last change: Tue Oct 16 12:44:23 2018 by root via cibadmin on controller1

2 nodes configured
2 resources configured

Online: [ controller1 controller2 ]

Full list of resources:

 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):              Started controller1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
As we can see, both controllers are online, so let us stop the cluster services on the second controller:
[root@controller2 ~(keystone_admin)]# pcs cluster stop controller2
Stopping Cluster (pacemaker)...
Stopping Cluster (corosync)...
Now let us try to check the pacemaker status from controller2
[root@controller2 ~(keystone_admin)]# pcs status
Error: cluster is not currently running on this node
Since the cluster service is not running on controller2, we cannot check the status there, so let us get the status from controller1:
[root@controller1 ~(keystone_admin)]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 13:21:32 2018
Last change: Tue Oct 16 12:44:23 2018 by root via cibadmin on controller1

2 nodes configured
2 resources configured

Online: [ controller1 ]
OFFLINE: [ controller2 ]

Full list of resources:

 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):              Started controller1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
As expected, it shows controller2 as offline. Now let us check whether the Keystone endpoint list is still readable:
[root@controller2 ~(keystone_admin)]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                           |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+
| 06473a06f4a04edc94314a97b29d5395 | RegionOne | cinderv3 | volumev3 | True | internal | http://192.168.122.20:8776/v3/%(tenant_id)s |
| 07ad2939b59b4f4892d6a470a25daaf9 | RegionOne | aodh | alarming | True | public | http://192.168.122.20:8042 |
| 07fca3f48dba47cdbf6528909bd2a8e3 | RegionOne | keystone | identity | True | public | http://192.168.122.30:5000/v3 |
| 0856cd4b276f490ca48c772af2be49a3 | RegionOne | gnocchi | metric | True | internal | http://192.168.122.20:8041 |
| 08ff114d526e4917b5849c0080cfa8f2 | RegionOne | aodh | alarming | True | admin | http://192.168.122.20:8042 |
| 1e6cf514c885436fb14ffec0d55286c6 | RegionOne | aodh | alarming | True | internal | http://192.168.122.20:8042 |
| 20178fdd0a064b5fa91b869ab492d2d1 | RegionOne | cinderv2 | volumev2 | True | internal | http://192.168.122.20:8776/v2/%(tenant_id)s |
| 3524908122a44d7f855fd09dd2859d4e | RegionOne | nova | compute | True | public | http://192.168.122.20:8774/v2.1/%(tenant_id)s |
| 37db43efa2934ce3ab93ea19df8adcc7 | RegionOne | keystone | identity | True | internal | http://192.168.122.30:5000/v3 |
| 3a896bde051f4ae4bfa3694a1eb05321 | RegionOne | cinderv2 | volumev2 | True | admin | http://192.168.122.20:8776/v2/%(tenant_id)s |
| 3ef1f30aab8646bc96c274a116120e66 | RegionOne | nova | compute | True | admin | http://192.168.122.20:8774/v2.1/%(tenant_id)s |
| 42a690ef05aa42adbf9ac21056a9d4f3 | RegionOne | nova | compute | True | internal | http://192.168.122.20:8774/v2.1/%(tenant_id)s |
| 45fea850b0b34f7ca2443da17e82ca13 | RegionOne | glance | image | True | admin | http://192.168.122.20:9292 |
| 46cbd1e0a79545dfac83eeb429e24a6c | RegionOne | cinderv2 | volumev2 | True | public | http://192.168.122.20:8776/v2/%(tenant_id)s |
| 49f82b77105e4614b7cf57fe1785bdc3 | RegionOne | cinder | volume | True | internal | http://192.168.122.20:8776/v1/%(tenant_id)s |
| 4aced9a3c17741608b2491a8a8fb7503 | RegionOne | cinder | volume | True | public | http://192.168.122.20:8776/v1/%(tenant_id)s |
| 63eeaa5246f54c289881ade0686dc9bb | RegionOne | ceilometer | metering | True | admin | http://192.168.122.20:8777 |
| 6e2fd583487846e6aab7cac4c001064c | RegionOne | gnocchi | metric | True | public | http://192.168.122.20:8041 |
| 79f2fcdff7d740549846a9328f8aa993 | RegionOne | cinderv3 | volumev3 | True | public | http://192.168.122.20:8776/v3/%(tenant_id)s |
| 9730a44676b042e1a9f087137ea52d04 | RegionOne | glance | image | True | public | http://192.168.122.20:9292 |
| a028329f053841dfb115e93c7740d65c | RegionOne | neutron | network | True | internal | http://192.168.122.20:9696 |
| acc7ff6d8f1941318ab4f456cac5e316 | RegionOne | placement | placement | True | public | http://192.168.122.20:8778/placement |
| afecd931e6dc42e8aa1abdba44fec622 | RegionOne | glance | image | True | internal | http://192.168.122.20:9292 |
| c08c1cfb0f524944abba81c42e606678 | RegionOne | placement | placement | True | admin | http://192.168.122.20:8778/placement |
| c0c0c4e8265e4592942bcfa409068721 | RegionOne | placement | placement | True | internal | http://192.168.122.20:8778/placement |
| d9f34d36bd2541b98caa0d6ab74ba336 | RegionOne | cinder | volume | True | admin | http://192.168.122.20:8776/v1/%(tenant_id)s |
| e051cee0d06e45d48498b0af24eb08b5 | RegionOne | ceilometer | metering | True | public | http://192.168.122.20:8777 |
| e9da6923b7ff418ab7e30ef65af5c152 | RegionOne | keystone | identity | True | admin | http://192.168.122.30:35357/v3 |
| ea6f1493aa134b6f9822eca447dfd1df | RegionOne | neutron | network | True | admin | http://192.168.122.20:9696 |
| ed97856952bb4a3f953ff467d61e9c6a | RegionOne | gnocchi | metric | True | admin | http://192.168.122.20:8041 |
| f989d76263364f07becb638fdb5fea6c | RegionOne | neutron | network | True | public | http://192.168.122.20:9696 |
| fe32d323287c4a0cb221faafb35141f8 | RegionOne | ceilometer | metering | True | internal | http://192.168.122.20:8777 |
| fef852af4f0d4f0cacd4620e5d5245c2 | RegionOne | cinderv3 | volumev3 | True | admin | http://192.168.122.20:8776/v3/%(tenant_id)s |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+
Yes, we are still able to read the endpoint list for Keystone, so everything looks fine.
Let us start the cluster services on controller2 again:
[root@controller2 ~(keystone_admin)]# pcs cluster start
Starting Cluster...
And check the status
[root@controller2 ~(keystone_admin)]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 13:23:17 2018
Last change: Tue Oct 16 12:44:23 2018 by root via cibadmin on controller1

2 nodes configured
2 resources configured

Online: [ controller1 controller2 ]

Full list of resources:

 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):              Started controller1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
So everything is back to green, and we have successfully configured HAProxy in OpenStack.
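Note that in this test the resources stayed on controller1 throughout, since we stopped the passive node. A more thorough failover test is to put controller1 itself into standby, confirm that the VirtualIP and HAProxy resources move to controller2, and then bring it back. These commands are a suggested extra step rather than part of the original walkthrough:
[root@controller2 ~(keystone_admin)]# pcs cluster standby controller1
[root@controller2 ~(keystone_admin)]# pcs status
[root@controller2 ~(keystone_admin)]# pcs cluster unstandby controller1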
Lastly, I hope the steps in this article to configure HAProxy in OpenStack (high availability between controllers) were helpful. Let me know your suggestions and feedback in the comment section.
I have a question: should we do these steps after the OpenStack build is complete, or before the OpenStack services are built? My OpenStack deployment has etcd, Memcached and NTPD built in, so I don't know whether to point them at the controller IP or the VIP. Please explain, thanks.
You must have an up and running OpenStack environment before following this guide. Next, you need a free IP address which you will configure as the virtual IP (VIP); this will be mapped to all of your controller IPs using HAProxy.
Hi, thanks for your reply. But OpenStack runs many services, including Memcached, etcd and NTPD, and these services use the controller IP. Should I keep that, or replace it with the VIP of HAProxy + Keepalived? Thank you so much.
All of these services are part of the controller node, so you only have to worry about the main controller IP. At any point in time one of the controllers will always be active via the VIP.
So I will configure all services, including Memcached, NTPD and etcd, on the controller node, and the Block, Compute, Ceph and Glance nodes that integrate with the controller will point to the VIP, right?
Hi, I have some trouble and need your help. I always get the following error message:
[ALERT] 162/093745 (934329) : Starting proxy cinder-api: cannot bind socket [192.168.3.90:8776]
192.168.3.90 is the VIP for .91, .92 and .93. I guess the reason is that the cinder-api service is already running and has taken port 8776, so HAProxy fails when it tries to bind 8776, but I don't know how to solve it. Looking forward to your reply.
You can stop the service and let HAProxy manage it if you feel that is the root cause. Otherwise, the logs may have some more information.
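To make that concrete, one common way to resolve such a conflict is to bind cinder-api to the controller's own address instead of 0.0.0.0, so that only HAProxy binds the VIP. The snippet below is a hypothetical sketch using the addresses from the question (192.168.3.91 as the controller's own IP); osapi_volume_listen is the usual cinder.conf option for this, but confirm it against your release before applying, then restart openstack-cinder-api followed by haproxy:
# /etc/cinder/cinder.conf (on each controller)
[DEFAULT]
osapi_volume_listen = 192.168.3.91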
Dear admin, how is the data distributed in the database?
Which database are we discussing here?
Hello, I have read in other documents that PCS should manage the OpenStack services, because if you do not configure the OpenStack services under PCS and, say, Keystone on the first controller fails, the load balancer does not know it has failed and still sends queries to it. Generally, in this guide, there is no method for the load balancer to check the status of the services on the control nodes.
Thank you for your feedback. In this guide I have only covered examples of setting up a PCS cluster for some of the application services, not the OpenStack services. If you wish to add the other compute services under PCS, that can be done separately, and they can then be monitored for failure. I will try to write an article on setting up a cluster for the OpenStack services as well.
I found your site from Google and I need to state it was a fantastic find. Many thanks!
Hi, just a simple question: what about the databases between the two controllers? Cheers!
Hi, the redundancy is at the controller level. Do you mean you need redundancy for your database? Regards