I will assume that your undercloud installation is complete, so in this article we will continue with the steps to configure the director to deploy the overcloud in OpenStack using Red Hat OpenStack Platform Director 10 and virt-manager.
In our last article we covered the following areas:
- First of all, bring up a physical host
- Install a new virtual machine for the undercloud director
- Set the hostname for the director
- Configure the repo or subscribe to RHN
- Install python-tripleoclient
- Configure undercloud.conf
- Install the undercloud
Now in this article we will continue with the pending steps to configure the undercloud director node to deploy the overcloud in OpenStack:
- Obtain and upload images for overcloud introspection and deployment
- Create virtual machines for overcloud nodes (compute and controller)
- Configure Virtual Bare Metal Controller
- Importing and registering the overcloud nodes
- Introspecting the overcloud nodes
- Tagging overcloud nodes to profiles
- Lastly start deploying Overcloud Nodes
Deploy Overcloud in Openstack
The director requires several disk images for provisioning overcloud nodes. This includes:
- An introspection kernel and ramdisk ⇒ Used for bare metal system introspection over PXE boot.
- A deployment kernel and ramdisk ⇒ Used for system provisioning and deployment.
- An overcloud kernel, ramdisk, and full image ⇒ A base overcloud system that is written to the node’s hard disk.
Obtaining Images for Overcloud
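The copy step below assumes the ~/images directory already exists from the undercloud setup; if it does not, create it first (this mkdir is my addition, not part of the original steps):
[stack@director ~]$ mkdir -p ~/images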
[stack@director ~]$ sudo yum install rhosp-director-images rhosp-director-images-ipa -y
[stack@director ~]$ cp /usr/share/rhosp-director-images/overcloud-full-latest-10.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-10.0.tar ~/images/
[stack@director ~]$ cd images/
Extract the archives to the images directory on the stack user’s home (/home/stack/images):
[stack@director images]$ tar -xf overcloud-full-latest-10.0.tar
[stack@director images]$ tar -xf ironic-python-agent-latest-10.0.tar
[stack@director images]$ ls -l
total 3848560
-rw-r--r--. 1 stack stack  425703356 Aug 22 02:15 ironic-python-agent.initramfs
-rwxr-xr-x. 1 stack stack    6398256 Aug 22 02:15 ironic-python-agent.kernel
-rw-r--r--. 1 stack stack  432107520 Oct  8 10:14 ironic-python-agent-latest-10.0.tar
-rw-r--r--. 1 stack stack   61388282 Aug 22 02:29 overcloud-full.initrd
-rw-r--r--. 1 stack stack 1537239040 Oct  8 10:13 overcloud-full-latest-10.0.tar
-rw-r--r--. 1 stack stack 1471676416 Oct  8 10:18 overcloud-full.qcow2
-rwxr-xr-x. 1 stack stack    6398256 Aug 22 02:29 overcloud-full.vmlinuz
Change root password for overcloud nodes
You need virt-customize to change the root password.
[stack@director images]$ sudo yum install -y libguestfs-tools
Execute the command below, replacing the second "password" with the password you wish to assign to "root":
[stack@director images]$ virt-customize -a overcloud-full.qcow2 --root-password password:password
[ 0.0] Examining the guest ...
[ 40.9] Setting a random seed
[ 40.9] Setting the machine ID in /etc/machine-id
[ 40.9] Setting passwords
[ 63.0] Finishing off
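As an optional extra step (not part of the original flow), virt-customize can also inject an SSH public key for root into the same image, which makes post-deployment troubleshooting easier. The key path below is an assumption; adjust it to your environment:
[stack@director images]$ virt-customize -a overcloud-full.qcow2 --ssh-inject root:file:/home/stack/.ssh/id_rsa.pub --selinux-relabel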
Import these images into the director:
[stack@director images]$ openstack overcloud image upload --image-path ~/images/
Image "overcloud-full-vmlinuz" was uploaded.
+--------------------------------------+------------------------+-------------+---------+--------+
| ID                                   | Name                   | Disk Format | Size    | Status |
+--------------------------------------+------------------------+-------------+---------+--------+
| db69fe5c-2b06-4d56-914b-9fb6b32130fe | overcloud-full-vmlinuz | aki         | 6398256 | active |
+--------------------------------------+------------------------+-------------+---------+--------+
Image "overcloud-full-initrd" was uploaded.
+--------------------------------------+-----------------------+-------------+----------+--------+
| ID                                   | Name                  | Disk Format | Size     | Status |
+--------------------------------------+-----------------------+-------------+----------+--------+
| 56e387a9-e570-4bff-be91-16fbc9bb7bcc | overcloud-full-initrd | ari         | 61388282 | active |
+--------------------------------------+-----------------------+-------------+----------+--------+
Image "overcloud-full" was uploaded.
+--------------------------------------+----------------+-------------+------------+--------+
| ID                                   | Name           | Disk Format | Size       | Status |
+--------------------------------------+----------------+-------------+------------+--------+
| 234179da-b9ff-424d-ac94-83042b5f073e | overcloud-full | qcow2       | 1471676416 | active |
+--------------------------------------+----------------+-------------+------------+--------+
Image "bm-deploy-kernel" was uploaded.
+--------------------------------------+------------------+-------------+---------+--------+
| ID                                   | Name             | Disk Format | Size    | Status |
+--------------------------------------+------------------+-------------+---------+--------+
| 3b73c55b-6184-41df-a6e5-9a56cfb73238 | bm-deploy-kernel | aki         | 6398256 | active |
+--------------------------------------+------------------+-------------+---------+--------+
Image "bm-deploy-ramdisk" was uploaded.
+--------------------------------------+-------------------+-------------+-----------+--------+
| ID                                   | Name              | Disk Format | Size      | Status |
+--------------------------------------+-------------------+-------------+-----------+--------+
| 9624b338-cb5f-45e0-b0f4-3fe78f0f3f45 | bm-deploy-ramdisk | ari         | 425703356 | active |
+--------------------------------------+-------------------+-------------+-----------+--------+
View the list of the images in the CLI:
[stack@director images]$ openstack image list
+--------------------------------------+------------------------+--------+
| ID                                   | Name                   | Status |
+--------------------------------------+------------------------+--------+
| 9624b338-cb5f-45e0-b0f4-3fe78f0f3f45 | bm-deploy-ramdisk      | active |
| 3b73c55b-6184-41df-a6e5-9a56cfb73238 | bm-deploy-kernel       | active |
| 234179da-b9ff-424d-ac94-83042b5f073e | overcloud-full         | active |
| 56e387a9-e570-4bff-be91-16fbc9bb7bcc | overcloud-full-initrd  | active |
| db69fe5c-2b06-4d56-914b-9fb6b32130fe | overcloud-full-vmlinuz | active |
+--------------------------------------+------------------------+--------+
This list will not show the introspection PXE images. The director copies these files to /httpboot.
[stack@director images]$ ls -l /httpboot/
total 421988
-rwxr-xr-x. 1 root             root               6398256 Oct  8 10:19 agent.kernel
-rw-r--r--. 1 root             root             425703356 Oct  8 10:19 agent.ramdisk
-rw-r--r--. 1 ironic           ironic                 759 Oct  8 10:41 boot.ipxe
-rw-r--r--. 1 ironic-inspector ironic-inspector       473 Oct  8 09:43 inspector.ipxe
drwxr-xr-x. 2 ironic           ironic                   6 Oct  8 10:51 pxelinux.cfg
Setting a nameserver on the undercloud's neutron subnet
Overcloud nodes require a nameserver so that they can resolve hostnames through DNS. For a standard overcloud without network isolation, the nameserver is defined using the undercloud's neutron subnet.
[stack@director images]$ neutron subnet-list
+--------------------------------------+------+------------------+---------------------------------------------------------+
| id                                   | name | cidr             | allocation_pools                                        |
+--------------------------------------+------+------------------+---------------------------------------------------------+
| 7b7f251d-edfc-46ea-8d56-f9f2397e01d1 |      | 192.168.126.0/24 | {"start": "192.168.126.100", "end": "192.168.126.150"}  |
+--------------------------------------+------+------------------+---------------------------------------------------------+
Update the subnet to use your nameserver:
[stack@director images]$ neutron subnet-update 7b7f251d-edfc-46ea-8d56-f9f2397e01d1 --dns-nameserver 192.168.122.1
Updated subnet: 7b7f251d-edfc-46ea-8d56-f9f2397e01d1
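If you want the overcloud nodes to use more than one resolver, the old neutron CLI accepts a list form of the same option; the second address below is a placeholder, not part of my setup:
[stack@director images]$ neutron subnet-update 7b7f251d-edfc-46ea-8d56-f9f2397e01d1 --dns-nameservers list=true 192.168.122.1 8.8.8.8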
Validate the changes
[stack@director images]$ neutron subnet-show 7b7f251d-edfc-46ea-8d56-f9f2397e01d1
+-------------------+-------------------------------------------------------------------+
| Field | Value |
+-------------------+-------------------------------------------------------------------+
| allocation_pools | {"start": "192.168.126.100", "end": "192.168.126.150"} |
| cidr | 192.168.126.0/24 |
| created_at | 2018-10-08T04:20:48Z |
| description | |
| dns_nameservers | 192.168.122.1 |
| enable_dhcp | True |
| gateway_ip | 192.168.126.1 |
| host_routes | {"destination": "169.254.169.254/32", "nexthop": "192.168.126.1"} |
| id | 7b7f251d-edfc-46ea-8d56-f9f2397e01d1 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | |
| network_id | 7047a1c6-86ac-4237-8fe5-b0bb26538752 |
| project_id | 681d63dc1f1d4c5892941c68e6d07c54 |
| revision_number | 3 |
| service_types | |
| subnetpool_id | |
| tenant_id | 681d63dc1f1d4c5892941c68e6d07c54 |
| updated_at | 2018-10-08T04:50:09Z |
+-------------------+-------------------------------------------------------------------+
Create virtual machines for overcloud
My controller node configuration:
OS                           | RHEL 7.4
VM Name                      | controller0
vCPUs                        | 2
Memory                       | 8192 MB
Disk                         | 60 GB
NIC 1 (Provisioning Network) | MAC: 52:54:00:36:65:a6
NIC 2 (External Network)     | MAC: 52:54:00:c4:34:ca
My compute node configuration:
OS                           | RHEL 7.4
VM Name                      | compute1
vCPUs                        | 2
Memory                       | 8192 MB
Disk                         | 60 GB
NIC 1 (Provisioning Network) | MAC: 52:54:00:13:b8:aa
NIC 2 (External Network)     | MAC: 52:54:00:d1:93:28
For the overcloud we need one controller and one compute node. Create a qcow2 disk for each of them on your physical host machine.
[root@openstack images]# qemu-img create -f qcow2 -o preallocation=metadata controller0.qcow2 60G
Formatting 'controller0.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
[root@openstack images]# qemu-img create -f qcow2 -o preallocation=metadata compute1.qcow2 60G
Formatting 'compute1.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
[root@openstack images]# ls -lh
total 47G
-rw-r--r--. 1 root root 61G Oct  8 10:35 compute1.qcow2
-rw-r--r--. 1 root root 61G Oct  8 10:34 controller0.qcow2
-rw-------. 1 qemu qemu 81G Oct  8 10:35 director-new.qcow2
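Optionally, you can confirm the virtual size and preallocation settings of the disks you just created with qemu-img info (an extra check, not in the original steps):
[root@openstack images]# qemu-img info controller0.qcow2
[root@openstack images]# qemu-img info compute1.qcow2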
Change the ownership of the qcow2 disk to "qemu:qemu"
[root@openstack images]# chown qemu:qemu *
[root@openstack images]# ls -lh
total 47G
-rw-r--r--. 1 qemu qemu 61G Oct  8 10:35 compute1.qcow2
-rw-r--r--. 1 qemu qemu 61G Oct  8 10:34 controller0.qcow2
-rw-------. 1 qemu qemu 81G Oct  8 10:35 director-new.qcow2
Next install "virt-install" to be able to create a virtual machine using CLI.
[root@openstack images]# yum -y install virt-install
Here I am creating XML files for the two virtual machines, controller0 and compute1:
[root@openstack images]# virt-install --ram 8192 --vcpus 2 --os-variant rhel7 --disk path=/var/lib/libvirt/images/controller0.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external --name controller0 --cpu IvyBridge,+vmx --dry-run --print-xml > /tmp/controller0.xml
[root@openstack images]# virt-install --ram 8192 --vcpus 2 --os-variant rhel7 --disk path=/var/lib/libvirt/images/compute1.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external --name compute1 --cpu IvyBridge,+vmx --dry-run --print-xml > /tmp/compute1.xml
Validate the files we created above
[root@openstack images]# ls -l /tmp/*.xml
-rw-r--r--. 1 root root 1850 Oct  8 10:45 /tmp/compute1.xml
-rw-r--r--. 1 root root 1856 Oct  8 10:45 /tmp/controller0.xml
-rw-r--r--. 1 root root  207 Oct  7 15:52 /tmp/external.xml
-rw-r--r--. 1 root root  117 Oct  6 19:45 /tmp/provisioning.xml
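The external.xml and provisioning.xml files are the libvirt network definitions created in the previous article. If you want to double-check what they contain, virsh can list and dump them for you:
[root@openstack images]# virsh net-list --all
[root@openstack images]# virsh net-dumpxml provisioning
[root@openstack images]# virsh net-dumpxml external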
Now it is time to define those virtual machines:
[root@openstack images]# virsh define --file /tmp/controller0.xml
Domain controller0 defined from /tmp/controller0.xml
[root@openstack images]# virsh define --file /tmp/compute1.xml
Domain compute1 defined from /tmp/compute1.xml
Validate the virtual machines now defined on your host machine; we are running our undercloud director on the director-new VM.
[root@openstack images]# virsh list --all
Id Name State
----------------------------------------------------
6 director-new running
- compute1 shut off
- controller0 shut off
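You will need the MAC address of each node's provisioning NIC later for the node definition template; an easy way to look them up is virsh domiflist (in this setup the first interface listed is on the provisioning network):
[root@openstack images]# virsh domiflist controller0
[root@openstack images]# virsh domiflist compute1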
Configure Virtual Bare Metal Controller (VBMC)
The director can use virtual machines as overcloud nodes on a KVM host, controlling their power management through emulated IPMI devices. Since our lab setup runs on KVM and the virtual machines do not have iLO or a similar power-management controller, we will use the Virtual Bare Metal Controller (VBMC) to register the nodes. You can optionally grab the source from the OpenStack git repository, although below we install the package with yum.
[root@openstack ~]# wget https://git.openstack.org/openstack/virtualbmc
Next install the VBMC package
[root@openstack ~]# yum install -y python-virtualbmc
Start adding your virtual machines to the vbmc domain list
[root@openstack images]# vbmc add controller0 --port 6320 --username admin --password redhat
[root@openstack images]# vbmc add compute1 --port 6321 --username admin --password redhat
To list the available domains
[root@openstack images]# vbmc list
+-------------+--------+---------+------+
| Domain name | Status | Address | Port |
+-------------+--------+---------+------+
| compute1    | down   | ::      | 6321 |
| controller0 | down   | ::      | 6320 |
+-------------+--------+---------+------+
Next start all the virtual BMCs:
[root@openstack images]# vbmc start compute1
[root@openstack images]# vbmc start controller0
Check the status again
[root@openstack images]# vbmc list
+-------------+---------+---------+------+
| Domain name | Status  | Address | Port |
+-------------+---------+---------+------+
| compute1    | running | ::      | 6321 |
| controller0 | running | ::      | 6320 |
+-------------+---------+---------+------+
Now all our domains are in the running state. The director uses pxe_ipmitool as the driver for executing IPMI commands, so make sure it is loaded and available on your undercloud. The command-line utility to test the IPMI power emulation uses this syntax:
[root@director ~]# ipmitool -I lanplus -H 192.168.122.1 -L ADMINISTRATOR -p 6320 -U admin -R 3 -N 5 -P redhat power status
Chassis Power is off
[root@director ~]# ipmitool -I lanplus -H 192.168.122.1 -L ADMINISTRATOR -p 6321 -U admin -R 3 -N 5 -P redhat power status
Chassis Power is off
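If the power status commands fail, two quick checks (my suggestions, not part of the original article) are confirming that the pxe_ipmitool driver is actually enabled in ironic on the undercloud, and confirming that you can drive power transitions through VBMC:
[stack@director ~]$ openstack baremetal driver list
[root@director ~]# ipmitool -I lanplus -H 192.168.122.1 -p 6320 -U admin -P redhat power on
[root@director ~]# ipmitool -I lanplus -H 192.168.122.1 -p 6320 -U admin -P redhat power off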
Registering nodes for the overcloud
The director requires a node definition template, which you create manually. This file (instack-twonodes.json) is in JSON format and contains the hardware and power management details for your nodes.
[stack@director ~]$ cat instack-twonodes.json
{
    "nodes":[
        {
            "mac":[ "52:54:00:36:65:a6" ],
            "name":"controller0",
            "cpu":"2",
            "memory":"8192",
            "disk":"60",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_addr": "192.168.122.1",
            "pm_password": "redhat",
            "pm_port": "6320"
        },
        {
            "mac":[ "52:54:00:13:b8:aa" ],
            "name":"compute1",
            "cpu":"2",
            "memory":"8192",
            "disk":"60",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_addr": "192.168.122.1",
            "pm_password": "redhat",
            "pm_port": "6321"
        }
    ]
}
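Since a single typo in this file will make the import fail, it can help to run the JSON through a syntax check first; python's built-in json.tool is enough for that:
[stack@director ~]$ python -m json.tool instack-twonodes.json > /dev/null && echo "JSON OK"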
To deploy the overcloud in OpenStack, the next step is to register the nodes that will form the overcloud, which for us are a single controller and a single compute node. The Workflow service (mistral) manages this task set, which includes the ability to schedule and monitor multiple tasks and actions.
[stack@director ~]$ openstack baremetal import --json instack-twonodes.json
Started Mistral Workflow. Execution ID: 6ad7c642-275e-4293-988a-b84c28fd99c1
Successfully registered node UUID 633f53f7-7b3c-454a-8d39-bd9c4371d248
Successfully registered node UUID f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5
Started Mistral Workflow. Execution ID: 5989359f-3cad-43cb-9ea3-e86ebee87964
Successfully set all nodes to available.
Check the available ironic node list after the import
[stack@director ~]$ openstack baremetal node list
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name        | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| 633f53f7-7b3c-454a-8d39-bd9c4371d248 | controller0 | None          | power off   | available          | False       |
| f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5 | compute1    | None          | power off   | available          | False       |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
This assigns the bm-deploy-kernel and bm-deploy-ramdisk images to each node:
[stack@director ~]$ openstack baremetal configure boot
Set the provisioning state to manageable using this command:
[stack@director ~]$ for node in $(openstack baremetal node list -c UUID -f value) ; do openstack baremetal node manage $node ; done
The nodes are now registered and configured in the director. View a list of these nodes in the CLI:
[stack@director ~]$ openstack baremetal node list
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name        | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| 633f53f7-7b3c-454a-8d39-bd9c4371d248 | controller0 | None          | power off   | manageable         | False       |
| f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5 | compute1    | None          | power off   | manageable         | False       |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
In the following output, verify that deploy_kernel and deploy_ramdisk are assigned to the new nodes.
[stack@director ~]$ for i in controller0 compute1 ; do ironic node-show $i | grep -1 deploy; done
| driver      | pxe_ipmitool                                                           |
| driver_info | {u'ipmi_port': u'6320', u'ipmi_username': u'admin', u'deploy_kernel':  |
|             | u'3b73c55b-6184-41df-a6e5-9a56cfb73238', u'ipmi_address':              |
|             | u'192.168.122.1', u'deploy_ramdisk': u'9624b338-cb5f-                  |
|             | 45e0-b0f4-3fe78f0f3f45', u'ipmi_password': u'******'}                  |
| driver      | pxe_ipmitool                                                           |
| driver_info | {u'ipmi_port': u'6321', u'ipmi_username': u'admin', u'deploy_kernel':  |
|             | u'3b73c55b-6184-41df-a6e5-9a56cfb73238', u'ipmi_address':              |
|             | u'192.168.122.1', u'deploy_ramdisk': u'9624b338-cb5f-                  |
|             | 45e0-b0f4-3fe78f0f3f45', u'ipmi_password': u'******'}                  |
Inspecting the hardware of nodes
The director can run an introspection process on each node. This process causes each node to boot an introspection agent over PXE. This agent collects hardware data from the node and sends it back to the director. The director then stores this introspection data in the OpenStack Object Storage (swift) service running on the director. The director uses hardware information for various purposes such as profile tagging, benchmarking, and manual root disk assignment.
Unlike on physical hardware, we cannot use bulk introspection with the openstack overcloud node introspect --all-manageable --provide command, because we initiate power on and off for the virtual machines using a port number rather than an IP address. So instead we introspect each node individually:
[stack@director ~]$ for node in $(openstack baremetal node list -c UUID -f value) ; do openstack overcloud node introspect $node --provide; done
Started Mistral Workflow. Execution ID: 123c4290-82ba-4766-8fdc-65878eac03ac
Waiting for introspection to finish...
Successfully introspected all nodes.
Introspection completed.
Started Mistral Workflow. Execution ID: 5b6009a1-855a-492b-9196-9c0291913d2f
Successfully set all nodes to available.
Started Mistral Workflow. Execution ID: 7f9a5d65-c94a-496d-afe2-e649a85d5912
Waiting for introspection to finish...
Successfully introspected all nodes.
Introspection completed.
Started Mistral Workflow. Execution ID: ffb4a0c5-3090-4d88-b407-2a8e06035485
Successfully set all nodes to available.
Monitor the progress of the introspection using the following command in a separate terminal window:
[stack@director ~]$ sudo journalctl -l -u openstack-ironic-inspector -u openstack-ironicinspector-dnsmasq -u openstack-ironic-conductor -f
Check the introspection status
[stack@director ~]$ for node in $(openstack baremetal node list -c UUID -f value) ; do echo -e "\n"$node; openstack baremetal introspection status $node; done

633f53f7-7b3c-454a-8d39-bd9c4371d248
+----------+-------+
| Field    | Value |
+----------+-------+
| error    | None  |
| finished | True  |
+----------+-------+

f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5
+----------+-------+
| Field    | Value |
+----------+-------+
| error    | None  |
| finished | True  |
+----------+-------+
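Depending on your python-ironic-inspector-client version, you may also be able to dump the stored introspection data for a node directly, without going through swift as shown in the next section; this is an optional shortcut, not the method used in this article:
[stack@director ~]$ openstack baremetal introspection data save 633f53f7-7b3c-454a-8d39-bd9c4371d248 | jq .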
Collect the introspection data for controller
You can check the introspection data which was collected for individual nodes. In this example I will show you the steps to get this information for the controller node
[stack@director ~]$ openstack baremetal node show controller0
+------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+
| clean_step | {} |
| console_enabled | False |
| created_at | 2018-10-08T04:55:22+00:00 |
| driver | pxe_ipmitool |
| driver_info | {u'ipmi_port': u'6320', u'ipmi_username': u'admin', u'deploy_kernel': u'3b73c55b-6184-41df-a6e5-9a56cfb73238', u'ipmi_address': |
| | u'192.168.122.1', u'deploy_ramdisk': u'9624b338-cb5f-45e0-b0f4-3fe78f0f3f45', u'ipmi_password': u'******'} |
| driver_internal_info | {} |
| extra | {u'hardware_swift_object': u'extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248'} |
| inspection_finished_at | None |
| inspection_started_at | None |
| instance_info | {} |
| instance_uuid | None |
| last_error | None |
| maintenance | False |
| maintenance_reason | None |
| name | controller0 |
| ports | [{u'href': u'http://192.168.126.2:13385/v1/nodes/633f53f7-7b3c-454a-8d39-bd9c4371d248/ports', u'rel': u'self'}, {u'href': |
| | u'http://192.168.126.2:13385/nodes/633f53f7-7b3c-454a-8d39-bd9c4371d248/ports', u'rel': u'bookmark'}] |
| power_state | power off |
| properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'59', u'cpus': u'2', u'capabilities': |
| | u'cpu_vt:true,cpu_aes:true,cpu_hugepages:true,boot_option:local'} |
| provision_state | available |
| provision_updated_at | 2018-10-08T05:00:44+00:00 |
| raid_config | {} |
| reservation | None |
| states | [{u'href': u'http://192.168.126.2:13385/v1/nodes/633f53f7-7b3c-454a-8d39-bd9c4371d248/states', u'rel': u'self'}, {u'href': |
| | u'http://192.168.126.2:13385/nodes/633f53f7-7b3c-454a-8d39-bd9c4371d248/states', u'rel': u'bookmark'}] |
| target_power_state | None |
| target_provision_state | None |
| target_raid_config | {} |
| updated_at | 2018-10-08T05:00:51+00:00 |
| uuid | 633f53f7-7b3c-454a-8d39-bd9c4371d248 |
+------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+
Store the ironic user password from the undercloud-passwords.conf file:
[stack@director ~]$ grep ironic undercloud-passwords.conf
undercloud_ironic_password=f670269d38916530ac00e5f1af6bf8e39619a9f5
Here, use the ironic password as OS_PASSWORD and, as the object name, the hardware_swift_object value from the extra field in the node output above:
[stack@director ~]$ OS_TENANT_NAME=service OS_USERNAME=ironic OS_PASSWORD=f670269d38916530ac00e5f1af6bf8e39619a9f5 openstack object save ironic-inspector extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248
Check that the object was saved to the local directory:
[stack@director ~]$ ls -l
total 36
-rw-rw-r--. 1 stack stack 9013 Oct 8 10:34 extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248
drwxrwxr-x. 2 stack stack 245 Oct 8 10:14 images
-rw-rw-r--. 1 stack stack 836 Oct 8 10:25 instack-twonodes.json
-rw-------. 1 stack stack 725 Oct 8 09:51 stackrc
-rw-r--r--. 1 stack stack 11150 Oct 8 09:05 undercloud.conf
-rw-rw-r--. 1 stack stack 1650 Oct 8 09:33 undercloud-passwords.conf
Now you can read the data using the command below:
[stack@director ~]$ jq . < extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248
[
[
"disk",
"logical",
"count",
"1"
],
[
"disk",
"vda",
"size",
"64"
],
[
"disk",
"vda",
"vendor",
"0x1af4"
],
*** output trimmed ***
[
"system",
"kernel",
"version",
"3.10.0-862.11.6.el7.x86_64"
],
[
"system",
"kernel",
"arch",
"x86_64"
],
[
"system",
"kernel",
"cmdline",
"ipa-inspection-callback-url=http://192.168.126.1:5050/v1/continue ipa-inspection-collectors=default,extra-hardware,numa-topology,logs systemd.journald.forward_to_console=yes BOOTIF=52:54:00:36:65:a6 ipa-debug=1 ipa-inspection-dhcp-all-interfaces=1 ipa-collect-lldp=1 initrd=agent.ramdisk"
]
]
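Because the saved object is just a JSON array of [category, item, key, value] entries, you can filter it with jq; for example, to see only the disk-related facts:
[stack@director ~]$ jq '[.[] | select(.[0] == "disk")]' extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248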
Tagging nodes to profiles
After registering and inspecting the hardware of each node, you tag the nodes into specific profiles. These profile tags match your nodes to flavors, and the flavors in turn are assigned to deployment roles. The following output shows the relationship across roles, flavors, profiles, and nodes:
[stack@director ~]$ openstack flavor list
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
| ID                                   | Name          | RAM  | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
| 06ab97b9-6d7e-4d4d-8d6e-c2ba1e781657 | baremetal     | 4096 | 40   | 0         | 1     | True      |
| 17eec9b0-811d-4ff0-a028-29e7ff748654 | block-storage | 4096 | 40   | 0         | 1     | True      |
| 38cbb6df-4852-49d0-bbed-0bddee5173c8 | compute       | 4096 | 40   | 0         | 1     | True      |
| 88345a7e-f617-4514-9aac-0d794a32ee80 | ceph-storage  | 4096 | 40   | 0         | 1     | True      |
| dce1c321-32bb-4abf-bfd5-08f952529550 | swift-storage | 4096 | 40   | 0         | 1     | True      |
| febf52e2-5707-43b3-8f3a-069a957828fb | control       | 4096 | 40   | 0         | 1     | True      |
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
[stack@director ~]$ openstack baremetal node list
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name        | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| 633f53f7-7b3c-454a-8d39-bd9c4371d248 | controller0 | None          | power off   | available          | False       |
| f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5 | compute1    | None          | power off   | available          | False       |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
The addition of the profile:compute and profile:control options tags the two nodes into their respective profiles. These commands also set the boot_option:local parameter, which defines the boot mode for each node.
[stack@director ~]$ openstack baremetal node set --property capabilities='profile:control,boot_option:local' 633f53f7-7b3c-454a-8d39-bd9c4371d248
[stack@director ~]$ openstack baremetal node set --property capabilities='profile:compute,boot_option:local' f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5
After completing node tagging, check the assigned profiles or possible profiles:
[stack@director ~]$ openstack overcloud profiles list
+--------------------------------------+-------------+-----------------+-----------------+-------------------+
| Node UUID                            | Node Name   | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+-------------+-----------------+-----------------+-------------------+
| 633f53f7-7b3c-454a-8d39-bd9c4371d248 | controller0 | available       | control         |                   |
| f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5 | compute1    | available       | compute         |                   |
+--------------------------------------+-------------+-----------------+-----------------+-------------------+
You can also check the flavors to confirm they carry the same profiles that we assigned to the ironic nodes:
[stack@director ~]$ openstack flavor show control -c properties
+------------+------------------------------------------------------------------+
| Field      | Value                                                            |
+------------+------------------------------------------------------------------+
| properties | capabilities:boot_option='local', capabilities:profile='control' |
+------------+------------------------------------------------------------------+
[stack@director ~]$ openstack flavor show compute -c properties
+------------+------------------------------------------------------------------+
| Field      | Value                                                            |
+------------+------------------------------------------------------------------+
| properties | capabilities:boot_option='local', capabilities:profile='compute' |
+------------+------------------------------------------------------------------+
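If a flavor were ever missing its profile capability (for example on a rebuilt undercloud), you could set it yourself; this is an illustrative command only, since in this deployment the control and compute flavors already carry the right properties:
[stack@director ~]$ openstack flavor set --property "capabilities:boot_option"="local" --property "capabilities:profile"="control" control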
Deploying the Overcloud
So now the final stage of deploying the overcloud in the OpenStack environment is to run the openstack overcloud deploy command.
[stack@director ~]$ openstack overcloud deploy --templates --control-scale 1 --compute-scale 1 --neutron-tunnel-types vxlan --neutron-network-type vxlan
Removing the current plan files
Uploading new plan files
Started Mistral Workflow. Execution ID: 5dd005ed-67c8-4cef-8d16-c196fc852051
Plan updated
Deploying templates in the directory /tmp/tripleoclient-LDQ2md/tripleo-heat-templates
Started Mistral Workflow. Execution ID: 23e8f1b0-6e4c-444b-9890-d48fef1a96a6
2018-10-08 17:11:42Z [overcloud]: CREATE_IN_PROGRESS Stack CREATE started
2018-10-08 17:11:42Z [overcloud.ServiceNetMap]: CREATE_IN_PROGRESS state changed
2018-10-08 17:11:43Z [overcloud.HorizonSecret]: CREATE_IN_PROGRESS state changed
2018-10-08 17:11:43Z [overcloud.ServiceNetMap]: CREATE_IN_PROGRESS Stack CREATE started
2018-10-08 17:11:43Z [overcloud.ServiceNetMap.ServiceNetMapValue]: CREATE_IN_PROGRESS state changed
2018-10-08 17:11:43Z [overcloud.Networks]: CREATE_IN_PROGRESS state changed
*** Output Trimmed ***
2018-10-08 17:53:25Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_IN_PROGRESS state changed
2018-10-08 17:54:22Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_COMPLETE state changed
2018-10-08 17:54:22Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE Stack CREATE completed successfully
2018-10-08 17:54:23Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE state changed
2018-10-08 17:54:23Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE Stack CREATE completed successfully
2018-10-08 17:54:24Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE state changed
2018-10-08 17:54:24Z [overcloud]: CREATE_COMPLETE Stack CREATE completed successfully
Stack overcloud CREATE_COMPLETE
Overcloud Endpoint: http://192.168.126.107:5000/v2.0
So our overcloud deployment is complete at this stage. Check the stack status
[stack@director ~]$ openstack stack list
+--------------------------------------+------------+-----------------+----------------------+--------------+
| ID | Stack Name | Stack Status | Creation Time | Updated Time |
+--------------------------------------+------------+-----------------+----------------------+--------------+
| 952eeb74-0c29-4cdc-913c-5d834c8ad6c5 | overcloud | CREATE_COMPLETE | 2018-10-08T17:11:41Z | None |
+--------------------------------------+------------+-----------------+----------------------+--------------+
To get the list of overcloud nodes
[stack@director ~]$ nova list
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
| ID                                   | Name                   | Status | Task State | Power State | Networks                 |
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
| 9a8307e3-7e53-44f8-a77b-7e0115ac75aa | overcloud-compute-0    | ACTIVE | -          | Running     | ctlplane=192.168.126.112 |
| 3667b67f-802f-4c13-ba86-150576cd2b16 | overcloud-controller-0 | ACTIVE | -          | Running     | ctlplane=192.168.126.113 |
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
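The overcloud nodes are reachable over the ctlplane network from the director as the heat-admin user, which is handy for checking services directly on a node; the IP below is the compute node from the listing above:
[stack@director ~]$ ssh heat-admin@192.168.126.112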
You can get your horizon dashboard credentials from the overcloudrc file available in the home directory of the stack user (/home/stack):
[stack@director ~]$ cat overcloudrc
# Clear any old environment that may conflict.
for key in $( set | awk '{FS="="}  /^OS_/ {print $1}' ); do unset $key ; done
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export NOVA_VERSION=1.1
export OS_PROJECT_NAME=admin
export OS_PASSWORD=tZQDQsbGG96t4KcXYfAM22BzN
export OS_NO_CACHE=True
export COMPUTE_API_VERSION=1.1
export no_proxy=,192.168.126.107,192.168.126.107
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://192.168.126.107:5000/v2.0
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
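Before moving to the dashboard, you can source this file and run a quick sanity check against the overcloud APIs; openstack service list is one simple option:
[stack@director ~]$ source ~/overcloudrc
[stack@director ~]$ openstack service list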
So you can log in to your horizon dashboard at 192.168.126.107 using the OS_USERNAME and OS_PASSWORD values from the overcloudrc file.
Lastly, I hope the steps in this article to configure the TripleO undercloud and deploy the overcloud in OpenStack were helpful. Let me know your suggestions and feedback in the comment section.
Hi, I am getting an error on `TASK [Ensure system is NTP time synced]`. How can I fix this? Thank you in advance.
Thanks for posting this article. Looking for some information on the ramdisk, please provide pointers.
Hi, it's really nice. Thanks for sharing.
I followed along with the article but failed at node registration, to be more precise during the VBMC setup.
1) Why do we need to do this ==>[root@openstack ~]# wget https://git.openstack.org/openstack/virtualbmc
And then yum install ?
2) Where do we need to install vbmc, on my kvm physical machine or on undercloud?
3) My openstack baremetal driver list shows empty output and hence node registration is failing.
1. This is explained under Configure VBMC
2. In my setup I have the entire environment running on one physical server under different VMs. You can check my lab environment for more details.
3. Are you able to connect your overcloud nodes using ipmitool?
1: I think it's optional, in case you have any problem with the yum install.
2: Install vbmc on the KVM physical machine where you also have the virsh tool.
3: There is a chance you have a network problem, which you can check with 'ipmitool'. If you can turn the nodes on/off with it, then make sure your instack-twonodes.json file is correct.
I am trying to install OpenStack 15 but am getting some errors with the overcloud installation. Can you please help me? I have hardware for the setup.
Can you please share more details on the error and environment.