The flexible control plane (FCP) installation is a process used for proofs of concept (PoCs), evaluations and demonstrations. It is not intended for production use. It was developed to reduce the physical server count required for HOS 1.X from a minimum of 8 servers (excluding block storage) to 4 servers.
The same prerequisites apply to the seed host and the KVM hosts as apply to the standard seed host: configure packages, NTP and so on.
vm-plan file
Create a vm-plan file on the seed host:
,root,,[seedvm-IP],4,32768,,Undercloud,
,root,,[seedvm-IP],4,32768,,OvercloudControl,
,root,,[kvmhostA-IP],4,32768,,OvercloudControl,
,root,,[kvmhostB-IP],4,32768,,OvercloudControl,
,root,,[kvmhostA-IP],4,32768,,OvercloudSwiftStorage,
,root,,[kvmhostB-IP],4,32768,,OvercloudSwiftStorage,
Note: Field 1 is the BRIDGE_INTERFACE NIC on the remote host, for example "em59".
If left empty, it defaults to the same BRIDGE_INTERFACE value used on the seed host.
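A remote-host line with field 1 populated would therefore look like this (the interface name shown is illustrative only):
em59,root,,[kvmhostA-IP],4,32768,,OvercloudControl,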
overcloud-config.json file
Create an overcloud-config.json file on the seed host:
{
  "cloud_type": "KVM",
  "vsa_scale": 0,
  "vsa_ao_scale": 0,
  "so_swift_storage_scale": 0,
  "so_swift_proxy_scale": 0,
  "compute_scale": 2,
  "bridge_interface": "[e.g. em59 or eth7]",
  "virtual_interface": "eth0",
  "fixed_range_cidr": "172.0.100.0/24",
  "control_virtual_router_id": "117",
  "baremetal": {
    "network_seed_ip": "xx.xx.6.27",
    "network_cidr": "xx.xx.6.0/24",
    "network_gateway": "xx.xx.6.1",
    "network_seed_range_start": "xx.xx.6.28",
    "network_seed_range_end": "xx.xx.6.29",
    "network_undercloud_range_start": "xx.xx.6.30",
    "network_undercloud_range_end": "xx.xx.6.60"
  },
  "neutron": {
    "overcloud_public_interface": "vlanxx07",
    "public_interface_raw_device": "eth0",
    "undercloud_public_interface": "eth0"
  },
  "dns": {
    "seed_server": "8.8.8.8",
    "overcloud_server": "8.8.8.8",
    "undercloud_server": "8.8.8.8"
  },
  "ntp": {
    "overcloud_server": "8.8.8.123",
    "undercloud_server": "8.8.8.123",
    "seed_server": "8.8.8.123"
  },
  "floating_ip": {
    "start": "yy.yy.250.242",
    "end": "yy.yy.250.254",
    "cidr": "yy.yy.250.240/28"
  },
  "svc": {
    "interface": "vlanxx17",
    "interface_default_route": "xx.xx.5.129",
    "allocate_start": "xx.xx.5.130",
    "allocate_end": "xx.xx.5.158",
    "allocate_cidr": "xx.xx.5.128/27",
    "overcloud_bridge_mappings": "svcnet1:br-svc",
    "overcloud_flat_networks": "svcnet1",
    "customer_router_ip": "xx.xx.5.129"
  },
  "hypervisor": {
    "public_interface": "vlanxx07",
    "public_interface_raw_device": "eth0"
  }
}
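Before continuing, it is worth confirming that the file parses as valid JSON (smart quotes introduced by copy/paste are a common cause of failure). A quick check, assuming Python is available on the seed host:
#python -m json.tool overcloud-config.json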
Setup Passwordless Login
Best practice dictates that you should not enable passwordless SSH for the root account, but as this is an evaluation-only environment the root account was used.
#ssh-copy-id -i /root/.ssh/id_rsa.pub root@localhost
#ssh-copy-id -i /root/.ssh/id_rsa.pub root@[kvmhostA-IP]
#ssh-copy-id -i /root/.ssh/id_rsa.pub root@[kvmhostB-IP]
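If ssh-copy-id reports that no identity is available, generate a keypair on the seed host first and re-run the commands above; the second line below is a quick check that key-based login works (the host IP is a placeholder):
#ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
#ssh root@[kvmhostA-IP] hostname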
Baremetal Virtual Bridge
Identify the physical NIC that will be used to bridge between the VMs and the physical compute servers – for example, see em1 in this post.
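One way to identify the interface is to check its link state, assuming ethtool is installed (the interface name here is an example only):
#ip -o link show
#ethtool em59 | grep "Link detected"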
If the system running the installer and seed VM does not use eth0 as the external device name, determine the correct device name before running the next step on the seed host:
#export BRIDGE_INTERFACE=[e.g. em59]
On the seed host:
#export HP_VM_MODE=hybrid
#bash /root/tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh --local-setup --vm-plan vm-plan
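When the local setup finishes, the baremetal bridge should exist on the seed host with the physical interface attached to it. A quick check, assuming bridge-utils is installed:
#brctl show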
Copy hp_ced_host_manager.sh to each remote host.
#scp /root/tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh root@[kvmhostA-IP]:hp_ced_host_manager.sh
#scp /root/tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh root@[kvmhostB-IP]:hp_ced_host_manager.sh
Copy hp_ced_ensure_host_bridge.sh to each remote host.
#scp /root/tripleo/tripleo-incubator/scripts/hp_ced_ensure_host_bridge.sh root@[kvmhostA-IP]:hp_ced_ensure_host_bridge.sh
#scp /root/tripleo/tripleo-incubator/scripts/hp_ced_ensure_host_bridge.sh root@[kvmhostB-IP]:hp_ced_ensure_host_bridge.sh
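If there are more than two KVM hosts, a short loop avoids repeating the scp commands (a sketch; the host IPs are placeholders for your own):
#for h in [kvmhostA-IP] [kvmhostB-IP]; do scp /root/tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh /root/tripleo/tripleo-incubator/scripts/hp_ced_ensure_host_bridge.sh root@$h: ; done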
As root on each remote host, run:
#export BRIDGE_INTERFACE=[em59]
#bash -x ~root/hp_ced_host_manager.sh --remote-setup
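The same brctl show check can be repeated here; it is also worth confirming that each remote host still has connectivity after its interface was moved onto the bridge (the IP is a placeholder):
#ping -c 3 [seedhost-IP]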
Start the seed build process
#source /root/tripleo/tripleo-incubator/scripts/hp_ced_load_config.sh overcloud-config.json
#export BRIDGE_INTERFACE=[em59]
#export HP_VM_MODE=hybrid
#bash -x /root/tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh --create-seed --vm-plan tripleo/vm-plan 2>&1|tee seedvm.log
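When the script reports success, the new seed VM should answer on the baremetal network_seed_ip from overcloud-config.json (xx.xx.6.27 in this example); any problems are also captured in seedvm.log, so a rough scan of that file can help:
#ping -c 3 xx.xx.6.27
#grep -iE "error|fail" seedvm.log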
Once this has completed successfully, copy the overcloud-config.json file to the newly created seed VM.
#scp overcloud-config.json root@xx.xx.6.27:/
Start the undercloud & overcloud build process
Log in to the seed VM:
#ssh root@xx.xx.6.27
Edit the baremetal.csv file and add the details of the two new compute nodes.
For example:
00:aa:bb:cc:dd:77,root,undefined,xx.xx.6.24,4,32768,512,Undercloud,VM
00:aa:bb:cc:dd:84,root,undefined,xx.xx.6.24,4,32768,512,OvercloudControl,VM
00:aa:bb:cc:dd:16,root,undefined,xx.xx.6.25,4,32768,512,OvercloudControl,VM
00:aa:bb:cc:dd:d8,root,undefined,xx.xx.6.26,4,32768,512,OvercloudControl,VM
00:aa:bb:cc:dd:14,root,undefined,xx.xx.6.25,4,32768,512,OvercloudSwiftStorage,VM
00:aa:bb:cc:dd:19,root,undefined,xx.xx.6.26,4,32768,512,OvercloudSwiftStorage,VM
5c:aa:bb:cc:dd:e4,admin,password,xx.xx.3.63,24,524288,931,OvercloudCompute,IPMI
5c:aa:bb:cc:dd:1c,admin,password,xx.xx.3.64,24,524288,931,OvercloudCompute,IPMI
#source /root/tripleo/tripleo-incubator/scripts/hp_ced_load_config.sh /overcloud-config.json
#bash -x /root/tripleo/tripleo-incubator/scripts/hp_ced_installer.sh 2>&1|tee fcpbuild.log
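The installer output is captured in fcpbuild.log. A rough scan for problems after it finishes (some matches may be benign, so treat this only as a starting point):
#grep -iE "error|fail" fcpbuild.log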
--------- End of Installation Process ---------
Installation Notes
1. FCP mode does not support ProLiant Gen9 UEFI BIOS mode; ensure all Gen9 servers are set to Legacy boot mode.
2. Not all Gen9 PCIe NICs support PXE boot in Legacy boot mode; the management NICs must be able to PXE boot.
3. The prerequisites specify Ubuntu 14.04; however, the backup/restore procedure relies on features that were introduced in Ubuntu 14.10.