OpenStack Liberty CLI & Multi NAT Gateways

Customers always look for real-world production solutions, never the simple greenfield lab setup that we’ve all trained on – if we’re lucky! The current project is no exception, and though it does indeed deviate from the lab setup, I imagine the requirements are by no means unique: a single-region “MidScale” OpenStack HA control plane with both ESX and KVM hypervisors and, of course, not just one NAT gateway but multiple NAT gateways. If you’re still interested, read on – the environment has already been built, including the multiple NAT gateways [Note to self: possibly warrants another blog post]. VLANs are used for the tenant networks, with 3 dedicated network nodes set up with CVR – Centralised Virtual Routing – also known as Legacy Routing.

Overview

This blog details the steps required to build the following environment using only the OpenStack command line interface (CLI):

  • dev/test project
    • single /24 private tenant network
    • 2 x 3-tier application stacks: Web-1, App-1, DB-1 & Web-2, App-2, DB-2.
    • Security Groups to provide isolation between the two 3-tier applications.
  • production project
    • 3 x /24 private tenant networks – one for each application tier
    • 2 x 3-tier application stacks: Web-1, App-1, DB-1 & Web-2, App-2, DB-2, with the server VMs on their respective network tiers.
    • Security Groups to provide isolation between the two 3-tier applications.
  • remote servers project
    • This project will be used to host two different database servers that will be assigned floating IPs and used to represent external servers for access demonstrations.
  • SNAT for all servers
  • Multiple NAT Gateways or Multiple External Networks with Floating IPs.

The multi-hypervisor HOS 3.0 installation has already been successfully completed and its high-level architecture can be seen below.

[Figure: high-level architecture of the multi-hypervisor HOS 3.0 environment]

Procedure

  • Create 3 new projects – dev_test, production & remote
    • openstack project create --domain default --description "Dev-Test Project" dev_test
    • openstack project create --domain default --description "Production Project" production
    • openstack project create --domain default --description "Remote Servers for Demo Only" remote
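  • The tenant IDs used in the neutron commands further down come straight from the project list, so it’s worth noting them now – something like:
    • openstack project list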

  • Create two new users – Bart the Admin & Lisa the Operator
    • openstack user create --domain default --password homer bart
    • openstack user create --domain default --password homer lisa
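  • A quick check that both accounts exist:
    • openstack user list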

  • Add the admin role to the projects and bart
    • openstack role add --project remote --user bart admin
    • openstack role add --project production --user bart admin
    • openstack role add --project dev_test --user bart admin
  • Add the member role to the projects and lisa
    • openstack role add --project remote --user lisa _member_
    • openstack role add --project production --user lisa _member_
    • openstack role add --project dev_test --user lisa _member_
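  • Depending on the client version, the assignments can be sanity-checked per user with something like:
    • openstack role assignment list --user bart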

  • Create the new VLAN tenant networks using VLAN IDs 4000-4010 and associate them with the respective projects as outlined above
    • neutron net-create --tenant-id 244fbcbe8c444dacbfc6dbbbf89f744e --provider:physical_network physnet1 --provider:segmentation_id 4000 --provider:network_type vlan dev_test_net
    • neutron net-create --tenant-id cd506c50142d4a6e9618087ecbd5599f --provider:physical_network physnet1 --provider:segmentation_id 4001 --provider:network_type vlan prod_web_net
    • neutron net-create --tenant-id cd506c50142d4a6e9618087ecbd5599f --provider:physical_network physnet1 --provider:segmentation_id 4002 --provider:network_type vlan prod_app_net
    • neutron net-create --tenant-id cd506c50142d4a6e9618087ecbd5599f --provider:physical_network physnet1 --provider:segmentation_id 4003 --provider:network_type vlan prod_db_net
    • neutron net-create --tenant-id 3857ea74aa9e47b081d2781e9a661fa9 --provider:physical_network physnet1 --provider:segmentation_id 4004 --provider:network_type vlan remote_net
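  • To double-check the VLAN mapping on any of the new networks, net-show can be used, for example:
    • neutron net-show remote_net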

  • Create the subnets for each network
    • neutron subnet-create dev_test_net 172.17.175.0/24 --name subnet_dev_test --gateway 172.17.175.1 --allocation-pool start=172.17.175.20,end=172.17.175.250 --enable-dhcp
    • neutron subnet-create prod_web_net 172.17.176.0/24 --name subnet_prod_web --gateway 172.17.176.1 --allocation-pool start=172.17.176.20,end=172.17.176.250 --enable-dhcp
    • neutron subnet-create prod_app_net 172.17.177.0/24 --name subnet_prod_app --gateway 172.17.177.1 --allocation-pool start=172.17.177.20,end=172.17.177.250 --enable-dhcp
    • neutron subnet-create prod_db_net 172.17.178.0/24 --name subnet_prod_db --gateway 172.17.178.1 --allocation-pool start=172.17.178.20,end=172.17.178.250 --enable-dhcp
    • neutron subnet-create remote_net 172.17.179.0/24 --name subnet_remote --gateway 172.17.179.1 --allocation-pool start=172.17.179.20,end=172.17.179.250 --enable-dhcp
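  • A quick summary of the CIDRs and allocation pools just created should then be visible with:
    • neutron subnet-list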

  • Create the application-tier VMs as outlined earlier
    • nova flavor-list
    • nova image-list
    • neutron net-list
    • nova secgroup-list

  • Source the project admin rc file
    • nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=7271c4de-3f91-4a12-bb8c-45113610ce08 --security-group default dev-web-1
    • nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=7271c4de-3f91-4a12-bb8c-45113610ce08 --security-group default dev-app-1
    • nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=7271c4de-3f91-4a12-bb8c-45113610ce08 --security-group default dev-db-1
    • nova boot --flavor m1.tiny --image debian-vmware --nic net-id=7271c4de-3f91-4a12-bb8c-45113610ce08 --security-group default dev-web-2
    • nova boot --flavor m1.tiny --image debian-vmware --nic net-id=7271c4de-3f91-4a12-bb8c-45113610ce08 --security-group default dev-app-2
    • nova boot --flavor m1.tiny --image debian-vmware --nic net-id=7271c4de-3f91-4a12-bb8c-45113610ce08 --security-group default dev-db-2
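  • Once the boots complete, nova list should show all six instances, and as admin the hypervisor placement (ESX or KVM) should be visible in the instance details, for example:
    • nova list
    • nova show dev-web-2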

  • Example of how to access the console of an instance [both VMware and KVM]
    • nova get-vnc-console dev-app-1 novnc
    • nova get-vnc-console dev-web-1 novnc
    • nova get-vnc-console dev-db-1 novnc
    • nova get-vnc-console dev-app-2 novnc
    • nova get-vnc-console dev-web-2 novnc
    • nova get-vnc-console dev-db-2 novnc

  • Add the SNAT routers for the servers (HA is set to False below as I’m constrained by spare VLAN IDs)
    • neutron router-create --tenant-id 244fbcbe8c444dacbfc6dbbbf89f744e --distributed False --ha False db-router
    • neutron router-create --tenant-id 244fbcbe8c444dacbfc6dbbbf89f744e --distributed False --ha False app-router
    • neutron router-create --tenant-id 244fbcbe8c444dacbfc6dbbbf89f744e --distributed False --ha False web-router
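  • The three routers should now be listed as ACTIVE:
    • neutron router-list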

  • Set the router external gateway
    • neutron router-gateway-set app-router NGOne
    • neutron router-gateway-set web-router NGTwo
    • neutron router-gateway-set db-router NGPhysical2
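  • The external gateway details each router picked up (including its SNAT address) can be verified with something like:
    • neutron router-show web-router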

  • Plug the required subnet into the router (the arguments are the router UUID followed by the subnet UUID)
    • neutron router-interface-add d33d1ca0-e25c-4972-83b8-c7f1b62a4156 167f394a-ac53-432b-8077-169b0389c725

  • The default gateway port will already have been consumed by the first router on the network. We need to create two new ports on the subnet and attach them to the second and third routers.
    • neutron port-create --tenant-id 244fbcbe8c444dacbfc6dbbbf89f744e --name NGTwo-if dev_test_net
    • neutron port-create --tenant-id 244fbcbe8c444dacbfc6dbbbf89f744e --name NGPhysical2 dev_test_net
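  • The port UUIDs used in the next step come from the port-create output; if they weren’t noted down, something like this will recover them:
    • neutron port-show NGTwo-if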

  • Now we can associate the new interface port with the router
    • neutron router-interface-add 8e7c9b58-1f25-4f9e-8a8d-7335149512de port=e3df0352-ebf8-4869-8ee5-44c6fc866a31
    • neutron router-interface-add 019439b1-c525-4047-b9d5-5c6cf0cca24d port=bf8832a1-d13b-4416-965b-3e09137452c6
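  • Each router’s interfaces can then be listed to confirm the plumbing, for example:
    • neutron router-port-list web-router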

  • The dev_test project should now look something like this:

[Figure: dev_test project network topology]

  • The 3 different NAT-Gateways can provide both SNAT and DNAT functionality. Security Groups will be used to provide isolation between the servers and the gateways.
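  • For the DNAT side, a floating IP would be allocated from one of the external networks and associated with an instance’s neutron port, roughly as follows (the IDs are placeholders):
    • neutron floatingip-create NGOne
    • neutron floatingip-associate <floatingip-id> <port-id>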

  • The default security group permits traffic between virtual machines on the same tenant network. In this scenario we’re using the KVM-based VM dev-app-1 and will show its isolation from the ESX-based VM dev-app-2.

  • The following Security Group will provide isolation between these two servers.
    • neutron security-group-create --tenant-id 244fbcbe8c444dacbfc6dbbbf89f744e AppGroup-1 --description "Allow traffic flow within application group 1 only"
    • neutron security-group-rule-create --tenant-id 244fbcbe8c444dacbfc6dbbbf89f744e --direction ingress --ethertype IPv4 --remote-group-id 2a659276-9797-45c5-b9f7-b02cc4470147 AppGroup-1
    • nova --os-tenant-id 244fbcbe8c444dacbfc6dbbbf89f744e add-secgroup dev-app-1 AppGroup-1
  • Remove the already applied “default” security group
    • nova --os-tenant-id 244fbcbe8c444dacbfc6dbbbf89f744e remove-secgroup dev-app-1 default
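  • The instance’s current security group membership can be confirmed with:
    • nova --os-tenant-id 244fbcbe8c444dacbfc6dbbbf89f744e list-secgroup dev-app-1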

  • Ping now fails between dev-app-1 and dev-app-2.

  • Applying the new security group to dev-app-2 will enable targeted group communication
    • nova --os-tenant-id 244fbcbe8c444dacbfc6dbbbf89f744e add-secgroup dev-app-2 AppGroup-1

This is obviously a very basic demonstration of security groups – it’s just a matter of clearly defining the communication requirements and translating them into rules.

As for the creation of the remaining two projects, production & remote, the process is exactly the same and you should end up with these additional topologies when complete.

[Figures: production and remote project network topologies]
