Helion OpenStack 1.1.1 Ceph Installation and Configuration

More and more customers are choosing Ceph as the open and unified storage solution for their OpenStack-based IaaS platforms. One of Ceph's key advantages is that it provides both object storage (think Dropbox or Google Drive style use cases) and block storage (LUNs, virtual drives) from a single platform. In theory this reduces the number of products a customer needs to manage and train their staff on.

A few limitations are worth noting. Ceph currently lacks production-grade asynchronous replication, which matters if you intend to span a cluster across large distances. Its object storage also lacks many features available in dedicated products such as Swift, so be sure the business requirements are clearly identified before committing.

All that said, if multi-region support isn't required and security is not a primary concern, Ceph is a very performant storage solution.

[Figure 1: HOS Ceph Communication]

Ceph Installation

 
The procedure used to complete the Ceph installation with HOS is available here: http://docs.hpcloud.com/#commercial/GA1/ceph/1.1commercial.ceph-automated-install.html

The following configuration files were used in conjunction with the above procedure to complete a successful installation:

Server.json file

Modify /server/server.json to include the OpenStack credentials from the undercloud (stackrc), the network ID, and the keypair from Helion OpenStack.

{
    "authentication": {
        "HOST": "172.16.0.131",
        "PORT": "8085",
        "OS_VERSION": "2",
        "OS_USER": "admin",
        "OS_PASSWORD": "password",
        "OS_TENANT_NAME": "admin",
        "OS_AUTH_URL": "http://172.16.0.135:5000/v2.0",
        "keypair": "cephadmin",
        "netid": "2ff3d218-86a9-4875-9d38-5f7860c581b1"
    }
}
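
The keypair name and network ID referenced above can be created and looked up with the standard OpenStack CLI clients before editing the file. A minimal sketch, assuming the stackrc credentials file described above and the nova and neutron clients are available (the keypair and PEM file names are simply the ones used in this example):

# Load the credentials referenced in server.json
source ~/stackrc

# Create the keypair referenced by "keypair"
nova keypair-add cephadmin > cephadmin.pem
chmod 600 cephadmin.pem

# List networks to find the UUID to place in "netid"
neutron net-list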

Orchestration.json file

This file defines the orchestration settings, Ironic parameters, and undercloud flavors for the physical Ceph nodes.

{
    "authentication": {
        "ws_url": "http://172.16.0.131:8085/"
    },

    "api": {
        "imagepath": "/helion-ceph/images/",
        "deploy-image-prefix": "bm-deploy"
    },

    "orchestration": {
        "hypervisorsleepduration": "300",
        "hypervisorsmoniteringfrequency": "10",
        "bootsleepduration": "1200",
        "bootinitialwaitduration": "30",
        "hypervisortype": "baremetal",
        "hypervisordriver": "ironic",
        "bootmoniteringfrequency": "5",
        "destinationpath": "/helion-ceph/"
    },

    "flavor": {
        "001": {
            "ram": "163840",
            "vcpus": "2",
            "disk": "275",
            "architecture": "x86_64",
            "version": "001"
        },
        "002": {
            "ram": "65536",
            "vcpus": "12",
            "disk": "900",
            "architecture": "x86_64",
            "version": "001"
        }
    },

    "ironic": {
        "batchsize": "1",
        "driver": "pxe_ipmitool",
        "cpu_arch": "x86_64",
        "pxe_root_gb": "4"
    },

    "logger": {
        "filename": "orchestration.log",
        "filemode": "w",
        "level": 20,
        "format": "%(levelname)s:%(asctime)s:%(message)s"
    }
}
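
Once the installer has consumed this file, the two flavors and the enrolled nodes should be visible from the undercloud. A quick sanity check, assuming the standard nova and ironic CLI clients on the seed:

source ~/stackrc

# The "001" and "002" flavors defined above should appear here
nova flavor-list

# The bare metal nodes registered with the pxe_ipmitool driver should appear here
ironic node-list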

Baremetal.csv file

As with the bare metal file in the previous HOS section, this file defines the characteristics of the physical nodes that the installer will integrate: one row per node listing its MAC address, IPMI credentials and address, and its CPU, memory (MB), and disk sizing.

root@hLinux:/helion-ceph/node-provisioner/client# cat baremetal.csv
55:b9:22:92:c0:23,Administrator,password,10.99.10.12,12,65536,1637
55:b9:22:92:c2:cc,Administrator,password,10.99.10.13,12,65536,1637
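
Before running the installer it is worth confirming that the IPMI details in each row are actually reachable. A minimal check with ipmitool, using the values from the first row above:

# Query power status over IPMI (credentials and address from the first CSV row)
ipmitool -I lanplus -H 10.99.10.12 -U Administrator -P password chassis power status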

Ceph Configuration

The procedure used to complete the Ceph configuration with HOS is available here:
http://docs.hpcloud.com/#commercial/GA1/ceph/1.1commercial.ceph-cluster-client-node-configuration-ansible.html

The following configuration files were used in conjunction with the above procedure to complete a successful configuration:

cephcluster.csv file

This is the cluster definition file, mapping each node to its roles and, for the OSD entries, its data and journal disks.

hlinux@c1admin-overcloud-ceph-admin:/helion-ceph/cephconfiguration/ansible-playbooks$ cat cephcluster.csv
172.16.0.165,mon-master-1,mon-master,hlinux
172.16.0.165,admin-1,admin,hlinux
172.16.0.165,ceph-osd-1,osd,hlinux,xfs,/dev/sdb,xfs,/dev/sde5
172.16.0.165,ceph-osd-2,osd,hlinux,xfs,/dev/sdc,xfs,/dev/sde6
172.16.0.165,ceph-osd-3,osd,hlinux,xfs,/dev/sdd,xfs,/dev/sde7
172.16.0.164,ceph-osd-4,osd,hlinux,xfs,/dev/sdb,xfs,/dev/sde5
172.16.0.164,ceph-osd-5,osd,hlinux,xfs,/dev/sdc,xfs,/dev/sde6
172.16.0.164,ceph-osd-6,osd,hlinux,xfs,/dev/sdd,xfs,/dev/sde7
172.16.0.149,compute0,computes
172.16.0.150,compute1,computes
172.16.0.151,compute2,computes
172.16.0.141,controller0,controllers
172.16.0.146,controller1,controllers
172.16.0.145,controller2,controllers
172.16.0.131,seed0,seed

[Please note: the installation process appends a partition number to the journal device entries; for example, /dev/sdn becomes /dev/sdnX following the installation.]
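
The resulting journal partitions can be confirmed on each OSD node after the playbooks run; for example, against the shared journal device used above:

# Show the partitions created on the shared journal device
lsblk /dev/sde

# Or print its partition table
sudo parted /dev/sde print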

Ansible – /group_vars/all file

Defines environment variables for installation.

hlinux@c1admin-overcloud-ceph-admin:/helion-ceph/cephconfiguration/ansible-playbooks$ cat group_vars/all
---
# Variables here are available to all host groups
cephmon_user:   root                                           #Leave this value as is
cephmon_group:  root                                           #Leave this value as is
runrados:       0                                              # Set this to 0 if you do not have rados nodes, to 1 if you do
radosgwHA:      0                                              # Set this to 1 to run the rados gateway in HA mode, which requires a minimum of two rados nodes
secretuuid:     123456789123456789123456789           # This is the UUID used to set up the Helion nodes. Change it prior to running the ceph-client and ceph-admin roles if you wish to use a newly generated UUID; keeping the same UUID will also work.
clienttarname:  ceph_client_setup-0.80.7_h1.1.fix7_newdebs.tar # Set this to the tarball name used for the Helion client setup. Make sure the tarball has been copied into the roles/ceph-client/files folder
passthrough_path: "/helion-ceph/cephconfiguration/ansible-playbooks/roles/helion-seed/files/hp_ceph_passthrough"
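
If a freshly generated value is preferred for secretuuid (and likewise for the fsid in group_vars/ceph-cluster below), uuidgen produces one per invocation:

# Generate a new UUID to paste into the variable before running the roles
uuidgen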

Ansible – /group_vars/ceph-cluster

hlinux@c1admin-overcloud-ceph-admin:/helion-ceph/cephconfiguration/ansible-playbooks$ cat group_vars/ceph-cluster
---
# Variables here are applicable to the ceph-cluster host group
osd_journal_size: 10000
mon_master: 172.16.0.165
fsid: 123456789123456789123456789
fssize: 2048
env: baremetal
journal: 1
dependencies:
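
After the playbooks complete, cluster health can be verified from the monitor or admin node; for example:

# Overall cluster health, monitor quorum, and OSD counts
sudo ceph -s

# Per-OSD layout, which should match the OSD rows in cephcluster.csv above
sudo ceph osd tree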

Ceph Integration

The procedure used to complete the Ceph integration with HOS is available here:
http://docs.hpcloud.com/#commercial/GA1/ceph/1.1commercial.ceph-cluster-client-node-configuration-ansible.html

This process is automated through Ansible scripts. The manual integration steps are also documented here: http://docs.hpcloud.com/#commercial/GA1/ceph/1.1commercial.ceph-manual-install.html
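
Once the integration is in place, a quick smoke test confirms that block storage is landing on Ceph. A sketch, assuming the integration backs Cinder with an RBD pool (the pool name "volumes" here is an assumption; substitute whatever pool the integration configured):

# Create a 1 GB test volume through Cinder
cinder create --display-name ceph-test 1

# From a Ceph admin node, confirm a backing RBD image was created
# ("volumes" is an assumed pool name)
sudo rbd ls volumes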
 
 
