Ceph for Cinder and Glance in OpenStack Juno Part 3 – Integrating Cinder and Glance with Ceph

Part 3 of this post series walks you through integrating Cinder and Glance with Ceph on RHEL OSP 6. Most of the instructions for what you need to do can be found in the upstream guide Block Devices and OpenStack.

1. Delete the default pools as you won’t need them

[root@ceph-mgmt ~]# cd ~/ceph-config
[root@ceph-mgmt ~]# ceph osd pool delete data data --yes-i-really-really-mean-it
[root@ceph-mgmt ~]# ceph osd pool delete metadata metadata --yes-i-really-really-mean-it
[root@ceph-mgmt ~]# ceph osd pool delete rbd rbd --yes-i-really-really-mean-it

2. Create new pools matching the upstream documentation.

[root@ceph-mgmt ~]# ceph osd pool create volumes 4096
[root@ceph-mgmt ~]# ceph osd pool create images 128
[root@ceph-mgmt ~]# ceph osd pool create backups 128
[root@ceph-mgmt ~]# ceph osd pool create vms 4096
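
Before moving on it is worth a quick sanity check that the pools exist and the placement groups settle into active+clean. These two commands are just my habit, not part of the upstream guide:

[root@ceph-mgmt ~]# ceph osd lspools    # the four new pools should be listed
[root@ceph-mgmt ~]# ceph -s             # wait until all PGs report active+clean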

3. As I am using a packstack-installed OpenStack, my controller runs glance-api, cinder-volume and cinder-backup. This means I need to install the following packages on my controller

[root@controller ~]# yum -y install python-rbd ceph 

4. The compute nodes need python-rbd and ceph as well, so the loop below installs them everywhere (re-running it on the controller is harmless)

[root@controller ~]# for i in controller compute1 compute2 compute3 compute4 compute5 compute6 compute7 compute8;do 
  ssh $i yum -y install python-rbd ceph; 
  done
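
A quick way to confirm the packages actually landed on every node (just a sanity check, not from the upstream guide):

[root@controller ~]# for i in controller compute1 compute2 compute3 compute4 compute5 compute6 compute7 compute8;do 
    echo "== $i =="; ssh $i rpm -q python-rbd ceph; 
  done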

5. As we have cephx enabled we need to create three new users: client.cinder (shared by Cinder and Nova), client.glance and client.cinder-backup

[root@ceph-mgmt ~]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[root@ceph-mgmt ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[root@ceph-mgmt ~]# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
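
You can double-check the users and their capabilities before handing the keys out. This is purely a verification step on top of the upstream instructions:

[root@ceph-mgmt ~]# ceph auth get client.cinder
[root@ceph-mgmt ~]# ceph auth get client.glance
[root@ceph-mgmt ~]# ceph auth get client.cinder-backup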

6. The controller needs the following keyrings: ceph.client.glance.keyring, ceph.client.cinder.keyring and ceph.client.cinder-backup.keyring

[root@ceph-mgmt ~]# ceph auth get-or-create client.glance | ssh controller sudo tee /etc/ceph/ceph.client.glance.keyring
[root@ceph-mgmt ~]# ssh controller sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
[root@ceph-mgmt ~]# ceph auth get-or-create client.cinder | ssh controller sudo tee /etc/ceph/ceph.client.cinder.keyring
[root@ceph-mgmt ~]# ssh controller sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
[root@ceph-mgmt ~]# ceph auth get-or-create client.cinder-backup | ssh controller sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
[root@ceph-mgmt ~]# ssh controller sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
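
Since the glance and cinder services read these keyrings as their own users, it is worth confirming the ownership stuck. A quick check from the management host:

[root@ceph-mgmt ~]# ssh controller 'ls -l /etc/ceph/*.keyring'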

7. For nova-compute you only need the client.cinder key. The loop copies both the keyring and the bare key; the bare key is used to create the libvirt secret in step 12

[root@ceph-mgmt ~]# for i in compute1 compute2 compute3 compute4 compute5 compute6 compute7 compute8;do 
    ceph auth get-or-create client.cinder | ssh $i sudo tee /etc/ceph/ceph.client.cinder.keyring    
    ceph auth get-key client.cinder | ssh $i tee client.cinder.key ; 
  done

8. On my installation Ceph complained that ceph.client.admin.keyring could not be found on the controller, so I copied it across as well to get things working

[root@ceph-mgmt ~]# scp ceph.client.admin.keyring controller:/etc/ceph/

9. Create the following /etc/ceph/ceph.conf for the controller. The line rbd default format = 2 is essential if you want to create layered images, and layered images are needed if you want to convert qcow2 images to raw in RBD.

[global]
fsid = cfdf7859-08a5-4fe4-9942-e19c6b522945
mon_initial_members = ceph-mon01, ceph-mon02, ceph-mon03
mon_host = 192.168.1.231,192.168.1.242,192.168.1.238
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_journal_size = 1000
osd_pool_default_size = 3
osd_pool_default_min_size = 2
osd_pool_default_pg_num = 128
osd_pool_default_pgp_num = 128

[client]
rbd cache = true
rbd cache writethrough until flush = true
rbd default format = 2


[client.glance]
keyring = /etc/ceph/ceph.client.glance.keyring

[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring

[client.cinder-backup]
keyring = /etc/ceph/ceph.client.cinder-backup.keyring
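
With the keyrings and ceph.conf in place you can test connectivity from the controller using the service users rather than the admin key. This is just my own sanity check; --id cinder tells the ceph CLI to authenticate as client.cinder, whose keyring path is defined in the [client.cinder] section above:

[root@controller ~]# ceph --id cinder -s
[root@controller ~]# ceph --id glance osd lspools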

10. Copy the Ceph config from the controller to the compute nodes and remove the [client.glance] and [client.cinder-backup] sections (a sed sketch for stripping them follows the loop below).

[root@controller ~]# for i in compute1 compute2 compute3 compute4 compute5 compute6 compute7 compute8;do 
   scp /etc/ceph/ceph.conf $i:/etc/ceph/
  done
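
Removing the two sections by hand on eight nodes gets old quickly. Below is a rough sed sketch that does it in the same style of loop; it assumes the sections are laid out exactly as in the ceph.conf above, i.e. each [client.*] section is followed by a blank line or the end of the file:

[root@controller ~]# for i in compute1 compute2 compute3 compute4 compute5 compute6 compute7 compute8;do 
    # strip the two keyring sections the compute nodes do not need
    ssh $i 'sed -i -e "/^\[client\.glance\]/,/^$/d" -e "/^\[client\.cinder-backup\]/,/^$/d" /etc/ceph/ceph.conf'
  done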

11. Add the admin socket line shown below to the [client] section of ceph.conf on the compute nodes, so the section ends up like this. You might also need to create the /var/run/ceph directory (see the loop after the snippet).

[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
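
Creating the socket directory on every compute node can be scripted as well. The qemu:qemu ownership is an assumption based on my RHEL setup, where the guests run as the qemu user; adjust it to whatever user owns your qemu-kvm processes:

[root@controller ~]# for i in compute1 compute2 compute3 compute4 compute5 compute6 compute7 compute8;do 
    # the socket is created by the librbd client inside qemu, so the
    # directory must be writable by that user (qemu on my nodes)
    ssh $i 'mkdir -p /var/run/ceph && chown qemu:qemu /var/run/ceph'
  done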

12. Next we need to add the secret key to libvirt. The UUID only needs to be generated on the first compute host and is then reused on the others; the rest of the commands need to run on all compute hosts.

[root@compute1 ~]# uuidgen
5b2a0b64-983d-4c5b-9d53-834a69c21489
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>5b2a0b64-983d-4c5b-9d53-834a69c21489</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 5b2a0b64-983d-4c5b-9d53-834a69c21489 created
sudo virsh secret-set-value --secret 5b2a0b64-983d-4c5b-9d53-834a69c21489 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
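
As a verification step, libvirt on each compute node should now know about the secret, and secret-get-value should print back the same base64 key you fed in:

sudo virsh secret-list
sudo virsh secret-get-value 5b2a0b64-983d-4c5b-9d53-834a69c21489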

13. On the controller edit /etc/glance/glance-api.conf and add the following lines. A complete glance-api.conf can be found here: glance-api.conf

[DEFAULT]
...
default_store = rbd
show_image_direct_url = True
...
[glance_store]
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
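
If you prefer not to hand-edit the file, the same settings can be pushed in with crudini. Treat this as an optional convenience, and check that crudini is actually installed on your controller first; it is not part of the upstream instructions:

[root@controller ~]# crudini --set /etc/glance/glance-api.conf DEFAULT default_store rbd
[root@controller ~]# crudini --set /etc/glance/glance-api.conf DEFAULT show_image_direct_url True
[root@controller ~]# crudini --set /etc/glance/glance-api.conf glance_store stores rbd
[root@controller ~]# crudini --set /etc/glance/glance-api.conf glance_store rbd_store_pool images
[root@controller ~]# crudini --set /etc/glance/glance-api.conf glance_store rbd_store_user glance
[root@controller ~]# crudini --set /etc/glance/glance-api.conf glance_store rbd_store_ceph_conf /etc/ceph/ceph.conf
[root@controller ~]# crudini --set /etc/glance/glance-api.conf glance_store rbd_store_chunk_size 8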

14. Now we can configure Cinder for Ceph/RBD. Add the following lines under the [ceph] section. A complete cinder.conf can be found here: cinder.conf

volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 5b2a0b64-983d-4c5b-9d53-834a69c21489

backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
rbd_user = cinder
rbd_secret_uuid = 5b2a0b64-983d-4c5b-9d53-834a69c21489

15. Also make sure that you enable ceph as a backend in the [DEFAULT] section of cinder.conf (the sketch below shows how the sections fit together)

enabled_backends=ceph
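
For orientation, here is roughly how the relevant pieces of my cinder.conf fit together once steps 14 and 15 are done. The [ceph] section header and the volume_backend_name line come from my own setup rather than the upstream snippet; volume_backend_name is what the volume type created in step 19 matches against, so the scheduler needs it to route requests to this backend:

[DEFAULT]
...
enabled_backends = ceph

[ceph]
volume_backend_name = ceph
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 5b2a0b64-983d-4c5b-9d53-834a69c21489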

16. Add the following config to nova.conf on the compute nodes

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 5b2a0b64-983d-4c5b-9d53-834a69c21489
inject_password = false
inject_key = false
inject_partition = -2

17. And ensure that the live migration flags in the [libvirt] section of nova.conf are set correctly

live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"

18. Restart the OpenStack services

[root@controller ~]# service openstack-glance-api restart
[root@controller ~]# for i in compute1 compute2 compute3 compute4 compute5 compute6 compute7 compute8;do 
     ssh $i service openstack-nova-compute restart; 
done
[root@controller ~]# service openstack-cinder-volume restart
[root@controller ~]# service openstack-cinder-backup restart

19. Create a new cinder type ceph

[root@controller ~(keystone_admin)]# cinder type-create ceph
[root@controller ~(keystone_admin)]# cinder type-key ceph set volume_backend_name=ceph
[root@controller ~(keystone_admin)]# cinder service-list
+------------------+------------------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |             Host             | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------------------------+------+---------+-------+----------------------------+-----------------+
|  cinder-backup   |   controller.fidelity.com    | nova | enabled |   up  | 2015-05-26T01:56:45.000000 |       None      |
| cinder-scheduler |   controller.fidelity.com    | nova | enabled |   up  | 2015-05-26T01:56:41.000000 |       None      |
|  cinder-volume   | controller.fidelity.com@ceph | nova | enabled |   up  | 2015-05-26T01:56:47.000000 |       None      |
|  cinder-volume   | controller.fidelity.com@lvm  | nova | enabled |   down| 2015-05-26T01:56:43.000000 |       None      |
+------------------+------------------------------+------+---------+-------+----------------------------+-----------------+

20. Test that you can create a Cinder volume in RBD

[root@controller ~(keystone_admin)]#  cinder create --volume-type ceph --display-name ceph-test 1
[root@controller ~(keystone_admin)]#  cinder list
[root@controller ~(keystone_admin)]#  rbd -p volumes ls
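
If the volume shows up, a similar check works for Nova: boot an instance once everything is restarted and its disk should appear in the vms pool, thanks to the images_type = rbd setting from step 16.

[root@controller ~(keystone_admin)]#  rbd -p vms ls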

If you see your volume here, you're all good and ready to go on to Part 4, uploading images to Glance via RBD.