Ceph for Cinder and Glance in OpenStack Juno Part 1 – Building the Ceph Cluster

The goal of this Ceph / OpenStack integration series is to give you a step-by-step guide on how to build a Ceph cluster and integrate it with your OpenStack environment for a POC, and to demonstrate how you can upload qcow2 images into RBD, convert them to raw, and import the raw images into Glance. This is by no means a production setup, as I do not have SSDs for the Ceph journals; the journals will therefore reside on the OSDs themselves.

Part 1 of this post series walks you through getting your RHEL 7.1 hosts ready for Ceph 1.2.3 and installing the Calamari web interface on the admin / ceph-mgmt node.

For this post series I have the following hardware/ software available:

For the OpenStack environment
1x Dell r720 (Controller)
6x Dell r720xd (Compute)
I am using RHEL OSP 6 on RHEL 7.1

For the Ceph environment
1x Dell r720 (ceph-mgmt/admin)
3x Dell r720xd (ceph-mon; in the real world these could also be VMs)
6x Dell r720xd (ceph-osd) with 24 disks each
I am using RHEL 7.1 with Ceph 1.2.3

1. My POC environment does not need a host-based firewall, so I will turn it off.
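On RHEL 7.1 this can be done with systemd; a minimal sketch:

```shell
# POC only: disable the host firewall on every Ceph node.
# In production you would instead open the Ceph ports in firewalld
# (6789/tcp for monitors, 6800-7300/tcp for OSDs).
systemctl stop firewalld
systemctl disable firewalld
```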

2. Create the correct network config for your 10GbE interface in /etc/sysconfig/network-scripts/ifcfg-<your_interface_name>. It is very important that your Ceph nodes are connected with at least one 10GbE NIC that is available across the cluster, and that MTU 9000 is set.
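An example ifcfg file; the interface name (p2p1) and addressing are hypothetical, so substitute your own:

```
DEVICE=p2p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.10.11
NETMASK=255.255.255.0
MTU=9000
NM_CONTROLLED=no
```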

3. Disable NetworkManager and set the legacy network service as the default.
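On RHEL 7 that amounts to something like:

```shell
# Hand interface management over from NetworkManager to the network service
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl enable network
systemctl restart network
```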

4. Disable SELinux, as Ceph doesn't fully support it yet.

5. If you do not have a DNS server, make sure each Ceph node has the correct host entries in /etc/hosts.
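The /etc/hosts entries on every node might look like this (host names and IP addresses are hypothetical; the remaining OSD nodes follow the same pattern):

```
192.168.10.10   ceph-mgmt.example.com   ceph-mgmt
192.168.10.21   ceph-mon01.example.com  ceph-mon01
192.168.10.22   ceph-mon02.example.com  ceph-mon02
192.168.10.23   ceph-mon03.example.com  ceph-mon03
192.168.10.31   ceph-osd01.example.com  ceph-osd01
192.168.10.32   ceph-osd02.example.com  ceph-osd02
```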

6. Register your hosts with RHN using the correct subscription and attach the required repositories.
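The general shape with subscription-manager is shown below; the pool ID is a placeholder, and since the exact Ceph/Calamari repository names depend on your entitlements, only the base RHEL 7 repo is shown:

```
subscription-manager register
subscription-manager attach --pool=<your_pool_id>
subscription-manager repos --enable=rhel-7-server-rpms
```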

7. Update the system and reboot.

8. Create a root SSH key on the ceph-mgmt host (which will also act as the admin node) and distribute the key to the monitors and OSDs in the cluster.
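A sketch of key generation and distribution; the host names are hypothetical, so substitute your own monitor and OSD nodes:

```shell
# Create a passwordless SSH key for root on ceph-mgmt (if one does not
# already exist) and push it to every node in the cluster.
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in ceph-mon01 ceph-mon02 ceph-mon03 \
            ceph-osd01 ceph-osd02 ceph-osd03 ceph-osd04 ceph-osd05 ceph-osd06; do
  ssh-copy-id root@"$host"
done
```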

9. Install chrony on all hosts and enable it; Ceph monitors are sensitive to clock drift, so the cluster needs consistent time.

10. Adjust the pid_max count on all the OSD servers. This is necessary because my hosts have more than 20 disks.
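Each OSD daemon runs many threads, so dense OSD nodes can exhaust the default pid_max of 32768. A sketch (4194303 is the kernel maximum; pick a value that fits your disk count):

```shell
# Raise the maximum number of process/thread IDs and apply it immediately
echo "kernel.pid_max = 4194303" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
```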

11. Now it's time to download the Ceph components from RHN (rhceph-1.2.3-rhel-7-x86_64.iso). Once downloaded, mount the ISO on your ceph-mgmt host and copy the installer files off it.

12. Install the ice_setup-*.rpm packages.
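Assuming the ISO was downloaded to /root, the mount-and-install steps look roughly like this (the mount point and the exact file layout on the ISO are assumptions, so adjust to what you find):

```shell
# Mount the Ceph ISO and install the ICE setup package from it
mkdir -p /mnt/ceph-iso
mount -o loop /root/rhceph-1.2.3-rhel-7-x86_64.iso /mnt/ceph-iso
cp /mnt/ceph-iso/ice_setup-*.rpm /root/
yum -y localinstall /root/ice_setup-*.rpm
```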

13. Before we configure or install any further Ceph components, we need to create a ceph-config folder which will store future Ceph configs.

14. We now launch the ICE server setup, which creates a cephdeploy.conf for us; this is needed for the Ceph deployment later on.
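A sketch of both steps, assuming the ISO contents are available at a hypothetical /mnt/ceph-iso mount point; ice_setup writes cephdeploy.conf into the current working directory, which is why we run it from the new ceph-config folder:

```shell
# Create the config folder and run the ICE setup from inside it
mkdir -p ~/ceph-config
cd ~/ceph-config
ice_setup -d /mnt/ceph-iso   # -d points at the directory holding the Ceph packages
```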

15. Update the ice_setup packages on the Calamari admin node.

16. Initialize the Calamari server. During the setup it will ask you for an admin email address and a password for user root (web login).

17. After a successful Calamari install you should be able to log in to the Calamari web server with root/your_password. (You won't see much yet.)
