Ceph for Cinder and Glance in OpenStack Juno Part 2 – Building the Ceph Cluster

Part 2 of this series walks you through building your Ceph cluster after you have performed the initial Ceph configuration from Part 1.

NOTE: Ceph 1.2.3 fails to install the Diamond package on minion nodes during state.highstate. You can find the bug report and fix here: Diamond bug. Thanks to my co-worker James for pointing that out.

1. We can now go ahead and create the cluster configuration. It is critical that all subsequent commands are run from the ceph-deploy working directory to get a successful installation.
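
A minimal sketch of this step, assuming a working directory of ~/ceph-config on the ceph-mgmt node and three hypothetical monitor hostnames (adjust to your environment):

    # Create a working directory for ceph-deploy and generate the
    # initial cluster configuration with the initial monitor members
    mkdir ~/ceph-config
    cd ~/ceph-config
    ceph-deploy new ceph-mon1 ceph-mon2 ceph-mon3

This writes ceph.conf, the monitor keyring, and a log file into the working directory, which is why all later commands have to be run from here.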

2. Deploy the Ceph monitors.
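
One way to do this with ceph-deploy, again with hypothetical hostnames; the --mon flag limits the install to the monitor packages (flag availability depends on your ceph-deploy version):

    # Install the monitor packages on the designated monitor nodes
    ceph-deploy install --mon ceph-mon1 ceph-mon2 ceph-mon3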

3. As your ceph-mgmt node will be the admin node, it needs the ceph and ceph-common packages installed.
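
On a RHEL/CentOS based management node (as set up in Part 1) this is a plain package install:

    # The admin node needs the Ceph CLI tools and libraries
    sudo yum install -y ceph ceph-common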

4. Next we can install the Ceph software on the nodes. Make sure you have followed the initial setup from Part 1!
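
For example, for the OSD nodes (hostnames are again hypothetical):

    # Install the Ceph packages on the remaining cluster nodes
    ceph-deploy install ceph-osd1 ceph-osd2 ceph-osd3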

5. We now need to add the initial monitors and gather the keys. This will place the keys into your ceph-deploy directory.
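
The corresponding ceph-deploy call:

    # Create the initial monitors defined in ceph.conf and gather
    # the admin and bootstrap keyrings into the current directory
    ceph-deploy mon create-initial

Afterwards you should see ceph.client.admin.keyring and the bootstrap keyrings next to your ceph.conf.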

6. As we have installed all the nodes and initialized the monitors, we can connect them to Calamari. Your Ceph cluster will still be in an error state, as you need at least three active OSDs to get the cluster into a healthy state.
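
The ceph-deploy shipped with the Calamari-enabled release provides a connect subcommand for this; list every node that should report into Calamari (hostnames hypothetical):

    # Point the salt minions on the cluster nodes at the Calamari master
    ceph-deploy calamari connect ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2 ceph-osd3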

7. Define the ceph-mgmt node as the admin node.
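
For example:

    # Push ceph.conf and the admin keyring to the ceph-mgmt node
    ceph-deploy admin ceph-mgmt
    # Make the keyring readable for the ceph CLI (not needed when running as root)
    sudo chmod +r /etc/ceph/ceph.client.admin.keyring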

8. Prepare the disks and add the OSDs to the cluster. As I do not have SSDs for a cache pool, the journals will reside on the OSDs, which is not optimal from a performance point of view but sufficient for a POC setup. It is recommended to "dd" out your OSD disks before you run the next step. This step can run for a while depending on the number of disks you add to the cluster (it is not necessary to partition your disks, as the "--zap" option will do that for you).
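
A sketch for one disk on one node; repeat the pair of commands for every node:disk combination you want to add (hostname and device are hypothetical):

    # Zap the disk (destroys the existing partition table and creates a
    # fresh one), then prepare and activate it as an OSD. With no separate
    # journal device given, the journal is co-located on the same disk.
    ceph-deploy disk zap ceph-osd1:/dev/sdb
    ceph-deploy osd create ceph-osd1:/dev/sdb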

9. After all your OSDs have been added successfully to your cluster, you should be able to run the following command
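
    # Check the overall cluster health from the admin node
    ceph health
    # Expected output once all OSDs are up and in (illustrative):
    # HEALTH_OK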

or log in to Calamari and see an active, healthy cluster.

[Screenshot: Calamari showing the cluster health]

Congratulations, you now have a running Ceph cluster!
