The goal of this Ceph / OpenStack integration post series is to give you a step-by-step guide on how to build a Ceph cluster and integrate it with your OpenStack environment for a POC, as well as demonstrate how to convert qcow2 images to raw and import the raw images into an RBD-backed Glance. This setup is by no means a production setup, as I do not have SSDs for the Ceph journals, meaning the journals will reside on the OSD disks themselves.
Part 1 of this post series walks you through getting your RHEL 7.1 hosts ready for Ceph 1.2.3 and installing the Calamari web interface on the admin / ceph-mgmt node.
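(As a small teaser for the later parts of the series: the image handling essentially comes down to a qemu-img conversion followed by a Glance import, roughly as sketched below. The image name and Glance options are only placeholders here and will be covered in detail later.)

# qemu-img convert -f qcow2 -O raw rhel7-guest.qcow2 rhel7-guest.raw
# glance image-create --name rhel7-guest --disk-format raw --container-format bare --is-public True --file rhel7-guest.raw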
For this post series I have the following hardware/ software available:
For the OpenStack environment
1x Dell r720 (Controller)
6x Dell r720xd (Compute)
I am using RHEL OSP 6 on RHEL 7.1
For the Ceph environment
1x Dell r720 (ceph-mgmt/admin)
3x Dell r720xd (ceph-mon; in the real world these could also be VMs)
6x Dell r720xd (ceph-osd), 24 disks each
I am using RHEL 7.1 with Ceph 1.2.3
1. My POC environment does not need a host-based firewall, therefore I will turn it off.
# systemctl disable firewalld
# systemctl stop firewalld
2. Create the correct network config for your 10GbE interface in /etc/sysconfig/network-scripts/ifcfg-Your_interface_name. It is very important that your Ceph nodes are connected with at least one 10GbE NIC that is reachable across the whole cluster and that MTU 9000 is set.
TYPE="Ethernet" BOOTPROTO=none DEFROUTE="yes" IPV4_FAILURE_FATAL="no" IPV6INIT="yes" IPV6_AUTOCONF="yes" IPV6_DEFROUTE="yes" IPV6_FAILURE_FATAL="no" DEVICE=p2p1 ONBOOT=yes UUID=25f57a8c-5cc9-41d2-9ae4-9e9d70161442 NAME="System p2p1" HWADDR=dc:0e:a1:8c:a0:2f IPADDR=192.168.1.25 PREFIX=16 GATEWAY=192.168.1.1 DOMAIN="local.domb.com" DNS1=192.168.1.23 MTU=9000 #####> Essential for ceph cluster
3. Disable NetworkManager and use the classic network service instead.
# systemctl disable NetworkManager.service
# systemctl stop NetworkManager.service
# systemctl start network.service
# systemctl enable network.service
4. Disable SELinux, as Ceph does not support it yet.
# sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
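The sed above only takes effect after a reboot (which happens in step 7 anyway). If you want to drop the running system into permissive mode right away, you can additionally run:

# setenforce 0
# getenforce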
5. If you do not have a DNS server, make sure your Ceph servers have the correct host entries.
cat > /etc/hosts << EOF
192.168.1.180   compute1.local.domb.com     compute1
192.168.1.195   compute2.local.domb.com     compute2
192.168.1.198   compute3.local.domb.com     compute3
192.168.1.201   compute4.local.domb.com     compute4
192.168.1.204   compute5.local.domb.com     compute5
192.168.1.207   compute6.local.domb.com     compute6
192.168.1.210   compute7.local.domb.com     compute7
192.168.1.213   compute8.local.domb.com     compute8
192.168.1.216   ceph-mgmt.local.domb.com    ceph-mgmt
192.168.1.235   ceph-osd01.local.domb.com   ceph-osd01
192.168.1.219   ceph-osd02.local.domb.com   ceph-osd02
192.168.1.222   ceph-osd03.local.domb.com   ceph-osd03
192.168.1.225   ceph-osd04.local.domb.com   ceph-osd04
192.168.1.228   ceph-osd05.local.domb.com   ceph-osd05
192.168.1.231   ceph-mon01.local.domb.com   ceph-mon01
192.168.1.242   ceph-mon02.local.domb.com   ceph-mon02
192.168.1.238   ceph-mon03.local.domb.com   ceph-mon03
EOF
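To verify that a host entry resolves as expected you can use getent, here with ceph-mon01 as an example from the list above:

# getent hosts ceph-mon01
192.168.1.231   ceph-mon01.local.domb.com ceph-mon01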
6. Register your hosts with RHN, attach the correct subscription and enable the following repositories.
# subscription-manager register --username your_rhn_username --password your_rhn_password
# subscription-manager attach --pool=your_pool_id
# subscription-manager repos --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-rhceph-1.2-calamari-rpms \
    --enable=rhel-7-server-rhceph-1.2-installer-rpms \
    --enable=rhel-7-server-rhceph-1.2-mon-rpms \
    --enable=rhel-7-server-rhceph-1.2-osd-rpms
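Before you proceed it doesn't hurt to double-check that the Ceph repositories really got enabled:

# yum repolist enabled | grep rhceph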
7. Update and restart the system
# yum -y update
# init 6
8. Create the root SSH key on the ceph-mgmt host (which will also act as the admin node) and distribute it to the other monitors and OSDs in the cluster.
[root@ceph-mgmt ~]# ssh-keygen
[root@ceph-mgmt ~]# for i in ceph-mgmt ceph-mon01 ceph-mon02 ceph-mon03 ceph-osd01 ceph-osd02 ceph-osd03 ceph-osd04 ceph-osd05; do ssh-copy-id $i; done
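To confirm that passwordless root login works on every node, a quick loop should print each hostname without asking for a password:

[root@ceph-mgmt ~]# for i in ceph-mgmt ceph-mon01 ceph-mon02 ceph-mon03 ceph-osd01 ceph-osd02 ceph-osd03 ceph-osd04 ceph-osd05; do ssh $i hostname; done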
9. Install chrony on all hosts and enable it
[root@ceph-mgmt ~]# for i in ceph-mgmt ceph-mon01 ceph-mon02 ceph-mon03 ceph-osd01 ceph-osd02 ceph-osd03 ceph-osd04 ceph-osd05; do ssh $i "yum -y install chrony ; systemctl enable chronyd ; systemctl start chronyd"; done
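Since the monitors are sensitive to clock skew, it is a good idea to check that chrony has actually picked up time sources on the monitor nodes (output will of course vary per host):

[root@ceph-mgmt ~]# for i in ceph-mon01 ceph-mon02 ceph-mon03; do ssh $i "chronyc sources"; done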
10. Adjust the kernel pid_max value on all OSD servers. Since each of my servers carries more than 20 disks, this is required.
[root@ceph-mgmt ~]# for i in ceph-osd01 ceph-osd02 ceph-osd03 ceph-osd04 ceph-osd05; do ssh $i "echo 'kernel.pid_max = 4194303' > /etc/sysctl.d/99-pid_max.conf ; sysctl -p /etc/sysctl.d/99-pid_max.conf"; done
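Each OSD daemon spawns a large number of threads, which is why the kernel default of 32768 PIDs can be exhausted on dense nodes. You can verify that the new value is active on all OSD servers with:

[root@ceph-mgmt ~]# for i in ceph-osd01 ceph-osd02 ceph-osd03 ceph-osd04 ceph-osd05; do ssh $i "sysctl kernel.pid_max"; done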
11. Now it's time to download the Ceph components from RHN (rhceph-1.2.3-rhel-7-x86_64.iso). Once downloaded, mount the ISO on your ceph-mgmt host and copy the following files:
[root@ceph-mgmt ~]# mount -o loop rhceph-1.2.3-rhel-7-x86_64.iso /mnt/
[root@ceph-mgmt ~]# cp /mnt/RHCeph-Calamari-1.2-x86_64-c1e8ca3b6c57-285.pem /etc/pki/product/285.pem
[root@ceph-mgmt ~]# cp /mnt/RHCeph-Installer-1.2-x86_64-8ad6befe003d-281.pem /etc/pki/product/281.pem
[root@ceph-mgmt ~]# cp /mnt/RHCeph-MON-1.2-x86_64-d8afd76a547b-286.pem /etc/pki/product/286.pem
[root@ceph-mgmt ~]# cp /mnt/RHCeph-OSD-1.2-x86_64-25019bf09fe9-288.pem /etc/pki/product/288.pem
12. Install the ice_setup-*.rpm package.
[root@ceph-mgmt ~]# yum install /mnt/ice_setup-*.rpm
13. Before we configure or install any further Ceph components, we need to create a ceph-config directory which will store future Ceph configs.
[root@ceph-mgmt ~]# mkdir ~/ceph-config
[root@ceph-mgmt ~]# cd ~/ceph-config
14. We now launch the ICE server setup, which creates the cephdeploy.conf needed for the Ceph deployment later on. Run it from the ~/ceph-config directory created above.
[root@ceph-mgmt ceph-config]# ice_setup -d /mnt
15. Update the ice_setup packages on the calamari admin node.
[root@ceph-mgmt ~]# yum -y update && ice_setup update all
16. Initialize the Calamari server. During setup it will ask you for an administrator email address and a password for the root user (web login).
[root@ceph-mgmt ~]# calamari-ctl initialize
17. After a successful Calamari install you should be able to log in to the Calamari web interface with root/your_password (you won't see much yet, as no cluster has been connected).
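If you want a quick sanity check from the command line before opening a browser (assuming Calamari listens on the default HTTP port of the admin node):

[root@ceph-mgmt ~]# curl -I http://ceph-mgmt.local.domb.com/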