CloudForms: The Swiss Army Knife of Hybrid Cloud Management

Today a dream came true for me: I had the honor of presenting "Automation and configuration management across hybrid clouds with CloudForms, Satellite 6 and Ansible Tower" at Red Hat Summit in San Francisco 2016. When I joined Trivadis in 2006, Daniel Steiner, a Senior Linux Engineer who had earned a Red Hat fedora for passing his RHCE before 2006, took me under his wing. I told him that one day I would speak at Red Hat Summit. Today is that day. My gratitude goes to him for inspiring me to go down this path.

I am posting my slide deck as well as the 2 videos so that you can review the potential of the Red Hat management suite.

Red Hat CloudForms has made huge progress in the last few releases. As you may know, we added an Azure provider in 4.0 and now a Google Compute Engine provider in 4.1. As a bonus, we also have integration with Ansible Tower, which makes automation a whole lot easier. Having Google as a provider is great, as we can now triage application (OpenShift Dedicated, Puppet, Ansible) and instance provisioning across the three major cloud providers: Google, Azure, and AWS.

On top of that, you are now able to use Satellite 6 and Ansible Tower for configuration management. This opens up unlimited possibilities for system and application configuration management, as you can provision on premise or off premise across hybrid clouds and run the same configuration management role/class/container stack everywhere you go.
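As a sketch of what running the same role everywhere means in practice (the inventory and playbook names below are hypothetical, not from my demo):

```shell
# Hypothetical inventories: the playbook and its roles stay identical,
# only the inventory pointing at on-premise or cloud hosts changes.
ansible-playbook -i inventory.onprem site.yml
ansible-playbook -i inventory.aws    site.yml
ansible-playbook -i inventory.azure  site.yml
```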

My talk at Summit shows the above, but more importantly how CloudForms, Satellite 6, and Ansible Tower integrate with each other and what you can do with that integration. With a CMP that integrates automation/orchestration and configuration management/content management, the sky is the limit.

Other improvements are highlighted by Lucy Kerner: she shows how to run SCAP scans on VMs and remediate the non-compliant configurations via CloudForms, Ansible Tower, and Satellite 6 in "Compliance, security automation, and remediation with Red Hat CloudForms, Red Hat Satellite, and Ansible Tower by Red Hat".

Here is my presentation: Automation and configuration management across hybrid clouds with Red Hat CloudForms, Red Hat Satellite 6, and Ansible Tower by Red Hat


Sources: (Satellite 6 CI/CD) (CloudForms CI/CD) (puppet modules + cloud init)

Posted in ansible, Cloud, CloudForms, Openstack, Puppet

CloudForms Hybrid Cloud Sessions at Red Hat Summit SF 2016

Please join us at Red Hat Summit in San Francisco and attend the sessions below. These sessions highlight how versatile CloudForms is and what problems it can solve for you.
Use code RHSRAF for a Red Hat Summit pass at the discounted rate of $1,195.

Tuesday, 10:15am
Enabling digital transformation via the Red Hat management portfolio
Alessandro Perilli, Red Hat
Joe Fitzgerald, Red Hat
William Nix, Red Hat

Tuesday, 3:30pm
Red Hat Cloud roadmap

James Labocki, Red Hat
Rob Young – Principal Product Manager, Red Hat
Xavier Lecauchois, Red Hat

Tuesday, 3:30pm
Red Hat containers roadmap
Mike McGrath – Managing Architect, Platform, Red Hat
Xavier Lecauchois, Red Hat
Sayan Saha – Sr. Manager, Product Management, Red Hat
Stephen Gordon, Red Hat
Ben Breard – Technology Product Manager, Red Hat
Joe Fernandes – Senior Director of Product Management, Red Hat
Rich Sharples – Senior Director of Product Management, Red Hat

Wednesday, 11:30am
Red Hat CloudForms 2016 roadmap
Scott Drennan – Product Manager, Nuage Networks
Eric Johnson, Google
John Hardy, Red Hat

Wednesday, 4:45pm
Automating Azure public and private clouds with Red Hat CloudForms 4
Jason Ritenour, Red Hat

Wednesday, 4:45pm
Automation and configuration management across hybrid clouds with Red Hat CloudForms, Red Hat Satellite 6, and Ansible Tower by Red Hat
Laurent Domb – Sr. Cloud Solutions Architect, Red Hat
John Hoffer, Red Hat
Mike Dahlgren – Red Hat Solutions Architect, Red Hat

Thursday, 10:15am
Red Hat CloudForms: Cutting VM creation time by 75% at General Mills
Ashley Nelson, General Mills
Mike Dahlgren – Red Hat Solutions Architect, Red Hat

Thursday, 11:30am
Continuous integration with Red Hat cloud solutions
Oded Ramraz, Red Hat
Sim Zacks, Red Hat

Thursday, 3:30pm
Compliance, security automation, and remediation with Red Hat CloudForms, Red Hat Satellite, and Ansible Tower by Red Hat
Matthew Micene – Solution Architect, DLT Solutions
Lucy Kerner – Senior Cloud Solutions Architect, Red Hat

Thursday, 4:45pm
OpenShift advanced management with Red Hat CloudForms
Itamar Heim, Red Hat
Federico Simoncelli – Associate Manager, Red Hat

Book Signing
Thursday, 11:15am – North Upper Lobby
Mastering CloudForms Automation
Peter McGowan

Posted in ansible, Cloud, CloudForms, Openstack, Puppet

Build a RHEL Cloud Image for GCE

This is a brief tutorial on how you can create a Red Hat Enterprise Linux cloud image for Google Compute Engine. These instructions are meant for a Linux host running KVM.

1. Download the Google Cloud SDK (google-cloud-sdk) so you can create and upload the image. You will need an internet connection, as the tool communicates with the GCE API.
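Assuming version 112.0.0 (the release used in the steps below), fetching the tarball might look like this; the exact URL is an assumption, so check cloud.google.com/sdk for the current release:

```shell
# Download the Google Cloud SDK tarball; version and URL are assumptions.
curl -LO https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-112.0.0-linux-x86_64.tar.gz
```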

2. Untar the Google cloud SDK

[root@host191 ~]# tar -xzvf google-cloud-sdk-112.0.0-linux-x86_64.tar.gz
[root@host191 ~]# cd google-cloud-sdk/

3. Install the SDK. You will need to be able to communicate with the outside world here as well, and to visit a website.

[root@host191 ~]#  ./

4. Once installed, build your Linux VM. Create a qcow2 disk; you will convert the disk to raw format later on.

[root@host191 ~]# qemu-img create -f qcow2 /var/lib/libvirt/images/gce.qcow2 10G

5. Follow:
Google also supports a metadata service, so you can install cloud-init as well, which you can find in the rhel-7-server-rh-common-rpms repository. Once you're done, shut down your VM and convert it to raw. IMPORTANT: the disk must be named disk.raw.
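A minimal sketch of the cloud-init installation inside the guest, assuming the VM is registered with subscription-manager:

```shell
# Inside the RHEL 7 guest: enable the repo that ships cloud-init,
# install it, and enable its services for the next boot.
subscription-manager repos --enable=rhel-7-server-rh-common-rpms
yum -y install cloud-init
systemctl enable cloud-init cloud-init-local cloud-config cloud-final
```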

6. Convert the image to raw. The image needs to be named disk.raw

[root@host191 ~]#  qemu-img convert -f qcow2 -O raw gce.qcow2 disk.raw

7. Tar it up. The -S flag handles the sparse raw file efficiently, keeping the archive small.

[root@host191 ~]#  tar -Sczf gcerhel7.tar.gz disk.raw

8. If there is no storage bucket yet, create one. rhelimages is my bucket name.

[root@host191 ~]# ./gsutil mb gs://rhelimages/

9. Upload the tarball to GCE storage

[root@host191 ~]# ./gsutil cp /var/lib/libvirt/images/gcerhel7.tar.gz gs://rhelimages/

10. Create the compute image. Once done, you will be able to create GCE compute instances from that image.

[root@host191 ~]#  ./gcloud compute images create rhel7-custom-amd64 --source-uri gs://rhelimages/gcerhel7.tar.gz
Posted in Cloud, CloudForms, Linux

A Recipe to Build a Successful Cloud Environment: Stop Thinking Legacy, Think Cloud!

The awesome Narendra Narang and I were invited to speak as alternate speakers at OpenStack Summit in Austin 2016. Unfortunately, nobody backed out, so we were not able to present our talk about building a successful cloud environment. Attached is our presentation, which walks you through the journey of what you will have to think about and prepare for to get to a successful cloud environment.

Posted in Cloud

CloudForms sample provisioning, metrics collection and events workflows

Recently a customer asked me how our solutions suite works and what we can do with it. Red Hat, for me, is one of the only companies in the world that can deliver pretty much the full stack, from infrastructure up to application and back. With RHEL OSP 8 we even include OpenDaylight, which means we now cover the network as well. So where does CloudForms fit in, and how does it integrate with the rest of the suite: Satellite 6, Ansible, OpenShift, OpenStack, RHEV, VMware, SCVMM, Amazon EC2, Azure?

The next three diagrams walk you through a sample provisioning workflow showing how CloudForms interacts with the different components, as well as high-level overviews of metrics collection and the events mechanism.

Sample Provisioning Workflow
General provisioning workflow

Metrics Collection High Level Diagram
metrics highlevel workflow

Events Collection High Level Diagram
events workflow

Posted in CloudForms

Red Hat Summit San Francisco 2016 here we come

This June 26-28, 2016, Mike Dahlgren and I will speak at Red Hat Summit in San Francisco. The topic is:

Automation and configuration management across hybrid clouds with CloudForms, Satellite6 and Ansible Tower

Have you ever wondered what you need to automate and orchestrate your data centers as well as cloud environments? Did you start your configuration management and orchestration projects and realize you were thinking too small and had underestimated the effort of cultural change in the company? This talk will take you on a journey of how you need to think and what tools Red Hat provides to build a successful automation suite with CloudForms, Satellite 6, and Ansible Tower. The talk will showcase examples and integrations between CloudForms, Satellite 6, and Ansible Tower, and will give you advice on how to motivate your dev and ops teams to work together and change mindsets.

Posted in CloudForms, Linux, Openstack, Puppet

CloudForms smart state analysis preparation for vSphere 6.0

In the past we used VMware-vix-disklib-5.5.2-1890828.x86_64.tar.gz for vSphere 5.5, which was pretty easy to install. VMware included a script that did all the work for you. In 6.0, the script is gone.

Attached are the steps to get it working with CloudForms 4 and vSphere 6.
You can find the VDDK 6.0 here:

1. Copy the downloaded file VMware-vix-disklib-6.0.0-2498720.x86_64.tar.gz to /tmp on the appliance.
2. Untar VMware-vix-disklib-6.0.0-2498720.x86_64.tar.gz

[root@miq ~] tar -xzvf VMware-vix-disklib-6.0.0-2498720.x86_64.tar.gz

3. Create the directory /usr/lib/vmware-vix-disklib

[root@miq ~] mkdir -p /usr/lib/vmware-vix-disklib

4. Move the following directories and their contents into /usr/lib/vmware-vix-disklib:
bin64, include, lib64

[root@miq ~] mv /tmp/vmware-vix-disklib-distrib/bin64 /usr/lib/vmware-vix-disklib/
[root@miq ~] mv /tmp/vmware-vix-disklib-distrib/lib64 /usr/lib/vmware-vix-disklib/
[root@miq ~] mv /tmp/vmware-vix-disklib-distrib/include /usr/lib/vmware-vix-disklib/

5. Create symlinks to libvixDiskLib so that introspection will work.

[root@miq ~] ln -s /usr/lib/vmware-vix-disklib/lib64/ /usr/lib/
[root@miq ~] ln -s /usr/lib/vmware-vix-disklib/lib64/ /usr/lib/

6. Load the libs

[root@miq ~] ldconfig
[root@miq ~] ldconfig -p | grep vix
[root@miq ~] reboot
Posted in CloudForms

Container Metrics And Introspection With CloudForms 4.0 And OpenShift 3.1 – Updated for OpenShift 3.2 and CF 4.1

If you were wondering how CloudForms 4.0 and OpenShift 3.1 work together, you are in the right place. This post is about the integration of CloudForms 4.0 and OpenShift 3.1. I will describe how to install OSE 3.1 and how to configure it so that you can connect CloudForms with OpenShift 3.1. My steps here are for a small POC and by no means production-ready. The goal is for CloudForms to discover the OpenShift environment and collect metrics from the containers, as well as package information, through smart state analysis/introspection. For introspection to work properly, your CloudForms appliance needs to have the smart-proxy role enabled.

I wrote two scripts which do the whole work for you. If everything works fine, you should be able to do the same as I show in the following video.

If you don’t want to do this step by step, here is the GitHub repo for it.

Step 1. This will prepare the master and nodes for the ose install.

Step 2. Execute the script. You will have to enter the root password during ssh-copy-id.

Step 3. The script runs ansible to install OSE v3 and creates all the user/service accounts so that you can connect to OpenShift from CloudForms.

Step 4. Execute the script. Make sure to visit the Hawkular URL in the browser and accept the certificate for https://$HAWKULARFQDN/hawkular/metrics and https://$HAWKULARFQDN:5000. After the install you will find the token CloudForms needs in /root/cfme4token.txt. You can add it to your CloudForms OpenShift provider.
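If you ever need to fetch the token by hand instead of reading /root/cfme4token.txt, something along these lines should work, assuming the scripts created a management-admin service account in the management-infra project (both names are assumptions based on the usual CloudForms setup):

```shell
# List the secrets bound to the service account, pick the token secret,
# then decode it. Replace <token-secret> with one of the names printed.
oc get sa management-admin -n management-infra \
  --template='{{range .secrets}}{{println .name}}{{end}}'
oc get secret <token-secret> -n management-infra \
  --template='{{.data.token}}' | base64 -d
```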


Step 5. For Hawkular, add the Hawkular URL. If Hawkular is not running on your master, as in this example, you will have to point to the node where it is running, or in an HA setup to the load balancer which forwards port 443.

Step 6. In CloudForms, make sure that all the metrics collection check-boxes are enabled under Configure > Configuration > Server > Server Control.

Posted in CloudForms

16 steps to a fully highly available Red Hat OpenStack Platform environment with Ceph, built by OSP Director 7.1

This post is going to walk you through a full HA, network-isolated Red Hat OpenStack installation with 3 controller, 3 compute, and 3 Ceph OSD nodes in only 16 simple steps.

To be successful with this deployment, you will have to follow this network diagram (create the trunks and VLANs on the switch) and work with OSP Director 7.1. Working NTP and IPMI access (from the director and the controllers) is critical for this environment:

OSP7.1 Network IPMI

OK, let's start the installation. All the scripts you are going to run are a collection of the steps described in the Red Hat documentation. The scripts can be found here: ospdirectorinstall

Step 1 I assume that you already have a RHEL 7.1 installation. Download the following script.

Step 2 Once downloaded, fill out the variables from RHNUSER through UNDERCLOUD_DEBUG_BOOL. Note that I set DISCOVERY_RUNBENCH_BOOL=false; this is not recommended in a production environment, as it disables the gathering of host metrics during introspection.

Step 3 Run the script to prepare for the actual undercloud/director software and follow the commands which the script tells you to execute.

Step 4 Download the Red Hat images (Deployment Ramdisk for RHEL-OSP director 7.1, Discovery Ramdisk for RHEL-OSP director 7.1, and Overcloud Image for RHEL-OSP director 7.1) and place them in the image folder. You can find those images here:
OSP Images

Step 5 Download all the scripts you’ll need for this install

Step 6 Create the flavors compute, controller, ceph-storage, and baremetal
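Step 6 boils down to flavor commands like the following (RAM, disk, and vCPU sizes are placeholders; adjust them to your hardware, and repeat for each role):

```shell
# One flavor per role, tagged with a profile capability so introspected
# nodes can be matched to roles later. Sizes below are placeholders.
openstack flavor create --id auto --ram 8192 --disk 40 --vcpus 4 control
openstack flavor set \
  --property "cpu_arch"="x86_64" \
  --property "capabilities:boot_option"="local" \
  --property "capabilities:profile"="control" control
```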

Step 7 Add DNS for the overcloud nodes (You will have to edit your own DNS in

Step 8 Create your instackenv.json file and start introspection. Remember to choose the correct ipmi_type.

Please edit the file and add your MAC address, IPMI URL, IPMI user, IPMI password, and IPMI tool. I am using Joe Talerico's Python script to create the instackenv.json files.

Step 9 Assign nodes to their profiles. If you set the benchmark option to true, you can also run ahc-match instead of assigning the hosts manually.

Step 10 Copy the Director template files from /usr/share/openstack-tripleo-heat-templates/ into your /home/stack/templates/ folder

Step 11 Create the network-environment.yaml file so it maps to your environment. If you do not use LACP, remove the bonding option, as it might cause problems.

Step 12 Now we download the files fix_rabbit.yaml, limits.yaml, ceph-environment.yaml, firstboot-environment.yaml, and ceph_wipe.yaml (edit these files so they reflect your disk setup) and place them in the templates and firstboot folders (we do some maintenance on first boot). Set the root password of the instances in firstboot-environment.yaml if you like; the default is redhat.

Step 13 Make the necessary changes to /home/stack/templates/puppet/hieradata/ceph.yaml. In my case the journal is on the OSD; that's why the journal is blank.

If you want to have the journals on a separate partition, you can exchange the ceph::profile::params::osds: entry with

Step 14 Go ahead and launch the install.
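The launch itself is a single (long) command; for the 3/3/3 layout described here it looks roughly like this, with the template paths and NTP server as assumptions for your environment:

```shell
# Deploy 3 controllers, 3 computes, and 3 Ceph storage nodes using the
# customized templates. Paths and NTP server are placeholders.
openstack overcloud deploy --templates ~/templates \
  -e ~/templates/network-environment.yaml \
  -e ~/templates/ceph-environment.yaml \
  --control-scale 3 --compute-scale 3 --ceph-storage-scale 3 \
  --ntp-server pool.ntp.org
```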

Step 15 As a post-install task, you can enable fencing for the PCS cluster nodes. Repeat this for each cluster/controller node:

Step 16 Once done enable fencing
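A sketch of steps 15 and 16 using fence_ipmilan; the hostname, IP address, and credentials are placeholders for your own IPMI details:

```shell
# Create one stonith device per controller (repeat with the right IPMI
# details for each), then turn fencing on cluster-wide.
pcs stonith create fence-ctrl0 fence_ipmilan \
  pcmk_host_list="overcloud-controller-0" \
  ipaddr="192.0.2.11" login="admin" passwd="secret" lanplus=1
pcs property set stonith-enabled=true
```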

Enjoy your full HA nine-node environment.

Posted in Openstack

Red Hat OpenStack Platform 7 director (OSP director) cli simple install part 2

Part 2 of this blog series will show you how to register the nodes with OSP director via the command line, create flavors, and deploy a simple non-HA OpenStack environment.

1. To register the blades I created the following JSON file. Please keep in mind that if you use HP blades with iLO 2, you will have to use the default pxe_ipmitool driver. The MAC address you see in the JSON is the MAC address of the interface you would like to boot from.

2. Import the node definitions into OSP director

3. Assign the kernel and ramdisk to all nodes defined in the blades.json file
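Steps 2 and 3 correspond to the standard commands (using the blades.json from step 1):

```shell
# Import the node definitions, then assign the deploy kernel/ramdisk
# to every registered node.
openstack baremetal import --json blades.json
openstack baremetal configure boot
```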

4. Check if everything got imported correctly


5. Let's inspect the nodes. IPMI will start the nodes and boot them into discovery mode.
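The bulk introspection is typically started and monitored with:

```shell
# Kick off introspection on all registered nodes, then poll its progress.
openstack baremetal introspection bulk start
openstack baremetal introspection bulk status
```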

Booting into discovery

Discovered hosts

6. Create the flavor for the test scenario (we only create one flavor for all hosts)

7. Set the boot mode for the flavors to local

8. Now we are ready to deploy.
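A minimal non-HA deploy with one controller and one compute might look like this; the flavor name assumes the single flavor created in step 6 was called baremetal:

```shell
# Simple test deployment: one controller, one compute, one shared flavor.
openstack overcloud deploy --templates \
  --control-scale 1 --compute-scale 1 \
  --control-flavor baremetal --compute-flavor baremetal
```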

9. Look at the progress

10. During the install you will see the following screen if you look at the iLO


11. If you would like to log in to your overcloud instances, you have to source the overcloud.rc file, which you will find in stack's home directory.

Posted in Openstack