Alternative way of working with OpenStack as a provider


I had an interesting discussion with a colleague today, who happens to be our team’s OpenStack guru. We’re on a project together implementing CloudForms and OpenStack, and the question came up as to why we (CloudForms) don’t just use Heat templates to do all of our provisioning into OpenStack. His compelling argument was that a CloudForms service should purely create a Heat template, and pass it to Heat to actually do the work. Performing actions on the resultant service should merely involve reconfiguring the Heat template and re-applying it.
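For illustration, a minimal Heat (HOT) template of the kind such a service could generate and hand to Heat might look like this. This is only a sketch; the parameter names and resource layout are my own, not anything from the discussion:

```yaml
heat_template_version: 2013-05-23

description: Minimal single-instance stack (illustrative sketch)

parameters:
  image:
    type: string
    description: Name or ID of the image to boot
  flavor:
    type: string
    default: m1.small

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }

outputs:
  server_ip:
    description: First IP address of the instance
    value: { get_attr: [server, first_address] }
```

Reconfiguring the service would then be a matter of editing this template and issuing a stack-update, letting Heat work out and apply the difference.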

The argument was that in a cloud model our orchestration tool (CloudForms) should be ordering services, and we shouldn’t actually care about the underlying ‘things’ (such as VMs/instances/networks) that make up the service.

It would certainly make life easier in some respects: we’d only have to talk to the Heat API, and not Nova/Neutron etc., so automation would be simpler. The CloudForms/ManageIQ UI might only see icons representing services (the Heat templates), rather than the instances making up the services, but would this matter? Could we ascertain the ‘status’ of the service from Ceilometer?



Is the Heat service guaranteed to be available in all OpenStack implementations/deployments? If not, this would limit OpenStack provisioning to those that have it configured.

In general, we don’t assume many services are required, except for Identity/Keystone and Compute/Nova. For the others, we use them if they’re available, fall back if we can, and restrict features if we must. If Heat isn’t required, we’d still need a fall-back.


I know that Heat integration is on the roadmap. This approach is definitely interesting to consider. I could see this becoming a “best practice” discussion, of sorts. I’m not sure we’re ready to abandon the current workflow, but this definitely could be a compelling alternative workflow.


In my view, a solid workflow solution is an essential component for a system like ManageIQ; it is where a lot of the value is delivered. Also, we support other important non-OpenStack providers, e.g. VMware and AWS.

I think the question here is about granularity. Say you have a workflow with 10 steps, 5 of which are consecutive steps into OpenStack. You could decide to collapse those 5 steps and outsource them to Heat. But I’m not sure you’d want to force this approach. Ideally Heat is just another OS component with which we can integrate, and it would be up to the customer to use it, or not.

PS: looking forward to our workflows becoming visual, both for design and for showing the current state of a workflow instance :slight_smile:



The suggestion was offered to provide an abstraction layer for the ever-expanding and changing OpenStack APIs. Just upload the Heat template and OpenStack will deliver your recipe to you, the same way it would by operating all the APIs individually. You would limit the number of API calls dramatically, compressing them into an almost readable model in a Heat template. This is the way OpenStack and AWS are used by people implementing DevOps: spawning whole environments and collapsing them again.

The approach differs a bit from enterprise virtualization, where you ask for a VM on existing networks. The recipe for OpenStack starts by creating tenant networks, probably one or more routers, volatile instances of defined types, and possibly connecting persistent storage to them.
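As a sketch of such a recipe (the resource names, network name and sizes here are invented for illustration), a HOT template covering the tenant network, router, instance and persistent volume could look roughly like:

```yaml
heat_template_version: 2013-05-23

description: Tenant network, router, instance and volume (illustrative sketch)

resources:
  tenant_net:
    type: OS::Neutron::Net

  tenant_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: tenant_net }
      cidr: 10.0.0.0/24

  router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info:
        network: public          # assumed name of the external network

  router_iface:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: router }
      subnet_id: { get_resource: tenant_subnet }

  data_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10                   # GB of persistent storage

  server:
    type: OS::Nova::Server
    properties:
      image: fedora-20           # placeholder image name
      flavor: m1.small
      networks:
        - network: { get_resource: tenant_net }

  volume_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: server }
      volume_id: { get_resource: data_volume }
```

One stack-create call stands in for a whole series of individual Neutron, Nova and Cinder API calls, and one stack-delete collapses the environment again.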

And yes, Heat needs to be installed to be able to operate it. You could name it as a requirement to your users. This is actually true for all components; we are still at a very early stage in the OpenStack life span (hopefully), and currently clouds have more or less the same building blocks: they can’t run without Keystone, Glance, Nova and Neutron. Other supported components like Cinder, Swift, Ceilometer and Heat are optional. There are many more to be added: LBaaS, FWaaS, DBaaS, VPNaaS. All of these will be represented by Heat. Are you sure you want to study, implement and maintain compatibility with all these APIs?

It is a suggestion; Heat is intended to be the interface for automation. There is something to be said for accessing the APIs directly: 100% of OpenStack’s functionality is available through them. In my opinion, though, it is better to focus on being excellent at high-level enterprise orchestration rather than trying to pry some functionality out of each API.
The same approach can be used for AWS with CloudFormation; there is a CloudFormation compatibility API in Heat.
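To illustrate that compatibility point: Heat’s cfn-compatible API accepts templates in CloudFormation syntax, so the same kind of single-instance stack could be expressed like this (again just a hedged sketch with placeholder values):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Single-instance stack in CloudFormation form (illustrative)

Parameters:
  InstanceType:
    Type: String
    Default: m1.small

Resources:
  Server:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: fedora-20              # placeholder image name
      InstanceType: { Ref: InstanceType }
```

In principle, a single template-driven integration could then target both OpenStack and AWS.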

Hoping this helps,

Bart van den Heuvel
EMEA Architect - OpenStack


We try not to impose restrictions that the management system itself does not impose. This is part of the reason for the reluctance to completely rely on Heat. That’s not to say that if we detect Heat we can’t somehow use it to optimize the provisioning workflow.

We have a similar problem with networks right now, for example. In your list there, you wrote Neutron as a basic requirement, but it’s not. There are many users out there using nova-network, and as much as we want to move to Neutron completely, we can’t. In fact, we are finding that many users need more support for nova-network. In effect, we have to support both.


Why couldn’t this be handled by the capability to add “roles” to an OpenStack provider? Similar to the CloudForms appliance roles, but within the OpenStack provider role you could enable/disable certain functions. Alternatively, since most of these services have their own service accounts, you could just have different tabs, like C&U for RHEV, that add functionality if you provide the Heat user and password. We are having to create catalog items using the “generic” method to call custom code that sends Heat templates, etc. Although this works, it’s still something of a workaround, as some of our customers want to use what is available out of the box rather than something completely customized for them. I would be OK with using the Service Catalogs if we had a “browse” button to upload a YAML (or similar) file for the code to re-use.


From my point of view, ManageIQ should give the choice to the user. The idea of
using Heat to delegate multi-resource service provisioning on OpenStack is
great, but not generic enough. And, as @rpo pointed out, it might not even be
available in every deployment.

I would rather see ManageIQ as a service designer, that would possibly
delegate deployment to a native “stack” manager, like OpenStack Heat or AWS
CloudFormation. We could use our own description language or implement an
existing one, like TOSCA, and create translators.

The user would be able to choose between automatic translation to the
native language or letting ManageIQ handle the individual items. The latter
would require ManageIQ to handle far more objects than today: network,
storage, load balancer, database…

In terms of UI, a pretty designer would be great. In OpenStack, you can
visualize a stack with the relationships between the components. It would
be awesome to provide a graphical tool that creates the stack.

My 2 cents :wink: