Run workers in containers


#1

The main idea of this blueprint is to be able to run ManageIQ components in separate containers. This would allow better scale-out of the application by provisioning more workers, possibly on different nodes. With an [anti-]affinity policy for container placement, ManageIQ could enforce high availability.
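
As a rough sketch of what "one role per container" could look like, here is a minimal launcher using the Docker SDK for Python. The `manageiq/miq-worker` image, the role names, and the `WORKER_ROLE` environment variable are assumptions for illustration, not an existing ManageIQ interface.

```python
# Sketch only: start one container per worker role.
# Image name, role names and WORKER_ROLE variable are hypothetical.
import docker

client = docker.from_env()

for role in ["generic", "event_catcher", "ems_refresh"]:
    client.containers.run(
        "manageiq/miq-worker",             # hypothetical per-role worker image
        detach=True,
        name="miq-worker-{}".format(role),
        environment={"WORKER_ROLE": role},
    )
```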

One direct application could be the deployment of these containers on top of OpenShift, leveraging OpenShift's auto-scaling features to handle ManageIQ. I know that ManageIQ will have the ability to deploy an OpenShift environment, so we might fall into a chicken/egg issue.

This could also be used to deploy ManageIQ on top of OpenStack with the Docker Nova driver, using Heat to handle high availability and auto-scaling. The same chicken/egg feeling applies there.

This feature will probably need ManageIQ to run on CentOS 7 to use Docker. BTW, migrating ManageIQ to CentOS 7 would be a nice move…


#2

Containers look like a good solution for grouping together similar code that is a) updated as a unit and b) running with the same security privileges. We’re looking at ways of leveraging this within our appliance.

The current ManageIQ system allows you to pick and choose how many of each component/service run on each machine, thus allowing you to scale based upon your needs. Load balancers allow for high availability. The use of individual memcached instances per server is the main problem with affinity, and you can alleviate this by doing something like storing sessions in a common location (e.g., the database).

Having said that, we are looking at how we can split up and simplify our codebase. And running each of these components in a container does have merit. Unfortunately, it will not give us [anti-]affinity, auto-scaling, or the ability to run ManageIQ components outside a ManageIQ appliance.

OpenShift is great. I wonder if it will allow us the freedom we need to perform tasks like mounting disks for inventory.

As for the chicken/egg issue: currently, ManageIQ runs in VMs in the infrastructure that it is managing. So while I agree it feels like this could be an issue, it ends up not being a problem at all.

We are updating our code to work on newer versions of RHEL/CentOS. But also do note that RHEL 6.5 does support running containers.


#3

Well, for orchestration purposes, e.g. [anti-]affinity, auto-scaling, etc., I thought that Kubernetes pods could be a nice fit. And fortunately, referring to the OpenShift 3 PEP, Kubernetes will be part of it.
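
To make that concrete, here is a minimal sketch of a pod anti-affinity rule expressed with the Kubernetes Python client. The `app: manageiq-worker` label and the worker image are assumptions for illustration, and the exact API will depend on what OpenShift 3 ends up exposing.

```python
# Sketch, assuming the Kubernetes Python client and a hypothetical
# "manageiq-worker" label/image: keep worker replicas on separate nodes.
from kubernetes import client

anti_affinity = client.V1Affinity(
    pod_anti_affinity=client.V1PodAntiAffinity(
        required_during_scheduling_ignored_during_execution=[
            client.V1PodAffinityTerm(
                label_selector=client.V1LabelSelector(
                    match_labels={"app": "manageiq-worker"}
                ),
                topology_key="kubernetes.io/hostname",  # at most one per node
            )
        ]
    )
)

pod_spec = client.V1PodSpec(
    affinity=anti_affinity,
    containers=[
        client.V1Container(name="worker", image="manageiq/miq-worker")
    ],
)
```

With a rule like this, the scheduler refuses to co-locate two worker pods on the same node, which is the high-availability placement discussed above.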

But is it supported? :wink: