Managing tenants and users of multiple OpenStack instances

Hi,

We are contemplating the use of ManageIQ as the CMP (cloud management platform) managing several OpenStack clusters installed in different locations.
Question: is it possible to declare and manage OpenStack users and tenants in ManageIQ once, and have them propagated to all OpenStack instances?
Thank you for your help.

Alireza

Not out of the box
Tenants and Users in ManageIQ are only for ManageIQ’s internal RBAC (role-based access control) and have nothing to do with Tenants and Users in OpenStack.

However, ManageIQ has a pretty good Automation Engine that would allow you to script this yourself fairly easily.

Thank you buc,

Nice to have your answer this morning.

First I have to correct my OpenStack/Keystone language :slightly_smiling_face:
By Tenant I meant Domain, not “Project”.

So let’s start with only one OpenStack. I can connect to Keystone, where I declare 2 Domains and 10 different users in each Domain. I assign the _admin role to one user and the _user role to the 9 others in each Domain (see the sketch below).
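
For concreteness, that first Keystone setup could be scripted roughly like this. A minimal sketch assuming the openstacksdk Python library; the endpoint, credentials, and the _admin/_user role names are placeholders from this post, not a verified configuration:

```python
# Sketch only: declare 2 Domains with 10 users each in Keystone.
# Endpoint, credentials, and role names are placeholders.
import openstack

conn = openstack.connect(
    auth_url="https://keystone.example.com:5000/v3",  # placeholder
    username="cloud_admin",
    password="secret",
    user_domain_name="Default",
    project_name="admin",
    project_domain_name="Default",
)

admin_role = conn.identity.find_role("_admin")  # roles assumed to exist already
user_role = conn.identity.find_role("_user")

for domain_name in ("DomainA", "DomainB"):
    domain = conn.identity.create_domain(name=domain_name)
    for i in range(10):
        user = conn.identity.create_user(
            name=f"{domain_name.lower()}-user{i}",
            domain_id=domain.id,
            password="changeme",
        )
        # one _admin per Domain, _user for the 9 others
        role = admin_role if i == 0 else user_role
        conn.identity.assign_domain_role_to_user(domain, user, role)
```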

Then I declare this single OpenStack in my ManageIQ as my Cloud Provider.
Finally, I create a new service catalog and a new service item pointing to my single OpenStack.

  • But my users are sitting in my Keystone and are not the same as the ones accessing ManageIQ. How could the ordering of a service item work if the user is not defined from one end to the other with consistent access rights?

… As I reach saturation, let’s now extend the capacity of my cloud. In a remote datacenter I create a new OpenStack cluster. I then have to “clone” (almost) everything I have in the first cluster, at least the Domains, user groups, users, etc., and go back to ManageIQ to declare a new Provider.

  • But now I have two Keystones that must be kept synchronized in both directions. How do I deal with consistency if ManageIQ doesn’t hold the master user reference?

Growing over time to tens and tens of clusters, I do not want the “user” to have any visibility into my technical setup or to be asked to choose where to place the VM. I want my CMP to automatically place the requested VM in a placement zone somewhere, based on a predefined policy (role, Domain, geography, resource occupancy balance, traffic, latency, affinity, etc.).

  • Again, if my user definitions and roles are sitting in each Keystone, how could the policy engine sort out where to place the user’s workload?

I suppose that I have to solve these questions outside ManageIQ with a kind of master Keycloak…

Looking forward to having your insight

Alireza

I am not that familiar with OpenStack, but anyway:

Maybe a little bit of background: ManageIQ is designed to be an abstraction over public clouds, OpenStack, and traditional infrastructure virtualization like VMware and RHEV (and some additional stuff). As such, ManageIQ maintains its own database with all the information from all the different providers in a common schema.

Providers in ManageIQ are responsible for:

  1. Reading out all provider resources and storing them in the ManageIQ database
  2. Translating common commands into the provider’s specific API calls
  3. Listening for events happening in the provider and forwarding them to ManageIQ

To do this, ManageIQ connects to the provider with an administrator account and will use that same account for all actions, independent of which user triggered the action in ManageIQ.

ManageIQ has its own access control system. ManageIQ queries the provider data with an administrator account and stores everything in its own database. Access control is implemented on whatever is in the database, completely independent from the provider’s access control (ManageIQ will even keep database records of things that have been deleted from the provider).


Back-2-topic:

  • ManageIQ cannot authenticate users based on their Keystone credentials (at least not without some configuration; it is probably possible to set up).
    ManageIQ typically uses its internal database, LDAP, or HTTPD external auth to authenticate users
  • ManageIQ will use its provider credentials to perform actions in OpenStack, independent of which user triggered it. There is no federated authentication happening under the hood
  • Do you even need to synchronize users if you aren’t using user federation?
  • ManageIQ has a pretty robust Automation Engine; if you need to synchronize users, you can basically script it yourself in ManageIQ using the information in the ManageIQ database (see the sketch after this list)
  • ManageIQ has a database with all the information from all the providers. For Cloud Providers it has the concept of Cloud Tenants; I am not sure if they are detailed enough for your use case
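
To illustrate the “script it yourself” bullet: inside ManageIQ the Automate method would be Ruby, but the Keystone half of such a user sync boils down to something like this Python/openstacksdk sketch (the desired_users list and the cloud name stand in for whatever you extract from the ManageIQ database and your clouds.yaml):

```python
# Sketch: idempotently push a desired user list into one Keystone.
import openstack

# Hypothetical desired state, e.g. exported from the ManageIQ database
desired_users = [
    {"name": "alice", "domain": "DomainA"},
    {"name": "bob", "domain": "DomainB"},
]

def sync_users(conn, users):
    """Make sure every desired user exists in this Keystone instance."""
    for spec in users:
        domain = conn.identity.find_domain(spec["domain"])
        if domain is None:
            domain = conn.identity.create_domain(name=spec["domain"])
        if conn.identity.find_user(spec["name"], domain_id=domain.id) is None:
            conn.identity.create_user(name=spec["name"], domain_id=domain.id)

sync_users(openstack.connect(cloud="openstack-paris"), desired_users)  # placeholder cloud name
```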

Summary:
I think if you can live without federated users, it would be easier to implement your use case.

Regarding the whole placement question, I would:

  • Create a generic service catalog and let the users select a named placement zone, e.g. “my_awesome_placement_one”
  • At some point ManageIQ will drop you into the Automation Engine (i.e. some Ruby code of your choice) with access to the ManageIQ database and the information about which user requested the VM provisioning
  • Given the username and the information from the ManageIQ database, you should be able to figure out which OpenStack cluster you want the VM to land in (sketch below)
  • Start a sub-workflow in the background doing the actual work and just wait in the main workflow until it is finished
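
The decision step in the third bullet is really just a lookup. In ManageIQ it would live in an Automate method (Ruby) with the VMDB at hand; this Python sketch with a made-up policy table only illustrates the logic:

```python
# Hypothetical placement policy: (tenant, placement zone) -> provider name.
PLACEMENT_POLICY = {
    ("CompanyA", "my_awesome_placement_one"): "openstack-paris",
    ("CompanyB", "my_awesome_placement_one"): "openstack-frankfurt",
}

def choose_provider(tenant: str, placement_zone: str) -> str:
    """Return the OpenStack cluster the requested VM should land in."""
    try:
        return PLACEMENT_POLICY[(tenant, placement_zone)]
    except KeyError:
        raise ValueError(f"no placement rule for {tenant!r}/{placement_zone!r}")

# e.g. a user from CompanyA ordering into that zone lands in Paris
print(choose_provider("CompanyA", "my_awesome_placement_one"))
```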

Good morning Buc,
I read your write-up carefully and it helps a lot. Thank you for that.
I am now open to considering having users managed only on the MiQ side and hiding that granularity from the providers. This scenario simplifies user management but raises a list of security questions:
In a case where I have 50 tenants and 100,000 users scattered over the 50 accounts, on the provider side I’ll see one user (the admin), but do I also see only 1 tenant? I hope this is not the case, otherwise there is no isolation on the provider side. I also hope that tenant creation inside MiQ is replicated over all providers; or shall I script it with the Automation Engine?

Regards
Alireza

BTW, if I have 50 tenants, I’ll have 50 admin users declared on both the MiQ side and the provider side. In many user access management systems, such as SAML, a tenant comes with its admin user. Now I have to find out whether in MiQ there is a correspondence between the notion of a tenant and an admin user.

Sounds like a pretty massive deployment and I think you are going to need a Proof-of-Concept to really verify your requirements…
Also, we only have one Tenant, so I have zero hands-on experience with what I am talking about.

Anyway, I would start by differentiating between your types of “tenants”:

  • In the real world, there are multiple companies, companies have departments, and departments have people in them
  • In OpenStack, there are clusters (i.e. Keystone, which does the authentication in OpenStack), clusters have OpenStack-Tenants (?), OpenStack-Tenants have Projects (i.e. resource groups), and Projects have VMs in them
  • In ManageIQ there is ManageIQ-Tenant-0 (i.e. everything ManageIQ knows about) and multiple levels of child ManageIQ-Tenants (parents usually see resources from children, and children inherit configuration). Within a ManageIQ-Tenant there is an RBAC system, which subdivides the resources in a ManageIQ-Tenant and controls which actions users can perform on the resources they see

Notable with regard to ManageIQ’s Tenants:

  • Child Tenants can see Parent Providers
  • Child Tenants can see Parent Service Catalog Items
  • Child Tenants can define their own Automate code, and it’s possible to select different code branches “automatically” (based on the tenant of the logged-in user)
  • Parent Tenants can see the resources (VMs/Hosts/…) of their children

I think the important part is that Providers are affected by ManageIQ-Tenants. I would create an OpenStack-Tenant (and admin user) and a ManageIQ-Tenant for each company, and add the same OpenStack cluster multiple times, with a different admin user each time.
On the OpenStack side, you would have only one user in each OpenStack-Tenant.

Most likely you want to automate the creation of tenants in ManageIQ and OpenStack, either by creating “Admin Catalog Items” in ManageIQ or with some tool outside of ManageIQ (a sketch follows).
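
A sketch of such an onboarding tool outside ManageIQ, assuming openstacksdk for the Keystone side and the ManageIQ REST API (/api/tenants) via requests. The URLs, credentials, and the exact tenant payload are assumptions to verify against your ManageIQ release:

```python
import openstack
import requests

MIQ_API = "https://manageiq.example.com/api"  # placeholder appliance URL
MIQ_AUTH = ("admin", "secret")                # placeholder credentials

def onboard_company(conn, name, admin_password):
    """Create a per-company Keystone domain + admin user and a matching
    ManageIQ child tenant (sketch, error handling omitted)."""
    # 1. Keystone side: one domain and one admin user per company
    domain = conn.identity.create_domain(name=name)
    admin = conn.identity.create_user(
        name=f"{name}-admin", domain_id=domain.id, password=admin_password
    )
    role = conn.identity.find_role("admin")
    conn.identity.assign_domain_role_to_user(domain, admin, role)

    # 2. ManageIQ side: a child tenant under the root tenant
    #    (payload shape assumed; check your release's API documentation)
    requests.post(
        f"{MIQ_API}/tenants",
        auth=MIQ_AUTH,
        json={"name": name, "parent": {"href": f"{MIQ_API}/tenants/1"}},
    ).raise_for_status()
    return domain, admin
```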

Thank you Buc,

Indeed, we have a relatively large at-scale deployment: 100K users / 100 tenants / 7,000 VMs.
I fully follow your description. In addition, I think MiQ has a Tenant Mapping mechanism that automatically aligns and updates the provider’s tenants with the MiQ tenants on both sides. I need to read more to understand how to implement it.

  • Indeed, there will be one admin user per tenant as seen from OpenStack.
  • End users will only be visible from MiQ.

Now I am exploring how to organize this tenant mapping between the two worlds:

  • On one side (MiQ) we have one Tenant
  • On the other we have several OpenStack clusters, so several “Tenant definitions”

I have to understand how to map one MiQ tenant to several OpenStack Tenants.
In other words, I think we have to clone each OpenStack cluster so that it holds exactly the same set of tenants and admin users, all pointing to one consolidated view in MiQ (sketch below).
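
That cloning step can be made idempotent so it can be re-run as clusters come and go. A sketch, assuming openstacksdk with one clouds.yaml entry per cluster; the cluster and domain names are placeholders:

```python
import openstack

# Placeholder clouds.yaml entries, one per OpenStack cluster
CLUSTERS = ["openstack-paris", "openstack-frankfurt", "openstack-tokyo"]
# The consolidated tenant list, e.g. driven from MiQ
TENANT_DOMAINS = ["CompanyA", "CompanyB"]

def clone_domains_everywhere():
    """Ensure every cluster carries the same set of tenant domains."""
    for cloud in CLUSTERS:
        conn = openstack.connect(cloud=cloud)  # auth comes from clouds.yaml
        for name in TENANT_DOMAINS:
            if conn.identity.find_domain(name) is None:
                conn.identity.create_domain(name=name)

clone_domains_everywhere()
```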

… and I also have to understand the articulation between this tenant hierarchy and the notion of Zones and Regions. Shall I consider a sub-region to be the environment for one tenant? Not clear yet!

This is the mindmap that I created step by step while studying MiQ… where would you place Tenants here?

Tenants are about access control to VMs, Catalog Items, and so on.
I tend to think about MIQ in terms of

  • what’s in the database
  • things acting upon the data in the database
  • the machinery to get the data in the database

From that point of view, tenants act upon the data, while everything on your mindmap is the machinery to get the data.


Zones and Regions are entirely about scaling your MIQ deployment to your environment, e.g. the number of VMs/Providers, working around firewall clearings, …

  • A region is just a database, so unless you run into performance problems with the database or the network, a single region should be fine.
  • Zones and server roles determine which tasks an appliance will pull from the MiqQueue. You probably want different appliances to do different things, and that’s how you achieve that.

Our deployment is about 4,000 VMs, and we have 1 dedicated database appliance, 1 UI appliance, 1 Embedded Ansible appliance, and about 10 generic worker appliances. Everything is in 1 region.