OpenStack provider privileges

Is it reasonable to require admin privileges for OpenStack providers (EMSs) within ManageIQ, and hence visibility of all tenants?

For infrastructure providers, ManageIQ requires admin privileges and visibility of the whole virtual infrastructure. However, for cloud providers, this requirement may be unacceptable. There are various aspects of the ManageIQ design and implementation that will need to be re-examined if we cannot assume admin privileges for cloud providers.


I agree. We have run into this specific issue with AWS and need to be able to provide a subset of privileges that allows MIQ to do what it needs without granting too much access.

An interesting note on Openstack Provider privileges is that the out-of-the-box admin user does not appear to have privileges to provision instances in tenants other than the admin tenant.

For instance, if a new tenant (“Tenant A”) is created after setting up Openstack, the admin user will not be able to provision to “Tenant A” until the admin user is associated with “Tenant A”. Note that this contradicts what I’ve read in the Openstack documentation.

As a follow up, I’ve had a conversation with Russel B. from the Nova team and he believes that what I’m seeing is correct: an admin user would not be able to launch an instance in another tenant.

However, there may be some possibility to investigate keystone “trusts”, which may allow an admin user to act on behalf of a tenant. I’m not entirely sure what this means yet, but I can investigate it and find out if it’s a feasible option for us in terms of admin user access to other tenants.

Another option for provisioning that I’m looking into right now is to try adding the admin user to all of the tenants. This should allow the admin user to authenticate to each tenant and provision in those tenants.

We may want to use this approach to limit the visibility of the user associated with ManageIQ access to Openstack. That’s something else we can explore.

Even if the admin user always had access to all the tenants, we can’t assume the OpenStack administrator would want to make the admin credentials available for use by ManageIQ (MIQ). It’s also not valid to assume that all instances of MIQ, referencing the same OpenStack cloud, should see all the tenants. There will be instances where tenants (or groups of tenants) will want to manage their portion of the cloud with their own MIQ instances.

The scope of what MIQ can see is defined by the users (and respective roles) provided for use to the OpenStack EMSs within MIQ. All of that is defined externally to MIQ, so it’s up to the cloud user/administrator to define what MIQ should see.

Given the above, it seems all we need to do is manage what we can see from each EMS (this can be accomplished through the use of the “*_accessible_tenants” methods in the OpenstackHandle). Each EMS will see one or more tenants (it could be all the tenants, but it doesn’t have to be). This gives administrators the freedom to define the visibility they desire solely in the cloud environment, ensuring MIQ could never access anything they don’t want it to.
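To illustrate the idea, here is a minimal sketch (all names are hypothetical, not actual MIQ code) of how each EMS’s visibility would be derived purely from the cloud-side tenant associations of the user it authenticates with:

```ruby
# Hypothetical data: which tenants each EMS service user was granted
# access to on the OpenStack side. MIQ never defines this mapping;
# it only consumes whatever visibility the cloud administrator set up.
EMS_USER_TENANTS = {
  "miq_svc_a" => %w[tenant1 tenant2],
  "miq_svc_b" => %w[tenant3],
}.freeze

# An EMS can see exactly the tenants its configured user can access.
def accessible_tenants(ems_user)
  EMS_USER_TENANTS.fetch(ems_user, [])
end
```

An EMS configured with miq_svc_a would then see tenant1 and tenant2 and nothing else, with no extra filtering logic needed on the MIQ side.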

The only technical problem I see is the fact that we may want to configure more than one EMS to point to the same cloud (same IP different user). Currently, I think EMSs are identified by IP only, so in order to support this, we may need to qualify by user as well.
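A sketch of that qualification, assuming EMS identity becomes the (address, user) pair rather than the address alone (names are illustrative, not the actual MIQ schema):

```ruby
# Identify an EMS by both its endpoint address and the user it
# authenticates with, so two providers can share one cloud endpoint.
def ems_key(address, userid)
  [address, userid]
end
```

Two EMSs pointing at the same IP with different users would then yield distinct keys, while re-adding the same IP/user combination would still be detected as a duplicate.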

It’s fair to assume that the specific “admin” user is not the one that Openstack Administrators want to share with ManageIQ. In fact, it probably makes more sense to use a specific username for ManageIQ. To take it a step further, it probably makes most sense to have a specific username for each ManageIQ region (zone?) that wants individualized access to an Openstack environment.

That way, the Openstack Administrators could limit the access to specific tenants.

And this would allow the *_accessible_tenants logic to function correctly, and it would allow those OpenStack accounts to provision into the tenants where they have access.

As it stands today, when calling *_accessible_tenants on the OpenstackHandle with the admin user’s credentials, the only tenant returned is the admin tenant.

Yes, but that’s just because that’s the way it’s configured in that environment. In our Grizzly environment admin can see all the tenants.

I think we need to define how our test/dev environments need to be configured. We need to define multiple users with access to different tenants. As it stands now, we only have the admin user, so it’s hard to test this stuff. We can also use tenant visibility to isolate test from dev in the same environment, minimizing the need to change the spec tests when re-recording the VCR cassettes.

Just to be clear here. There’s a difference between an admin being able to see a tenant and an admin being able to access a tenant.

Admins can list all tenants. However, admins can only authenticate against tenants that they are associated to.
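The distinction can be sketched like this (illustrative data only, not Keystone API calls):

```ruby
# All tenants in the cloud; an admin can *list* every one of them.
ALL_TENANTS = %w[admin tenant_a tenant_b].freeze

# But a user can only *authenticate* (scope a token) against tenants
# it is explicitly associated with.
TENANT_MEMBERSHIPS = { "admin" => %w[admin] }.freeze

def listable_tenants(_admin_user)
  ALL_TENANTS
end

def authenticatable_tenants(user)
  TENANT_MEMBERSHIPS.fetch(user, [])
end
```

In this example the admin user can list three tenants but can only obtain a scoped token for the admin tenant.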

Right, I should have used “access” instead of “see” in my last post.
That’s why I added the accessor_for_accessible_tenants() method (and related methods). They only return the tenants the user can actually access.

  • accessible_tenants() - will return a list of tenants the user can access.
  • accessible_tenants_names() - will return only the names of the user’s accessible tenants.

I don’t think it’s a problem the admin (or any user for that matter) can only access the tenants they’re associated with. That’s the mechanism through which tenant access is granted. On the MIQ side, we just need to use the visibility we’re given through the EMS user.

I think this is primarily a documentation issue. If you can’t access the expected tenants from a given EMS, it means the OpenStack user associated with the EMS isn’t configured properly. In addition to documenting this, we need to be able to test and verify the behavior. In order to better accomplish this, I think our test/dev environments need to be configured to have more variation in regard to tenants and users.

To start the discussion, I’m going to take an initial swag at defining what the environment should look like. Once we’re satisfied with the definition, we can discuss any issues surrounding its implementation.

Our current environments have too few tenants, and we tend to use the admin tenant by default. I don’t think we should ever use the admin tenant for testing or development, except for rare cases (ytbd) where it’s absolutely necessary. Instead, I think we should use tenants that are defined for specific purposes.

  • Multiple test tenants for use by the spec tests.
  • One or more dev tenants for use by developers for experimentation and discovery purposes.
  • One or more demo tenants, used to host demonstration environments as needed.

Isolating sub-environments by tenant should eliminate cross-environment impact. For example, changes made to the dev and demo environments should have no impact on the spec tests, which will only use the test environments.

Users define the view into the environments, since they determine which tenants can be accessed, and by whom. As with the admin tenant, we tend to use the admin user by default as well. Here too, I think direct use of the admin user should be rare. Instead, each sub-environment should have users defined specifically to access the environment in question. For example, test_admin would be a user with admin privileges that only has access to the test tenants.

Test Environment

In order to test and verify tenant-specific behavior, I think our test environments should have a minimum of two users and four tenants. For example:

User testa_admin will have access to testa_tenant1 and testa_tenant2
User testb_admin will have access to testb_tenant1 and testb_tenant2

To test overlapping views, we may also define:

Define a user test_admin that can access testa_tenant1, testa_tenant2, testb_tenant1 and testb_tenant2, and/or define a testab_tenant that can be accessed by both the testa_admin and testb_admin users.

We may also want to define associated non-admin users: testa_user1, testb_user1, etc.

Dev and Demo Environments

These environments don’t need to be as strictly defined as the test environment, but they should be defined such that they are isolated from the test environment, and from each other.
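One way to sanity-check such a layout is to assert that no tenant belongs to more than one sub-environment. A minimal sketch (the tenant names are examples, not a proposal):

```ruby
# Example tenant layout; each sub-environment owns disjoint tenants.
ENVIRONMENT_TENANTS = {
  :test => %w[testa_tenant1 testa_tenant2 testb_tenant1 testb_tenant2],
  :dev  => %w[dev_tenant1],
  :demo => %w[demo_tenant1],
}.freeze

# True when no tenant appears in more than one sub-environment.
def isolated?(env_map)
  tenants = env_map.values.flatten
  tenants.uniq.length == tenants.length
end
```

A check like this could run in the environment builder, so accidental overlap between test, dev and demo tenants is caught before it pollutes the VCR cassettes.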

I like this proposal a lot!

I really like the idea of having test environments that will help to create a somewhat consistent environment for recording our VCR cassettes for testing. And, the idea of overlapping test environments to test out possible edge cases.

Also, having an environment where developers can mess around with some experimental features is fantastic.

Thanks for writing this up!

@rpo Completely agree. I think we should start off by changing the environment builder to create a tenant; then everything else should be created within that tenant. Let’s try to keep with the existing naming prefix of “EmsRefreshSpec”, since that is a lot clearer about the use case of those objects. I feel like using “test” or “testa” is too generic a prefix, and people may not realize that the tenant / user / etc. is important for the EmsRefresh specs.

As far as naming goes, yes I agree.

My initial goal is, as you describe, to create an environment builder that will duplicate the test environment we currently have under a test-specific tenant. Then, we can add additional test tenants and users as we expand the spec tests.

@blomquisg , during my investigation into setting up test environments, as described above, I uncovered a few issues that led me to believe requiring admin privileges for EMSs may be a problem. Basically, any user with admin privileges can operate on entities that are outside of the user’s accessible tenants. For example, I set up a user MiqRefreshTestAdmin, gave it the admin role, and assigned it access only to the MiqEmsRefreshTestProj tenant. As the MiqRefreshTestAdmin user, I was then able to terminate instances instantiated by other users in other tenants, and delete volumes in other tenants.

So, if we require EMS users to have the admin role, it seems we can’t rely on the tenant abstraction to provide user isolation in the cloud. This shortcoming has been addressed in newer versions of OpenStack by the introduction of domains (thanks @bdunne for pointing this out), where the scope of administration is isolated to the user’s domain.

Even with the advent of OpenStack domains, I think it would be beneficial to adhere to a least privilege design philosophy. Requiring EMS users to have only the minimum privilege needed to perform the desired operations within a given scope, will afford our users the greatest flexibility in managing their environments with ManageIQ. For example: ManageIQ could then be used to manage the entire cloud, a domain within the cloud, even portions of a domain - or any combination thereof.

Most, if not all, of the OpenStack operations performed by ManageIQ do not require admin privileges. Currently, I don’t believe inventory (EMS refresh) or provisioning requires admin privileges per se. We may require admin privileges in some cases; for example, Ceilometer might require them, or we might need them if we ever implement CRUD for higher-level cloud constructs (tenants, domains, users, etc.). But we shouldn’t require such privileges for all use cases.


I would definitely prefer to not require admin privileges for the Openstack credentials used to setup the provider. I think I was simply being cautious. If we can get away with non-admin user and validate that it works that way, I’m 100% on board with that!

I’ll try prototyping this to see if there are any unforeseen problems. I can think of a few, but I don’t think they’re insurmountable:

  • Getting a list of tenants a non-admin user can access may not be possible using the scoped identity service returned by Fog. However, I’m reasonably sure I can accomplish this by using an un-scoped token to make a raw request.

  • We probably should shy away from using :all_tenants => true because, if the user happens to have admin privileges, it will return things outside the scope of the user’s tenant accessibility.

  • Related to the above, quota collection will probably need to change to iterate through accessible tenants.

  • If possible, the OpenstackHandle should be changed to re-authenticate using an un-scoped token. This should incur less overhead than using user/password, making tenant iteration more efficient.

I’ll post my findings when I’m done.

Well, I’ve tested a number of scenarios and have uncovered a few things - some anticipated, some not.

  • As expected, a non-admin user cannot retrieve a list of tenants from a scoped Identity service. So, even if a user has access to multiple tenants, a list of those tenants cannot be retrieved from an Identity service that’s scoped to a specific tenant. Because of this, our current OpenStack ems refresh code fails completely when configured with a non-admin user.

    This is essentially a Fog issue, because Fog always returns services that are scoped to a tenant. The good news is, there is a workaround for this. The scoped Identity service returned by Fog contains an unscoped_token attribute. The value passed back through this attribute can be used to instantiate services scoped to other tenants, without having to re-authenticate with a username and password.

    Unfortunately, it seems there is no way to use this token to obtain an unscoped service, or to force Fog to use it for a single request. Luckily, Fog exposes its low-level request mechanism, so with a little work, we can solve the problem. I’ve added a visible_tenants method to our IdentityDelegate class. Said method is implemented as follows:

    def visible_tenants
      # Bypass Fog's scoped services: issue a raw Keystone request
      # authenticated with the unscoped token.
      response = Fog::Connection.new(
        "http://#{@os_handle.address}:#{@os_handle.port}/v2.0/tenants", false, {}
      ).request(
        :expects => [200, 204],
        :headers => {'Content-Type' => 'application/json',
                     'Accept'       => 'application/json',
                     'X-Auth-Token' => unscoped_token},
        :method  => 'GET'
      )
      body = Fog::JSON.decode(response.body)
      vtenants = body['tenants'] # tenant records visible to this token
      vtenants
    end

    Replacing calls to tenants() with visible_tenants() solves the problem.

  • Currently, the OpenstackHandle code switches tenant scope by re-authenticating using username/password. During my investigation, I’ve verified that using the unscoped_token eliminates the need for this re-authentication, and using it may be more efficient. However, I’m not sure if doing so will circumvent Fog’s re-authenticate after token expiration logic, so we may want to defer this change until we know more.

  • The current quota collection code relies on the user having admin privileges. To fix this, I’ve implemented quotas_for_accessible_tenants methods in the: ComputeDelegate, NetworkDelegate and VolumeDelegate classes.

  • I’ve been unsuccessful retrieving information from Swift as a non-admin user, and it fails in an unexpected (at least to me) way. We can instantiate a Storage service, we can retrieve a containers collection object from the service, but if we try to access the collection, an Excon::Errors::Forbidden exception is raised.

    I’m pretty sure this is a configuration issue on the OpenStack side, since the same user can’t access storage information through Horizon either. While I think the solution is to set account-level ACLs granting the user access, my attempts to do so have been unsuccessful. See:
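The quota change described above (iterating the accessible tenants instead of making one admin-wide call) can be sketched as follows. The per-tenant request is abstracted as a block here, since the real delegate methods talk to the Fog services:

```ruby
# Collect quotas tenant by tenant. The block stands in for the
# per-tenant service call made by the Compute/Network/Volume
# delegates; tagging each result with its tenant_id keeps the
# merged list unambiguous.
def quotas_for_accessible_tenants(tenant_ids)
  tenant_ids.map do |tid|
    yield(tid).merge("tenant_id" => tid)
  end
end
```

With this shape, the same loop works whether the EMS user can access one tenant or fifty, and no admin-only API is involved.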

My next steps will be to see if I can update the OpenStack ems refresh, so it will work as expected with a non-admin user. I’m assuming the Swift issue is resolvable, and we just have to find the proper way to set the ACLs. I’ve modified the OpenstackHandle to account for this failure. Instead of raising an Excon::Errors::Forbidden exception at an inopportune time, the Storage service is flagged as being unavailable.
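The flagging behavior can be sketched like this; ForbiddenError stands in for Excon::Errors::Forbidden so the example is self-contained:

```ruby
# Stand-in for Excon::Errors::Forbidden (raised when the user lacks
# access to a service, e.g. Swift without the proper ACLs).
class ForbiddenError < StandardError; end

# Probe a service once up front; if access is forbidden, flag the
# service as unavailable instead of letting the error surface at an
# inopportune time later in the refresh.
def probe_service
  yield
rescue ForbiddenError
  :unavailable
end
```

A caller would then check for :unavailable before attempting to enumerate Swift containers, rather than rescuing mid-refresh.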

So, to sum it up, I think this is an achievable goal. Once the Swift problem is resolved I see no other obstacles in the way. Once implemented, our tests and test environments can be updated accordingly.

Has there been a possible resolution to this? I have a situation where I can use admin-role accounts for tenants, but don’t have the admin account as an option, and would like to have visibility of my project within the OpenStack provider in ManageIQ. Thanks.

Most of the issues mentioned in my previous post have been addressed in the code base. So, you should be able to add an OpenStack provider based on a non-admin user, and it should work with a couple of caveats:

  1. The user in question must be configured on the back-end (OpenStack environment) to have visibility into all the tenants and storage you want MIQ to be able to see. This includes Swift ACLs, as mentioned above. I must add, I have not been able to verify Swift access, because I was unable to determine how to set the proper ACLs.

  2. Currently, MIQ requires provider IP addresses and host names to be unique. So, you won’t be able to add multiple OpenStack providers for the same OpenStack environment, even if each is based on a different user.

I feel that restriction should be removed, because I think having multiple MIQ providers, each seeing a portion of an OpenStack environment, is a valid and useful capability. However, we still need to enable and test this, especially the edge cases surrounding inventory overlap.

I hope this helps.