The MIQ Provider Team's Focus for the "F" Release (and beyond)


Hi all,

On the MIQ Providers Team, we have been noticing an exciting, yet daunting, trend lately: people want more providers, and they want them soon!

We have also realized that our little core providers team on MIQ is simply not big enough. We’re not big enough to develop the providers ourselves, and we’re not even big enough to manage the development of the providers by community contributions.

So, what to do?

Some may already know that several folks on the team have been working on changes that enable our concept of “Pluggable Providers”. This is the idea that we provide a clear Provider Integration Platform that contributors can follow to success.

The question comes up often: How close are we to this goal?

And, unfortunately, the answer hasn’t changed much over the last two years: We’re very far away from this goal.

However, now that the Euwe release is entering its stabilization phase, it’s time to reinvigorate the call to action! Platform-ify all the things!!

Here are the ideas that we’re putting together to make this happen:

Extract All Providers from the Core Codebase


Allow for separate work streams across various teams, which will free up community development teams to move at a pace that makes sense to them.

What does it mean?

Some might have seen that we have already extracted the Amazon Provider to its own GitHub repository. This didn’t come without its nasty challenges, but the work is largely done, and @urandom is confident that we can start moving forward on the other providers immediately.

The high level plan is to extract the providers in the following order (still a rough estimate, but it gives the basic picture):

  1. First Phase
     • RHEV
     • Middleware
     • Containers
     • OpenStack
  2. Second Phase
     • Google
     • Nuage
     • Azure
     • VMware
  3. Third Phase
     • Ansible
     • Foreman

Since we’ve only extracted one provider, and it was the guinea pig, we don’t yet know how long a “phase” will take. But, after the first phase is complete, we’ll be in a better position to estimate the remaining phases.

My hope is that we can do all of the phases in the “F” release.

APIs for All the Things


Ease the creation of new providers

What does it mean?

Instead of API, this might end up looking more like an SDK. But, at this point, the differences are probably negligible.

Specifically, the MIQ Providers team wants to create APIs for inventory collection, events collection, and metrics collection. My utopia here is to see the definition of how to collect these data separate from the definition of what data to collect.
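As a rough illustration of that separation, here is a minimal sketch in plain Ruby. The class names (`InventoryCollector`, `ExampleCloudCollector`) and method names are hypothetical, not part of the current MIQ codebase; the point is just that a provider declares *what* to collect, while the shared base class owns *how* the collection loop runs.

```ruby
# "How": shared collection mechanics live in a base class.
class InventoryCollector
  # "What": a declarative list of collections, defined per provider.
  def self.collects(*names)
    @collections = names
  end

  def self.collections
    @collections || []
  end

  # Iterate the declared collections, delegating raw fetching
  # to provider-specific fetch_<name> methods.
  def collect_all
    self.class.collections.each_with_object({}) do |name, inventory|
      inventory[name] = send("fetch_#{name}")
    end
  end
end

# A provider only declares its data and implements the raw fetchers.
class ExampleCloudCollector < InventoryCollector
  collects :vms, :networks

  def fetch_vms
    [{ :name => "vm1" }]   # stand-in for a real API call
  end

  def fetch_networks
    [{ :name => "net1" }]
  end
end
```

With this shape, the platform can evolve the collection loop (batching, error handling, throttling) without touching any provider's data definitions.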

Provider Scaffold Generator


Take the first step for developers creating their first new provider integration.

What does it mean?

New provider integration developers should be able to execute a simple command on the command line and end up with a complete structure for adding a new provider. This should include the gemspec, boilerplate code for various integration points, customized UI, and hints at how to write tests.
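To make that concrete, here is a hedged sketch of the kind of file layout such a generator could emit for a hypothetical provider named "acme". The gem name, directory structure, and file list are assumptions for illustration only, not the actual generator output.

```ruby
require "pathname"

# Returns the list of files a scaffold generator might create for a new
# provider gem. Layout is illustrative, loosely modeled on a Rails engine.
def provider_scaffold(name)
  root = Pathname.new("manageiq-providers-#{name}")
  [
    root.join("manageiq-providers-#{name}.gemspec"),  # gem packaging
    root.join("app/models/manageiq/providers/#{name}/cloud_manager.rb"),
    root.join("app/models/manageiq/providers/#{name}/cloud_manager/refresher.rb"),
    root.join("app/models/manageiq/providers/#{name}/cloud_manager/event_catcher.rb"),
    root.join("spec/models/manageiq/providers/#{name}/cloud_manager_spec.rb"),
  ].map(&:to_s)
end

provider_scaffold("acme").each { |f| puts f }
```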

Refresh Strategies


Make it easier for developers to know what refresh strategies are available and how to implement one that makes sense for their provider integration(s).

What does it mean?

There are currently two different refresh strategies: full refresh and targeted refresh. And, there are rumblings of adding more: skeletal and/or delta-based.

MIQ should provide a catalog of various refresh strategies with clear descriptions, pros/cons, and implementation hints.

Some of this catalog might be presented directly in the code, and some of it might be in developer documentation. The important thing is that it’s clear to developers.
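A minimal sketch of what an in-code portion of that catalog could look like, assuming a simple hash-based registry (the strategy names come from the post above; the descriptions and pros/cons wording are illustrative):

```ruby
# Queryable catalog of refresh strategies with trade-offs spelled out.
REFRESH_STRATEGIES = {
  :full => {
    :description => "Re-collect the entire provider inventory",
    :pros        => ["simple", "always consistent"],
    :cons        => ["slow on large providers"]
  },
  :targeted => {
    :description => "Refresh only the objects named in the triggering event",
    :pros        => ["fast"],
    :cons        => ["requires reliable event data"]
  }
}.freeze

# One-line summary for a strategy; raises on unknown names so typos fail fast.
def describe_strategy(name)
  entry = REFRESH_STRATEGIES.fetch(name) do
    raise ArgumentError, "unknown refresh strategy: #{name}"
  end
  "#{name}: #{entry[:description]}"
end
```

Skeletal and delta-based strategies, if added, would just become new entries here, with their implementation hints linked from the catalog.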

Supported Feature Registry


Remove provider-specific features from the Supported Feature list while allowing providers to register features with a Supported Feature Registry.

What does it mean?

This came up in a PR review where someone noticed that some provider-specific features crept into the list of queryable supported features:

We would rather have core features contained in the Supported Feature Registry. Then, if there are provider-specific features that enable functionality in the application, those should be registered in the registry by the provider.
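A toy sketch of that registration flow, assuming a plain-Ruby registry (class and method names are hypothetical): core features are seeded at boot, and each provider gem registers only its own extras.

```ruby
# Registry of supported features: core features seeded up front,
# provider-specific ones registered by the provider gems themselves.
class SupportedFeatureRegistry
  def initialize(core_features)
    @features = core_features.to_h { |f| [f, :core] }
  end

  # A provider gem calls this at load time for its extra features.
  def register(feature, provider:)
    @features[feature] = provider
  end

  def supports?(feature)
    @features.key?(feature)
  end

  # Lets us audit what crept in from outside core.
  def provider_specific
    @features.reject { |_, owner| owner == :core }.keys
  end
end
```

The payoff is that the core list stays clean, and provider-specific features are both queryable and attributable to the gem that added them.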


There is a list of benefits to implementing these features; most notably, they enable the community to write their own integrations, which allows us to scale. In addition, large disruptive changes like these give us the opportunity to address some issues that are prevalent in the ManageIQ provider codebase today; by creating an SDK we can simplify and de-duplicate the code at the provider level.

Both of these issues can easily creep into the provider gems too. Although enabling separate work streams among the community allows for rapid independent progress, it might lead to code duplication across different provider gems, largely because it could feel natural for engineers to work within the scope of their own gem. @blomquisg mentioned the possibility of establishing a provider certification strategy, which I think is a great way to somewhat control this. I have been trying to think of tools that can help; my list is a little short, but that’s where the community comes in :slight_smile:

Use a tool to track duplicate code
Code Climate uses the flay gem under the covers to identify duplicate code.
Pro: Given that we already use Code Climate, it seems like it would make sense to use it as part of the certification process.
Con: Does it track code across different repos?

Generate SDK documentation
Ruby comments can be used to generate documentation. Examples: YARD, RDoc (I have not vetted these). It might be easier for an engineer to search for an existing method in documentation, rather than in Ruby code, before writing a new one.
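For a flavor of what that looks like, here is a tiny example of the comment style YARD can turn into searchable docs. The `@param`/`@return` tags are standard YARD syntax; the method itself is a made-up illustration, not from the MIQ codebase.

```ruby
# Returns the worker queue name for a provider's event catcher.
#
# @param ems_id [Integer] id of the ExtManagementSystem record
# @return [String] the queue name used by the event catcher worker
def event_catcher_queue_name(ems_id)
  "ems_#{ems_id}_event_catcher"
end
```

Running `yard doc` over a gem with comments like this produces browsable, searchable HTML for every documented method.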

@Fryguy - Any thoughts on tools that might help with code quality?

@abellotti - I imagine the REST API will be extended to support integrating with new providers. Can you point to any REST API documentation that might help us get the lay of the land? Have you any high-level thoughts to share at this early stage?

Comments and suggestions welcome!


I’m not quite sure of the request here: integrating a new kind of provider vs. using one already added. Are we adding new types of provider via the UI? If so, then yes, we need to expose this via the REST API; otherwise, the SDK mentioned above would be more appropriate. Once the provider type is added, using one as such would fall into REST API land, where we currently provide CRUD operations including the “refresh” action as shown in

Note that the REST API for providers is model driven w.r.t. what types of providers are allowed to be created, the authentication types/attrs allowed, etc. As the provider landscape is extended, we need to ensure that we continue to be model driven, and if we introduce new actions/capabilities, we need to leverage all the great work being done today, like supports_feature. If we choose to expose certain actions to only certain types of providers (I know vm_import is coming up soon), we’d need to enhance the REST API accordingly.


@abellotti Bronagh is talking more about a developer creating a new provider integration.

So, with the MIQ API, if the provider type already exists, then my understanding is that “it just works” because of how the API is model driven.

If the provider type is new, then there would likely need to be new API POST endpoints added to allow API users to create resources for the new provider type. However, when adding a new provider type, there’s going to be a lot of things being added to the core MIQ repo outside the MIQ API.

@abellotti do you have a general guideline for when it’s necessary to add new API endpoints to the MIQ API? For instance, should developers need to add new endpoints when they need new tables? Or, new fields in a table? Or, some other criteria?



We only need to expose new endpoints if the new table and related resources are not otherwise accessible via reflections or virtual attributes. Also, we need to expose the new endpoint, either as a primary /api/:new_endpoint or via a subcollection /api/:provider/:id/:new_endpoint, if there are actions related to those resources. This allows us to declare the necessary role identifiers for those actions in api.yml.

Otherwise, you can simply access the new resources via reflections/virtual attributes, i.e. GET /api/providers/:id?attributes=:new_reflection
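For anyone scripting against this, here is a small sketch of composing that reflection/virtual-attribute query from Ruby using only the stdlib `URI` module. The host and the attribute name are placeholders.

```ruby
require "uri"

# Builds a providers URL, optionally appending ?attributes=... so new
# reflections/virtual attributes can be fetched without a new endpoint.
def provider_url(base, id, *attributes)
  uri = URI.join(base, "/api/providers/#{id}")
  unless attributes.empty?
    uri.query = URI.encode_www_form(:attributes => attributes.join(","))
  end
  uri.to_s
end

puts provider_url("https://miq.example.com", 42, "new_reflection")
```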


Folks, we’re starting to tackle some of the questions related to “how do we get from here to there” regarding these top five priorities for the providers team.

This week we’re going to start talking about APIs for All the Things. Here’s a list of things I want to discuss and see if we can work towards some clear paths forward.

Provider Integration APIs:

  • Events
    • Decent idea of how to do the event monitor / event catcher API … most of it’s already written when looking at the methods that throw NotImplementedError in BaseManager::EventCatcher::Runner.
    • Can we make event handling any easier?
  • Metrics
    • Is it possible to piggyback on the Time Series Database work to define the APIs here?
  • Refresh
    • I think that zero progress has been made here.
    • What are the first steps?
  • Operations
    • Operations define the things that MIQ can do to provider level objects (e.g., start a vm, stop a vm)
    • Can we DSL this away?
    • Discuss the origin of raw_<operation> and recent talks about not needing it
    • with_provider_object vs. run_command_via_parent
      • I suspect that run_command_via_parent is simply legacy, but I want to be sure about that
      • If run_command_via_parent is legacy, then can we just refactor the existing code using it and turn it into with_provider_object?
    • Can the Operations support be folded back into SupportedFeatureMixin somehow? Maybe a SupportedOperationsMixin?
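For the events item above, a simplified sketch of the pattern being described: a base runner owns the loop and raises NotImplementedError at the provider hooks, and a provider subclass fills them in. Class and method names here are simplified stand-ins, not the actual BaseManager::EventCatcher::Runner API.

```ruby
# Base class owns the monitoring loop; providers supply the hooks.
class EventCatcherRunner
  def monitor_events
    collected = []
    each_batch do |events|
      collected.concat(events.map { |e| queue_event(e) })
    end
    collected
  end

  # --- provider-specific hooks ---
  def each_batch
    raise NotImplementedError, "must be implemented in a subclass"
  end

  def queue_event(event)
    raise NotImplementedError, "must be implemented in a subclass"
  end
end

# A provider implements only the two hooks.
class ExampleEventCatcher < EventCatcherRunner
  def each_batch
    yield [{ :type => "VM_STARTED" }]   # stand-in for a provider event stream
  end

  def queue_event(event)
    "queued #{event[:type]}"
  end
end
```

Making event handling "easier" would largely mean shrinking the number of hooks a provider must implement, or generating them from a declarative event mapping.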


A GitHub issue about the provider operations API