Newbie - Hoping to be pointed in the right direction

I am hoping someone here can point me in the right direction. First, I don’t have much experience with Ruby, so please excuse my ignorance.

For background, I use AWX to handle end-to-end deployment of a VM using VMware, plus all the infrastructure plugins a VM needs. I am looking for a better way of handling a self-service catalog for end users, and thus here I am. I am also running ManageIQ jansa-2.20201027185742_b8d5deb.

I am ultimately looking to use ManageIQ to have the user select a catalog item --> go through any approvals --> execute via the Tower provider --> AWX builds the VM --> return data about the VM to the user portal. (One of the questions in the survey asks for the quantity of VMs the user wants, for example 4 database servers.) In the end I would like the user to see, under “My Services”, the VM name + IP of all servers for that request. Eventually I would also like to do standard retirement and VM modification, and display chargeback for those servers.

I have been searching online, watching videos, and reading through documentation, but I am not sure whether what I am looking to do is possible. I can execute my AWX workflow job, but I have been unable to find a way to pull any return data back into ManageIQ. Every site I have reviewed shows how to execute the job and watch it go through the build process, but I haven’t been able to find what the user sees once it has been built.

  1. I believe that what I am trying to do is possible, but I could use confirmation before going further down the rabbit hole.
  2. Is it possible to update My Services through automation to show the VM name or names, to help the user identify which server or application is which?
  3. I am confused by one of the documents below, “ansible playbook in a state machine”. The article leads me to believe that Ansible Tower should work, but I have not been able to get set_stats working at the workflow level. I am wondering if I need to create a job template that contains the manageiq_automate task, but I have not yet found how the workspace definition relates to the job once it is executed.
  4. Are there any Git examples or other videos that might assist with my journey?

I am currently reviewing these documents:

Thank you for your help!

A lot of the things that you’re looking to do are much easier using embedded Ansible in the appliance rather than an external AWX server. For example, passing data between playbooks in a state machine is only available using embedded Ansible playbook methods, and adding new VMs to a service is much easier this way as well.

You can, however, use the automatic extra_vars that MIQ sends over to AWX when it runs a job template to connect back to the ManageIQ API. For example:

  manageiq:
    X_MIQ_Group: EvmGroup-super_administrator
    api_token: 898ababb8ba32ddf66e009744725a7c6
    api_url: 'https://192.168.1.x'
    group: groups/2
    request: requests/40
    request_task: requests/40/request_tasks/53
    service: services/25
    user: users/1
  manageiq_connection:
    X_MIQ_Group: EvmGroup-super_administrator
    token: 898ababb8ba32ddf66e009744725a7c6
    url: 'https://192.168.1.x'

What you’d need to do from the playbook is make an API call to MIQ to look up the ID of each newly provisioned VM (probably a filter[]=name= lookup), then add that VM to the service that triggered the workflow. The service’s href is in the manageiq.service extra_var.
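A rough sketch of those two calls from a playbook might look like this (assuming the new VM’s name is in a `new_vm_name` variable, and that you’re happy skipping certificate validation against a lab appliance; adjust both to suit your environment):

```yaml
- name: Look up the newly provisioned VM in ManageIQ by name
  uri:
    url: "{{ manageiq.api_url }}/api/vms?filter[]=name={{ new_vm_name }}&expand=resources"
    method: GET
    headers:
      X-Auth-Token: "{{ manageiq.api_token }}"
    validate_certs: false
  register: vm_lookup

- name: Add the VM to the service that triggered the workflow
  uri:
    url: "{{ manageiq.api_url }}/api/{{ manageiq.service }}"
    method: POST
    headers:
      X-Auth-Token: "{{ manageiq.api_token }}"
    body_format: json
    body:
      action: add_resource
      resource:
        resource:
          href: "{{ vm_lookup.json.resources[0].href }}"
    validate_certs: false
```

The lookup only returns a result once MIQ has refreshed the provider and knows about the VM, so you may need to retry it until `vm_lookup.json.resources` is non-empty.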

There’s an example of something similar here:

You’ll have to modify it though; for example, that playbook hard-codes the MIQ VMware provider ID to “21000000000002” to force a refresh so that MIQ sees the newly provisioned VMs. Refreshes are triggered pretty quickly anyway, so you may not need that, but if you decide to keep it then you’ll need to at least pass your actual provider ID as an extra_var (or hard-code your own provider ID).
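If you do keep the refresh, it’s a single API action; a minimal sketch, assuming you pass your provider’s numeric ID in a `provider_id` extra_var rather than hard-coding it:

```yaml
- name: Trigger a provider refresh so MIQ sees the newly provisioned VMs
  uri:
    url: "{{ manageiq.api_url }}/api/providers/{{ provider_id }}"
    method: POST
    headers:
      X-Auth-Token: "{{ manageiq.api_token }}"
    body_format: json
    body:
      action: refresh
    validate_certs: false
```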

You could probably use the syncrou.manageiq-vmdb role to simplify the API calls; see the example at the end of this article: (the role is manageiq-core.manageiq-vmdb for embedded Ansible, but the syncrou version is on Galaxy)

Hope this helps,

Thank you for the detailed information and the suggested repositories/documents to review. It sounds like I may have to use the roles externally to perform some of the work I am missing. The Git repo is helpful in understanding this process.

I thought about moving everything into ManageIQ, but I have an extensive presence in AWX, and the infrastructure support teams like the workflow GUI for managing the roles they use. The Ansible workflow passes set_stats between each job template, and at the end a final email is sent with the VM names + IPs. I was curious whether I was missing something that would allow the set_stats output to be registered via set_state_var, so that we can switch from email to something the user can actually see, and also have a framework for the lifecycle.
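For reference, the workflow-level hand-off in our job templates uses tasks along these lines (the `vm_name`/`vm_ip` variable names are just placeholders from our playbooks):

```yaml
- name: Publish the built VM details to the rest of the workflow
  set_stats:
    data:
      provisioned_vm_names: "{{ provisioned_vm_names | default([]) + [vm_name] }}"
      provisioned_vm_ips: "{{ provisioned_vm_ips | default([]) + [vm_ip] }}"
```

Each downstream workflow node then sees the accumulated lists as extra_vars, and the final node emails them out.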

I now have the VM that AWX creates being added into the ManageIQ service, using the suggested API call / vmdb role. This part works as expected, but when my AWX workflow completes, the ManageIQ job fails. I found that the timeout caused an issue, and I adjusted it to be higher. I also noticed that when my AWX workflow template called another workflow template within the same workflow, ManageIQ reported a failure even though neither had failed. If I remove the call to the other workflow, the failure does not occur and everything works as expected.

This failure had me wondering whether it is possible for the check_provisioned job to look for a value to conclude it is done. My understanding is that if I am using a state machine and I am not using embedded Ansible, then the manageiq_automate role does not work. Instead I get this:

manageiq:
  X_MIQ_Group: Read.Only
  api_token: cf937d9c8f778d1706
  api_url: 'https://'
  group: groups/21
  request: requests/169
  request_task: requests/169/request_tasks/168
  service: services/167
  user: users/9
manageiq_connection:
  X_MIQ_Group: Read.Only
  token: cf937d9c8f778
  url: 'https'

Since it doesn’t contain the automate workspace ID, set_state doesn’t work. Would using an API call work instead? I wonder if creating an option and changing its value could be used.
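What I have in mind is something like the following playbook task (assuming the API’s edit action behaves the same on a request task as it does on a request, which I still need to verify; the `build_status` option key is one I am creating myself):

```yaml
- name: Set a custom build_status option on the MIQ request task
  uri:
    url: "{{ manageiq.api_url }}/api/{{ manageiq.request_task }}"
    method: POST
    headers:
      X-Auth-Token: "{{ manageiq.api_token }}"
    body_format: json
    body:
      action: edit
      resource:
        options:
          build_status: complete
    validate_certs: false
```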

I added this line to (Domain) → AutomationManagement → AnsibleTower → Service → Provisioning → StateMachines → preprovision:

@handle.root["service_template_provision_task"].set_option('build_status', 'preparing')

If I do an API call to the request task, I see the following item:

"options": {
    "dialog": {
        "dialog_service_name": "",
        "dialog_param_description": "Ansible Lab Test - Throwaway",
        "dialog_param_application_name": "ansible test",
        "dialog_param_vm_count": 1,
        "dialog_param_server_environment": "Development",
        "dialog_param_app_type_code": "Application Server – Unix",
        "dialog_param_operating_system": "RHEL 7",
        "dialog_param_num_cpus": 2,
        "dialog_param_memory_gb": 6,
        "dialog_param_disk_size": "small",
        "dialog_param_database": "False",
        "dialog_param_standard": "True",
        "dialog_vm_status": "none",
        "request": "clone_to_service",
        "service_action": "Provision",
        "Service::Service": 167
    },
    "workflow_settings": {
        "resource_action_id": "445",
        "dialog_id": "10"
    },
    "initiator": null,
    "src_id": "3",
    "request_options": {
        "submit_workflow": true
    },
    "executed_on_servers": [],
    "build_status": "preparing"
}
If I edit the request_task above through the API so that the variable has a different value, is there a way to have check_provisioned detect this change? Using the discovery walker tool, I found that it lives under the task:

task = @handle.root["service_template_provision_task"]
task_build_status = task.get_option('build_status')
@handle.log("info", task_build_status)

If I update the key’s value through the API, the API shows the new option, but I have been unable to work out which object inside ManageIQ it touched versus @handle.root["service_template_provision_task"]. Looping in the method, it just keeps showing the same old value. I am curious whether anyone has done this before?

My reasoning above is that my AWX workflow handles everything for now, apart from being used as a service catalog. For now, I am using the process inside AWX to handle the workflow, and I am looking into sending a limited amount of data back. This also gives me a way to set up a loop until the workflow is complete, with any other customization done through an API call.

I am hoping this is something simple or there is an easier way.

Thank you for your help!