How to change priorities in schema/instance?


#1

One of our schemas lists all the integration method calls to deploy a server (F5, Infoblox, etc.).

All these methods are called sequentially: I want to parallelize them.
Is this done with the priority field?

When I edit the schema / instance I can't find where to change the priority of each element.

When I edit the exported YAML file Infrastructure/VM/Provisioning/StateMachines/VMProvision_VM.class/class.yaml I can see a priority:

schema:
  - field:
      priority: 1

So how do I do that?


#2

@gquentin,

Automate processes instances sequentially. The priority field in the export defines the order in which the fields are displayed and processed.

You can adjust the sequence in the UI, but you cannot make them run in parallel.

@pemcg @ramrexx @bascar Any experience you can share?

cc @mkanoor


#3

@gquentin
If your goal is to run methods in parallel, you would have to write your methods in a way that lets you run a detached process from the method and get its process ID.

You can then dispatch the methods in parallel and, if need be, wait for them to end in another method with a retry.

Typically these are done with fields which have a type of ae_state.

If you had such a schema, it would look like this:

runF5 type ae_state /YourNamespace/…/Class/Method/RunF5Async
runInfoBlox type ae_state /YourNamespace/…/Class/Method/RunInfoBloxAsync
wait type ae_state /YourNamespace/…/Class/Method/WaitForMethods

In the RunF5Async method you can use Kernel.spawn and Process.detach to run an async process. The method would have to generate the script that it dispatches asynchronously. If you have a REST API, you can use that instead.

You can save the process ID / REST handle ID etc. in the workspace so that you can use it in WaitForMethods, using

$evm.set_state_var('f5_process_id', process_id)

and then end the method. At that point the other long-running process would be running detached.
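
A minimal sketch of what such a RunF5Async method could look like (the script path and the 'f5_process_id' state_var name are only illustrative):

# Sketch only: start the F5 work out of process and remember its pid
script = '/usr/local/bin/deploy_f5.sh'  # hypothetical script your method would generate or ship
pid    = Kernel.spawn(script)           # returns immediately with the child's process id
Process.detach(pid)                     # let the child run on without leaving a zombie behind

$evm.set_state_var('f5_process_id', pid)  # saved for the WaitForMethods state
$evm.root['ae_result'] = 'ok'             # end this state so the next state can dispatch its own work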

Then you can move on to the next method, run it async, and save its handle/process ID.

Then in WaitForMethods you would fetch the process IDs / REST API handles and wait until they end, if need be. If you want them to run completely detached with no wait, you can do that too.

If you want to wait, you would (see the sketch below):

  • fetch the process ID using pid = $evm.get_state_var('f5_process_id')
  • check whether the process has finished
  • if the process is still running, end with a retry: $evm.root['ae_result'] = 'retry'
  • otherwise end with $evm.root['ae_result'] = 'ok'
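
Putting that together, a rough sketch of a WaitForMethods method (this assumes the retry is processed on the same appliance that spawned the process, and reuses the illustrative 'f5_process_id' state_var from above):

pid = $evm.get_state_var('f5_process_id')

begin
  Process.kill(0, pid)  # signal 0 only checks that the process still exists
  # still running: come back and check again later
  $evm.root['ae_result']         = 'retry'
  $evm.root['ae_retry_interval'] = '1.minute'
rescue Errno::ESRCH
  # no such process: it has finished
  $evm.root['ae_result'] = 'ok'
end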


#4

It depends on whether you just want 'fire-and-forget' concurrency in no guaranteed order, or asynchronous operation with status checking. Madhu's suggestion is better for the latter.

The first would be simpler. The way to do this would be to have a 'parent' method that launches a new automation request for each of your stages, using $evm.execute('create_automation_request', …).

For example I tested this using a simple method:

number = $evm.object['number']
$evm.log(:info, "Starting number #{number}")
sleep 10
$evm.log(:info, "Ending number #{number}")

I called this three times using the following script:

options = {}
options[:namespace]     = 'Stuff'
options[:class_name]    = 'Methods'
options[:instance_name] = 'test'
options[:user_id]       = $evm.vmdb(:user).find_by_userid('admin').id
auto_approve            = true

['1','2','3'].each do |number|
  options[:attrs] = {'number' => number}
  $evm.execute('create_automation_request', options, 'admin', auto_approve)
end

The three calls to the method executed concurrently, but in a non-deterministic order. (You'll only be able to run as many concurrent automate tasks as you have generic workers - I increased my number of generic workers to 3 for this to work.)

Hope this helps,
pemcg


#5

In fact this needn't be completely fire-and-forget. $evm.execute('create_automation_request', …) returns the request object:

request = $evm.execute('create_automation_request', options, 'admin', auto_approve)

Each of the child tasks can also access this same request object, and add status information to its options hash, like so:

request = $evm.root['automation_task'].automation_request
request.set_option(:return, {:status => 'success', :return => some_data})

Your calling method then just needs to poll the request.state attribute until it becomes 'finished', and then read the :return key from the options hash, like so:

return_data = request.get_option(:return)   # 'return' itself is a Ruby keyword, so use another variable name

So your state machine can be:

State 1:
Launch the tasks using create_automation_request and save the array of request IDs in a state_var.
State 2:
Retrieve the request IDs from the state_var, look up the corresponding request objects using $evm.vmdb, and check whether any are not yet in state 'finished'. If any are not, set a wait time and $evm.root['ae_result'] = 'retry' (see the sketch below).
State 3:
Retrieve the request IDs from the state_var, look up the corresponding request objects using $evm.vmdb, and check their return status/data/message.
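
A minimal sketch of the State 2 check, assuming State 1 saved the IDs in a state_var called request_ids, and that the request objects expose the state as request_state:

request_ids = $evm.get_state_var('request_ids') || []
requests    = request_ids.collect { |id| $evm.vmdb(:miq_request, id) }

if requests.all? { |r| r.request_state == 'finished' }
  $evm.root['ae_result'] = 'ok'            # move on to State 3 to read the results
else
  $evm.root['ae_result']         = 'retry' # at least one child request is still running
  $evm.root['ae_retry_interval'] = '1.minute'
end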

pemcg


#6

Thanks for your answers

What I want is this:
Currently, service provisioning calls sequentially:

  • naming, to get the VM name
  • Infoblox, to get an IP
  • and afterwards: 4 different services to register the VM (name + IP) with, via REST calls

I would like the 4 services to be called in parallel so that it is faster.

The whole sequence takes 7 minutes, which is too long.

Regards.


#7

Do you have approximate timings for each of your stages? 7 minutes for a complete service provision doesn't sound unduly long. If the stages that you want to parallelise only take a few (5-10ish) seconds each, then you probably won't save a lot by calling them concurrently with $evm.execute('create_automation_request', …), as you're introducing request -> approval -> task delays for each call.

If they take longer than this, then you probably would save time by launching them concurrently using $evm.execute('create_automation_request', …) from a parent method, and calling this parent method from your state machine.

pemcg