Validating new vm

Good evening,

I went to “Provision Instances” and, after setting all the fields, the Virtual Machine request in Services -> Requests stays blocked in the “Validating new vm” status.
How can I fix it?

@SixArt - The “Validating New Vm” state waits for the VM to show up in the provider inventory before moving on to the post-provisioning state. Was the VM properly created on the provider? If not, are there any related errors on the provider or in our logs?
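As a rough illustration (not the actual ManageIQ code), the wait behaves like a poll-with-retry loop against the provider inventory; the names `inventory` and `vm_name` here are illustrative only:

```ruby
# Minimal sketch of the poll-with-retry idea behind "Validating New Vm".
# In the real state machine each pass is an automate retry, not a loop iteration.
def validate_new_vm(inventory, vm_name, max_retries: 5)
  max_retries.times do
    # Move on as soon as the VM shows up in the provider inventory
    return :ok if inventory.include?(vm_name)
  end
  # If the VM never appears, the request stays stuck in this state
  :retry_exhausted
end
```

If the VM was created on the provider but never lands in the inventory the database sees, the loop above never succeeds, which matches the stuck “Validating new vm” symptom.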

Hi @gmccullough,

I recently added a new appliance to my infrastructure, and when creating machines in the new zone the process stops at “Validating new vm”.
I have reviewed automation.log, but it does not give me any more information (only retry messages). In contrast, evm.log shows that the process cannot continue because, as mentioned above, the new VM is not found in the inventory; the problem is that the VM has actually been created.
Could it be that an inventory refresh is not run during the deployment handled by the new appliance? Could I run the refresh manually in a method after the deployment?

Many thanks!!!

An additional refresh should not be needed, as the code performs a refresh on the destination host after determining that the provider has completed the clone task.

From the screenshot I can see that the internal state-machine has moved into the poll_destination_in_vmdb state so this refresh would already have occurred.

Is it possible that the VM was migrated to a different host than the original destination host? Does a manual refresh from the UI cause the new VM to be added to the database?

If the VM does make it into the database but provisioning is still stuck, then another thing to check is whether the “notes” property on the VM contains the “validation guid” referenced in the screenshot. This is visible in the UI on the VM Summary screen as part of the container properties.
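In essence, that check is a substring match: the VM’s notes text must contain the expected guid. A sketch with hypothetical values (the real guid comes from the provision task, and the real lookup goes through the VMDB):

```ruby
# Hypothetical guid and notes text for illustration only
validation_guid = 'f3a1c2d4-0000-1111-2222-333344445555'
vm_notes = "MIQ GUID=f3a1c2d4-0000-1111-2222-333344445555, Owner: admin"

# The post-provision validation passes only when the notes carry the expected guid
guid_present = vm_notes.include?(validation_guid)
```

If the notes on the VM Summary screen do not contain the guid from the screenshot, that mismatch would explain why provisioning never completes even though the VM exists.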

If you want to try triggering a refresh, you can add task.source.ext_management_system.refresh to the check_provisioned method as part of the retry logic:

# Get current provisioning status
task = $evm.root['miq_provision']
task_status = task['status']
result = task.statemachine_task_status

$evm.log('info', "ProvisionCheck returned <#{result}> for state <#{task.state}> and status <#{task_status}>")

case result
when 'error'
  $evm.root['ae_result'] = 'error'
  reason = $evm.root['miq_provision'].message
  reason = reason[7..-1] if reason[0..6] == 'Error: '
  $evm.root['ae_reason'] = reason
when 'retry'
  $evm.root['ae_result']         = 'retry'
  $evm.root['ae_retry_interval'] = '1.minute'
  # Trigger a provider refresh so the new VM can appear in the inventory
  task.source.ext_management_system.refresh
when 'ok'
  # Bump State
  $evm.root['ae_result'] = 'ok'
end

Thanks to your tips, I fixed this error by performing a “Refresh Relationships and Power States”. Now it works perfectly.
