Maximum Workers per appliance

What is the maximum number of workers (general and priority) that can be assigned to an appliance? I have a beefy enough VM for my appliance, but there are times when I'll be spinning up 100+ VMs and want to make sure I get the most performance possible. Via the Admin UI I have set the maximum it allows, 9 and 8 I believe. Does it make sense to go higher if possible? And if so, how?

You might want to consider adding appliances to your zone (i.e. scaling horizontally) rather than (or as well as) adding workers to a single appliance. This gives you the benefit of adding workers as you need them, but also brings a degree of HA and resilience to your zone.

Cheers,
pemcg

pemcg,

Does adding appliances distribute the load between them? For example, if a self-service UI call for 100 new VMs is generated on one node, will that be distributed across all available appliances and broken up, or will one request be handled by a single appliance? I've read the HA guide and assume I can use an F5 in place of the HAProxy nodes, and will look at adding appliances for sure.

Hi Paul

Yes, the workers on all of the appliances dequeue "work instructions", known as messages, from a central queue (the miq_queue table) in the region's database, so the work is fairly evenly distributed across appliances. Each queue worker processes one (and only one) message to completion, then dequeues another message for its worker type, processes that, and so on.
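To make that concrete, here's a minimal Ruby sketch of the dequeue-one-message-at-a-time loop. The class and method names here are invented for illustration; the real logic lives in the ManageIQ worker classes and is considerably more involved (row locking, priorities, timeouts, etc.):

```ruby
# Illustrative sketch only: a worker repeatedly takes the next message
# for its worker type from a shared queue and processes it to completion.
Message = Struct.new(:role, :payload)

class SimpleQueue
  def initialize(messages)
    @messages = messages
    @mutex = Mutex.new   # stands in for the DB-level locking on miq_queue
  end

  # Atomically remove and return the next message for this worker's role,
  # or nil if none remain.
  def dequeue(role)
    @mutex.synchronize do
      idx = @messages.index { |m| m.role == role }
      idx && @messages.delete_at(idx)
    end
  end
end

# One worker's main loop: one message at a time, to completion.
def worker_loop(queue, role)
  results = []
  while (msg = queue.dequeue(role))
    results << "#{role} processed #{msg.payload}"
  end
  results
end

queue = SimpleQueue.new([
  Message.new(:generic,  "provision VM 1"),
  Message.new(:priority, "dynamic dialog field"),
  Message.new(:generic,  "provision VM 2")
])

puts worker_loop(queue, :generic)
# prints:
#   generic processed provision VM 1
#   generic processed provision VM 2
```

With workers on several appliances all pulling from the same table, whichever worker dequeues first gets the message, which is what spreads a 100-VM request across the zone.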

There’s a description of workers and messages here if it helps.

Cheers,
pemcg


Answering the original question: You can tune server settings using a script in the vmdb/tools directory.

- command: "/var/www/miq/vmdb/tools/configure_server_settings.rb --serverid={{ server_id }} --path={{ item.path }} --value={{ item.value }}{{ item.opts | default('') }}"
  loop:
    - { path: "workers/worker_base/ems_refresh_core_worker/memory_threshold", value: "800.megabytes" }
    - { path: "workers/worker_base/queue_worker_base/generic_worker/count", value: "4", opts: " -t integer" }
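If you're not using Ansible, the same script can be run by hand on the appliance. This is just the command from the task above unrolled; the server id of 1 is an example value, so substitute your own (you can find it in the Advanced settings or the database):

```shell
# Run on the appliance itself; --serverid=1 is an example value.
/var/www/miq/vmdb/tools/configure_server_settings.rb \
  --serverid=1 \
  --path=workers/worker_base/queue_worker_base/generic_worker/count \
  --value=4 -t integer
```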

And I guess we should mention that actions affecting the UI (e.g. dynamic methods in Service Dialogs) run in the Priority worker on the current appliance.
As you mentioned, you're already looking into running multiple UI appliances behind an F5, so that shouldn't be an issue for you. The VM provision requests will be distributed across appliances.
