Service Retire stuck in Active state

Hi All,

I am having an issue with Service Retire requests getting stuck in an active state and never finishing or failing. If the service has VMs, the VMs retire fine, but the service itself never retires. If I issue a second service retire request once it is in this state, it retires successfully. I get the following error:

```
[----] E, [2021-05-24T05:26:04.173247 #28509:ceaf68] ERROR -- : Q-task_id([r5000001197604_service_retire_request_5000001197604]) [RuntimeError]: Service Retire request is already being processed Method:[block (2 levels) in <class:LogProxy>]
[----] E, [2021-05-24T05:26:04.173501 #28509:ceaf68] ERROR -- : Q-task_id([r5000001197604_service_retire_request_5000001197604]) /var/www/miq/vmdb/app/models/miq_request_task.rb:125:in `task_check_on_delivery'
/var/www/miq/vmdb/app/models/miq_retire_task.rb:23:in `deliver_to_automate'
/var/www/miq/vmdb/app/models/miq_request.rb:464:in `block in create_request_tasks'
/var/www/miq/vmdb/app/models/miq_request.rb:461:in `each'
/var/www/miq/vmdb/app/models/miq_request.rb:461:in `create_request_tasks'
/var/www/miq/vmdb/app/models/miq_queue.rb:455:in `block in dispatch_method'
/usr/local/lib/ruby/2.4.0/timeout.rb:93:in `block in timeout'
/usr/local/lib/ruby/2.4.0/timeout.rb:33:in `block in catch'
/usr/local/lib/ruby/2.4.0/timeout.rb:33:in `catch'
/usr/local/lib/ruby/2.4.0/timeout.rb:33:in `catch'
/usr/local/lib/ruby/2.4.0/timeout.rb:108:in `timeout'
/var/www/miq/vmdb/app/models/miq_queue.rb:453:in `dispatch_method'
/var/www/miq/vmdb/app/models/miq_queue.rb:430:in `block in deliver'
/var/www/miq/vmdb/app/models/user.rb:275:in `with_user_group'
/var/www/miq/vmdb/app/models/miq_queue.rb:430:in `deliver'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:104:in `deliver_queue_message'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:137:in `deliver_message'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:155:in `block in do_work'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:149:in `loop'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:149:in `do_work'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:329:in `block in do_work_loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:326:in `loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:326:in `do_work_loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:153:in `run'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:127:in `start'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:22:in `start_worker'
/var/www/miq/vmdb/app/models/miq_worker.rb:408:in `block in start_runner_via_fork'
/usr/local/lib/ruby/gems/2.4.0/gems/nakayoshi_fork-0.0.4/lib/nakayoshi_fork.rb:23:in `fork'
/usr/local/lib/ruby/gems/2.4.0/gems/nakayoshi_fork-0.0.4/lib/nakayoshi_fork.rb:23:in `fork'
/var/www/miq/vmdb/app/models/miq_worker.rb:406:in `start_runner_via_fork'
/var/www/miq/vmdb/app/models/miq_worker.rb:396:in `start_runner'
/var/www/miq/vmdb/app/models/miq_worker.rb:447:in `start'
/var/www/miq/vmdb/app/models/miq_worker.rb:277:in `start_worker'
/var/www/miq/vmdb/app/models/miq_worker.rb:154:in `block in sync_workers'
/var/www/miq/vmdb/app/models/miq_worker.rb:154:in `times'
/var/www/miq/vmdb/app/models/miq_worker.rb:154:in `sync_workers'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:53:in `block in sync_workers'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:50:in `each'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:50:in `sync_workers'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:22:in `monitor_workers'
/var/www/miq/vmdb/app/models/miq_server.rb:339:in `block in monitor'
/usr/local/lib/ruby/gems/2.4.0/bundler/gems/manageiq-gems-pending-98680009fe14/lib/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/usr/local/lib/ruby/gems/2.4.0/bundler/gems/manageiq-gems-pending-98680009fe14/lib/gems/pending/util/extensions/miq-benchmark.rb:28:in `realtime_block'
/var/www/miq/vmdb/app/models/miq_server.rb:339:in `monitor'
/var/www/miq/vmdb/app/models/miq_server.rb:380:in `block (2 levels) in monitor_loop'
/usr/local/lib/ruby/gems/2.4.0/bundler/gems/manageiq-gems-pending-98680009fe14/lib/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/usr/local/lib/ruby/gems/2.4.0/bundler/gems/manageiq-gems-pending-98680009fe14/lib/gems/pending/util/extensions/miq-benchmark.rb:35:in `realtime_block'
/var/www/miq/vmdb/app/models/miq_server.rb:380:in `block in monitor_loop'
/var/www/miq/vmdb/app/models/miq_server.rb:379:in `loop'
/var/www/miq/vmdb/app/models/miq_server.rb:379:in `monitor_loop'
/var/www/miq/vmdb/app/models/miq_server.rb:241:in `start'
/var/www/miq/vmdb/lib/workers/evm_server.rb:27:in `start'
/var/www/miq/vmdb/lib/workers/evm_server.rb:48:in `start'
/var/www/miq/vmdb/lib/workers/bin/evm_server.rb:4:in
[----] I, [2021-05-24T05:26:04.180221 #28509:ceaf68] INFO -- : Q-task_id([r5000001197604_service_retire_request_5000001197604]) MIQ(MiqQueue#delivered) Message id: [5000083617195], State: [ok], Delivered in [4.637129081] seconds
```
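For anyone trying to reproduce the symptom: the top frame, `miq_request_task.rb:125:in task_check_on_delivery`, suggests a delivery guard that raises when the task is still marked active. Here is a minimal self-contained sketch of that pattern (not the actual ManageIQ source; all class and method names below are illustrative assumptions) showing how a task stuck in "active" makes every redelivery fail with this exact message:

```ruby
# Illustrative sketch of a delivery guard like the one the traceback
# points at. If the hand-off after setting state to "active" never
# completes, the state is never cleared and each redelivery raises.
class RetireTaskSketch
  attr_reader :state

  def initialize
    @state = "pending"
  end

  # Guard modeled on the "already being processed" error in the log.
  def deliver
    raise "Service Retire request is already being processed" if @state == "active"
    @state = "active"
    # The real task would hand off to automate here; if that hand-off
    # stalls, @state remains "active" indefinitely.
  end
end

task = RetireTaskSketch.new
task.deliver # first delivery: pending -> active
begin
  task.deliver # redelivery while still active raises, as in the log
rescue RuntimeError => e
  puts e.message # prints: Service Retire request is already being processed
end
```

If the real guard works this way, it would also be consistent with the observed workaround: a second retire request presumably creates a fresh task in a clean state, so it can complete.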

This is only happening in one of our regions, and I have verified that the versions and code are the same across regions. We are running Hammer-7. If someone could point me in the right direction to figure out what is happening here, I would really appreciate it.