I’m deploying a group of applications into a new Kubernetes cluster and am looking for best-practice or practical advice on deployments.
The application's job is video streaming. There is a backend pod running MongoDB with configuration for the streams (ID, URL); there could be anywhere between 1 and 100+ configurations, plus a few controlling application / API containers.
I have 2 options for deploying the streaming containers. The premise is that when a container starts, the application needs an ID (1…100) to fetch the correct config from the database, and I can only run 1 instance per ID.
Option 1 (desirable) would be to have 1 Deployment scaled to n pods, with each pod somehow ‘getting’ a unique ID.
Option 2 (easy but messy) would be a unique Deployment per container, with the ID passed via an environment variable. I can create these Deployments via the API using another process that reads from the DB and posts to the Kubernetes API, but I then need to manage updates etc. in Kubernetes myself.
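To make Option 2 concrete, here is a minimal sketch of the manifest-generating side of that process. The image name, label keys, and the `STREAM_ID` variable name are all placeholders I've invented for illustration; the real generator would read the IDs from MongoDB and post each manifest to the Kubernetes API (e.g. with the official client library).

```python
def build_stream_deployment(stream_id: int,
                            image: str = "registry.example.com/streamer:latest") -> dict:
    """Build a Deployment manifest for a single stream ID.

    One Deployment per ID, replicas fixed at 1, the ID injected as an
    environment variable so the container can fetch its config.
    """
    name = f"streamer-{stream_id}"
    labels = {"app": "streamer", "stream-id": str(stream_id)}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"stream-id": str(stream_id)}},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": "streamer",
                        "image": image,
                        # The app reads this at startup to look up its config.
                        "env": [{"name": "STREAM_ID", "value": str(stream_id)}],
                    }],
                },
            },
        },
    }
```

One upside of generating manifests this way is that per-stream labels (`stream-id` here) make it easy to select a single stream's pod with `kubectl` when diagnosing problems.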
I have looked at StatefulSets, which give me pod-0, pod-1, etc. That is ‘nice’, but they seem quite limited in rollout/update speed, and I couldn’t skip a number in the middle of the sequence if required.
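For reference, the usual trick with StatefulSets is to derive the ID from the pod's own name, since StatefulSet pods are named `<statefulset-name>-<ordinal>` with ordinals starting at 0 and the hostname inside the pod matching the pod name. A sketch of that extraction (the mapping from ordinal to stream ID, e.g. ordinal + 1, is an assumption on my part):

```python
import re

def stream_id_from_hostname(hostname: str) -> int:
    """Derive a stream ID from a StatefulSet pod hostname like 'streamer-3'.

    The trailing number is the StatefulSet ordinal (0-based); here we
    assume stream IDs are simply ordinal + 1, giving the 1...n range.
    """
    match = re.search(r"-(\d+)$", hostname)
    if match is None:
        raise ValueError(f"not a StatefulSet pod name: {hostname!r}")
    return int(match.group(1)) + 1
```

This is why skipping an ID mid-sequence is awkward: the ordinals are always contiguous from 0, so any gap in the ID space has to be handled by the mapping layer (or by the app tolerating a "disabled" config for that ID).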
I also thought about using an etcd queue to pull IDs from, so I could scale one Deployment and have the app do the legwork, but this may add more complexity.
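The claim/release logic that approach needs looks roughly like this. This is a toy in-memory sketch only: in a real cluster the atomicity would have to come from etcd transactions (or Kubernetes Lease objects with TTLs, so a crashed pod's ID is eventually reclaimed); here a lock stands in for that guarantee.

```python
import threading

class IdPool:
    """Toy in-memory model of pulling unique stream IDs from a shared pool.

    Real deployments would back this with etcd compare-and-swap or
    Kubernetes Leases so two pods can never hold the same ID.
    """

    def __init__(self, ids):
        self._free = set(ids)
        self._claimed = {}          # stream_id -> holder (e.g. pod name)
        self._lock = threading.Lock()

    def claim(self, holder: str) -> int:
        with self._lock:
            if not self._free:
                raise RuntimeError("no free stream IDs")
            stream_id = min(self._free)   # deterministic: lowest free ID first
            self._free.remove(stream_id)
            self._claimed[stream_id] = holder
            return stream_id

    def release(self, stream_id: int) -> None:
        with self._lock:
            self._claimed.pop(stream_id, None)
            self._free.add(stream_id)
```

The attraction is that skipping an ID is trivial (just never add it to the pool), and scaling is a plain `replicas` change; the cost is that correctness now depends on the lease/TTL handling being right.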
What are people’s thoughts on lots of unique Deployments vs. less config in Kubernetes with application code doing the work? The more I think about it (from an operations standpoint), the mass of Deployments looks complicated, but it may be easier to diagnose problems on a per-stream basis, and if a process is doing the legwork of setting up Deployments, what you see is what you get…
Thanks for any advice in advance.