My homelab is essentially my own passion project, and I'm really the only one who accesses it, except when I spin up the occasional game server for friends.
I’m currently running Proxmox with a Debian LXC container for each Docker stack I have, and OPNsense routes incoming traffic through HAProxy with SSL offloading. My currently running LXCs are: MediaWiki, an AMP game server (2 Minecraft servers), FreshRSS, and I’m currently playing around with n8n.
I’m looking to collapse my LXCs into just VMs. I’d like to have 3 VMs running in a Docker Swarm together so I can upgrade one VM at a time, swing my running containers over to another Docker node, and swing them back once the VM is stable again.
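To make the goal concrete, the kind of stack file I have in mind would look roughly like this (the service, image, and ports are just placeholders, not anything I'm actually running):

```yaml
# docker-compose.yml deployed as a swarm stack
# (docker stack deploy -c docker-compose.yml mystack)
version: "3.8"

services:
  freshrss:
    image: freshrss/freshrss:latest
    deploy:
      replicas: 1
      restart_policy:
        condition: any      # reschedule the task if its node goes away
      update_config:
        order: start-first  # start the new task before stopping the old one
    ports:
      - "8080:80"
```

Taking a VM down for maintenance would then be `docker node update --availability drain <node>` (swarm reschedules the tasks onto the remaining nodes) and `docker node update --availability active <node>` to bring it back into rotation.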
I’ve looked at k0s, k3s, and k8s and it just seems way too much work and overhead for what I’m willing to do. I also want to keep using docker compose and want a decent webgui to manage my containers/nodes/swarm. I’m using DockHand right now, but need to research swarm support.
Anyone have any advice for something like this? Any specific terms, tech, software I should look into?
Also, gonna throw a curveball, but what would the effects be of running 3 different distros as the nodes in my swarm? Like a Debian node, a Rocky Linux node, and potentially an Arch node? I’m guessing I shouldn’t, due to potential Docker Engine differences.
I’m just trying to have fun with things, break things, fix them, learn, etc.
I run k3s on a single node and it’s not really that much more overhead than Docker Compose once you understand k8s. I mostly have a deployment.yaml, service.yaml, ingress.yaml, and network-policy.yaml for each service that I’ve copy/pasted and updated. Here are some of the benefits over Docker Compose for my setup:
- Has a built-in Traefik reverse proxy / ingress controller so I can access my services by domain name instead of by port, like http://jellyfin.lan/, http://forgejo.lan/ (using local DNS on my OpenWrt router)
- I use the Calico CNI so I can have network policies for each service to allow them to access only what they need. If a service doesn’t need internet access, it doesn’t get it.
- I use Bitnami Sealed Secrets to store my secrets in YAML files that can be safely stored in git
- ConfigMaps make it easy to manage configuration files
- It’s easier to have separate YAML files for each service while sharing a network between them. Services connect to each other like http://forgejo.forgejo.svc.cluster.local/
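For a concrete sense of scale, here’s roughly what one service’s manifests look like, condensed into a single file (the name, namespace, image tag, and hostname are illustrative, not my actual config):

```yaml
# deployment.yaml -- one Deployment per service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: forgejo
  namespace: forgejo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: forgejo
  template:
    metadata:
      labels:
        app: forgejo
    spec:
      containers:
        - name: forgejo
          image: codeberg.org/forgejo/forgejo:9
          ports:
            - containerPort: 3000
---
# service.yaml -- stable in-cluster name: forgejo.forgejo.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: forgejo
  namespace: forgejo
spec:
  selector:
    app: forgejo
  ports:
    - port: 80
      targetPort: 3000
---
# ingress.yaml -- served by the Traefik bundled with k3s
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: forgejo
  namespace: forgejo
spec:
  rules:
    - host: forgejo.lan
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: forgejo
                port:
                  number: 80
---
# network-policy.yaml -- enforced by Calico: block all egress except DNS
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: forgejo
  namespace: forgejo
spec:
  podSelector:
    matchLabels:
      app: forgejo
  policyTypes:
    - Egress
  egress:
    - to: []
      ports:
        - protocol: UDP
          port: 53
```

Once you have one service working, new ones are mostly a copy, a rename, and a different image.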
Of course, if you’re looking to load balance across multiple machines, k3s makes even more sense.
Edit:
k8s is the clear industry standard for container orchestration at this point, so if you want something beyond Compose, a lightweight k8s distribution like k3s is an obvious choice.
Regarding different distros: yes, absolutely possible! Considering that Swarm development has basically stopped, the risk of inconsistencies is lower than it used to be, but I’d pin the engine version on all nodes and only upgrade them all to a newer version at the same time, explicitly.
Regarding Swarm itself: even though I understand your desire, I’d recommend having a look at something else. Have you looked at Nomad, for instance? It’s not Compose compatible, but I guess these days converting the compose files is a simple prompt away. It’s supposed to be simpler than k8s and still offers many perks (like the failover you’re looking for).
Also, from what I just read, Proxmox is working on improving its container support to allow “native” containers (inside VMs, similar to Kata containers). They’re not there yet, but I’m wondering if it’s worth the effort on your side right now 😅
Either way, have fun! A fellow homelab buddy 😄
Glad to hear I’m not crazy with different distros haha. I wanted to have different “environments” to keep familiar with release schedules, package managers, and just the flow of each distro. I’ve been using nothing but Debian/Ubuntu in my homelab for ~10 years now, at work we use RHEL, and on my desktops I’m on Arch. I’ll have to look into pinning.
I’ve never heard of Nomad (love the name), so I’ll definitely add it to my list of things to research. Looking at their site it looks solid, but I want to weigh my options once I’ve looked at everything.
And thanks for your comment! I’ve been doing this a long time, and nothing “tickles” my brain more than when something in my homelab breaks and I have to figure out how to fix it and then prevent it going forward.
I’m running Dokploy in swarm mode on 3 nodes.
The only downside is that Swarm development has basically halted and some features are missing (like passing /dev devices to a container; you have to use dirty workarounds), but otherwise it just works.
And another one I hadn’t heard of yet. Dokploy looks really enticing just from a brief look at their site; I’ll definitely add it to my list of things to look into. Thank you!
How long have you been running this type of setup?
How is Swarm support/integration with Dokploy? Are you able to initiate the swarm and also connect other Docker nodes to it all through the web GUI, or does Dokploy just see and attach to the swarm after it’s been set up? Are you able to manage it all through the GUI? Swing/move containers to other nodes, etc.?
I’ll definitely need to deploy it in a test environment and see how it all works. Thank you again!
I’ve been running it for 2 years.
The swarm integration is first class; there isn’t much to do in the GUI beyond adding nodes and seeing basic info about them.
Most of the stuff happens in the compose files, where you define how many copies of a container you run, which nodes to restrict them to, etc.
I’m not sure about the moving features, tbh. It should move them automatically when a node is down. In my setup I don’t use that at all; all my containers are pinned to specific nodes via node labels (one node has lots of HDD storage, another has more RAM, another has a GPU).
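Roughly, the pinning looks like this in a stack file (the service, image, and label name here are made up; you’d set your own labels with `docker node update --label-add`):

```yaml
services:
  transcoder:
    image: example/transcoder:latest   # placeholder image
    deploy:
      placement:
        constraints:
          - node.labels.gpu == true    # only schedule on the GPU node
```

Swarm will only place the task on nodes matching every constraint, so with unique labels per node you effectively pin each container.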
You can see the container logs, but you have to select “swarm” in a dropdown when the container is not on your master node.
And also when deploying a new app you have to select “Compose” and then in a further dropdown “Swarm”.


