Sysadmin and FOSS enthusiast. Self-hosting on Proxmox with a focus on privacy and digital sovereignty. Documenting my experiences with Linux, home labs, and the ongoing fight to keep Big Tech out of our hardware.

@unknownuniverse@unkn.uk

  • 1 Post
  • 5 Comments
Joined 1 month ago
Cake day: March 31st, 2026

  • The home server is an old, low-powered mini PC running Debian. It acts as the bridge between the WireGuard tunnel and my local LAN.

    I’ve just finished migrating one of my AdGuard Home instances onto it today. Its role is now twofold:

    Routing: It has ip_forward enabled and a bit of NAT (iptables/nftables) so that traffic arriving from the VPN can actually “hop” onto the local network to reach my other VMs and containers.

    DNS: It provides ad-blocking for the tunnel. VPN clients point to this node’s internal WireGuard IP for DNS queries.

    Technically, it’s just another WireGuard peer, but with AllowedIPs configured to advertise my 192.168.x.x subnet back to the hub (VPS2). This is what allows VPS1 and my mobile devices to resolve and reach home services without a single open port on my router.
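
    For anyone curious what that peer config roughly looks like, here’s a sketch. All keys, hostnames, interface names, and the 10.0.0.x tunnel addresses below are placeholders, not my real values:

    ```ini
    # /etc/wireguard/wg0.conf on the home mini PC (sketch; values are placeholders)
    [Interface]
    Address = 10.0.0.3/24                  # this node's tunnel IP; clients use it as their DNS server
    PrivateKey = <home-node-private-key>
    # Enable forwarding and masquerade tunnel traffic onto the LAN
    PostUp = sysctl -w net.ipv4.ip_forward=1
    PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

    [Peer]
    # VPS2, the hub
    PublicKey = <vps2-public-key>
    Endpoint = vps2.example.com:51820      # home node dials out, so no inbound port needed
    AllowedIPs = 10.0.0.0/24
    PersistentKeepalive = 25
    ```

    On the hub (VPS2) side, the matching peer entry would then advertise the home LAN, e.g. `AllowedIPs = 10.0.0.3/32, 192.168.1.0/24`, which is what lets the other peers route 192.168.x.x traffic through the tunnel.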


  • You’re right, and for a lot of people, one VPS is the sensible choice. I actually addressed this in the post:

    "VPS1 is my web-facing server. It handles the public side of things. VPS2 is the VPN hub. At first glance, that probably looks unnecessary. Strictly speaking, it is unnecessary. I could have crammed WireGuard onto VPS1 and called it done. But splitting the roles makes the whole thing cleaner.

    One machine serves public traffic. The other handles VPN duties. That means fewer networking compromises, fewer chances of Docker or firewall rules becoming annoying, and a clearer separation between the public-facing stack and the private tunnel. It also means I can change one side without poking the other with a stick and hoping nothing catches fire."



  • Exactly that: VPS2 handles the WireGuard port and has no domain pointing to it, so it’s basically hiding in plain sight. VPS1 holds the domain and handles the web traffic.

    I keep SSH open on both, but locked down (key-based auth + restricted to my IPs).
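
    As a rough sketch, the “locked down” part amounts to something like this (the source address below is a placeholder, and the IP restriction itself lives in the firewall rather than in sshd):

    ```ini
    # /etc/ssh/sshd_config.d/hardening.conf (sketch)
    PasswordAuthentication no
    KbdInteractiveAuthentication no
    PubkeyAuthentication yes
    PermitRootLogin no
    ```

    ```shell
    # nftables: only accept SSH from a known address (203.0.113.10 is a placeholder)
    nft add rule inet filter input ip saddr 203.0.113.10 tcp dport 22 accept
    nft add rule inet filter input tcp dport 22 drop
    ```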

    Your idea of using the provider firewall (Ionos in my case) as a “mechanical” lock is a good one: block it at the edge and only open it when needed. I’ve thought about doing that, but I’m generally happy relying on a hardened SSH config and the provider’s KVM if everything goes sideways.