— GPG Proofs —

This is an OpenPGP proof that connects my OpenPGP key to this Lemmy account. For details check out https://keyoxide.org/guides/openpgp-proofs

[ Verifying my OpenPGP key: openpgp4fpr:27265882624f80fe7deb8b2bca75b6ec61a21f8f ]

  • 0 Posts
  • 25 Comments
Joined 2 years ago
Cake day: July 10th, 2023



  • Not really, but I can give you my reasons for doing so. Know that you’ll need some shared storage (NFS, CIFS, etc.) to take full advantage of the cluster.

    1. Zero downtime for patching. Taking systems offline to update Proxmox sucks, especially if the upgrade fails for some reason. A cluster means I can evacuate one host, upgrade it, and move on to the next with no downtime for the hosted VMs (rough sketch after this list).
    2. Critical service resiliency. I have a couple of critical systems in my home lab that, if they unexpectedly go down, will make for a very bad day. For instance, my entire home network (and lab) is configured to use a PowerDNS cluster for DNS. I can put the master PowerDNS server on one host and the slave on a second host - if I have a hardware failure, I won’t lose DNS. I have a similar setup for my Kubernetes cluster’s worker nodes.
    3. Experimentation. A cluster gives me a larger shared pool of CPU/memory than a single host could offer. This means I can spin up new VMs, LXC containers, etc., and just play with new software and services. Heck, that’s how I got started with my Kubernetes cluster - I had some spare capacity, so I found a blog post about running Kubes on LXC containers and spun it up.
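
    For point 1, that rolling-patch workflow looks roughly like this from the CLI (the VM ID and node names are placeholders - the web UI’s bulk migrate does the same job):

        # live-migrate each VM off the node being patched (shared storage keeps this quick)
        qm migrate 101 pve2 --online
        # patch and reboot the now-empty node
        apt update && apt dist-upgrade -y
        reboot
        # confirm the node rejoined the cluster before moving to the next
        pvecm status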

    I hope that helps explain my reasons for running a cluster, and apologies for not replying immediately. I’m happy to share more about my homelab or answer other questions about my setup.




  • Sorry, I wasn’t clear - I use PowerDNS so that services I deploy (via Kubernetes or Terraform) can easily be resolved on my internal networks. In my case, the secondary PowerDNS server does regular zone transfers from the primary to ensure it has a copy of all A, PTR, CNAME, etc. records.

    But PowerDNS (and really any DNS server) can act as either an authoritative server or a recursor. In my case, the PDNS servers are authoritative for my homelab zone/domain, and they perform recursive lookups (with caching) for non-authoritative domains like google.com, infosec.pub, etc. By pointing my PDNS servers at the Pi-hole for recursive lookups, I get ad blocking while still letting my automation manage the homelab records.
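
    The moving parts look roughly like this - a sketch rather than my exact configs, with placeholder IPs and zone names:

        # pdns.conf on the primary authoritative server
        # (older releases spell these master/slave instead of primary/secondary)
        primary=yes
        # allow the secondary (placeholder IP) to pull zone transfers
        allow-axfr-ips=10.0.0.11/32

        # pdns.conf on the secondary - fetches zones via AXFR from the primary
        secondary=yes

        # recursor.conf - my zone resolves locally; everything else forwards
        # to the Pi-hole first, with a public privacy resolver as fallback
        forward-zones=homelab.example=10.0.0.10
        forward-zones-recurse=.=10.0.0.53;9.9.9.9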


  • My setup is overkill.

    I have a dedicated Raspberry Pi for Pi-hole, plus two VMs running PowerDNS in master/slave mode. The PDNS servers use the Pi-hole as their primary recursive resolver, with some other privacy-focused Internet DNS server (which I can’t recall right now) as a fallback.

    If I need to do maintenance on the Pi-hole, PowerDNS can fall back to the Internet DNS server. If I need to update the PowerDNS cluster, I can do one node at a time to reduce the outage window.





  • Hosting on the public web isn’t too crazy - start with port forwarding on standard ports (443 for SSL/web) and add in a dynamic DNS address.

    More than likely your residential ISP doesn’t change your IP that often, but dynamic DNS solves that problem before it hits. I use Cloudflare, but mostly because I’m lazy and haven’t moved off of them after their most recent sketchy behavior.
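
    The update itself is just an API call you can cron. This is the general shape with placeholder zone/record IDs and hostname - ready-made clients like ddclient can do the same thing for you:

        # grab the current WAN IP, then overwrite the A record via Cloudflare's v4 API
        IP=$(curl -s https://api.ipify.org)
        curl -s -X PUT \
          "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
          -H "Authorization: Bearer $CF_API_TOKEN" \
          -H "Content-Type: application/json" \
          --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"$IP\",\"ttl\":300}"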





  • This is the way. Layer 3 separation between the services you want to reach from outside the home network and the rest of your stuff, with a VPN endpoint exposed for remote access.

    It may be overkill, but I have several VLANs for specific traffic:

    • DMZ - for WireGuard (and if I ever want to stand up a honeypot)
    • Services - *arr stack, some Kubes things for remote development
    • IoT - any smart things like thermostats, Home Assistant, etc.
    • Trusted - primary home network for laptops, HTPCs, etc.

    There are two new additions: an ext-vpn VLAN and an egress-vpn VLAN. I spun up a dual-homed VM that runs its own WireGuard/OpenVPN client on the egress side and serves DHCP on the ext-vpn side. The latter has its own wireless SSID, so anyone who connects to it is automatically on a VPN that exits in a non-US country.
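
    The egress side of that VM is just an ordinary WireGuard client config - placeholder keys and addresses here, with the [Peer] values coming from whatever VPN provider you use, plus the usual IP forwarding/NAT rules to push ext-vpn traffic through the tunnel:

        # /etc/wireguard/wg0.conf on the dual-homed VM
        [Interface]
        PrivateKey = <client-private-key>
        Address = 10.64.0.2/32

        [Peer]
        PublicKey = <provider-public-key>
        Endpoint = vpn.example.net:51820
        # send all routed traffic into the tunnel
        AllowedIPs = 0.0.0.0/0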



  • For the nginx reverse proxy - that’s how I ran things prior to moving to microk8s. If you want, I can dig out some config examples. The trick for me was to set up host-based stanzas, then update my internal DNS with an A record for each docker service pointing at the same docker host.
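
    If it helps in the meantime, a minimal host-based stanza looks something like this (hostname, port, and cert paths are hypothetical; one server block per service, all pointing at the same box):

        server {
            listen 443 ssl;
            server_name sonarr.homelab.example;
            ssl_certificate     /etc/nginx/certs/homelab.pem;
            ssl_certificate_key /etc/nginx/certs/homelab.key;

            location / {
                # the container's published port on the docker host
                proxy_pass http://127.0.0.1:8989;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }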

    With Kubes + external-dns + nginx ingress, I can just do a deployment/service/ingress and things automatically work now.
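
    As a sketch of what that looks like (names are hypothetical): external-dns watches the host rule below and publishes the matching A record, while the nginx ingress controller handles the routing.

        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: whoami
        spec:
          ingressClassName: nginx
          rules:
            - host: whoami.homelab.example
              http:
                paths:
                  - path: /
                    pathType: Prefix
                    backend:
                      service:
                        name: whoami
                        port:
                          number: 80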




  • Ceph is… fine. I feel like I don’t know it well enough to properly maintain it. I only went with 10GbE because I was basically told on a homelab subreddit that Ceph will fail in unpredictable ways unless you give it crazy speeds for its storage and network. And yet, it has perpetually complained about too many placement groups:

    1 pools have too many placement groups
    
    Pool tank has 128 placement groups, should have 32
    

    Aside from that and the occasional falling over of monitors, it’s been relatively quiet? I’m tempted to just use the Synology for all the storage and let the 10GbE network be carved up for VM traffic instead. Right now I’m using bonded USB 1GbE copper and it’s kind of sketchy.
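
    For anyone hitting the same warning, either of these should quiet it (pool name taken from the message; shrinking pg_num requires Nautilus or newer):

        # let Ceph size placement groups itself going forward...
        ceph osd pool set tank pg_autoscale_mode on
        # ...or set the recommended counts directly
        ceph osd pool set tank pg_num 32
        ceph osd pool set tank pgp_num 32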