

Building an M920x with 4 NVMe drives and 10gig Ethernet is very tempting.
Hey, that’s almost me! My home lab is three M920q systems with 10Gb fiber for Ceph. I didn’t know you could do multiple NVMes, though - right now I’m doing some janky AF USB drives.
— GPG Proofs —
This is an OpenPGP proof that connects my OpenPGP key to this Lemmy account. For details check out https://keyoxide.org/guides/openpgp-proofs
[ Verifying my OpenPGP key: openpgp4fpr:27265882624f80fe7deb8b2bca75b6ec61a21f8f ]


I would get multiple drives and do RAID. Here’s a helpful calculator to figure out drive quantity, size, and configuration. The reason to do RAID is redundancy: hard drives will fail (even NAS-branded drives), and you do not want your photos, media, etc. to be lost when that happens. I personally do not go with anything below RAID5 (and for super sensitive things I’ll even go RAID6, despite the hit to overall capacity). If the OptiPlex has the bays for multiple drives, I strongly recommend you go this route.
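As a rough sketch of the capacity math behind those calculators (assuming equal-sized drives): RAID5 sacrifices one drive's worth of space to parity, RAID6 sacrifices two.

```python
# Usable-capacity math for RAID5/6 with n equal-sized drives.
# RAID5 gives up one drive to parity, RAID6 gives up two.

def usable_tb(drive_count: int, drive_size_tb: float, level: int) -> float:
    """Return usable capacity in TB; level is 5 or 6."""
    parity = {5: 1, 6: 2}[level]
    if drive_count <= parity:
        raise ValueError("not enough drives for this RAID level")
    return (drive_count - parity) * drive_size_tb

print(usable_tb(4, 4.0, 5))  # 4x 4TB in RAID5 -> 12.0 TB usable
print(usable_tb(4, 4.0, 6))  # 4x 4TB in RAID6 -> 8.0 TB usable
```

The tradeoff is visible right there: going from RAID5 to RAID6 on the same four drives costs you a full drive of capacity but survives two simultaneous failures.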
I have a tech-illiterate mother-in-law who I switched to Zorin OS (an Ubuntu fork).
I installed a wireguard VPN client on her laptop and did drills over the course of 3 days to make sure that she understood how to connect. Anytime she needs help, I can tunnel through my wireguard server and log on with my own account - Heck, as long as she doesn’t change her password, I can log on as her as well.
That has made remote troubleshooting significantly easier, as she is located about a 23-hour drive away.
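For anyone curious, a minimal WireGuard client config for that kind of setup looks roughly like this (the keys, addresses, and endpoint are all placeholders, not my actual values):

```ini
[Interface]
PrivateKey = <her-laptop-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Route only the VPN and home subnets through the tunnel
AllowedIPs = 10.8.0.0/24, 192.168.1.0/24
PersistentKeepalive = 25
```

Keeping AllowedIPs narrow means her normal browsing doesn’t go through the tunnel - only traffic for my network does.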


Okay, then I’m thinking your router/NAT may be causing the problem. Typically your ISP won’t block DNS for subdomains; they may outright block Source NAT (SNAT), but if you can get through via the IP, you should be good to go.


An easy way to check is to visit a site like this and check for port 443: https://www.yougetsignal.com/tools/open-ports/. You don’t need to be on the server that’s hosting your portfolio - anything on the same network as your portfolio (something behind your external router) will do.
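If you’d rather check from a script than a website, a plain TCP connect tells you the same thing. This is a generic sketch (the local demo listener is just there so the example is self-contained - in practice you’d point it at your external IP and port 443):

```python
# Quick port-reachability check, same idea as the open-ports web tool.
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a listener we control; replace with your external IP + 443.
server = socket.socket()
server.bind(("127.0.0.1", 0))        # OS picks a free port
server.listen(1)
port = server.getsockname()[1]
print(port_open("127.0.0.1", port))  # True
server.close()
```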


Just to make sure I understand: when you browse to https://fqdn/ it does not connect (probably with the ERR_CONNECTION_TIMED_OUT that you mentioned below).
What happens if you, on the hotspot, try browsing to https://206.x.x.x? When you are on the same network as the portfolio, can you reach https://[internal ip]?
What I’m leaning towards is a router/firewall that may be causing some issues. To help with troubleshooting, does your website server have any local firewalls (for Ubuntu that would typically be ufw, but it could be iptables or firewalld)?


Try this command from a terminal on the system from which you’re attempting to connect:
nslookup <yourfqdn>
It should come back with something like this:
~ ❯ nslookup stronk.bond
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: stronk.bond
Address: 172.67.174.80
If it says something like “can’t find”, that means your DNS isn’t configured appropriately. Does your IP address start with 192.168., 10., or 172.16. through 172.31.? That would be a private IP address (something which isn’t accessible from the internet).
Oh! And where is everything - is your workstation/laptop on the same network as your portfolio? Is the portfolio on a different network? That could affect things as well.
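If you’re not sure whether an address is in one of the private ranges, Python’s stdlib can tell you directly:

```python
# Check whether an address is private (RFC 1918) and therefore not
# reachable from the internet without port forwarding.
import ipaddress

for addr in ["192.168.1.30", "10.0.0.5", "172.16.0.1", "172.67.174.80"]:
    print(addr, ipaddress.ip_address(addr).is_private)
# The first three are private; 172.67.174.80 (the nslookup answer
# above) is public, even though it starts with 172.
```

Note the last one: only 172.16.0.0 through 172.31.255.255 is private, so not everything starting with 172. is unreachable.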


What does your nginx config look like for SSL? It should specify a certificate and key file - the certificate subject needs to match your fully qualified domain name (FQDN). Certificates can have subject alternative names (SANs) for other names and even IP addresses.
For instance, you could have a single certificate for foo.bar with a SAN for just foo and an IP SAN for 192.168.1.30.
Certificates also need to be signed by a certificate authority (CA), and in order for your browser to visit https://foo.bar/ without a warning your browser must trust that CA.
If you did a self-signed cert, this is most likely the problem you’re running into.
It’s important to know that your communication is still encrypted because of SSL, but since your browser doesn’t trust the CA (or the subject doesn’t match the FQDN) the browser will say it’s not secure.
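For reference, the SSL-relevant part of an nginx server block usually looks something like this (the name and file paths are placeholders for whatever your setup uses):

```nginx
server {
    listen 443 ssl;
    server_name foo.bar;           # must match the cert subject or a SAN

    ssl_certificate     /etc/nginx/ssl/foo.bar.fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/foo.bar.key;
}
```

If you’re using a CA-issued cert (e.g. Let’s Encrypt), point ssl_certificate at the full chain file, not just the leaf cert, or some clients will reject it.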
Oh there’s definitely some elder gods involved with programming when I do it.


Not trying to be difficult, but that’s what get-member is for - it’ll dump all the properties for a given object.
I get it - it’s way different from bash - speaking as someone who has been using Linux since Debian Hamm. Side note, net installers over dialup really sucked.
I was originally forced to use powershell when I joined up with a virtualization team for work and they used PowerCLI.
It was bonkers how easy it was to get reproducible scripts bundled up for the more junior engineers.


The idea with powershell isn’t to be a text parser - so grep doesn’t really work. When you pass things through pipes, you get full objects with multiple properties; you can pick properties with select-object [-property], filter with where-object, or build more complex calculated expressions: https://4sysops.com/archives/add-a-calculated-property-with-select-object-in-powershell/
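To illustrate the objects-vs-text distinction for the Linux folks, here’s a rough Python analogy (purely an analogy - the names and values are made up):

```python
# PowerShell pipes whole objects down the pipeline; filtering is
# attribute access, not regex over text lines.
from types import SimpleNamespace

processes = [
    SimpleNamespace(name="nginx", cpu=2.5),
    SimpleNamespace(name="sshd", cpu=0.1),
    SimpleNamespace(name="postgres", cpu=7.3),
]

# Like: $procs | Where-Object { $_.cpu -gt 1 } | Select-Object name
busy = [p.name for p in processes if p.cpu > 1]
print(busy)  # ['nginx', 'postgres']
```

With grep you’d be pattern-matching a text column and hoping the layout never changes; here the property survives the pipeline intact.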


We were visiting for about a week and I think it took three separate days, about 20 minutes each day before she felt comfortable doing the VPN stuff herself.
It was definitely painful, but if you’re patient, it’s doable.
Good luck with whichever option you choose!


Speaking as someone who has recently taken on far-remote support (about a 22-hour drive away) for a MIL, the best thing you could do is set up a VPN.
For me, I’m still on Plex with a very old lifetime account with my MIL using a dedicated user account - that access is over the Internet. The VPN is to provide access to Overseerr so that she can do things like request specific movies/TV shows without having to email/call.
It’s not perfect - one day I woke up to 26 seasons of “Into the Country”, but it works fairly well.
I sat down with her one day while visiting about a year or so ago and walked her through connecting to the VPN, then getting to the hosted site, then disconnecting from the VPN - basically running drills and making her take notes until she felt she could do it by herself.


No, see, if we just give it all the energy, burn our skies and boil our oceans to make AI better, then it’ll for sure tell us how to unfuck everything.
/S
I use netbox too - and if you’re careful about it, you can actually use terraform to create the netbox details. I use one manifest file to handle deployment to Proxmox, set up DNS in PowerDNS, and create the relevant netbox entries.


I bought a car that comes with a “free” 300k/30-year warranty, but only if I do oil changes every 4k miles or 3 months. Maybe this guy has something similar?
For me, I may try and keep it up for a bit, but driving to one particular dealer every 3 months just to get a ridiculous warranty that will probably never actually pay out isn’t worth it.


Also of note - if you’re using docker (and Linux), make sure the user ID/group ID match across everything to eliminate any permissions issues.
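As a sketch, in a compose file that means pinning the container to the IDs that own the files on the host (the 1000:1000 here is an assumption - check your real values with `id -u` and `id -g`):

```yaml
services:
  web:
    image: nginx:alpine              # example image, not a specific setup
    user: "1000:1000"                # match the host owner of ./data
    volumes:
      - ./data:/usr/share/nginx/html:ro
```

If the container writes as a different UID than the host user, you end up with root-owned files in your bind mounts that you can’t edit without sudo.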


Not really, but I can give you my reasons for doing so. Know that you’ll need some shared storage (NFS, CIFS, etc) to take full advantage of the cluster.
I hope that helps give some reasons for doing a cluster, and apologies for not replying immediately. I’m happy to share more about my homelab/answer other questions about my setup.


Those are beasts! My homelab has three of them in a Proxmox cluster. I love that for not a ton of extra money you can throw in a PCIe expansion slot and the power consumption for all three is less than my second hand Dell Tower server.
They’re actually about the same heat-wise. I use some relatively cheap NICs based on the Intel 82599 controller and 10G SFP+ modules.
It’s not a bad setup, I wish there was a second NVMe slot in my M920q boxes, but what are you going to do?