• 0 Posts
  • 66 Comments
Joined 2 years ago
Cake day: December 14th, 2023

  • If you search for pfSense alias script, you’ll find examples of updating aliases from a script, so you’d only need to write the part that gets the hostnames. Since it sounds like the hostnames are unpredictable, that may be hard: the only way to get them on the fly is to listen for what hostnames clients on the LAN are resolving, probably by hooking into unbound or whatever. If you can share what the service is, it would be easier to tell whether there’s a shortcut, like if all the subdomains always sit in the same CIDR and one of the hostnames is predictable (or if the subdomains share a CIDR with the main domain, in which case the script can just look up the main domain’s CIDR). Another possibly easier alternative is an API that searches the certificate transparency logs for the main domain, which would reveal every subdomain that has an SSL certificate (see the sketch below). You could then load all of those subdomains into the alias and let pfSense look up the IPs.

    I would investigate whether the IPs of each subdomain follow a pattern, like a particular CIDR or a unique ASN, because reacting to DNS lookups in real time will probably mean some lag between the first request and the routing being updated, compared to a solution that can proactively route all relevant CIDRs, or all CIDRs assigned to an ASN.
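
    Something like this rough sketch if you go the CT log route: crt.sh exposes an unofficial JSON output you can query (example.com is a placeholder, and the output format isn’t guaranteed stable). It resolves whatever it finds and buckets the addresses by /24 so you can see whether they cluster:

    ```python
    # Rough sketch: pull subdomains from certificate transparency logs via
    # crt.sh's unofficial JSON endpoint, resolve them, and group the results
    # by /24 to see whether they cluster into a few routable CIDRs.
    # "example.com" is a placeholder for the actual service's domain.
    import json
    import socket
    import urllib.request
    from collections import defaultdict
    from ipaddress import ip_network

    DOMAIN = "example.com"  # placeholder

    def ct_subdomains(domain: str) -> set[str]:
        url = f"https://crt.sh/?q=%25.{domain}&output=json"
        with urllib.request.urlopen(url, timeout=30) as resp:
            entries = json.load(resp)
        names = set()
        for entry in entries:
            # name_value can hold several names separated by newlines
            for name in entry.get("name_value", "").splitlines():
                name = name.strip().lstrip("*.")
                if name.endswith(domain):
                    names.add(name)
        return names

    def group_by_net(hostnames: set[str]) -> dict:
        nets = defaultdict(set)
        for host in hostnames:
            try:
                _, _, ips = socket.gethostbyname_ex(host)
            except socket.gaierror:
                continue
            for ip in ips:
                # bucket each address into its /24 to spot shared ranges
                nets[ip_network(f"{ip}/24", strict=False)].add(host)
        return nets

    if __name__ == "__main__":
        subs = ct_subdomains(DOMAIN)
        print(f"{len(subs)} subdomains found in CT logs")
        for net, hosts in sorted(group_by_net(subs).items(), key=lambda kv: str(kv[0])):
            print(net, "->", len(hosts), "hosts")
    ```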


  • I think the way people do it is with a script that gets the hostnames and updates the alias, then scheduling it in pfSense. I’ve also seen ASN-based routing done with a script, but that only works for large services that run their own AS. If the service is large enough, it might predictably use IPs from the same CIDR, so if you spend some time collecting the relevant IPs (sketch below), you might find that even when the hostnames are new and random, they always resolve to the same pool of addresses. That’s the lazy way I did selective routing to GitHub, since it was always the same subnet.
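
    The collection part can be as dumb as a scheduled script that accumulates every address the hostnames resolve to and prints the /24s they fall into. The hostname list and state file here are placeholders:

    ```python
    # Rough sketch of the "collect IPs over time" approach: run this on a
    # schedule, accumulate every address the hostnames resolve to, and see
    # whether they settle into a small set of subnets worth routing.
    import json
    import socket
    from ipaddress import ip_network
    from pathlib import Path

    HOSTNAMES = ["example-service.example.com"]  # placeholder hostnames
    STATE = Path("seen_ips.json")  # placeholder state file

    def resolve_all(hostnames):
        ips = set()
        for host in hostnames:
            try:
                ips.update(socket.gethostbyname_ex(host)[2])
            except socket.gaierror:
                pass
        return ips

    seen = set(json.loads(STATE.read_text())) if STATE.exists() else set()
    seen |= resolve_all(HOSTNAMES)
    STATE.write_text(json.dumps(sorted(seen)))

    # Summarize as /24s; if runs keep landing in the same few networks,
    # those CIDRs can go straight into a pfSense alias.
    nets = {ip_network(f"{ip}/24", strict=False) for ip in seen}
    for net in sorted(nets, key=str):
        print(net)
    ```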



  • My homelab has been mostly on autopilot for a while. A 6-bay Synology runs most of the lighter-weight Docker stuff (arr stack, Immich, etc.) and an Intel NUC runs the heavy stuff (Quick Sync transcodes for Plex + Jellyfin, Ollama). Both connect to DigitalOcean via WireGuard for the reverse proxy, due to CGNAT.

    I had my router SSD either die or get corrupted this past week; I haven’t looked much at the old SSD beyond trying to extract the config off of it. I ended up just fresh-installing OPNsense because I didn’t have any recent backups (my Synology and NUC back up to rsync.net, but I haven’t gotten around to automated backups for my router, since it’s basically a plain config, or for my cloud reverse proxy, which is just a basic docker compose plus a small haproxy config). Luckily, since my homelab reaches out to the cloud reverse proxy, there’s basically no important config on my router anymore; the boxes just need DHCP and a connection.

    Besides that the arrstack just chugs along on its own.

    I recently figured out I can load Jellyfin playback URLs into VRChat video players, either as a direct stream or through the transcoding pipeline as an m3u8 that live-transcodes based on the URL parameters you set. This is great because of the way watch parties work in VRChat: everyone in an instance loads the same URL pasted into the media player and playback is synced. That means you need a publicly accessible URL (preferably with a token of some sort) that can be loaded by an arbitrary number of unique IP addresses simultaneously, which I don’t think is doable with Plex.

    I’m now working on a little web app that lets me log into Jellyfin, search/browse media, and generate the links with arbitrary or pre-set transcode settings for easy copy/pasting into VRChat. It’s needed because Jellyfin only provides the original file without transcoding when you use the “copy stream” option, so I believe the only way to get a transcoded stream URL currently is to set the web interface to specific settings and grab the URL from the network tab. But that doesn’t let you set arbitrary things like codecs, subtitle burn-in, or overriding what it thinks you support. So a simple app that constructs the URL (rough sketch below) will make VRChat watch parties a lot easier.
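
    For reference, the URL construction itself is roughly this. The endpoint and parameter names are my assumptions from watching what the web client sends, so verify them against a URL grabbed from the network tab on your server version:

    ```python
    # Minimal sketch of the URL-builder idea: compose a Jellyfin HLS URL with
    # explicit transcode parameters. Endpoint and parameter names below are
    # assumptions based on what the web client appears to send; confirm them
    # against a real URL captured from the browser's network tab.
    from urllib.parse import urlencode

    def jellyfin_hls_url(server, item_id, api_key, **overrides):
        params = {
            "api_key": api_key,
            "VideoCodec": "h264",
            "AudioCodec": "aac",
            "VideoBitrate": 8_000_000,
            "AudioBitrate": 192_000,
            "SubtitleMethod": "Encode",  # burn subtitles into the video
        }
        params.update(overrides)  # e.g. SubtitleStreamIndex=2, MaxWidth=1280
        return f"{server}/Videos/{item_id}/main.m3u8?{urlencode(params)}"

    # Hypothetical usage: paste the result into a VRChat video player.
    print(jellyfin_hls_url(
        "https://jf.example.com",  # placeholder server URL
        "f0e1d2c3",                # placeholder item id
        "MY_API_KEY",              # placeholder token
        SubtitleStreamIndex=0,
    ))
    ```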





  • Imo that’s perfectly fine and not idiotic if you have a static IP, no ISP-blocked ports (or don’t mind using alt ports), and don’t mind people who find your domain learning your IP.

    I did basically that when I had a fiber line, though I later added a local haproxy in front to handle additional subdomains. I feel like people gravitate towards recommending that setup because it works regardless of the answers to the other questions, and even regardless of the asker’s security tolerance if they recommend access only over VPN.

    I have CGNAT now so reverse proxy in the cloud is my only option, but at least I’m free to reconfigure my LAN or uproot everything and plant it on any other LAN and it’ll all be fine.


  • This is 99% my setup, just with a Traefik container attached to my WireGuard container.

    Can recommend, especially because I can move apartments any time, don’t have to care about CGNAT (my current situation, which I predicted would be the case), and can switch to any backup uplink in a pinch by sticking my boxes on any network with DHCP that can reach the internet (like a 4G hotspot, or a NanoBeam pointed at a public wifi down the road), all without reconfiguring anything.



  • Immich is pretty good for this if you take pictures at each location. It has a global map that shows all your photos in a heatmap-style display, with a drawer that shows a grid of the photos within your viewport as you pan and zoom around. It doesn’t seem like you can view a specific album on the map currently, but you can at least filter the map to favorites or a date range.


  • I use a .dev and it just works with Let’s Encrypt. I don’t do anything special with wildcards; I just let Traefik request a cert for every subdomain I use and it works. I use the TLS challenge, which works on port 443, so I don’t think HSTS or port 80 matters for issuance, but I still forwarded port 80 so I can serve an http->https redirect (sketch below), since curl and probably other tools might not know about HSTS.
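
    Traefik serves the redirect for me, but for illustration, all the port 80 listener has to do is answer every plain-HTTP request with a 301 to the same host and path over https. A minimal stand-in sketch (needs root for port 80):

    ```python
    # Minimal http->https redirect, standing in for what Traefik does on
    # port 80 for clients (like curl) that ignore HSTS.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # redirect to the same host and path over https
            host = (self.headers.get("Host") or "localhost").split(":")[0]
            self.send_response(301)
            self.send_header("Location", f"https://{host}{self.path}")
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 80), RedirectHandler).serve_forever()
    ```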


  • Gotcha, thanks for the info! It sounds like I would be fine with oCIS or OpenCloud, but since my main use case and pain points are document editing, which is Collabora either way, it probably wouldn’t change much besides simplifying the Docker setup (I had to piece together a gross pile of nginx config from many forum help posts to get the Nextcloud FPM container working smoothly). But it already works, so unless it breaks there’s little incentive for me to change.


  • Ah I see, I guess that would at least help with the main UI, but I’m already using Collabora through the Collabora CODE server in Nextcloud, so it sounds like I’d probably have the same document editing experience with oCIS/OpenCloud. I used to use OnlyOffice, but after I tried out their mobile app, it started blocking me from editing documents in the Nextcloud app (which seemed to use the OnlyOffice web UI), so I was forced to switch unless I started paying for OnlyOffice.


  • What are the apps that you would miss? I basically only use my NC as a Google Drive and Docs replacement, so all it has to do is store docx files and let me edit them on desktop or mobile without being glitchy, and I’ve really wanted to consider oCIS or similar.

    That second requirement seems hard for me because of how complex office suites are, but NC is driving me to my wit’s end with how slow and error-prone it is, and how glitchy the NC office UI is (like glitches when selecting text, or randomly scrolling you to the beginning).