

Ah, maybe it could detect the API level and show a generic error message if your API level meets the requirements but it still didn't work.


Hmm, unfortunately it doesn't work on my Pixel 5, which is running Android 16 and should be API level 36.


Makes sense. I share my media library with 10-15 friends so there’s usually a few streams late at night, and scrubs, container updates, and backups run early morning at like 2-4am.


I got a few on my laptop but none on either of my long-running homelab boxes (70-80 days uptime). On my laptop they all seem related to espeak, the TTS program. Is there any pattern in what processes yours are from?


Dust and jank, you say? Behold, my old basement homelab from when I rented just outside Boston with a very permissive landlord who agreed to let me have Comcast gig pro fiber pulled into the basement, running off an outlet I installed without asking on a free slot in our breaker box. The dust was terrible, the rack was a hodgepodge, and I had to put up that sign because maintenance guys kept plugging their power tools into the UPS when I wasn't around and tripping it.

But Comcast fucked up the billing, and the 2gig + 1gig symmetric internet is still active to this day for free, which I left behind minimally working for the next tenants after parting out the rack. The tower by the side was a friend who wanted to colocate on my fiber, and I had some fun stuff like a slide-out VGA console. I also pulled Ethernet into every room, most of it terminated with nice wall plates and all bundled down to the rack, so with a house full of gamers you could have multiple people pulling a gig on a game download without anyone stepping on anyone else's toes.



My *arrstack DBs are part of my backed up portion, so they’ll remember what I have downloaded in my non-backed up portion.


Same here, ~30TB currently, but my personal artifacts portion is only like 2TB, which is very affordable with rsync.net. Conveniently, they have an alert setting for when less than X KB has changed in Y days. (I have my Synology set up to spit out daily security reports big enough to meet that threshold, so even if I don't change anything myself I won't get bugged.)


The most notable difference is that Meshtastic has range on the order of miles: at least a mile even with bad antennas, and with other nodes nearby to repeat your messages, 20 miles is not hard to do.


Are you using sonarr/radarr to do your renaming? I have mine set to patterns that put the release group at the end. It usually has no problem picking up release groups at the beginning (especially for anime, that seems to be pretty common), so by the time it’s auto imported, the filenames have been normalized to standard format with release group at the end.


I would replace it. Sometimes I push my luck, and for minor or unexpected errors I just clear the error and re-add the drive, but this many errors is a pretty solid sign the drive is on its way out.


Keep in mind that elementary doesn't provide an in-place upgrade path between major versions. I didn't realize this when I set it up for my parents, so they've been stuck on one major version, since I can't back up, fresh install, and restore remotely.


It looks like it's about helping to auto-deploy docker-compose.yml updates. So you can just push an updated docker-compose.yml to a repo and have all your machines update, instead of needing to go into each machine or set up something custom to do the same thing.
I already have container updates handled, but something like this would be great so that the single source of truth for my docker-compose.yml can be in a single repo.
I use gluetun to connect specific Docker containers to a VPN without interfering with other networking, since it's all self-contained. It also has lots of providers built in, which is convenient: you can just set the provider, your credentials, and your preferred region instead of needing to manually enter connection details or manage lists of servers (it automatically updates its own cached server list from your provider, through the VPN connection itself).
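For reference, the pattern looks roughly like this minimal compose sketch. The provider, key, and country values are placeholders, and exact environment variable names vary by provider and gluetun version, so check the gluetun wiki for your setup:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN           # needed so gluetun can manage the tunnel
    environment:
      - VPN_SERVICE_PROVIDER=mullvad   # placeholder: any built-in provider
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=xxxx     # placeholder credential
      - SERVER_COUNTRIES=Netherlands   # preferred region
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"    # share gluetun's network namespace
```

The second container shares gluetun's network namespace, so all of its traffic goes out through the tunnel without touching your other containers' networking.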
Another nice feature is that it supports scripts for port forwarding, which works out of the box for some providers. So it can automatically get the forwarded port and then execute a custom script to set that port in your torrent client, soulseek, or whatever.
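Hypothetically, that hook script can be as simple as one HTTP call to your torrent client's API. Here's a sketch for qBittorrent's WebUI API; it assumes the forwarded port arrives as the first argument and that the WebUI is on localhost:8080 with localhost auth bypass enabled (both are assumptions about your setup, not gluetun guarantees):

```python
#!/usr/bin/env python3
"""Sketch: push a forwarded port into qBittorrent's listen_port setting."""
import json
import sys
import urllib.parse
import urllib.request


def build_request(port: int, base: str = "http://localhost:8080") -> urllib.request.Request:
    # qBittorrent's setPreferences endpoint takes a url-encoded "json" form field.
    body = urllib.parse.urlencode({"json": json.dumps({"listen_port": port})}).encode()
    return urllib.request.Request(
        f"{base}/api/v2/app/setPreferences", data=body, method="POST"
    )


if __name__ == "__main__" and len(sys.argv) > 1:
    # Called with the forwarded port as argv[1], e.g. ./set_port.py 51413
    urllib.request.urlopen(build_request(int(sys.argv[1])))
```

The same shape works for anything with an HTTP API; for clients without one, the script would shell out to their CLI instead.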
I could just use a WireGuard or OpenVPN container, but this also makes it easy to hop between VPN providers just by swapping the connection details, regardless of whether a provider only supports WireGuard or only OpenVPN. Just makes it a little more universal.


Sounds like a job for a pair of second hand nanobeams or something similar.
I second the other commenter who suggested using WISP gear. If you have clear Fresnel zones it should work a treat.


I second this. Gluetun makes it so easy, working with docker’s internal networking is such a pain.


Luckily they're on 2.0.1 now, so there have been two stable versions by now.


Are external libraries maybe what you're looking for?
There’s already an issue open for it: https://github.com/immich-app/immich/issues/1713
Be sure to give it a thumbs up!


If you search for "pfsense alias script", you'll find examples of updating aliases from a script, so you'll only need to write the part that gets the hostnames. Since it sounds like the hostnames are unpredictable, that part might be hard: the only way to get them on the fly is to listen for what hostnames are being resolved by clients on the LAN, probably by hooking into unbound or whatever. If you can share what the service is, it would be easier to tell whether there's a shortcut, like if one of the hostnames is predictable, or if the subdomains are always in the same CIDR as the main domain, in which case the script can just look up the main domain's CIDR. Another, possibly easier, alternative is to find an API that lets you search the certificate transparency logs for the main domain, which would reveal all subdomains that have SSL certificates. You could then just load all those subdomains into the alias and let pfSense look up the IPs.

I would investigate whether the IPs of each subdomain fall into a particular CIDR or a unique ASN, because reacting to DNS lookups in real time will probably mean some lag between the first request and the routing being updated, compared to a solution that proactively routes all relevant CIDRs or all CIDRs assigned to an ASN.
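To illustrate the certificate transparency approach: crt.sh has a JSON endpoint you can query for a domain. This sketch assumes crt.sh's current response shape (a list of entries with a newline-separated "name_value" field), which isn't a stable documented API, so treat it as a starting point:

```python
"""Sketch: pull subdomains for a domain out of CT logs via crt.sh."""
import json
import urllib.request


def extract_subdomains(ct_entries: list[dict], domain: str) -> set[str]:
    """Dedupe hostnames from CT log entries, dropping wildcard names."""
    names = set()
    for entry in ct_entries:
        # "name_value" can hold several newline-separated hostnames per cert.
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lower()
            if name.endswith("." + domain) and not name.startswith("*"):
                names.add(name)
    return names


def fetch_subdomains(domain: str) -> set[str]:
    # %25 is a url-encoded "%", i.e. crt.sh's wildcard: every name under domain.
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url) as resp:
        return extract_subdomains(json.load(resp), domain)
```

You'd run something like this on a schedule and feed the resulting hostnames into the pfSense alias via one of those alias-update scripts, letting pfSense do the actual DNS resolution.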
Yeah, but the learn more link points to the getHdrSdrRatio documentation, which says it's available in API level 34. Maybe that error can only show if you're on < 34.