

Holy shit, this has every cert I’ve ever generated or renewed since 2015.


Well, that was fun! I’m confident this project isn’t malicious. It’s for sure coded using AI, and I think that’s what triggered a smear campaign. Judging by that removed Reddit post, it looks like there’s just a downvote brigade out to get the project because the author admitted to using AI.
The only network traffic it made while I monitored it was local. Certainly nothing went to Asia.
I think it tries to solve a neat problem. There are so many features packed in that it’s obviously vibe coded, and that’s probably a huge turn-off for AI detractors. If you don’t care about that, I think you’re safe to give it a try.


Ok, so I ran the repo through an LLM to look for any suspicious requests, and it came back clean.
But it’s hella suspicious that the repo owner edited the issue away and closed it without a response.
It’s also hella suspicious that the user who reported that issue created their account yesterday.
I think I need to go for the nuclear option: pop a gummy, monitor the network traffic of the container, and see what it’s doing.
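The plan, roughly: sniff the Docker bridge and flag anything that isn’t headed somewhere private. A minimal sketch, assuming the container sits on the default docker0 bridge (check docker network inspect if yours doesn’t) and that scapy and root access are available on the host:

    from ipaddress import ip_address

    from scapy.all import IP, sniff

    def flag_external(pkt):
        # Print a line for any packet headed somewhere that isn't a private/local address.
        if IP not in pkt:
            return
        dst = ip_address(pkt[IP].dst)
        if not (dst.is_private or dst.is_loopback or dst.is_multicast):
            print(f"external traffic: {pkt[IP].src} -> {pkt[IP].dst}")

    # store=False keeps memory flat if this runs for a long while.
    sniff(iface="docker0", prn=flag_external, store=False)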


Ohh that’s suspicious. I’m going to kill mine for now and take a look later tonight. I’ll report back if I find anything interesting!


I think the author literally released it like 2 days ago, which is why there are no issues or PRs yet.
I installed it yesterday and have only fiddled around a little bit. I like that it pointed out a bunch of health issues with my Lidarr library, and I’ve been stuck on a side quest dealing with those.
If you want to explore it and see if anything seems malicious to you, I’d focus on code making requests, and review the sub-dependencies to see if any look sus. It should live entirely in your network and shouldn’t be making any external requests outside your server apart from the connections you set up (like last.fm).


I’m just using Unraid for the server, after many iterations (PhotonOS, VMware, bare-metal Windows Server, …). After many OSes, partial and complete hardware replacements, and general problems, I gave up trying to manage the base server too much. Backups are generally good enough if hardware fails or I break something.
The other side of this is that I’ve moved to having very, very little config on the server itself. Virtually everything of value is in a Docker container, with a single (admittedly way too large) docker compose file that describes all the services.
I think this is the ideal approach for how I use a home server. Your mileage may vary, but I’ve learned the hard way that it’s really hard to maintain a server over the very long term without also marrying yourself to a specific hardware and OS configuration.


extension developers should be able to justify and explain the code they submit, within reason
I think this is the meat of how the policy will work. People can use AI or not. Nobody is going to know. But if someone slops in a giant submission and can’t explain why any of the code exists, it needs to go in the garbage.
Too many people think because something finally “works”, it’s good. Once your AI has written code that seems to work, that’s supposed to be when the human starts their work. You’re not done. You’re not almost done. You have a working prototype that you now need to turn into something of value.


I do backups to a Raspberry Pi with a 1TB SD card and leave it on all the time. The power draw is very small and, I think, reasonable for the value of offsite backups.
My personal experience with WOL (or anything related to the power state of computers) is that it’s not reliable enough for something offsite. If you can set something up that’s stable, awesome, but if your backup server is down and you need to travel to it, that suuuucks.


I found code that calculated a single column in an HTML table. It was “last record created on”.
The algorithm was basically:
foreach account group
    foreach account in the account group
        foreach record in account.records
            if record.date > maxdate
                maxdate = record.date
It basically loaded every database record (the basic unit of record in this DATA COLLECTION SYSTEM) to find the newest one.
Customers couldn’t understand why the page took a minute to load.
It was easily replaced with a SQL query to get the max and it dropped down to a few ms.
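For the curious, the fix was conceptually just pushing the aggregation into the database instead of the app. A minimal sketch of the idea, with sqlite and made-up table/column names standing in for the real schema:

    import sqlite3

    conn = sqlite3.connect("example.db")

    # Hypothetical schema, just to illustrate the shape of the fix.
    conn.execute("CREATE TABLE IF NOT EXISTS records (account_id INTEGER, created_on TEXT)")

    # Let the database find the newest record instead of loading every row into the app.
    row = conn.execute("SELECT MAX(created_on) FROM records").fetchone()
    print("last record created on:", row[0])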
The code was so hilariously stupid I left it commented out in the code so future developers could understand who built what they are maintaining.
“So what was the problem in the end?”
“Man, I don’t fucking know.”


Replaced by AI, ironically.


They have (had?) a fairly generous free tier that works well for people starting out.
I ended up buying a license after evaluating it because the UI provides everything I reasonably want to do, it’s fundamentally a Linux server so I can change what I need, and it requires virtually zero fucking around to get started and keep it running.
I guess the short answer is: it ticks a lot of boxes.
I set up Syncthing using the docker image from the Unraid “store” and it works great.
I’m not in love with the clients (especially on Windows), but they seem to work pretty well once your setup is stable.


White cables also transmit slower in the dark. As soon as the cabinet is closed the data is going to slow way down with only the dim glow of the LEDs of the equipment acting to accelerate packets.
“Hey why is it taking you so long to review my PR?”


The thing I hate most about rsync is that I always fumble to get the right syntax and flags.
This is a problem because once it’s working I never have to touch it ever again; it just works and keeps working, so there’s never enough repetition to memorize the usage.
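For my own future reference, this is roughly the invocation I always end up re-deriving (paths are placeholders, rsync and ssh assumed to be installed, and the flag set is just a common starting point, not gospel):

    import subprocess

    # The flags I always have to look up:
    #   -a        archive mode (recursive, preserves perms/times/symlinks)
    #   -v        verbose
    #   -z        compress in transit
    #   --delete  remove files on the destination that no longer exist on the source
    #   -e ssh    run the transfer over ssh
    cmd = [
        "rsync", "-avz", "--delete",
        "-e", "ssh",
        "/mnt/user/media/",               # placeholder source (trailing slash = copy contents)
        "backup-host:/mnt/backup/media",  # placeholder destination
    ]

    subprocess.run(cmd, check=True)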


Hours to get a response? So it’s gpt-5?


That’s how I’d answer if I set something up years ago and it was stable and never required me to come tinker with it.
This is a good compromise. When I was tight on backup space, I just had a “backup” script that ran nightly and wrote all the media file names to a text file and pushed that to my backup.
It would mean tons of redownloading if my storage array failed, but it was preferable to spending hundreds of dollars I didn’t have on new hardware.
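Roughly what the script looked like, sketched from memory with placeholder paths standing in for your library and wherever you push the manifest:

    from pathlib import Path

    MEDIA_ROOT = Path("/mnt/user/media")                 # placeholder library path
    MANIFEST = Path("/mnt/user/backup/media-files.txt")  # placeholder output that gets pushed offsite

    # Write every file path, relative to the library root, one per line.
    with MANIFEST.open("w") as out:
        for path in sorted(MEDIA_ROOT.rglob("*")):
            if path.is_file():
                out.write(str(path.relative_to(MEDIA_ROOT)) + "\n")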