It does kinda feel that way, doesn’t it?
Funny that predictive text seems to be more advanced in this instance, but I suppose this is one of those scenarios where you want to make sure you get it right.
I use Gitea myself, and when the big dust-up over the backing company happened, I didn’t feel there was a big enough reason to migrate away. Just because they could do something wasn’t enough of a reason for me. Sure, it’s great that there’s a fork I could switch to, but as of today I don’t see a reason to.
I would also second Hugo, which I use for my personal site and blog (which I haven’t updated in a long time). The nice thing is that there’s very little to keep on top of update-wise, unlike something like WordPress, which was notorious for becoming vulnerable if left unmaintained. With Hugo it’s mostly a matter of watching out for old themes with vulnerable JavaScript.
Another popular option is Jekyll, and I honestly can’t remember why I picked Hugo over it, but if you don’t need dynamic content, why make things more complex?
I use apt-cacher-ng. Most of my use case, though, is caching packages for Docker image builds, as I build up to 200+ images daily. In reality, I have aggressive image caching, so I don’t actually build anywhere close to that many each day, but the stats are impressive: 8.1 GB fetched from the internet versus 108 GB served from the acng instance, according to the recent-history section of its stats page.
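For anyone curious what that looks like in practice, here’s the rough idea; the hostname is a placeholder for wherever your acng instance lives (3142 is its default port):

```sh
# Sketch only: have apt pull .debs through apt-cacher-ng.
# "acng.lan" is a made-up hostname; 3142 is apt-cacher-ng's default port.
# In a Dockerfile, before any apt-get calls:
#   RUN echo 'Acquire::http::Proxy "http://acng.lan:3142";' > /etc/apt/apt.conf.d/01proxy
# Or one-off on a host, without baking the proxy into anything:
apt-get -o Acquire::http::Proxy="http://acng.lan:3142" update
```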
Ah, you mean using Obsidian Sync and putting the synced folder inside another sync service. That makes sense. I was only thinking about why you’d sync a non-Obsidian-Sync folder with another shared-folder tool.
Just curious - why do you say that?
I have two internet connections: one is fiber and the other is cable. The cable is the backup connection and is a lower-tier offering with a 1.2 TB/month cap, while my primary fiber is 1 Gbps symmetrical with no data cap. I use pfSense to handle failover in case of an outage.
The thing these brilliant people making these decisions don’t realize is that they’re gonna lose exactly the people they don’t want to lose. The valuable people can easily find other jobs; the people who can’t will do whatever it takes to stay.
I also use acme.sh. It has worked great for me and was dead simple to use. It’s super flexible in what it can do, from just renewing certs to full web server integration. Love the simple-to-use hooks too.
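If it helps, a typical flow looks something like this; the domain, file paths, and reload command are only examples, not my actual setup:

```sh
# Issue a cert via webroot validation, then install it where the web server
# expects it, with a hook that reloads nginx after every renewal.
acme.sh --issue -d example.com -w /var/www/example.com
acme.sh --install-cert -d example.com \
  --key-file       /etc/nginx/ssl/example.com.key \
  --fullchain-file /etc/nginx/ssl/example.com.pem \
  --reloadcmd      "systemctl reload nginx"
```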
Check out Plexamp, the Plex music streaming client.
I’m not aware of a way to lock an entire system to a major.minor version with Debian, only holding individual packages. What exact version is your `base-files`? The full string matters.

You could check to see if anything is held with `apt-mark showhold`.
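Something like this would show both, assuming a stock apt setup:

```sh
dpkg -l base-files    # exact version string of the base-files package
apt-mark showhold     # lists any packages being held back
```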
It is possible that the mirror in your sources.list stopped syncing, so to your system it looks like there are no updates. What mirror is your system pointed at?
So 12.1 is out, but have you upgraded any of your packages yet? The `/etc/debian_version` file comes from the `base-files` package. On my up-to-date system, the file shows `12.1` and the package version is `12.4+deb12u1`, as I can see from `dpkg -l base-files`.

Make sure to do an `apt update` and then an `apt upgrade -s` to do a dry run and see what packages would be upgraded. I’m guessing the `base-files` package hasn’t been updated.
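Put together, the check looks roughly like this (sudo assumed for the update):

```sh
cat /etc/debian_version              # point release the file claims
sudo apt update
apt upgrade -s | grep -i base-files  # dry run: would base-files be upgraded?
```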
I use Homer. Really simple, basic config, and it looks nice. The stats are pretty cool for certain integrations and are easy to add - I’ve added a few myself for services that didn’t have them. Only issue is slow PR review.
I’m in a similar boat, except I just run everything in standard Docker containers, but I do use Telegraf, Influx, and Grafana for everything. I’ve gone mostly to Discord notifications for any alerts. If I run into a problem scenario, I figure out how to monitor it, add it via Telegraf, and add an alert. I’m still just using Grafana alerts, but it works fine for my home lab.
Even better if I can automate fixes to those problems. One of the best things I did was monitoring all of my network devices and all major hops. If I have internet or network issues, I know exactly where the problem is without having to troubleshoot. Lots of dpinger and shell scripts to feed data into Telegraf.
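As a rough illustration of the shell-script side (the hop IPs and measurement name are made up, and my real scripts read from dpinger rather than plain ping), the idea is to emit InfluxDB line protocol that a Telegraf `[[inputs.exec]]` plugin with `data_format = "influx"` picks up:

```sh
#!/bin/sh
# Sketch: ping each hop and print one line of InfluxDB line protocol per host.
for host in 192.168.1.1 10.0.0.1 1.1.1.1; do
    # $5 of the "rtt min/avg/max/mdev" summary line is the average RTT in ms
    rtt=$(ping -c 3 -q "$host" | awk -F'/' '/^rtt|^round-trip/ {print $5}')
    if [ -n "$rtt" ]; then
        echo "hop_latency,host=$host avg_ms=$rtt"
    else
        echo "hop_latency,host=$host avg_ms=-1"   # flag unreachable hops
    fi
done
```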
You can do TCP proxying with nginx, but many of the features you get in HAProxy are behind the paywall. In nginx, layer 4 connections are handled through streams, and you can do both TCP and UDP. I stick with HAProxy for TCP streams with very few exceptions. HAProxy is definitely more robust for situations where you have a pool of upstream servers; for a single upstream instance, nginx isn’t terrible. Most of the features I’d want for finer control over failover and balancing aren’t available in open source nginx.
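To give a concrete idea of the stream side, a minimal layer 4 proxy looks something like this; the upstream address and port are placeholders, and the `stream` block has to live at the top level of nginx.conf (next to `http {}`), not inside it:

```sh
# Sketch only: append a minimal stream proxy to nginx.conf, assuming a single
# TCP upstream at 10.0.0.10:5432 (both made up). Add "udp" after the listen
# port to proxy UDP instead.
cat >> /etc/nginx/nginx.conf <<'EOF'
stream {
    upstream db_backend {
        server 10.0.0.10:5432;   # single upstream; add more server lines for a pool
    }
    server {
        listen 5432;
        proxy_pass db_backend;
    }
}
EOF
nginx -t && systemctl reload nginx
```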
Yup, you just figured it out. It’s indeed about productivity software.
Gordon Ramsay was spot on