

I forgot: are Lemmy’s active and hot sorts chronological? They’re pretty decent, but I do find that stale content gets stuck on one sort and not the other.
Tbh, I haven’t really had this issue in a few weeks. I’m tempted to think it’s usage-related, and could possibly indicate that my memory allocation for the DB is still too high.
You can if you want. Reply here with the link if you do (or mention me if that’s a thing on Lemmy).
Yeah, mine have technically happened after reboots, although things typically take at least a few days for the problem to creep up. This past time, I got a whole week in before things went to crap.
I did that a while ago, and unfortunately, it didn’t really help. I don’t think it’s an issue of RAM, but rather a daemon or something periodically going nuclear with resource utilization. A configuration issue, perhaps?
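For what it’s worth, the way I’d try to catch a daemon that periodically goes nuclear is a periodic snapshot of the top consumers. Here’s a rough sketch; the log path and the idea of running it from cron or a systemd timer are just my assumptions, adjust to taste:

```shell
# Hypothetical spike-watch: one-shot snapshot of the top memory and CPU
# consumers. Run it every minute or so (cron/systemd timer) so there's
# history to read back after the next spike hits.
{
  date
  echo "--- top memory ---"
  ps aux --sort=-%mem | head -n 6
  echo "--- top CPU ---"
  ps aux --sort=-%cpu | head -n 6
  echo
} >> "$HOME/spike-watch.log"
```

When the instance next falls over, the log should show which process was ballooning right before it.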
The problem is that an update inherently involves restarting everything, which tends to solve the problem anyway. Whether the update fixed things or the restart just papered over them temporarily is something you’ll only find out after a few days.
I’ll save this to look at later, but I did use PGTune to set my total RAM allocation for PostgreSQL to be 1.5GB instead of 2. I thought this solved the problem initially, but the problem is back and my config is still at 1.5GB (set in MB to something like 1536 MB, to avoid confusion).
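For reference, the relevant postgresql.conf knobs end up looking something like this. These numbers are only in the ballpark of what PGTune suggests for a ~1.5GB budget on its web-app profile; the exact values depend on what you feed it, so treat this as a sketch, not my actual config:

```
# postgresql.conf — rough PGTune-style output for a ~1.5GB budget (assumed values)
shared_buffers = 384MB            # ~25% of the memory budget
effective_cache_size = 1152MB     # ~75% of the memory budget
maintenance_work_mem = 96MB
work_mem = 4MB                    # per sort/hash, per connection — keep this small
checkpoint_completion_target = 0.9
random_page_cost = 1.1            # assumes SSD storage
```

Note that `shared_buffers` is not a hard cap on total PostgreSQL memory; `work_mem` is allocated per operation, so a burst of concurrent queries can blow well past the nominal budget.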
This issue occurred a few weeks ago as well, even when we had very little traffic. Our traffic is still peanuts compared with other instances.
Oh, and for completeness:
We’ve deleted the vast majority of the spam bots that spammed our instance, are currently on closed registration with applications, and have had no anomalous activity since.
Our server is essentially always at 50% memory (1GB/2GB), 10% CPU (2 vCPUs), and 30% disk (15-20GB/60GB) until a spike. Disk utilization does not change during a spike.
Our instance is relatively quiet, and we probably have no more than ten truly active users at this point. We have a potential uptick in membership, but this is still relatively slow and negligible.
This issue has happened before, but I assumed it was fixed when I changed the PostgreSQL configuration to utilize less RAM. This is still the longest lead-up time before the spikes started.
When the spike resolves itself, the instance works as expected. The service interruptions seem to stem from a drastic increase in resource utilization, which could be caused by some software component I’m not aware of. I used the Ansible install for Lemmy, and have only modified certain configuration files as required. For the most part, I’ve only raised client_max_body_size in the nginx configs to allow larger images, and added settings for an SMTP relay to the main config.hjson file. The spikes occurred before these changes, which leads me to believe they are caused by something I have not yet explored.
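For completeness, the SMTP relay change in config.hjson is roughly this shape. Field names are from memory of Lemmy’s documented defaults and the host/credentials are obviously placeholders, so double-check against the config reference for your version:

```
{
  # config.hjson — email section for an external SMTP relay (placeholder values)
  email: {
    smtp_server: "smtp.example.com:587"
    smtp_login: "noreply@example.com"
    smtp_password: "changeme"
    smtp_from_address: "noreply@example.com"
    tls_type: "starttls"
  }
}
```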
These issues occurred on both 0.17.4 and 0.18.0, which seems to indicate it’s not a new issue stemming from a recent source code change.
The Ansible install does make things a lot simpler, but it’s still pretty involved if you’re new to self-hosting in general. For example, you might need to set up an SMTP relay if you can’t forward a workable port, and you’ll probably also want to change your nginx configs to allow uploading images larger than a single megabyte.
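The upload-size change is a one-liner in the nginx site config. The 25M figure is just my pick, not anything Lemmy prescribes:

```
# In the Lemmy nginx site config (server or location block):
# nginx caps request bodies at 1MB by default, which silently breaks
# larger image uploads.
client_max_body_size 25M;
```

Remember to reload nginx afterwards (`nginx -s reload` or via your service manager) for the change to take effect.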
Lemmy is pretty fun to host. Doubly so if you host a private instance with low latency; you’d basically be defederation-proof.
Nothing is wrong with making new rules, but they have to exist before you start enforcing them.
You know things are going to shit when they can’t even make up excuses for their administrative decisions that align with their policies. “You can’t take a subreddit NSFW if it was SFW before, and people aren’t expecting it.” That is nowhere in their policies, and many of the subreddits held votes to determine whether they should change their communities’ focus. I hope all of the new Lemmy instances and their admins, even if they acknowledge the importance of interpreting rules flexibly, realize that there is a finite limit to how far plain meaning can be twisted before you’re outright making shit up.
Does Lemmy have any tools to mass-delete new accounts within a time-frame?
I also don’t know if this is related or expected behavior, but this instance seems to be automatically banning user accounts on other instances. They do seem to be NSFW or related to inappropriate topics, but I had no idea that this was something Lemmy automatically did.
This worked. Thanks! Turns out my bootstrap style looked like shit, but at least I know where to put things!
Reddit is such a mess.
I wonder if it’s some strategic bullshit to try and scare people. Fuck it, most of those people are the kind who would enjoy using Lemmy anyway.
https://NormalCity.life is the instance I’m talking to you from as we speak, but honestly, visiting to see if anything strikes you as worth subscribing to from your existing account is great. We’re going to be expanding our community repertoire to feature more permutations of “tech and creativity,” and I’m currently writing a multi-post series on the Fundamentals of Lemmy.
I had no idea FOSS tax software was a thing. Huh. I’ll try and play around with it at some point and let you know.