

If you mean a 2.5" drive (laptop sized) then yes you can generally do that. 3.5" drives are usually 1" thick and won’t fit in a slim DVD drive slot.
For what it’s worth, there’s an FSF article that addresses this:
https://www.gnu.org/philosophy/programs-must-not-limit-freedom-to-run.html
Whether it’s persuasive is of course up to you to decide ;).
If Mozilla died would I quickly be finding a larger chunk of websites that aren’t supported?
Likely yes, as Google will keep enshittifying the web unless stopped by antitrust or whatever. Which isn’t looking so likely.
clubhouse.com? I don’t know anything else about it really.
I believe I was thinking of Clubhouse but I haven’t checked into it much.
No, Jitsi is a chat program. I must have been confusing Rumble with some other thing. But as with youtube, the video collection is much more important than the software. Releasing all the youtube software wouldn’t change youtube’s dominance even slightly.
Rumble is real time voice chat right? Closest I know to that is Jitsi Meet. For text chat there are many irc networks.
I just download the mp3 and play it with mplayer. Don’t need no apps.
50GB of flac = maybe 20GB of Vorbis amirite? Is that 450GB of flac in your screenshot? It would fit on a 256GB phone even without an SD card. A 512GB card is quite affordable these days. Just make sure to buy a phone with a slot, and think of it as next level degoogling ;).
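If you want to try the flac-to-Vorbis shrink, here's a minimal sketch assuming ffmpeg is installed; the `flac/` and `vorbis/` directory names are made up, adjust to taste:

```shell
# Hypothetical batch transcode of a FLAC library to Ogg Vorbis with ffmpeg.
# Quality 5 is roughly 160 kbps, which is where the ~2.5x shrink comes from.
command -v ffmpeg >/dev/null || { echo "install ffmpeg first"; exit 0; }
mkdir -p vorbis
for f in flac/*.flac; do
  [ -e "$f" ] || continue                       # skip if the glob matched nothing
  out="vorbis/$(basename "$f" .flac).ogg"
  ffmpeg -loglevel error -i "$f" -vn -c:a libvorbis -q:a 5 "$out"
done
```

`-vn` just keeps embedded cover art from being treated as a video stream.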
Yeah I know there’s lots of music in the world but who wants to listen to all of it on a moment’s notice anyway?
Can’t understand why this is interesting, as phones now have a lot of storage space, even the ones that don’t have SD card slots. Just store the music that interests you directly on the phone.
I haven’t looked in a few years but 20TB is probably plenty. I agree that Wikipedia lost its way once it got all that attention online and all that search traffic. Everyone should have their own copy of Wikipedia. I used to download the daily incremental data dumps but got tired of it. I still have a few TB of them around that I’ve been wanting to merge.
The text is in not-exactly-convenient database dumps (see other commenter’s link) and there are daily diffs (mostly bot noise), but then there are the images and other media, which are way up in the terabytes by now. There are some docs, maybe out of date, about how to run the software yourself. It’s written in PHP and it’s big and complicated.
How much do you expect to pay for the 24 NVMe disks?
It’s possible for a while, but it’s a whack-a-mole game if you’re doing anything they would care about, so you’ll have to keep moving it around. VPS forums will have some info.
Noo, really, idk what Disco was but tags and recommendations from other humans are plenty to find good AO3 fic to read. And AO3 itself has been getting hammered for months, presumably by corporate AI crawlers. A recommendation engine would also have to crawl AO3. That’s very difficult to do because of said hammering. Even the regular download feature barely works now if you use fanficfare for it.
Are you familiar with git hooks? See
https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks
Scroll to the part about server side hooks. The idea is to automatically propagate updates when you receive them. So git-level replication instead of rsync.
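For a concrete picture, here's a minimal local sketch of that hook-based replication using throwaway repos (all the paths and remote names are made up): a post-receive hook on the "primary" bare repo mirrors every accepted push on to a second repo.

```shell
#!/bin/sh
# Sketch: replicate pushes at the git level with a server-side hook.
set -e
tmp=$(mktemp -d)

git init -q --bare "$tmp/primary.git"
git init -q --bare "$tmp/mirror.git"

# Tell the primary where its mirror lives, then install the hook.
git --git-dir="$tmp/primary.git" remote add --mirror=push mirror "$tmp/mirror.git"
cat > "$tmp/primary.git/hooks/post-receive" <<'EOF'
#!/bin/sh
# Runs after a push is accepted; fan the refs out to the mirror.
git push --quiet --mirror mirror
EOF
chmod +x "$tmp/primary.git/hooks/post-receive"

# Any client push to the primary now lands on the mirror too.
git clone -q "$tmp/primary.git" "$tmp/work"
cd "$tmp/work"
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m "test"
git push -q origin "$(git symbolic-ref --short HEAD)"
```

With more than one mirror you'd loop over the remotes, and you'd want to decide what to do when a mirror push fails (queue and retry, alert, etc.).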
I see, fair enough. Replication is never instantaneous, so do you have definite bounds on how much latency you’ll accept? Do you really want independent git servers online? Most HA systems have a primary and a failover, so users only see one server. If you want to use Ceph, in practice all servers would be in the same DC. Is that ok?
I think I’d look in one of the many git books out there to see what they say about replication schemes. This sounds like something that must have been done before.
Why do you want 5 git servers instead of, say, 2? Are you after something more than high availability? Are you trying to run something like GitHub where some repos might have stupendous concurrent read traffic? What about update traffic?
What happens if the servers sometimes get out of sync for 0.5 sec or whatever, as long as each is in a consistent state at all times?
Anyway my first idea isn’t rsync, but rather server-side hooks that replicate pushes to the other servers, so the updates still look atomic to clients. Alternatively, use replicated storage like Ceph so you can quickly migrate failed servers. That’s a standard cloud hosting setup.
What real world workload do you have, that appeared suddenly enough that your devs couldn’t stay on top of it, and you find yourself seeking advice from us relatively clueless dweebs on Lemmy? It’s not a problem most git users deal with. Git is pretty fast and most users are ok with a single server and a backup.
I wonder if you could use HAProxy for that. It’s usually used with web servers. This is a pretty surprising request though, since git is pretty fast. Do you have an actual real world workload that needs such a setup? Otherwise why not just have a normal setup with one server being mirrored, and a failover IP as lots of VPS hosts can supply?
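For what the HAProxy idea might look like, here's a hypothetical config fragment that TCP-balances git-over-SSH across two backends (server names and addresses are made up):

```
# haproxy.cfg sketch: balance SSH/git traffic across two git boxes.
# Bind to a port other than 22 if the balancer box runs its own sshd.
defaults
    mode    tcp
    timeout connect 5s
    timeout client  1h
    timeout server  1h

frontend git_ssh
    bind :2222
    default_backend git_servers

backend git_servers
    balance roundrobin
    server git1 192.0.2.10:22 check
    server git2 192.0.2.11:22 check
```

Note this only spreads reads; writes would still need the replication piece so the backends don't diverge.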
And, can you use round robin DNS instead of a load balancer?
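Round robin DNS is just multiple A records on one name; a hypothetical zone-file fragment:

```
; one name, several A records; resolvers rotate the order,
; so clients spread across the servers. This gives load spreading,
; not failover -- a dead IP stays in the rotation until you pull it.
git.example.com.  300  IN  A  203.0.113.10
git.example.com.  300  IN  A  203.0.113.11
git.example.com.  300  IN  A  203.0.113.12
```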
Oh I see. Yeah DVD drives generally use the same SATA interface as hard drives.