

Just puts("I'm a teapot");
:)
This is mostly an IOPS-dependent answer. Do you have multiple hot services constantly hitting the disk? If so, it can be advantageous to split the heavy hitters across different disk controllers, which in high-redundancy situations means different dedicated pools. If it’s a bunch of services just reading, filesystems like ZFS use caching to almost completely eliminate disk thrashing.
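If you’re on Linux with OpenZFS, you can get a rough sense of how well the ARC is absorbing that read traffic. This is just a sketch; the arcstats path is an assumption that holds for OpenZFS on Linux but may differ elsewhere:

```shell
# Estimate the ZFS ARC hit ratio (Linux/OpenZFS; the /proc path is an
# assumption -- adjust if your platform exposes arcstats elsewhere).
awk '/^hits /{h=$3} /^misses /{m=$3} END{printf "ARC hit ratio: %.1f%%\n", 100*h/(h+m)}' \
    /proc/spl/kstat/zfs/arcstats
```

A high hit ratio means most of those reads never touch the disks at all.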
I just need a tiny phone that still has an incredible camera array, GPS, and music streaming. I think the Light Phone III is almost it, but they’re not quite there yet.
I use Actual, and my solution is just to report the change in my investments’ value at the end of each week as a transaction. It’s not great, but it gives me an opportunity to see trends in a different way and make adjustments feeling a little more informed. I even put my car in and just check KBB every year and update it. It helps with the year-end net worth evaluation, though it’s not the most flexible.
Sweet, cheaper stocks this week.
Yup. And this is why all these coupon sites exist. Absolutely none of them would be running this long if they weren’t profiting off it.
How does this compare to Kdenlive?
Oooh, fascinating! I’m gonna have to try that myself. Brings us back to the DOS days of computing.
The difference being that you can still do things like launch Steam games, as long as you have a DE installed, even if you’re using a TTY primarily.
Even better: after your machine boots up, press Ctrl+Alt+F3 through F6 to access TTY sessions, and then there is no mouse at all. Plenty of TUI apps work just fine there too.
What are the features you need from your host? If it’s just remote syncing, why not just set up a small Debian system and install git on it? You can manage security on the box itself. Do you need the overhead of GitLab at all?
I say this because I did try hosting my own GitLab, Gitea, Gogs, etc., and I found I never needed any of the features. The whole point was to have a single remote that can be backed up and redeployed easily in disaster situations; otherwise all my local work just needed simple tracking. I wrote a couple of scripts so my local machine can create new repos remotely, and I also set up SSH keys on the remote machine.
I don’t have a complicated setup, maybe you do, not sure. But I didn’t need the integrated features and overhead for solo self hosting.
For example, one of my local machine scripts just executes a couple of commands on the remote to create a new folder, cd into it, and run git init --bare. Then I can just clone the new project folder on the local machine and get started.
Yeah, real developers do git clean -dxf.
The older I get, the less time passes between starting a new project and reading the readme / manpages for a library.
If you don’t need to host remotely but can run locally, GPT4All is nice: it has several models to download and plug and play with, each with a description of its purpose, and it doesn’t require a GPU.
Huh, I’ve only heard business logic before.
Can you show where they’ve gone further than Apple’s Game Porting Toolkit or game translation layers? Genuinely curious, because I haven’t seen any comparison, but I do know several high-profile games have come to Apple Silicon recently.
Isn’t UTC meant to be… you know, universal?
I self-host services as much as possible for multiple reasons: learning, staying up to date with so many technologies through hands-on experience, and security and peace of mind. Knowing my 3-2-1 backup solution covers my entire infrastructure makes me feel far less pressure to hand my data to unknown entities, no matter how trustworthy, and gives me the peace of mind of controlling every step of the process and knowing how to troubleshoot and fix problems. I’m not an expert and rely heavily on online resources to get me to a comfortable spot, but I also don’t feel helpless when something breaks.
If the choice is between trusting an encrypted backup of all my sensitive passwords, passkeys, and recovery information to someone else’s server, or having to restore a machine, container, VM, etc. from a backup after a critical failure, I’ll choose the second, because no matter how strong the encryption, someone somewhere will be able to break it given enough time. I don’t care if accelerated or quantum attacks would take millennia; not having that payload out in the wild at all is the only way to prevent it from being cracked.
I really wish Rider would respect when I turn it off. It just keeps re-enabling itself.
I still think I’ll stick with nala as my apt front-end but hopefully this will be a more robust backend.