

Crossfading and normalization would both independently be dealbreakers for me. I can’t go back.
I would be genuinely surprised if fair use drew the line, for format-shifted, legally purchased media, at “remote watch-together” while leaving format-shifting and local watch-together intact.
If it were up to the studios’ interpretation of the law, you’d need to purchase a license for each person during local watch-together.
Agree in principle, but in practice:
parents who live across the state
plexamp for music
Outlook being on that list is crazy.
The proper DeepSeek R1 requires about 500 GB of RAM/VRAM to run, which is orders of magnitude more than modern phones have. The smaller models called “DeepSeek R1” are distilled variants, not the real DeepSeek model everyone is talking about.
I use Ansible on one of my side projects; I use Puppet at work. It’s the same reason I use raw Docker and not Rancher + RKE2… it’s not about learning the abstractions; it’s about learning the fundamentals. If I wanted a simple abstraction I’d have deployed TrueNAS and LinuxServer containers instead of Taco Bell programming everything myself.
Sure. I have an R630, called vacuum, that is configured as an NFS server and a Docker host. A script called install_vacuum.sh can, with a single command, build the server to my spec from a base install of Ubuntu 24.04. It has functions to install base packages from repositories, add new repositories, set up users, and create config files for NFS, SMB, fstab, crontab, etc. Once an NFS server exists on my network, any other server could be my Docker host.

My Docker host is set up from a script called install_containers.sh. As before, it does all the things to get me a basic Docker host, firewalled and configured for persistence via my NFS server. It also has functions to create and start Docker containers for all of my workflows (Plex, webserver, CA, etc.), and if those containers don’t exist, it builds a Docker image for each workflow from a standardized format: (you guessed it) a bash build script for the container. Cron on whatever host runs Docker rebuilds and updates the containers once a week; bare-metal servers update themselves nightly via unattended-upgrades, rebooting when necessary.
Basically, you break everything down into the simplest functions possible, define everything via variables in shared configuration files that every script sources before running, and have higher- and higher-level functions call the lower-level ones until a single function cascades into a functioning system. Does that make sense?
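A minimal sketch of the pattern (every name here is made up for illustration):

```
#!/usr/bin/env bash
# settings.sh is the shared config every script sources first
source ./settings.sh   # defines NFS_SERVER, NFS_EXPORT, etc.

install_base_packages() {
  apt-get update && apt-get install -y curl gnupg nfs-common
}

configure_nfs_mount() {
  # idempotent: only append the mount if it isn't already in fstab
  grep -q "${NFS_SERVER}:${NFS_EXPORT}" /etc/fstab ||
    echo "${NFS_SERVER}:${NFS_EXPORT} /srv/persist nfs defaults 0 0" >> /etc/fstab
  mount -a
}

build_docker_host() {
  # higher-level function that just calls the lower-level ones
  install_base_packages
  configure_nfs_mount
}

build_docker_host   # the single top-level call that cascades
```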
Have you started collecting your notes into scripts?
Not sure if many people do what I do, but instead of taking notes I write commented functions in bash. My philosophy is: if I can’t automate it, I don’t understand it. After a while you build up enough automation to build your workstations, your servers, all of your VMs and containers, your workflows, etc., and you can duplicate / redeploy them whenever required. One tarball and about 6 commands and I can build my entire home + homelab.
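For example, instead of a note on “how I set up a Samba share”, the note is a commented function (a toy sketch, not my actual script):

```
# note-to-self: minimal read-only Samba share; the comments are the documentation
setup_samba_share() {
  apt-get install -y samba
  # shares are defined in /etc/samba/smb.conf; append a read-only guest export
  cat >> /etc/samba/smb.conf <<'EOF'
[media]
   path = /srv/media
   read only = yes
   guest ok = yes
EOF
  systemctl restart smbd
}
```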
Nothing beats the bang/buck ratio of used enterprise hardware (always buy new drives, though, if you care about the data).
https://www.theserverstore.com/
https://www.serversupply.com/
https://www.servermonkey.com/
I’ve bought from all of these in the past. Personally I’m a fan of Dells, but there are arguments for just about any of the big three (Dell, HP, Supermicro).
Personally, my main server right now is an R630: 96 threads, 768 GB of RAM. With that many memory channels, not only can you run all of what you listed, you can even do medium-sized inferencing/diffusion if you’re interested in that sort of thing.
the best way to learn is by doing!
I just built my own automation around their official documentation; it’s fantastic.
https://www.wireguard.com/#conceptual-overview
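The whole manual bring-up the docs walk through fits in a handful of commands (a sketch; your wg0.conf contents depend on your setup):

```
# generate a keypair (umask keeps the private key unreadable by others)
umask 077
wg genkey | tee privatekey | wg pubkey > publickey

# write /etc/wireguard/wg0.conf with your [Interface]/[Peer] sections, then:
wg-quick up wg0
wg show   # verify handshakes with your peers
```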
Vyatta and Vyatta-based routers (EdgeRouter, etc.) are good enough for the average consumer, I would say. But if we’re deep enough in the weeds to be arguing the pros and cons of raw WireGuard vs Tailscale, I think we’re certainly past accepting a budget consumer router as acceptably meeting these and other needs.
Also, you don’t need port forwarding and DDNS for internal routing. My phone and laptop both have automation in place to switch WireGuard profiles based on the network SSID: at home, all traffic is routed locally; outside of my network, everything goes through DDNS/port forwarding.
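On Linux, the switching logic can be as simple as the sketch below (profile name and SSID are placeholders; on phones an automation app does the equivalent):

```
#!/usr/bin/env bash
# bring the tunnel up or down based on the current Wi-Fi SSID
HOME_SSID="MyHomeNetwork"        # placeholder
current=$(iwgetid -r)            # prints the SSID of the active connection

if [ "$current" = "$HOME_SSID" ]; then
  wg-quick down wg-remote 2>/dev/null   # at home: route locally, no tunnel
else
  wg-quick up wg-remote                 # away: profile points at the DDNS name
fi
```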
If you’re really paranoid about it, you could always skip the port-forward route and set up a WireGuard-based mesh yourself, using an external VPS as a relay. That way you don’t have to open anything directly, and internal traffic still routes when you don’t have an internet connection at home. It’s basically what Tailscale is, except you control the keys, you have better insight into who is using them, and the authentication paradigm is reversed from external to internal.
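The relay is just an ordinary WireGuard peer that forwards between the others. A sketch of the VPS side (keys and subnets are placeholders):

```
# /etc/wireguard/wg0.conf on the VPS relay
[Interface]
Address = 10.100.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>
# allow peers to reach each other through the relay
PostUp = sysctl -w net.ipv4.ip_forward=1

# the homelab
[Peer]
PublicKey = <homelab-public-key>
AllowedIPs = 10.100.0.2/32

# a laptop or phone
[Peer]
PublicKey = <laptop-public-key>
AllowedIPs = 10.100.0.3/32
```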
Tailscale proper gives you an external dependency (and a lot of security risk), but the underlying technology (WireGuard) does not have the same limitation. You should just deploy WireGuard yourself; it’s not as scary as it sounds.
Fail2ban and containers can be tricky, because under the hood, container network policies often insert themselves above host policies in iptables. The Docker documentation has a good write-up on how to solve this for their implementation:
https://docs.docker.com/engine/network/packet-filtering-firewalls/
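The short version: Docker’s forwarding rules accept traffic before the host’s INPUT chain ever sees it, so bans have to land in the DOCKER-USER chain. The manual equivalent looks like this (the IP is an example):

```
# drop a banned address before Docker's own rules accept the traffic
iptables -I DOCKER-USER -s 203.0.113.45 -j DROP

# fail2ban can be pointed at that chain per jail, e.g. in jail.local:
#   [myjail]
#   chain = DOCKER-USER
```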
For your use case specifically: if you’re using VMs only, you could run it within any VM that exposes traffic, but for containers you’ll have to run fail2ban on the host itself. I’m not sure how LXC handles this, but I assume it’s probably similar to Docker.
The simplest solution would be to just put something physical between your hypervisor and the internet (a Raspberry-Pi-based firewall, etc.).
+1 for cmk. Been using it at work for an entire data center plus thousands of endpoints, and I also use it for my 3-server homelab. It scales beautifully at any size.
That is usually more incompetence than malice. They write a game that requires different operation on AMD vs Nvidia devices and basically write:
```
if nvidia: do X
else if amd: do Y
else: crash
```
The idea being that if the check for AMD/Nvidia fails, there must be an issue with the check function; the developers didn’t consider the possibility of a non-AMD/Nvidia card. This was especially true of old games: there are a lot of 1990s-2000s titles that won’t run on modern cards or modern Windows because the developers didn’t program a failure mode of “just try it”.
You would expose a single physical port to multiple VLANs, and then bind multiple addresses to that single connected interface. Each service would then bind itself to the appropriate address rather than “*”.
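A sketch with iproute2 (interface names and subnets are made up):

```
# one physical NIC carrying two tagged VLANs
ip link add link eth0 name eth0.10 type vlan id 10
ip link add link eth0 name eth0.20 type vlan id 20
ip addr add 192.168.10.2/24 dev eth0.10
ip addr add 192.168.20.2/24 dev eth0.20
ip link set eth0.10 up
ip link set eth0.20 up

# then bind each service to its address instead of the wildcard, e.g. sshd:
#   ListenAddress 192.168.10.2
```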
You should consider reversing the roles: there’s no reason your homelab can’t be the client and your VPS the server. Once the WireGuard virtual network exists, traffic doesn’t really care which side was the client and which was the server. It saves you from opening a port to attackers on your home network.
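The one catch is that the homelab side has to keep the tunnel alive through NAT, which is what PersistentKeepalive is for. A sketch of the homelab config (all values are placeholders):

```
# /etc/wireguard/wg0.conf on the homelab: it dials out, so nothing is exposed
[Interface]
Address = 10.100.0.2/24
PrivateKey = <homelab-private-key>

[Peer]
# the VPS has the public endpoint and plays "server"
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.100.0.0/24
PersistentKeepalive = 25
```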
Thank you for letting me know what software not to use; good bot