

Also, there are some "former crypto miner" boards that are configured with SUPER wide slots for video cards exactly like this.
They’re great and cheap used because nobody wants them.
If I have to build a second one, that’s my next path.
I have some nvlinks on the way.
Sooooo I've got a friend that used PCIe-to-OCuLink and then back to PCIe to run the cards outside the case, but that's not what I do; that's just the more common approach.
You can also get PCIe extension cables, but they're pricey.
I stumbled upon a Cubix device by chance, which is a huge and really expensive PCIe bus extender that does some really fancy fucking switching. But I got it at a ridiculous price, and they're hard to come by.
If I do it right, I could host 10 cards total (2 in the machine and 8 in the Cubix).
This also means I'm running 3x 1600 W PSUs, and I'm most at risk of blowing breakers (adding a 240 V line is next lol).
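For reference, the rough breaker math, assuming typical US 120 V / 15 A household circuits (actual draw depends on load and PSU efficiency):

```python
# Back-of-envelope amperage for 3x 1600 W PSUs.
# Voltage and breaker rating are assumptions (typical US household circuit).
VOLTS = 120
PSU_WATTS = 1600
BREAKER_AMPS = 15

amps_per_psu = PSU_WATTS / VOLTS   # ~13.3 A: one PSU nearly maxes a 15 A breaker
total_amps = 3 * amps_per_psu      # 40 A total, hence spreading PSUs across circuits

print(f"{amps_per_psu:.1f} A per PSU, {total_amps:.0f} A total")
```

At 240 V the same 4800 W works out to half the amperage, which is why the 240 V line is the next step.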
I’m rocking 4 used ones from 4 different people.
So far, all good
You can't buy 3090s new anymore anyway.
4090s are twice as much for 15% better perf, and the 5090s will be ridiculously priced.
2x 3090 is more than enough for basic inference; I have more for training and fine-tuning.
You want Epyc/Threadripper, etc.
You want max pcie lanes.
Given the price of P40s on eBay vs the price you can get 3090s for, fuck the P40s; I'm rocking quad 3090s and they kick ass.
Also, Pascal is the OLDEST hardware supported… for how long?
Also, you'll want to look for strange, specific things to host multiple 3090s etc. on your motherboard. You want a lot of PCIe lanes from your chip and board, and you want Above 4G Decoding (fairly common in newer hardware).
……… what are you talking about?
The new modules can absolutely be updated independently of the kernel.
The modules need to be built against your version of the kernel, but MANY versions of the modules work with (and are compiled against) different kernel versions.
Just look at NVIDIA, a nearly identical version of this exact problem. They have MANY driver versions you can install at any given time for their cards.
Yes and those kernel modules that get loaded in to control hardware interfaces are often referred to as drivers.
Likely driver issues, hopefully they can get it fixed!
Intel did a great job with the Windows drivers last time; time will tell if they fix up the Linux ones.
Or you can do the DNS challenge for Let's Encrypt.
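As a sketch of the DNS-01 route with certbot: the Cloudflare plugin, domain, and credentials path below are placeholder examples, so swap in the plugin for your own DNS provider.

```shell
# DNS-01 challenge: certbot proves ownership via a TXT record, so nothing
# needs to be reachable on ports 80/443. Plugin and paths are examples.
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/certbot/cloudflare.ini \
  -d example.com \
  -d '*.example.com'
```

A side benefit: wildcard certs can only be issued via the DNS challenge.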
Check what your Docker subnets are, but that shouldn't conflict.
That sounds like your local network IPs conflicting with the default Docker IP addresses.
What is your router's subnet?
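One way to sanity-check that: Docker's default bridge is 172.17.0.0/16, and by default user-defined networks get carved out of the nearby 172.x ranges too. A quick overlap check with Python's stdlib, where the router subnet is a made-up example you'd replace with your own:

```python
# Check whether a LAN subnet collides with Docker's default bridge network.
# The router subnet below is a hypothetical example value.
import ipaddress

docker_bridge = ipaddress.ip_network("172.17.0.0/16")
router_subnet = ipaddress.ip_network("172.17.1.0/24")  # replace with yours

if router_subnet.overlaps(docker_bridge):
    print("conflict: pick a different Docker address pool")
else:
    print("no conflict")
```

If it does conflict, the fix is usually setting `default-address-pools` in Docker's `daemon.json` to a range your LAN doesn't use.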
Ahhhh now you’re talking kubernetes.
I mean you can do it with 2 machines and docker compose, but yeah.
If you have a docker compose file, you can just bring it to a new machine with the storage medium, hit "go", and it'll go.
That'll probably be enough for a home setup, with maybe an hour of downtime in a failure.
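What that looks like in practice, as a minimal sketch (the service, image, and mount path here are made up): everything stateful lives on the storage medium, so the stack moves with the disk.

```yaml
# Hypothetical single-service compose stack. The bind mount keeps all state
# on /mnt/storage, so "move disk + compose file, run `docker compose up -d`"
# is the whole migration.
services:
  app:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - /mnt/storage/app-data:/usr/share/nginx/html
    restart: unless-stopped
```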
If you want "always hot", Kubernetes is basically "multi-node Docker on cocaine".
Damn, that addiction is strong lol.
I’m happy to help where I can but it’s a FUCKTON of knowledge and setup to go far enough to kubernetes it.
Docker-compose is 100x easier and gets you 95% of what you need.
You are going to want a single larger server and docker
Much easier maintenance
If you're crazy, you'll go with Kubernetes. I personally recommend it; I love it.
But I also work in it every day, so there's a convenience there. The complexity is off the charts, though.
Not a bad idea, but I’ll warn you that the addiction for homelabs is strong.
The n100 will make you sad eventually as your self-hosting addiction soars.
An older i5 with onboard gpu or an nvidia card will make you happier and not pull THAT much wattage.
Your Minecraft server will thank you.
I use containers instead
In fairness, ALL git terms feel backwards at first.
Either way, sitting on 3 months' worth of work before a commit is the big mistake here.
Squash FTW. Simpler, clearer history.
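Mechanically, squashing can be as simple as a soft reset plus one commit; `git reset --soft` rewrites history while keeping the work staged, so only do it on commits you haven't pushed. A self-contained demo in a throwaway repo (all names are made up):

```shell
set -e
# Throwaway repo with three tiny "wip" commits, then squash the last two
# into the first so history reads as one clean change.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
for i in 1 2 3; do echo "$i" > notes.txt; git add notes.txt; git commit -qm "wip $i"; done
git reset --soft HEAD~2                 # undo last two commits, keep changes staged
git commit -qm "feature: one clean commit"
git rev-list --count HEAD               # prints 2
```

`git rebase -i` gets you the same result with more control over which commits to fold together.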
The problem is that these are "source control basics" that everyone needs to learn the hard way once, it seems.
Waiting 3 months between commits, however, is a really bad rookie mistake, especially when it's because you were worried about making a commit that wasn't perfect.
That is the lamest decepticon transformer I’ve ever heard of