

“Yeah this sounds like a Phil Nash sort of problem, I’ll just stick him in here.”
The Rust compiler produces a flawless understanding of your code, and then quits out because understanding that code is a Nightly-only feature and you’re using the stable build.
It cheeses my beans so goram much that they took a perfectly good web site and made it terrible so they could sell it to “the public”, notionally the same people who were using the site!!!
I can only conclude that this is some kind of scam and actually most of the thing is going to end up owned by deliberately nebulous “institutional investors” and not the community members who constitute and deserve ownership of the community. Or even the people at Reddit Inc. who did the work of making the thing.
DAE socialism?
I think you can keep doing the SMB shares and use an overlay filesystem on top of those to basically stack them on top of each other, so that server1/dir1/file1.txt, server2/dir1/file2.txt, and server3/dir1/file3.txt all show up in the same folder. I’m not sure how happy that is when one of the servers just isn’t there, though.
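A minimal sketch of what I mean, with made-up server names, share names, and mount points, and assuming the shares are already reachable. An overlay with only lowerdirs gives you a read-only merged view, and the leftmost layer wins if two servers have the same path:
# Hypothetical SMB mounts; server names, shares, and credentials file are examples
mount -t cifs //server1/library /mnt/server1 -o ro,credentials=/etc/smb-creds
mount -t cifs //server2/library /mnt/server2 -o ro,credentials=/etc/smb-creds
mount -t cifs //server3/library /mnt/server3 -o ro,credentials=/etc/smb-creds
# Stack them into one merged, read-only folder
mount -t overlay overlay -o lowerdir=/mnt/server1:/mnt/server2:/mnt/server3 /mnt/merged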
Other than that you probably need some kind of fancy FUSE application to fake a filesystem that works the way you want. Maybe some kind of FUSE-over-Git-Annex system exists that could do it already?
I wouldn’t really recommend IPFS for this. It’s tough to get it to actually fetch the blocks promptly for files unless you manually convince it to connect to the machine that has them. It doesn’t really solve the shared-drive problem as far as I know (you’d have like several IPNS paths to juggle for the different libraries, and you’d have to have a way to update them when new files were added). Also it won’t do any encryption or privacy: anyone who has seen the same file that you have, and has the IPFS hash of it, will be able to convince you to distribute the file to them (whether you have a license to do so or not).
You might want to try OpenStack. It is set up for running a multi-tenant cloud.
Seems to not be paying off though; having whole communities and instances close is pretty inconvenient.
Why does Lemmy even ship its own image host? There are plenty of places to upload images you want to post that are already good at hosting images, arguably better than pictrs is for some applications. Running your own opens up whole categories of new problems like this that are inessential to running a federated link aggregator. People selfhost Lemmy and turn around and dump the images for “their” image host in S3 anyway.
We should all get out of the image hosting business unless we really want to be there.
Sitewide Discontent is my new OC name.
One day to learn how to make stuff in React, one day to learn the degree to which the React developers despise you.
I actually kind of like the error handling. Code should explain why something was a problem, not just where it was a problem. You get a huge string of “couldn’t foobar the baz: target baz was not greebleable: no greeble provider named fizzbuzz”, and while the strings are long as hell, they are much better explanations for a problem than a stack trace is.
Because you want to know if the first half of the code works at all before you write the whole second half.
Finding all the bits that will be used by the second half and changing the declarations to just expressions is a bunch of extra work. As is adding placeholder code to use the declared variables.
That could work fine, probably? Or you could use it on the same machine as other stuff.
ZFS RAID-Z is pretty good for this, I think. You hook up the drives from one “pool” to a new machine, and ZFS can detect them and see that they constitute a pool and import them.
I think it still stores some internal references to which drives are in the pool, but if you add the drives by their /dev/disk/by-id paths when making the pool, it ought to be using stable IDs at least across Linux machines.
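A rough sketch of what that looks like, with a made-up pool name and made-up device IDs (on a real system the by-id paths are the long serial-number ones):
# Create the pool using stable /dev/disk/by-id paths (names here are invented)
zpool create tank raidz1 \
  /dev/disk/by-id/ata-DISK_SERIAL_1 \
  /dev/disk/by-id/ata-DISK_SERIAL_2 \
  /dev/disk/by-id/ata-DISK_SERIAL_3
# On the new machine, scan that directory and import the pool it finds
zpool import -d /dev/disk/by-id tank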
There’s also always Git Annex for managing redundancy at the file level instead of inside the filesystem.
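As a sketch of the Git Annex flavor of this, with a made-up repo layout and remote (the remote has to be a git-annex-aware repository too):
# Track files with git-annex and ask it to keep two copies of everything
git init library && cd library
git annex init "desktop"
git annex numcopies 2
git annex add .
git commit -m "add library"
git remote add nas ssh://nas/srv/library
git annex sync --content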
I think if you want to write a fake story, you need to make sure that by the end of it people realize they’ve been had. A high-effort troll is entertaining. A person who writes a story that just happens to not be true is just wasting everybody’s time.
EDIT: What if OP wasn’t a Reddit story faker? 😮
They still seem to be working though, or at least the ones I was using are. So that would suggest they weren’t using API keys to access stuff.
Usually for Windows VM gaming you want to pass through a GPU and a USB controller and plug in directly. You might be able to use something like Steam streaming but I wouldn’t recommend a normal desktop-app-oriented thin client setup, not having tried it.
You may run into weird problems with latency spikes: mostly it will work great and everything runs at 90 FPS or whatever, but then inexplicably 1 frame every few minutes takes 100ms and nobody can tell you why.
There can also be problems with storage access speed. What ought to be very fast storage on the host is substantially slower once the image-file and host-filesystem overhead, or the block-device passthrough overhead, come into play. Or maybe you just need an NVMe device to pass straight through.
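For the shape of the passthrough part, here is a bare-bones QEMU invocation. Every address and path in it is invented (01:00.0 standing in for the GPU, 01:00.1 for its audio function, 03:00.0 for a USB controller), and it assumes those devices are already bound to vfio-pci with the IOMMU enabled. A real setup would more likely go through libvirt, but the moving parts are the same:
# Sketch only: adjust PCI addresses, CPU/RAM sizes, and the disk path to your system
qemu-system-x86_64 \
  -enable-kvm -machine q35 -cpu host -smp 8 -m 16G \
  -device vfio-pci,host=01:00.0,multifunction=on \
  -device vfio-pci,host=01:00.1 \
  -device vfio-pci,host=03:00.0 \
  -drive file=win10.qcow2,if=virtio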
You might be looking for a “smoke test”?
How micro are the cuts? You might get pretty far with the “ping” tool without any fancy monitoring setup around it.
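Something as simple as this, left running on a machine behind the affected link, will show you where the cuts are. 8.8.8.8 is just an example target, and -D is the Linux iputils flag for per-line timestamps:
# 5 probes a second; gaps in timestamps or missing icmp_seq numbers mark the cuts
ping -D -i 0.2 8.8.8.8 | tee ping.log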
They might be 64 bit numbers.
It’s not just that they’ve printed the code. I’ve printed code.
It’s that they’ve printed a screenshot of the editor.
It sounds like nobody actually understood what you want.
You have a non-ZFS boot drive, and a big ZFS pool, and you want to save an image of the boot drive to the pool, as a backup for the boot drive.
I guess you don’t want to image the drive while booted off it, because that could produce an image that isn’t fully self-consistent. So then the problem is getting at the pool from something other than the system you have.
I think what you need to do is find something else you can boot that supports ZFS. I think the Ubuntu live images will do it. If not, you can try something like re-installing the setup you have, but onto a USB drive.
Then you have to boot to that and zpool import your pool. ZFS is pretty smart, so it should just auto-detect the pool structure and where it wants to be mounted, and you can mount it. Don’t do a ZFS feature upgrade on the pool though, or the other system might not understand it. It’s also possible your live kernel might not have a new enough ZFS to understand the features your pool uses, and you might need to find a newer one.

Then once the pool is mounted you should be able to dd your boot drive block device to a file on the pool.

If you can’t get this to work, you can try using a non-ZFS-speaking live Linux and dd-ing your image to somewhere on the network big enough to hold it, which you may or may not have, and then booting the system and copying back from there to the pool.
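End to end, from the live environment, it’s something like the following. The pool name, device node, and target path are all made up, and -R keeps the imported pool from mounting over the live system’s own directories:
# See what pools the live system can find on the attached drives
zpool import -d /dev/disk/by-id
# Import the pool under an alternate root so nothing mounts over the live OS
zpool import -R /mnt tank
# Image the (unmounted) boot drive into a file on the pool
dd if=/dev/sda of=/mnt/tank/backups/bootdrive.img bs=1M status=progress
# Cleanly let go of the pool before rebooting into the normal system
zpool export tank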