  • And storing the source and such for every dependency would be bigger than, and result in the same thing as an image.

    Let’s flip that around.

    The “insanity” of downloading and storing everything you need, wrapping it all up into a massive tarball, and then shipping it to anyone who wants to use the end product (while assuming, by the way, that everything you need to rebuild it will always be available from every upstream source if you ever want to make changes) is precisely what Docker does. And yes, it’s silly to trust that everything it references will always be available from whoever’s providing it.

    (Also, security)

    Docker is like installing onto an empty computer then shipping the entire machine to the end user.

    Correct. Because it’s not capable enough to make actually-reproducible builds.

    My point is, you can do that imaging (in a couple of different ways) with Nix, if you really wanted to. No one does, because it would be insane when you have other, more effective tools that accomplish the exact same goal without needing to ship the entire machine to the end user. There are good use cases for Docker; making it easy to scale services up, as was the original intent, is a really good one. The way people commonly use it today, as a way to make reproducible environments for ease of one-off deployment, is not one. In my opinion.

    I’ve been tempted into a “my favorite technology is better” pissing match, I guess. Anyway, Nix is better.


  • The issue is, nix builds are only guaranteed to be reproducible if the dependencies don’t change.

    Dude, this is exactly why Nix is better. Docker builds are only guaranteed to be reproducible if the dependencies don’t change. Which they will. The vast majority of real-world Dockerfiles do pip install, wget, all kinds of basically unlimited nonsense to pull down their dependencies from anywhere on the internet.

    Nix builds, on the other hand, are forbidden from accessing the internet, specifically to force them to declare their dependencies explicitly and keep everything within a managed system. You can trust that the Nix repositories aren’t going to change (or store them yourself, along with all the source that generated them and will actually produce the same binaries, if you’re paranoid). You can send the flake.nix and flake.lock files and it will actually work to reproduce a basically byte-identical container on the receiver’s end, which means you don’t have to send multi-gigabyte “images” in order to depend on the recipient actually being able to make use of it. This is what I meant when I said the whole business of needing “images” is a non-issue if your workflow isn’t allowing arbitrary fuckery on an industrial scale every time you spin up a new container.
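
    Just to make it concrete, the thing you’re sending around is tiny. A minimal flake looks roughly like this (an untested sketch; the packages are just stand-ins), and the matching flake.lock, generated the first time you use the flake, pins the exact nixpkgs revision:

    # flake.nix -- declares exactly which nixpkgs and which packages the dev shell gets
    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.11";

      outputs = { self, nixpkgs }:
        let
          pkgs = nixpkgs.legacyPackages.x86_64-linux;
        in {
          devShells.x86_64-linux.default = pkgs.mkShell {
            # the complete, explicit list of what this environment is allowed to see
            packages = [ pkgs.python39 pkgs.curl ];
          };
        };
    }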

    I suspect that making a new container and populating it with something useful is so trivial on Nix that you’re missing the point of what is actually happening, whereas with Docker you can tell something big is happening because it’s such a fandango when it happens. And so you assume Docker is “real” and Nix is “fake” or something.

    I like a package to be independent

    Yes, me too, which is why an affinity for Docker is weird to me.


  • Yes because that is a wrong and clunky way to do it lol.

    If you really wanted to, you could use dockerTools.buildImage to create an “imaged” version of the container you made, or you could send around the flake.nix and flake.lock files exactly as someone would send around Dockerfiles. That stuff is usually just not necessary, though, because it’s replaced with a better approach (for the average-end-user case, where you don’t need large numbers of Docker containers that you can deploy quickly at scale) that accomplishes the same thing.
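
    For what it’s worth, the “imaged” route looks roughly like this. It’s a sketch I haven’t tested, with pkgs.hello standing in for whatever you actually want to ship:

    # image.nix -- build a Docker-loadable image out of a Nix package
    { pkgs ? import <nixpkgs> { } }:

    pkgs.dockerTools.buildImage {
      name = "hello-image";
      tag = "latest";
      # copy the package's bin/ into the image's root filesystem
      copyToRoot = pkgs.buildEnv {
        name = "image-root";
        paths = [ pkgs.hello ];
        pathsToLink = [ "/bin" ];
      };
      config.Cmd = [ "/bin/hello" ];
    }

    Then nix-build image.nix spits out a tarball at ./result that docker load < result will swallow.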

    I feel like I’m not going to convince you of this though. Have fun with Docker, I guess.


  • Hold up, nix added containerization? How did I miss that? I will have another look now!

    Nix is containerization. Here is firing up a temporary little container with a different Python version and then throwing it away once I’m done with it (you can also do this with more complicated setups; this is just showing it with one thing):

    [hap@glimmer:/proc/69235/fd]$ python --version
    Python 3.12.8
    
    [hap@glimmer:/proc/69235/fd]$ nix-shell -p python39
    this path will be fetched (27.46 MiB download, 80.28 MiB unpacked):
      /nix/store/jrq27pp6plnpx0iyvr04f4apghwc57sz-python3-3.9.21
    copying path '/nix/store/jrq27pp6plnpx0iyvr04f4apghwc57sz-python3-3.9.21' from 'https://cache.nixos.org/'...
    
    [nix-shell:~]$ python --version
    Python 3.9.21
    
    [nix-shell:~]$ exit
    exit
    
    [hap@glimmer:/proc/69235/fd]$ python --version
    Python 3.12.8
    

    The whole “system” you get when moving from Nix to NixOS is basically just a composition of a whole bunch of individual packages like python39, in one big container that is “the system.” But you can also fire up temporary containers trivially for particular things. I have a couple of tools with source in ~/src which, whenever I change the source, nixos-rebuild will automatically fire up a little container to rebuild them in (with their build dependencies, which don’t have to be around cluttering up my main system). If it works, it’ll deploy the completed product into my main system image for me, but if it doesn’t, then nothing will have changed (and either way it throws away the container it used to attempt the build in).
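
    To give a rough idea of what that looks like (the path and names here are made up, so treat it as a sketch), it’s basically just the local checkout wired into configuration.nix as if it were a normal package:

    # hypothetical fragment of configuration.nix, inside the usual { config, pkgs, ... }: module
    environment.systemPackages = [
      # callPackage builds the default.nix in that directory in its own sandbox;
      # nixos-rebuild switch redoes the build whenever the source changes
      (pkgs.callPackage /home/hap/src/mytool { })
    ];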

    Each config change spawns a new container for the main system OS image (“generation”), but you can roll back to one of the earlier generations (which are, from a functional perspective, still around) if you want or if you broke something.
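
    The rollback part is just a couple of commands (you can also pick an older generation straight from the boot menu):

    # see which system generations are still around
    nix-env --list-generations --profile /nix/var/nix/profiles/system

    # jump back to the previous one
    sudo nixos-rebuild switch --rollback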

    And so on. It’s very nice.


    I mean, if it makes you happy, I won’t tell you to do anything different. I think a certain amount of it is just prejudice against Docker on my part. Just in my experience, NixOS is the best of both worlds: you can have a single coherent system if everything in that system can play nice with each other, and if not, things can be containerized completely, so that way still works too. And then on top of that it has a couple of other nice features, like rolling back configs easily, or source builds that get slotted in in-place as if they were standard packages (which is generally where I abandon Docker installs of things, because making changes to the source seems like it’s going to be a big hassle).

    I’m not trying to evangelize though, you should in all seriousness just do what you find to be effective.




  • Huh.

    IDK man, my experience is that Nix solves the problem you originally talked about and a bunch of others, pretty effectively. Among other things if things “just… don’t work” you can trivially roll back to an earlier working config, and see what changed between working and not-working, and so what would be a pretty grueling debugging process in some other environment becomes pretty easy to sort out.
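
    Concretely, the “see what changed” part is something like this on newer Nix (it needs the nix-command experimental feature turned on, and the generation numbers are whatever yours happen to be):

    # show which packages and versions differ between two system generations
    nix store diff-closures /nix/var/nix/profiles/system-41-link /nix/var/nix/profiles/system-42-link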

    But whatever. If for some reason Docker makes you more happy and not less, you’re welcome to it and best of luck.




  • My laptop will send a signal to all programs telling them to shut down, which includes cleaning up their stuff, and then it unmounts the drives, and then it shuts down. It just doesn’t wait forever and make me fix the problem if some program is having trouble shutting down. That is the correct behavior.

    I do get that it’s nice to be protected against having your work blown away. As a first step, the idea of checking with every program to make sure it’s okay to turn off was good progress, back when it was first invented. The present-day solution to that is autosave. The solution is definitely not to leave all the user’s work unsaved for a potentially unlimited amount of time, and then refuse to shut down if there is any terminal that still has an ssh session open, any settings window still open, or any GIMP session with files exported but not saved as .xcf.

    Literally two out of three of those obstacles happen pretty much every time I shut down my Mac, and I have to wander through the programs resolving problems that have nothing to do with saving my work. It’s annoying. I do understand that, the other way, you have to go around checking that you have no unsaved work before shutting down. But if you are mature enough to do that, then the “init 0” way is objectively better.



  • I just flip through all the workspaces, make sure there’s nothing going on I care about, and then hit the button.

    Computers that teach you not to do that, but instead to just blindly pick “shut down” and assume that the computer will protect you against losing anything unsaved, while also refusing to shut down if some app is not cooperating, have zero upside compared to the other way.



  • No idea about tools although I hope you find something.

    Two related suggestions that will change your life:

    1. Grunt Fund if you are making decisions about equity
    2. Have people estimate the total time for a task, rigidly enforce that every man-hour spent on a project has to be allocated to one of those tasks (including the elusive but vital “oh shit we forgot” task), and keep track of the coefficient between estimated and actual time. It’ll be different for different people sometimes. When estimating a project, have people come up with estimates and then multiply by their coefficient (for example, if someone estimated 40 hours and the work took 60, their coefficient is 1.5, so their next 40-hour estimate becomes 60). Be transparent with everyone about this system. It’ll revolutionize your project management life once people get used to it. I tried to find a blog post which explains it in more detail, but honestly, it’s not complicated, and Google is too shit now to find it.



    Back before people knew all that much about it, back when Elon Musk was the guy who made Tesla and SpaceX and this super smart guy (as opposed to being the guy who bought them and then fucked up the engineering), I knew some people who were excited about it. It was supposed to be a working truck but electric, bringing all the better-than-other-cars stuff that the Roadster and Model S had; it was supposed to have solar panels and electrical outlets and super-strong construction so you could use it to survive the zombie apocalypse.

    I think that was before the inflection point, back when Tesla’s genuine success made Musk’s personal brand of bullshit believable. I remember that when people started getting a good look at the concepts and actual prototypes, which made it look like a dumpster without the storage space, that was when the shine came off the rose. But I definitely do remember people who were excited about it back in the beginning.





  • PhilipTheBucket@ponder.cat to Selfhosted@lemmy.world · What is Docker? · 2 months ago

    Okay, so way back when, Google needed a way to install and administer 500 new instances of whatever web service they had going on without it being a nightmare. So they made a little tool to make it easier to spin up random new stuff easily and scriptably.

    So then the whole rest of the world said, “Hey, Google’s doing that and they’re super smart, we should do that too.” So they did. They made Docker, and somehow that involved Y Combinator giving someone millions of dollars, for reasons I don’t really understand.

    So anyway, once Docker existed, nobody except Google and maybe like 50 other tech companies actually needed to do anything it was useful for (and 48 out of those 50 are too addled by layoffs and nepotism to actually use Borg / K8s / Docker (don’t worry, they’re all the same thing) for its intended purpose). They just use it so their tech leads can have conversations at conferences and lunches where they make it out like anyone who’s not using Docker must be an idiot, which is the primary purpose of technology as far as they’re concerned.

    But anyway in the meantime a bunch of FOSS software authors said “Hey this is pretty convenient, if I put a setup script inside a Dockerfile I can literally put whatever crazy bullshit I want into it, like 20 times more than even the most certifiably insane person would ever put up with in a list of setup instructions, and also I can pull in 50 gigs of dependencies if I want to of which 2,421 have critical security vulnerabilities and no one will see because they’ll just hit the button and make it go.”

    And so now everyone uses Docker and it’s a pain in the ass to make any edits to the configuration or setup and it’s all in this weird virtualized box, and the “from scratch” instructions are usually out of date.

    The end