she/they

  • 0 Posts
  • 23 Comments
Joined 2 years ago
Cake day: July 1st, 2023

  • > But that still leaves the question: How to install Nix in the first place? Without just running the script.

    You can download tarballs with precompiled Nix, though you’ll still need to run an install script (but you can at least read it first to convince yourself it’s not malicious); see the relevant documentation for the details.

    Something that slipped my mind: since OpenSUSE uses SELinux now, the recommended multi-user mode won’t work. Single-user mode should be fine afaik, but it’s a bit less convenient.
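
    As a rough sketch of the tarball route (the version number below is only an example; check the Nix download page for the current release and the official steps):

      VERSION=2.24.9   # example only, substitute the current release
      curl -LO "https://releases.nixos.org/nix/nix-$VERSION/nix-$VERSION-x86_64-linux.tar.xz"
      tar xf "nix-$VERSION-x86_64-linux.tar.xz"
      less "nix-$VERSION-x86_64-linux/install"             # read the script before running it
      sh "nix-$VERSION-x86_64-linux/install" --no-daemon   # --no-daemon = single-user install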

    > This command just runs the software once without actually installing it, right?

    The nix-env -iA command does actually install the software locally, not completely unlike how zypper in would. For running a program without installing it you would use something like nix-shell -p yazi --command yazi. Of course that still downloads and “installs” the program; it just won’t add it to your PATH or create a GC root, which means the next time Nix does “garbage collection” it will be removed again.
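
    To make the difference concrete, here’s a minimal sketch (yazi is just the example package from above):

      nix-env -iA nixpkgs.yazi            # install into your user profile, stays on PATH
      nix-env -e yazi                     # uninstall it again
      nix-shell -p yazi --command yazi    # run it once without touching your profile
      nix-collect-garbage                 # remove anything no longer reachable from a GC root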

    And yeah, I would recommend just trying OpenSUSE out; if you then realize you really do need stuff from third-party package managers, you can worry about whether getting into Nix is a good idea. Or fall back to the Arch/AUR-in-distrobox idea, which is probably simpler overall, especially since from what I understand that’s what you’re supposed to do on the immutable spins like Aeon anyway.

    Late edit: I’ll also note that there are several OpenSUSE-specific third-party repos too. Packman has some proprietary codecs that OpenSUSE doesn’t want to ship (in case you really don’t want your browser to be a Flatpak), and there’s the Open Build Service (OBS), which is basically the AUR for OpenSUSE. They’re not as useful because they’re nowhere near the size of the AUR, but if you just need one specific package (perhaps one with questionable legality like yt-dlp or something) they might just have it. And of course you can also build stuff from source and put it in your ~/.local/bin, which has been common practice since before Linux was able to run on real hardware.
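
    For the Packman route, it looks roughly like this (the URL is just one common mirror; double-check it and the priority against the openSUSE wiki before running anything):

      sudo zypper ar -cfp 90 https://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Tumbleweed/ packman
      sudo zypper dup --from packman --allow-vendor-change   # switch codec packages over to Packman

    And the ~/.local/bin route really is just putting an executable somewhere on your PATH:

      install -Dm755 ./yt-dlp ~/.local/bin/yt-dlp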


  • > Theoretically you could download the .rpm file which quite a few developers provide and install it on Tumbleweed too? But I am not 100% sure about that so please correct me if I’m wrong.

    Yeah, that’s not going to work in the general case. A trivial RPM package might be fine, but every additional dependency increases the chance that it needs some package OpenSUSE doesn’t know about. There’s a reason OpenSUSE is usually considered an independent distro and not a “Fedora-based” one despite some shared components.
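
    If you do want to try a foreign .rpm, you can at least inspect its dependencies first (some-app.rpm is a placeholder here):

      rpm -qpR ./some-app.rpm        # list what the package requires
      sudo zypper in ./some-app.rpm  # zypper will complain if those requirements can’t be resolved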

    I don’t think security-wise there’s much of a difference between running random software directly or via distrobox. Note that distrobox mounts your entire home directory into its containers, which removes any security benefit that containers could theoretically bring. In both cases you either need to audit the software yourself or you need to trust whoever you’re downloading the software from.
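
    You can see that for yourself with the stock defaults (the name and image below are just examples; distrobox create also has a --home flag if you’d rather point the container at a separate directory):

      distrobox create --name aur --image archlinux:latest
      distrobox enter aur
      ls ~    # inside the container this is your real host home directory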

    Out of the third-party repositories you mentioned, I would personally consider Nixpkgs the most trustworthy because package specs are actually code-reviewed, unlike the AUR, where anyone can publish packages with zero oversight. That doesn’t mean it’s impossible for Nixpkgs to end up with malware in it, but the AUR sets a low bar. Using Nix (not NixOS) is also not actually that hard: you can just run nix-env -iA nixpkgs.yazi and it does exactly what you would expect, even if NixOS users would scoff at the “imperativity”.

    That being said, the OpenSUSE repositories really aren’t that bad, especially if you combine them with Flatpak, and especially if you install Firefox and VLC (or equivalents of your choice) from Flatpak so you don’t need proprietary codecs in your base system. I used OpenSUSE Tumbleweed for years and got by just fine without Nix, Homebrew or distrobox.
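
    The Flatpak part of that is basically two commands (the app IDs should match what Flathub lists, but double-check them):

      flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
      flatpak install flathub org.mozilla.firefox org.videolan.VLC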



  • It’s been a while since I last gave it a try, but I remember frequently ending up in strange states where a window wouldn’t want to tile properly. Windows would also regularly end up overlapping or extending beyond the screen, in ways they just wouldn’t when I was using Sway, Hyprland or Niri. IIRC dragging and resizing windows with the mouse was extremely janky too.

    Most of this is KWin’s fault as far as I know; it’s built for stacking window management and there’s only so much you can fix by scripting around it. It’s also the reason for the bad multi-monitor experience; the way it interacts with workspaces in particular is, in my opinion, not useful and never what I want.






  • You could write the same kind of articles about the old coreutils if you really wanted to, just by creatively “rewording” the bugfixes from the recent 9.8 release:

    • GNU Core Utilities Are Causing Failures [when copying between NFS and non-NFS filesystems with ACLs]
    • GNU Core Utilities May Cause Data Corruption [with copy ranges larger than 2 GiB]
    • Correctness Bugs Found in GNU Core Utilities [tail --pid may race with reused PIDs]

    I feel like the reactions regarding uutils are a bit… off in general. There seem to be a lot of people who are pathologically negative towards open source projects for, frankly, bullshit reasons, like vague complaints about “Rust evangelism” (what?) or how permissive licensing is against the spirit of open source (WTF).

    Phoronix isn’t helping, of course, with these clickbait articles that border on content farming and its failure to moderate its comments, but these negative attitudes seem to cut across sites, including Lemmy, Reddit and even Hacker News.

    The uutils team seems to be doing well, but it makes me sad to think about any aspiring open source devs without corporate backing reading such drivel.


  • > You could always swap out crons, syslog, init systems, and it would not affect Gnome.

    That just isn’t true. Both GNOME and KDE already have hard dependencies on systemd-logind. GNOME hasn’t supported non-systemd Unixes since 2015! The only reason it works on those systems at all is that the elogind project provides a systemd-logind implementation decoupled from the rest of systemd. The GNOME team has elected to give users of elogind (which isn’t officially supported) advance warning that they’ll have to do some amount of extra work in the future if they want to ship GNOME 50. Honestly, I think that’s quite fair of them.

    There are more GNOME features that don’t work without systemd even if GNOME itself launches, like application isolation using systemd scopes. Fundamentally this is about not having to reinvent the world: why should every DE have bespoke implementations of user, login and service managers instead of just using the ones that 99% of user systems already have?
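
    You can see the scope-based isolation on any systemd-based GNOME session: every launched app gets its own transient scope unit (they typically show up as app-gnome-….scope units, but the exact naming is an implementation detail):

      systemctl --user list-units --type=scope   # one app-*.scope per running application
      systemd-cgls                               # the same thing viewed as a cgroup tree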



  • That’s not entirely true, unless you choose to nixify everything. You can just have a basic Nix configuration that installs whatever programs you need, then use stow (or whatever symlink manager you prefer) to manage the rest of your config.

    You can’t entirely forget that you’re on NixOS because of the FHS noncompliance, but even then getting nix-ld to work doesn’t require a lot of effort.
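
    The split might look something like this (the dotfiles layout is made up; the stow “packages” are just whatever directories you keep in the repo):

      # packages come from your Nix configuration, dotfiles from a plain git repo:
      #   ~/dotfiles/nvim/.config/nvim/init.lua
      #   ~/dotfiles/zsh/.zshrc
      cd ~/dotfiles
      stow -t ~ nvim zsh    # symlink each package’s files into your home directory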


  • No C program is written to satisfy a borrow checker and most wouldn’t compile with one, so adding it would require rewriting the world anyway. At that point, why not choose a language that, in addition to being memory-safe, also drastically cuts down on other kinds of UB, has sum types, sane error handling, a (mostly) thread-safe standard library, etc.?






  • Oinks@lemmy.blahaj.zone to Linux@programming.dev • The Wizard and His Shell
    GUIs do have advantages in things like discoverability. Honestly, the 1983 Apple Lisa nailed this with the idea of having clickable menus annotated with keyboard shortcuts, so users could do the same thing faster next time. For some reason we stopped doing this (especially in web apps), but that’s a reason to make better GUIs, not to RETVRN to the feature set of a VT100.

    I don’t know why we have to go on nonsensical diatribes about “UNIX wizards” though when we’re fundamentally talking about a handful of minor UI improvements to things that already exist.


  • To be fair, that’s not the entire story: you need to actually resolve the conflicts first, which is slightly scary because your worktree will be broken while you do it and your linter will be shouting at you.

    You may also want a dedicated merge tool that warns you before accidentally committing a conflict and creating a broken commit.

    Oh, and non-trivial resolutions may or may not create an “evil merge”, which may or may not be desirable depending on which subset of git automation features you use.

    Using git status often is definitely good advice though.
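
    For reference, a typical conflict-resolution session looks roughly like this (branch and file names are made up; zdiff3 needs a reasonably recent git):

      git config merge.conflictStyle zdiff3   # optional one-time setup for clearer conflict markers
      git merge feature-branch                # stops on conflict, worktree now has conflict markers
      git status                              # shows the "both modified" files
      $EDITOR src/conflicted-file.c           # resolve the markers by hand, or...
      git mergetool                           # ...let a configured merge tool walk you through it
      git add src/conflicted-file.c
      git merge --continue                    # or: git merge --abort to back out entirely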