• 5 Posts
  • 49 Comments
Joined 1 year ago
Cake day: October 20th, 2023

  • You’ve kind of keyed in on one of the things I was hesitant to say:

    There are two big uses for an “offline” media library.

    Some people just use it for all the stuff they grabbed off The Pirate Bay (probably avoid TPB in 2025, but…). You don’t really care about quality and just want to consume media.

    Others, like myself, primarily use it to rip/back up their Blu-rays and UHDs and the like. If I am watching on my TV in the living room? I want that to be the highest quality I have available and I want to revel in every shadow gradient and so forth. If I am watching it on my computer? I don’t need anywhere near that much detail. And on a tablet? Compress that shit like an exec at Netflix just saw the storage arrays.

    That is the benefit of transcoding and offline caching. It means you, as a “server”, just focus on backing up your library/finding the best quality rips or whatever. And you, as a “user”, don’t have to worry about figuring out how many different versions to keep so that you always have an appropriate version for whatever your use case is that week.


  • Storage is cheap until it isn’t.

    On my desktop where I have something like 6 TB of NVMe storage because I am a sicko? The only thing that makes me think twice about a flatpak is if I need to give it access to devices or significant parts of my filesystem (yay permissions weirdness).

    On my laptop where I can have one drive and replacing it involves opening the entire laptop AND reinstalling Fedora (or dealing with clonezilla/dd)? Yeah… I very much care about just how much bloat I am dealing with. And, as the other person pointed out, flatpaks can balloon REAL fast.


  • If dependencies are articulated (and maintained…) properly, it is very doable and is intrinsically tied to what semantic versioning is actually supposed to represent. So appfoo depends on libbar@2:2.9 and so forth. Of course, the reality is that libbar is poorly maintained, has massive API-/header-breaking changes every point release, and appfoo was dependent on a bug in libbar@2.1.3.4.5 anyway.

    It’s one of the reasons why I like approaches like Portage or Spack that are specifically about breaking an application’s dependencies down and concretizing them. Albeit, they also have the problem where they overconcretize and you end up with just as much, if not more, bloat. But it theoretically provides the best of both worlds… at the cost of making a single library take 50 minutes to install because you are compiling everything for the umpteenth time.
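    The version-range idea above can be sketched in a few lines. This is a toy, not a real resolver; the `appfoo`/`libbar` names and the `2:2.9` syntax are just the hypothetical example from the paragraph, read as an inclusive “anything from 2 up to 2.9” constraint.

```python
# Toy sketch of the semver contract described above. Assumes the
# hypothetical "libbar@2:2.9" means "any version from 2 up to 2.9
# inclusive"; a real resolver (Portage, Spack, etc.) is far richer.

def parse(version: str) -> tuple:
    """Split '2.4.7' into a comparable tuple of ints: (2, 4, 7)."""
    return tuple(int(part) for part in version.split("."))

def satisfies(version: str, low: str, high: str) -> bool:
    """True if low <= version <= high, using tuple comparison."""
    return parse(low) <= parse(version) <= parse(high)

# appfoo depends on libbar@2:2.9 -- anything in the 2.x line up to 2.9
print(satisfies("2.4.7", "2", "2.9"))  # True: inside the declared range
print(satisfies("3.0.0", "2", "2.9"))  # False: major bump, assume breakage
```

    The whole semver promise is that the second call fails loudly instead of linking against a library whose API silently changed.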

    And yeah… I run way too many appimages too.


  • Part of it is that Ubuntu/Canonical so aggressively pushed Snaps that it became a huge culture war. So you have people who hate the idea of this style of package because they hate Snap AND people who hate flatpak because they are Team Ubuntu for some reason.

    And the other aspect is that it is incredibly space-inefficient (by the very nature of bundling in dependencies) and is prone to “weirdness” when it comes to file system permissions and the like. And many software projects kind of went all in on them because it provides a single(-ish) target to build for rather than having a debian and an arch and a redhat and a…


  • That’s nice.

    That doesn’t work if you are on an airplane (unless you want to spend the entire flight downloading one episode). Or if you just don’t want to deal with hotel wifi. Or if you just don’t want to expose your internal home network at all.

    Which is the point, and why this is one of those big features of Plex that there are so many tickets and requests to get into Jellyfin et al. Because yes, you can just copy files from your NAS to your phone’s internal storage (assuming you don’t care about transcoding and the like)… at which point there isn’t much use for a metadata-oriented media server/service.

    Or you can just set up Plex to always download the next 10 episodes of whatever show you are watching when it has network access. I mean… that probably won’t work (see: 40%) but when it does, it is awesome. Which is the “it just works” functionality.

    Which gets back to the issue where, because it is FOSS, it is treated as the greatest thing ever and anyone asking for anything else is wrong and stupid. Which is a shame, because if the Jellyfin devs could actually get the “download the next N episodes” functionality to work reliably (even at 80-90%) it would be a killer app. And, for what it is worth, I have liked the devs a lot when I interacted with them in the past. But the users and evangelists are just… what we can see in this thread.



  • Yeah.

    Jellyfin is spectacular for LAN usage on two computers. Once you start using devices (because, you know, that is what people tend to plug into their TVs…) or going on travel, it rapidly becomes apparent that it just isn’t a competitor.

    Hell, a quick google suggests Jellyfin STILL doesn’t have caching of media for offline viewing. Plex’s works maybe 40% of the time but… 40% is still higher than 0%.

    I have a lifetime pass for Plex and encourage anyone who even kind of cares to get one next time it is on sale (or shortly before the scheduled price hike). I have tried Jellyfin a few times over the years and… it is basically exactly what I hate with FOSS “alternatives”. It isn’t an alternative in the slightest but people insist on talking it up because they want it to be and that just makes people less willing to try genuinely good alternatives.


    To put it bluntly, Plex is an “offline Netflix”, as it were. Jellyfin is a much better version of smbstation and all the other stuff we used to stream porn to our PlayStations back in the day.




  • Yeah. This isn’t the first time the news app and the core nextcloud updates have fought each other in weird and mysterious ways (for me or others). I forget how I solved it last time (I think it was a similar case of needing to manually update to bleeding edge and then tweak things) but… I just don’t care anymore.

    I don’t know who is right or wrong in how nextcloud is maintained (my instinct is the nextcloud devs because… have you seen nextcloud? but also, most apps don’t have this recurring problem). But at this point, the benefits I get out of it are largely gone. And when so many issues boil down to “We need more people and resources to maintain this”, it kind of feels like getting off the train BEFORE it crashes rather than after.


  • I’m on the alpha and it still won’t update any of my feeds. And going through the github issues it is basically summed up as “We will do another stable release once we have a frontend developer” which is basically never. So, at best, it will work until it doesn’t and then I have to fix it myself yet again and… yeah.

    And if my choice is to run an older version of nextcloud to support one app? Hell no.





  • If you are close enough to a system of importance that you can spray it, you are close enough to compromise it in countless other ways.

    This is just one of many physical access attacks. Just like “you could take a hammer to it.”

    Like, I know people want to think this is some Ocean’s Eleven heist waiting to happen. It isn’t. This is only viable if you can drench an area with helium (which means you can already gas everyone you care about) or you have such close physical access that there are so many other things you could do. At best it is an episode of Burn Notice where Michael has to rapidly improvise an escape where his CIA handler of the week already refused to give him something much more useful.





  • More drives is always better. But you need to understand how you are making it better.

    https://en.wikipedia.org/wiki/Standard_RAID_levels is a good breakdown of the different RAID levels. Those are slightly different depending on whether you are doing “real”/hardware RAID or software RAID (e.g. ZFS), but the principle holds true and the rest is just googling the translation (for example, Unraid is effectively RAID4 with some extra magic to better support mismatched drive sizes).

    That actually IS an important thing to understand early on. Because, depending on the RAID model you use, it might not be as easy as adding another drive. Have three 8 TB drives and want to add a 10 TB one? That last 2 TB won’t be used until EVERY drive has at least 10 TB. There are ways to set this up in ZFS and Ceph and the like but it can be a headache.
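    To put numbers on the mismatched-drive case: in a single-parity layout (RAID5/RAID-Z1), every member is treated as if it were the size of the smallest drive, and one drive’s worth of space goes to parity. A minimal sketch of that math (simplified, ignoring formatting overhead):

```python
# Rough capacity math for a single-parity array (RAID5 / RAID-Z1).
# Simplification: every drive contributes min(drive sizes), and one
# drive's worth of capacity is consumed by parity.

def raid5_usable_tb(drive_sizes_tb):
    """Usable TB of a single-parity array built from these drives."""
    smallest = min(drive_sizes_tb)
    return (len(drive_sizes_tb) - 1) * smallest

# Three 8 TB drives: 2 x 8 = 16 TB usable.
print(raid5_usable_tb([8, 8, 8]))      # 16

# Add a 10 TB drive: it is treated as just another 8 TB member, so you
# get 3 x 8 = 24 TB usable, and the extra 2 TB sits idle until every
# drive in the array is at least 10 TB.
print(raid5_usable_tb([8, 8, 8, 10]))  # 24
```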

    And the issue isn’t the cloudflare tunnel. The issue is that you would have a publicly accessible service running on your network. If you use the cloudflare access control thing (login page before you can access the site) you mitigate a lot of that (while making it obnoxious for anything that uses an app…) but are still at the mercy of cloudflare.

    And understand that these are all very popular tools for a reason. So they are also things hackers REALLY care about getting access to. Just look up all the MANY MANY MANY ransomware attacks that QNAP had (and the hilarity of QNAP silently re-enabling online services with firmware updates…). Because using a botnet to just scan a list of domains and subdomains is pretty trivial and more than pays for itself after one person pays the ransom.

    As for paying for that? I would NEVER pay for nextcloud. It is fairly shit software that is overkill for what people use it for (file syncing and document server) and dogshit for what it pretends to be (google docs+drive). If I am going that route, I’ll just use Google Docs or might even check out the Proton Docs I pay for alongside my email and VPN.

    But for something self hosted where the only data that matters is backed up to a completely different storage setup? I still don’t like it being “exposed” but it is REALLY nice to have a working shopping list and the like when I head to the store.


  • A LOT of questions there.

    Unraid vs Truenas vs Proxmox+Ceph vs Proxmox+ZFS for NAS: I am not sure if Unraid is ONLY a subscription these days (I think it was going that way?) but for a single machine NAS with a hodgepodge of drives, it is pretty much unbeatable.

    That said, it sounds like you are buying dedicated drives. There are a lot of arguments for not having large spinning disk drives (I think general wisdom is 12 TB is the biggest you should go for speed reasons?), but at 3x18 you aren’t going to really be upgrading any time soon. So Truenas or just a ZFS pool in Proxmox seems reasonable. Although, with only three drives you are in a weird spot regarding “raid” options. Seeing as I am already going to antagonize enough people by having an opinion, I’ll let someone else wage the holy war of RAID levels.

    I personally run Proxmox+Ceph across three machines (with one specifically set up to use Proxmox+ZFS+Ceph so I can take my essential data with me in an evacuation). It is overkill and Proxmox+ZFS is probably sufficient for your needs. The main difference is that your “NAS” is actually a mount that you expose via SMB and something like Cockpit. Apalrd did a REALLY good video on this that goes step by step and explains everything and it is well worth checking out https://www.youtube.com/watch?v=Hu3t8pcq8O0.

    Ceph is always the wrong decision. It is too slow for enterprise and too finicky for home use. That said, I use ceph and love it. Proxmox abstracts away most of the chaos but you still need to understand enough to set up pools and cephfs (at which point it is exactly like the zfs examples above). And I love that I can set redundancy settings for different pools (folders) of data. So my blu ray rips are pretty much YOLO with minimal redundancy. My personal documents have multiple full backups (and then get backed up to a different storage setup entirely). Just understand that you really need at least three nodes (“servers”) for that to make sense. But also? If you are expanding it is very possible to set up the ceph in parallel to your initial ZFS pool (using separate drives/OSDs), copy stuff over, and then cannibalize the old OSDs. Just understand that makes that initial upgrade more expensive because you need to be able to duplicate all of the data you care about.
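    For a sense of what those per-pool redundancy settings cost in raw space, it is just replication/erasure-coding arithmetic. The pool roles below mirror the examples above; the numbers are illustrative, not output from any real cluster:

```python
# Back-of-the-envelope usable-space math for Ceph-style pools.
# Replicated pool: each object is stored `replicas` times.
# Erasure-coded pool: k data chunks plus m parity chunks per object.

def usable_tb(raw_tb, replicas=None, ec_k=None, ec_m=None):
    """Usable TB for a replicated (size=replicas) or EC (k+m) pool."""
    if replicas is not None:
        return raw_tb / replicas
    return raw_tb * ec_k / (ec_k + ec_m)

raw = 30  # TB of raw capacity, same for each comparison

# "YOLO" media pool with 2 copies: half the raw space is usable.
print(usable_tb(raw, replicas=2))      # 15.0

# Documents pool with 3 full copies: a third is usable.
print(usable_tb(raw, replicas=3))      # 10.0

# Erasure-coded 4+2 pool: 4/6 of raw is usable, at some CPU cost.
print(usable_tb(raw, ec_k=4, ec_m=2))  # 20.0
```

    Which is why being able to pick the ratio per pool matters: blanket 3x replication on blu ray rips is a lot of wasted disk.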

    I know some people want really fancy NASes with twenty million access methods. I want an SMB share that I can see when I am on my local network. So… barebones cockpit exposing an SMB share is nice. And I have syncthing set up to access the same share for the purpose of saves for video games and so forth.

    Unraid vs Truenas vs Proxmox for Services: Personally? I prefer to just use Proxmox to set up a crapton of containers/vms. I used Unraid for years but the vast majority of tutorials and wisdom out there are just setting things up via something closer to proxmox. And it is often a struggle to replicate that in the Unraid gui (although I think level1techs have good resources on how to access the real interface which is REALLY good?).

    And my general experience is that truenas is mostly a worst of all worlds in every aspect and is really just there if you want something but are afraid of/smart enough not to use proxmox like a sicko.

    Processor and Graphics: it really depends on what you are doing. For what you listed? Only frigate will really take advantage and I just bought a Coral accelerator which is a lot cheaper than a GPU and tends to outperform them for the kind of inference that Frigate does. There is an argument for having a proper GPU for transcoding in Plex but… I’ve never seen a point in that.

    That said: A buddy of mine does the whole vlogger thing and some day soon we are going to set up a contract for me to sit down and set her up an exporting box (with likely use as a streaming box). But I need to do more research on what she actually needs and how best to handle that and she needs to figure out her budget for both materials and my time (the latter likely just being another case where she pays for my vacation and I am her camera guy for like half of it). But we probably will grab a cheap intel gpu for that.

    External access: Don’t do it, that is a great way to get hacked.

    That out of the way. My nextcloud is exposed to the outside world via a cloudflare tunnel. It fills me with anxiety but as long as you regularly update everything it is “fine”.

    My plex? I have a lifetime plex pass so I just use their services to access it remotely. And I think I pay an annual fee for homeassistant because I genuinely want to support that project.

    Everything else? I used to use wireguard (and openvpn before it) but actually switched to tailscale. I like the control that the former provided but much prefer the model where I expose individual services (well, VMs). Because it is nice to have access to my cockpit share when I want to grab a file in a hotel room. There is zero reason that anything needs access to my qBittorrent or calibre or OPNsense setup. Let alone even seeing my desktop that I totally forgot to turn off.

    But the general idea I use for all my selfhosted services is: The vast majority of interactions should happen when I am at home on my home network. It is a special case if I ever need to access anything remotely and that is where tailscale comes in.

    Theoretically you can also do the same via wireguard and subnetting and vlans but I always found that to be a mess to provide access both locally and remotely and the end result is I get lazy. Also, Tailscale is just an app on basically any machine whereas wireguard tends to involve some commands or weird phone interactions.


  • NuXCOM_90Percent@lemmy.zip to Linux@programming.dev: Where is Naomi Wu? (8 months ago)

    Naomi Wu is one of the OGs of maker YouTube and a lot of consumer-grade 3D printing can be traced right back to her.

    Teaching Tech has talked about this a fair amount over the past year or two. But Naomi basically trying to walk a fine line and not get CCP’d is pretty well known at this point. The issue is that she isn’t seeking help (because any help is likely to get her and her partner in trouble) and the major “gossip” YouTubers just want to say “stupid girl has tits”.

    Real shit situation all around but hopefully she and her partner are safe-ish and happy.