Cake day: June 8th, 2023

  • Thanks for admitting it. A few people simultaneously responded attacking my warning. So rereading my response to you, I recognize I was a bit more snarky than was warranted, and I apologize for that.

    But yeah, 2FA (even simple TOTP) baked in would go a long way on the user front too.

    It’s clear that Sony could generate a rainbow table of MD5 hashes using common naming and folder conventions, build a list of, say, 100k candidate paths for their top 1000 movies, use Shodan (or a similar tool) to find JF instances, and then check the full table against each server in a few hours… rinse and repeat on the next one. While that alone shouldn’t be enough to prove anything, the onus at that point becomes your problem: you now have to prove you hold a valid license for all the content they matched, they already have evidence that the actual content is on your server, and your instance being public and linkable could be (I’m not a lawyer) sufficient to claim you’re distributing. I could script this attack myself in a few hours (I’d need a few days to generate a full rainbow table)… Put this in front of a legal team at one of the big companies? They’ll champ at the bit to make it happen, just like they did for torrents… especially since there’s no “it was a printer on the torrent network” defense when the content sits directly on a server at your IP/domain.
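    The precomputation step described above can be sketched in a few lines. Everything here is an illustrative assumption, not real tooling: the sample paths, the build_table helper, and the idea that the bare path string is the hashed key (Jellyfin’s real key construction may include more than the path).

```python
# Illustrative sketch of precomputing an MD5 rainbow table of media paths.
# All names and path conventions here are assumptions for demonstration.
import hashlib
from itertools import product

ROOTS = ["/media/movies", "/mnt/movies", "/data/media/movies"]  # common defaults
TITLES = ["Big Buck Bunny (2008)"]                              # per-title target list

def build_table(roots, titles):
    """Map md5(path) -> path for every candidate path."""
    table = {}
    for root, title in product(roots, titles):
        path = f"{root}/{title}/{title}.mkv"
        table[hashlib.md5(path.encode()).hexdigest()] = path
    return table

table = build_table(ROOTS, TITLES)
# Any hash-derived ID observed in the wild can now be checked against the
# table in O(1); a hit means the server exposes that exact path.
```

    Scaling this to 100k candidate paths per title is just a bigger product(); the hashing itself costs microseconds of CPU time, which is why the “few days” above is dominated by generating variants, not hashing them.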


  • I don’t need to trust because I know how it works: https://github.com/jellyfin/jellyfin/blob/767ee2b5c41ddcceba869981b34d3f59d684bc00/Emby.Server.Implementations/Library/LibraryManager.cs#L538

    Yes… exactly how I said it works. Notice the return.

    return key.GetMD5();

    It’s a hash, not a proper randomized GUID. But thanks for backing me up I guess? I wasn’t interested in posting the actual code for it because I assumed it wouldn’t be worth a damn to most people who would read this. But here we are.
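    In Python terms, that return amounts to the following; item_id is a hypothetical stand-in, and the exact key string Jellyfin feeds into GetMD5 may include more than the bare path:

```python
import hashlib

def item_id(key: str) -> str:
    """Analogue of key.GetMD5(): a deterministic hash, not a random GUID."""
    return hashlib.md5(key.encode()).hexdigest()

# The same input always yields the same ID, so anyone who can guess the
# key can compute the ID offline, without ever talking to the server.
assert item_id("/mnt/movies/Big Buck Bunny (2008)") == item_id("/mnt/movies/Big Buck Bunny (2008)")
```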

    They can’t. Without the domain, the reverse proxy will return the default page.

    You are wrong, but at this point I’d have to educate you on a lot of stuff that I don’t have the time or inclination to get into. The tools are out there, and it’s beside the point anyway: proper auth fixes all of these concerns. If it’s publicly accessible, you have to assume that someone will target you. It’s pitifully simple to set up a tool that scans ranges and finds things (especially with certificate issuance being public in general; if I asked any certificate-transparency database for all issued domains that start with “jf” or “jellyfin” or other common terms, I’d likely find thousands instantly). Shodan can and does index by domain as well.

    There are 2 popular Docker images, both store the media in different paths by default

    So they’d only need 2 hashes per file to hit the VAST MAJORITY of people who use Docker. What an overwhelming hurdle to jump…

    You do not have to follow the default path

    Correct, but how many people actually deviate? And remember that most people bind-mount INTO the container, and thus conform to the paths the container wants to use. This standardizes what would otherwise have been a more unique path INTO a known path, which actually makes the problem much worse.

    The server does not even have to run in Docker

    And? Many people will simply mount at /mnt/movies or other common paths. Pre-computing MD5 hashes for hundreds of thousands of likely paths, all testable within an hour, is literally nothing.

    You do not know the naming scheme for the content

    Sure, but most people follow defaults in their *arr suite… Once again… the up-front “cost” of precompiling a rainbow table is literally nothing.

    It does not need to be similar, it needs to be identical.

    Correct, but the point I made is that they would simply pre-build a rainbow table: take all the similar, likely paths and pre-hash each one with MD5. One of those similar candidates will be your identical path; nobody is guessing the literal MD5 hash itself.

    There are 1000s of variations you have to check for every single file name

    Which is pitifully easy if you precompute a rainbow table of hashes covering the naming formats and file structures that are relatively common on Plex/Jellyfin setups… especially ones mirroring the naming formats and structures used by the *arr tools. You can likely check 1000 URLs in a matter of a couple of seconds… Why wouldn’t they do this? (The only valid answer is that they haven’t started doing it… but they could at any time.)
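    Back-of-the-envelope sketch of why “1000s of variations” is not a real hurdle; every convention list below is an illustrative assumption, not measured data about real setups:

```python
# Rough combinatorics: even a generous set of naming conventions stays
# trivially cheap to precompute. Roots/folders/files are assumed examples.
import hashlib
import itertools
import time

roots   = ["/media/movies", "/mnt/movies", "/data/movies", "/movies"]
folders = ["{t} ({y})", "{t}.{y}", "{t}"]
files   = ["{t} ({y}).mkv", "{t}.{y}.1080p.mkv", "{t}.mkv", "{t}.mp4"]

def variants(title, year):
    """Yield every candidate path for one title under the assumed conventions."""
    for r, d, f in itertools.product(roots, folders, files):
        yield f"{r}/{d.format(t=title, y=year)}/{f.format(t=title, y=year)}"

start = time.perf_counter()
hashes = {hashlib.md5(p.encode()).hexdigest() for p in variants("Big Buck Bunny", 2008)}
elapsed = time.perf_counter() - start
# 4 roots * 3 folder styles * 4 file styles = 48 candidates per title;
# hashing them takes microseconds, so even millions of paths are cheap.
```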

    My threat model does not include “angsty company worried about copyright infringement on private Jellyfin servers”.

    Yes… let’s ignore the companies that have BOATLOADS of money and have done things like actively attack torrents and trackers to find thousands of offenders and tie them up legally for decades. Yes, let’s ignore that risk altogether! What a sane response! This only makes sense if you live somewhere those companies can’t reach… and even then, if you’re recommending Jellyfin to other people without knowing they’re in the same situation as you, you’re not helping.

    Why bother scanning the entire internet for public Jellyfin instances when you can just subpoena Plex into telling you who has illegal content stored?

    I thought you knew your threat model? Plex doesn’t hold a list of the content on your servers. The most Plex can return is whatever metadata you request… and even that risk is effectively null, because Plex returns that metadata for any show on their streaming platform, and for searches on items from other platforms, since the “show what’s hot on my streaming platforms” feature (stupid fucking feature… aside) exists. So that metadata means nothing; it’s requested for plenty of completely legitimate reasons. The remaining risk is that they could add code in the future that does record a list of your content… which is SUBSTANTIALLY LESS OF A RISK than complete read access to files without auth, gated only by guessing magic incantations that are likely identical to thousands of other people’s magic incantations! Like I said several times: I’d LOVE to drop Plex BECAUSE that risk exists. But Jellyfin is simply worse.

    You seem wildly uneducated on matters of security. I guess I now know why so many people just install Jellyfin and ignore the actual risks. The funny part is that rather than advocating for fixing it, so that it’s not a problem at all… you’re waving it all away like it could never be a problem for anyone, anywhere, at any time. That’s fucking wildly asinine when a proof of concept of the attack was published in a thread 4 years ago and is still viable today. It’s a very REAL risk. Don’t expose your instance publicly, proxied or not. You’re asking for problems.




  • No… and you’re trusting this WAY too much. This is exactly why it’s dangerous.

    You don’t need any knowledge of the domain. Tools like Shodan will categorically identify EVERY Jellyfin instance that their scanners run into.

    the media/user/stream IDs and media paths.

    No. Read the whole thread. https://github.com/jellyfin/jellyfin/issues/5415#issuecomment-2525076658

    If your path is similar to my path (and due to the nature of the software, we ALL have similar paths), you can absolutely brute-force the CALCULATED AND NOT RANDOM MD5 hash of the folder path that bigbucksbunny lives in. All it takes is one angsty company building a rainbow table of variants of their movies’ names to screw you completely over. This is “security through obscurity”. It isn’t safe AT ALL.

    Edit: Just to clarify, you would have to ADD your own GUID-style segment to the folder path to make a generic precompiled rainbow table of common paths not work. E.g., /mnt/53ec1945-55dd-4b73-8e03-9e465d5739c3/movies/bigbucksbunny

    Common paths/names can be generated from the defaults of programs like the *arrs, with minor Linux-minded variants, and I bet that would hit a good chunk of users who run Jellyfin.
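    A rough sketch of why the GUID segment in the edit above defeats a generic precompiled table; the numbers describe search-space size only, and the path layout is the same hypothetical example:

```python
# A random path segment makes precomputation infeasible: the table would
# need an entry per possible GUID, per layout. Path layout is illustrative.
import uuid

path = f"/mnt/{uuid.uuid4()}/movies/bigbucksbunny"

# uuid4 carries 122 random bits, so a generic rainbow table would need
# on the order of 2**122 entries for this one layout alone.
guid_space = 2 ** 122
```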


    Would seem so. The project is open source and nobody is getting paid, so the lack of updates makes sense to some extent.

    As cool as it is… and as much as I want to make Plex shove it completely, Jellyfin just isn’t ready for prime time.

    I run both… Jellyfin isn’t allowed to talk outside my network at all, and I can access it over my personal VPN… but Plex is where all my users are, because anything else would just be too annoying to maintain.










    Why bother asking for help if you’re not going to do the bare minimum of bringing us up to speed on what you’re running and how it’s configured? Instead you act like a complete jackass and “block” everyone the moment they ask for any of that information in order TO HELP YOU.

    To this point the only facts we know…

    You’re running Docker. You added two containers, one of which might be Hanabira, and they were run via docker compose. SSH is working, but also somehow not… because you didn’t actually explain that part well at all.

    That’s it. That’s all you’ve provided, and I had to read literally EVERY thread to find it. Nobody can help you with that. And with people outright asking you for more, and getting your hostility in return, nobody else will want to help you either. Including me.

    Good luck. But I wanted you to know that you’re the jackass here.

    It’s mildly funny though that you live up to your .ml instance preconceptions.



  • there is a easy way to synchronize your YouTube and PeerTube channel

    No, there isn’t anymore. yt-dlp, which all those syncing tools rely on, is basically fucked at this point. YouTube has made it fucking impossible to grab content off their platform, and it’s really damn annoying. Even from my private IP address, I’ve earned what seems to be a permanent ban from YouTube.

    Every video shows either this…

    Or I login, and it only shows me the first 60 seconds of content before it just buffer loops forever

    But I wouldn’t want to sync the content from youtube anyway… Youtube compresses the shit out of everything.

    I get your point. It’s not hard for them to make a second post of the same video content on another platform; many just don’t see the value in it. I agree that FUTO, at least, should see the point of putting it up… Hell, I’m even willing to share the bandwidth load (with my own instance that’s currently up and running). It is what it is.




  • If so, that’s pathetic and weird.

    Pathetic and weird is complaining about downvotes when they don’t even tally up anywhere. So not only were they meaningless to begin with, they’re not even as useful as they are on Reddit.

    I did downvote, not because you disagreed with me, but because

    The issue with LXC is that it doesn’t set the software up for you.

    is factually wrong in this context. You can absolutely distribute software in an LXC. I even pointed you directly at a repository of hundreds of images that do exactly that, and they’re just as repeatable and troubleshoot-able. The script a dev would publish would be doing literally the exact same thing as a Dockerfile.

    A Dockerfile is just a glorified script; treating it as if it’s something different is intellectually dishonest. Anything in a Docker container can be edited/modified the same as in an LXC: docker exec -it <> /bin/bash puts a user in the same position as a shell inside an LXC container. Once again: aside from some additional networking stuff, Docker was literally based on LXC and is more or less functionally the same. Even in their own literature they only claim to have enhanced LXC by adding management on top (https://www.docker.com/blog/lxc-vs-docker/)… except Proxmox can manage an LXC just fine, and so can LXD.

    As far as CI/CD goes… it works with LXC containers as well. Here’s an example from 3 years ago that I found in literally 10 seconds of searching for LXC CI/CD: https://gitlab.com/oronomo/docker-distrobuilder

    Also, you can even take a DOCKERFILE or other OCI-compliant images and push them directly into an LXC natively: https://www.buzzwrd.me/index.php/2021/03/10/creating-lxc-containers-from-docker-and-oci-images/ (see the “Create LXC containers using docker images” section).

    like pretty much the entire development community does?

    This is also a bullshit appeal/fallacy. The VAST majority of developers don’t use ANY form of containerization; it’s only the subset working on cloud platforms that’s now pushed into it. It’s primarily your exposure to self-hosting communities that makes you believe this, but it’s far (really far) from true. Most developers I work with professionally have no idea what Docker is, other than maybe having heard of it somewhere. It’s people like me who take their stuff, publish it into a container, and show them how it works so they can learn more about it. And even in that environment, production tends not to run on Docker at all (usually Kubernetes, OpenShift, Rancher, or other platforms that do not use the Docker runtime), and that choice is solely up to the container publisher.

    I didn’t like docker for the longest time

    Good for you? I see Docker as a useful tool for some specific things, but there are very few cases, if any, where I would take Docker over an LXC setup, even in production. I don’t hate or love Docker (or LXC, for that matter). However… I find I get better performance, lower overhead, and better maintainability with LXC, so that’s what I use. I don’t delude myself that LXCs are somehow not containers… or that Docker does anything fundamentally different from any other container platform.