

How do you know it’s “AI” scrapers?
I’ve had my server up since before AI was a thing.
It’s totally normal to get thousands of bot hits and to get scraped.
I use crowdsec to mitigate it. But you will always get bot hits.
You can also use a shorter version .clone();
I have it on docker with two volumes, ./config and ./cache
I back up those before each update.
A bad Jellyfin update should not mess with your media folder in any way. Though you should have backups of those as well, as a rule of thumb.
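The backup routine itself is small. A sketch, assuming the compose project directory holds the two bind mounts ./config and ./cache (container and file names are examples):

```shell
# Stop the container first so Jellyfin's SQLite databases are consistent:
#   docker compose stop jellyfin
backup_jellyfin() {
  # Archive both volumes into a dated tarball next to the compose file.
  tar czf "jellyfin-backup-$(date +%F).tar.gz" ./config ./cache
}
# After backing up, update and restart:
#   docker compose pull jellyfin && docker compose up -d jellyfin
```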
I’ve been using jellyfin for years.
My best recommendation is DELAY UPDATES and back up before you update.
I have a history of updates breaking everything, so you should be careful with them.
All software recommends backing up before an update, but with Jellyfin the shit is real: you really want to back up.
How does it differentiate an “AI crawler” from any other crawler? A search engine crawler? Someone monitoring data to offer statistics? Archiving?
This is not good. They are most likely doing the crawling themselves and then selling the data to the highest bidder. That bidder could obviously be OpenAI for all we know.
They just know that if they introduce the phrase “this is anti-AI”, a lot of people are not going to question anything.
So… are there any cryptocurrencies that the owners hold en masse and try to profit from by selling them to users in a “pay to win” system?
Last time I checked, both IPFS and plebbit had those… which is why I steered away from those projects.
I vibe coded with cleverbot.
I scored 7 out of 10! Can you tell a coder from a cannibal? 💻🔪 https://vole.wtf/coder-serial-killer-quiz/
Not bad.
If you need to use a new language that you are not yet used to, it can get you through the basics quite efficiently.
I find it quite proficient at translating complex mathematical functions into code. Especially since it accepts LaTeX pretty print as input and usually reads it correctly.
It also works as an advanced rubber duck that spits out wrong answers so your brain can reach the right answer quickly. A lot of the time I find myself blocked on something, ask the AI to solve it, and get a ridiculous response that would never work; but seeing why that won’t work makes it easier for me to come up with something that will.
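A toy example of the kind of translation I mean (my own illustration, not from any real session): feed it the LaTeX for the series e^x = \sum_{n=0}^{N} x^n / n! and you get something like:

```python
import math

def exp_series(x: float, n_terms: int = 20) -> float:
    """Truncated Taylor series for e^x: sum of x**n / n! for n = 0..n_terms-1."""
    return sum(x ** n / math.factorial(n) for n in range(n_terms))

print(exp_series(1.0))  # close to math.e
```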
Isn’t there already a mobile app?
Developers can focus on whatever they deem appropriate.
But I think content discovery and the lack of community are the biggest issues with PeerTube right now.
I hop over to the main PeerTube site once in a while and I can never find anything remotely interesting to watch. There may be some good content, but it’s impossible to find.
Mods be thinking that if they dig SO’s grave deep enough it will emerge on the other side of the world.
I second the N100. It’s what I use, and it’s ridiculously powerful for the small amount of power it draws. And it barely needs cooling.
No one can predict the future. One way or the other.
The best way to not be left behind is to be flexible about whatever may come.
I used to get electricity prices on my phone widget via a public API. Some years ago they closed the API and started asking for a full name and ID in order to get API access. So I just made a scraper that takes the numbers I want from their website and serves an API for the widget.
That’s the only self made app I self host, but I’m quite proud of it.
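The pattern is simple enough to sketch with just the Python standard library. Everything specific here, the regex, the URL, the JSON field name, is a made-up stand-in for the real site:

```python
import json
import re
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical markup: the provider's page embeds the price in a data attribute.
PRICE_RE = re.compile(r'data-price="([\d.]+)"')

def extract_price(html: str) -> float:
    """Pull the current price out of the page (the regex is an assumption)."""
    match = PRICE_RE.search(html)
    if match is None:
        raise ValueError("price not found; page layout may have changed")
    return float(match.group(1))

def fetch_price(url: str) -> float:
    """Download the provider's page and extract the number the widget wants."""
    with urllib.request.urlopen(url) as resp:
        return extract_price(resp.read().decode())

class PriceAPI(BaseHTTPRequestHandler):
    """Tiny JSON endpoint the phone widget can poll."""
    def do_GET(self):
        price = fetch_price("https://example.com/prices")  # placeholder URL
        body = json.dumps({"price_eur_kwh": price}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve the widget endpoint:
#   HTTPServer(("", 8080), PriceAPI).serve_forever()
```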
IP addresses are fairly public.
In order to get that kind of infection there needs to be a serious vulnerability. None of the services I expose have that kind of vulnerability, and I keep them updated.
A zero-day is always possible, but that can happen with any software.
Anyway, even if some of my services got infected that way, I have them all in Docker containers. If they somehow managed to insert any malicious software, it would disappear on the next restart of the container.
And for the malware to break out of the container, it would also need some sort of zero-day Docker exploit. Two zero-days needed to accomplish that…
Every exposed service I run sits behind a Caddy reverse proxy, and Caddy is the only application authorized in my firewall, so it gets more difficult to run unexpected malicious software through it.
Any software can have zero-day exploits for that matter.
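For reference, the reverse-proxy half of that setup is only a few Caddyfile lines (the domain is a placeholder; 8096 is Jellyfin’s default HTTP port, and Caddy handles TLS automatically):

```
jellyfin.example.com {
    reverse_proxy 127.0.0.1:8096
}
```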
I don’t think jellyfin vulnerabilities could lead to a zombified machine. At least I’ve not read about something like that happening.
Most Jellyfin issues I know of involve unauthorized API calls to the backend.
I have had jellyfin exposed to the net for multiple years now.
Countless bots probe it every day; some get banned by my security measures, some don’t. There has never been a breach. Not even close.
To begin with, if you look at what these bots are doing, most of them target vulnerabilities in older software. I have never even seen a bot targeting Jellyfin at all. Its vulnerabilities are not worth attacking: too complex to get right, and with very little reward, since the most an attacker could do is stream some content or mess around with someone’s database. No monetary gain. AFAIK there’s no Jellyfin vulnerability that would allow running anything on the host; most known vulnerabilities involve unauthorized actions through the Jellyfin API.
Most bots, if not all, target other systems, mostly in search of outdated software with very bad vulnerabilities where they could really make some profit.
You can share jellyfin over the net.
The security issues that tend to be cited are less important than some people claim them to be.
For instance, the unauthorized streaming bug, often quoted as one of the worst Jellyfin security issues: for it to work, the attacker needs to know the exact id of the item they want to stream, which is virtually impossible to guess unless they are, or have been, an authorized client at some point.
Just set it up with the typical brute-force protections and you’ll be fine.
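To put a number on “virtually impossible”: item ids look like 128-bit GUIDs (treating them as uniformly random is an assumption here), so blind guessing simply never finishes:

```python
# Rough odds of enumerating a random 128-bit item id.
guesses_per_second = 10_000          # a very generous attacker
space = 2 ** 128                     # number of possible GUIDs
seconds_per_year = 60 * 60 * 24 * 365
years = space / (guesses_per_second * seconds_per_year)
print(f"{years:.3e} years to sweep the whole id space")
```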
Do you have a proper robots.txt file?
Do they do weird things like invalid URLs or invalid POST attempts? Weird user agents?
Millions of hits from the same IP sounds much more like vulnerability probing than crawling.
If that’s the case, fail2ban or crowdsec. It should be easy to set up a rule that bans an inhuman number of hits per second on certain resources.
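A sketch of that kind of rule with fail2ban (all values illustrative; the failregex assumes a plain access-log line, so adapt it to your proxy’s actual log format, e.g. Caddy logs JSON by default):

```
# /etc/fail2ban/filter.d/http-probing.conf
[Definition]
failregex = ^<HOST> .* "(GET|POST) [^"]*" (400|404)

# /etc/fail2ban/jail.d/http-probing.local
[http-probing]
enabled  = true
port     = http,https
filter   = http-probing
logpath  = /var/log/access.log
findtime = 10      # window in seconds
maxretry = 50      # hits allowed within the window
bantime  = 3600
```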