

What is wrong with the GDPR and the ePrivacy directive? The only problem I see is that they don’t go far enough (online tracking, for example)


If it works, it works. You mathematicians just don’t understand the pragmatics. What is tech debt?


If you’re deliberately belittling me I won’t engage. Goodbye.


“You criticize society yet you participate in it. Curious.”


To be clear, I am not minimizing the problems of scrapers. I am merely pointing out that this strategy of proof-of-work has nasty side effects, and we need something better.
These issues are not short-term. PoW means entering an arms race against adversaries with bottomless pockets, using a scheme that inherently requires a ton of useless computation in the browser.
Moving towards something based on heuristics, which is what the developer was talking about there, is much better. But that is basically what many others are already doing (like the “I am not a robot” checkbox), and it is fundamentally different from the PoW that I argue against.
Go do heuristics, not PoW.
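For concreteness, here is a minimal sketch of the kind of heuristic check I mean. Every feature name and threshold is invented for illustration; real systems use far more signals than this:

```typescript
// Hypothetical heuristic bot scoring. All feature names and thresholds
// are made up for this example; no real product works exactly this way.
interface RequestFeatures {
  requestsPerMinute: number;   // observed request rate for this client
  executedJavaScript: boolean; // did the client run a lightweight JS probe?
  plausibleHeaders: boolean;   // consistent User-Agent, Accept-Language, etc.
}

function botScore(f: RequestFeatures): number {
  let score = 0;
  if (f.requestsPerMinute > 60) score += 2; // faster than a human browses
  if (!f.executedJavaScript) score += 2;    // many scrapers never run JS
  if (!f.plausibleHeaders) score += 1;
  return score; // e.g. serve a challenge at score >= 3, block at >= 5
}
```

The point is that the visitor’s CPU does no extra work; the cost of classification falls on the server instead of on every legitimate user’s battery.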


It depends on the website’s setting. I have the same phone, and there was one website where it took more than 20 seconds.
The power consumption is significant, because it needs to be. That is the entire point of this design. If it doesn’t take a significant number of CPU cycles, scrapers will just power through the challenges. The cost may be small for an individual user, but it adds up once this reaches widespread adoption and everyone’s devices have to solve those challenges.
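To make that concrete, here is a sketch of a generic hash-based PoW challenge. I’m assuming the common “find a nonce whose SHA-256 digest starts with n zero bits” construction; the exact protocol any particular tool uses may differ:

```typescript
// Generic proof-of-work solver sketch using the browser's native SHA-256
// via the Web Crypto API. The challenge format is an assumption, not
// taken from any specific tool.
async function solve(challenge: string, difficultyBits: number): Promise<number> {
  const encoder = new TextEncoder();
  for (let nonce = 0; ; nonce++) {
    const data = encoder.encode(challenge + nonce);
    const digest = new Uint8Array(await crypto.subtle.digest("SHA-256", data));
    if (leadingZeroBits(digest) >= difficultyBits) {
      return nonce; // expected ~2^difficultyBits attempts to reach this point
    }
  }
}

function leadingZeroBits(bytes: Uint8Array): number {
  let bits = 0;
  for (const b of bytes) {
    if (b === 0) { bits += 8; continue; }
    return bits + Math.clz32(b) - 24; // clz32 counts over 32 bits; a byte uses 8
  }
  return bits;
}
```

Every extra bit of difficulty doubles the expected number of hashes for real visitors and scrapers alike, so the admin’s setting is a direct trade-off between user latency and scraper cost.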


“It is basically instantaneous on my 12 year old Kepler GPU Linux box.”
It depends on what the website admin sets, but I’ve had checks take more than 20 seconds on my reasonably modern phone. And as scrapers get more ruthless, that difficulty setting will have to go up.
“The cryptography happening is something almost all browsers from the last 10 years can do natively, while scrapers have to be individually programmed to do it, making it several orders of magnitude beyond impractical for every single corporate bot to be repurposed for.”
At best, these browsers are going to have some efficient CPU implementation. Scrapers can send these challenges off to dedicated GPU farms or even FPGAs, which are an order of magnitude faster and more efficient. This is also not complex; a team of engineers could set it up in a few days.
“Only to then be rendered moot, because it’s an open-source project that someone will just update the cryptographic algorithm for.”
There might be something in changing to a better, GPU-resistant algorithm like Argon2, but browsers don’t support those natively, so you would rely on an even less efficient implementation in JS or WASM. Quickly changing details of the algorithm in a game of whack-a-mole could work to an extent, but that would turn this into an arms race, and the scrapers can afford far more development time than the maintainers of Anubis.
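Rough numbers to show why the hardware gap matters. Both hash rates below are assumptions picked for illustration (a phone hashing in JS versus a single GPU), not benchmarks of any real device:

```typescript
// Arms-race arithmetic with ASSUMED hash rates, for illustration only.
// Expected attempts for an n-bit leading-zeros challenge is about 2^n.
function expectedSeconds(difficultyBits: number, hashesPerSecond: number): number {
  return 2 ** difficultyBits / hashesPerSecond;
}

const bits = 22; // an assumed difficulty a site admin might pick
console.log(expectedSeconds(bits, 1e6)); // phone in JS (~1 MH/s): ~4.2 s
console.log(expectedSeconds(bits, 1e9)); // one GPU (~1 GH/s): ~0.004 s
```

A memory-hard function like Argon2 would narrow that gap, but as said above, browsers would then run it in JS or WASM, handing part of the advantage straight back.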
“These posts contain links to articles; if you read them, you might answer some of your own questions and have more to contribute to the conversation.”
This is very condescending. I would prefer if you would just engage with my arguments.


On the contrary, I’m hoping for a solution that is better than this.
Do you disagree with any part of my assessment? How do you think Anubis will work long term?


I get that website admins are desperate for a solution, but Anubis is fundamentally flawed.
It is hostile to the user, because it is very slow on older hardware and it forces you to use JavaScript.
It is bad for the environment, because it wastes energy on useless computations similar to mining crypto. If more websites start using this, that really adds up.
But most importantly, it won’t work in the end. These scraping tech companies have much deeper pockets and can use specialized hardware that is much more efficient at solving these challenges than a normal web browser.
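To gesture at the scale of the waste (every number below is an assumption for illustration, not a measurement of Anubis or any deployment): even modest adoption adds up to megawatt-hours of deliberately discarded computation.

```typescript
// Back-of-envelope energy estimate. All inputs are ASSUMPTIONS chosen
// for illustration, not measurements of Anubis or of any real site.
const secondsPerChallenge = 5;  // assumed solve time on typical hardware
const cpuWatts = 10;            // assumed CPU power draw while solving
const challengesPerDay = 100e6; // assumed challenges served across all adopters

const joules = secondsPerChallenge * cpuWatts * challengesPerDay;
console.log(joules / 3.6e6); // ≈ 1389 kWh per day spent on throwaway hashes
```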
The password you have chosen is already in use by a different user (bob@example.com). Please choose a different password.


The person who created git clearly cannot be trusted to write good kernel code. I’m CC’ing Konstantin to disable his account, whoever he is.


I asked ChatGPT and it says he still needs his glasses while not in costume. So that settles this debate.
/s


But the whole point of the doomsday machine is lost… if you keep it a secret! Why didn’t you tell the world, eh?
So what is the reason for doing it that way?


As long as it’s not an exit node, nobody will be able to tell what the traffic is. It’s all encrypted, including the metadata.


The fact that you think “idealistic version of early US” is a compliment is very telling.


Your proposal is just an idealistic version of early US. You claim that corruption is fundamentally impossible, but assume that magically “the monarchs aren’t allowed to own property” without regard to enforcement. You claim to have an alternative to democracy but still propose majority voting on replacing rulers and constitutions. You simply assume that monarchs will keep each other in check and not devolve into the conspiring, warmongering tyrants that history is full of.
Power can always be abused to gain more power and to go against all of your original ideals. The only way to definitively prevent corruption is to ensure power is never concentrated in the hands of a few.


Turns out when Zuckerbot was talking about “allowing more speech on the platform”, he just meant more slurs.
Yes, the GDPR covers almost everything you do with personal data. That is the point. As long as you’re being respectful to data subjects, the GDPR is surprisingly mild.
You’re the one claiming the government is regulating tech too much, below an article about Apple making that same claim. And when pressed about specifics, you brand the entire thing as off-topic.
It is very much on topic, you just don’t want to provide an argument.