privacytools.io was always a shit show, even before the infighting. They put their own endorsement site on Cloudflare. Despite a colossal pile of dirt emerging on #Signal:
https://github.com/privacytools/privacytools.io/issues/779
PTIO continued endorsing Signal non-stop, refusing to disclose the issues. That was also before the breakup. Dirt was routinely exposed on PTIO endorsements, and it never changed their endorsements, nor did they reveal the findings on their website.
Now both factions are hypocrites, just as they were when they were united. The original PTIO site is back to being Cloudflared (nothing like tossing people coming to you for privacy advice into the walled garden of one of the most harmful privacy offenders), and Privacy Guides has set up on a CF’d Lemmy node. The hypocrisy has no end with these people.
Interesting, but that does not help because Mint jails all their docs in Cloudflare.
Also worth noting that #Ubuntu and #Mint both moved substantial amounts of documentation into Cloudflare (the antithesis of the values swiso claims to support). I have been moving people off those platforms.
BTW, prism-break is a disastrous project too. You know they don’t have a clue when they moved their repo from Github.com to Gitlab.com, an access-restricted Cloudflare site. There are tens if not hundreds of decent forges to choose from, and PRISM Break moved from the 2nd worst to the one that most defeats the purpose of their own constitution.
It might be useful to find dirt on various tech at prism-break, but none of these sites can be trusted for endorsements.
The prism-break website is timing out for me right now. I would not be surprised if they were dropping Tor packets since they have a history of hypocrisy.
If you look in their bug tracker, it actually reveals that they ignore dirt that has been dug up on their suggestions.
As others have mentioned, there is little in the way of justification for these suggestions. While I happen to agree with plenty of them, I’d personally like to see more reasoning, if not to appease people who already have opinions then to help newer users understand their options.
Indeed. In fact it’s actually worse than you describe. Swiso withholds negative information. They don’t want to inform people; they want to steer people. For example, swiso’s endorsements for donation platforms have some quite serious problems:
https://codeberg.org/swiso/website/issues/141
swiso is also aware of the serious issues with Qwant and the serious issues with DuckDuckGo. They not only fail to remove them but also fail to inform. Qwant and DDG are both Microsoft syndicates!
(if anyone is interested, one of the most privacy-respecting search services is Ombrelo¹, which is largely unknown to the world because PTIO, swiso, and prism-break don’t do the job they claim to do)
And swiso is aware because that’s their bug tracker.
/cc @Imprint9816@lemmy.dbzer0.com
¹ https://ombrelo.im5wixghmfmt7gf7wb4xrgdm6byx2gj26zn47da6nwo7xvybgxnqryid.onion/
There are a few good alternatives and swiso has been aware of them for ~4+ years:
Self-hosting is a different scenario than the way most users reach the fedi. Self-hosters certainly have fewer reasons to have multiple accounts. But obviously the one inescapable reason is privacy. If all activity is under the same account, doxxing risk is pegged at the maximum.
Another reason a self-hoster would want multiple accounts is followship. Someone might want to follow you because they love your French posts about oil painting, for example, but since you do everything with the same account they also have to see your posts in English about politics, religion, phones, movies, etc. They may not want all that other noise. Compartmentalisation improves followship.
I’m quite familiar with relays in the SMTP context, which is not a context where an end user installs relays. An end user in that case would only point their own software at a relay that a service provider has installed on a server they control. So when you say “install a Lemmy relay”, I’m missing the concept. What exactly is a Lemmy relay? Can you walk me through this scenario: suppose someone is on Beehaw and they cannot reach node X because Beehaw defederated from node X. What are the steps for a Beehaw user to subscribe to a community that is hosted on node X?
(btw, browse.feddit.de is just a blank page for me)
Every node has a different set of relationships to other nodes. If you create only one account and you choose a small low-activity node, you’re isolated by which nodes are federated or defederated with that node. The front-page timeline is also limited by the subscriptions of others on that node, which narrows what’s exposed to you. And worse, because most of the population disregards decentralization, those subscriptions are mostly to communities on the biggest nodes, which exacerbates the imbalance.
It’s good for the decentralization principle to avoid the large nodes¹, but doing that brings isolation and limited exposure. So to counter those problems you need accounts on multiple small nodes.
¹ This means not only avoiding having an account on the large nodes but also avoiding communities hosted on those giant nodes.
I don’t think those figures are trustworthy. I recall a page that tracks user counts naming some server I had never heard of with a count an order of magnitude higher than lemmy.world. Might have been lemmyverse.net, not sure.
Counting active accounts is a bit tricky, I can imagine. So I judge by looking at the activity. Lemmy has ghost towns and one-person communities that look from the timeline like announcement communities, but in fact they are open discussions where hardly anyone participates except the moderator. These are not niche topics either; it’s because users only want to manage one account. The stock web client dominates, and it is inherently a one-account client. So the single most popular app fails to resist the gravity of the giant nodes. There is a paltry selection of third-party apps, and nothing in the official Debian repos.
I went to phtn.app and just got a 500 error. So whatever that is, it’s probably not a significant factor here either way.
In Mastodon threads I see more diversity in the nodes people are posting from, whereas in a quite active Lemmy thread something like 90% of the comments come from the top 5 nodes.
I might have to try that app.
As a Debian user I tend to work close to the ideology of using apps from official Debian repos. Debian is quite popular but also disciplined with a quality standard. So an app’s inclusion in the Debian repo somewhat reflects a level of maturity that puts a project on the radar to be taken seriously. There are currently no threadiverse apps in the Debian repos.
Some would say generally that no non-Debian app is worth looking at. But I do make some exceptions and might have to take a look at Eternity despite the opening sentence: “Eternity is currently in the early stages of development. Expect many unfinished features and bugs!”
StreetComplete shows me no map, just quests on a blank canvas. OSMand shows my offline maps just fine, but apparently StreetComplete has no way to reach the offline maps. I suppose that’s down to Android security – each app has its own storage space secure from other apps.
In principle, we should be able to put the maps on shared SD card space and both apps should access them. But StreetComplete gives no way in the settings to specify the map location. And apparently it fails to fetch its own copy of the maps as well in my case.
Protonmail failed to satisfy F-Droid’s inclusion criteria because it requires gms (playstore framework) and because it uses Firebase messaging.
Since I’ve disabled gms on my device, I’m not sure how Protonmail would work for me. Someone tells me I might simply lose push notification capability. But I am confused, because Snikket pushes notifications just fine on my device.
It’s worth noting that Protonmail has an onion site and their clearnet server also accepts Tor connections. So users can control the leakage of their IP… but only if they’re willing to solve countless CAPTCHAs.
I’m on the edge of quitting protonmail. The issues:
How many websites can handle the amount of traffic that CF can handle? It’s not just about configuring your firewall; it’s about having the bandwidth. Otherwise it’s not much of a DDoS protection.
That’s what I’ve been saying throughout this thread. The only significant DDoS protection offered by Cloudflare requires CF seeing the traffic (and holding the keys) so it can treat the high-volume traffic itself. If CF cannot see the payloads, it cannot do anything with the traffic other than pass it all through to the origin host (thus defeating the DDoS protection purpose).
As I don’t have an account there I can’t see which requests containing credentials use which cert.
Why would you need an account? Why wouldn’t bogus creds take the same path?
If it’s true that this is unverifiable, that’s good cause to avoid Cloudflared banks. It’s a bad idea for customers to rely on blind trust. Customers need to know who the creds are shared with /before/ they make use of them – ideally even before they make the effort of opening an account.
And also, just because the cert is verified by cloudflare does not mean they have the private key.
This uncertainty is indeed good cause to avoid using a Cloudflared bank.
UPDATE: I’ve spoken to some others on this who assert that it is impossible for a bank customer to know for certain if a bank uses their own key to prevent disclosure to CF.
It seems like a lot of your points hinge on this being true, but it simply isn’t.
“AFAICT” expands to “as far as I can tell”, which means the text that follows is not an assertion. It’s an intuitive expectation that is open to being proved or disproved. The pins are all set up for you to simply knock down.
There is a massive benefit to preventing DDoS attacks, and that does not require keys.
This is unexplained. I’ve explained how CF uses its own keys to offer DDoS protection (they directly treat the traffic because they can see the requests). I’ve also explained why CF’s other (payload-blind) techniques are not useful. You’ve simply asserted the contrary with no explanation. HOW does CF prevent DDoS without treating the traffic? Obviously it’s not merely CF’s crude IP reputation config, because any website can trivially configure their own firewall in the same way without CF. So I’m just waiting for you to support your own point.
There is no indication that banks are handing over client credentials to CF.
This is trivially verifiable. E.g. if you get the SSL cert for eagleone.ns3web.org, what do you see? I see CF keys. That means they’re not using the premium option to use their own keys. Thus CF sees the payloads. I’m open to being disproven so feel free to elaborate on your claim.
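To make that check concrete, here is a minimal sketch (Python standard library only; the hostname is just the example mentioned above) that fetches a site’s TLS leaf certificate and prints the issuer. A certificate issued via Cloudflare means visitors’ TLS sessions terminate at CF’s edge; as noted elsewhere in this thread, that alone does not prove where the private key lives.

```python
import socket
import ssl

def cert_issuer(host: str, port: int = 443) -> dict:
    """Fetch the TLS leaf certificate presented by `host` and return its issuer fields."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # parsed certificate of the server we connected to
    # 'issuer' is a tuple of RDN tuples; flatten it into a plain dict
    return {key: value for rdn in cert["issuer"] for (key, value) in rdn}

if __name__ == "__main__":
    # Example host from this thread; swap in any bank's domain to repeat the check.
    print(cert_issuer("eagleone.ns3web.org"))
    # An organizationName like "Cloudflare, Inc." indicates a CF-issued edge certificate.
```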
I’m not looking to be proven right. The purpose of the tangent discussion was to substantiate whether or not bank creds are exposed to CF. If banks are actually protecting consumer creds from CF, then it requires a bit of analysis because banks don’t even disclose the fact that they use Cloudflare. They make the switch to CF quietly and conceal it from customers (which is actually illegal - banks are supposed to disclose it but it’s not enforced in the US). AFAICT, CF’s role is mostly useless if the SSL keys are held by the site owner.
In the US, the financial system is quite sloppy with user creds and user data. There are even a couple 3rd-party services (Yodlee / Mint) that ask customers for their banking creds at all the places they bank. This service then signs on to all the banks on behalf of the customer to fetch their statements, so customers can get all their bank statements in one place. IIRC some banks even participate so that you login to a participating bank to reach Yodlee and get all your other bank statements. Yodlee and Mint are gratis services, so you have to wonder how they are profiting. The banks are not even wise enough to issue a separate set of read-only creds to their customers who use that Yodlee service. In any case, with that degree of cavalier recklessness, I don’t envision that a US bank would hesitate to use CF in a manner that gives the bank the performance advantage of CF handling the traffic directly. But I’m open to convincing arguments.
Without TLS termination Cloudflare is still useful for e.g. DDoS protection,
I’m not seeing that. Cloudflare’s DDoS protection is all about having the bandwidth to serve the traffic. If CF cannot treat the traffic itself (due to inability to see the payloads), that whole firehose of traffic must be passed through to the origin host, which then must be able to handle that volume. CF’s firewall in itself is not sophisticated enough to significantly reduce the traffic that’s passed along. It crudely uses IP reputation, which can easily be done by one’s own firewall (see the toy sketch below). What am I missing?
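For illustration only, here is a toy sketch (Python; the blocklist ranges are made-up documentation addresses, not a real reputation feed) of the kind of crude IP-reputation filtering being discussed. In practice this would be an nftables/iptables or web-server rule set fed by a reputation list, and, per the argument above, it does nothing to solve the raw-bandwidth problem.

```python
import ipaddress

# Hypothetical blocklist; a real setup would load ranges from a reputation feed.
BLOCKED_NETS = [ipaddress.ip_network(n) for n in ("192.0.2.0/24", "198.51.100.0/24")]

def allow(client_ip: str) -> bool:
    """Return False if the client address falls inside any blocked range."""
    addr = ipaddress.ip_address(client_ip)
    return not any(addr in net for net in BLOCKED_NETS)

print(allow("192.0.2.17"))   # False: inside a blocked range
print(allow("203.0.113.5"))  # True: not on the blocklist
```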
eclic.ro is an exclusive Cloudflare site just like change.org is. Exclusivity is obviously quite lousy for democracy. Better alternatives are here:
https://codeberg.org/swiso/website/issues/140