

I use forgejo on a raspberry pi.
Don’t include the non-encoded part of the data or it will corrupt the decoding. The decoder can’t tell the difference between data that isn’t encoded and data that is, since it’s all just text.
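To see why, here’s a quick illustration assuming base64 is the encoding in question (the same logic applies to any text encoding). “Hello” encodes to “SGVsbG8=”:

echo -n 'SGVsbG8=' | base64 -d        # prints: Hello
echo -n 'dataSGVsbG8=' | base64 -d    # prints 3 garbage bytes, then "Hello" – not "dataHello"

The prefix “data” is itself made of valid base64 characters, so the decoder decodes it too and corrupts the output. With a prefix whose length isn’t a multiple of 4, the padding lands mid-stream and the decode fails outright.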
I also switched to the Transit app for public transit after Google Maps stopped showing the option to start navigation and now only allows searching for routes. Not sure why it stopped working. Maybe because I use GrapheneOS, but it worked on Graphene before with no issue, so it could be a combination of policy changes and using a non-Google OS or something. It still works fine for driving and walking.
For driving, I still can’t find a comparable app that has real-time traffic, which is essential in my city to avoid constantly fluctuating bottlenecks from construction, traffic pattern changes, and rush hour, and that also reads street names aloud. I found one at one point, but it just said “turn right” instead of “turn right on First Avenue”. That’s confusing when streets are close together or it’s treating alleyways as streets, etc. CoMaps would be the best I’ve found if only it had traffic routing.
Do you mean this config option?
[server]
hosts = 0.0.0.0:5232, [::]:5232
That is binding the service to a network interface and port. For example, your computer probably has a loopback interface, an Ethernet interface, and a WiFi interface, and you can bind to an IPv4 and/or IPv6 address on each of them. Which ones do you want Radicale to listen for traffic on, and on what port? The example above listens on all interfaces, both IPv4 and IPv6, using port 5232 on all of them. Of course, that port must not already be in use on any interface. Generally this catch-all notation is insecure, but fine for testing. Put the real IP addresses in when you’re ready.
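For example, once you know the addresses, something like this (addresses made up) listens only on loopback plus one specific LAN address:

[server]
hosts = 127.0.0.1:5232, 192.168.1.20:5232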
Yeah, the definitions are actually more about alignment with the US political parties than about left or right. And since both parties are demonstrably right of center, just to different degrees, the bias meter should really only be used to figure out which political party’s sponsors likely biased the article.
For example, an article saying climate change is not human-caused and presenting debunked evidence will be ranked mostly center and secondarily right. But an article calling for incentives to reduce use of fossil fuels will be ranked mostly left, when that’s mostly center if anything. An article calling for the government to explicitly force companies to stop using fossil fuels would sit between left and center. One going further and advocating for the government to take over energy companies that don’t comply and make energy production public would be mostly left. Just presenting scientific evidence and refusing to give a voice to debunked “alternative facts” is not a leftist position; it’s a centrist one at best and should be the baseline.
Yeah, it’s easy enough to configure it properly; I have it set up on all of my servers and my laptop to treat it as a network mount, not a local one, and to try to connect on boot but not require it. But it took me a while to understand what it was doing before I could even look for a solution. So hopefully that saves you time. 🙂
NFS is really good inside a LAN, just use 4.x (preferably 4.2), which is quite a bit better than 2.x/3.x. It makes file sharing super easy, does good caching, and syncs efficiently. I use it for almost all of my Docker and Kubernetes clusters to let files be hosted on a NAS and synced among the cluster. NFS is great at keeping servers on a LAN or tight WAN in sync in near real time.
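As a rough sketch (hostnames and paths made up), pinning the client to 4.2 looks like this, with a matching export on the NAS:

mount -t nfs -o vers=4.2 nas.lan:/export/shared /mnt/shared

# on the NAS, in /etc/exports:
/export/shared 192.168.1.0/24(rw,sync,no_subtree_check)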
What it isn’t is a backup system or a periodic sync application, and it’s often when people try to use it that way that they get frustrated. It isn’t going to be as efficient in the cloud if the servers are widely spaced across the internet. Sync things to a central location like a NAS with NFS, and then do backups or syncs across wider WANs and the internet with other tech that is better at periodic, larger, slower transactions, for applications that can tolerate being out of sync for short periods.
The only real problem I often see in the real world is Windows and Samba (sometimes referred to as CIFS) shares trying to sync the same files as NFS shares, because Windows doesn’t support NFS out of the box and so file locking doesn’t work properly. Samba/CIFS has some advantages, like user authentication tied to Active Directory out of the box, as well as working out of the box on Windows (although older Windows doesn’t support versions of Samba that are secure). So if I need to give a user access to log into a share from within a LAN (or over VPN) from any device to manually pull files, I use that instead. But for my own machines I just set up NFS clients to sync.
One caveat is if you’re using this for workstations or other devices that frequently reboot and/or need to be used offline from the LAN. Either don’t mount the shares on boot, or take the time to set it up properly. I see a lot of people get frustrated that booting takes a long time because, the way some guides tell you to set it up, the mount becomes a prerequisite for completing the boot. It’s not an NFS issue; it’s more that GRUB and systemd (or most equivalents) are a pain to configure properly, and boot systems default to assuming that a mount configured at boot is required for the boot to complete.
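If it helps, the fix is roughly this fstab line (names made up): _netdev and nofail tell systemd it’s a network mount that shouldn’t block or fail the boot, and x-systemd.automount defers mounting until something actually touches the path.

nas.lan:/export/shared /mnt/shared nfs4 _netdev,nofail,x-systemd.automount,x-systemd.mount-timeout=10 0 0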
I have M.2 HATs for the couple of Raspberry Pis that need more intense disk operations. Never use SD cards or flash drives, which generally end up being just SD cards in a USB package.
Copyright in general should be strictly limited to an extremely short time, maybe 1-5 years. After that, others should be allowed to use and expand on a work unless you release a new work that expands on it yourself. Trademarks already eliminate the confusion about who published it, and if you aren’t actively using the content, it should be given to society to benefit everyone. This would promote progress and competition. Extended copyright, especially, is only useful for people and companies who don’t want to be productive and just want to get paid for one thing their ancestors/predecessors did ages ago. The original debates over the design of copyright predicted exactly this would happen.
The problem is that many of us are stuck with very low upstream bandwidth due to cable company ISP monopolies and/or data caps, or are just running things on a small Raspberry Pi or something, and the malicious requests create extra expense or flat-out denial of service for real traffic.
To a point, yes, for the crawler bots, but Anubis uses a lot more resources to keep the bots busy than a simple firewall ignoring the request. And if there’s no response at all, rather than a negative response, the requests are likely to fall off more quickly. The even more significant load might be from malicious login attempts, which use even more resources, and Anubis likely won’t be as effective on those more targeted attacks, depending on the types of services we’re talking about. Either way, firewall blocks are way, way less resource intensive than any of that, so as soon as you open up that firewall and start responding to those malicious or abusive requests, they become progressively more resource intensive to mitigate.
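For comparison, the firewall side is a one-liner that costs almost nothing per packet. A sketch with nftables, assuming an existing inet filter table with an input hook chain (203.0.113.0/24 is just a documentation placeholder range):

nft add rule inet filter input ip saddr 203.0.113.0/24 drop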
You can mitigate some risks with software like fail2ban to slow down some of the hacking attempts, but you will still be susceptible to (sometimes unintentional) denial of service attacks from ever-persistent “AI” crawler bots, as well as the constant barrage of automated hacking attempts. If your bandwidth can’t handle it, or you have bandwidth caps, you’re likely going to have issues.
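For what it’s worth, the fail2ban side can be as small as this, a minimal jail.local sketch where the thresholds are just examples:

[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h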
Are you wanting something that you don’t have to download from GitHub yourself (so a project that hosts a Docker container somewhere and just keeps its code on GitHub is OK)? Or are you looking to boycott any project that is not itself boycotting GitHub, so that no part of the project should use GitHub for any code at all, in which case possibly even dependencies shouldn’t be on GitHub even if they publish their distributions elsewhere? Or somewhere in between?
It never had a chance. There’s no way to make profit selling ads and user data and have it be decentralized. They are conflicting goals.
I’m not saying there aren’t other ways to make a profit on a decentralized platform, but they never said they had any other business model, so we had to assume that the traditional one was it, and we were right.
This is why I never used their images for any of my projects and do everything I can to use official charts made by the software vendor itself or create my own and put them in my personal git repo for automated deployments.
Any business that gives away middleware for free likely does so in the hopes of monetizing it pretty directly, and will eventually be pressured by its investors to increase monetization of those products, or will be forced to stop developing them due to lack of funding. Middleware really doesn’t have many other good ways to monetize.
If you want something similar, you could set up a cheap VPS with your own reverse proxy, making sure that all of the connections between your servers and the VPS are secure. But it really depends on your situation. If you have an ISP that assigns you a block of static IPv6 addresses, it’s fairly easy to get a domain and direct traffic to those addresses based on subdomains. I’m not lucky enough to have a halfway decent ISP available in my area, so I can’t get that, or even a reasonably priced single IPv4 address for residential service, so I have to make do with dynamic DNS, which makes things more complex. Fortunately I don’t have an ISP that forces double NAT on me, at least.

So I have set up a VPS with a reverse proxy and a WireGuard VPN tunnel, and I use Cloudflare as my domain registrar and their DDNS, which I update from my OPNsense router, which is also the endpoint of the VPN. I’ve been considering moving to hosting Headscale on the VPS instead, but haven’t gotten around to it.

It really depends on how many servers, how many services, whether you have a domain, whether you have a VPS or other server outside of your home network, whether your ISP gives static IPs, and whether you’re behind a double NAT. It also depends a lot on your bandwidth; low upload speeds are a common problem, especially with cable internet service. I’m lucky enough to have symmetrical fiber direct to my modem, even if the ISP is way behind and doesn’t offer IPv6 other than 6rd, which was meant to be a transitional system like two decades ago and is barely functional.
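If it’s useful as a starting point, the VPS side of that tunnel looks roughly like this (keys and addresses are placeholders); the reverse proxy then just forwards traffic to the router’s tunnel address:

[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# the OPNsense router at home
PublicKey = <router-public-key>
AllowedIPs = 10.0.0.2/32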
It’s just a hosted reverse proxy with a proprietary server backend, as far as I can tell. I don’t usually trust “free” things like that. It’s not that expensive to do it yourself; the real expense comes from high bandwidth flowing through the proxy, which most self-hosted applications for personal use don’t really generate.
Anyway, with a reverse proxy, on the security end there’s a chance of man-in-the-middle attacks depending on the configuration. And on the privacy end, they will have the ability to log all connections. That may be where they’re planning to make money: selling that info and/or allowing MitM attacks to inject ads, like many ISPs have talked about. “Free” stuff usually isn’t actually free in the long term, even if it is now while it’s being tested. Usually it just takes a sale to a large corporation for it to become less free, even if the original intent wasn’t to do that.
…worse for users. Better for them…in the short term. That’s the real issue. Short-term profit overrules everything in modern corporations.
Well, they have to be accused of something to be detained. But some types of immigration violations don’t get a court hearing by default, so no judge ever “convicts” them of those violations.
Depends on what you want. You can have the application serve an HTTPS certificate, which could either be one issued by a globally trusted issuer or just a self-issued certificate that Caddy is configured to trust. And Caddy can then present the globally trusted certificates from Let’s Encrypt or whatever to clients. But that definitely requires extra steps. It just comes down to: how secure do you want to be?
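A rough Caddyfile sketch of that middle option (hostname, port, and CA path made up): clients get the Let’s Encrypt certificate, while Caddy verifies the app’s self-issued certificate against the CA file you tell it to trust.

example.com {
    reverse_proxy https://127.0.0.1:8443 {
        transport http {
            tls_trusted_ca_certs /etc/caddy/app-ca.crt
        }
    }
}

(Newer Caddy releases replace tls_trusted_ca_certs with tls_trust_pool file, if I remember right.)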