

Just put everything that doesn’t have OIDC behind forward auth. OIDC is overrated for selfhosting.


Base 1 usually uses ones, because at that point the numeral is just summation: the value is the number of marks. Using zero as the numeral would be a bit awkward. Also, historically zero is pretty new.
Tally marks are essentially a base 1 system.
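A tiny sketch of what that means in practice (the function names here are just illustrative): a base-1 numeral is one mark per unit, and reading it back is just counting.

```python
def to_unary(n: int) -> str:
    """Represent a non-negative integer in base 1: one tally mark per unit."""
    return "|" * n

def from_unary(tallies: str) -> int:
    """Reading a unary numeral back is just counting the marks."""
    return len(tallies)

# The value is literally the sum of the marks, so zero is the empty string.
print(to_unary(5))       # |||||
print(from_unary("|||")) # 3
```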


You’re arguing two different points here. “A VPN can act as a proxy” and “A VPN that only acts as a proxy is no longer a VPN”. I agree with the former and disagree with the latter.
A “real” host-to-network VPN could be used as a proxy by just setting your default route through it, just like a simple host-to-host VPN could be NOT a proxy by only allowing internal IPs over the link. Would the latter example stop being a VPN if you add a default route going from one host to the other?


Fundamentally, a host-to-host VPN is still a VPN. It creates an encapsulated L2/L3 link between two points over another network. The number of hosts on either end doesn’t change that. Each end still has its own interface address, subnet, etcetera. You could use the exact same VPN config for both a host-to-host and host-to-site VPN simply by making one of the hosts a router.
I see your point about advocating for other methods where appropriate (although personally I prefer VPNs) but I think that gatekeeping the word “VPN” is silly.


“It has effectively the same function as a proxy” isn’t the same thing as “it’s not actually a VPN”.
One could argue you’re not really using the tech to its fullest advantage, but the underlying tech is still a VPN. It’s just a VPN that’s being used as a proxy. You’re still using the same VPN protocols that could be used in production for conventional site-to-site or host-to-network VPN configurations.
Regardless, you’re the one who brought up commercial VPNs; when using OpenVPN to create a tunnel between a VPS and home server(s), it seems like it’s being used exactly to “create private communication between multiple clients”. Even by your definition that should be a VPN, right?


VPN and proxy server refer to different things. There’s lots of marketing BS around VPNs, but that doesn’t make the term itself BS; they’re different things, and the difference matters when you’re talking about networking.


Yeah, they mention in the article that the team tries to get “sensitive items” and “harmful substances” but Claude shuts it down. Tungsten cubes, on the other hand…


It’s only “running” the business so much. The physical stocking and purchasing is done by humans, who would presumably not buy anything that would bankrupt the company, because then it’s on them.
Here’s Anthropic’s article about the previous stage of this project that explains it pretty well. Part two is a good read too though.


The idea is that it isn’t just operating the vending machine itself, it’s operating the entire vending machine business. It decides what to stock and what price to charge based on market trends and/or user feedback.
It’s a stress test for LLM autonomy. Obviously a vending machine doesn’t need this level of autonomy; you usually just stock it with the same thing every time. But a vending machine works as a very simple “business” that can be simulated with low stakes, and it shows how LLM agents behave when left to operate on their own like this, and can be used to test guardrails in the field.


If there’s a port you want accessible from the host/other containers but not beyond the host, consider using the expose directive instead of ports. As an added bonus, you don’t need to come up with arbitrary ports to assign on the host for every container with a shared port.
IMO it’s more intuitive to connect to a service via container_name:443 instead of localhost:8443
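For example, a minimal Compose sketch (service names and images here are placeholders), assuming both services sit on the same Compose network:

```yaml
services:
  app:
    image: nginx:alpine
    # Documents the port and keeps it reachable from other containers
    # on the same network at app:443, without publishing it on the host.
    expose:
      - "443"
  gateway:
    image: nginx:alpine
    # ports publishes on the host; only use it where outside access is needed.
    ports:
      - "8443:443"
```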


It’s a trend for homelab folks to use Cloudflare themselves…


The UX just isn’t there for MPV. Jellyfin isn’t always ideal but it gives an interface roughly on par with a streaming service. Why should I replace that with a tool like MPV? I don’t need keyboard controls, I watch from my couch. It seems like all downsides to me.


You say /s but look at that account’s profile, it just straight up is AI lol
The heatpipes are a nonissue. Maybe they’re going to do a surprise heel turn with this new mainboard, but the Laptop 13 previously got the same heatpipe upgrade and it’s completely contained to the mainboard; it’s just as modular as before and you can switch between the parts. All the same parts work, it just makes that particular mainboard more efficient at cooling. Plus, the parts they added in the 13 that they’re now bringing to the 16 are backwards compatible, and the new graphics cards were announced to be backwards compatible too.
Also, the Laptop 16 launched with the adjustable keyboard, but it only came out a year ago, so maybe you’re thinking of YouTubers comparing it to the 13.
So far Framework has a great track record of not breaking backwards compatibility.
EDIT: You can buy the new mainboard on its own to upgrade your old laptop. I was hedging my statement before, but it’s definitely backwards compatible.


“Just got to this” doesn’t really seem like a lie to me. If they said “just read this”, that would be a lie, but “just got to this” implies they didn’t have time to reply or think about it, without commenting on whether they read it. Honestly, to me “just got to this” implies it’s been on their to-do list but they didn’t get around to it until now. If they hadn’t read it at all, saying “just got this” or “just read this” would make more sense.


I don’t see how? Normal HTTP/TLS validation would still apply so you’d need port forwarding. You can’t host anything on the CGNAT IP so you can’t pass validation and they won’t issue you a cert.


CGNAT is for IPv4, the IPv6 network is separate. But if you have IPv6 connectivity on both ends setting up WG is the same as with IPv4.
Only giving a /64 breaks stuff, but some ISPs do it anyway. With only a /64 you can’t subnet your network at all, since SLAAC needs a full /64 per subnet.
I really doubt it. We could give everyone on Earth their own /48 with less than 1% of the IPv6 address space.
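The arithmetic behind that claim is easy to check (population figure is a rough 2020s estimate):

```python
# A /48 prefix is identified by its first 48 bits, so the IPv6 space
# contains 2**48 distinct /48s.
total_48s = 2 ** 48               # about 281 trillion prefixes
world_population = 8_000_000_000  # rough current estimate

fraction_used = world_population / total_48s
print(f"{fraction_used:.6%}")  # well under 1% (about 0.003%)
```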


I definitely feel the lab burnout, but I feel like Docker is kind of the solution for me… I know how Docker works, it’s pretty much set and forget, and ideally it’s totally reproducible. Docker Compose files are pretty much self-documenting.
Random GUI apps end up being waaaay harder to maintain because I have to remember “how do I get to the settings? How did I have this configured? What port was this even on? How do I back up these settings?” rather than keeping a couple of text config files in a git repo. It’s also much easier to revert to a working version if I try to update a Docker container and fail, or get tired of trying to fix it.