Ah come on, we all know as software people we can never stop the spreadsheets from being the real data interchange format ;)
Yes that’s true. I guess what I wanted to point out is that GitLab has dependencies like Postgres, Redis, Ruby (with Rails), Vue.js… whereas Forgejo can use just SQLite and jQuery.
Something not mentioned yet: Forgejo, the software running Codeberg, has a smaller feature set and narrower scope than GitLab (“GitLab is the most comprehensive AI-powered DevSecOps Platform” from their website).
Forgejo is much easier to administer for smaller groups. For example, compare the dependencies mentioned in the Forgejo installation documentation with those in the GitLab installation documentation.
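To give an idea of the gap: if I remember the Gitea-style configuration Forgejo inherits correctly, pointing it at SQLite is just a couple of lines in app.ini (the path here is made up; check the docs for your install):

[database]
DB_TYPE = sqlite3
PATH = /var/lib/forgejo/data/forgejo.db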
Devil’s advocate: what about the posts and comments I’ve made via Lemmy? They could be presented as files (like email). I could read, write and remove them. I could edit my comments with Microsoft Word or ed. I could run some machine learning processing on all my comments in a Docker container using just a bind mount like you mentioned. I could back them up to Backblaze B2 or a USB drive with the same tools.
But I can’t. They’re in a PostgreSQL database (which I can’t query), accessible via an HTTP API. I’ve actually written a Lemmy API client, then used that to make a read-only file system interface to Lemmy (https://pkg.go.dev/olowe.co/lemmy). Using that file system I’ve written an app to access Lemmy from a weird text editing environment I use (developed at least 30 years before Lemmy was even written!): https://lemmy.sdf.org/post/1035382
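Not my actual tool, just a sketch of the idea: since it’s all plain HTTP and JSON, a few shell commands can already turn comments into files. This assumes the v3 API, an instance at lemmy.sdf.org, and curl and jq on hand; the file naming is made up.

# grab a page of comments from the HTTP API
curl -s 'https://lemmy.sdf.org/api/v3/comment/list?limit=10' > comments.json

# write each comment body to its own file, named by comment id
for id in $(jq '.comments[].comment.id' comments.json); do
    jq -r ".comments[] | select(.comment.id == $id) | .comment.content" comments.json > "comment-$id.md"
done

From there they really are just files: grep them, open them in Word or ed, back them up with rsync.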
More ideas, if you’re interested, at https://upspin.io
They even have a term for this — local-first software — and point to apps like Obsidian as proof that it can work.
This touches on something that I’ve been struggling to put into words. I feel like some of the ideas that led to the separation of files and applications to manipulate them have been forgotten.
There’s also a common misunderstanding that files only exist in blocks on physical devices. But files are more of an interface to data than an actual “thing”. I want to present my files - wherever they may be - to all sorts of different applications which let me interact with them in different ways.
Only some self-hosted software grants us this portability.
This was the provider I went with after self-hosting my mail for 7+ years on an OpenBSD VPS. I feel like Migadu is an honest and good-value service.
Each time your browser makes a request (such as updating the graphs), it submits a new DNS query.
That would be surprising; most HTTP clients reuse network connections, which are deliberately kept open to avoid the overhead of reopening them (including the latency of a fresh DNS lookup).
Then again, I’ve seen worse ;)
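If you’re curious, you can check with curl: fetching the same URL twice in one invocation should report something like “Re-using existing connection” for the second request (example.com is just a stand-in):

curl -sv -o /dev/null -o /dev/null https://example.com https://example.com 2>&1 | grep -i connection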
Slightly off-topic: I’m not too familiar with FreeBSD (I use OpenBSD), but others may be interested to know that it’s possible to configure WireGuard interfaces without installing any packages.
It probably just involves running some ifconfig commands at boot via some entries in /etc/rc.conf. See https://docs.freebsd.org/en/books/handbook/network/
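For comparison, on OpenBSD the whole interface lives in /etc/hostname.wg0 and gets passed to ifconfig at boot. Something like this, where all keys and addresses are placeholders:

# /etc/hostname.wg0
wgkey PRIVATEKEYBASE64=
wgpeer PEERPUBLICKEYBASE64= wgendpoint 192.0.2.1 51820 wgaip 10.0.0.0/24
inet 10.0.0.2/24
up

I’d guess the FreeBSD rc.conf equivalent looks similar.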
Yeah, I’ve always found the name AllowedIPs a little bit misleading. It is mentioned in the manpage:
A comma-separated list of IP (v4 or v6) addresses with CIDR masks from which incoming traffic for this peer is allowed and to which outgoing traffic for this peer is directed.
But I think it’s a little funny how setting AllowedIPs also configures how packets are routed. I dunno.
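To make it concrete: in a wg-quick style config, a peer section like this (key and endpoint are placeholders) both filters which source addresses are accepted from the peer and installs a route to each listed network via the tunnel:

[Peer]
PublicKey = PEERPUBLICKEYBASE64=
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24, 192.168.1.0/24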
You could start troubleshooting by manually executing DNS queries from mainDesktop.lan, and watching the DNS server logs.
Not sure what OS the desktop is running, but assuming Windows you could run:
nslookup -type=A pihole.example.duckdns.org.
On macOS/Linux/etc.:
dig -t A pihole.example.duckdns.org.
This could rule out behaviour from the proxy or applications.
the most important part of this article now is the “Note of reflection” added 10 years after its inception
Agreed; amazing to see this added. I suppose it’s admirable… but the pain that has been inflicted on the teams I’ve been part of in the meantime… ugh.
I haven’t seen it applied successfully in practice.
Neither have I.
I could see the value, in theory, for geographically separate teams spanning many time zones juggling concurrent development efforts. But the reality for a lot of commercial software development is totally the opposite. It’s done in offices where staff are in at 9, out at 5, all working on the same features in a linear style. They’re not developing an OS kernel; they’re maintaining a CRUD app.
For that “git-flow” to work, code needs to be in a state where patches can be rebased/merged independently of one another. The codebases I’ve worked on have never been anywhere near that robust.
My old workplace had a person exactly like this. We all had enough of the bullshit, but our boss didn’t care. In the end, I moved on.
Later I realised it wasn’t just that one person, it was actually a bad culture overall which wasn’t being moderated well. The managers were just really bad at their job. So I’m really happy I moved on.
There are lots of cool Linux and OSS communities out there. Even if they are not exactly about the particular distro you are interested in, there will be ways to learn and share about it.
Yeah lol I know what you mean. I think what I meant to say was that I hear GitHub mentioned so much now compared to, say, 10 years ago: from non-technical people, and from teams who historically used Subversion or even no version control system at all.
Out of curiosity I looked it up on Google Trends: https://trends.google.com/trends/explore?date=2013-09-04%202023-09-04&q=github
Haha yeah, I wonder whether people actually did ask this when Linux started making the rounds. If I read the history right, BSD was already almost 15 years old at the time!
But maybe you personally don’t have to write the docs or packaging stuff; if you publish it as open source, others can have a go themselves! :)
Here’s a slightly different story: I run OpenBSD on two bare-metal machines in two different physical locations. I used k8s at work for a bit until I steered my career more towards programming. Having k8s knowledge handy doesn’t really help me much now.
On OpenBSD there is no Kubernetes. Because I’ve got just two hosts, I’ve managed them with plain SSH and the default init system for 5+ years without any problems.
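“Managed with plain SSH” really just means shell loops. A sketch of the kind of thing I mean, with made-up hostnames:

# patch the base system, update packages, restart a service on both hosts
for host in alpha.example.org beta.example.org; do
    ssh root@$host 'syspatch && pkg_add -u && rcctl restart httpd'
done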
Great points. It’s the proprietary nature and lack of interoperability of “the cloud” that causes problems. My email is hosted on a remote server but I have control over my data. There’s no algorithm controlling what order I see my mail in or who I can forward stuff to. There are many different tools and clients available to me and to everyone else to work with their data.
Imagine if publishing a photo from my phone to Instagram meant copying a file from one folder to another. Or if creating an automatically translated voiceover from the captions of all my old Facebook photos were just a matter of opening them in a video editor. Right now these operations require complex software. But the technology is all there and has been for a long time.
I often think about https://upspin.io
And sharing changes can be done with just email and regular git! https://git-send-email.io
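For example, something like this emails your latest commit as a patch, no forge account needed (the address is a placeholder):

# turn the most recent commit into a patch file, then mail it
git format-patch -1
git send-email --to=maintainer@example.com 0001-*.patch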
of course!