
Fantastic! Thank you for looking into the source code and verifying it!
Not true.
The links just need a “nofollow” attribute (which is something that Lemmy could add, if they haven’t already).
These links do not influence the search engine rankings of the destination URL because Google does not transfer PageRank or anchor text across them. In fact, Google doesn’t even crawl nofollowed links.
edit: added relevant blob of text.
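For reference, a nofollow link is just a rel attribute on the anchor tag — the URL below is a placeholder:

```html
<!-- A link marked nofollow: search engines are told not to pass
     ranking credit through it. The href is a placeholder. -->
<a href="https://example.com/some-post" rel="nofollow">some post</a>
```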
Welp, I guess this means something bad is gonna happen and Spez is trying to get in front of the inevitable protests.
I wonder what it could be…
… another option: use the web-based Teams.
If you want more isolation, you could dedicate a separate web browser to it.
Of course, the web version of Teams has a few annoying limitations (you can only see 4 people at the same time, opening multiple tabs to Teams kinda breaks it, etc.), but it is bearable.
But the nice thing about email is it’s decentralized, and everyone already has it.
That is true, but in the case of email as an issue tracker, only the people who have received it will know of its existence (unless it’s mirrored on public-facing websites, like Debian does with its issue tracking).
The thing we’d lose is the “ease of access”. Tbh, I’d see Usenet as a better distribution medium than email for OSS apps… but I really appreciate the intention behind solutions like git-issue: move the issue tracking into the same tools used to track code changes. That, in my opinion, is more in line with K.I.S.S.
You make a valid point regarding losing important contextual information like PRs and open bugs.
However, I don’t think email offers the same level of visibility as we currently have with github workflows.
There is a creative Git-based issue tracker I’ve used called git-issue. Basically, the entire bug/issue/PR process is captured as YAML (I think) files, which are kept in a dedicated branch.
When I used it (as I wanted a self-hosted bug tracker), I found it functional but a bit cumbersome. However, I could see someone building a very nice github-like web interface for it.
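The core idea can be sketched with plain git commands — this is just an illustration of keeping issues as YAML files on a dedicated branch, not the actual git-issue CLI (whose commands and file layout differ):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com   # placeholder identity
git config user.name dev
git commit -q --allow-empty -m "initial commit"

# An orphan branch acts as a tiny issue database with no shared code history
git checkout -q --orphan issues
cat > issue-0001.yaml <<'EOF'
id: 0001
title: Login page returns 500
state: open
labels: [bug]
EOF
git add issue-0001.yaml
git commit -q -m "issue 0001: login page returns 500"

git ls-tree --name-only issues   # prints: issue-0001.yaml
```

Because the issues live in a branch, they clone, merge, and distribute exactly like the code does.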
I’d say, let’s wait for a catastrophic event at github before we jump ship.
Git, by its nature, is distributed. If, worst case, github.com went down without warning, life would move on: people will have local checkouts of the “important/popular” repos, which could be pushed “somewhere else”.
Yeah, github actions wouldn’t work, and builds that pull from github repos would need to be refactored, but life would move on.
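As a sketch of that recovery path — any full clone can simply be pushed to a new remote. The bare repo here stands in for the hypothetical new hosting location:

```shell
set -e
work=$(mktemp -d)
mirror=$(mktemp -d)
git init -q --bare "$mirror"            # stand-in for "somewhere else"

cd "$work"
git init -q
git config user.email dev@example.com   # placeholder identity
git config user.name dev
git commit -q --allow-empty -m "local history survives"

# Point the surviving clone at the new remote and push everything
git remote add fallback "$mirror"
git push -q fallback --all
git push -q fallback --tags
```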
I found traefik to be a more feature-rich load balancer when used in kubernetes environments. Outside of kubernetes, I’d say if you’re happy with nginx, keep using nginx :)
Windows (and most other operating systems) has a “user land” and a “kernel space”.
“User land” is where all your applications run. A “user land” application can only see other applications and files owned by the same user. Eventually, a user land app will want to do “something”. This can be something like reading a file from disk, making a network connection, or drawing a picture on the screen. To accomplish this, the user space app needs to “talk” to the kernel.
If user space apps were instruments being played in an orchestra, the kernel would be the conductor. The kernel is responsible for making sure the user land apps can only see their respective users files/apps/etc.
The kernel “can see and do everything”; it reports to no one. It has complete access to all the applications and every file. Your device drivers for your printer, video card, etc. all run in “kernel space”.
Basically, the OP’s link: they’ve ported Doom to run effectively like a device driver. This means that if Doom crashes, your PC will blue screen.
This has no practical purpose, other than saying “yeah, we did it” :)
STOP! You’re scaring the children!
I’ve got a similar set up and everything works. So, I can confirm that your assumptions are sound.
My solution is kubernetes based, so I use cert-manager to issue the Let’s Encrypt certificate (using DNS-01 as the verification mechanism), which gets fed into a Traefik reverse proxy. Traefik is running on a non-standard port, which I can access from the outside world.
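For reference, the cert-manager side of that setup boils down to a Certificate resource along these lines (names and the domain are placeholders, and it assumes a ClusterIssuer already configured with a DNS-01 solver):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: home-tls              # placeholder
  namespace: traefik
spec:
  secretName: home-tls        # Secret that Traefik reads the cert from
  issuerRef:
    name: letsencrypt-dns     # placeholder ClusterIssuer using DNS-01
    kind: ClusterIssuer
  dnsNames:
    - example.mydomain.org    # placeholder domain
```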
I’d suggest tearing your current system down and verifying everything is configured correctly as you rebuild.
For example, issue a

curl --verbose --insecure https://...

(I’ve found requesting robots.txt to be helpful) against nginx. This would allow you to see if the problem is between the outside world and nginx, or between nginx and your service.

… and not to rob you of this experience, but you might want to look into Cloudflare Tunnels. It allows you to run services within your network that are exposed/accessible directly from Cloudflare. It’s entirely secure (actually more so than your proposed system) and you don’t need to mess around with SSL.