

You can set a connection pool. Or use a database proxy.
EDIT: Oh, you are using Django for shell scripts?
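To make the first suggestion concrete, here is a minimal sketch of what it might look like in Django's settings. This is an assumption-laden illustration, not the commenter's setup: the database name is hypothetical, `CONN_MAX_AGE` gives persistent connections on any recent Django, and the `"pool"` option requires Django 5.1+ with psycopg 3.

```python
# settings.py (sketch; names hypothetical) — two alternatives for avoiding
# a fresh database connection on every request or script run.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        # Option A: persistent connections, reused for up to 60 seconds.
        "CONN_MAX_AGE": 60,
        # Option B (Django 5.1+ with psycopg 3): a managed connection pool.
        # Note: incompatible with a nonzero CONN_MAX_AGE — pick one or the other.
        # "OPTIONS": {"pool": True},
    }
}
```

Neither option helps a short-lived shell script much, which is presumably the point of the EDIT: a process that starts, runs one query, and exits pays the connection cost regardless.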


There’s no dead person out there complaining that they spent too much time on a computer…
So, if I’m reading things right, anything that runs on Z-Wave or Zigbee will necessarily run locally, because those are mesh protocols?
Anyway, thanks a lot. Those are really simple keywords to check.
I’ve been looking for some smart outlets, and it seems impossible to discover which ones can be used with normal well-known protocols and which can only be used through a phone app locked into a cloud service.
You are saying that the error messages terminate at some point?
There’s a good reason sysv isn’t on the meme.
If you think it never broke, that’s because you weren’t doing anything different or creating anything that required it.
That said, systemd had a tendency to break even if you didn’t do either. But nowadays the bugs are mostly fixed, and the stupidity is contained to the parts people mostly don’t adopt.
Running the code again is fast and requires no thinking. Finding the problem is slow and requires a lot of thinking.
It’s worth looking under the light-post in case your keys somehow rolled there. Just not for long.
The fact that this is becoming the best-known quote from that mission is amazing in several ways, both good and bad.


Are extensible records usable already?
Not that I would pick TS because of that, but the disdain is undeserved when it has some very useful features that Haskell has been trying to copy for years.


Last time I said “No! This is getting silly!” and decided to try all those language-server GUI text editors, I lost a couple of weeks, then decided to nuke my emacs config and make LSP actually work there instead.


You can always do pre-Modern web development. There’s nothing stopping you.
In fact, with the modern browsers, it’s better than ever.


You mean SourceForge?


99.99%
TBF, no, established companies tend to have somewhere between 99.9% and 99.99% uptime. It only goes higher if the company is explicitly focused on it, at a large cost that usually needs to be paid by some customer.
But Github pretends to be one of those companies that focus on uptime. And it’s also less than 99% right now. So yeah, the main point stands.
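For scale, a quick back-of-the-envelope on what those percentages actually allow — plain arithmetic, nothing GitHub-specific:

```python
# Rough downtime budgets implied by "nines" of uptime (illustrative arithmetic).
HOURS_PER_YEAR = 365.25 * 24  # 8766

def downtime_hours(uptime_pct: float) -> float:
    """Hours of allowed downtime per year at a given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {downtime_hours(pct):.1f} h/year of downtime")
# 99.0%  -> ~87.7 h/year
# 99.9%  -> ~8.8 h/year
# 99.99% -> ~0.9 h/year
```

So the gap between “less than 99%” and the 99.99% an uptime-focused company aims for is the difference between days of outage per year and under an hour.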
Oh, it’s limited. But you will only discover it if you look at the bug-tracker.
People have been losing their cursors since the Xerox labs days.


Half a dozen of them have real cost/benefit trade-offs that make preaching for a single one harmful. The other thousand are just a complete waste of time, not worth mentioning.


I’m deeply sorry. If it’s any consolation, it will eventually happen to everybody.
Or at least everybody that didn’t see the light and adopt The Penguin… But let’s leave religion to another time.
You use it to write programs.
It is. But I doubt they are used by 3 people combined.
The Debian project needs to keep the machines for compiling and testing. And there’s probably no other machine of those architectures available for the enthusiasts to actually use.
What are you guys doing to your JS packages for them to last so long?