Some of the concepts in this book really stuck with me, but I had no idea what the title was! Thanks!
“Some days you’re the original, some days you’re the copy” or something like that
Oops, my apologies, lol. I checked, and I must have installed the upstream NewPipe repo so long ago that I forgot I even had it in my sources list. Literally my only repo other than the main F-Droid one.
No reason not to use it, though; it’s the official NewPipe repo:
Refresh your repos, I literally just downloaded and installed it
Out on F-Droid now and working.
Am farmer, can confirm. I also have my chequebook with me… Non-farmers, when was the last time you wrote a cheque, aside from rent? I feel like we’re the only ones still using them.
Even with external volumes, I don’t think there’s any mechanism by which a container can escape a bind mount to affect the rest of the host filesystem. I use bind mounts all the time, far more than Docker volumes.
As I said, C/++ with renewed appreciation
No such thing as eval in non-interpreted languages. Unless you’re crazy enough to invoke the compiler and exec() the result.
I used eval too in my Perl days, which is why I specifically called it out. IMO, any time you see eval used, there’s almost always another, more proper way to do it.
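To make the “invoke the compiler” route concrete, here’s a minimal sketch of a poor man’s eval in C, assuming a POSIX system with cc on the PATH; the file paths and the answer() function are made up for illustration:

```c
/* Poor man's eval: write source to a file, shell out to the compiler,
 * then load and call the result via dlopen(). Build with:
 *   cc eval_sketch.c -ldl */
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    FILE *f = fopen("/tmp/snippet.c", "w");
    if (!f) return 1;
    fputs("int answer(void) { return 42; }\n", f);
    fclose(f);

    /* Invoke the compiler at runtime -- the crazy part. */
    if (system("cc -shared -fPIC -o /tmp/snippet.so /tmp/snippet.c") != 0)
        return 1;

    void *lib = dlopen("/tmp/snippet.so", RTLD_NOW);
    if (!lib) return 1;
    int (*answer)(void) = (int (*)(void))dlsym(lib, "answer");
    return answer() == 42 ? 0 : 1;
}
```

And yes, this is exactly as slow and dangerous as it sounds: if the source string isn’t trusted, you’ve built yourself a remote code execution feature.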
I love the term “write-only code”, it’s perfect. I used to love Perl as it felt like it flowed straight from my brain into the keyboard. What a free and magical language.
So it turned out I had ADHD. Took meds, went back to C/++ with renewed appreciation, and haven’t touched Perl since, as it horrifies me to look at it. What a nightmare of dangling references and questionable typing. Any language that allows you to cast a string to a function and call it really needs to sit down and think about what it’s doing.
QCAD still sucks compared to AutoCAD, but it’s only around $50 for a license, whereas AutoCAD is pretty much subscription-only at this point, I believe.
We actually use it at work, because our 2D drafting use cases are very limited but we still need something DWG-compatible.
I wouldn’t try parametric models in FreeCAD
I would clarify that you’re talking about a specific use case, one that OpenSCAD does indeed handle better. However, for most CAD tasks I find OpenSCAD is overkill and less intuitive.
“Parametric design” usually refers to the workflow used in the Part Design workbench, as well as SolidWorks etc. where geometry is defined by constraints.
The Part Design workbench does work well and, despite the topological naming issue, is sufficient for most hobbyist and many light industrial tasks. If I need to draw up an arbitrary bracket or bushing or similar, I don’t even bother using a workflow that guards against the issue; I just use it casually, like I would SolidWorks. Only if the part is complex, or if I know it will need to be tweaked, do I bother doing everything on datum planes etc., because it’s a lot slower and more hassle.
It’s very good news that the topological naming issue is being solved, though. That’s the #1 issue with FreeCAD IMO, and the one that holds it back from serious industry use.
A million tiny decisions can be just as damaging. In my limited experience with several different local and cloud models, you have to review basically all output, as they can confidently introduce small errors. Often the code will compile and run, but carry subtle mistakes that cause output to drift, or the aforementioned long-run overflow type errors.
Those are the errors that junior or lazy coders will never notice and walk away from, causing hard-to-diagnose failures down the road. And the code “looks fine”, so reviewers would need to really go over it with a fine-toothed comb, which only happens in critical industries.
I will only use AI to write comments and documentation blocks, and to get jumping-off points for algorithms I don’t keep in my head (“write a function to sort this array”). It’s better than Stack Exchange for that, IMO.
I tried using AI tools to do some cleanup and refactoring of some legacy embedded C code and was curious if it could do any optimization or knew any clever algorithms.
It’s pretty good at figuring out the function of the code and adding comments, and it did some decent refactoring of some sections to make them more readable.
It has no clue how to work in a resource-constrained environment, or about the main concepts that separate embedded from everything else: namely, that the code has to be able to run “forever”, operate in real time on a constant flow of sensor data, and that nobody else is taking care of your memory management.
It even explained to me that we could do input filtering with big averaging arrays on a device with only 1 kB of RAM, or use a long long for a never-reset accumulator without worrying about what will happen, because “it will be years before it overflows”.
AI buddy, some of these units have run for decades without a power cycle. If lazy coders start dumping AI output into embedded systems the whole world is going to get a lot more glitchy.
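For contrast, here’s a minimal sketch of the kind of filter that actually fits a 1 kB part: an exponential moving average in fixed point, holding four bytes of state instead of a big sample array. The names and the smoothing shift are my own illustration, not from any particular codebase:

```c
/* RAM-light input filtering: exponential moving average in fixed point.
 * State is one uint32_t, versus an entire buffer for simple averaging. */
#include <stdint.h>

#define EMA_SHIFT 4  /* smoothing factor alpha = 1/16 */

static uint32_t ema_state;  /* filtered value, scaled by 1 << EMA_SHIFT */

/* Feed one raw ADC sample; returns the filtered value. Constant time,
 * constant memory, and the state is bounded by 0xFFFF << EMA_SHIFT,
 * so it can run "forever" without overflowing. */
uint16_t ema_update(uint16_t sample) {
    ema_state += sample - (ema_state >> EMA_SHIFT);
    return (uint16_t)(ema_state >> EMA_SHIFT);
}
```

No buffer, no never-reset accumulator, and it behaves the same on day one and in year twenty, which is exactly what “runs forever, in real time” demands.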
As per tradition, the real clown car is in the comments
Personally I live in a very rural location and I farm, so I can spend a lot of time on the road or in my tractor. 1 GB wouldn’t get me through a day in the field, so I have a pretty big collection with a lot of variety. We don’t even have reliable FM radio here, so it’s bring your own music or listen to the diesel roar.
We’re talking about replacing lost content here, though. As such, you can use the streaming services as a “backup” by re-ripping your whole collection if you lose it.
I’m actually doing this now as part of a library cleanup. Zotify + beets are a great combo to pull down vast quantities of music and properly sort and tag it.
Then I stream it to my phone in my truck using Ampache and Ultrasonic, which does have a local buffering option.
However, if you have some exotics that you ripped from rare discs, demos or prereleases, live recordings with sentimental value, etc., I would suggest keeping those properly backed up. I don’t have many of these, but the ones I do have are backed up both to the cloud and offsite.
Everything will seem to be going great, but to actually gain access to the castle you’ll have to compare your situation to successful rescues to find the undocumented drawbridge control.
I really don’t see how building a Docker container afterward makes it easier
What it’s supposed to make easier is both sandboxing and reuse / deployment. For example, Docker + Traefik makes some tasks so incredibly easy and secure compared to running them on bare metal. Or if you need to spin up multiple instances, they can be created and destroyed in seconds. Without the container, this just isn’t feasible.
The Dockerfile uses MySQL because it works. If you want to know whether the core service works with PostgreSQL, that’s not really on the guy who wrote the Dockerfile; that’s on the application maintainer. Read the docs, do some testing, and create your own container with its own PostgreSQL, or connect to an external database if that suits your needs better.
Once again, the flexibility of bind mounts means you could often drop that external database right on top of the one in the container. That’s the real beauty of Docker IMO: being able to slot containers into your system seamlessly thanks to the mount system.
adapting can be a PITA when the package is built around a really specific environment
That’s the great thing about Docker: it lets you bring that really specific environment anywhere, and in an incredibly lightweight manner compared to the old days of heavyweight VMs. I’ve even got Docker containers running on a Raspberry Pi B+ so old that it would otherwise be nearly impossible to install the libraries required to run modern software.
The image generation can be cheap, but I was imagining this sort of watermark wouldn’t be a visible part of the image so much as an embedded signature that hashes the image.
Require enough PoW to generate the signature, and this would at least cut down the volume of images created, and possibly limit generation to groups or businesses with clusters that could be monitored, without clamping down on image generation in general.
A modified version of what you mentioned could work too, one where just these specific images have to be vetted and signed by a central authority using a private key. Image generation software wouldn’t be restricted for general purposes, but no signature on suspicious content and it’s off to jail.
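To sketch what the PoW signature might look like: grind a nonce until the hash of the image bytes plus that nonce clears a difficulty target, then embed the (hash, nonce) pair as the signature. This is a hedged illustration, not any existing scheme; FNV-1a stands in for a real cryptographic hash like SHA-256, and the names and difficulty are assumptions:

```c
/* Proof-of-work "signature" sketch: verification is one hash,
 * generation costs ~2^bits attempts on average. */
#include <stdint.h>
#include <stdio.h>

/* Toy 64-bit FNV-1a; a real scheme would use SHA-256 or similar. */
static uint64_t fnv1a(const void *data, size_t len, uint64_t h) {
    const uint8_t *p = data;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 0x100000001b3ULL;  /* FNV prime */
    }
    return h;
}

/* Find a nonce so hash(image || nonce) has `bits` leading zero bits. */
static uint64_t pow_sign(const void *image, size_t len, int bits) {
    uint64_t base = fnv1a(image, len, 0xcbf29ce484222325ULL);
    uint64_t mask = ~0ULL << (64 - bits);
    for (uint64_t nonce = 0; ; nonce++)
        if ((fnv1a(&nonce, sizeof nonce, base) & mask) == 0)
            return nonce;
}

int main(void) {
    const char image[] = "stand-in for image bytes";
    uint64_t nonce = pow_sign(image, sizeof image - 1, 20);
    printf("nonce = %llu\n", (unsigned long long)nonce);
    return 0;
}
```

The asymmetry is the point: checking a signature costs one hash, while producing one costs on the order of 2^20 attempts here, so cranking up the difficulty throttles bulk generation without touching legitimate one-off use.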
Great to hear this story of success. That plus
Reminds me of Schneider’s stupid proprietary dongle for programming their PLCs. It’s just a CH341 in a funny-shaped case that fits into the funny-shaped slot on the PLC, where it plugs onto an ordinary 0.1" pin header to talk logic-level serial.
Plus it has a custom USB ID, of course. Probably costs $2 to manufacture and sells for almost $300.