• 0 Posts
  • 29 Comments
Joined 2 years ago
Cake day: July 10th, 2023

  • Great to hear this story of success. That, plus

    $266.99 per probe for the original proprietary one

    Reminds me of Schneider’s stupid proprietary dongle for programming their PLCs. It’s just a CH341 in a funny-shaped case that fits into the funny-shaped slot on the PLC, where it plugs onto an ordinary 0.1" pin header to talk logic-level serial.

    Plus it has a custom USB ID of course. Probably costs $2 to manufacture, sells for almost $300 as well.


  • I love the term “write-only code”, it’s perfect. I used to love Perl as it felt like it flowed straight from my brain into the keyboard. What a free and magical language.

    So it turned out I had ADHD. Took meds, went back to C/++ with renewed appreciation, haven’t touched Perl since as it horrifies me to look at it. What a nightmare of dangling references and questionable typing. Any language that allows you to cast a string to a function and call it really needs to sit down and think about what it’s doing.



  • I wouldn’t try parametric models in freecad

    I would clarify that you’re talking about a specific use case, one that OpenSCAD does indeed handle better. However, for most CAD tasks I find OpenSCAD overkill and less intuitive.

    “Parametric design” usually refers to the workflow used in the Part Design workbench, as well as in SolidWorks etc., where geometry is defined by constraints.

    The Part Design workbench does work well and, despite the topological naming issue, is sufficient for most hobbyist and many light industrial tasks. If I need to draw up an arbitrary bracket or bushing or similar, I don’t even bother using a workflow that guards against the issue; I just use it casually like I would SolidWorks. Only if the part is complex, or if I know it will need to be tweaked, do I bother doing everything on datum planes etc., because it’s a lot slower and more hassle.

    It’s very good news that the topological naming issue is being solved, though. That’s the #1 issue with FreeCAD IMO and the one that holds it back from serious industry use.


  • A million tiny decisions can be just as damaging. In my limited experience with several different local and cloud models, you have to review basically all of the output, because they confidently introduce small errors. Often the code will compile and run, but with subtle mistakes that cause the output to drift over time, or the aforementioned long-run overflow-type errors.

    Those are the errors that junior or lazy coders will never notice and walk away from, causing hard-to-diagnose failures down the road. And the code “looks fine”, so reviewers would need to really go over it with a fine-toothed comb, which only happens in critical industries (illustration at the end of this comment).

    I will only use AI to write comments and documentation blocks, and to get jumping-off points for algorithms I don’t keep in my head (“Write a function to sort this array”). It’s better than Stack Exchange for that IMO.
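
    To illustrate the drift point above: purely hypothetical C, not from any real review. Both versions compile, both pass a quick glance, and one of them quietly slides.

        #include <stdint.h>

        #define PERIOD_MS 100u

        /* Looks fine, drifts: each deadline is re-anchored to "now", so
         * whatever time the work took is silently added to the period and
         * the output timestamps slide later and later over a long run. */
        uint32_t next_deadline_drifty(uint32_t now_ms)
        {
            return now_ms + PERIOD_MS;
        }

        /* Stays on the grid: advance the previous deadline, not "now". */
        uint32_t next_deadline_stable(uint32_t prev_deadline_ms)
        {
            return prev_deadline_ms + PERIOD_MS;
        }

        /* Wrap-safe comparison: still correct after the 32-bit tick counter
         * rolls over, which is exactly the long-run case a quick review
         * misses. */
        int deadline_reached(uint32_t now_ms, uint32_t deadline_ms)
        {
            return (int32_t)(now_ms - deadline_ms) >= 0;
        }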


  • I tried using AI tools to do some cleanup and refactoring of legacy embedded C code, and was curious whether it could do any optimization or knew any clever algorithms.

    It’s pretty good at figuring out the function of the code and adding comments, and it did some decent refactoring of a few sections to make them more readable.

    It has no clue how to work in a resource-constrained environment, or about the main concepts that separate embedded from everything else: namely, that the code has to be able to run “forever”, operate in real time on a constant flow of sensor data, and that nobody else is taking care of your memory management.

    It even explained to me that we could do input filtering with big averaging arrays on a device with only 1 kB of RAM, or use a long long as a never-reset accumulator without worrying about what happens when it wraps, because “it will be years before it overflows”. (A saner approach is sketched at the end of this comment.)

    AI buddy, some of these units have run for decades without a power cycle. If lazy coders start dumping AI output into embedded systems, the whole world is going to get a lot more glitchy.
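
    For the record, here’s roughly how that filtering is normally done on a 1 kB part: an exponential moving average in fixed point (4 bytes of state, no big sample buffer) plus a tick counter that is designed to wrap instead of hoping it never does. Illustrative sketch only; the names and the 10-bit sample width are my own assumptions, not from the actual codebase.

        #include <stdint.h>

        /* Exponential moving average in 16.16 fixed point: smooths a 10-bit
         * ADC stream with 4 bytes of state instead of an averaging buffer. */
        static int32_t acc_q16;

        uint16_t filter_sample(uint16_t adc_raw)   /* 0..1023 */
        {
            /* alpha = 1/16: acc += (raw - acc) / 16, all in fixed point;
             * with 10-bit inputs nothing here can overflow 32 bits */
            acc_q16 += (((int32_t)adc_raw << 16) - acc_q16) / 16;
            return (uint16_t)(acc_q16 >> 16);
        }

        /* Free-running tick counter that is allowed to wrap. Elapsed time
         * uses unsigned subtraction, which stays correct across the
         * wraparound, so nobody has to promise "years before it overflows". */
        static volatile uint32_t ms_ticks;

        void timer_tick_isr(void)         { ms_ticks++; }
        uint32_t ms_since(uint32_t start) { return ms_ticks - start; }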




  • We’re talking about replacing lost content here, though. And as such, you can use the streaming services as a “backup” by re-ripping your whole collection if you lose it.

    I’m actually doing this now as part of a library cleanup. Zotify + beets are a great combo for pulling down vast quantities of music and properly sorting and tagging it.

    Then I stream it to my phone in my truck using ampache and ultrasonic, which does have a local buffering option.

    However, if you have exotics that you ripped from rare discs, demos or prereleases, live recordings with sentimental value, etc., I would suggest keeping those properly backed up. I don’t have many of these, but the ones I do have are backed up both to the cloud and offsite.



  • I really don’t see how building a docker container afterward makes it easier

    What it’s supposed to make easier is both sandboxing and reuse / deployment. For example, Docker + Traefik makes some tasks so incredibly easy and secure compared to running them on bare metal. Or if you need to spin up multiple instances, they can be created and destroyed in seconds. Without the container, this just isn’t feasible.

    The Dockerfile uses MySQL because it works. If you want to know whether the core service works with PostgreSQL, that’s not really on the guy who wrote the Dockerfile; that’s on the application maintainer. Read the docs, do some testing, and create your own container with its own PostgreSQL, or connect to an external database if that suits your needs better.

    Once again, the flexibility of bind mounts means you could often drop that external database right on top of the one in the container. That’s the real beauty of Docker IMO: being able to slot containers into your system seamlessly thanks to the mount system.

    adapting can be a pita when the package is built around a really specific environment

    That’s the great thing about Docker: it lets you bring that really specific environment anywhere, and in an incredibly lightweight manner compared to the old days of heavyweight VMs. I’ve even got Docker containers running on a Raspberry Pi B+ that is otherwise so old it would be nearly impossible to install the libraries required to run modern software.


  • The image generation can be cheap, but I was imagining this sort of watermark wouldn’t be so much a visible part of the image as an embedded signature that hashes the image.

    Require enough PoW to generate that signature and you would at least cut down the volume of images created, and possibly limit generation to groups or businesses with clusters that could be monitored, without clamping down on image generation in general (rough sketch at the end of this comment).

    A modified version of what you mentioned could work too, where just these specific images have to be vetted and signed by a central authority using a private key. Image generation software wouldn’t be restricted for general purposes, but no signature on suspicious content and it’s off to jail.
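
    To sketch what I mean by the PoW-gated signature a couple of paragraphs up: toy code only, nothing like a production signing scheme, and the names and difficulty are mine. Hash the image, then grind a nonce until the combined hash clears a difficulty target; the (hash, nonce) pair is the embedded watermark. Verification is one hash, minting costs as much compute as you choose to require.

        /* Assumes OpenSSL's SHA256() is available; link with -lcrypto. */
        #include <stdint.h>
        #include <string.h>
        #include <openssl/sha.h>

        #define POW_ZERO_BITS 24   /* difficulty knob: more bits = more grinding */

        static int leading_zero_bits(const unsigned char *h, size_t len)
        {
            int bits = 0;
            for (size_t i = 0; i < len; i++) {
                if (h[i] == 0) { bits += 8; continue; }
                for (unsigned char b = h[i]; !(b & 0x80); b <<= 1)
                    bits++;
                break;
            }
            return bits;
        }

        /* Grind a nonce so SHA256(image_hash || nonce) clears the target.
         * Embed (image_hash, nonce) as the watermark; anyone can re-hash
         * the image and check the proof cheaply. */
        uint64_t mint_watermark(const unsigned char *image, size_t image_len,
                                unsigned char sig[SHA256_DIGEST_LENGTH])
        {
            unsigned char buf[SHA256_DIGEST_LENGTH + sizeof(uint64_t)];
            SHA256(image, image_len, buf);              /* image hash first */

            for (uint64_t nonce = 0; ; nonce++) {
                memcpy(buf + SHA256_DIGEST_LENGTH, &nonce, sizeof nonce);
                SHA256(buf, sizeof buf, sig);
                if (leading_zero_bits(sig, SHA256_DIGEST_LENGTH) >= POW_ZERO_BITS)
                    return nonce;                       /* proof of work found */
            }
        }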