• 0 Posts
  • 141 Comments
Joined 2 years ago
Cake day: August 9th, 2023

  • “Something released! What’s this?” he thinks while following the link and reading:

    OpenVox, the community-maintained open source implementation of Puppet.

    “Ah yes, Puppet, we have Puppet at home, as does everybody! I use Puppet all the time with the ladies, when they come over for Puppet and chill!”

    Be aware, of course, that even though you can type the same commands, use all the same modules and extensions, and configure the same settings, OpenVox is not yet tested to the same standard that Puppet is.

    “Of course, of course! As one should know, the Puppet and the OpenVox commands, yes…”

    Giving up on extracting any usable information from the website, he opens the GitHub link and reads:

    OpenVox is fully Puppet™️ compatible, so modules from the Forge will work

    “Can’t forget the Forge now, can we? Aaah all the fond memories I have of looking at modules coming straight hot from the Forge, amiright fellas?”




  • Deckweiss@lemmy.world to Selfhosted@lemmy.world: Multi system synced/living OS possible?

    Even when my internet doesn’t suck for a minute, I have yet to find a Linux remote desktop software that isn’t sluggish or ugly from compression artifacts, low resolution and inaccurate colors.

    I tried my usual workflows, and doing any graphic design or 3D work was impossible. But even stuff like coding or writing notes made me mistype A LOT, then backspace 3-5 times, since the visual feedback was delayed by at least half a second.


  • I run this somewhat. The question I asked myself was - do I R-E-A-L-L-Y need a clone of the root disk on two devices? And the answer was: no.


    I have a desktop and a laptop.

    Both run the same OS (with some package overlap, but not identical)

    I use syncthing and a VPS syncthing server to sync some directories from the home folder. Downloads, project files, bashrc, .local/bin scripts and everything else that I would actually really need on both machines.

    The syncthing VPS is always on, so I don’t need both computers on at the same time to sync the files. It also acts as an offsite backup this way, in case of a catastrophic destruction of both my computers.

    (The trick with syncthing is to give the same directories the same ID on each machine before syncing. Otherwise it creates a second dir like “Downloads_2”.)
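    To illustrate the trick, here is a rough sketch of what the matching folder entry could look like in Syncthing’s config.xml on each machine (the `downloads` ID, label and path are made-up examples, and real entries carry more attributes; the point is only that the `id` value must be identical on both machines, while the `path` may differ):

    ```xml
    <!-- Sketch of an entry in ~/.config/syncthing/config.xml, present on BOTH machines. -->
    <!-- The id must match across machines; the local path can differ per machine. -->
    <folder id="downloads" label="Downloads" path="/home/deckweiss/Downloads" type="sendreceive">
        <!-- device entries, versioning etc. omitted -->
    </folder>
    ```

    (In practice it’s easier to set the folder ID in the web GUI when adding the folder, rather than editing the XML by hand.)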

    That setup is easy and gets me 95% there.

    The 5% that is not synced are packages (which are sometimes only needed on one of the computers and not both) and system modifications (which I wouldn’t even want to sync, since a lot of those are hardware specific, like screen resolution and display layout).


    The downsides:

    • I have to configure some settings twice. Like the printer that is used by both computers.

    • I have to install some packages twice. Like when I find a new tool and want it on both machines.

    • I have to run updates separately on both systems. I have been thinking about also setting up a shared package cache somehow, but was ultimately too lazy to do it, so I just run the update twice.


    I find the downsides acceptable, the whole thing was a breeze to set up and it has been running like this for about a year now without any hiccups.

    And as a bonus, I also sync some important documents to my phone.





  • Deckweiss@lemmy.world to Selfhosted@lemmy.world: Non-Cloudflare AI blocking?

    The only way I can think of is blacklisting everything by default, redirecting to a proper, challenging CAPTCHA (can be self-hosted) and temporarily whitelisting proven human IPs.

    When you try to “enumerate badness” and block all AI user agents and IP ranges, you’ll always let some new ones through and you’ll never be done with adding them.

    Only allow proven humans.


    A CAPTCHA will inconvenience the users. If you just want to make it worse for the crawlers, let them spend compute resources through something like https://altcha.org/ (which would still allow them to crawl your site, but makes DDoSing very expensive) or AI honeypots.
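    To make the compute-spending idea concrete, here is a toy sketch of the proof-of-work scheme behind tools like altcha (this is not altcha’s actual protocol, just the general hashcash-style idea): the server hands out a random salt and a difficulty, the client burns CPU finding a nonce whose hash has enough leading zero bits, and the server verifies with a single hash.

    ```python
    import hashlib
    import secrets

    def make_challenge(difficulty_bits: int = 20) -> dict:
        """Server side: issue a random salt and a difficulty target."""
        return {"salt": secrets.token_hex(16), "difficulty": difficulty_bits}

    def leading_zero_bits(digest: bytes) -> int:
        """Count leading zero bits of a hash digest."""
        bits = 0
        for byte in digest:
            if byte == 0:
                bits += 8
                continue
            bits += 8 - byte.bit_length()  # leading zeros in this byte
            break
        return bits

    def solve(challenge: dict) -> int:
        """Client side: brute-force a nonce. Cheap for one human visitor,
        expensive for a crawler hammering thousands of pages."""
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{challenge['salt']}{nonce}".encode()).digest()
            if leading_zero_bits(digest) >= challenge["difficulty"]:
                return nonce
            nonce += 1

    def verify(challenge: dict, nonce: int) -> bool:
        """Server side: a single hash, so verification stays nearly free."""
        digest = hashlib.sha256(f"{challenge['salt']}{nonce}".encode()).digest()
        return leading_zero_bits(digest) >= challenge["difficulty"]
    ```

    The asymmetry is the whole point: solving costs on the order of 2^difficulty hashes, verifying costs one.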




  • I don’t think we are arguing about the same scenario at all.

    Here is an example of what I have in mind:

    1. I work as a freelancer on a customers project.
    2. In my computer I have a 128 GB NVMe drive ($15), separate from my OS drive, where I put the data the customer entrusted me with and the project files
    3. After the project, I take that NVMe drive out, put it in a box on a shelf and buy a new one ($15) for the next project
    4. Some time after project completion, I can either trash the drives, send them in bulk to some data erasure service, or leave them on my shelf forever.

    As opposed to

    1. After the project, I take NVMe-1 out, put it in a box on a shelf and buy a new one for the next project (NVMe-2)
    2. For the project after that, I again take out NVMe-2 and put it on a shelf, get NVMe-1 from my shelf, put it in, and run a secure erase for multiple hours before I can start working on the next project.

    My argument is that the cost of the first process is negligible compared to the effort and hassle of the second process, for a freelancer who earns more than 6x the cost of such a drive per hour.
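    The back-of-the-envelope math, using the figures from the comment (the two-hour erase duration is my assumption, a lower bound for “multiple hours”; the hourly rate is the stated “over 6x the drive cost”, also taken at its lower bound):

    ```python
    DRIVE_COST = 15               # USD, 128 GB NVMe per project (from the comment)
    HOURLY_RATE = 6 * DRIVE_COST  # lower bound of "over 6x the drive cost per hour"
    ERASE_HOURS = 2               # assumed lower bound for "multiple hours"

    cost_new_drive = DRIVE_COST                # process 1: just buy a fresh drive
    cost_reuse = ERASE_HOURS * HOURLY_RATE     # process 2: billable time lost to erasing

    print(cost_new_drive)  # 15
    print(cost_reuse)      # 180
    ```

    Even with these conservative numbers, reusing the drive costs an order of magnitude more in lost billable time than buying a new one.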