
  • SingleFile is a browser add-on that saves a complete web page into a single HTML file. It is a Web Extension (and a CLI tool) compatible with Chrome, Firefox (Desktop and Mobile), Microsoft Edge, Safari, Vivaldi, Brave, Waterfox, Yandex Browser, and Opera.

    SingleFile can also be integrated with the browser extensions of the bookmark managers Hoarder and Linkding. Your browser does the capture, which means you are already logged in, have dismissed the cookie banner, solved the captchas, or dealt with whatever other annoyance is on the page.

    ArchiveBox, and I believe also Linkwarden, use SingleFile (as a CLI tool on the server side) to capture web pages, alongside other tools and formats. This works well for simple, straightforward web pages, but not for annoying ones with cookie banners, captchas, and other popups.
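
    For reference, this is roughly what those server-side tools do under the hood. A minimal sketch using single-file-cli (the package name, invocation, URL, and output filename here are from memory and placeholders; check the project README for the exact usage):

    ```
    # Capture a page into one self-contained HTML file from the command line.
    # Runs a headless browser, so logins and cookie banners are NOT handled for you.
    npx single-file-cli "https://example.com/article" article.html
    ```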



  • Reading your post again, you should start by moving your docker management from CasaOS to vanilla docker-compose files, and keeping them in a git repo.

    I still think you should definitely look into NixOS and what it can offer, because it seems like that is where your mindset is going.

    But NixOS is a drastic change, so start by converting your individual services one by one from CasaOS management to docker-compose files. One compose file for all services is possible, but I would recommend one compose file per service. Later you can move from Debian to NixOS while keeping the same docker-compose files.
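
    As a sketch of what that migration looks like (the service, image, port, and paths here are just an example; adapt them to whatever CasaOS deployed for you):

    ```
    # One directory per service, each with its own compose file, all in one git repo.
    mkdir -p ~/services/linkding && cd ~/services/linkding
    cat > docker-compose.yml <<'EOF'
    services:
      linkding:
        image: sissbruecker/linkding:latest
        ports:
          - "9090:9090"
        volumes:
          - ./data:/etc/linkding/data
        restart: unless-stopped
    EOF
    docker compose up -d            # start the service from the declarative file
    git add docker-compose.yml && git commit -m "migrate linkding from CasaOS"
    ```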


  • I would like to have a system when I know what I did, what is opened/installed/activated and what is not

    You sound like you need to look into Nix and NixOS. The TL;DR is that everything is declared in configuration files, which you can and should back up in git. The config files tell you exactly what you did, and the config file comments together with the git commit history tell you why.

    The whole system is built from this configuration. Rollback is trivially easy, either by rebooting and selecting an older generation in the boot manager, or by reverting to an older git commit and rebuilding (no reboot required, so usually faster).

    Now fair warning: Nix (and NixOS) is a big topic, very different from the usual way of thinking about software distribution and operating systems. Nix is not for everyone.

    You should also at the very least have a git repo for docker-compose files for your services. Again, that will declaratively tell you what you did and why.

    Also, if NixOS is too extreme, you should at the very least look into declarative management tools such as Ansible.
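
    To make the Nix workflow above concrete, the day-to-day loop looks roughly like this (assuming your config lives in a git repo at /etc/nixos; the commit message is just an example):

    ```
    cd /etc/nixos
    $EDITOR configuration.nix             # declare packages/services here
    git commit -am "enable tailscale"     # the commit history records the why
    sudo nixos-rebuild switch             # build and activate a new generation
    sudo nixos-rebuild switch --rollback  # regret it? revert without rebooting
    ```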



  • Hard to say without knowing which method you used to install HomeAssistant.

    But I never found mDNS .local addresses to be very reliable. They work 80-90% of the time, but the remaining 10-20% are a hassle.

    Instead I’d recommend you install PiHole (a docker container is easiest). PiHole is a DNS server intended for network-level ad-blocking, but it also has a handy feature for defining local DNS entries, so you can have HomeAssistant.myhome or HomeAssistant.whatever (.local should not be used with PiHole local DNS, because .local is reserved for mDNS).
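
    For example (the hostname and IP are made up; on Pi-hole v5 local records live in /etc/pihole/custom.list, while newer versions manage them via the web UI under Local DNS):

    ```
    # Add an "IP hostname" record and reload the resolver.
    echo "192.168.1.50 homeassistant.myhome" | sudo tee -a /etc/pihole/custom.list
    pihole restartdns
    ```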


  • Some key points regarding Proxmox:

    • Even if you only want to run two services, you still want to keep them isolated. This can save you much pain and frustration in the future when they require upgrades
    • Proxmox lets you easily manage VMs and LXC containers. So you can easily manage backups, or spin up a separate test instance of a service, which, again, can save you pain and frustration when it comes to future updates of your services.
    • Backups are even better if you can deploy the separate Proxmox Backup Server
    • Should you ever want to add another service in the future, you can test it out in a new VM or container without it affecting your existing services at all
    • ZFS is indeed quite memory hungry, but AFAIK the RAM is mainly used for the read cache (ARC), and it can be tuned to use less at the cost of performance (see the sketch after this list)
    • ZFS is mentioned a lot because it’s good, but Proxmox also supports a range of other storage technologies: LVM, mdraid, ext4, Ceph
    • Proxmox is just standard Debian and KVM/QEMU virtual machines under the hood, which means you can use standard tooling and workflows should you need them for some edge case.
    • You mentioned Jellyfin in a container: my understanding is that Jellyfin in Docker has some extra limitations or complexities when it comes to hardware transcoding.
      • Jellyfin also has official documentation for how to deploy in an LXC container and get HW transcoding working (less complex than in Docker).
      • LXC containers are not like Docker containers. While a Docker container is meant to be an immutable image of a (single) application, an LXC container is more like a full-fledged VM, but without the overhead of virtualization. LXC containers are full systems, and you install software via the usual apt, dnf, etc.
      • The “correct” way to run Docker in Proxmox is to run Docker inside a virtual machine. Installing Docker inside an LXC container is also possible, with some caveats. Installing Docker directly on the Proxmox host is not recommended.
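
    As a sketch of the ARC tuning mentioned above (the 4 GiB cap is an arbitrary example; size it to your workload):

    ```
    # Cap the ZFS read cache (ARC) at 4 GiB, persistent across reboots:
    echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf
    sudo update-initramfs -u
    # Or apply immediately without a reboot:
    echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
    ```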

    For reference, my oldest Proxmox server is a 2013 AMD dual-core with 16 GB DDR2 RAM, VMs on LVM-thin on a single SSD, and a legacy VM doing mdraid across 3 HDDs via hardware passthrough. Performance is still OK; the overhead from Proxmox is negligible compared to the strain from the actual workloads.






  • Regarding DRM: Netflix (and probably others) requires the Widevine library to play back DRM content. This works perfectly fine on a normal Ubuntu PC, but does not work on the Pi because the library does not support ARM, only x86.

    So I’d just get any normal PC. Used enterprise mini PCs can be had quite cheap, and they are small, efficient, and well built. Search for HP, Dell, or Lenovo mini PCs, or “1-litre” PCs.




  • Regarding mini PCs: beware of RAM overheating!

    I bought three Minisforum HM90s for Proxmox selfhosting, installed 64 GB RAM in each (2x 32 GB DDR4-3200 sticks), and ran memtest first to ensure the RAM was good. All 3 mini PCs failed to various degrees.

    The “best” one would run for a couple of days and tens of passes before throwing a burst of errors (tens of them), then run for another few days without errors.

    Turns out the RAM overheated: 85-95 °C surface temperature. (There’s almost no space or openings for air circulation on that side of the PC.) With the lid off, 2 of the 3 computers ran memtest for a week with no errors, but one still gave the occasional error burst. RAM surface temperature with the lid off was still 80-85 °C.

    Adding a small fan creating a slight draft dropped the temperature to 55-60 °C. I then left that computer running memtest for a few weeks while I was away, then another few weeks while busy with other stuff. It has now been 6 weeks of continuous memtest, so I’m fairly confident in the integrity of the RAM, as long as it stays cool.

    It also turns out that some, but not all, RAM sticks have onboard temperature sensors, and lm-sensors can read the RAM temperature if the sticks have one. So I’m building an Arduino solution to monitor the temperature with an IR sensor and also control an extra fan.
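
    If you want to check your own sticks, a rough sketch (the jc42 driver covers the JEDEC-standard on-DIMM temperature sensors; module and output names here are from memory):

    ```
    sudo modprobe jc42   # driver for on-DIMM (TSOD) temperature sensors
    sensors              # sticks that have a sensor show up as jc42-* entries
    ```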



  • +1 for SingleFile

    I recently tried Linkwarden, Linkding, and ArchiveBox for making offline copies. They all ran into the same issue of hitting a CAPTCHA or login wall on the sites I wanted to capture.
    SingleFile to the rescue, as it uses your current browser session as a logged-in and verified human.

    Linkding allows you to upload the SingleFile HTML file and attach it to its link, but I didn’t see such an option for Linkwarden.