• 0 Posts
  • 96 Comments
Joined 2 years ago
Cake day: July 14th, 2023

  • I think the best way to handle this would be to just encode everything and upload all files. If I wanted some amount of history, I’d use some file system with automatic snapshots, like ZFS.

    If I wanted to do what you’ve outlined, I would probably use rclone with filtering for the extension types or something along those lines.

    If I wanted to do this with Git specifically, though, this is what I would try first:

    First, add lossless extensions (*.flac, *.wav) to my repo’s .gitignore

    Second, schedule a job on my local machine (roughly sketched below) that:

    1. Watches for changes to the local file system (e.g., with inotifywait or fswatch)
    2. For any new lossless file, checks whether there's already an accompanying lossy file - i.e., one colocated with the exact same filename, sans extension, bearing an accepted extension (e.g., .mp3, .ogg), possibly also confirming the codec is up to my standards with a call to ffprobe, avprobe, mediainfo, exiftool, or something similar. If there isn't, it encodes the file to my preferred lossy format.
    3. Uses git status --porcelain to check if there have been any changes.
    4. If so, runs git add --all && git commit --message "Automatic commit" && git push
    5. Optionally, crafts a better commit message by checking which files have changed, generating text like Added album: "Satin Panthers - EP" by Hudson Mohawke or Removed album: "Brat" by Charli XCX; Added album "Brat and it's the same but there's three more songs so it's not" by Charli XCX
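
    A rough sketch of steps 1-4 in bash (untested; inotifywait comes from inotify-tools, and the library path, lossy format, and encoder settings are all assumptions - adjust to taste):

        #!/usr/bin/env bash
        # Watch the library, encode new lossless files, and auto-commit.
        LIBRARY="$HOME/music"   # assumed library location

        inotifywait --monitor --recursive --event close_write --format '%w%f' "$LIBRARY" |
        while read -r file; do
          case "$file" in
            *.flac|*.wav)
              lossy="${file%.*}.ogg"
              # Only encode if there's no lossy sibling yet
              if [ ! -f "$lossy" ]; then
                ffmpeg -i "$file" -codec:a libvorbis -qscale:a 6 "$lossy"
              fi
              ;;
            *) continue ;;
          esac
          # Commit and push only if something actually changed
          if [ -n "$(git -C "$LIBRARY" status --porcelain)" ]; then
            git -C "$LIBRARY" add --all &&
            git -C "$LIBRARY" commit --message "Automatic commit" &&
            git -C "$LIBRARY" push
          fi
        done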

    Third, schedule a job on my remote server that runs git pull at regular intervals.
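
    For example, a crontab entry like this (path illustrative) pulls every 15 minutes:

        */15 * * * * git -C /srv/music pull --ff-only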

    One issue with this approach is that if you delete a file (as opposed to moving it), the space is not recovered on your local or your server. If space on your server is a concern, you could work around that by running something like the answer here (adjusting the depth to an appropriate amount for your use case):

    git fetch --depth=1                               # shallow-fetch so only the latest commit is kept
    git reflog expire --expire-unreachable=now --all  # expire reflog entries that pin old history
    git gc --aggressive --prune=all                   # garbage-collect the now-unreachable objects
    

    Another potential issue is that what I described above involves having an intermediary Git repository to push to and pull from, e.g., one running on a hosted Git forge, like GitHub, Codeberg, etc… This could result in copyright complaints or something along those lines, though.

    Alternatively, you could use your server as the Git server (or check out Forgejo if you want a Git forge as well), but then you can't use the above trick to prune file history and save space from deleted files (on the server, at least - you could on your local machine, I think). If you then check out your working copy in a way such that Git can use hard links, you should at least be able to avoid needing to store two copies on your server.
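
    For instance (paths illustrative), a clone from a local path hard-links the object database by default, so the objects aren't stored twice:

        git init --bare /srv/music.git        # the repo your local machine pushes to
        git clone /srv/music.git /srv/music   # server-side working copy; objects are hard-linked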

    The other thing to check out, if you take this approach, is Git LFS. EDIT: Actually, I take that back - you probably don't want to use Git LFS.




  • You can run a NAS with any Linux distro - your limiting factor is having enough drive storage. If you don't like Docker, you might want to consider something built around virtual machines (e.g., Proxmox), but I have almost everything I want running in Docker and haven't needed to spin up a single virtual machine.


  • Wow, there isn’t a single solution in here with the obvious answer?

    You'll need a domain name. It doesn't need to be paid - you can use DuckDNS. Note that whoever hosts your DNS needs to support dynamic DNS updates. I use Cloudflare for this for free (just their DNS, not their other services), even though I bought my domains from Namecheap.
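
    If you go the DuckDNS route, keeping your IP current can be as simple as a cron entry hitting their update URL (subdomain and token are placeholders):

        */5 * * * * curl -fsS "https://www.duckdns.org/update?domains=YOURSUBDOMAIN&token=YOURTOKEN&ip=" >/dev/null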

    Then, you can either set up Let's Encrypt on the device and have it generate certs in a location Jellyfin knows about (not sure what this entails exactly, as I don't use this approach), or you can do what I do:

    1. Set up a reverse proxy - I use Traefik but there are a few other solid options - and configure it to use Let’s Encrypt and your domain name.
    2. Your reverse proxy should have ports 443 and 80 exposed, and should redirect HTTP requests to HTTPS.
    3. Add Jellyfin as a service and route in your reverse proxy’s config.

    On your router, forward port 443 to the secure port on your Pi (which for simplicity's sake should also be port 443). You likely also need to forward port 80 so Let's Encrypt can complete its HTTP verification challenge.

    If you want to use Jellyfin while on your network and your router doesn't support NAT loopback, you can use the server's IP address directly and expose Jellyfin's HTTP port (8096 by default) - just make sure not to forward that port on the router. Transfers on your LAN will be unencrypted if you do this, though.

    Make sure you have secure passwords in Jellyfin. Note that you're exposed if a vulnerability is found in Jellyfin or Traefik, so make sure to keep your software updated.

    If you use Docker, I can share some config info with you on how to set this all up - Traefik, Jellyfin, and a dynamic DNS updater - as docker-compose services.
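
    As a starting point, a minimal docker-compose sketch might look something like this (assuming Traefik v3 with the HTTP-01 challenge; the domain, email, and paths are placeholders):

        services:
          traefik:
            image: traefik:v3.0
            command:
              - --providers.docker=true
              - --providers.docker.exposedbydefault=false
              - --entrypoints.web.address=:80
              - --entrypoints.websecure.address=:443
              # redirect all HTTP to HTTPS
              - --entrypoints.web.http.redirections.entrypoint.to=websecure
              - --entrypoints.web.http.redirections.entrypoint.scheme=https
              - --certificatesresolvers.le.acme.email=you@example.com
              - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
              - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
            ports:
              - "80:80"
              - "443:443"
            volumes:
              - ./letsencrypt:/letsencrypt
              - /var/run/docker.sock:/var/run/docker.sock:ro

          jellyfin:
            image: jellyfin/jellyfin
            volumes:
              - ./jellyfin/config:/config
              - ./media:/media:ro
            labels:
              - traefik.enable=true
              - traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)
              - traefik.http.routers.jellyfin.entrypoints=websecure
              - traefik.http.routers.jellyfin.tls.certresolver=le
              - traefik.http.services.jellyfin.loadbalancer.server.port=8096

    Point your domain at your public IP, forward ports 80 and 443 as described above, and Traefik should request the certificate automatically on first use.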


  • Look up "LLM quantization." The idea is that each parameter is a number; by default each uses 16 bits of precision, but if you scale them down to smaller sizes, you use less space and have less precision, though you still have the same number of parameters. There's not much quality loss going from 16 bits to 8, but it gets more noticeable as you go lower and lower. (That said, there are ternary models being trained from scratch that use 1.58 bits per parameter and are allegedly just as good as fp16 models of the same parameter count.)

    If you're using a 4-bit quantization, then you need roughly half the parameter count (in billions) as GB of VRAM, plus some overhead. Q4_K_M is better than Q4, but also a bit larger. Ollama generally defaults to Q4_K_M. If you can handle a higher quantization, Q6_K is generally best. If you can't quite fit it, Q5_K_M is generally better than any other option, followed by Q5_K_S.
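
    Using the 70B model from the example below, the weights-only math works out like this (KV cache and runtime buffers come on top of it):

        # params (in billions) x bits per param / 8 = GB of weights
        echo "70.6 * 16 / 8" | bc -l   # fp16  -> 141.2 GB
        echo "70.6 * 4 / 8"  | bc -l   # 4-bit -> 35.3 GB (K-quants store extra scales, so real files run larger)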

    For example, Llama3.3 70B, which has 70.6 billion parameters, has the following sizes for some of its quantizations:

    • q4_K_M (the default): 43 GB
    • fp16: 141 GB
    • q8: 75 GB
    • q6_K: 58 GB
    • q5_K_M: 50 GB
    • q4: 40 GB
    • q3_K_M: 34 GB
    • q2_K: 26 GB

    This is why I run a lot of Q4_K_M 70B models on two 3090s (48 GB of VRAM total).

    Generally speaking, there's no perceptible quality drop going from 8-bit quantization to Q6_K (though I've heard this is less true with MoE models). Below Q6 there's a bit of a drop at 5-bit, and again at 4-bit, but the models are still decent. Below 4-bit quantizations, you can generally get better results from a smaller parameter model at a higher quantization.

    TheBloke on Hugging Face has a lot of GGUF quantization repos, and most, if not all, of them have a blurb about the different quantization types and which are recommended. When Ollama.com doesn't have a model I want, I'm generally able to find one there.


  • I recommend a used 3090, as it has 24 GB of VRAM and can generally be found for $800ish or less (at least when I last checked, in February). It's much cheaper than a 4090, and while it's admittedly more expensive than the inexpensive 24 GB Nvidia Tesla card (the P40?), it also has much better performance and CUDA support.

    I have dual 3090s so my performance won’t translate directly to what a single GPU would get, but it’s pretty easy to find stats on 3090 performance.








  • From the feature comparison at https://github.com/meichthys/foss_note_apps, only two FOSS apps support handwriting: Joplin (with a plugin), which gets a subjective 6/10 score, and TriliumNext, which gets a subjective 2/10 score. I personally dislike Joplin, but many people love it, so I recommend giving it a shot. EDIT: I installed Joplin using the APK from the site, and both the handwriting and Excalidraw plugins were "not available on mobile," so I have to rescind my recommendation. On my iOS device, the plugins didn't even show up in the search.

    I think TriliumNext is great, but the mobile experience is still lacking (though they're tracking several issues to improve it). There's no dedicated mobile app, but they at least have a PWA. It also needs to be self-hosted, but doing so is straightforward if you're already using Docker. Handwriting is done via a built-in Excalidraw integration.
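
    If you go that route, the Docker setup is roughly this sketch (the image name, port, and data path are from memory, so verify them against the TriliumNext README):

        services:
          trilium:
            image: triliumnext/notes:latest        # assumed image name - check the project docs
            ports:
              - "8080:8080"
            volumes:
              - ./trilium-data:/home/node/trilium-data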

    Here are some options not captured in that list:

    Obsidian is not open source, but it also has an Excalidraw plugin, and I've seen multiple discussions saying that it's very well done and adds functionality on top of base Excalidraw. There's also an open source (MIT) plugin for Obsidian that adds support for handwritten notes. I only use Obsidian on my work computer and haven't tried either plugin yet, though I plan to install the Excalidraw plugin Monday.

    Stylus Labs Write is FOSS (AGPL 3.0), multiplatform, and has a free Android APK available. Note that updates for the Google Play version have been suspended. I just learned about it and don't know how it otherwise measures up, but I'm planning to check it out.

    You can use any note app that has Excalidraw support, so long as you don’t need your handwritten text to be OCRed. That means that the following are all options:



  • I primarily use Standard Notes. It's a fantastic tool and I can use it anywhere, online or offline. It's not great for collaboration, though, and it doesn't have a canvas option. But I use it for scratch pads, todo lists, project tracking, ideas, plans, plotting for my tabletop (Monster of the Week) game, software design and architecture, drafting comments, etc…

    Standard Notes also has a ton of options for automated backups: I get a daily email with a backup of my notes; I can host my notes on my home server or on the corporate one; and I can set up automated backups on any desktop.

    I don’t use it for saving links. I’m still using Raindrop.io for that, even though I’m self-hosting both Linkding and Linkwarden.

    For sharing and collaboration, I either publish to Listed with Standard Notes or use Hedgedoc, which is great for collaboration and does a great job presenting notes, too.

    For canvas notes, I use GoodNotes on a tablet or the Onyx Boox’s default Notes app. I’d love a better FOSS, self-hosted option, especially for the Boox, but my experiences thus far have been negative (especially on the Boox).

    I’ve been trying out SilverBullet lately, since I want to try out cross-note querying and all that, but I’m too stuck in my habits and keep going back to Standard Notes. I think I’ll have better luck if I choose one app and go with it.

    I also have a collection of Mnemosyne notebooks that I use with fountain pens (mostly the Lamy 2000, but also quite commonly a Platinum 3776 or a TWSBI). Side note: the Lamy 2000 was my first fountain pen, and after getting it I went deep into fountain pens. I explored a ton of different options and found a lot of nice pens across a number of brands… and yet I still haven't found something that I consistently like more. The Pilot VP is great but deceptive; a fancy clicky pen that only holds 30 minutes of ink (in a converter, at least) is decidedly inconvenient.

    I’ve also been checking out Obsidian on my work computer. So far I haven’t seen anything that makes me prefer it over my existing set of tools.


  • Hedgedoc is fantastic. If you’re okay with your notes app being web-only (without an app or even a PWA) and you don’t need canvas notes or multi-note queries, you should check it out.

    First, every note is Markdown, but it supports a ton of things natively. It has Vim, Emacs, and Sublime (the default) keybindings, and it's built to be great for collaboration (if you want that).

    It also has

    • syntax highlighting for a ton of languages
    • Mermaid.js support
    • LaTeX support
    • easy drag and drop image uploads
    • a solid mobile interface (for a webapp in your browser, at least)
    • built-in revision history
    • support for other diagram tools, like Graphviz and flowchart.js
    • a bunch of other little Markdown enhancements that make using it feel oddly intuitive
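
    For a taste, here's what a note mixing a few of these might look like (illustrative; Hedgedoc renders the math inline and the Mermaid block as a diagram):

        # Project scratchpad
        Euler's identity, $e^{i\pi} + 1 = 0$, renders via the LaTeX support.

        ```mermaid
        graph LR
          idea --> draft --> published
        ```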

    And best of all, they have a Hedgehog for the icon! (I may be biased.)


  • If a communication norm is just about other people's preferences, why should they change? Who's to say that other people's preferences are more important than their own, particularly given that this preference is shared by millions of other people?

    If inconsistent use of capitalization actually hinders understanding for some subset of their audience, then that's a different story. My experience is that people are more likely to be annoyed than to actually have trouble understanding all-lowercase text. All-caps text, on the other hand, is a different matter - and plenty of government and corporate entities are fine putting important text in all caps, even though all-caps text is a known accessibility issue. When I search for "all lowercase accessibility," though, all I get is a bunch of results saying not to use all-caps text, for accessibility reasons.

    If you have sources showing that all-lowercase text is an accessibility concern, then you should share them. Heck, you should have led with that. But as it is, your argument ultimately boils down to "someone else should change something that works for them because it annoys me."