

Yep, that’s the idea! This post basically boils down to “does this exist for HASS already, or do I need to implement it?” and the answer, unfortunately, seems to be the latter.
Thanks, had not heard of this before! From skimming the link, it seems that the integration with HASS mostly focuses on providing Wyoming endpoints (STT, TTS, wake word), right? (Un)fortunately, that’s the part that’s already working really well 😄
However, the idea of just writing a stand-alone application with Ollama-compatible endpoints, but not actually putting an LLM behind it is genius, I had not thought about that. That could really simplify stuff if I decide to write a custom intent handler. So, yeah, thanks for the link!!
Thanks for your input! The problem with the LLM approach for me is mostly that I have so many entities that exposing them all (or even just the subset I really, really want) already makes the prompt big enough to slow everything to a crawl, and to get bad results from every model I’ve tried. I’ll give the model you mentioned another shot, though.
However, I really don’t want to use an LLM for this. It seems brittle and like overkill at the same time. As you said, intent classification is a wee bit older than LLMs.
Unfortunately, the sentence template matching approach alone isn’t sufficient, because quite frequently, the STT is imperfect. With HomeAssistant, currently the intent “turn off all lights” is, for example, not understood if STT produces “turn off all light”. And sure, you can extend the template for that. But what about
A human would go “huh? oh, sure, I’ll turn off all lights”. An LLM might as well. But a fuzzy matching / closest Levenshtein distance approach should be more than sufficient for this, too.
Basically, I generally like the sentence template approach used by HASS, but it just needs that little bit of additional robustness against imperfections.
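Just to sketch what I mean (this is not something HASS does today; the sentences, intent names, and cutoff below are made up for illustration), even the standard library’s difflib is enough to rescue the “all light” case:

```python
# Minimal sketch of a fuzzy fallback for intent matching.
# The sentences and intent names are made up; a real setup would
# generate them from the HASS sentence templates instead.
from difflib import SequenceMatcher

INTENT_SENTENCES = {
    "turn off all lights": "lights_off_all",
    "turn on all lights": "lights_on_all",
    "what time is it": "get_time",
}

def match_intent(transcript: str, cutoff: float = 0.8) -> str | None:
    """Return the intent whose template sentence is closest to the STT output."""
    best_intent, best_score = None, 0.0
    for sentence, intent in INTENT_SENTENCES.items():
        score = SequenceMatcher(None, transcript.lower(), sentence).ratio()
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent if best_score >= cutoff else None

print(match_intent("turn off all light"))   # imperfect STT -> "lights_off_all"
print(match_intent("make me a sandwich"))   # nothing close -> None
```

In a real setup I’d probably reach for something like rapidfuzz and weight entity names separately, but the principle stays the same: pick the closest template, and only give up if nothing is remotely close.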
Thanks for sharing your experience! I have actually mostly been testing with a good desk mic, and expect recognition to get worse with room mics… The hardware I bought is a set of Seeed ReSpeaker mic arrays; I am somewhat hopeful about them.
Adding a lot of alternative sentences does indeed help, at least to a certain degree. However, my issue is less with “it should recognize various different commands for the same action”, and more “if I mumble, misspeak, or add a swear word on my third attempt, it should still just pick the most likely intent”, and that’s what’s currently missing from the ecosystem, as far as I can tell.
Though I must concede, copying your strategy might be a viable stop-gap solution to get rid of Alexa. I’ll have to play around with it a bit more.
That all said, if you find a better intent matcher or another solution, please do report back, as I am very interested in an easier solution that does not require me to think of all possible sentences ahead of time.
Roger.
Never heard of Willow before - is it this one? It seems there is still recent activity in the repo - did the creator only recently pass away? Or did someone continue the project?
How’s your experience been with it?
And sure, will do!
Lol, exact same situation here.
Quick question, did the migration to continuwuity break calls for you as well?
Because a commit should be an “indivisible” unit, in the sense that “should this be a separate commit?” equates to “would I ever want to revert just these changes?”.
IDK about your commit histories, but if I left everything in there, there’d be a ton of fixup commits just fixing spelling, satisfying the linter, …
Also, changes requested by reviewers: those fixups almost always belong to the same commit; it makes no sense for them to be separate.
And finally, I guess you do technically give up some granularity, but you gain an immense amount of readability of your commit history.
Same. And even if you were to fuck up, have people never heard of the reflog…?
At every job I’ve worked, it’s been the expectation to regularly rebase your feature branch on main, to squash your commits (and then force push, obv), and for most projects to do rebase-merges of PRs rather than creating merge commits. Even the, uh, less gifted developers never had an issue with this.
I think people just hear the meme about git being hard somewhere and then use that as an excuse to never learn.
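And honestly, the “scary” part of that workflow boils down to a couple of commands (remote and branch names assumed here):

```
git fetch origin
git rebase -i origin/main       # pick / squash / fixup commits as needed
git push --force-with-lease     # safer than a plain force push
# and if you do fuck up: `git reflog` shows the old tip to reset back to
```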
Lmao I kept thinking you forgot to put quotes and was waiting for the inevitable “…this is what too many idiots think, even though it is obvious bullshit”, and yet it just…never came. Amazing. This might be the single most stupid comment I’ve ever read, and I’ve been on the internet for a while.
TBH, it sounds like you have nothing to worry about then! Open ports aren’t really an issue in and of themselves; they are problematic because the software listening on them might be vulnerable, and the (standard) ports can reveal the nature of the application, making it easier to target specific software with an exploit.
Since a bot has no way of finding out what services you are running, it could only attack caddy - which I’d put down as a negligible danger.
My ISP blocks incoming data to common ports unless you get a business account.
Oof, sorry, that sucks. I think you could still go the route I described though: For your domain example.com and example service myservice, listen on port :12345 and drop everything that isn’t requesting myservice.example.com:12345. Then forward the matching requests to your service’s actual port, e.g. 23456, which is closed to the internet.

Edit: and just to clarify, for service otherservice, you do not need to open a second port; stick with the one, but in addition to myservice.example.com:12345, also accept requests for otherservice.example.com:12345, but proxy that to the (again, closed-to-the-internet) port :34567.
The advantage here is that bots cannot guess from your ports what software you are running, and since caddy (or any of the mature reverse proxies) can be expected to be reasonably secure, I would not worry about bots being able to exploit the reverse proxy’s port. Bots also no longer have a direct line of communication to your services. In short, the routine of “let’s scan ports; ah, port x is open indicating use of service y; try automated exploit z” gets prevented.
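For illustration, here is roughly what that could look like as a Caddyfile (sketch only: the domain, service names, and ports are the made-up ones from my example above, and the internal ports are assumed to be reachable only from the machine running caddy):

```
# Sketch, not a drop-in config. With 80/443 blocked by the ISP, automatic TLS
# would additionally need the DNS challenge instead of the default
# HTTP/TLS-ALPN challenges (not shown here).

myservice.example.com:12345 {
    reverse_proxy localhost:23456
}

otherservice.example.com:12345 {
    reverse_proxy localhost:34567
}
```

Requests hitting :12345 for any other hostname simply don’t match a site block and never reach your services.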
I am scratching my head here: why open up ports at all? Is it just to avoid having to pay for a domain? The usual way to go about this is to only proxy 443 traffic to the intended host/VM/port based on the (sub)domain, and just drop everything else, including requests on 443 that do not match your subdomains.
Granted, there are some services actually requiring open ports, but the majority don’t (and you mention a webserver, where we’re definitely back to: why open anything beyond 443?).
Client side, under advanced:
That’s a setting
InfCloud. Works well with Radicale, and does contacts, too.
It’s not pretty, but works very well for the 5/100 times I want to check through a browser instead of Calendar app / Thunderbird.
Yes. Using simple-nixos-mailserver as the foundation.
Really great experience, and have had no deliverability issues.
No problem. If you do decide to give NixOS a try, feel free to ask about anything should things be unclear :)
Yeah… I heard that too, about half a year after I got really into nix.
To be honest, I try to keep away from community drama as much as possible, so I am not entirely up to date here. I think (and I might be wrong, if someone reading this knows better, correct me!) there’s three main points of contention:
My position on all three points is this: they are not great; but a) they do not threaten the ecosystem, which is mature, independent of this drama, and not reliant on one or a couple of central, potentially problematic people; and b) there are community projects that actively and effectively distance themselves from all of these points (namely: Lix) and which are drop-in replacements for the core nix language and compiler. That means that if the upstream project ever did something to really piss you off, you could move to something independent of it with very little work.
I hope this will not become necessary, because Nix is genuinely magic. Once you get the hang of it, nothing on your computer is particularly difficult anymore. You also get best-in-class package management (and it’s easy! Once you have configured your own system to your liking, you already know everything you need to package your own software and contribute to nixpkgs!), and you get to be “bleeding edge” yet at the same time incredibly stable (seriously, I have switched all of my servers and VMs to Nix and I have not had a single incident, including after updating machines I had forgotten about for 1.5+ years).
Anyways. Sorry for the wall of text lol.
As someone else has said: NixOS. You said in a comment that you use Arch because of the AUR. Good news, nixpkgs is larger and fresher than the AUR, without needing to tap into any kind of third-party/unofficial repo.
The unstable branch is essentially a rolling release (and very stable despite its name). I am happily gaming on it with Steam. During installation, you can just choose to not install a desktop. (However, due to how nix works, it’s trivial to rip out the entire DE at any point, should you so choose.)
But it is a learning curve for sure. Steep, but not very long.
Please read the title of the post again. I do not want to use an LLM. Self-hosted is bad enough, but feeding my data to OpenAI is worse.