

Yep, that’s the idea! This post basically boils down to “does this exist for HASS already, or do I need to implement it?” and the answer, unfortunately, seems to be the latter.
Thanks, I had not heard of this before! From skimming the link, it seems that the integration with HASS mostly focuses on providing Wyoming endpoints (STT, TTS, wake word), right? (Un)fortunately, that’s the part that’s already working really well 😄
However, the idea of just writing a stand-alone application with Ollama-compatible endpoints, but without actually putting an LLM behind it, is genius; I had not thought of that. That could really simplify stuff if I decide to write a custom intent handler. So, yeah, thanks for the link!!
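To sketch what I mean (rough and untested; Flask, the port, and the response shape are my assumptions from memory of the Ollama API, and I’d still need to check what the HASS Ollama integration actually expects beyond /api/chat):

```python
# Fake "Ollama" server: answers Ollama-style /api/chat requests, but runs a
# plain intent matcher instead of an LLM. Untested sketch, not a drop-in.
from datetime import datetime, timezone

from flask import Flask, jsonify, request

app = Flask(__name__)


def match_intent(text: str) -> str:
    """Placeholder for the real matcher (sentence templates + fuzzy matching)."""
    if "light" in text.lower():
        return "Turning off all lights."
    return "Sorry, I didn't get that."


@app.post("/api/chat")
def chat():
    body = request.get_json(force=True)
    user_text = body["messages"][-1]["content"]
    # Non-streaming Ollama chat response shape (from memory; verify which fields
    # the integration actually reads, and whether /api/tags is also required).
    return jsonify({
        "model": body.get("model", "intent-matcher"),
        "created_at": datetime.now(timezone.utc).isoformat(),
        "message": {"role": "assistant", "content": match_intent(user_text)},
        "done": True,
    })


if __name__ == "__main__":
    app.run(port=11434)  # Ollama's default port
```

If that shape is close enough, HASS could presumably just be pointed at it like any other Ollama instance.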
Thanks for your input! The problem with the LLM approach for me is mostly that I have so many entities that exposing them all (or even just the subset I really, really want) already makes the prompt big enough to slow everything to a crawl and to produce bad results with every model I’ve tried. I’ll give the model you mentioned another shot, though.
However, I really don’t want to use an LLM for this. It seems brittle and like overkill at the same time. As you said, intent classification is a wee bit older than LLMs.
Unfortunately, the sentence template matching approach alone isn’t sufficient, because the STT output is quite frequently imperfect. With Home Assistant, for example, the intent “turn off all lights” is currently not understood if the STT produces “turn off all light”. And sure, you can extend the template to cover that. But what about transcriptions that come out even more mangled than that?
A human would go “huh? oh, sure, I’ll turn off all lights”. An LLM might as well. But a fuzzy matching / closest-Levenshtein-distance approach should be more than sufficient for this, too.
Basically, I generally like the sentence template approach used by HASS, but it just needs that little bit of additional robustness against imperfections.
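Something like this toy example is roughly what I’m picturing (rapidfuzz is Levenshtein-based under the hood; the sentence list, intent names, and threshold here are all made up, and in practice the sentences would come from the expanded HASS templates):

```python
# Toy fuzzy-matching fallback: map an imperfect STT transcription to the
# closest known sentence, or give up if nothing is close enough.
from rapidfuzz import fuzz, process

# In reality this would be generated from the expanded HASS sentence templates.
KNOWN_SENTENCES = {
    "turn off all lights": "HassTurnOff: all lights",
    "turn on all lights": "HassTurnOn: all lights",
    "turn off the kitchen light": "HassTurnOff: kitchen light",
}


def fuzzy_intent(stt_text: str, score_cutoff: int = 80):
    """Return the intent of the closest known sentence, or None if nothing matches well."""
    hit = process.extractOne(
        stt_text.lower(),
        list(KNOWN_SENTENCES),
        scorer=fuzz.ratio,
        score_cutoff=score_cutoff,
    )
    if hit is None:
        return None
    sentence, _score, _index = hit
    return KNOWN_SENTENCES[sentence]


print(fuzzy_intent("turn off all light"))          # close enough -> all lights off
print(fuzzy_intent("uh turn off all the lights"))  # probably still close enough
print(fuzzy_intent("what's the weather"))          # -> None, don't guess wildly
```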
Thanks for sharing your experience! I have actually mostly been testing with a good desk mic, and I expect recognition to get worse with room mics… The hardware I bought is a set of Seeed ReSpeaker mic arrays; I am somewhat hopeful about them.
Adding a lot of alternative sentences does indeed help, at least to a certain degree. However, my issue is less with “it should recognize various different commands for the same action”, and more with “if I mumble, misspeak, or add a swear word on my third attempt, it should still just pick the most likely intent”, and that’s what’s currently missing from the ecosystem, as far as I can tell.
Though I must concede, copying your strategy might be a viable stop-gap solution to get rid of Alexa. I’ll have to play around with it a bit more.
That all said, if you find a better intent matcher or another solution, please do report back, as I am very interested in an easier solution that does not require me to think of all possible sentences ahead of time.
Roger.
Never heard of Willow before - is it this one? Seems there is still recent activity in the repo - did the creator only recently pass away? Or did someone continue the project?
How’s your experience been with it?
And sure, will do!
That is actually a really interesting approach to moderation, huh.
Amazing. She’s a great role model.
Ah! Finally! Something where I can look up at the sky and go:
without people looking at me like I’m a weirdo, as if that wasn’t what everyone does when faced with difficult questions.
Yeah. Back left is the only burner in the right size for my pasta pot. Back right is a copy of front left and thus uniquely useless.
…uses root-level anti-cheat. Which means an American-Saudi company has a rootkit on your machine. I’m not sure how desirable that is.
Disagree. CSS lets you do pretty much whatever you want, usually in just a handful of lines. The “it’s so difficult to center things!” meme is, well, a meme.
Ironic. Any AI that would be worthy of that name would also be capable of understanding the context of the “AI-negativity”, and thus clearly would not “hyperstition itself into existence”.
Actually… from a data-loss POV, it’s pretty much fine; since the server only serves an e2ee file anyway, each end device’s data is sufficient to recover everything.
I.e. if you host Vaultwarden, log into it on your mobile device, save all your logins, and then fuck up the server, it doesn’t matter, because your mobile device not only still has everything, but also does not need a server connection to export it all in a way that can then be imported again on a fresh server installation.
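For what it’s worth, that export/import round trip can also be scripted with the official Bitwarden CLI (bw), which talks to Vaultwarden just as well; rough sketch with flags from memory, so double-check against `bw export --help` / `bw import --help`:

```python
# Rough sketch: back up the vault from any logged-in client and restore it
# into a fresh server. Requires the `bw` CLI to be installed and unlocked.
import subprocess

# On a still-working client: dump the vault to an unencrypted JSON file.
subprocess.run(
    ["bw", "export", "--format", "json", "--output", "vault-backup.json"],
    check=True,
)

# Later, with the CLI pointed at the freshly reinstalled server:
subprocess.run(
    ["bw", "import", "bitwardenjson", "vault-backup.json"],
    check=True,
)
```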
Yeah, but why would I make myself dependent on Cloudflare?
To be fair, you can simply selfhost MinIO.
Wait, not the other way round? One tale per 70-min episode?
(With the priest’s tale getting an initial 1.5hrs opening episode? 👀)
And what is the advantage of that?
Also, I am pretty sure I have at least some secrets in my shell history.
Hah… Fair 😄 Hope you’ll get the chance!
Please read the title of the post again. I do not want to use an LLM. A self-hosted one is bad enough, but feeding my data to OpenAI is worse.