I found this project, which is just a couple of small Python scripts gluing various tools together: https://github.com/vndee/local-talking-llm
It’s pretty basic, but I couldn’t find anything more polished. I did a little “vibe coding” to use a faster Chatterbox fork, stream the output back so I don’t have to wait for the entire LLM response before it starts “talking” (roughly the idea sketched below), start recording on voice detection instead of the enter key, and allow interrupting the agent. But, like most vibe-coded stuff, it’s buggy. I was curious whether something better already exists before I commit to actually fixing the problems and pushing a fork.
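For anyone wondering what I mean by streaming the output: the gist is to flush complete sentences to the TTS as they arrive instead of waiting for the full reply. A minimal sketch, assuming a local OpenAI-compatible endpoint (e.g. a llama.cpp server or Ollama) at a made-up URL/model name, and a hypothetical `speak(text)` wrapping whatever TTS you use:

```python
import re
from openai import OpenAI

# Assumed local endpoint and model name; adjust to your setup.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

def stream_and_speak(prompt: str, speak) -> str:
    """Hand LLM output to TTS sentence by sentence instead of waiting for the whole reply."""
    buffer, full_reply = "", ""
    stream = client.chat.completions.create(
        model="local-model",  # whatever name your server exposes
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content or ""
        buffer += delta
        full_reply += delta
        # Flush each complete sentence to the TTS as soon as it appears.
        while (m := re.search(r"[.!?]\s", buffer)):
            sentence, buffer = buffer[: m.end()], buffer[m.end():]
            speak(sentence.strip())
    if buffer.strip():
        speak(buffer.strip())
    return full_reply
```

The interruption part is mostly just cutting the TTS playback and discarding the rest of the stream when the VAD fires again; that’s where most of my bugs live.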
There aren’t many models that support any-to-any. Currently the best seems to be Qwen3-Omni, but the audio quality isn’t great and it isn’t supported by llama.cpp: https://github.com/ggml-org/llama.cpp/issues/16186
Thanks! If anyone has more (good) alternatives or something like a curated list, I’d have a look at that as well… it’s always a bit complicated to stay up to date and work through the myriad of options myself…