Ask HN: What's the current best local/open speech-to-speech setup?

Posted by dsrtslnd23 1 day ago


I’m trying to do the “voice assistant” thing fully locally: mic → model → speaker, low latency, ideally streaming + interruptible (barge-in).

Qwen3 Omni looks perfect on paper (“real-time”, speech-to-speech, etc). But I’ve been poking around and I can’t find a single reproducible “here’s how I got the open weights doing real speech-to-speech locally” writeup. Lots of “speech in → text out” or “audio out after the model finishes”, but not a usable realtime voice loop. Feels like either (a) the tooling isn’t there yet, or (b) I’m missing the secret sauce.

What are people actually using in 2026 if they want open + local voice?

Is anyone doing true end-to-end speech models locally (streaming audio out), or is the SOTA still “streaming ASR + LLM + streaming TTS” glued together?

If you did get Qwen3 Omni speech-to-speech working: what stack (transformers / vLLM-omni / something else), what hardware, and is it actually realtime?

What’s the most “works today” combo on a single GPU?

Bonus: rough numbers people see for mic → first audio back

Would love pointers to repos, configs, or “this is the one that finally worked for me” war stories.

Comments

Comment by d4rkp4ttern 4 hours ago

This is not strictly speech-to-speech, but I quite like it when working with Claude Code or other CLI Agents:

STT: Handy [1] (open-source), with Parakeet V3 - stunningly fast, near-instant transcription. The slight accuracy drop relative to bigger models is immaterial when you're talking to an AI. I always ask it to restate back to me what it understood, and it gives back a nicely structured version -- this helps confirm understanding as well as likely helps the CLI agent stay on track.

TTS: Pocket-TTS [2], just 100M params, with amazing speech quality (English only). I made a voice plugin [3] based on this for Claude Code, so it can speak out short updates whenever CC stops. It uses a non-blocking stop hook that calls a headless agent to create a 1-2 sentence summary. Turns out to be surprisingly useful. It's also fun, as you can customize the speaking style, mirror your vibe, etc.

The voice plugin gives commands to control it:

    /voice:speak stop
    /voice:speak azelma (change the voice)
    /voice:speak <your arbitrary prompt to control the style or other aspects>
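
If you're curious what the stop-hook glue roughly looks like, here's a minimal sketch (not the plugin's actual code; the hook-payload field name and the pocket-tts command are assumptions, so check the repo for the real interface):

    #!/usr/bin/env python3
    # Sketch of a non-blocking Claude Code stop hook: ask a headless `claude -p`
    # run for a 1-2 sentence summary, then hand it to a local TTS in the background.
    # The "transcript_path" field and the `pocket-tts` CLI are assumptions.
    import json, subprocess, sys

    payload = json.load(sys.stdin)                   # Claude Code passes hook context as JSON on stdin
    transcript = payload.get("transcript_path", "")  # assumed field name

    summary = subprocess.run(
        ["claude", "-p",
         f"Read {transcript} and give a 1-2 sentence spoken-style status update."],
        capture_output=True, text=True,
    ).stdout.strip()

    # Popen (not run) so the hook returns immediately and doesn't block Claude Code.
    subprocess.Popen(["pocket-tts", "say", summary])  # hypothetical command; use whatever TTS you have
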
[1] Handy https://github.com/cjpais/Handy

[2] Pocket-TTS https://github.com/kyutai-labs/pocket-tts

[3] Voice plugin for Claude Code: https://github.com/pchalasani/claude-code-tools?tab=readme-o...

Comment by skrebbel 25 minutes ago

Wow Handy works impressively well! Excellent UX too (on Windows at least).

Comment by indigodaddy 46 minutes ago

Hi, so I'm looking for an STT setup that can run on a server via cron, using a small local model (I have a 4 vCPU Threadripper, CPU only, with 20G RAM on the server), and that can transcribe from remote audio URLs (ideally directly, but I know local models probably don't have that feature, so I'll have to do something like curl the audio down to memory or /tmp, transcribe, and then remove the file, etc.).
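
Roughly what I have in mind, as an untested sketch (faster-whisper on CPU; the model size and int8 setting are just guesses for this box):

    # Untested sketch: pull a remote audio URL to a temp file, transcribe on CPU
    # with faster-whisper (int8), and let the temp file clean itself up.
    import tempfile
    import requests
    from faster_whisper import WhisperModel

    model = WhisperModel("small", device="cpu", compute_type="int8")

    def transcribe_url(url: str) -> str:
        with tempfile.NamedTemporaryFile(suffix=".mp3") as tmp:  # deleted on close
            tmp.write(requests.get(url, timeout=60).content)
            tmp.flush()
            segments, _info = model.transcribe(tmp.name)
            return " ".join(seg.text.strip() for seg in segments)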

Have any thoughts?

Comment by 3dsnano 2 hours ago

posts like this are why i visit HN daily!!!

thanks for sharing your knowledge; can’t wait to try out your voice plugin

Comment by supermatt 1 hour ago

There was a great post the other day showing low latency end to end using Nvidia models on a single GPU with pipecat

Discussion: https://news.ycombinator.com/item?id=46528045

Article: https://www.daily.co/blog/building-voice-agents-with-nvidia-...

Comment by mpaepper 19 hours ago

You should look into the new Nvidia model: https://research.nvidia.com/labs/adlr/personaplex/

It has dual-channel input/output and a very permissive license.

Comment by zaken 11 hours ago

Oh man that space emergency example had me rolling

Comment by albert_e 8 hours ago

Ha --

and the "Customer Service - Banking" scenario claims that it demos "accent control" and the prompt gives the agent a definitely non-indian name, yet the agents sounds 100% Indian - I found that hilarious but also isn't it a bad example given they are claiming accent control as a feature?

Comment by mikkupikku 4 hours ago

"Sanni Virtanen", I guess it was meant to be Finnish? Maybe the "bank customer support" part threw the AI off, lmao.

Comment by adabyron 3 hours ago

Changing my title to "Astronaut" right now... I'll be using that line as well anytime someone asks me to do something.

Comment by hnlmorg 6 hours ago

Oh wow. That's definitely something…

Comment by cbrews 17 hours ago

Thanks for sharing this! I'm going to put this on my list to play around with. I'm not really an expert in this tech (I come from the audio side), but recently I was playing around with streaming speech-to-text (using Whisper) / text-to-speech (using Kokoro at the time) on a local machine.

The most challenging part in my build was tuning the inference batch sizing. I was able to get speech-to-text working well down to batch sizes of 200ms. I even implemented a basic local agreement algorithm and it was still very fast (inference time, I think, was around 10-20ms?). You're basically limited by the minimum batch size, NOT inference time. Maybe that's the missing "secret sauce" the original post is hinting at?
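
In case "local agreement" is unfamiliar: you re-run the ASR over a growing audio buffer and only commit the prefix that consecutive hypotheses agree on, so the unstable tail near the chunk boundary doesn't flicker. Very rough sketch of the idea (not my actual code):

    # Rough LocalAgreement-2 sketch: emit only the words that the two most recent
    # full-buffer hypotheses agree on; keep the unstable tail for the next pass.
    committed: list[str] = []   # words already sent downstream
    prev_hyp: list[str] = []    # previous full-buffer hypothesis

    def on_new_chunk(asr_transcribe, audio_buffer) -> str:
        """asr_transcribe(audio) returns the full-buffer hypothesis as a word list."""
        global committed, prev_hyp
        hyp = asr_transcribe(audio_buffer)
        stable = []
        for a, b in zip(prev_hyp, hyp):
            if a != b:
                break
            stable.append(a)
        prev_hyp = hyp
        if len(stable) > len(committed):
            new_words = stable[len(committed):]
            committed = stable
            return " ".join(new_words)   # safe to feed to the LLM now
        return ""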

In the use case listed above, the TTS probably isn't a bottleneck as long as OP can generate tokens quickly.

All this being said, a wrapped model like this that can handle the hand-offs between these parts of the process sounds really useful, and I'll definitely be interested in seeing how it performs.

Let me know if you guys play with this and find success.

Comment by dsrtslnd23 19 hours ago

oh - very interesting indeed! thanks

Comment by vulkoingim 9 hours ago

I'm using https://spokenly.app/ in local mode, which is free. Very happy with it. It supports a bunch of models, including Whisper and Parakeet. Right now I'm mostly using Parakeet v3 on my desktop; it tends to make a few more errors, although it is very fast. I cycle between it and Distil-Whisper Large V3.5, which is a bit slower.

On iOS I'm using the same app, with the Apple Speech model, which I've found performs better for me than Parakeet/Whisper. One drawback of the Apple model is that you need iOS/macOS 26+, and I haven't bothered to update to Tahoe on my Mac.

Both of the models work instantly for me (Mac M1, iphone 17 Pro).

Edit: Aaaand I just saw that you're looking for speech-to-speech. Oops, still sleeping.

Comment by timwis 8 hours ago

Home Assistant has a fully local voice assistant experience that's very pluggable and customisable. I believe it uses a fast Whisper model for STT and Piper for TTS.

You can run it on a Raspberry Pi (or ideally an N100+), and for the microphone/speaker part, you can make your own or buy their off-the-shelf voice hardware, which works really well.

https://www.home-assistant.io/voice-pe/

Comment by stavros 6 hours ago

Unfortunately I didn't manage to figure out how to make their hardware work without an HA installation. I'd really love to do that; if anyone has any info on how their protocol works, please do tell.

I looked at their Wyoming docs online but couldn't really see how to even let it find the server, and the ESPhome firmware it runs offered similarly few hints.

Comment by dfajgljsldkjag 16 hours ago

It requires a bit of tinkering, but I think pipecat is the way to go. You can plug in pretty much any STT/LLM/TTS you want and go. It definitely supports local models, but it's up to you to get your hands on those models.
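
For a sense of the glue: a pipecat pipeline is basically an ordered list of frame processors. Rough shape below, with the STT/LLM/TTS services left as placeholders since the exact classes depend on your pipecat version and providers:

    # Rough shape of a pipecat voice pipeline: frames flow left to right.
    # stt/llm/tts stand in for whichever services you wire up (local Whisper,
    # an OpenAI-compatible llama.cpp server, Piper/Kokoro, etc.).
    from pipecat.pipeline.pipeline import Pipeline
    from pipecat.pipeline.runner import PipelineRunner
    from pipecat.pipeline.task import PipelineTask

    async def run_voice_loop(transport, stt, llm, tts):
        pipeline = Pipeline([
            transport.input(),    # mic audio in
            stt,                  # streaming ASR -> text frames
            llm,                  # text -> response tokens
            tts,                  # tokens -> audio frames
            transport.output(),   # speaker audio out
        ])
        await PipelineRunner().run(PipelineTask(pipeline))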

Not sure if there are any turnkey setups preconfigured for local install where you can just press play and go, though.

Last I heard, E2E speech-to-speech models are still pretty weak. I've had pretty bad results from gpt-realtime, and that's a proprietary model; I'm assuming open source is a bit behind.

Comment by storystarling 1 hour ago

I suspect the glued pipeline is going to remain dominant for a while, mostly because the intermediate text layer is structural, not just a byproduct. If you drop the text for a pure E2E model, you suddenly lose the ability to easily inject RAG context or handle complex tool use. I've been building some agent workflows recently and having that text state to pass into something like LangGraph is the only way to reliably control the logic. Without it, you are basically flying blind on the backend.
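
Concretely, the text layer is just the seam where you can splice things in between ASR and TTS. A framework-agnostic sketch of what I mean:

    # Why the intermediate text layer matters: it is the one place you can inject
    # retrieved context (or inspect tool calls) before anything gets spoken.
    def handle_turn(audio_in, asr, retrieve, llm, tts):
        user_text = asr(audio_in)           # ASR -> final user utterance (text)
        context = retrieve(user_text)       # RAG lookup, only possible on text
        prompt = f"Context:\n{context}\n\nUser: {user_text}\nAssistant:"
        reply_text = llm(prompt)            # tool-use decisions happen here too
        return tts(reply_text)              # back out as audio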

Comment by dsrtslnd23 9 hours ago

yes, I am currently playing with pipecat - both with an ASR + LLM + TTS pipeline and with speech-to-text (Ultravox) + TTS - but haven't been successful with local speech-to-speech setups yet.

Comment by nsbk 5 hours ago

I'm putting together a streaming ASR + LLM + streaming TTS setup based on Nvidia speech models: nemotron ASR and magpie TTS, pipecat to glue everything together, plus an LLM of your choice. I added Spanish support using canary models, since the magpie models are English-only, and it still works really well.

The work is based on a repo by pipecat that I forked and modified to be more comfortable to run (Docker Compose for the server and client), adding Spanish support via canary models and Nvidia Ampere support so it can run on my 3090.

The use case is a conversation partner for my gf who is learning Spanish, and it works incredibly well. For LLM I settled with Mistral-Small-3.2-24B-Instruct-2506-Q4_K_S.gguf

https://github.com/nsbk/nemotron-january-2026

Comment by amelius 17 hours ago

Comment by schobi 8 hours ago

Oh... Having a local-only voice assistant would be great. Maybe someone can share the practical side of this.

Do you have the GPU running all day at 200W to scan for wake words? Or is that running on the machine you are working on anyway?

Is this running from a headset microphone (while sitting at the desk?) or more like a USB speakerphone? Is there an Alexa jailbreak / alternative firmware to use as a frontend, with this running on a GPU hidden away?

Comment by butvacuum 5 hours ago

Wake words are generally processed extremely early in the pipeline. So if you capture audio with, say, an ESP32, the µC does the wake word watching.

There are even microphone ADCs and DSPs (if you use a mic that outputs PCM/I2S instead of analog) that do the processing internally.

Comment by marsbars241 15 hours ago

Tangential: What hardware are you using for the interface on these? Is there a good array microphone that performs on par with echos/ghomes/homepods?

Comment by andhuman 9 hours ago

I built this recently. I used Nvidia Parakeet as STT, openWakeWord for wake word detection, Mistral Ministral 14b as the LLM, and Pocket TTS for TTS. Fits snugly in my 16 GB of VRAM. Pocket is small and fast and has good enough voice cloning. I first used the Chatterbox Turbo model, which performed better and even supported some simple paralinguistic words like (chuckle) that made it more fun, but it was just a bit too big for my rig.
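
The wake-word part of a loop like that is pleasantly small. A sketch assuming openWakeWord's Model.predict() interface and 16 kHz / 80 ms frames (the model name and threshold are just examples; check their README):

    # Sketch of wake-word gating with openWakeWord: feed 80 ms frames of 16 kHz
    # int16 audio and wake the rest of the pipeline when a score passes a threshold.
    # Model name and threshold are examples; check the openWakeWord docs.
    from openwakeword.model import Model

    oww = Model(wakeword_models=["hey_jarvis"])

    def wait_for_wake(mic_frames, threshold=0.5):
        """mic_frames yields 1280-sample int16 numpy arrays (80 ms at 16 kHz)."""
        for frame in mic_frames:
            scores = oww.predict(frame)       # dict: wake word model -> score
            if max(scores.values()) > threshold:
                return True
        return False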

Comment by PhilippGille 8 hours ago

OP asked:

> Is anyone doing true end-to-end speech models locally (streaming audio out), or is the SOTA still “streaming ASR + LLM + streaming TTS” glued together?

Your setup is the latter, not the former.

Comment by doonielk 10 hours ago

I did an MLX "streaming ASR + LLM + streaming TTS" pipeline in early 2024. I haven't worked on it since then, so it's dated. There are now better versions of all the models I used.

I was able to get conversational latency, with the ability to interrupt the pipeline, on a Mac, using a variety of tricks. It's MLX, so only relevant if you have a Mac.

https://github.com/andrewgph/local_voice

For MLX speech to speech, I've seen:

The mlx-audio package has some MLX implementations of speech to speech models: https://github.com/Blaizzy/mlx-audio/tree/main

Kyutai Moshi, maybe old now, but it has an MLX implementation of their speech-to-speech model: https://github.com/kyutai-labs/moshi

Comment by zahlman 10 hours ago

What exactly do you want the pipeline to do that cares about the input being "speech", or indeed that's different from just sending mic -> speaker directly? (I can imagine a few different things, but I want to figure out if your use case sounds like mine, or what suggestions are appropriate for what tasks.)

Comment by sgt 7 hours ago

While on this subject, what's the go-to speech-to-text model (open source or proprietary, doesn't matter) if you have to support a lot of languages really well?

Comment by nemima 7 hours ago

If proprietary/SaaS fits your use case, I can recommend Speechmatics. It has a wider range of languages than a lot of the competition: https://speechmatics.com

(Full disclosure: I'm an engineer there)

Comment by sgt 4 hours ago

Will it work with, say, someone speaking English with some Hindi mixed in? I'm not from there so I'm not sure how prevalent that is, but I've been told it's quite common to "mix it up" in India, and I probably need to cater for that use case.

PS if you can share your email I'll pop you an email about Speechmatics. I tried the English version and it's impressive.

Comment by nemima 3 hours ago

This is definitely the sort of use case we aim to support! I would need to check about Hindi specifically, but we have several bilingual models already with more to come:

https://docs.speechmatics.com/speech-to-text/languages#trans...

Drop me an email at mattn@speechmatics.com and we can chat about further details :)

Comment by dvfjsdhgfv 6 hours ago

I spent a few days on a similar scenario without much success (a scenario where one person speaks and their speech is then translated, and I want just the original or both).

An API call to GPT4o works quite well (it basically handles both transcription and diarization), but I wanted a local model.

Whisper is really good for one person speaking. With more people you get repetitions. Qwen and other open multimodal models give subpar results.

I tried a multipass approach, with the first pass identifying the language and chunking, and the next one doing the actual transcription, but this tended to miss a lot of content.

I'm going to give canary-1b-v2 a try next weekend. But it looks like, in spite of enormous development in other areas, speech recognition has stalled since Whisper's release (more than 3 years ago already?).

Comment by sails 9 hours ago

Looking for an iOS app to test this, as I'm generally curious about the capabilities of on-device TTS (yet to find an app, but there are loads for text gen).

It can't be too far off, considering Siri and TTS have been on devices for ages.

Comment by varik77 15 hours ago

I have used https://github.com/SaynaAI/sayna . What I like most is that you can switch between providers easily and see what works best for you. It also supports local models.

Comment by ripped_britches 12 hours ago

Speech-to-speech is not nearly as good as LiveKit IMO ("old school" sequence of transcribe, LLM, synthesize). It depends on what you're doing of course, but this is just because the LLMs are way smarter than the speech-to-speech models, which are pretty much the worst (again IMO) at anything beyond basic banter. And LiveKit is just a framework, so you can hook it up with any models in the stack. I'm not an expert on the local parts, but I would assume this is pretty easy to glue together.

Comment by vidarh 7 hours ago

They work for two entirely different things. The problem with these pipelines is that unless the latency is very low they simply aren't suitable replacements for Alexa etc. For that use case, low latency beats smarts.

Comment by ripped_britches 2 hours ago

The latency is very very low in my experience, it would definitely work well as an Alexa style assistant

Comment by hedgehog 15 hours ago

I haven't tried them myself, but Kyutai has a couple of projects that could fit.

https://kyutai.org

Comment by Johnny_Bonk 17 hours ago

Anyone using any reasonably good small open-source speech-to-text models?

Comment by d4rkp4ttern 4 hours ago

Parakeet V3 is near-instant transcription, and the slight accuracy drop relative to the slower/bigger Whisper models is immaterial when talking to AIs that can “read between the lines”.

Comment by woudsma 8 hours ago

I'm using Whisper with superwhisper on my Mac. I've assigned a key on my keyboard: when I press the key it starts listening, and when I release it, the text gets copied to the current cursor location. It works pretty well.

Comment by garblegarble 17 hours ago

For my inputs, whisper distil-large-v3.5 is the best. I tried Parakeet 0.6 v3 last night but it has higher error rates than I'd like (but it is fast...)

Comment by Johnny_Bonk 17 hours ago

Nice, I'll try it. As of now, for my personal STT workflow I use the ElevenLabs API, which is pretty generous, but I'm curious to play around with other options.

Comment by garblegarble 17 hours ago

I assume that will be better than Whisper - I haven't benchmarked it against cloud models; the project I'm working on cannot send data out to cloud models.

Comment by BiraIgnacio 17 hours ago

Oh, I've been looking into Whisper and Vosk in the last few days. I'll probably go with Whisper (via whisper.cpp), but has anyone compared it to the Vosk models?

Comment by jauntywundrkind 19 hours ago

It was a little annoying getting old Qt5 tools installed, but I really enjoyed using dsnote / Speech Note. Huge model selection for my AMD GPU. Good tool. I haven't done enough specific studying yet to give you suggestions for which model to go with. WhisperFlow is very popular.

Kyutai does some very interesting work, always. Their delayed streams work is bleeding edge and sounds very promising, especially for low latency. Not sure why I haven't tried it yet, tbh. https://github.com/kyutai-labs/delayed-streams-modeling

There's also a really nice, elegant, simple app, Handy. It only supports Whisper and Parakeet V3, but it's a nice app and those are amazing models. https://github.com/cjpais/Handy

Comment by soulofmischief 12 hours ago

I have a great local assistant that works end-to-end with voice. It's built on local, web-first technologies; it fits small LLMs in memory and manages inference and TTS/STT without stuttering. I've been shaping it up over a couple of years, constantly switching in new models.

If you want something simple that runs in browser, look at vosk-browser[0] and vits-web[1].

I'd also recommend checking out KittenTTS[2], I use it and it's great for the size/performance. However, you'd need to implement a custom JavaScript harness for the model since it's a python project. If you need help with that, shoot me an email and I can share some code.

There are other great approaches too if you don't mind Python; personally, I chose the web as a platform in order to make my agent fully portable and remote once I release it.

And of course, NVIDIA's new model just came out last week [3], though I haven't gotten to test it out yet. There was also the recent Sparrow-1 [4] announcement, which shows people are finally putting money into the problems plaguing voice agents rigged up from several models and glue infrastructure, versus a single end-to-end model, or at least a conversational turn-taking model to keep things on rails.

[0] https://www.npmjs.com/package/vosk-browser

[1] https://github.com/diffusionstudio/vits-web

[2] https://github.com/KittenML/KittenTTS

[3] https://research.nvidia.com/labs/adlr/personaplex/

[4] https://www.tavus.io/post/sparrow-1-human-level-conversation...

Comment by masardigital 3 hours ago

good

Comment by DANmode 15 hours ago

https://handy.computer got good marks from a very nontechnical user in my life this week!

Local, FOSS

Comment by benatkin 13 hours ago

To save a click, it's just a fancy front end for Whisper plus a weaker CPU-only model. It has a demo video that seems impressive, but the speech is careful to sound casual while having no meaningful flaws that would cause it to mess up. If you want to make a speech to speech tool, which is what this post asks about, it would make more sense to go straight to Whisper.

Comment by joshribakoff 11 hours ago

I use it, sponsor it, and did a small PR. One of its goals is to be the most "forkable" starting point, if I recall. But yes, it's just voice input. It's meaningfully better than the Mac dictation for me.

Comment by tuananh 12 hours ago

You can use the GPU too. I have to admit the app is very easy to use and super convenient. Kudos to the creator.

Comment by benatkin 11 hours ago

Yes, and with a GPU it's Whisper, which has been mentioned elsewhere in this thread's comments. I mean that handy.computer provides the other option as a fallback for those who can't or don't want to use the GPU.
