Posted by braden-w · 6 months ago
Show HN: Whispering – Open-source, local-first dictation you can trust (github.com/epicenter-so/e...)
Hey HN! Braden here, creator of Whispering, an open-source speech-to-text app.

I really like dictation. For years, I relied on transcription tools that were almost good, but they were all closed-source. Even many of the ones that claimed to be “local” or “on-device” were still black boxes that left me wondering where my audio really went.

So I built Whispering. It’s open-source, local-first, and most importantly, transparent with your data. Your data is stored locally on your device, and your audio goes directly from your machine to a local provider (Whisper C++, Speaches, etc.) or your chosen cloud provider (Groq, OpenAI, ElevenLabs, etc.). For me, the features were good enough that I left my paid tools behind (I used Superwhisper and Wispr Flow before).

Productivity apps should be open-source and transparent with your data, but they also need to match the UX of paid, closed-source alternatives. I hope Whispering is near that point. I use it for several hours a day, from coding to thinking out loud while carrying pizza boxes back from the office.

Here’s an overview: https://www.youtube.com/watch?v=1jYgBMrfVZs, and here’s how I personally am using it with Claude Code these days: https://www.youtube.com/watch?v=tpix588SeiQ.

There are plenty of transcription apps out there, but I hope Whispering adds some extra competition from the OSS ecosystem (one of my other OSS favorites is Handy https://github.com/cjpais/Handy). Whispering has a few tricks up its sleeve, like a voice-activated mode for hands-free operation (no button holding), and customizable AI transformations with any prompt/model.

Whispering used to live in my personal GitHub repo, but I recently moved it into a larger project called Epicenter (https://github.com/epicenter-so/epicenter), which I should explain a bit...

I’m basically obsessed with local-first open-source software. I think there should be an open-source, local-first version of every app, and I would like them all to work together. The idea of Epicenter is to store your data in a folder of plaintext and SQLite, and build a suite of interoperable, local-first tools on top of this shared memory. Everything is totally transparent, so you can trust it.

Whispering is the first app in this effort. It’s not there yet regarding memory, but it’s getting there. I’ll probably write more about the bigger picture soon, but mainly I just want to make software and let it speak for itself (no pun intended in this case!), so this is my Show HN for now.

I just finished college and was about to move back with my parents and work on this instead of getting a job…and then I somehow got into YC. So my current plan is to cover my living expenses and use the YC funding to support maintainers, our dependencies, and people working on their own open-source local-first projects. More on that soon.

Would love your feedback, ideas, and roasts. If you would like to support the project, star it on GitHub here (https://github.com/epicenter-so/epicenter) and join the Discord here (https://go.epicenter.so/discord). Everything’s MIT licensed, so fork it, break it, ship your own version, copy whatever you want!

pstroqaty · 6 months ago
If anyone's interested in a janky-but-works-great dictation setup on Linux, here's mine:

On key press, start recording microphone to /tmp/dictate.mp3:

  # Save up to 10 mins. Minimize buffering. Save pid
  ffmpeg -f pulse -i default -ar 16000 -ac 1 -t 600 -y -c:a libmp3lame -q:a 2 -flush_packets 1 -avioflags direct -loglevel quiet /tmp/dictate.mp3 &
  echo $! > /tmp/dictate.pid
On key release, stop recording, transcribe with whisper.cpp, trim whitespace and print to stdout:

  # Stop recording
  kill $(cat /tmp/dictate.pid)
  # Transcribe
  whisper-cli --language en --model $HOME/.local/share/whisper/ggml-large-v3-turbo-q8_0.bin --no-prints --no-timestamps /tmp/dictate.mp3 | tr -d '\n' | sed 's/^[[:space:]]*//;s/[[:space:]]*$//'
I keep these in a dictate.sh script and bind to press/release on a single key. A programmable keyboard helps here. I use https://git.sr.ht/%7Egeb/dotool to turn the transcription into keystrokes. I've also tried ydotool and wtype, but they seem to swallow keystrokes.

  bindsym XF86Launch5 exec dictate.sh start
  bindsym --release XF86Launch5 exec echo "type $(dictate.sh stop)" | dotoolc
This gives a very functional push-to-talk setup.
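
Put together, the dictate.sh wrapper can look roughly like this (a sketch only; the start/stop dispatch is just the two snippets above glued into one script, and the exact layout may differ from what I actually run):

  #!/bin/sh
  # dictate.sh -- push-to-talk wrapper; "start" begins recording, "stop" prints the transcript.
  case "$1" in
    start)
      # Record up to 10 min of 16 kHz mono audio with minimal buffering; remember the pid.
      ffmpeg -f pulse -i default -ar 16000 -ac 1 -t 600 -y -c:a libmp3lame -q:a 2 \
        -flush_packets 1 -avioflags direct -loglevel quiet /tmp/dictate.mp3 &
      echo $! > /tmp/dictate.pid
      ;;
    stop)
      # Stop the recorder, transcribe with whisper.cpp, trim whitespace, print to stdout.
      kill "$(cat /tmp/dictate.pid)"
      whisper-cli --language en \
        --model "$HOME/.local/share/whisper/ggml-large-v3-turbo-q8_0.bin" \
        --no-prints --no-timestamps /tmp/dictate.mp3 \
        | tr -d '\n' | sed 's/^[[:space:]]*//;s/[[:space:]]*$//'
      ;;
  esac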

I'm very impressed with https://github.com/ggml-org/whisper.cpp. Transcription quality with large-v3-turbo-q8_0 is excellent IMO and a Vulkan build is very fast on my 6600XT. It takes about 1s for an average sentence to appear after I release the hotkey.
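
For reference, the Vulkan build is only a couple of commands (a sketch based on the whisper.cpp README; the GGML_VULKAN flag and the model names the download script accepts can change between releases, so check the current docs):

  # Build whisper.cpp with the Vulkan backend (flag name per recent upstream docs).
  git clone https://github.com/ggml-org/whisper.cpp
  cd whisper.cpp
  cmake -B build -DGGML_VULKAN=1
  cmake --build build -j --config Release
  # Fetch a quantized model; available names vary by release (see models/README.md).
  sh ./models/download-ggml-model.sh large-v3-turbo-q8_0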

I'm keeping an eye on the NVIDIA models; hopefully they work on ggml soon too. E.g. https://github.com/ggml-org/whisper.cpp/issues/3118.

Y_Y · 6 months ago
This is my favorite kind of software, to write and to use. I am reminded of Upton Sinclair's evergreen quote:

> It is difficult to get a man to understand something, when his salary depends upon his not understanding it!

in the special case where the thing to be understood is "your app doesn't need to be a Big Fucking Deal". Maybe it pleases some users to wrap this in layers of additional abstraction and chrome and clicky buttons and storefronts, but in the end the functionality is already there with a couple of FOSS projects glued together in a bash script.

I used to think the likes of Suckless were brutalist zealots, but more and more I think they (and the Unix patriarchs) were right and the path to enlightenment is expressed in plain text.

genewitch · 6 months ago
Before I even bother opening that GitHub repo: does it work on Windows? So far, all of the Whisper "clones" run poorly, if at all, on Windows. I do have a 3060 and a 1070 Ti I could use just for Whisper on Linux, but I have this 3090 on my Windows desktop that works "fine" for Whisper, TTS, SD, LLM.

Whisper on Windows, the openai-whisper package, doesn't have these q8_0 models; it has around 8 models, and I always get an error about Triton cores (something about timestamping, I guess), which Windows doesn't have. I've transcribed >1000 hours of audio with this setup, so I'm used to the workflow.

generalizations · 6 months ago
If you want to stay near the bleeding edge with this stuff, you probably want to be on some kind of Linux (or, failing that, a Mac). Windows is where stuff just trickles down to eventually.

braden-w · 6 months ago
Thanks for sharing a great alternative! It seems like that setup can go a long way for Linux users.
wkcheng · 6 months ago
Does this support using the Parakeet model locally? I'm a MacWhisper user and I find that Parakeet is way better and faster than Whisper for on-device transcription. I've been using push-to-transcribe with MacWhisper through Parakeet for a while now and it's quite magical.
braden-w · 6 months ago
Not yet, but I want it too! Parakeet looks incredible (saw that leaderboard result). My current roadmap is: finish stabilizing whisper.cpp integration, then add Parakeet support. If anyone has bandwidth to PR the connector, I’d be thrilled to merge it.
Bolwin · 6 months ago
Unfortunately, because it's an NVIDIA model, Parakeet doesn't work with whisper.cpp as far as I'm aware. You need ONNX.
daemonologist · 6 months ago
Parakeet is amazing - 3000x real-time on an A100 and 5x real-time even on a laptop CPU, while being more accurate than whisper-large-v3 (https://huggingface.co/spaces/hf-audio/open_asr_leaderboard). NeMo is a little awkward though; I'm amazed it runs locally on Mac (for MacWhisper).
wkcheng · 6 months ago
Yeah, Parakeet runs great locally on my M1 laptop (through MacWhisper). Transcription of recordings feels at least 10x faster than Whisper, and the accuracy is better as well. Push-to-talk for dictation is pretty seamless since the model is so fast. I've observed no downside to Parakeet if you're speaking English.
warangal · 6 months ago
A bit of a tangential point about Parakeet and other NVIDIA NeMo models: I never found the actual architecture implementations as PyTorch/TF code. It seems like all such models are instantiated from a binary blob, which makes it difficult to experiment. Maybe I missed something; does anyone here have more experience with .nemo models and can shed some more light on this?
polo · 6 months ago
+1 for MacWhisper. Very full featured, nice that it's a one time purchase, and the developer is constantly improving it.
mark212 · 6 months ago
seems like "not yet" is the answer from other comments
chrisweekly · 6 months ago
> "I think there should be an open-source, local-first version of every app, and I would like them all to work together. The idea of Epicenter is to store your data in a folder of plaintext and SQLite, and build a suite of interoperable, local-first tools on top of this shared memory. Everything is totally transparent, so you can trust it."

Yes! This. I have almost no experience w/ tts, but if/when I explore the space, I'll start w/ Whispering -- because of Epicenter. Starred the repo, and I'll give some thought to other apps that might make sense to contribute there. Bravo, thanks for publishing and sharing these, and congrats on getting into YC! :)

braden-w · 6 months ago
Thanks so much for the support! Really appreciate the feedback, and it’s great to hear the vision resonates. No worries on the STT/TTS experience; it’s just awesome to connect with someone who shares the values of open-source and owning our data :) I’m hoping my time in YC can be productive and, along the way, create more support for other OSS developers too. Keep in touch!
sebastiennight · 6 months ago
I think we're talking about STT (speech-to-text) here, not TTS.
chrisweekly · 6 months ago
whoops! absolutely correct, that's what I meant.
spullara · 6 months ago
If you do want to then also have a cloud version, you can use the AgentDB API, upload the databases there, and just change where the SQL runs.
dev0p · 6 months ago
That's a good idea... Just git repo your whole knowledge base and build on top of it.
braden-w · 6 months ago
For those checking out the repo this morning, I'm in the middle of a release that adds Whisper C++ support!

https://github.com/epicenter-so/epicenter/pull/655

After this pushes, we'll have far more extensive local transcription support. Just fixing a few more small things :)

teiferer · 6 months ago
You mentioned that you got into YC... what is the road to profitability for your project(s) if everything is open source and local?
marcodiego · 6 months ago
> I’m basically obsessed with local-first open-source software.

We all should be.

braden-w · 6 months ago
Agreed!
dumbmrblah · 6 months ago
I’ve been using Whispering for about a year now, and it has really changed how I interact with the computer. I make sure to buy mice and keyboards that have programmable hotkeys so that I can use the shortcuts for Whispering. I can’t go back to regular typing at this point; it just feels super inefficient. Thanks again for all your hard work!
braden-w · 6 months ago
Thank you so much for your support! It really means a lot :) Happy to hear that it's helped you, and keep in touch if you ever have any issues!
Tmpod · 6 months ago
I've been interested in dictation for a while, but I don't want to be sending any audio to a remote API; it all has to be local. Having tried just a couple of models (namely the one used by the FUTO Keyboard), I'm kinda feeling like we're not quite there yet.

My biggest gripe is perhaps not being able to get decent content out of a thought stream: the models can't properly filter out the pauses and "uuuuhmms", much less handle on-the-fly corrections to what I've been saying, like going back and repeating something with a slight variation.

This is a challenging problem I'd love to see tackled well by open models I can run on my computer or phone. Are there newer models more capable of this? Or is it not just a model thing, and am I missing a good app too?

In the meantime, I'll keep typing, even though it can be quite a bit less convenient, especially for note-taking on the go.

hephaes7us · 6 months ago
Have you tried Whisper itself? It's open-weights.

One of the features of the project posted above is "transformations" that you can run on transcripts. They feed the text into an LLM to clean it up. If you're willing to pay for the tokens, I think you could not only remove filler words but probably even get the semantically aware editing (corrections) you're talking about.

braden-w · 6 months ago
^Yep, unfortunately, the best option right now seems to be to pipe the output into another LLM for cleanup, which we try to help you do in Whispering. Recent transcription models don't have very good built-in inference/cleanup; Whisper only offers the very weak "prompt" parameter. This is probably by design, to keep these models lean, specialized, and performant at their task.
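
As a rough illustration of that kind of cleanup pass (a sketch only, not Whispering's built-in transformation pipeline; it assumes an OpenAI-compatible chat endpoint plus jq, and the model name and prompt are placeholders):

  # Hypothetical post-processing step: pipe a raw transcript through an LLM.
  RAW="$(cat /tmp/dictate.txt)"
  jq -n --arg t "$RAW" '{
    model: "gpt-4o-mini",
    messages: [
      {role: "system", content: "Remove filler words and apply any spoken self-corrections. Return only the cleaned text."},
      {role: "user", content: $t}
    ]
  }' | curl -s https://api.openai.com/v1/chat/completions \
        -H "Authorization: Bearer $OPENAI_API_KEY" \
        -H "Content-Type: application/json" \
        -d @- \
      | jq -r '.choices[0].message.content'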

glial · 6 months ago
This is wonderful, thank you for sharing!

Do you have any sense of whether this type of model would work with children's speech? There are plenty of educational applications that would value a privacy-first, locally deployed model, but my understanding is that Whisper performs pretty poorly with younger speakers.

braden-w · 6 months ago
Thank you! And you’re right, I think Whisper struggles with younger voices. I haven’t tested Parakeet or other models for this yet, but that’s a great use case (especially since privacy matters in education). I would also give a shout-out to Hyprnote (https://hyprnote.com/); they might be expanding their model options, as they have shown with OWhisper (https://docs.hyprnote.com/owhisper/what-is-this).