Source code: https://github.com/fastrepl/hyprnote
Demo video: https://hyprnote.com/demo
We built Hyprnote because some of our friends told us that their companies banned certain meeting notetakers due to data concerns, or they simply felt uncomfortable sending data to unknown servers. So they went back to manual note-taking - losing focus during meetings and wasting time afterward.
We asked: could we build something just as useful, but completely local?
Hyprnote is a desktop app that transcribes and summarizes meetings on-device. It captures both your mic input and system audio, so you don't need to invite bots. It generates a summary based on the notes you take. Everything runs on local AI models by default, using Whisper and HyprLLM, our proof-of-concept model fine-tuned from Qwen3 1.7B. We learned that summarizing meetings is a very nuanced task, and that a model's raw intelligence (or parameter count) doesn't matter THAT much. We'll release more details on evaluation and training once we finish the 2nd iteration of the model (it's still not that good; we can make it a lot better).
Whisper inference: https://github.com/fastrepl/hyprnote/blob/main/crates/whispe...
AEC inference: https://github.com/fastrepl/hyprnote/blob/main/crates/aec/sr...
LLM inference: https://github.com/fastrepl/hyprnote/blob/main/crates/llama/...
We also learned that for some folks, having full control over their data was as important as privacy. So we support custom endpoints, allowing users to bring their company's internal LLM. For teams that need integrations, collaboration, or admin controls, we're working on an optional server component that can be self-hosted. Lastly, we're exploring ways to make Hyprnote work like VS Code, so you can install extensions and build your own workflows around your meetings.
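Supporting custom endpoints usually means speaking an OpenAI-compatible API against a user-supplied base URL. As a hedged sketch (the URL, model name, and function here are hypothetical placeholders, not Hyprnote's actual code), the client side might build its summarization request like this:

```python
# Sketch: constructing a chat-completion request for a user-supplied,
# OpenAI-compatible endpoint. Base URL and model name are placeholders.

def build_summary_request(base_url: str, model: str, transcript: str, notes: str) -> dict:
    """Return the URL and JSON body for a summarization call."""
    return {
        "url": f"{base_url.rstrip('/')}/chat/completions",
        "json": {
            "model": model,
            "messages": [
                {"role": "system",
                 "content": "Summarize the meeting, using the user's notes as an outline."},
                {"role": "user",
                 "content": f"Notes:\n{notes}\n\nTranscript:\n{transcript}"},
            ],
        },
    }

req = build_summary_request("https://llm.internal.example.com/v1",
                            "internal-llm", "…transcript…", "…notes…")
print(req["url"])
```

Because the payload shape is the de facto standard, pointing the app at an internal gateway is just a base-URL swap.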
We believe privacy-first tools, powered by local models, are going to unlock the next wave of real-world AI apps.
We're here and looking forward to your comments!
The social proof logo list is an old scheme on the growth hacking checklist. There was a time when it was supposed to mean the company had purchased the software. Now it just means they knew someone who worked at those companies who said they’d check it out.
At this point, when I visit a small product’s landing page and see the logo list the first thing I think of is that they’re small and desperate to convince me they’re not.
Of course, dishonesty is as old as time, but these last couple of years have been hard to watch…
Help me understand what this means
Well, it looks a lot like you're playing word games to get clout-by-association that you don't necessarily deserve. That doesn't seem like something an authentic person (or people) would try to do. Are the other claims about your team and software equally unserious?
I hope, since it's open source, you're thinking about exposing an API / hooks for downstream tasks.
I’m the opposite: If something is expected to accurately summarize business content, I want to use the best possible model for it.
The difference between a quantized local model that can run on the average laptop and the latest models from Anthropic, Google, or OpenAI is still very significant.
Calendar integration would be nice to link transcripts to discrete meetings.
It would be great if you could include in your launch message how you plan to monetize this. Everybody likes open-source software, and local-first is excellent too, but since you mention YC, everybody also knows there's no free lunch. Knowing what's coming down the line would help in deciding whether to give it a shot or just move on.
We have a Pro license implemented in our app. Some non-essential features like custom templates or multi-turn chat are gated behind a paid license. (A custom STT model will also be included soon.) There's still no sign-up required. We use keygen.sh to generate offline-verifiable license keys. Currently, it's priced at $179/year.
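Offline-verifiable keys like keygen.sh's generally embed a signature in the key itself, so the app can check validity without phoning home. This is an illustrative stdlib sketch of that shape using HMAC as a stand-in; the real scheme uses asymmetric signatures (e.g. Ed25519), so the shipped app holds only a verification key, and every name here is an assumption, not Hyprnote's code:

```python
import base64
import hashlib
import hmac

# Placeholder secret. In a real asymmetric scheme, signing happens server-side
# and the app ships only a public verification key.
SECRET = b"demo-signing-key"

def sign_license(payload: str) -> str:
    """Append a base64url signature to a payload like 'pro:2025-12-31'."""
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).digest()
    return payload + "." + base64.urlsafe_b64encode(sig).decode().rstrip("=")

def verify_license(key: str) -> bool:
    """Check the signature locally; no network call needed."""
    payload, _, sig = key.rpartition(".")
    expected = sign_license(payload).rpartition(".")[2]
    return bool(payload) and hmac.compare_digest(sig, expected)

key = sign_license("pro:2025-12-31")
print(verify_license(key))        # valid key -> True
print(verify_license(key + "x"))  # tampered signature -> False
```

The expiry date lives inside the signed payload, so an annual license can be enforced entirely offline.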
For business:
If they want to self-host some kind of admin server with integrations, access control, and SSO, we plan to sell a business license.
Let's actively not support software that chooses anti-security.
Also Linux issue pointer: https://github.com/fastrepl/hyprnote/issues/67#issuecomment-...
Anyway, thanks for your work and good luck!
I made it required to prevent accidentally shipping the app without any analytics/error tracking. (Analytics can be opted out of.)
For ex, https://github.com/fastrepl/hyprnote/blob/327ef376c1091d093c...
EDIT: Prod -> release
I find myself often using otter.ai - because while it's inferior to Whisper in many ways, and anything but on-device, it shows words on the live transcript with minimal delay, rather than waiting for a moment of silence or for a multi-second buffer to fill. That's vital when I'm using the live transcription both to drive async summarization/notes and operationally in the same call, letting me speed-read to catch up on a question that was just posed to me while I was multitasking (or researching a prior question!)
It sometimes boggles me that we consider the latency of keypress-to-character-on-screen to be sacrosanct, but are fine with waiting for a phrase or paragraph or even an entire conversation to be complete before visualizing its transcription. Being able to control this would be incredible.
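The latency trade-off above mostly comes down to how often audio is handed to the decoder: a streaming UI feeds small fixed hops and re-decodes a growing window, revising earlier words, instead of waiting for silence. A minimal, purely illustrative chunker (not otter's or Hyprnote's actual pipeline; parameters are assumptions):

```python
# Sketch: slice incoming audio into fixed hops so a decoder can emit a
# partial hypothesis roughly every hop_ms, instead of waiting for silence.

def stream_chunks(samples, sample_rate=16000, hop_ms=500):
    """Yield successive hops of `samples` (a flat list of PCM floats)."""
    hop = sample_rate * hop_ms // 1000
    for start in range(0, len(samples), hop):
        yield samples[start:start + hop]

one_second = [0.0] * 16000
chunks = list(stream_chunks(one_second, hop_ms=250))
print(len(chunks))  # 4 partial-decode opportunities per second of audio
```

Smaller hops mean lower perceived latency but more redundant decoding work, which is exactly why doing this locally is harder than in the cloud.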
Doing it locally is hard, but we expect to ship it very soon. Please join our Discord (https://hyprnote.com/discord) if you're interested in hearing from us.
But because MacWhisper does not store transcripts or do much with them (other than giving you export options), there are some missed opportunities: I'd love to be able to add project tags to transcripts, so that any new transcript is summarized with the context of all previous transcript summaries that share the same tag. Thinking about it maybe I should build a Logseq extension to do that myself as I store all my meeting summaries there anyway.
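The tag-based idea could be sketched as assembling the summarization prompt from prior same-tag summaries. Everything below (function name, prompt format) is an assumption for illustration, not an existing Hyprnote or MacWhisper feature:

```python
# Sketch: summarize a new transcript with the context of all previous
# summaries that share the same project tag.

def build_tagged_prompt(tag: str, history: list[tuple[str, str]], new_transcript: str) -> str:
    """history is a list of (tag, summary) pairs from earlier meetings."""
    context = [summary for t, summary in history if t == tag]
    parts = [f"Previous '{tag}' meetings:"]
    parts += [f"- {s}" for s in context]
    parts += ["", "Summarize the new transcript in that context:", new_transcript]
    return "\n".join(parts)

history = [("acme", "Kickoff: agreed on scope."), ("other", "Unrelated project.")]
prompt = build_tagged_prompt("acme", history, "Discussed timeline slip.")
print(prompt)
```

The same pattern would port naturally to a Logseq extension, with pages as tags and block summaries as the history.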
Speaker detection is not great in MacWhisper (at least in my context where I work mostly with non native English speakers), so that would be a good differentiation too.
Automated meeting detection - we're working on this. Push-to-transcribe - we want to understand more about this. (Could we talk more over on our Discord? https://hyprnote.com/discord)
If you're using Logseq, we'd love to build an integration for you.
Finally, speaker identification is a big challenge for us too.
So many things to do - so exciting!
But your home page makes it look like you already have it. I just tried it in a 30-minute meeting with 20 people, and it put the entire conversation under a single speaker, in a single paragraph.
From a business perspective, and as someone looking also into the open-source model to launch tools, I'd be interested though how you expect revenue to be generated?
Are you relying solely on the audience segment that doesn't know how to hook up the API manually and so won't use the open-source version? How do you figure this, since by pushing it via open source/GitHub, you'd think most people exposed to it are technical enough to just run it from source.
Hope that makes sense.
First of all, I want to try this on my personal device. How can I set up your product on my devices?
I also want to discuss some points (on-device, self-hosting, open-source, etc.) to help persuade our compliance team.
What about Windows (native Windows rather than a VM or WSL)? What's your plan for a Windows version launch and support?