TL;DR: During an online meeting, the app instantly transcribes and translates all the audio you hear, and lets you decide when your voice is translated and when it isn't. It's invisible to others (like Granola) and works everywhere, without any meeting bots. Try it at startpinch.com
Here's a live demo we recorded this morning, without cuts: https://youtu.be/ltM2p-SosLc
When we first launched Pinch, we shipped a video-conferencing solution with a human-like AI interpreter that was an active participant in your call. Users held the spacebar down while speaking to the interpreter, and when they released it, the interpreter spoke the translation to the entire room.
That design was intentional - it puts the task of context selection on the user and prevents people from interrupting each other awkwardly (only one person can press spacebar at a time). It also comes with heavy tradeoffs, namely:
* Latency - meetings ran up to 2x longer, because everyone heard your full sentence and then the translation of your full sentence
* Friction with first-time users - customers using Pinch for external communication often meet new people each time, and we've learned of several who send out an instruction doc pre-meeting on how to join the Pinch call and use translation. A bad signal for our UX.
* Reach - it restricted our customers to those who create the meetings themselves
Benefits of the desktop app:
1. It creates a virtual microphone that you can use in any meeting app (see the sketch after this list)
2. Instant transcription + translation means you can understand what's going on in real time and interrupt where necessary
3. Simultaneous translation - after you start speaking, the others will hear your translated audio as fast as we can generate it, without interrupting your flow.
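For the curious, here's a minimal sketch of the virtual-microphone idea in Python. This is not our implementation; it assumes a loopback audio device such as BlackHole is installed on macOS, and uses the `sounddevice` library to play translated speech into that device so any meeting app that selects it as its microphone will hear it.

```python
# Minimal sketch of the virtual-mic idea, not Pinch's implementation.
# Assumes a loopback audio device (e.g. BlackHole on macOS) is installed;
# whatever is played into it shows up as microphone input in any app
# that selects it as its input device.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 48_000
VIRTUAL_DEVICE = "BlackHole 2ch"  # name of the installed loopback device

def speak_translation(pcm: np.ndarray) -> None:
    """Play translated speech into the loopback device so the meeting
    app picks it up as if it came from a real microphone."""
    sd.play(pcm, samplerate=SAMPLE_RATE, device=VIRTUAL_DEVICE)
    sd.wait()  # block until the clip has fully "spoken"

if __name__ == "__main__":
    # Stand-in for real TTS output: one second of silence.
    speak_translation(np.zeros(SAMPLE_RATE, dtype=np.float32))
```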
Over the last few months our focus has been on developing a model and UX that support high translation accuracy while automating context selection - knowing exactly when the model has enough words to begin speaking the translated sentence. We've rolled this out to the desktop app first.
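To illustrate the decision problem (this is a toy, not our actual model), here's a wait-k-style policy in Python: it buffers transcribed words and releases a chunk for translation once it has both a minimum amount of context and a clause boundary. All names and thresholds are made up for the example.

```python
# Illustrative segmentation policy for simultaneous translation - a toy
# stand-in for the "when do we have enough words?" decision.
from dataclasses import dataclass, field

@dataclass
class SegmentPolicy:
    min_words: int = 4          # wait-k style: hold at least k source words
    buffer: list[str] = field(default_factory=list)

    def push(self, word: str) -> str | None:
        """Feed one transcribed word; return a chunk to translate
        once the policy decides there is enough context."""
        self.buffer.append(word)
        enough_words = len(self.buffer) >= self.min_words
        clause_break = word.endswith((",", ".", "?", "!"))
        if enough_words and clause_break:
            chunk, self.buffer = " ".join(self.buffer), []
            return chunk
        return None

policy = SegmentPolicy()
for w in "I think we should ship the beta today, what do you think?".split():
    if (chunk := policy.push(w)) is not None:
        print("translate:", chunk)
```

A real system would also trigger on silence/pauses and on the translation model's own confidence, but the tradeoff is the same: waiting longer gives more context and better accuracy, emitting sooner gives lower latency.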
We're incredibly excited to go into public beta today - you can give it a try at www.startpinch.com
Cheers,
- Christian
I'm the developer of "Linguist", a browser extension for translation in the browser, and I can tell you that nowadays it is possible to translate text locally, on device. Linguist has an embedded offline translator; the same goes for TTS and voice recognition.
All of these features can run locally on-device, even in a browser extension - but not in a macOS application?
This product looks rather like malware that will spy on users and then blackmail us, or sell our conversations to email scammers for better targeting, or something.
It's also interesting that Chinese and Korean are not supported. You just use cloud services, and they all support these languages, so why don't you? Are you faking something?
"12 hours translation per month" for $29. 12 hours it's about 6-12 meetings? Who is your audience then?
If you add Deepgram listen API compatibility, you can do live transcription via either Deepgram (cloud) or OWhisper (local): https://news.ycombinator.com/item?id=44901853
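For anyone wondering what that compatibility looks like in practice, a Deepgram-style streaming client is roughly the sketch below (Python, `websockets` library). The cloud URL and Token auth header follow Deepgram's documented /v1/listen API; the local URL for OWhisper is an assumption - check its docs for the actual host, port, and path.

```python
# Sketch of a streaming client for a Deepgram-style /v1/listen endpoint.
# Point URL at Deepgram's cloud API, or at a locally running
# Deepgram-compatible server such as OWhisper (local URL below is a
# placeholder - consult the server's docs).
import asyncio
import json
import websockets

URL = "wss://api.deepgram.com/v1/listen?encoding=linear16&sample_rate=16000"
# URL = "ws://localhost:8080/v1/listen?..."  # hypothetical local OWhisper endpoint
API_KEY = "YOUR_DEEPGRAM_API_KEY"

async def stream(pcm_chunks):
    headers = {"Authorization": f"Token {API_KEY}"}
    # Note: use extra_headers= instead on websockets versions < 14.
    async with websockets.connect(URL, additional_headers=headers) as ws:
        async def send():
            for chunk in pcm_chunks:          # raw 16-bit little-endian PCM
                await ws.send(chunk)
                await asyncio.sleep(0.05)     # pace roughly like live audio
            await ws.send(json.dumps({"type": "CloseStream"}))

        async def receive():
            async for msg in ws:
                result = json.loads(msg)
                alt = result.get("channel", {}).get("alternatives", [{}])[0]
                if alt.get("transcript"):
                    print(alt["transcript"])

        await asyncio.gather(send(), receive())

# asyncio.run(stream(read_pcm_chunks("meeting.raw")))  # hypothetical reader
```

The appeal of a shared wire format is exactly this: the same client code does cloud or local transcription just by changing the URL.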
I am the IDEAL user, willing to pay a lot if this works. Count on me for feedback.
So far:
Good:
- Initial config is good - easy and simple enough to feel secure
Bad:
- High CPU usage
- Zoom asked me to restart the mic
- Couldn't verify the software actually works... Google Meet has no voice-testing loop; Zoom has one, but it didn't work for me.