This is my first comment so I'm not sure how to do this, but I made a BYO-API-key VSCode extension that uses the OpenAI realtime API so you can have interactive voice conversations with a rubber ducky. I've been meaning to write a Show HN post about it, but your comment got me excited!
In the future I want to build features that help people describe their bugs and the strategies they've already tried to fix them. If I can pull it off, it would be cool if the AI ducky had a cursor it could use to point at and navigate to things as well.
Please let me know if you find it useful https://akshaytrikha.github.io/deep-learning/2025/05/23/duck...
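For anyone curious how the BYO-key part might work, here's a minimal sketch of the flow. The setting name (rubberDuck.openaiApiKey), the command ID, and the model string are illustrative assumptions on my part, not necessarily what the extension actually uses:

```typescript
// Sketch of a BYO-API-key flow in a VSCode extension (assumes the "ws" package).
// The setting name, command ID, and model string are hypothetical.
import * as vscode from "vscode";
import WebSocket from "ws";

export function activate(context: vscode.ExtensionContext) {
  const command = vscode.commands.registerCommand("rubberDuck.startSession", () => {
    // The user supplies their own key via settings; the extension never ships one.
    const apiKey = vscode.workspace
      .getConfiguration("rubberDuck")
      .get<string>("openaiApiKey");
    if (!apiKey) {
      vscode.window.showErrorMessage("Set rubberDuck.openaiApiKey in settings first.");
      return;
    }

    // Open a realtime session over WebSocket.
    const ws = new WebSocket(
      "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview",
      { headers: { Authorization: `Bearer ${apiKey}`, "OpenAI-Beta": "realtime=v1" } }
    );
    ws.on("open", () => vscode.window.showInformationMessage("Duck is listening."));
    context.subscriptions.push({ dispose: () => ws.close() });
  });
  context.subscriptions.push(command);
}
```

The nice part of this design is that the key stays in the user's own settings, so the extension never has to ship or proxy credentials.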
It's as if the rubber duck were actually on your desk while you're programming, and if there were an MCP server with live access to the code, it could give you realtime advice.
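The "live access to code" part doesn't seem far off either. A minimal MCP server exposing a file-reading tool might look roughly like this, using the official TypeScript SDK; the server name and tool are hypothetical, and a real version would want sandboxing:

```typescript
// Rough sketch of an MCP server exposing code to the ducky (ESM, top-level await).
// Assumes @modelcontextprotocol/sdk and zod are installed; the tool name is made up.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import * as fs from "node:fs/promises";

const server = new McpServer({ name: "live-code", version: "0.1.0" });

// Let the model pull the current contents of a source file on demand.
server.tool("read_file", { path: z.string() }, async ({ path }) => ({
  content: [{ type: "text", text: await fs.readFile(path, "utf8") }],
}));

await server.connect(new StdioServerTransport());
```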
Hey there buddy! Have you tried brushing with Sensodyne, now available at your nearest CVS for only $9.99!
What kind of interesting challenges have you run into, and how has OpenAI's realtime API influenced your work?
PS: Your GitHub README is quite well crafted; that's hard to come across nowadays.
It seems pointless to expect everyone to cross the C++/audio barrier just to make something cool. Using this cuts a lot of dev time and brings products to market way quicker. The repo basically helps you launch your own AI toy brand.
Not the first time I ran into it, but I did not bother commenting.
I can recognize it from far away. Thankfully I am not the only one.
The circuit diagram is on Figma
And the demo video was edited in CapCut
There are plenty of things you need to build an AI agent that I wouldn't want to re-implement or copy and paste each time. The most annoying is automatic conversation history summarization (e.g. I accidentally wasted $60 with the latest OpenAI realtime model, because costs climb very quickly as the conversation history grows). And I'm sure we'll discover more things like that in the future.
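The summarization itself doesn't have to be fancy. Here's roughly the idea as a sketch, not anyone's library API; the 6-turn window, the chars-divided-by-4 token estimate, and the model name are all placeholder choices:

```typescript
// Sketch: compact old turns into a summary once history exceeds a token budget.
// Token counting is approximated as chars / 4; swap in a real tokenizer.
import OpenAI from "openai";

type Turn = { role: "system" | "user" | "assistant"; content: string };

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const approxTokens = (turns: Turn[]) =>
  turns.reduce((n, t) => n + Math.ceil(t.content.length / 4), 0);

async function compactHistory(history: Turn[], budget = 4000): Promise<Turn[]> {
  if (approxTokens(history) <= budget) return history;

  // Keep the most recent turns verbatim; summarize everything older.
  const recent = history.slice(-6);
  const older = history.slice(0, -6);

  const res = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder; any cheap model works for summaries
    messages: [
      {
        role: "system",
        content: "Summarize this conversation in a few sentences. Keep facts, decisions, and open questions.",
      },
      { role: "user", content: older.map((t) => `${t.role}: ${t.content}`).join("\n") },
    ],
  });

  const summary = res.choices[0].message.content ?? "";
  return [{ role: "system", content: `Summary of earlier conversation: ${summary}` }, ...recent];
}
```

Run something like compactHistory before each request and the history stops growing without bound, which is exactly what would have saved me that $60.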