There are a few drawbacks to local, I've discovered. For example, I doubt the new plugins can be extended beyond ChatGPT's web UI. Also, it doesn't stream response tokens as they're generated, which is a pain. I haven't looked into whether OpenAI's API lets you do that, though.
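(For what it's worth, the OpenAI API does support token streaming. A minimal sketch, assuming the pre-1.0 `openai` Python package; the model name is illustrative:)

```python
# Minimal sketch: streaming tokens from the OpenAI chat API
# (pre-1.0 `openai` package; model name is illustrative).
import openai

openai.api_key = "sk-..."  # your API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello"}],
    stream=True,  # server sends partial chunks as tokens are generated
)
for chunk in response:
    delta = chunk["choices"][0]["delta"]
    print(delta.get("content", ""), end="", flush=True)
```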
Nice work!
You're right, sampling rate doesn't change speed, whoops. But on that page you do have to adjust the "Set Output Sampling Rate" control to slow down the default voice speed.
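(My guess at why that setting affects speed, sketched below under the assumption that the synthesized audio is generated at a fixed rate and then reinterpreted at the chosen output rate; the numbers are illustrative, not the demo's actual settings:)

```python
# Sketch: reinterpreting samples at a different rate changes perceived speed.
import numpy as np

sr_generated = 22050                       # rate the TTS audio was synthesized at
t = np.arange(sr_generated) / sr_generated
tone = np.sin(2 * np.pi * 440 * t)         # 1 second of a 440 Hz tone

sr_playback = 44100                        # rate the output device is told to use
duration = len(tone) / sr_playback         # same samples now last only 0.5 s
pitch_factor = sr_playback / sr_generated  # and the pitch doubles (2.0x)
print(f"plays for {duration:.2f}s, pitch scaled by {pitch_factor:.1f}x")
```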
Important feedback on the live demo page: make the default output sampling rate correspond to a normal talking speed. Right now it defaults to the highest rate if you don't set it or don't know which rate is best. The first thing I did on the page was click the mic. The voice was too fast, and since the active mic disables the settings, I thought I couldn't change them and that the demo might be broken. You'll also want to make it clear that you can change the settings by turning the mic off; that took me a while to figure out.
Again, well done!
However, let's take a step back and consider a few basic arguments this article makes; I suspect most here would disagree only with the very last.
Firstly, the demand for quality software is far greater than the current supply. If there were suddenly twice the number of mid-level software engineers, there would still be more than enough work for them.
Secondly, software engineering is a process that can be practiced and refined almost entirely without interference from non-deterministic real-world systems like roads, weather, or courts. That makes it an ideal field for automation, and for AI to take on a larger role.
Thirdly, a sufficiently intelligent computer could exponentially increase its own efficacy at purely digital tasks: basically an [intelligence explosion](https://www.lesswrong.com/tag/intelligence-explosion), but much softer and much, much more achievable (how smart is a mid-level SWE, really?).
Finally, LLMs that score ~100 on an IQ test are enough to start that cycle.
Perhaps you're super duper convinced that the last point is wrong. Perhaps you have strongly held convictions about explainability, symbolic reasoning, higher-level thinking, etc. But if you really sit down and think, what are the chances you're wrong? What are the chances that we get another leap or two in the next 1-5 years, like the one we just got with transformer-based deep learning?
If you're feeling excited and anxious about the implications of this, you're not alone. I've found it's a difficult topic to discuss with those close to me, especially if they're not familiar with the latest developments in AI. If you have thoughts on how to use our SWE experience to navigate this exciting but uncertain landscape, while maintaining a sense of self-preservation and helping others as much as possible, I'd be interested in hearing them.
Anybody, and I mean anybody, can point out little things that are wrong in something. Especially in construction, and especially people who have never done the work themselves but hold financial power. Also, I guarantee the slight changes he thought were critical enough to warrant constant intervention actually weren't, or could have been addressed later. The cost would have been significantly higher if the contractor had factored in the time it took to satisfy him. He was getting what he paid for.