GUIs provide information in 2D, letting eyes skim and bypass information that's not useful.
VUIs provide information in 1D, forcing you to take information in a linear stream and providing, at best, weak controls for skipping around without losing context or place.
Not coincidentally, this is why I absolutely hate watching videos for programming and news. If it's an article or post or thread, I can quickly evaluate whether it has what I want and bypass the fluff. Videos simply suck at being usable, unless they're for something physical like how to work on a particular motor or carve a particular pattern into wood.
None of these basic realities are accounted for in current technology. Instead we have dumb robot voices reading us results from a preprocessed script that the system thinks answers our question. No wonder the monkey part of our brain immediately picks up on the fact that this whole facade isn't just a lie, but an excruciating lie. It's excruciating because it's immediately obvious that there's nothing else 'there' to interact with. Even when speaking to another person over the phone, there's a huge amount of nuance you can pick up on. Are they happy? Sad? Frazzled? In a rush? Relaxed? Normal humans automatically calibrate what they say, how they sound, and what they suggest based on these perfectly obvious cues. It works really well!
There's no reason voice stuff has to suck. It has worked pretty great for humans for thousands of years. We're evolutionarily tuned to it. It's just that all the technology we've created around it totally sucks and people are delusional if they think it's anywhere near prime time.
The one main situation where NL interfaces are superior is when you're mobile (like driving) or your hands are tied up.
I think this affects GPTs just like it did Alexa. Which means that GPTs aren’t the final UI. The real innovation will be in the right AI UX.
The core problem is that these systems are just so incorrect in fundamental ways that they're effectively useless.
Imagine a buddy of yours tells you about an event he's pretty sure you'll be interested in. Why does he tell you about this event? Well, he knows your interests, what kind of things you enjoy, when you're free, who you might want to go to the event with, how much money you're willing to spend, how far you're willing to travel, when you like to go out... So when you're on the receiving end of such a suggestion it often feels great! It's like you've struck gold.
Now imagine your average 'AI' powered recommendation engine reading you a list of events. It doesn't feel magical. It doesn't even feel like it knows what the hell you enjoy doing half the time. Forget about your free time, budgetary restrictions, family restrictions, or who you'd be able to go with: none of that stuff is even sort of in the picture. And it's all delivered in a voice that sounds like it would be as happy to kill you as give you advice. There's no lively back and forth on the logistics of the event. No feeling of discovery as you two talk it out, honing the plan that brings it from an abstract concept to reality.
It's just dead and lifeless and shitty.
Please get real.
Or they'll do something hilarious like sell VCs on a worldwide cryptocurrency that is uniquely joined to an individual by their biometrics and somehow involves AI. I'm sure they could wrangle a few hundred million out of the VC class with a braindead scheme like that.
From my read, Ilya's goal is to stop working with Sam and, relatedly, to focus OpenAI on purer AGI research without having to answer to commercial pressures. There is every indication that he will succeed in that. It's also entirely possible that this means less investment from Microsoft etc., less commercial success, and a narrower reach and impact. But that's the point.
Sam's always been about having a big impact and huge commercial success, so he's probably going to form a new company, poach some top OpenAI researchers, and aggressively go after things like commercial partnerships and AI stores. But that's also the point.
Both board members are smart enough that they will probably get what they want; they just want different things.
Any decision that doesn't make the 'line go up' is considered a dumb decision. So to most people on this site, kicking Sam out of the company was a bad idea because it meant the company's future earning potential had cratered.
I grew up and got into software by messing around with self-hosting web servers and game communities as a kid. As time has gone on, I've felt like we've lost some of the magic of easily sharing your machines and your creations with other people. We have a ton of services where you can now deploy and share your creations, but we've moved further and further away from direct sharing. There are plenty of good reasons why this happened, security being the most obvious, but it still makes me a little sad. I want my things to be able to talk to each other no matter where I am. I want to be able to invite my friends in and give them access to my stuff.
Tailscale makes all of that quick, easy, and awesome. I think it's really neat; it makes me feel like a little nerdy kid again.