My optimistic hypothesis is that he really wants to control AGI because he believes he can make more efficient use of it than other people. He might even be right, and that's what scares me, because I don't trust his measures of efficiency.
I'd rather not let my pessimistic fantasies run wild here.
[0] https://www.cnbc.com/2023/03/24/openai-ceo-sam-altman-didnt-...
we talk dollars all day long but we haven't quantified power nearly as well
They're basically owned by Microsoft, they're bleeding tech/ethical talent and credibility, and most importantly Microsoft Research itself is no slouch (especially post-DeepMind poaching) - things like Phi are breaking ground on planets that OpenAI hasn't even touched.
At this point I'm thinking they're destined to become nothing but a premium marketing brand for Microsoft's technology.
- OpenAI approached Scarlett last fall, and she refused.
- Two days before the GPT-4o launch, they contacted her agent and asked that she reconsider. (Two days! This means they already had everything they needed to ship the product with Scarlett’s cloned voice.)
- Not receiving a response, OpenAI demoed the product anyway, with Sam tweeting “her” in reference to Scarlett’s film.
- When Scarlett’s counsel asked for an explanation of how the “Sky” voice was created, OpenAI yanked the voice from their product line.
Perhaps Sam’s next tweet should read “red-handed”.
"Midler was asked to sing a famous song of hers for the commercial and refused. Subsequently, the company hired a voice-impersonator of Midler and carried on with using the song for the commercial, since it had been approved by the copyright-holder. Midler's image and likeness were not used in the commercial but many claimed the voice used sounded impeccably like Midler's."
As a mostly casual observer of AI, even I was aware of this precedent.
This person is famous?
sounds like he's mad about being sloppy seconds :)
Either way, if it's enough to make them both think it's better to do their research outside of the opportunities and data access that OpenAI provides, it signals a significant shift in OpenAI's commitment to superalignment research and safety. One hopes that, at the very least, Microsoft's interest in brand integrity incentivizes some modicum of continued commitment to safety research.
Besides the infamous Tay, there was that apparently un-aligned WizardLM-2 (or something like that) model from them, which got released by mistake for about 12 hours.
I recently received a job recruitment email for an AI role written entirely in lowercase, and I was baffled as to how to interpret it.
Add a grade in red at the top if you're feeling extra cheeky