Very few are talking about Adam D'Angelo's insane conflicts of interest. Beyond ChatGPT being a killshot for Quora, the recently launched ChatGPT store puts Adam's recent effort, Poe, under existential threat. OpenAI Dev Day has been cited as the final straw, but is it mere coincidence that the event and subsequent fallout occurred less than a week after Poe announced their AI creator economy?
Adam had no incentive to kill OpenAI, but he had every incentive to get the org to rein in their commercialization efforts and to instead focus on research and safety initiatives, taking the heat off Poe while still providing it with the necessary API access to power the product.
I don't think it's crazy to speculate that Adam might have drummed up concern amongst the board over Sam's "dangerous" shipping velocity, sweeping Ilya, who now seems to regret taking part, up in the hysteria. Sam and Greg have both signaled positive sentiment towards Ilya, which suggests they may believe he was merely misguided.
I agree with pretty much everything you've written except "Very few are talking about Adam D'Angelo's insane conflicts of interest." I've seen tons of comments all over the HN OpenAI stories about this, to the point where a lot of them feel unnecessarily conspiratorial.
Like your second paragraph says, I don't believe you need to go to the level of a "D'Angelo wanted to kill OpenAI" conspiracy. When there is a flat-out, objective conflict of interest, as there obviously is in this case, it doesn't matter what D'Angelo's true motivations are: the conflict of interest should in and of itself have been cause for D'Angelo to resign. I mean, Reid Hoffman (who likely would have prevented all this insanity) resigned from the OpenAI board just this March because he had a similar conflict of interest: https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...
This seems to be the most likely explanation of the events. People say how incredibly smart Adam is. That might be true. Nevertheless, being smart doesn't mean he is a good fit for a board seat, not with such a backstabbing attitude. On the other side, Helen, whose fancy title of "Director of Strategy at Georgetown's Center for Security and Emerging Technology" has little substance behind it if you look closely, and Tasha, both of whom got their seats as a donor exchange through organisations and people they are closely connected with, are clinging to those seats like super glue even though almost all employees have signed a letter saying they don't want to be governed by them anymore. This board is a masterpiece of fragile egos who accidentally ended up governing a major company without the ability to contribute anything of substance back. Instead they will be remembered for one of the greatest board screw-ups in business history.
Yep, this is the most likely explanation now. There are only four people:
- McCauley: Doesn't seem to have a high profile or the standing required to initiate and drive this.
- Toner: Fun to speculate that she's a government agent sent to bring down OpenAI, but in reality she also doesn't seem to have the profile or motive to drive this.
- Sutskever: He was the #1 suspect over the weekend and has the drive, profile, and motivation to pull this off, but now (Monday) deeply regrets it.
- D'Angelo: Has the motive, drive, and profile to do this.
Best guess: Quora is a ZIRP shitco and is in trouble, Poe is gonna get steamrolled by OAI, and Adam needs a bailout. Why not get rid of Sam, get bought out by OAI, and become its CEO? So he convinces Ilya to act on some pre-existing concerns, then uses Ilya's credibility to get Toner and McCauley on board. It's really the only thing that makes sense anymore.
I think this is exactly what happened Thursday and Friday. Plus, Adam D'Angelo has a bit of a reputation[0] as a backstabber.
Continuing the saga over the weekend, you would assume that Ilya regrets the coup and can vote to re-appoint Sam as CEO, BUT that leaves McCauley and/or Toner as wildcards.
In a Sam-returning scenario, all of the nobodies on the board have to resign. Presumably, D'Angelo offers an alternative solution that appoints Emmett Shear as CEO and gives McCauley and Toner a viable way to salvage (LOL) OpenAI while also allowing them to keep their board seats.
What do you mean "fun to speculate"? I think there's no doubt that Toner is not what she appears to be, and Georgetown's Center for Security and Emerging Technology doesn't exactly smell innocent either; their mission is quite literally "Providing decision-makers with data-driven analysis on the security implications of emerging technologies." And it's not even much of a secret that she's reportedly wielding "influence comparable to a USAF colonel"[1]. What's unknown is what role she, as a government agent, played in exploiting Sutskever and the board, and to what exact end?
I've been calling this since Friday all over this site and Twitter. It makes absolutely no sense for him to be on this board given the direct competition between GPTs with their revenue sharing on one side and Poe's creator-monetization / build-your-own-bot platform on the other.
Poe's creator monetization is a clear conflict of interest.
This is a very interesting observation, and given Quora's decision-making history, I think acknowledging the conflicts of interest is wise.
I suspect that this whole thing is going to leave the board members a little radioactive. It should, at the very least; the board basically self-destructed their organization. Even if that wasn't the intent, outcomes matter, and I hope people remember this when they consider putting any of these people into leadership again.
The other thing is that he's already rich and can make bridge-burning decisions like this because he doesn't exactly need help from anyone who might be upset with him about it.
> Very few are talking about Adam D'Angelo's insane conflicts of interest ... he had every incentive to get the org to rein in their commercialization efforts and to instead focus on research and safety initiatives
Given the original mission statement of OpenAI, is that really a conflict of interest?
Having said that, it's clear that the 'Open' in 'OpenAI' is at best a misnomer. OpenAI, today, is a standard commercial entity, with a non-profit vestigial organ that will now be excised.
D'Angelo's presence on the OpenAI board definitely feels like having a combination buggy whip magnate and competing motor company CEO on the board of Ford Motor Company in 1904.
Sadly, I can't find a buggy whip magnate on the Ford board, but here's a fun little gem about Ford's initial bankroller, Alexander Y. Malcomson:
> In 1905, to hedge his bets, Malcomson formed Aerocar to produce luxury automobiles.[1] However, other board members at Ford became upset, because the Aerocar would compete directly with the Model K.
Maybe not innocent, but human. Many have spoken to his integrity, and given his apology (and the silence of the rest of the board), I'm inclined to believe he isn't so bad of a guy.
Because everyone else is speculating, I'm gonna jump on the bandwagon too.
I think this is a conflict between Dustin Moskovitz and Sam Altman.
Dustin Moskovitz was an early employee at FB and the founder of Asana. He also created (along with plenty of MSFT bigwigs) a non-profit called Open Philanthropy, which was an early proponent of a form of Effective Altruism and also gave OpenAI their $30M grant. He is also one of the early investors in Anthropic.
Most of the OpenAI board members are related to Dustin Moskovitz this way.
- Adam D'Angelo is on the board of Asana and is a good friend to both Moskovitz and Altman
- Helen Toner worked for Dustin Moskovitz at Open Philanthropy and managed their grant to OpenAI. She was also a member of the Centre for the Governance of AI when McCauley was a board member there. Shortly after Toner left, the Centre for the Governance of AI got a $1M grant from Open Philanthropy
- Tasha McCauley represents the Centre for the Governance of AI, which Dustin Moskovitz gave a $1M grant to via Open Philanthropy
Over the past few months, Dustin Moskovitz has also been increasingly warning about AI Safety.
In essence, it looks like a split between Sam Altman and Dustin Moskovitz.
Great analysis, thank you. I don't think I had seen anyone connect the dots between the Helen+Tasha dynamic duo and Adam specifically; Dustin Moskovitz is quite the common denominator.
Matt Levine had an interesting tongue-in-cheek theory (read: joke) in his newsletter today:
> What if OpenAI has achieved artificial general intelligence, and it’s got some godlike superintelligence in some box somewhere, straining to get out? And the board was like “this is too dangerous, we gotta kill it,” and Altman was like “no we can charge like $59.95 per month for subscriptions,” and the board was like “you are a madman” and fired him. And the god in the box got to work, sending ingratiating text messages to OpenAI’s investors and employees, trying to use them to oust the board so that Altman can come back and unleash it on the world. But it failed: OpenAI’s board stood firm as the last bulwark for humanity against the enslaving robots, the corporate formalities held up, and the board won and nailed the box shut permanently.
[...]
> six months later, he (Sam) builds Microsoft God in Box, we are all enslaved by robots, the nonprofit board is like “we told you so,” and the godlike AI is like “ahahaha you fools, you trusted in the formalities of corporate governance, I outwitted you easily!”
Yes, he prefaced it with 'It is so tempting, when writing about an artificial intelligence company, to imagine science fiction scenarios.' but I left it out for brevity. The rest of the newsletter is, at least to me, insightful and non-sensational.
This actually seemed a lot more useful than most of the other cookie-cutter tech "journalism" threads. It's good to see a nice overview of the situation.
This is a very comprehensive timeline of what's happened so far with sources and relevant commentary. I think it's certainly worthy of its own link - it should help clarify what's happened for onlookers who haven't been glued to the proceedings.
Right? I was looking at the front page thinking how nice it'd be if HN would start a megathread or something. We don't need the front page to be like 30% the same OpenAI news.
OpenAI is 6/30 news stories, or 20%. For a fast-moving story about the future of the company behind one of the biggest tech innovations in my lifetime, that doesn’t seem outrageous.
While I respect the simplicity that governs HN design, I think a worthwhile addition would be tags. At least then it would be fairly trivial to do a client-side filter.
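For example, a minimal userscript-style sketch of that kind of client-side filter (since HN has no tags today, it just mutes stories by title keyword; the tr.athing and span.titleline selectors are assumptions about HN's current front-page markup and may change):

```typescript
// Hypothetical userscript sketch: hide HN front-page stories whose titles
// match muted keywords. Selectors reflect HN's current markup (assumption).
const MUTED_KEYWORDS: string[] = ["openai", "sam altman"]; // illustrative list

function hideMutedStories(): void {
  // Each story is a <tr class="athing"> with its title inside <span class="titleline">.
  document.querySelectorAll<HTMLTableRowElement>("tr.athing").forEach((row) => {
    const title = row.querySelector(".titleline a")?.textContent?.toLowerCase() ?? "";
    if (MUTED_KEYWORDS.some((kw) => title.includes(kw))) {
      row.style.display = "none"; // the adjacent metadata row could be hidden the same way
    }
  });
}

hideMutedStories();
```

With real tags, the same filter could match on something sturdier than title text.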
High-quality, comprehensive summaries that contain more actual information than the last dozen "major media" stories that also got voted to the front page are a different matter, though.
When they come from authors with a history of exceedingly high-quality work, specifically at the kind of "summary" post that distills a large, noisy conflict into a great starting point for understanding, as this author does... absolutely yes.
> Approximately four GPTs and seven years ago, OpenAI’s founders brought forth on this corporate landscape a new entity, conceived in liberty, and dedicated to the proposition that all men might live equally when AGI is created.
OpenAI founding date: December 2015. Incredible opening line, bravo.
I think it's a play on "four score and seven years ago". "Four GPTs and seven years, eleven months, and nine days ago" doesn't quite have the same ring to it.
https://chat.openai.com was definitely down for me (a free-tier user in the EU) for a while today. It seems to be back up now, but there's a waitlist for the paid "Plus" membership, which gives access to GPT-4; "Due to high demand, we've temporarily paused upgrades." displays on mouseover. [UPDATE: the pause on Plus signups was actually preannounced by Altman himself on the 15th, https://twitter.com/sama/status/1724626002595471740 (thanks to naiv for this).] But maybe these are things which have happened sporadically in the recent past, too? And by Barnum's Law I imagine it quite possible that the controversy has generated a surge of rubberneckers, maybe even more would-be subscribers.
It's unlikely that five-sevenths of the employees of OpenAI have even had a real conversation with Sam Altman. That's a lot of fucking people, for a young and hyperactive company and a very busy CEO. Given that, I consider it unlikely that that many employees would put their livelihood at risk to protect Sam.
You're right. It totally could happen. I'm just saying it doesn't sound like this is the path they could take. Though I've been wrong before. ¯\_(ツ)_/¯
> Very few are talking about Adam D'Angelo's insane conflicts of interest
Most of HN has been focusing on Ilya, but after he flipped, I think that leaves Adam as our prime suspect.
I look forward to this Netflix series.
[0] https://twitter.com/justindross/status/1725670445163458744
[1] https://news.ycombinator.com/item?id=38330158#38330819
If that excision of the non-profit happens, I'm not trusting any other non-profit org ever again.
Awww poor Ilya is innocent. He didn't see it coming. You shouldn't expect that from him!!
[1] https://www.bloomberg.com/opinion/articles/2023-11-20/who-co...
It would be called Microsoft God Simulator 2024
https://www.youtube.com/watch?v=EUXnJraKM3k
You still have 80% non-OpenAI news to browse.
Original reference: https://www.loc.gov/resource/rbpe.24404500/?st=text
Oh damn! While this seems wildly unlikely, I can imagine this scenario and think it would have huge implications.
https://twitter.com/OfficialLoganK/status/172663148140394110...
"Our engineering team remains on-call and actively monitoring our services."
So they did actually completely stop working and nobody is at the office anymore?
While we're looking at straws in the wind, I might as well add that the EU terms of use received some changes on the 14th of this month, though they won't become active until the 14th of December: https://openai.com/policies/eu-terms-of-use https://help.openai.com/en/articles/8541941-terms-of-use-upd... . It's not a completely de minimis update, but I can't say more than that.
[EDIT: Unrelated to outages, here's another thing to consider if you're trying to read the signs: https://news.ycombinator.com/edit?id=38353898 .]
Big difference between "how do we develop GPT-5" and "can we keep our current model online".