CEO of a company (or worse, a non-profit!) and a member of its board creates another, for-profit company (in partial secrecy/lack of transparency) to which the non-profit would eventually pay a lot of money.
This is almost a fraudulent level of siphoning non-profit money.
Btw, this is hilarious - regular employees have non-competes in their contracts (sometimes void/illegal, depending on the local jurisdiction) and breaching them is an immediately fireable offense (sometimes leading to more severe consequences). You work on a small thing on the side? Better be careful, ask your manager/HR, risk it getting taken over by the company (luckily, IIUC this part is mostly illegal now in all jurisdictions that "matter" for tech).
But sitting on multiple boards where you have much more room and possibilities for creating conflicts of interest and damaging the company? All fine and common!
There's an even bigger problem here: if he were just making money, that would be a normal-sized problem. If he were just making a supplier for OA, heck, that might be a good thing on net for OA; a subsidiary doing hardware might be justifiable given the importance of securing hardware.
But that's not what he's doing.
Creating such an independent hardware startup comes off as basically directly opposed to OA's safety mission - GPUs are one of the biggest limitations to creating better-than-OA models! The best customers would be the ones who are most sabotaging the OA mission. (You can't run the UN and also head an arms manufacturer dreaming of democratizing access to munitions.)
How is it opposed to OpenAI's goals to have a friendly company selling them chips instead of NVIDIA, which is, at best, a neutral company?
Software is always more important than hardware. All the big players have access to NVIDIA chips today and yet only OpenAI has ChatGPT, proving the point.
OpenAI probably wishes someone would create competition to NVIDIA and this is Sam Altman trying to make that happen himself, since no one else seems to have been able to pull it off so far.
A conflict of interest would be OpenAI buying Altman's chips at inflated prices or something like that.
But if he makes a bunch of money selling OpenAI chips and OpenAI gets better/cheaper chips, that seems like pure win-win and totally free of ethical conflict.
Agree with this take. Sam previously stated (e.g., on Lex's podcast) that a slow takeoff soon was his goal, to give society maximum time to adjust and to prevent an unexpected fast takeoff from a capability overhang. I bought his take when he said it.
Going off and trying to accelerate hardware capabilities (especially with an outside company that presumably sells these processors on the open market) seems indefensible in this framework unless you have already solved alignment, which they clearly have not.
> basically directly opposed to OA's safety mission
OpenAI does not have a mission to ensure that the entire industry is safe.
And if anyone actually believes that, then they are frankly delusional, because right now AI is a geopolitical fight between nation states. Is OpenAI really going to have any ability to control what China or the UAE do with their LLMs? No.
> This sounds like an insane conflict of interest.
It would be, but: this company hasn't been formed yet, and this sketch does not justify the haste with which they kicked him out. There were all kinds of boxes that needed to be checked before they could do that without risking damage to OpenAI; this is a founder and the CEO we're speaking of.
Besides that: stupidly enough, the contract that Sam has with OpenAI does not have a non-compete in it (this has been confirmed by multiple sources now, so I take it as true), and I don't see how it would directly harm OpenAI, other than that his attention might be diluted and that it should be clear which cap he is wearing. But until that company is formed and Sam names himself CEO of it (or takes up some other high-profile role), it leaves him so many outs that it only makes the board look like bumbling idiots. Because now he can simply say, "I would only be an investor", and that would be that, just like the rest of the investors in OpenAI (and, notably, some members of the board) who have conflicts of interest at least as large.
So if this was it, they're in even more trouble than they were before, because now it is the board's conflicts of interest that will be held up to the light, and those are not necessarily smaller.
stupidly enough the contract that Sam has with OpenAI does not have a non-compete
Does it really matter? If the behavior appears to be in conflict with OpenAI, and the board doesn’t like it, then that’s enough to let him go. It doesn’t need to be a contract violation, he just wasn’t doing the job they wanted him to do.
Do you think there's a problem with him pitching his hardware side-gig to investors who approached him with an interest in OpenAI's tender offer? That gives the appearance of quid pro quo, with the hope that those investors get to skip the line at the next OpenAI investment opportunity. Imagine someone high up in the Nvidia sales org telling a customer they are all out of H200 graphics cards for the half, but they have a fantastic timeshare investment opportunity they are selling on the side while the customer waits for the next batch.
Why do you keep calling him a founder? Is my history wrong, or did he jump from leading THIS very website to OpenAI when they came through for a round of sweet hn bucks?
> regular employees have non-competes in their contracts (sometimes void/illegal, depending on the local jurisdiction)
Non-competes mostly take effect after employment ends, so Altman's situation is a bit more akin to my company's "only job" policy. It means I can't have a side hustle or alternative means of making money. Enforceable or not, while you're employed you can be fired for anything.
I don't see a conflict at all, but IANAL. His biggest issue is GPU cost. He was hustling to vertically integrate and knew that the non-profit nature of OpenAI would not allow for it. So he started thinking about creating a separate company to handle that, with exclusivity of some kind. It makes perfect sense. No idea if he could get the board to go for it, of course, but that's clearly something he would have needed to do to make it a reality. And it's completely within his remit to make these kinds of bets. This board may have seen this as an overstep, but all they needed to do was tell him no. I'm sure he would have made a persuasive argument had they let him. This board seems completely out of touch with the reality of running a company like this. And GPUs have nothing to do with AI safety; that's like saying a faster neuron makes a person evil or good.
You seem to be arguing that it isn't a conflict with the non-profit charter. What people are saying is that it's a conflict of interest. It's called self-dealing and one of the most common forms of conflict of interest.
Assume someone had been working for a startup, as CEO, for free. For years. That someone had cut himself off from any way to be compensated for his work directly, whether due to altruism, poor planning, or the result of a negotiation.
At a later time, other ventures that this person had been propping up started failing, and a money injection was deemed necessary. It would not be surprising if that person then tried to leverage his unpaid work position and monetize it.
Recent tweets from Sam, "go for a full value of my stock", seem to point in this direction.
I think the reverse is true. If you're a grunt, no one really cares and it's not worth it to enforce. If you're C-suite and leave to start a competitor, you can be sure you'll be hearing from company lawyers. Similarly, mandatory gardening leave generally grows with your title.
I sincerely don't give a shit and am just kibitzing but: why would this be a conflict of interest at all? One of OpenAI's biggest strategic concerns is getting out from under Nvidia. OpenAI is never going to do hardware; they don't even want to rack servers. The ultimate customer for a new AI chip would be Microsoft, not OpenAI. OpenAI is research and software. Hardware is a complement, not a competitor.
You seriously don't see a conflict of interest with a board member and CEO creating a separate company that sells to the company he is the CEO of? That is the definition of conflict of interest.
A more usual method is to have the non-profit contract with a for-profit "management consulting firm" that does the running of the real things. The non-profit can take in money and pay the for-profit for providing the thing the donation was earmarked for; all legal and fine. The same people can be employed by both companies.
Any profits the for-profit arm makes can then be donated back to the non-profit, for whatever financial advantage there is in holding the money. Lather, rinse, repeat.
I think Altman hails from a kind of hustle culture that may not go great with AI safety:
>I just saw Sam Altman speak at YCNYC and I was impressed. I have never actually met him or heard him speak before Monday, but one of his stories really stuck out and went something like this:
> "We were trying to get a big client for weeks, and they said no and went with a competitor. The competitor already had a terms sheet from the company we were trying to sign up. It was real serious.
> We were devastated, but we decided to fly down and sit in their lobby until they would meet with us. So they finally let us talk to them after most of the day.
> We then had a few more meetings, and the company wanted to come visit our offices so they could make sure we were a 'real' company. At that time, we were only 5 guys. So we hired a bunch of our college friends to 'work' for us for the day so we could look larger than we actually were. It worked, and we got the contract."
> I think the reason why PG respects Sam so much is he is charismatic, resourceful, and just overall seems like a genuine person.
> We then had a few more meetings, and the company wanted to come visit our offices so they could make sure we were a 'real' company. At that time, we were only 5 guys. So we hired a bunch of our college friends to 'work' for us for the day so we could look larger than we actually were. It worked, and we got the contract."
That strategy was tried by Barry Minkow, with "ZZZZ Best", the fake building-maintenance company fraud.[1] He did prison time for that.
There was another AI start-up, one backed and supported by various conservative politicians from Germany and Austria, that did the same thing, hiring people to look busy when customers (bad) and investors (a lot worse) showed up at the offices: Augustus Intelligence.
At best, this is fake it till you make it. At worst, it is fraud. The tiny, tiny difference is whether that next investor shows up or not. Just ask FTX.
I do not respect people resorting to that kind of thing.
AND a complete Moby Dick move. You're going to suddenly get up and compete with all the chip designers and beat them at their job? Computer chips are literally the most complex products in the world. It is an astonishing development that the most profitable company in the world was able to make chips, let alone chips that beat Intel and Nvidia in some key metrics. It took Apple more than a decade of shipping chips in phones to be able to move up to desktop-class.
The board was right to get rid of a guy who would rather hunt a white whale than do his job.
> SoftBank and others had hoped to be part of this deal, one person said, but were put on a waitlist for a similar deal at a later date. In the interim, Altman urged investors to consider his new ventures, two people said.
Major mistake by OpenAI's board not to mention this, if it actually did play a role in removing him; public opinion now is that it was a coup over AI safety stuff. The OpenAI board seems to have had no PR plan.
The claim is that he's starting different AI hardware companies on the side, not that he's doing it under OpenAI's umbrella. It's more like if Sundar tried to get funding for some side hustles while talking with potential customers or contractors of Google, without disclosing to Google's board ahead of time.
Every single one of Elon's ventures is a conflict of interest with the others. Simply seeking money for a chip venture is small compared to pulling Tesla engineers out to work on X, the stuff with SolarCity, etc.
> On Sunday, a person familiar with the board stood by the board’s explanation on Friday that cited candor. This person said there was no one precipitating incident but rather a mounting loss of trust over communications with Altman. The person declined to offer examples.
> According to one person with knowledge of the situation, Altman had been attempting to raise as much as $100bn from investors in the Middle East and SoftBank founder Masayoshi Son to establish a new microchip development company which could compete with Nvidia and TSMC. Those efforts, in the weeks before his sacking, had caused concerns on the board, this person said.
> Two days after OpenAI’s board of directors fired him, Sam Altman is expected to join executives at the company’s San Francisco headquarters Sunday as they push the board to reinstate him, Interim CEO Mira Murati told staff on Sunday morning, according to people with knowledge of the situation.
> This person said there was no one precipitating incident but rather a mounting loss of trust over communications with Altman.
Yet it was apparently so time-sensitive that there was no time for discussion with key stakeholders. This board appears to have been run by the Keystone Kops.
The only stakeholder of note here not on the board is Microsoft, which not only is a strong proponent of the kinds of commercialization and acceleration the board is supposedly concerned about, but is also one of the single most powerful non-government organizations on the planet.
I’m not saying the board’s decision to remove Altman is or isn’t a good idea, but if they did decide it needed to be done, running it by Microsoft ahead of time seems like one of the single dumbest things they could have done.
How could you even expect any ROI on a 100B investment? It seems like there’s way too much money sloshing around. People have no idea what to do with it.
How can you not make money, especially if you want to be in the business of making and selling chips? If you have that kind of money, you had better be getting returns. In fact, the more money you have, the easier it should be to get returns. Such is capitalism. The main risk is technical and in execution; it's tough to bootstrap a chip maker.
Heh, "compete with Nvidia and TSMC". The FT apparently has no idea what they're talking about, unless they were really planning to get into leading-edge semiconductor manufacturing?!
I think they were really planning to do that. They wouldn't need $100B otherwise. A fabless chip company would cost 10-100x less, even if they ordered millions of chips right away. You just wouldn't be able to spend THAT amount of money. Cutting-edge lithography, on the other hand...
If you’re pitching a deal of epic scale (“AI is here and we’ll be at the center of its hardware”), you’d include roadmaps to complete independence and vertical integration. People want to see numbers for the big win before they place their bets.
So you’re probably not launching there, but of course you use it as the first horizon you’ll try sailing towards.
That’s the reading I am getting: MS lawyers really screwed up this one. What kind of lawyer worth their salt would let the company put $10BN into an investment in a “capped profit” entity where you’re supposed to treat “investments as donations”? The more I read about it, the more I keep wondering what the lawyers must have been on to not make more noise about this earlier.
> Adam D’Angelo is working on ChatGPT competitor Poe.
> Helen Toner & Tasha McCauley are working with OpenPhilanthropy/Anthropic on GovAI board (former board member Holden Karnofsky left when his wife started to get involved with Anthropic)
These seem like much larger conflicts of interest (working with direct competitors) than working on a company that this company might one day want to purchase a product from.
The board was anemic, lost people and didn’t replace them, and then was small enough that you could fire the chairman of the board and then remove the CEO in a Friday-night massacre.
Those conflicts disappear when you look at the OpenAI nonprofit through its aspiration to steward safe AGI no matter where it arises. It’s meant to transcend specific interests and look more like a technocratic standards body, an IEEE precursor for AI, than a corporation seeking a private edge. It should have representation of diverse interests.
And that’s exactly why its ownership of an apparent VC rocketship in its profit-making entity was becoming so problematic. Suddenly, its critically important transcendence and independence was being threatened by its subsidiary’s outsized and rapid success.
It’s really important to keep in mind that the OpenAI board represents fundamentally different interests than those of the subsidiary developing and selling ChatGPT products. That they may have murdered the latter is not necessarily an accident or mistake.
This is pretty much directly in violation of OpenAI's charter, not to mention how concerning it is to have a CEO putting so much effort into side hustles.
Can you point to what's in violation? I don't see anything at all.
And "side hustles" are not inherently concerning at all. There are quite a number of well-known tech CEOs running multiple companies at once. The only things that matter are a) that the board is aware and feels it is paying the right amount for the proportion of the CEO's time that it's getting, and b) that the side ventures don't involve a conflict of interest. (And generally speaking, being a potential supplier for your company isn't a conflict; to the contrary, it's a pattern that's been successfully followed before.)
I love it how when a CEO runs 7 companies at once, he's seen as a titan of tech, master of multitasking, hulk of hustle. Everyone is in awe and points to him as an example of awesomeness. But if I, a worker bee, were to get a second full-time job as software engineer at TechCompany2 unrelated to the work of my TechCompany1, I would be a traitor, disloyal, distracted, double dipping, deserving of being fired.
So, if a CEO was talking to investors about a new venture (or a set of new ventures) in a closely related field, perhaps even sharing some details about the existing company with those potential investors, would it be fair to say that said CEO was not being entirely candid with the board?
Those are some big "if" statements. I think we would have to see what information was disclosed by Altman to the board with respect to these discussions before making judgements. The precise details of such disclosures may be a matter of legal interpretation.
Or, more likely, all of this will get settled behind closed doors and we will never really know.
>We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Concentration of power, mostly. No power greater than a fab for that right now.
Not a lawyer, but I don’t think one is allowed to take a company’s intellectual property or research results and use them for a side venture without proper and timely disclosure, even if there is no direct conflict of interest at the time it happens.
Unless you specifically have a contract that allows you to avoid disclosures, or a specific agreement that transfers the intellectual property or research results to you.
The obvious argument is that since we have not solved alignment, accelerating AGI is unsafe. I’d wager that is the general line of reasoning from Sutskever and the board.
“We are growing quickly enough and a GPU shortage gives everyone more time to catch up on safety research” seems like a logically consistent position to me.
If anything, it seems to me that unlocking OpenAI and the broader market from what’s been an effective monopoly through more chip competition would be inline with the charter.
> Technical leadership
> To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient.
I’m surprised that nobody has brought up that Altman was busy this summer dumping his nuclear startup Oklo onto the public market via a SPAC. He seems to be a talented fundraiser and dealmaker and not so much of a singleminded visionary who wants to spend the next decade perfecting the details of AGI. Maybe the right place for him is focusing on his next few startups.
He literally founded a defunct social network and then hung out in YC world before falling into AI at the right time to raise money. There isn't a single element of magic or secret sauce to Sam Altman in the AI space. So yeah. Have him mess around in more startups and raise cash for them.
I think he's also talented at assembling great teams. He has a deep personal network of incredibly talented people to draw from.
It's really hard to pull the right people together and sweet talk them into walking away from all the other incredible things they're working on, but it might be the single most important thing that an executive at a new, growing company has to do.
For some reason this reminds me of the olden days of railroad tycoons. Like those old-timey movies where a bunch of land is bought from the common folks for super cheap, because the guys building the railways know where the stations are going to be placed.
This also helps me make sense of why so many SV types were jumping to Altman's defense on X/Twitter. They are probably salivating at the potential investments that were in back-room discussions involving Altman and feel the need to prop those up. For example, there may have been literal billions agreed upon in principle but not yet signed across several ventures. The tarnishing of his reputation could scuttle much more than OpenAI if this report is correct.
If I were a pure AI researcher and my entire goal in life were to build AGI, I would loathe being in a company engaging in such shenanigans. I'm not for or against either side in this dispute, but I can totally appreciate why each side would want to retain control of what has been built so far. It's just that there is a fork in the road here and only one vehicle.
This is still common practice in places where zoning is not fully developed. Politicians (or their friends) buy cheap ag land, and a few years down the road it’s rezoned and the politician can sell it for a massive profit. In my country there’s a me-too-like cascade happening right now around this very issue.
> Altman had been traveling to the Middle East to fundraise for the project, which was code-named Tigris, the people said. The OpenAI chief executive officer planned to spin up an AI-focused chip company that could produce semiconductors that compete against those from Nvidia Corp., which currently dominates the market for artificial intelligence tasks. Altman’s chip venture is not yet formed and the talks with investors are in the early stages, said the people, who asked not to be named as the discussions were private.
Using your celebrity status as CEO of a non-profit to raise money for multiple related for-profits does suggest you may be more dedicated to personal financial motivations than to the non-profit's mission and charter.
No indication this was the trigger event, but reasonable that the board took notice of all these things. Even viewing this in the best possible, most innocent light, it still looks bad and Sam should have known better.
Not exactly. Ilya allegedly disliked Sam fundraising side hustles off his profile at OpenAI. It sounds like a confluence of factors precipitated the ousting.
If this was simply reframed as "by creating your own GPUs, you would radically lower OpenAI's costs in a material way", it’d be more understandable.
What a complete circus.
How so? Seeking more efficiency and cost-effectiveness absolutely does not conflict with a non-profit mission.
https://news.ycombinator.com/item?id=3048944
The crypto WorldCoin thing is probably a bigger example.
[1] https://en.wikipedia.org/wiki/Barry_Minkow
What a mess.
JFC
That's not a fact. It's your assumption.
And not a particularly good one, given that Microsoft is OpenAI's partner and is providing compute services at no cost.
Better make it a long one.
Sam got fired, so obviously not fine.
And so everything needs to be caveated with "this is an assumption".
The other story was later constructed in forums and media.
This is no different from Google designing and manufacturing their own chips (the TPU, or tensor processing unit).
And it has been rumored that D'Angelo helped a lot with this coup because he did it before at Quora.
Despite Musk saying it was terrible.
https://www.theinformation.com/articles/after-elon-musk-bash...
https://www.wsj.com/tech/openai-trying-to-get-sam-altman-bac...
https://www.ft.com/content/466bf00a-1e76-4255-be2b-3c1d37508...
https://www.theinformation.com/articles/openai-execs-invite-...
Yet it was apparently so time-sensitive that there was no time for discussion with key stakeholders. This board appears to have been run by the Keystone Kops.
I’m not saying the board’s decision to remove Altman is or isn’t a good idea, but if they did decide it needed to be done, running it by Microsoft ahead of time seems like one of the dumbest things they could have done.
Why not give it away just to have a foot in the door? That tech could come in very handy one day.
So you’re probably not launching there, but of course you use it as the first horizon you’ll try sailing towards.
This is the worst possible response. They tanked MS’ stock price for this? MS will have their heads.
> Helen Toner & Tasha McCauley are working with OpenPhilanthropy/Anthropic on GovAI board (former board member Holden Karnofsky left when his wife started to get involved with Anthropic)
These seem like much larger conflicts of interest (working with direct competitors) than working on a company that this one might one day want to purchase a product from.
The board was anemic, lost people and didn’t replace them, and was then small enough that you could fire the chairman of the board and then remove the CEO in a Friday night massacre.
https://loeber.substack.com/p/a-timeline-of-the-openai-board
And that’s exactly why its ownership of an apparent VC rocketship in its profit-making entity was becoming so problematic. Suddenly, its critically important transcendence and independence were being threatened by its subsidiary’s outsized and rapid success.
It’s really important to keep in mind that the OpenAI board represents fundamentally different interests than those of the subsidiary developing and selling ChatGPT products. That they may have murdered the latter is not necessarily an accident or mistake.
I'm not surprised the board had to act.
And "side hustles" are not inherently concerning at all. There are quite a number of well-known tech CEO's running multiple companies at once. The only things that matter are a) that the board is aware and feel like they're paying the right amount for the proportion of the CEO's time that they're getting, and b) that they don't involve a conflict of interest. (And generally speaking, being a potential supplier for your company isn't a conflict -- to the contrary, it's a pattern that's been successfully followed before.)
What a double standard.
So, if a CEO was talking to investors about a new venture (or a set of new ventures) in a closely related field, perhaps even sharing some details about the existing company with those potential investors, would it be fair to say that said CEO was not being entirely candid with the board?
Those are some big "if" statements. I think we would have to see what information was disclosed by Altman to the board with respect to these discussions before making judgements. The precise details of such disclosures may be a matter of legal interpretation.
Or, more likely, all of this will get settled behind closed doors and we will never really know.
Concentration of power, mostly. No power greater than a fab for that right now.
Unless you specifically have a contract that allows you to avoid disclosures. Or have a specific agreement that transfers intellectual property or research results to you.
Does that hold true for a nonprofit, though? "Oops, revenues exceed expenses this year; we better order some supplies to avoid being profitable..."
If they really want to ensure that AI benefits all of humanity, they have to keep it affordable to everyone.
Scaling off of expensive GPUs, and thereby making the service cheap and generally available to more humans, sounds sensible.
“We are growing quickly enough and a GPU shortage gives everyone more time to catch up on safety research” seems like a logically consistent position to me.
A watchdog sounded an alert.
If anything, it seems to me that unlocking OpenAI and the broader market from what’s been an effective monopoly through more chip competition would be inline with the charter.
> Technical leadership
> To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient.
LOL!
It's really hard to pull the right people together and sweet-talk them into walking away from all the other incredible things they're working on, but it might be the single most important thing an executive at a new, growing company has to do.
This also helps explain to me why so many SV types were jumping to Altman's defense on X/Twitter. They are probably salivating at the potential investments that were in back-room discussions involving Altman and feel the need to prop those up. For example, there may have been literal billions agreed upon in principle but not yet signed across several ventures. The tarnishing of his reputation could scuttle much more than OpenAI if this report is correct.
If I were a pure AI researcher and my entire goal in life was to build AGI, I would be loath to be at a company engaging in such shenanigans. I'm not for or against either side in this dispute, but I can totally appreciate why each side would want to retain control of what has been built so far. It's just that there is a fork in the road here and only one vehicle.
No indication this was the trigger event, but reasonable that the board took notice of all these things. Even viewing this in the best possible, most innocent light, it still looks bad and Sam should have known better.