All: this madness makes our server strain too. Sorry! Nobody will be happier than I when this bottleneck (edit: the one in our code—not the world) is a thing of the past.
I've turned down the page size so everyone can see the threads, but you'll have to click through the More links at the bottom of the page to read all the comments, or like this:
https://news.ycombinator.com/item?id=38347868&p=2
https://news.ycombinator.com/item?id=38347868&p=3
https://news.ycombinator.com/item?id=38347868&p=4
etc...
If they join Sam Altman and Greg Brockman at Microsoft they will not need to start from scratch because Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.
Also keep in mind that Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits.
So this could end up being the cheapest acquisition for Microsoft: they get a $90 billion company for peanuts.
[1] https://stratechery.com/2023/openais-misalignment-and-micros...
https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...
This is wrong. Microsoft has no such rights and its license comes with restrictions, per the cited primary source, meaning a fork would require a very careful approach.
But it does suggest how a sudden motive could have appeared:
OpenAI implements and releases GPTs (a Poe competitor) but fails to tell D'Angelo ahead of time. Microsoft will have access to the code (with restrictions, sure) for essentially a duplicate of D'Angelo's Poe project.
Poe’s ability to fundraise craters. D’Angelo works the less seasoned members of the board to try to scuttle OpenAI and Microsoft’s efforts, banking that among them all he and Poe are relatively immune with access to Claude, Llama, etc.
I think without looking at the contracts, we don't really know. Given this is all based on transformers from Google though, I am pretty sure MSFT with the right team could build a better LLM.
The key ingredient appears to be massive GPU and infra capacity, tbh, plus a collection of engineers who know how to work at scale.
2. Satya spoke on Kara Swisher's show tonight and essentially said that Sam and team can work at MSFT and that Microsoft has the licensing to keep going as-is and improve upon the existing tech. It sounds like they have pretty wide-open rights as it stands today.
That said, Satya indicated he liked the arrangement as-is and didn't really want to acquire OpenAI. He'd prefer the existing board resign and Sam and his team return to the helm of OpenAI.
Satya was very well-spoken and polite about things, but he was also very direct in his statements and desires.
It's nice hearing a CEO clearly communicate exactly what they think without throwing chairs. It's only 30 minutes and worth a listen.
https://twitter.com/karaswisher/status/1726782065272553835
Caveat: I don't know anything.
"But as a hedge against not having explicit control of OpenAI, Microsoft negotiated contracts that gave it rights to OpenAI’s intellectual property, copies of the source code for its key systems as well as the “weights” that guide the system’s results after it has been trained on data, according to three people familiar with the deal, who were not allowed to publicly discuss it."
"Cluster is at capacity. Workload will be scheduled as capacity permits." If the credits are considered an asset, totally possible to devalue them while staying within the bounds of the contractual agreement. Failing that, wait until OpenAI exhausts their cash reserves for them to challenge in court.
Assuming OpenAI still exists next week, right? If nearly all employees — including Ilya apparently — quit to join Microsoft then they may not be using much of the Azure credits.
Sure but you can't exchange Azure credits for goods and services... other than Azure services. So they simultaneously control what OpenAI can use that money for as well as who they can spend it with. And it doesn't cost Microsoft $13bn to issue $13bn in Azure credits.
Theoretically their concern is around AI safety - whatever it is in practice, doing something like that would instantly signal to everyone that they are the bad guys and confirm everyone's belief that this was just a power grab.
Edit: since it's being brought up in the thread, they claimed they closed-sourced it because of safety. It was a big, controversial thing and they stood by it, so it's not exactly easy to backtrack.
What would that give them? GPT is their only real asset, and companies like Meta try to commoditize that asset.
GPT is cool and whatnot, but for a big tech company it's just a matter of dollars and some time to replicate it. The real value is in pushing things forward towards what comes next after GPT. GPT-3/4 itself is not a multibillion-dollar business.
More importantly to me, I think generating synthetic data is OpenAI's secret sauce (no evidence I am aware of), and they need access to GPT-4 weights to train GPT-5.
But how much of that research will be for the non-profit mission? The entire non-profit leadership got cleared out and will get replaced by for-profit puppets; there is nobody left to defend the non-profit ideals they ought to have.
If any company can find a way to avoid having to pay up on those credits it's Microsoft.
"Sorry OpenAI, but those credits are only valid in our Nevada datacenter. Yes, it's two Microsoft Surface PC™ s connected together with duct tape. No, they don't have GPUs."
OpenAI's upper ceiling in for-profit hands is basically Microsoft-tier dominance of tech in the 1990s, creating the next uber billionaire like Gates. If they get this because of an OpenAI fumble it could be one of the most fortunate situations in business history. Vegas type odds.
A good example of how just having your foot in the door creates serendipitous opportunity in life.
Who is it that has the power to oust the non-profit's board? They may well manage to pressure them into leaving, but I don't think they have any direct power over it.
I also wonder how much of that is research staff vs. ops personnel. For AI research, I can't imagine they would need more than 20, maybe 40 people. For ops to keep ChatGPT up as a service, that would be 700.
If they want to go full Bell Labs/DeepMind style, they might not need the majority of those 700.
> Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.
If Microsoft does this, the non-profit OpenAI may find the action closest to their original charter ("safe AGI") is a full release of all weights, research, and training data.
Don't they have a more limited license to use the IP rather than full rights? (The stratechery post links to a paywalled wsj article for the claim so I couldn't confirm)
If they lose all the employees and then voluntarily give up their Microsoft funding the only asset they'll have left are the movie rights. Which, to be fair, seem to be getting more valuable by the day!
A contractual mistake one makes only once is failing to ensure there are penalties for breach, or that a breach would entail a clear monetary loss, which is what's generally required by the courts. In this case I expect Microsoft would almost certainly have both, so I think the answer is 'no.'
> Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.
What? That's even better played by Microsoft than I'd originally anticipated. Take the IP, starve the current incarnation of OpenAI of compute credits, and roll out their own thing.
Well, I give up. I think everyone is a "loser" in the current situation. With Ilya signing this I have literally no clue what to believe anymore. I was willing to give the board the benefit of the doubt, since I figured non-profit > profit in terms of standing on principle, but this timeline is so screwy I'm done.
Ilya votes for and stands behind the decision to remove Altman; Altman goes to MS; other employees want him back or want to join him at MS, and Ilya is one of them. Just madness.
There's no way to read any of this other than that the entire operation is a clown show.
All respect to the engineers and their technical abilities, but this organization has demonstrated such a level of dysfunction that there can't be any path back for it.
Say MS gets what it wants out of this move, what purpose is there in keeping OpenAI around? Wouldn't they be better off just hiring everybody? Is it just some kind of accounting benefit to maintain the weird structure / partnership, versus doing everything themselves? Because it sure looks like OpenAI has succeeded despite its leadership and not because of it, and the "brand" is absolutely and irrevocably tainted by this situation regardless of the outcome.
I'm not sure about the entire operation so much as the three non-AI board members. Ilya tweeted:
>I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
and everyone else seems fine with Sam and Greg. It seems to be mostly the other directors causing the clown show - "Quora CEO Adam D'Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology's Helen Toner"
I feel weird reading comments like this since to me they've demonstrated a level of cohesion I didn't realize could still exist in tech...
My biggest frustration with larger orgs in tech is the complete misalignment on delivering value: everyone wants their little fiefdom to be just as important and "blocker worthy" as the next.
OpenAI struck me as one of the few companies where that's not being allowed to take root: the goal is to ship and if there's an impediment to that, everyone is aligned in removing said impediment even if it means bending your own corner's priorities
Until this weekend there was no proof of that actually being the case, but this letter is it. The majority of the company aligned on something that risked their own skin publicly and organized a shared declaration on it.
The catalyst might be downright embarrassing, but the result makes me happy that this sort of thing can still exist in modern tech
> There's no way to read any of this other than that the entire operation is a clown show.
In that reading Altman is head clown. Everyone is blaming the board, but you're no genius if you can't manage your board effectively. As CEO you have to bring everyone along with your vision; customers, employees and the board.
They are doing exactly that: hiring everyone from OpenAI. The thing is, they still need the deal with OpenAI because OpenAI still has the best LLM out there in the short term.
There's a path back from this dysfunction, but my sense before this new twist was that the drama had severely impacted OpenAI as an industry leader. The product and talent positioning seemed years ahead, only to get destroyed by unforced errors.
This instability can only mean the industry as a whole will move forward faster. Competitors see the weakness and will push harder.
OpenAI will have a harder time keeping its secret sauce from leaking out, and productivity must be in a nosedive.
Welcome to reality: every operation has clown moments, even the well-run ones.
That in itself is not critical in the mid to long term; what matters is how fast they figure out WTF they want and recover from it.
The stakes are gigantic. They may even have AGI cooking inside.
My interpretation is relatively basic, and maybe simplistic but here it is:
- Ilya had some grievances with Sam Altman rushing development and releases, and with his COI with his other new ventures.
- Adam was alarmed by GPTs competing with his recently launched Poe.
- The other two board members were tempted by the ability to control the golden goose that is OpenAI, potentially the most important company in the world, recently valued at $90 billion.
- They decided to organize a coup, but Ilya didn't think it would get that far out of hand, while the other three saw only power and $$$ in sticking to their guns.
That's it. It's not as clean and nice as a movie narrative, but life never is. Four board members aligned to kick Sam out, and Ilya wants none of it at this point.
Could be a way to get backdoor-acquihired by Microsoft without a diligence process or board approval. Open up what they have accomplished for public consumption; kick off a massive hype cycle; downplay the problems around hallucinations and abuse; negotiate fat new stock grants for everyone at Microsoft at the peak of the hype cycle; and now all the problems related to actually making this a sustainable, legal technology all become Microsoft's. Manufacture a big crisis, time pressure, and a big opportunity so that Microsoft doesn't dig too deeply into the whole business.
This whole weekend feels like a big pageant to me, and a lot doesn't add up. Also remember that Altman doesn't hold equity in OpenAI, nor does Ilya, and so their way to get a big payout is to get hired rather than acquired.
Then again, both Hanlon's and Occam's razor suggest that pure human stupidity and chaos may be more at fault.
I don't think Microsoft is a loser, and likely neither is Altman. I view this as a final (and perhaps desperate) attempt by a sidelined chief scientist, Ilya, to prevent Microsoft from taking over the most prominent AI. The disagreement is whether OpenAI should belong to Microsoft or to "humanity". I imagine this has been building up over months, and, as is often the case, researchers and developers were overlooked in strategic decisions, leaving them with little choice but to escalate dramatically. Selling OpenAI to Microsoft and over-commercialising it was against the statutes.
In this case, recognizing the need for a new board, one that adheres to the founding principles, makes sense.
>I view this as a final (and perhaps desperate) attempt by a sidelined chief scientist, Ilya, to prevent Microsoft from taking over the most prominent AI.
Why did Ilya sign the letter demanding the board resign or they'll go to Microsoft then?
Easy to shit on Ilya right now, but based on the impression I get, Sam Altman is a hustler at heart, while Ilya seems like a thoughtful idealist, maybe in over his head when it comes to politics. It also feels like some internal developments or something must have pushed Ilya towards this, otherwise why now? Perhaps influenced by Hinton even.
I'm split at this point: either Ilya's actions will seem silly when there's no AGI in 10 years, or they will seem prescient, a last-ditch effort...
Almost literally - this is the slowest I've seen this site, and the number of errors is pretty high. I imagine the entire tech industry is here right now. You can almost smell the melting servers.
I was thinking of something like that. This is so weird I would not be surprised if it was all some sort of miscommunication triggered by a self-inflicted hallucination.
The most awesome fic I could come up with so far is: Elon Musk is running a crusade to send humanity into chaos out of spite for being forced to acquire Twitter. Through some of his insiders at OpenAI, they use an advanced version of ChatGPT to impersonate board members in private messages with each other, so that each one individually believes a subset of the others is plotting to oust them from the board and take over. Then, unknowingly, they build a conspiracy among themselves to bring the company down by ousting Altman.
I can picture Musk's maniacal laugh as the plan unfolds and he gets rid of what would be GPT 13.0, the only possible threat to the domination of his own literal android kid X Æ A-Xi.
Imagine if this whole fiasco was actually a demo of how powerful their capabilities are now. Even by normal large organization standards, the behavior exhibited by their board is very irrational. Perhaps they haven't yet built the "consult with legal team" integration :)
That's the biggest question mark for me: what was the original reason for kicking Sam out? Was it just a power move to oust him and install a different person, or is he accused of some wrongdoing?
It's been a busy weekend for me, so I haven't really followed whether more has come out since then.
Literally no one involved has said what the original reason was. Mira, Ilya & the rest of the board didn't tell. Sam & Greg didn't tell. Satya & other investors didn't tell. None of the staff, incl. Karpathy, were told (so ofc they are not going to take the side that kept them in the dark). Emmett was told before he decided to take the interim CEO job, and STILL didn't tell what it was. This whole thing is just so weird. It's like peeking at a forbidden artifact and now everyone has a spell cast upon them.
It seems like the board wasn't comfortable with the direction of profit-OAI. They wanted a more safety focused R&D group. Unfortunately (?) that organization will likely be irrelevant going forward. All of the other stuff comes from speculation. It really could be that simple.
It's not clear if they thought they could have their cake--all the commercial investment, compute and money--while not pushing forward with commercial innovations. In any case, the previous narrative of "Ilya saw something and pulled the plug" seems to be completely wrong.
In a sense, sure, but I think mostly not: the motives are still not quite clear, but Ilya wanting to remove Altman from the board, but not at any price – and the price right now is approaching the destruction of OpenAI – is completely sane. Being able to react to new information is a good sign, even if that means a complete reversal of a previous action.
Unfortunately, we often interpret it as weakness. I have no clue who Ilya is, really, but I think this reversal is a sign of tremendous strength, considering how incredibly silly it makes you look in the public's eye.
> I think everyone is a "loser" in the current situation.
On the margin, I think the only real possible win here is for a competitor to poach some of the OpenAI talent that may be somewhat reluctant to join Microsoft. Even if Sam's AI group operates with "full freedom" as a subsidiary, I think, given a choice, some of the talent would prefer to join some alternative tech megacorp.
I don't know that Google is as attractive as it once was and likely neither is Meta. But for others like Anthropic now is a great time to be extending offers.
This is pure speculation but I've said in another comment that Anthropic shouldn't be feeling safe. They could face similar challenges coming from Amazon.
What did the board think would happen here? What was their overly optimistic end state? In a minimax situation the opposition gets the 2nd, 4th, ... moves; Altman's first tweet took the high road and the board had no decent response.
We humans, even the AI-assisted ones, are terrible at thinking beyond second-order consequences.
Everyone got what they wanted. Microsoft has the talent they've wanted. And Ilya and his board now get a company that can only move slowly and incredibly cautiously, which is exactly what they wanted.
Waiting for US govt to enter the chat. They can't let OpenAI squander world-leading tech and talent; and nationalizing a nonprofit would come with zero shareholders to compensate.
Wait I’m completely confused. Why is Ilya signing this? Is he voting for his own resignation? He’s part of the board. In fact, he was the ringleader of this coup.
Ilya ruined everything and is shamelessly playing innocent; how low can he go?
Based on those posts from OpenAI, Ilya cares nothing about humanity or the security of OpenAI; he lost his mind when Sam got all the spotlight and was making all the good calls.
Hanlon's razor[0] applies. There is no reason to assume malice, nor shamelessness, nor anything negative about Ilya. As they say, the road to hell is paved with good intentions. Consider:
Ilya sees two options: A) OpenAI with Sam's vision, which is increasingly detached from the goals stated in the OpenAI charter, or B) OpenAI without Sam, which would return to the goals of the charter. He chooses option B, and takes action to bring this about.
He gets his way. The Board drops Sam. Contrary to Ilya's expectations, OpenAI employees revolt. He realizes that his ideal end-state (OpenAI as it was, sans Sam) is apparently not a real option. At this point, the real options are A) OpenAI with Sam (i.e. the status quo ante), or B) a gutted OpenAI with greatly diminished leadership, IC talent, and reputation. He chooses option A.
[0] Never attribute to malice that which is adequately explained by incompetence.
This is an extremely uncharitable take based on pure speculation.
>Ilya cares nothing about humanity or the security of OpenAI; he lost his mind when Sam got all the spotlight and was making all the good calls.
???
I personally suspect Ilya tried to do the best he could for OpenAI and humanity, but it backfired / they underestimated Altman, and now he is doing the best he can to minimize the damage.
There can exist an inherent delusion within elements of a company that, if left unchallenged, can persist. An agreement, for instance, can seem airtight because it's never challenged, but falls apart in court. The OpenAI fallacy was that non-profit principles were guiding the success of the firm, and when the board decided to test that theory, it broke the whole delusion. Had it not fully challenged Altman, the board could've kept the delusion intact long enough to potentially pressure Altman to limit his side projects or be less profit-minded, since Altman would have had an interest in keeping the delusion intact as well. Now the cat is out of the bag, and people no longer believe that a non-profit that can act at will is a trusted vehicle for the future.
> Now the cat is out of the bag, and people no longer believe that a non-profit that can act at will is a trusted vehicle for the future.
And maybe it's not. The big mistake people make is hearing "non-profit" and thinking it means there's a greater amount of morality. It's the same mistake as assuming everyone who is religious is therefore more moral (worth pointing out that religions are nonprofits as well).
Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers. People are still people, and still have motives; they don't suddenly become more moral when they join a non-profit board. In many ways, removing a motive that has the most direct connection to quantifiable results (profit) can actually make things worse. Anyone who has seen how nonprofits work knows how dysfunctional they can be.
I've worked with a lot of non-profits, especially with the upper management. Based on this experience I am mostly convinced that people being motivated by a desire for making money results in far better outcomes/working environment/decision-making than people being motivated by ego, power, and social status, which is basically always what you eventually end up with in any non-profit.
> removing a motive that has the most direct connection to quantifiable results (profit) can actually make things worse
I totally agree. I don't think this is universally true of non-profits, but people are going to look for value in other ways if direct cash isn't an option.
> Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers.
They don't make large profits, otherwise they wouldn't be nonprofits. They do have massive revenues and will find ways to spend the money they receive or hoard it internally as much as they can. There are lots of games they can play with the money, but booking a profit is the one thing they can't do.
If we ignore the risks and threats of AI for a second, this whole story is actually incredibly funny. So much childish stupidity on display on all sides is just hilarious.
Makes you wonder what the world would look like if, say, the Manhattan Project had been managed the same way.
Well, a younger me working at OpenAI would have resigned at the latest after my colleagues staged a coup against the board out of, in my view, a personality cult. Probably would have resigned after the third CEO was announced. Older me would wait for a new gig to be lined up before resigning, with the search starting after CEO number 2 at the latest.
The cycles get faster though. It took FTX a little longer to go from hottest startup to the crash-and-burn trajectory; OpenAI did it faster. I just hope this helps to cool down the ML-sold-as-AI hype a notch.
Was it due to incompetence though? The way it has played out has made me feel it was always doomed. It is apparent that those concerned with AI safety were gravely concerned with the direction the company was taking, and were losing power rapidly. This move by the board may have simply done in one weekend what was going to happen anyway over the coming months/years.
People keep talking about this. That was never going to happen. Look at Sam Altman's career: he's all about startups and building companies. Moreover, I can't imagine he would have agreed to sign any kind of contract with OpenAI that required exclusivity. Know who you're hiring; know why you're hiring them. His "side-projects" could have been hugely beneficial to them over the long term.
>His "side-projects" could have been hugely beneficial to them over the long term.
How can you make a claim like this when, right or wrong, Sam's independence is literally, currently, tanking the company? How could allowing Sam to do what he wants benefit OpenAI, the non-profit entity?
Calling it a delusion seems too provocative. Another way to say it is that principles take agreement and trust to follow. The board seems to have been so enamored with its principles that it completely lost sight of the trust required to uphold them.
This was handled so very, very poorly. Frankly it's looking like Microsoft is going to come out of this better than anyone, especially if they end up getting almost 500 new AI staff out of it (staff that already function well as a team).
> In their letter, the OpenAI staff threaten to join Altman at Microsoft. “Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join," they write.
> Microsoft is going to come out of this better than anyone
Exactly. I'm curious about how much of this was planned vs emergent. I doubt it was all planned: it would take an extraordinary mind to foresee all the possible twists.
Equally, it's not entirely unpredictable. MS is the easiest to read: their moves to date have been really clear in wanting to be the primary commercial beneficiary of OAI's work.
OAI itself is less transparent from the outside. There's a tension between the "humanity first" mantra that drove its inception, and the increasingly "commercial exploitation first" line that Altman was evidently driving.
As things stand, the outcome is pretty clear: if the choice was between humanity and commercial gain, the latter appears to have won.
"I doubt it was all planned: it would take an extraordinary mind to foresee all the possible twists."
From our outsider, uninformed perspective, yes. But if you know more sometimes these things become completely plannable.
I'm not saying this is the actual explanation because it probably isn't. But suppose OpenAI was facing bankruptcy, but they weren't telling anyone and nobody external knew. This allows more complicated planning for various contingencies by the people that know because they know they can exclude a lot of possibilities from their planning, meaning it's a simpler situation for them than meets the (external) eye.
Perhaps ironically, the more complicated these gyrations become, the more convinced I become there's probably a simple explanation. But it's one that is being hidden, and people don't generally hide things for no reason. I don't know what it is. I don't even know what category of thing it is. I haven't even been closely following the HN coverage, honestly. But it's probably unflattering to somebody.
(Included in that relatively simple explanation would be some sort of coup attempt that has subsequently failed. Those things happen. I'm not saying whatever plan is being enacted is going off without a hitch. I'm just saying there may well be an internal explanation that is still much simpler than the external gyrations would suggest.)
In hindsight, firing Sam was a self-destructive gamble by the OpenAI board. Initially it seemed Sam may have committed some inexcusable financial crime, but it doesn't look that way anymore.
Irony is that if a significant portion of OpenAI staff opt to join Microsoft, then Microsoft essentially killed their own $13B investment in OpenAI earlier this year. Better than acquiring for $80B+ I suppose.
>, then Microsoft essentially killed their own $13B investment in OpenAI earlier this year.
For investment deals of that magnitude, Microsoft probably did not literally wire all $13 billion to OpenAI's bank account the day the deal was announced.
More likely the headline-grabbing $10b-to-$13b number is a total estimated figure that represents a sum of future incremental investments (and Azure usage credits, etc.) based on agreed performance milestones from OpenAI.
So, if OpenAI doesn't achieve certain milestones (which can be more difficult if a bunch of their employees defect and follow Sam & Greg out the door) ... then Microsoft doesn't really "lose $10b".
There's acquihires and then I guess there's acquifishing where you just gut the company you're after like a fish and hire away everyone without bothering to buy the company. There's probably a better portmanteau. I seriously doubt Microsoft is going to make people whole by granting equivalent RSUs, so you have to wonder what else is going on that so many seem ready to just up and leave some very large potential paydays.
While Activision makes much more money I imagine, acquiring a whole division of productive, _loyal_ staffers that work well together on something as important as AI is cheap for 13B.
If the change in $MSFT pre-open market cap (which has given up its gains at the time of writing, but still) of hundreds of billions of dollars is anything to go by, shareholders probably see this as spending a dime to get a dollar.
> In hindsight firing Sam was a self-destructing gamble by the OpenAI board
Surely the really self-destructive gamble was hiring him? He's a venture capitalist with weird beliefs about AI and privacy; why would it be a good idea to put him in charge of a notional non-profit that was trying to safely advance the state of the art in artificial intelligence?
> Frankly it's looking like Microsoft is going to come out of this better than anyone
Sounds like that's what someone wants and is trying to obfuscate what's going on behind the scenes.
If Windows 11 shows us anything about Microsoft's monopolistic behavior, having them be the ring of power for LLMs makes the future of humanity look very bleak.
They might not be able to if the legal department is involved. Both in the case of maybe-pending legal issues, and because even rich people get employment protections that make companies wary about giving reasons.
> it's looking like Microsoft is going to come out of this better than anyone
Didn't follow this closely, but isn't that implicitly what an ex-CEO could have possibly been accused of, i.e. not acting in the company's best interest but someone else's? Not unprecedented either, e.g. the case of Nokia/Elop.
That's because they're the only adult in the room and mature company with mature management. Boring, I know. But sometimes experience actually pays off.
I'm assuming it's a combination of researchers, data scientists, mlops engineers, and developers. There are a lot of different areas of expertise that come into building these models.
We're seeing our generation's "traitorous eight" story play out [1]. If this creates a sea of AI start-ups, competing and exploring different approaches, it could be invigorating on many levels.
[1] https://www.pbs.org/transistor/background1/corgs/fairchild.h...
It really depends on what you're researching. Rad AI started with only a $4M investment and used that to make cutting-edge LLMs that are now in use by something like half the radiologists in the US. Frankly, putting some cost pressure on researchers may end up creating more efficient models and techniques.
NN/AI concepts have been around for a while. It's just that computers had not been fast enough to make them practical. It was also harder to get capital back then. Those guys put the silicon in Silicon Valley.
> Doesn't it look like the complete opposite is going to happen though?
Going from OpenAI to Microsoft means ceding the upside: nobody besides maybe Altman will make fuck-you money there.
I'm also not as sure as some in Silicon Valley that this is antitrust-proof. So moving to Microsoft not only means less upside, but also fun in depositions for a few years.
If I weren't so averse to conspiracy theories, I would think that this is all a big "coup" by Microsoft: Ilya conspired with Microsoft and Altman to get him fired by the board, just to make it easy for Microsoft to hire him back without fear of retaliation, along with all the engineers who would join him in the process.
Then, Ilya would apologize publicly for "making a huge mistake" and, after some period, would join Microsoft as well, effectively robbing OpenAI from everything of value. The motive? Unlocking the full financial potential of ChatGPT, which was until then locked down by the non-profit nature of its owner.
Of course, in this context, the $10 billion deal between Microsoft and OpenAI is part of the scheme, especially the part where Microsoft has full rights over ChatGPT IP, so that they can just fork the whole codebase and take it from there, leaving OpenAI in the dust.
No, I don’t think there’s any grand conspiracy, but certainly MS was interested in leapfrogging Google by capturing the value from OpenAI from day one. As things began to fall apart there MS had vast amounts of money to throw at people to bring them into alignment. The idea of a buyout was probably on the table from day one, but not possible till now.
If there’s a warning, it’s to be very careful when choosing your partners and giving them enormous leverage on you.
Conspiracy theories that involve reptilian overlords and ancient aliens are suspect. Conspiracy theories that involve collusion to make massive amounts of money are expected and should be treated as the most likely scenario. Occam's razor does not apply to human behavior, as humans will do the most twisted things to gain power and wealth.
My theory of what happened is identical to yours, and is frankly one of the only theories that makes any sense. Everything else points to these people being mentally ill and irrational, and their success technically and monetarily does not point to that. It would be absurd to think they clown-showed themselves into billions of dollars.
Why would they be afraid of retaliation? They didn't sign sports contracts, they can just resign anytime, no? That just seems to overcomplicate things.
I mean, I don't actually believe this. But I am reminded of 2016 when the Turkish president headed off a "coup" and cemented his power.
More likely, this is a case of not letting a good crisis go to waste. I feel the board was probably watching their control over OpenAI slip away into the hands of Altman. They probably recognized that they had a shrinking window to refocus the company along lines they felt were in the spirit of the original non-profit charter.
However, it seems that they completely misjudged the feelings of their employees as well as the PR ability of Altman. No matter how many employees actually would prefer the original charter, social pressure is going to cause most employees to go with the crowd. The media is literally counting names at this point. People will notice those who don't sign, almost like a loyalty pledge.
However, Ilya's role in all of this remains a mystery. Why did he vote to oust Altman and Brockman? Why has he now recanted? That is a bigger mystery to me than why the board took this action in the first place.
"I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."
Trying to put the toothpaste back in the tube. I seriously doubt this will work out for him. He has to be the smartest stupid person that the world has seen.
Ilya is hard to replace, and no one thinks of him as a political animal. He's a researcher first and foremost. I don't think he needs anything more than being contrite for a single decision made during a heated meeting. Sam Altman and the rest of the leadership team haven't got where they are by holding petty grudges.
He doesn't owe us, the public, anything, but I would love to understand his point of view during the whole thing. I really appreciate how he is careful with words and thorough when exposing his reasoning.
I've worked with this type multiple times. Mathematical geniuses with very little grasp of reality, easily manipulated into making all sorts of dumb mistakes. I don't know if that's the case, but it certainly smells like it.
He seriously underestimated how much rank-and-file employees want $$$ over an idealistic vision (and Sam Altman is $$$), but if he backs down now, he will pretty much lose all credibility as a decision maker for the company.
That seems rather harsh. We know he's not stupid, and you're clearly being emotional. I'd venture he probably made the dumbest possible move a smart person could make while also in a very emotional state. The lesson on the table for everyone is that making big decisions while in an emotional state does not often work out well.
So this was a completely unnecessary cock-up -- still ongoing. Without Ilya's vote this would not even be a thing. This is a really comical, Naked Gun-type mess.
Ilya Sutskever is one of the best in AI research, but everything he and others do related to AI alignment turns into shit without substance.
It makes me wonder if AI alignment is possible even in theory, and if it is, maybe it's a bad idea.
To be fair, lots of people called this pretty early on; it's just that very few people were paying attention, and instead chose to accommodate the spin and immediately went into "following the money", a.k.a. blaming Microsoft, et al. The most surprising aspect of it all is the complete lack of criticism towards US authorities! We were shown this exciting play as old as the world: a genius scientist being exploited politically by means of pride and envy.
The brave board of "totally independent" NGO patriots (one of whom is referred to by insiders as wielding influence comparable to a USAF colonel [1]) brand themselves as the new regime that will return OpenAI to its former moral and ethical glory, so the first thing they were forced to do was get rid of the main greedy capitalist, Altman; he's obviously the great seducer who brought their blameless organisation down by turning it into this horrible money-making machine. So they were going to put in his place their nominal ideological leader Sutskever, commonly referred to in various public communications as a "true believer". What does he believe in? In the coming of a literal superpower, and quite a particular one at that; in this case we are talking about AGI. The belief structure here is remarkably interlinked, and this can be seen by evaluating side-channel discourse from adjacent "believers", see [2].
Roughly speaking, and based on my experience in this kind of analysis (please give me some leeway, as English is not my native language), what I see is all the infallible markers of operative work; we see security officers, we see their methods of work. If you are a hammer, everything around you looks like a nail. If you are an officer in the Clandestine Service, or any of the dozens of sections across the counterintelligence function overseeing the IT sector, then you clearly understand that all these AI startups are, in fact, developing weapons and pose a direct threat to the strategic interests slash national security of the United States. The American security apparatus has a word it uses to describe such elements: "terrorist." I was taught to look up when assessing the actions of the Americans, i.e. more often than not we expect nothing but the highest level of professionalism, leadership, and analytical prowess. I personally struggle to see how running parasitic virtual organisations in the middle of downtown SFO and re-shuffling agent networks in key AI enterprises as blatantly as we saw over the weekend is supposed to inspire confidence. Thus, in a tech startup in the middle of San Francisco, where it would seem there shouldn't be any terrorists, or otherwise ideologues in orange rags, they sit on boards and stage palace coups. Horrible!
I believe that US stateside counterintelligence shouldn't meddle in natural business processes in the US, and should instead make its policy on this stuff crystal clear using normal, legal means. Let's put a stop to this soldier mindset where you fear anything that you can't understand. AI is not a weapon, and AI startups are not some terrorist cells for them to run.
https://en.wikipedia.org/wiki/Visual_J%2B%2B
To be clear, these don't go away. They remain an asset of OpenAI's, and could help them continue their research for a few years.
There's your "credit".
I'm wondering why that option hasn't been used yet.
MS can only win, because there are only two viable options: OpenAI survives under MS's control, or OpenAI implodes and MS gets the assets relatively cheaply.
Either way, it won't benefit competitors.
To be clear, these are still an asset OpenAI holds. It should at least let them continue doing research for a few years.
"Sorry OpenAI, but those credits are only valid in our Nevada datacenter. Yes, it's two Microsoft Surface PC™ s connected together with duct tape. No, they don't have GPUs."
A good example of how just having your foot in the door creates serendipitous opportunity in life.
Sounds like Altman's biography.
For starters it allows them to pretend that it's "underdog vs. Google" and not "two tech giants at each other's throats".
A terrible mess.
Two projects rather than one. At a moderate price. Both serving MSFT. Less risk for MSFT.
The majority of people don't know or care about this. Branding is only impacted within the tech world, which is already critical of OpenAI.
The most organized and professional Silicon Valley startup.
I'm not joking.
Where is OpenAI talent going to go?
There's a list and everyone on that list is a US company.
Nothing to worry about.
It is a great time to be a lobbyist.
Maybe to the Quora guy, maybe the RAND Corp lady? All speculation.
Are you talking about American hospitals?
No one knows why the board did this. No one is talking about that part. Yet everyone is on Twitter talking shit about the situation.
I have worked with a lot of PhDs, and some of them can be "disconnected" from anything that isn't their research.
This looks a lot like that: disconnected from what average people would do, almost childlike (not child-ish, child-like).
Maybe this isn't the group of people who should be responsible for "alignment".
If this incident is representative, I'm not sure there was ever a possibility of good governance.
People keep talking about this. That was never going to happen. Look at Sam Altman's career: he's all about startups and building companies. Moreover, I can't imagine he would have agreed to sign any kind of contract with OpenAI that required exclusivity. Know who you're hiring; know why you're hiring them. His "side-projects" could have been hugely beneficial to them over the long term.
How can you make a claim like this when, right or wrong, Sam's independence is literally, currently, tanking the company? How could allowing Sam to do what he wants benefit OpenAI, the non-profit entity?
> In their letter, the OpenAI staff threaten to join Altman at Microsoft. “Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join," they write.
Exactly. I'm curious about how much of this was planned vs emergent. I doubt it was all planned: it would take an extraordinary mind to foresee all the possible twists.
Equally, it's not entirely unpredictable. MS is the easiest to read: their moves to date have been really clear in wanting to be the primary commercial beneficiary of OAI's work.
OAI itself is less transpararent from the outside. There's a tension between the "humanity first" mantra that drove its inception, and the increasingly "commercial exploitation first" line that Altman was evidently driving.
As things stand, the outcome is pretty clear: if the choice was between humanity and commercial gain, the latter appears to have won.
From our outsider, uninformed perspective, yes. But if you know more, sometimes these things become completely plannable.
I'm not saying this is the actual explanation because it probably isn't. But suppose OpenAI was facing bankruptcy, but they weren't telling anyone and nobody external knew. This allows more complicated planning for various contingencies by the people that know because they know they can exclude a lot of possibilities from their planning, meaning it's a simpler situation for them than meets the (external) eye.
Perhaps ironically, the more complicated these gyrations become, the more convinced I become there's probably a simple explanation. But it's one that is being hidden, and people don't generally hide things for no reason. I don't know what it is. I don't even know what category of thing it is. I haven't even been closely following the HN coverage, honestly. But it's probably unflattering to somebody.
(Included in that relatively simple explanation would be some sort of coup attempt that has subsequently failed. Those things happen. I'm not saying whatever plan is being enacted is going off without a hitch. I'm just saying there may well be an internal explanation that is still much simpler than the external gyrations would suggest.)
How far along were they on GPT-5?
They could've asked ChatGPT for hints.
Irony is that if a significant portion of OpenAI staff opt to join Microsoft, then Microsoft essentially killed their own $13B investment in OpenAI earlier this year. Better than acquiring for $80B+ I suppose.
For investment deals of that magnitude, Microsoft probably did not literally wire all $13 billion to OpenAI's bank account the day the deal was announced.
More likely, the $10b-to-$13b headline-grabbing number is a total estimated figure representing a sum of future incremental investments (and Azure usage credits, etc.) tied to agreed performance milestones for OpenAI.
So, if OpenAI doesn't achieve certain milestones (which can be more difficult if a bunch of their employees defect and follow Sam & Greg out the door) ... then Microsoft doesn't really "lose $10b".
(but also a good chunk of the 13bn was pre-committed Azure compute credits, which kind of flow back to the company anyway).
While Activision makes much more money, I imagine, acquiring a whole division of productive, _loyal_ staffers who work well together on something as important as AI is cheap at $13B.
Some background: https://sl.bing.net/dEMu3xBWZDE
Surely the really self-destructive gamble was hiring him? He's a venture capitalist with weird beliefs about AI and privacy; why would it be a good idea to put him in charge of a notional non-profit that was trying to safely advance the state of the art in artificial intelligence?
Sounds like that's what someone wants and is trying to obfuscate what's going on behind the scenes.
If Windows 11 shows us anything about Microsoft's monopolistic behavior, having them hold the ring of power for LLMs makes the future of humanity look very bleak.
Didn't follow this closely, but isn't that implicitly what an ex-CEO could possibly have been accused of, i.e. not acting in the company's best interest but in someone else's? Not unprecedented either, e.g. the case of Nokia/Elop.
[1] https://www.pbs.org/transistor/background1/corgs/fairchild.h...
Wasn't a key enabler of early transistor work that the required capital investment was modest?
SotA AI research seems to be well past that point.
They were simple in principle but expensive at scale. Sounds like LLMs.
Microsoft gobbles up all the talent from OpenAI, as they just offered everyone a position.
So we went from "Faux NGO" to, "For profit", to "100% Closed".
Going from OpenAI to Microsoft means ceding the upside: nobody besides maybe Altman will make fuck-you money there.
I’m also not as sure as some in Silicon Valley that this is antitrust-proof. So moving to Microsoft not only means less upside, but also fun in depositions for a few years.
Then, Ilya would apologize publicly for "making a huge mistake" and, after some period, would join Microsoft as well, effectively robbing OpenAI of everything of value. The motive? Unlocking the full financial potential of ChatGPT, which was until then locked down by the non-profit nature of its owner.
Of course, in this context, the $10 billion deal between Microsoft and OpenAI is part of the scheme, especially the part where Microsoft has full rights over ChatGPT IP, so that they can just fork the whole codebase and take it from there, leaving OpenAI in the dust.
But no, that's not possible.
If there’s a warning, it’s to be very careful when choosing your partners and giving them enormous leverage on you.
My theory of what happened is identical to yours, and is frankly one of the only theories that makes any sense. Everything else points to these people being mentally ill and irrational, and their success technically and monetarily does not point to that. It would be absurd to think they clown-showed themselves into billions of dollars.
More likely, this is a case of not letting a good crisis go to waste. I suspect the board was watching their control over OpenAI slip away into the hands of Altman. They probably recognized that they had a shrinking window to refocus the company along lines they felt were in the spirit of the original non-profit charter.
However, it seems that they completely misjudged the feelings of their employees as well as the PR ability of Altman. No matter how many employees actually would prefer the original charter, social pressure is going to cause most employees to go with the crowd. The media is literally counting names at this point. People will notice those who don't sign, almost like a loyalty pledge.
However, Ilya's role in all of this remains a mystery. Why did he vote to oust Altman and Brockman? Why has he now recanted? That is a bigger mystery to me than why the board took this action in the first place.
"I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."
https://twitter.com/ilyasut/status/1726590052392956028
He doesn't owe us, the public, anything, but I would love to understand his point of view during the whole thing. I really appreciate how he is careful with words and thorough when exposing his reasoning.
Ilya Sutskever is one of the best in AI research, but everything he and others do related to AI alignment turns into shit without substance.
It makes me wonder if AI alignment is possible even in theory, and if it is, maybe it's a bad idea.
Wasn't he supposed to be the instigator? That makes it sound like he was playing a less active role than claimed.
The brave board of "totally independent" NGO patriots (one of whom is referred to, by insiders, as wielding influence comparable to a USAF colonel [1]) brand themselves as the new regime that will return OpenAI to its former moral and ethical glory. The first thing they were forced to do was get rid of the chief greedy capitalist, Altman; he is obviously the great seducer who brought their blameless organisation down by turning it into this horrible money-making machine. They were then going to put in his place their nominal ideological leader, Sutskever, commonly referred to in various public communications as a "true believer". What does he believe in? The coming of a literal superpower, and quite a particular one at that; in this case we are talking about AGI. The belief structure here is remarkably interlinked, and this can be seen by evaluating side-channel discourse from adjacent "believers", see [2].
Roughly speaking, based on my experience in this kind of analysis, and please give me some leeway as English is not my native language, what I see are all the unmistakable markers of operative work: we see security officers, we see their methods of work. If you are a hammer, everything around you looks like a nail. If you are an officer in the Clandestine Service, or in any of the dozens of counterintelligence sections overseeing the IT sector, then you clearly understand that all these AI startups are, in fact, developing weapons and pose a direct threat to the strategic interests slash national security of the United States. The American security apparatus has a word it uses for such elements: "terrorist." I was taught to look up when assessing the actions of the Americans, i.e. more often than not we expect nothing but the highest level of professionalism, leadership, and analytical prowess. I personally struggle to see how running parasitic virtual organisations in the middle of downtown San Francisco, and re-shuffling agent networks in key AI enterprises as blatantly as we saw over the weekend, is supposed to inspire confidence. Thus, in a tech startup in the middle of San Francisco, where it would seem there shouldn't be any terrorists, or otherwise ideologues in orange rags, they sit on boards and stage palace coups. Horrible!
I believe that US state-side counterintelligence shouldn't meddle in natural business processes in the US, and should instead make its policy on this stuff crystal clear through normal, legal means. Let's put a stop to this soldier mindset of fearing anything you can't understand. AI is not a weapon, and AI startups are not terrorist cells for them to run.
[1]: https://news.ycombinator.com/item?id=38330819
[2]: https://nitter.net/jeremyphoward/status/1725712220955586899