I don't think this is what really happened at all. The reason this decision was made was that 95% of employees sided with Sam on this issue, and the board never explained itself in any way. So it was Sam + 95% of employees + all investors against the board. In which case the board should lose (since they were only governing for themselves here).
I think it was, in the end, a good and fair outcome. I still think their governance structure is a decent way to approach the AGI problem; this particular board was just really bad.
Bigger picture, I don't think the "money/VC/MSFT/commercialization faction destroyed the safety/non-profit faction" story is mutually exclusive with "the board fucked up." IMO, both are true.
It doesn’t feel like anything was accomplished besides wasting 700+ people’s time, and the only thing that has changed now is Helen Toner and Tasha McCauley are off the board.
I'm not defending the board's actions, but if anything, it sounds like it may have been the reverse? [1]
> In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company... “I did not feel we’re on the same page on the damage of all this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight.” Senior OpenAI leaders, including Mr. Sutskever... later discussed whether Ms. Toner should be removed
[1] https://www.nytimes.com/2023/11/21/technology/openai-altman-...
> Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration
Team Helen acted in panic, but they believed they would win, since they were upholding the principles the org was founded on. But they never had a chance. I think only a minority of the general public truly cares about AI Safety; the rest are happy seeing ChatGPT help with their homework. I know it's easy to ridicule the sheer stupidity with which the board acted (and justifiably so), but take a moment to think of the other side. If you truly believed that superhuman AI was near, and that it could act with malice, wouldn't you try to slow things down a bit?
Honestly, I myself can't take the threat seriously. But I do want to understand it more deeply than before; maybe it isn't as baseless as I thought it to be. Hopefully, there won't be a day when Team Helen gets to say, "This is exactly what we wanted to prevent."
[1]: https://www.technologyreview.com/2020/02/17/844721/ai-openai...
But even if those types of problems don't surface anytime soon, this wave of AI is almost certainly going to be a powerful, society-altering technology, potentially more powerful than any in decades. We've all seen what can happen when powerful tech is put in the hands of companies and a culture whose only incentives are growth, revenue, and valuation -- the results can be not great. And I'm pretty sure a lot of the general public (and OpenAI staff) care about THAT.
For me, the safety/existential stuff is just one facet of the general problem of trying to align tech companies + their technology with humanity-at-large better than we have been recently. And that's especially important for landscape-altering tech like AI, even if it's not literally existential (although it may be).
My biggest frustration with larger orgs in tech is the complete misalignment on delivering value: everyone wants their little fiefdom to be just as important and "blocker worthy" as the next.
OpenAI struck me as one of the few companies where that's not being allowed to take root: the goal is to ship, and if there's an impediment to that, everyone is aligned on removing said impediment, even if it means bending your own corner's priorities.
Until this weekend there was no proof of that actually being the case, but this letter is it. The majority of the company aligned on something that risked their own skin publicly and organized a shared declaration on it.
The catalyst might be downright embarrassing, but the result makes me happy that this sort of thing can still exist in modern tech
The board may have been incompetent and shortsighted. Perhaps they should even try to bring Altman back and reform themselves out of existence. But why would the vast majority of the workforce back an open letter that fails to signal where they stand on the crucial issue: the purpose of OpenAI and their collective work? Given the stakes the AI community likes to claim are at issue in the development of AGI, that strikes me as strange and concerning.
One counter-point, just from my own personal experience, but adopting a pseudonym online has actually allowed me to be more authentic and more sociable. I've made a lot of awesome friends that I don't think I'd have made had it not been for being pseudonymous. It can be quite liberating and reduces the fear/impact of trying new things, speaking to new people, and more.
I’m still skeptical that work is a context where I’d want/need this. But it’s a thought-provoking idea!
When you have only 5 people in your vicinity, you're going to form deep relationships with them, whether or not you want to, and whether or not you like them. It will simply happen due to the fact you're going to be around these people day in, day out. 500? Well, that gets more difficult. It'll require some degree of mutual effort to form relationships, but it's still very doable, especially as you'll still be somewhat regularly bumping into the same people.
5,000,000? You will, in all probability, never see the same person twice. And even if you do, you probably won't remember them among the jungle of faces. You will never form any sort of relationship unless you aggressively go out of your way to do so. And whoa, who's this random guy trying so aggressively to be buddy-buddy with me? This dude is weird. Let me smile, nod, and find the nearest exit. And on the internet, you're around hundreds of millions to billions of people.
I think the opposite is usually (not always) true, and that many of the issues we see in today’s internet stem from the fact that we have completely lost a human connection to the people on the other end of our interactions. (You can even go further and connect this to the larger loss-of-community trends across modern society.) Developing a real relationship with someone increases empathy and trust; it leads to healthier, clearer, and more productive communication; and it generally is good for everyone’s mental health and happiness.
Personally, I want a less anonymous and more communal internet (and society).
> I’ve always found it hard to climb out of the plumbing, forget about it.
This is the crux of it, and it's true for any web project not relying on "magic" frameworks. And of course, the framework route comes with its own tradeoffs: frameworks can be inflexible, and they can be difficult to understand/debug under the hood.
There is no escaping these tradeoffs, with any framework or language. More magic means less plumbing and more initial speed, but less flexibility and potential issues as complexity/scale increases. It's all about trying to choose the best tool for the job.
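To make the tradeoff concrete, here's a rough sketch of the same trivial endpoint both ways (my own illustration, not from the article; the /hello route and handler names are made up). First the hand-rolled standard-library plumbing, then the handful of lines a "magic" framework like Flask would need, left commented out so the snippet runs with no dependencies:

    # Hand-rolled "plumbing": every step of request handling is visible and yours.
    # Standard library only; the /hello endpoint is hypothetical, for illustration.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class HelloHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/hello":
                body = b"Hello, world"
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)  # anything else: a manual 404

    # The "magic" equivalent (Flask, as one common example) -- routing, headers,
    # and response plumbing are handled for you, behind a layer you may one day
    # have to debug:
    #
    #   from flask import Flask
    #   app = Flask(__name__)
    #
    #   @app.route("/hello")
    #   def hello():
    #       return "Hello, world"

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), HelloHandler).serve_forever()

The framework version is shorter and faster to write, but when the routing misbehaves you're debugging someone else's abstraction instead of your own fifteen lines. That's the magic-vs-plumbing trade in miniature.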