In any case, given actual usage, it should surprise nobody that everyone conflates Ethereum with money. No, your L2 chain does not qualify as an official solution.
I would argue the exact opposite. A website will be deployed to different versions of different browsers on different operating systems; a smart contract will exist on a single distributed computer. It sounds like the actual problem is people treating smart contract development as cavalierly as web app development.
1) it's a response to a user's request, i.e., not initiated by the repo author
2) it depends on the user's consent.
3) it's not automated (most of the items in the policy relate to automation).
4) the repo author has no obligation whatsoever to maintain the project; they are not paid or forced to do it.
5) if the user really wants to apply this change or disagrees with this practice, they can always fork it.
That said, I understand that it may still not feel "fair" compared to other projects that don't follow this practice, or that it can feel like "wanting to help but being asked to do some things first".
Companies already do that before accepting your pull requests [0], though, and it takes way longer than giving a star; I haven't seen a complaint about it on HN.
The board claims Altman lied. Is that it? About what? Did he consistently misinform the board about a ton of different things? Or about one really important issue? Or is this just an excuse disguising the actual issues?
I notice a lot of people in the comments talking about Altman being more about profit than about OpenAI's original mission of developing safe, beneficial AGI. Is Altman threatening that mission or disagreeing with it? It would be really interesting if this were the real issue, but if it were, I can't believe it came out of nowhere like this, and I would expect the board to have a new CEO lined up already rather than fumbling for one and settling on someone with no particular AI or ethics background.
Sutskever gets mentioned as the primary force behind firing Altman. Is this a blatant power grab? Or is Sutskever known to have strong opinions about that mission of beneficial AGI?
I feel a bit like I'm expected to divine the nature of an elephant by feeling only a trunk and an ear.
Specifically:
> Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.
> it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.
So the board did not have confidence that Sam was acting in good faith. Watch any of Ilya's many interviews; he speaks openly and candidly about his position. It is clear to me that Ilya is completely committed to the principles of the charter and sees a very real risk of sufficiently advanced AI causing disproportionate harm.
People keep trying to understand OpenAI as a hypergrowth SV startup, which it is explicitly not.
It remains to be seen, but the days when I scoffed at that idea are firmly in the past where they belong. Today we are building machines intelligent enough that they are forcing us to reconsider and redefine what intelligence is. And there is a huge amount of progress just sitting in front of us, waiting to be fed into models.