hinterlands · a month ago
It is fairly rare to see an ex-employee put a positive spin on their work experience.

I don't think this makes OpenAI special. It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.

Look at it this way: the flip side of "incredibly bottoms-up" from this article is that there are people who feel rudderless because there is no roadmap or a thing carved out for them to own. Similarly, the flip side of "strong bias to action" and "changes direction on a dime" is that everything is chaotic and there's no consistent vision from the executives.

This cracked me up a bit, though: "As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things. It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!

harmonic18374 · a month ago
I would never post any criticism of an employer in public. It can only harm my own career (just as being positive can only help it).

Given how vengeful Altman can reportedly be, this goes double for OpenAI. This guy even says they scour social media!

Whether subconsciously or not, one purpose of this post is probably to help this guy’s own personal network along; to try and put his weirdly short 14-month stint in the best possible light. I think it all makes him look like a mark, which is desirable for employers, so I guess it is working.

m00x · a month ago
Calvin cofounded Segment, which was acquired for $3.2B. He's not your typical employee.
44520297 · a month ago
>This guy even says they scour social media!

Every, and I mean every, technology company scours social media. Amazon has a team that monitors social media posts to make sure employees, their spouses, their friends don’t leak info, for example.

Deleted Comment

rrrrrrrrrrrryan · a month ago
> There's no Bond villain at the helm. It's good people rationalizing things.

I worked for a few years at a company that made software for casinos, and this was absolutely not the case there. Casinos absolutely have fully shameless villains at the helm.

stickfigure · a month ago
Interesting. A year ago I joined one of the larger online sportsbook/casinos. In terms of talent, employees are all over the map (both good and bad). But I have yet to meet a villain. Everyone here is doing the best they can.
lucianbr · a month ago
> We are all very good and kind and not at all evil, trust us if we do say so ourselves

Do these people have even minimal self-awareness?

darkmarmot · a month ago
VGT?
Bratmon · a month ago
> It is fairly rare to see an ex-employee put a positive spin on their work experience

Much more common for OpenAI, because you lose all your vested equity if you talk negatively about OpenAI after leaving.

rvz · a month ago
Absolutely correct.

There is a reason why there was cult-like behaviour on X amongst the employees in support of bringing back Sam as CEO when he was kicked out by the OpenAI board of directors at the time.

"OpenAI is nothing without its people"

All of "AGI" (which in practice meant the Lamborghinis, penthouses, villas and mansions for the employees) was on the line, and on hold, if that equity went to 0, or if they were denied the chance to sell it for openly criticizing OpenAI after they left.

fragmede · a month ago
The "Silenced No More Act" (SB 331), effective January 1, 2022, in California, where OpenAI is based, limits non-disparagement clauses and retribution by employers, likely making that illegal in California, but I am not a lawyer.
tedsanders · a month ago
OpenAI never enforced this, removed it, and admitted it was a big mistake. I work at OpenAI and I'm disappointed it happened but am glad they fixed it. It's no longer hanging over anyone's head, so it's probably inaccurate to suggest that Calvin's post is positive because he's trying to protect his equity from being taken. (though of course you could argue that everyone is biased to be positive about companies they own equity in, generally)
torginus · a month ago
Here's what I think - while Altman was busy trying to convince the public that AGI was coming in the next two weeks, with vague tales that were equally ominous and utopian, he (and his fellow leaders) were extremely busy trying hard to turn OpenAI into a product company with some killer offerings, and from the article, it seems they were rather good and successful at that.

Considering the high stakes, money, and undoubtedly the egos involved, the writer might have acquired a few bruises along the way, or might have lost out on some political infighting (remember how they mentioned they built multiple Codex prototypes; it must've sucked to see some other people's version chosen instead of your own).

Another possible explanation is that the writer just had enough - enough money to last a lifetime, just started a family, made his mark on the world, and was no longer compelled (or able) to keep up with methed-up fresh college grads.

matco11 · a month ago
> remember how they mentioned they built multiple Codex prototypes, it must've sucked to see some other people's version chosen instead of your own

Well it depends on people’s mindset. It’s like doing a hackathon and not winning. Most people still leave inspired by what they have seen other people building, and can’t wait to do it again.

…but of course not everybody likes to go to hackathons

sensanaty · a month ago
> There's no Bond villain at the helm

We're talking about Sam Altman here, right, the dude behind Worldcoin? A literal Bond-villainesque biological data harvesting scheme?

ben_w · a month ago
It might be one of the cover stories for a Bond villain, but they have lots of mundane cover stories. Which isn't to say you're wrong; I've learned not to trust my gut about the category (rich business leaders) to which he belongs.

I'd be more worried about the guy who tweeted “If this works, I’m treating myself to a volcano lair. It’s time.” and more recently wore a custom T-shirt that implies he's like Vito Corleone.

teiferer · a month ago
There is lots of rationalizing going on in his article.

> I returned early from my paternity leave to help participate in the Codex launch.

10 years from now, the significance of having participated in that launch will be ridiculously small (unless you tell yourself it was a pivotal moment of your life, even if it objectively wasn't), whereas those first weeks with your newborn will never come back. Kudos to your partner though.

baggachipz · a month ago
The very fact that he did this exemplifies everything that is wrong about the tech industry and our current society. He's praising himself for this instead of showing remorse for his failure as a parent.
usaar333 · a month ago
Odd take. OpenAI gives 5 months of paternity leave and the author is independently wealthy. What difference does it make between spending more time with a 4-month-old vs a 4-year-old? Or is your prescription that people should just retire once they have children?
Aurornis · a month ago
> It is fairly rare to see an ex-employee put a positive spin on their work experience.

The opposite is true: Most ex-employee stories are overly positive and avoid anything negative. They’re just not shared widely because they’re not interesting most of the time.

I was at a company that turned into the most toxic place I had ever worked due to a CEO who decided to randomly get involved with projects, yell at people, and even fire some people on the spot.

Yet a lot of people wrote glowing stories about their time at the company on blogs or LinkedIn because it was beneficial for their future job search.

> It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.

For the posts that make HN I rarely see it that way. The recent trend is for passionate employees who really wanted to make a company work to lament how sad it was that the company or department was failing.

eddythompson80 · a month ago
> The opposite is true: Most ex-employee stories are overly positive and avoid anything negative. They’re just not shared widely because they’re not interesting most of the time.

Yeah I had to re-read the sentence.

The positive "Farewell" post is indeed the norm. Especially so from well known, top level people in a company.

bigiain · a month ago
> It is fairly rare to see an ex-employee put a positive spin on their work experience.

Sure, but this bit really makes me wonder if I'd like to see what the writer is prepared to do to other people to get to his payday:

"Nabeel Quereshi has an amazing post called Reflections on Palantir, where he ruminates on what made Palantir special. I wanted to do the same for OpenAI"

iLoveOncall · a month ago
Well, as a reminder, OpenAI has a non-disparagement clause in their contracts, so the only thing you'll ever see from former employees is positive feedback.
Spooky23 · a month ago
I’m not saying this about OpenAI, because I just don’t know. But Bond villains exist.

Usually the level 1 people are just motivated by power and money to an unhealthy degree. The worst are true believers in something. Even something seemingly mild.

Wilder7977 · a month ago
Allow me to propose a different rationalization: "yes I know X might damage some people/society, but it was not me who decided, and I get lots of money to do it, which someone else would do if not me."

I don't think people who work on products that spy on people, create addiction or worse are as naïve as you portrayed them.

ben_w · a month ago
> It is fairly rare to see an ex-employee put a positive spin on their work experience.

FWIW, I have positive experiences about many of my former employers. Not all of them, but many of them.

yen223 · a month ago
Same here. If I wrote an honest piece about my last employer, it would sound very similar in tone to what was written in this article
Timwi · a month ago
> everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions!

The operative word is “trying”. You can “try” to do the right thing but find yourself restricted by various constraints. If an employee actually did the right thing (e.g. publish the weights of all their models, or shed light on how they were trained and on what), they get fired. If the CEO or similarly high-ranking exec actually did the right thing, the company would lose out on profits. So, rationalization is all they can do. “I'm trying to do the right thing, but.” “People don't see the big picture because they're not CEOs and don't understand the constraints.”

newswasboring · a month ago
> It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!

This is a great insight. But if we think a bit deeper about why that happens, I land on this: nobody is forcing anyone to do the right thing. Our governments and laws are geared more towards preventing people from doing the wrong thing, which of course can only be identified once someone has done the wrong thing and we can see the consequences and prove that it was indeed the wrong thing. Sometimes we fail to even do that.

saghm · a month ago
We already have bad guys doing X right now (literally, not the placeholder variable)

Deleted Comment

curious_cat_163 · a month ago
> It is fairly rare to see an ex-employee put a positive spin on their work experience.

I liked my jobs and bosses!

tptacek · a month ago
Most posts of the form "Reflections on [Former Employer]" on HN are positive.
TeMPOraL · a month ago
I agree with your points here, but I feel the need to address the final bit. This is not aimed personally at you, but at the pattern you described - specifically, at how it's all too often abused:

> Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!

Those are the easy cases, and correspondingly, you don't see much of those - or at least few are paying attention to companies talking like that. This is distinct from saying "X is going to directly benefit the society, and we're merely charging for it as fair compensation of our efforts, much like a baker charges you for the bread" or variants of it.

This is much closer to what most tech companies try to argue, and the distinction seems to escape a lot of otherwise seemingly sharp people. In threads like this, I surprisingly often end up defending tech companies against such strawmen - because come on, if we want positive change, then making up a simpler but baseless problem, calling it out, and declaring victory, isn't helping to improve anything (but it sure does drive engagement on-line, making advertisers happy; a big part of why press does this too on a routine basis).

And yes, this applies to this specific case of OpenAI as well. They're not claiming "LLMs are going to indirectly benefit the society because we're going to get rich off them, and then use that money to fund lots of nice things". They're just saying, "here, look at ChatGPT, we believe you'll find it useful, and we want to keep doing R&D in this direction, because we think it'll directly benefit society". They may be wrong about it, or they may even knowingly lie about those benefits - but this is not trickle-down economics v2.0, SaaS edition.

vlovich123 · a month ago
> That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things

I mean, that's a leap. There could be a Bond villain who sets up incentives such that the people who rationalize the way he wants are the ones who get promoted / have their voices amplified. Just because individual workers generally seem like they're trying to do the best thing doesn't mean the organization isn't set up specifically and intentionally to make certain kinds of "shady" decisions.

energy123 · a month ago

> It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.

It's also performance art to acquire attention.

humbleferret · a month ago
What a great post.

Some points that stood out to me:

- Progress is iterative and driven by a seemingly bottom up, meritocratic approach. Not a top down master plan. Essentially, good ideas can come from anywhere and leaders are promoted based on execution and quality of ideas, not political skill.

- People seem empowered to build things without asking permission there, which seems like it leads to multiple parallel projects with the promising ones gaining resources.

- People there have good intentions. Despite public criticism, they are genuinely trying to do the right thing and navigate the immense responsibility they hold.

- Product is deeply influenced by public sentiment, or more bluntly, the company "runs on twitter vibes."

- The sheer cost of GPUs changes everything. It is the single factor shaping financial and engineering priorities. The expense for computing power is so immense that it makes almost every other infrastructure cost a "rounding error."

- I liked the take of the path to AGI being framed as a three horse race between OpenAI (consumer product DNA), Anthropic (business/enterprise DNA), and Google (infrastructure/data DNA), with each organisation's unique culture shaping its approach to AGI.

mikae1 · a month ago
> I liked the take of the path to AGI being framed as a three horse race between OpenAI (consumer product DNA), Anthropic (business/enterprise DNA), and Google (infrastructure/data DNA)

Wouldn't want to forget Meta which also has consumer product DNA. They literally championed the act of making the consumer the product.

ceroxylon · a month ago
Jokes aside, it was interesting to me that the 'three horse race' excluded a company that is announcing 5GW data centers the size of Manhattan[0].

[0] https://techcrunch.com/2025/07/14/mark-zuckerberg-says-meta-...

spoaceman7777 · a month ago
And don't forget xAI, which has MechaHitler in its product DNA
smath · a month ago
lol, I almost missed the sarcasm there :)
pyman · a month ago
"Hey, Twitter vibes are a metric, so make sure to mention the company on Twitter if you want to be heard."

Twitter is a one-way communication tool. I doubt they're using it to create a feedback loop with users, maybe just to analyse their sentiment after a release?

The entire article reads more like a puff piece than an honest reflection. Those of us who live outside the US are more sceptical, especially after everything revealed about OpenAI in the book Empire of AI.

lz400 · a month ago
Engineers thinking they're building god is such a good marketing strategy. I can't overstate it. It's even difficult to be rational about it. I don't actually believe it's true; I think it's pure hype and LLMs won't even approximate AGI. But this idea is sort of half-immune to criticism or skepticism: you can always respond with "but what if it's true?". The stakes are so high that the potentially infinite payoff snowballs over any probabilities. 0.00001% multiplied by infinity is an infinite EV, so you have to treat it like that. Best marketing, it writes itself.
maest · a month ago
Similar to Pascal's wager, which pretty much amounts to "yeah, God is probably not real, _but what if it is_? The utility of getting into heaven is infinite (and hell is infinitely negative), so any non-zero probability that God is real should make you be religious, just in case."

https://en.wikipedia.org/wiki/Pascal%27s_wager#Analysis_with...

ievans · a month ago
This is explicitly not the conclusion Pascal drew with the wager, as described in the next section of the Wikipedia article: "Pascal's intent was not to provide an argument to convince atheists to believe, but (a) to show the fallacy of attempting to use logical reasoning to prove or disprove God..."
billmcneale · a month ago
I am convinced!

Which god should I believe in, though? There are so many.

And what if I pick the wrong god?

adamgordonbell · a month ago
See also Pascal's mugging, from Eliezer Yudkowsky. Some would say AI Safety research is a form of Pascal's mugging.

https://en.wikipedia.org/wiki/Pascal%27s_mugging

tim333 · a month ago
I know you're not being serious, but building AGI, as in something that thinks like a human (proven possible by the millions of humans wandering all over the place), is very different from "building god".
tartoran · a month ago
Except that humans cannot read millions of books (if not all books ever published) and keep track of massive amounts of information. AGI presupposes some kind of superhuman capabilities that no one human has. Whether that's ever accomplished remains to be seen; I personally am a bit skeptical that it will happen in our lifetime, but think it's possible in the future.
lz400 · a month ago
Not sure about that one. I do agree with the AI bros that, _if_ we build AGI, ASI looks inevitable shortly after, at least a "soft ASI". Because something with the agency of a human but all the knowledge of the world at its fingertips, the ability to replicate itself, to think orders of magnitude faster and in parallel on many things at the same time, and to modify itself... really looks like it won't stay comparable to a baseline human for long.
uh_uh · a month ago
> I don't actually believe it's true, I think it's pure hype and LLMs won't even approximate AGI.

Not sure how you can say this so confidently. Many would argue they're already pretty close, at least on a short time horizon.

J_Shelby_J · a month ago
Many would argue that you should give them a billion dollars funding, and that’s what they’re doing when they say AGI is close.

There is a decade + worth of implementation details and new techniques to invent before we have something functionally equivalent to Jarvis.

lz400 · a month ago
I mean, they're wrong? LLMs don't have agency, don't learn, don't do anything except react to prompts really.

Deleted Comment

ivape · a month ago
"but what if it's true?"

There was nothing hypothesized about next-token prediction and emergent properties (they didn't know for sure that scale would allow it to generalize). "What if it's true" is part of the LLM story; there is a mystical element here.

echoangle · a month ago
> There was nothing hypothesized that next-token prediction and scale could show emergent properties.

Nobody ever hypothesized it before it happened? Hard to believe.

pyman · a month ago
100%
bhl · a month ago
> The Codex sprint was probably the hardest I've worked in nearly a decade. Most nights were up until 11 or midnight. Waking up to a newborn at 5:30 every morning. Heading to the office again at 7a. Working most weekends.

There's so much compression / time-dilation in the industry: large projects are pushed out and released in weeks; careers are made in months.

Worried about how sustainable this is for its people, given the risk of burnout.

alwa · a month ago
If anyone tried to demand that I work that way, I’d say absolutely not.

But when I sink my teeth into something interesting and important (to me) for a few weeks’ or months’ nonstop sprint, I’d say no to anyone trying to rein me in, too!

Speaking only for myself, I can recognize those kinds of projects as they first start to make my mind twitch. I know ahead of time that I'll have no gas left in the tank by the end, and I plan accordingly.

Luckily I’ve found a community who relate to the world and each other that way too. Often those projects aren’t materially rewarding, but the few that are (combined with very modest material needs) sustain the others.

ishita159 · a month ago
I think senior folks at OpenAI realized this is not sustainable and hence took the "wellness week".
bradyriddle · a month ago
I'd be curious to know about this community. Is this a formal group or just the people that you've collected throughout your life?
ZYbCRq22HbJ2y7 · a month ago
I think any reasonable manager would appreciate that sort of interest in a project and would support it, not demand it.
ml-anon · a month ago
This guy who is already independently wealthy chose working 16-17h 7 days a week instead of raising his newborn child and thanks his partner for “childcare duties”. Pretty much tells you everything you need to know.
yawnr · a month ago
Yeah as someone who has a young child this entire post made me feel like I was taking crazy pills. Working this much with a newborn is toxic behavior and if a company demands it then it is toxic culture. And writing about it as anything but that feels like some combination of Stockholm syndrome, being a workaholic, and marketing spin.

Being passionate about something and giving yourself to a project can be amazing, but you need to have the bandwidth to do it without the people you care about suffering because of that choice.

ec109685 · a month ago
The independently wealthy part is strange, but you only live once, so folks should find the path that satisfies them.

As for caring for a newborn, that is the least impactful moment you have with your kids.

Seems like he made a reasonable trade-off and will be there for all their formative years.

tptacek · a month ago
It's not sustainable, at all, but if it's happening just a couple times throughout your career, it's doable; I know people who went through that process, at that company, and came out of it energized.
6gvONxR4sf7o · a month ago
I couldn't imagine asking my partner to pick up that kind of childcare slack. Props to OP's wife for doing so, and I'm glad she got the callout at the end, but god damn.
maxnevermind · a month ago
I think Altman said on the Lex Fridman podcast that he works 8 hours, the first 4 being the most productive, and that he doesn't believe CEOs claiming they work 16 hours a day. A weird contrast to what's described in the article. This confirms my theory that there are two types of people in startups: founders and everybody else; the former are there to potentially make a lot of money, and the latter are there to learn and leave.
datadrivenangel · a month ago
The author left after 14 months at OpenAI, so that seems like a burnout duration.
pyman · a month ago
It's worse than that. Lots of power struggles and god-like egos. Altman called one of the employees "Einstein" on Twitter, some think they were chosen to transcend humanity, others believe they're at war with China, some want to save the world, others want to see it burn, and some just want their names up there with Gates and Jobs.

This is what ex-employees said in Empire of AI, and it's the reason Amodei and Kaplan left OpenAI to start Anthropic.

fhub · a month ago
He references childcare and paternity leave in the post and he was a co-founder in a $3B acquisition. To me it seems it is a time-of-life/priorities decision not a straight up burnout decision.
kaashif · a month ago
Working a job like that would literally ruin my life. There's no way I could have time to be a good husband and father under those conditions, some things should not be sacrificed.
Rebelgecko · a month ago
How did they have any time left to be a parent?
ambicapter · a month ago
> I returned early from my paternity leave to help participate in the Codex launch.

Obvious priorities there.

ZYbCRq22HbJ2y7 · a month ago
They were showered with assets for being a lucky individual in a capital-driven society; time is interchangeable with wealth, as evidenced throughout history.

This guy is young. He can experience all that again, if it is that much of a failure, and he really wants to.

Sure, there are ethical issues here, but really, they can be offset by restitution, let's be honest.

baggachipz · a month ago
How did they have any time to create the child in the first place?

Deleted Comment

sashank_1509 · a month ago
My hot take is that burnout doesn't have much to do with raw hours spent working. It has a lot more to do with a sense of momentum and autonomy. You can work extremely hard, 100-hour weeks six months in a row, on the right team, and still feel highly energized at the end of it. But if it feels like wading through a swamp, you will burn out very quickly, even if it's just 50 hours a week. I also find ownership has a lot to do with burnout.
matwood · a month ago
And if the work you're doing feels meaningful and you're properly compensated. Ask people to work really hard to fill out their 360 reviews and they should rightly laugh at you.
ip26 · a month ago
At some level of raw hours, your health and personal relationships outside work both begin to wither, because there are only 24 hours in a day. That doesn’t always cause burnout, but it provides high contrast - what you are sacrificing.
catoc · a month ago
Exactly this. It's not about hours spent (at least that's not a good metric; working less will benefit a burned-out person, but the hours were not the root cause). The problem is lack of autonomy, lack of control over things you care about deeply. If those go out the window, the fire burns out quickly. Imho when this happens it's usually because a company becomes too big, and the people in control lack subject matter expertise, have lost contact with the people that drive the company, and instead are guided by KPIs and the rules they enforce, grasping for that feeling of being in control.
petesergeant · a month ago
2024 my wife and I did a startup together. We worked almost every hour we were awake, 16-18 hours a day, 7 days a week. We ate, we went for an hour's walk a day, the rest of the time I was programming. For 9 months. Never worked so hard in my life before. And, not a lick of burnout during that time, not a moment of it, where I've been burned out by 6 hour work days at other organizations. If you're energized by something, I think that protects you from burnout.
apwell23 · a month ago
> You can work extremely hard 100 hour weeks six months in a row, in the right team and still feel highly energized at the end of it.

Something about youth being wasted on the young.

parpfish · a month ago
i hope that's not a hot take, because it's 100% correct.

people conflate the terms "burnout" and "overwork" because they seem semantically similar, but they are very different.

you can fix overwork with a vacation. burnout is a deeper existential wound.

my worst bout of burnout actually came in a cushy job where i was consistently underworked but felt no autonomy or sense of purpose for why we were doing the things we were doing.

laidoffamazon · a month ago
I don't really have an opinion on working that much, but working that much and having to go into the office to spend those long hours sounds like torture.
suncemoje · a month ago
I’m sure they’ll look back at it and smile, no?
ojr · a month ago
for the amount of money they are given, that is relatively easy; normal people are paid way less in harder jobs, for example, working in an Amazon warehouse or doing door-to-door sales, etc.
babelfish · a month ago
This is what being a wartime company looks like

Dead Comment

beebmam · a month ago
Those that love the work they do don't burn out, because every moment working on their projects tends to be joyful. I personally hate working with people who hate the work they do, and I look forward to them being burned out
procinct · a month ago
Sure, but this schedule is like, maybe 5 hours of sleep per night. Other than an extreme minority of people, there’s no way you can be operating on that for long and doing your best work. A good 8 hours per night will make most people a better engineer and a better person to be around.
chrisfosterelli · a month ago
"You don't really love what you do unless you're willing to do it 17 hours a day every day" is an interesting take.

You can love what you do but if you do more of it than is sustainable because of external pressures then you will burn out. Enjoying your work is not a vaccine against burnout. I'd actually argue that people who love what they do are more likely to have trouble finding that balance. The person who hates what they do usually can't be motivated to do more than the minimum required of them.

lvl155 · a month ago
I am not saying that's easy work, but most motivated people do this. And if you're conscious of this, that probably means you viewed it more as a job than as your calling.
rvz · a month ago
> Worried about how sustainable this is for its people, given the risk of burnout.

Well given the amount of money OpenAI pays their engineers, this is what it comes with. It tells you that this is not a daycare or for coasters or for the faint of heart, especially at a startup at the epicenter of AI competition.

There is now a massive queue of lots of desperate 'software engineers' ready to kill for a job at OpenAI and will not tolerate the word "burnout" and might even work 24 hours to keep the job away from others.

For those who love what they do, the word "burnout" doesn't exist for them.

cylemons · a month ago
For these prestigious companies it makes sense, work hard for a few years then retire early.
a_bonobo · a month ago
>Thanks to this bottoms-up culture, OpenAI is also very meritocratic. Historically, leaders in the company are promoted primarily based upon their ability to have good ideas and then execute upon them. Many leaders who were incredibly competent weren't very good at things like presenting at all-hands or political maneuvering. That matters less at OpenAI than it might at other companies. The best ideas do tend to win.

This sets off my red flags: companies that say they are meritocratic, flat etc., often have invisible structures that favor the majority. Valve Corp is a famous example for that where this leads to many problems, see https://www.pcgamer.com/valves-unusual-corporate-structure-c...

>It sounds like a wonderful place to work, free from hierarchy and bureaucracy. However, according to a new video by People Make Games (a channel dedicated to investigative game journalism created by Chris Bratt and Anni Sayers), Valve employees, both former and current, say it's resulted in a workplace two of them compared to The Lord of The Flies.

darkoob12 · a month ago
I think in this structure people only think locally; they are not concerned with the company's overall mission and do not actively consider the morality of that mission or whether they are following it.
DrewADesign · a month ago
In my experience, front-line and middle managers will penalize workers that stray from their explicit goals because they think something else more readily contributes to the company’s mission.
samultio · a month ago
Kind of sounds like a traditional public company is a constitutional monarchy: not always the best, but at least there's a balance of interests. A private company, by contrast, could be an autocracy or an oligarchy where sucking up and playing tribal politics is the only way to survive.
mcosta · a month ago
Are you implying that a top-down corporate structure is better?
NicuCalcea · a month ago
If not better, certainly more honest.
tptacek · a month ago
This was good, but the thing I most wanted to know about building new products inside OpenAI is how, and how much, LLMs are involved in their building process.
edanm · a month ago
Yes, same, that's a fascinating question that people are pretty tight-lipped about.

Note that he was specifically on the team launching OpenAI's version of a coding agent, so I imagine the numbers before that product existed could be very different from the numbers after.

wilkomm · a month ago
That's a good question!
girvo · a month ago
Same! I was really hoping it was discussed. I’m assuming “lots, but it depends on what you’re working on”?
vFunct · a month ago
He describes 78,000 public pull requests per engineer over 53 days. LMAO. So it's likely 99.99% LLM written.

Lots of good info in the post, surprised he was able to share so much publicly. I would have kept most of the business process info secret.

Edit: NVM. That 78k pull requests is for all users of Codex, not all engineers of Codex.

reducesuffering · a month ago
"Safety is actually more of a thing than you might guess if you read a lot from Zvi or Lesswrong. There's a large number of people working to develop safety systems. Given the nature of OpenAI, I saw more focus on practical risks (hate speech, abuse, manipulating political biases, crafting bio-weapons, self-harm, prompt injection) than theoretical ones (intelligence explosion, power-seeking). That's not to say that nobody is working on the latter, there's definitely people focusing on the theoretical risks. But from my viewpoint, it's not the focus."

This paragraph doesn't make any sense. If you read a lot of Zvi or LessWrong, a misaligned intelligence explosion is exactly the safety risk you're thinking of! So readers' "guesses" are actually right that OpenAI isn't really following Sam Altman's own warning:

"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could."[0]

[0] https://blog.samaltman.com/machine-intelligence-part-1

troupo · a month ago
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing.

To quote Jonathan Nightingale from his famous thread on how Google sabotaged Mozilla [1]:

--- start quote ---

The question is not whether individual sidewalk labs people have pure motives. I know some of them, just like I know plenty on the Chrome team. They’re great people. But focus on the behaviour of the organism as a whole. At the macro level, google/alphabet is very intentional.

--- end quote ---

Replace that with OpenAI.

[1] https://archive.is/2019.04.15-165942/https://twitter.com/joh...