Many employers want employees to act like cult members. But when the going gets tough, those employees are often the first laid off, and the least prepared for it.
Employers, you can't have it both ways. As an employee, don't get fooled.
During the first-ever layoff at $company in 2001, part of the dotcom implosion, one of my coworkers who got whacked complained that it didn’t make sense, as he was one of the company’s biggest boosters and believers.
It was supremely interesting to me that he thought the company cared about that at all. I couldn’t get my head around it. He was completely serious; he kept arguing that his loyalty was an asset. He was much more experienced than me (I was barely two years into working).
In hindsight, I think it is true that companies value that in a way. I’ve come to appreciate people who just stick it out for a while. I try to make sure their comp makes it worth their while. They are so much less annoying to deal with than the assholes who constantly bitch and moan about doing what they’re paid for.
But as a personal strategy, it’s a poor one. You should never love or be loyal to something that can’t love you back.
The one and ONLY way I've ever seen "company" loyalty rewarded in any way is if you have a DIRECT relationship with a top level senior manager (C-suite). They will specifically protect you if they truly believe you are on "their side" and you are at their beck and call.
Companies appreciate loyalty… as long as it doesn’t cost them anything. The moment you ask for more money or they need to reduce the workforce, all of that goes out the window.
I think loyalty has value to the company but not as much as people think. To simplify it, multiple things contribute to "value" and loyalty is just a small part of it.
100% agree. There is no reason for employees to be loyal to a company. LLM building is not some religious work. It’s machine learning on big data. Always do what is best for you, because companies don’t act like loyal humans; they act like large organizations that aren’t always fair or rational or logical in their decisions.
To a lot of tech leadership, it is. The belief in AGI as a savior figure is a driving motivator. Just listen to how Altman, Thiel or Musk talk about it.
Exactly. Though you can learn a lot about an employer by how it has conducted layoffs. Did they cut profits and management salaries and attempt to reassign people first? Did they provide generous payouts to laid off employees?
If the answer to any of these questions is no then they're not worth committing to.
The only case where you can be both an employee and a missionary is, well, if you are an actual missionary, or working at a charity/NGO etc. trying to help people or animals.
I think there's more to work than just taking home a salary. Not equally true among all professions and times in your life. But most jobs I took were for less money with questionable upside. I just wanted to work on something else or with different people.
The best thing about work is the focus on whatever you're doing. Maybe you're not saving the world, but it's great to go in and have one goal that everyone works towards. And you get excited when you see your contributions make a difference or you build a great product. You can laugh and say I was part of a 'cult', but it sure beats working a miserable job for just a slightly higher paycheck.
Especially for an organization like OpenAI that completely twisted its original message in favor of commercialization. The entire missionary bit is BS trying to get people to stay out of a sense of... what, exactly?
I'm all for having loyalty to people and organizations that show the same. Eventually it can and will shift. I've seen management changed out from over me more times than I can count at this point. Don't get caught off guard.
It's even worse in the current dev/tech job market, where wages are being pushed down to around 2010 levels. I've been working two jobs just to keep up with expenses, since I've been unable to match my more recent prior income. One ended recently, and I'm looking for a new second job.
That’s because you don’t believe in, or realize, the mission of the product and its impact on society. Whereas if you work at Microsoft, you are just working to make MS money, as they are like a giant machine.
That said, it seems like every worker can be replaced. Lost stars are replaced by new stars.
Big picture, I'll always believe we dodged a huge bullet in that "AI" got big in a nearly fully "open-source," maybe even "post open-source" world. The fact that Meta is, for now, one of the good guys in this space (purely strategically and unintentionally) is fortunate and almost funny.
Another funny, possibly sad, coincidence is that the licenses that made open source what it is will probably be absolutely useless going forward, because, as recent precedent has shown, companies can train on what they have legally gained access to.
On the other hand, AGPL continues to be the future of F/OSS.
Open source may be necessary but it is not sufficient. You also needed the compute power and architecture discoveries and the realisation that lots of data > clever feature mapping for this kind of work.
A world without open source may still have given birth to 2020s AI, but probably at a slower pace.
Don't make the mistake of anthropomorphizing Mark Zuckerberg. He didn't open source anything because he's a "good guy"; he's just commoditizing the complement.
The "good guy" is a competitive environment that would render Meta's AI offerings irrelevant right now if it didn't open source.
The reason Machiavellianism is stupid is that the grand ends the means aim to obtain often never come to pass, but the awful things done in pursuit of them certainly do. So the motivation behind those means doesn't excuse them. And I see no reason the inverse of this doesn't hold true. I couldn't care less if Zuckerberg thinks open sourcing Llama is some grand scheme to let him take over the world and become its god-king emperor. In reality, that almost certainly won't happen. But what certainly will happen is the world getting free and open source access to LLM systems.
When any scheme involves some grand long-term goal, I think a far more naive approach to behaviors is much more appropriate in basically all cases. There's a million twists on that old quote that 'no plan survives first contact with the enemy', and with these sort of grand schemes - we're all that enemy. Bring on the malevolent schemers with their benevolent means - the world would be a much nicer place than one filled with benevolent schemers with their malevolent means.
> Don't make the mistake of anthropomorphizing Mark Zuckerberg
Considering the rest of your comment it's not clear to me if "anthropomorphizing" really captures the meaning you intended, but regardless, I love this
Oh, absolutely -- I definitely meant that in the least complimentary way possible :). In a way, it's just the triumph of the ideals of "open source" -- sharing is better for everyone, even Zuck, selfishly.
We would have to know their intent to really know if they fit a general understanding of "the good guys."
It's very possible that China is open sourcing LLMs because it's currently in their best interest to do so, not because of some moral or principled stance.
It's really hard to tell. If instructions like the current extreme trend of "What a great question!" and all the crap that forces one to put
* Do not use emotional reinforcement (e.g., "Excellent," "Perfect," "Unfortunately").
* Do not use metaphors or hyperbole (e.g., "smoking gun," "major turning point").
* Do not express confidence or certainty in potential solutions.
into one's instructions, so that the model doesn't treat you like a child, a teenager, or a narcissist craving flattery, can really affect an individual's mood and way of thinking, then those Chinese models might as well have baked in something similar, but targeted at reducing the productivity of certain individuals or weakening their belief in Western culture.
I am not saying they are doing that, but they could be doing it sometime down the road without us noticing.
Do we know if Meta will stick to its strategy of making weights available (which isn't open source, to be clear) now that they have a new "superintelligence" subdivision?
Why not? Current open models are more capable than the best models from 6 months back. You have a choice to use a model that is 6 months old - if you still choose to use the closed version that’s on you.
Most of Meta's models have not been released as open source. Llama was a fluke, and it helps to commoditize your complement when you're not the market leader.
There is no good or open AI company of scale yet, and there may never be.
A few that contribute to the commons are DeepSeek and Black Forest Labs. But they don't have the same breadth and budget as the hyperscalers.
Llama is not open source. It is, at best, weights available. The license explicitly limits what kinds of things you are allowed to use the outputs of the models for.
Yeah, I used to work in the medical tech space. They love to tell you how much you should be in it for the mission, and that's why your pay is 1/3 what you could make at FAANG... of course, when it came to our sick customers, they needed to pay market rates.
There are a couple of ways to read the "coup" saga.
1) Altman was trying to raise cash so that OpenAI would be the first, best, and last to get AGI. That required structural changes before major investors would put in the cash.
2) Altman was trying to raise cash and saw an opportunity to make loads of money.
3) Altman isn't the smartest cookie in the jar, and was persuaded by potential/current investors that changing the corp structure was the only way forward.
Now, what were the board's concerns?
The publicly stated reason was a lack of transparency. Now, to you and me, that sounds a lot like lying. But where did it occur, and what was it about? Was it about the reasons for the restructure? Was it about the safeguards that were offered?
The answer to the above shapes the reaction I feel I would have as a missionary.
If you're a missionary, then you would believe that the corp structure of OpenAI was the key thing stopping it from pursuing "damaging" tactics. Allowing investors to dictate oversight rules undermines that significantly, and allows short-term gain to come before long-term/short-term safety.
However, I was bought out by a FAANG, one I swore I'd never work for, because they are industrial-grade shits. Yet here I am, many years later, having profited considerably from working at said FAANG. Turns out I have a price, and it wasn't that much.
I think building super intelligence for the company that owns, and will deploy, the super intelligence in service of tech's original sin (the algorithmic feed) is 100x worse than whatever OpenAI is doing, save maybe OpenAI's defense contract, which I have no details about.
Meta will try to buoy this by open-sourcing it, which, good for them, but I don't think it's enough. If Meta wants to save itself, it should re-align its business model away from the feeds.
In that way, as a missionary chasing super intelligence, I'd prefer OpenAI.
*Because I don't have an emotional connection to OpenAI's changing corporate structure away from being a non-profit.
> I think building super intelligence for the company that owns and will deploy the super intelligence in service of tech's original sin (the algorithmic feed) is 100x worse than whatever OpenAI is doing,
OpenAI announced in April they'd build a social network.
I think at this point it barely matters who does it; the ways in which you can make huge amounts of money from this are limited, and all the major players are going to make a dash for it.
I'm not very informed about the coup -- but doesn't it just depend on which side most of the employees sat/sit on? I don't know how much of the coup was just egos, or really an argument about philosophy that the rank and file care about. But I think this would be the argument.
A missionary is a member of a religious group who is sent into an area in order to promote its faith or provide services to people, such as education, literacy, social justice, health care, and economic development. - https://en.wikipedia.org/wiki/Missionary
Post coup, they are both for-profit entities.
So the difference seems to be that when Meta releases its models (like bibles), it is promoting its faith more openly than OpenAI, which interposes itself as an intermediary.
Not to mention, missionaries are exploitative. They're trying to harvest souls for God or (failing the appearance of God to accept their bounty) to expand the influence of their earthbound church.
Bottom line, "But... but I'm like a missionary!" isn't my go-to argument when I'm trying to convince people that my own motives are purer than my rival's.
> “I have never been more confident in our research roadmap,” he wrote. “We are making an unprecedented bet on compute, but I love that we are doing it and I'm confident we will make good use of it. Most importantly of all, I think we have the most special team and culture in the world. We have work to do to improve our culture for sure; we have been through insane hypergrowth. But we have the core right in a way that I don't think anyone else quite does, and I'm confident we can fix the problems.”
tldr. knife fights in the hallways over the remaining life boats.
Yeah... didn't the missionaries all leave after the coup? And the folks who remain are the mercenaries looking for the big stock win, after sama figures out a way to be acquired or IPO?
all the chatter here at least was that the OpenAI folks were sticking around because they were looking for a big payout
From March of this year:
"As we know, big tech companies like Google, Apple, and Amazon have been engaged in a fierce battle for the best tech talent, but OpenAI is now the one to watch. They have been on a poaching spree, attracting top talent from Google and other industry leaders to build their incredible team of employees and leaders."
https://www.leadgenius.com/resources/how-openai-poached-top-...
We shouldn't use the word "poaching" in this way. Poaching is the illegal hunting of protected wildlife. Employees are not the property of their employers, and they are free to accept a better offer. And perhaps companies need to revisit their compensation practices, which often mean that the only way for an employee to get a significant raise is to change companies.
Sam vs Zuck... tough choice. I'm rooting for neither. Sam is cleverly using words here to make it seem like OpenAI are 'the good guys' but the truth is that they're just as nasty and power/money hungry as the rest.
Sam Altman literally casts himself as a god, apparently, and that's somehow to be taken as an indictment of his rivals. Maybe it's my Gen X speaking, but that's CEO bubblespeak for "OpenAI is fucked, abandon ship".
Pretty telling that OpenAI only now feels like it has to reevaluate compensation for researchers, while just weeks ago it spent $6.5 billion to hire Jony Ive. Maybe he can build your superintelligence for you.
Do I "poach" a stock when I offer more money for it than the last transaction value?
"Poaching" employees is just price discovery by market forces. Sounds healthy to me. Meta is being the good guys for once.
When it comes down to it, you’re expendable when your leadership is backed into a corner.
The rest of us are mercenaries only.
#6: Never allow family to stand in the way of opportunity.
#111: Treat people in your debt like family… exploit them.
#211: Employees are the rungs on the ladder of success. Don't hesitate to step on them.
Don’t let the perfect be the enemy of the good.
That's a cool smaht phrase but help me understand, for which Meta products are LLMs a complement?
You imply there are some good guys.
What company?
Deepseek, Baidu.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
https://news.ycombinator.com/newsguidelines.html
I'd be very happy to be convinced that supporting the coup was the right move for true-believer missionaries.
(Edit: It's an honest and obvious question, and I think that the joke responses risk burying or discouraging honest answers.)
- online gambling
- kids gambling
- algorithmic advertising
Are these any better? All of these are of course money wells, and a logical move for a for-profit, IMHO.
And they can of course also integrate into a Meta competitor's algorithmic feeds as well, putting them at the same level as Meta in that regard.
All in all, I'm not seeing them having any moral high ground, even purely hypothetically.
There ain't no missionary, they all doing it for the money and will apply it to anything that will turn dollars.
The end result of missionary activity is often something like https://www.theguardian.com/world/video/2014/feb/25/us-evang... .
No different than "we are a family"
Unsurprising, unhelpful for anyone other than sama, unhealthy for many.
I don't imagine Sam Altman said this because he thinks it'll somehow save him money on salaries down the line.
I don't think the context is the same. In the context of Altman, he wants 'losers'.