Because the AI works so well, or because it doesn't?
> "By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact," Wang writes in a memo seen by Axios.
That's kinda wild. I'm shocked they put it in writing.
I'm seeing a lot of frustration at the leadership level about product velocity, and much of the frustration is pointed at internal gatekeepers who mainly seem to say no to product releases.
My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". There are definitely limits on this, but it's been helpful when dealing with various internal negotiations. I don't spend as much time looking to "align with stakeholders", I just go ahead and do things my decades of experience have taught me are the right paths (while also using my experience to know when I can't just push things through).
Big tech is suffering from the incumbent's disease.
What worked well for extracting profits from stable cash cows doesn't work in fields that are moving rapidly.
Google et al. were at one point pinnacle technologies too, but this was 20 years ago. Everyone who knew how to work in that environment has moved on or moved up.
Were I the CEO of a company like that, I'd reduce headcount in the legacy orgs, transition them to maintenance mode, and start new orgs within the company that are as insulated from legacy as possible. This will not be an easy transition, and will probably fail. The alternative, however, is to fail for certain.
For example, Google is in the amazing position that its search can become a commodity that prints a modest amount of money forever as the default search engine for LLM queries, while at the same time its flagship product can be a search AI that uses those queries as citations for the answers people look for.
> My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". ... I don't spend as much time looking to "align with stakeholders"...
Isn't that "move fast and break things" by another name?
> pointed at internal gatekeepers who mainly seem to say no to product releases.
I've never observed Facebook to be conservative about shipping broken or harmful products; the releases must be pretty bad if internal stakeholders are pushing back. I'm sure there will be no harmful consequences from leadership ignoring these internal warnings.
Makes sense. It's easier to be right by saying no, but this mindset costs great opportunities. People who are interested in their own career management can't innovate.
You can't innovate without taking career-ending risks. You need people who are confident to take career-ending risks repeatedly. There are people out there who do and keep winning. At least on the innovation/tech front. These people need to be in the driver seat.
> I'm seeing a lot of frustration at the leadership level about product velocity, and much of the frustration is pointed at internal gatekeepers who mainly seem to say no to product releases.
If we are serious about productivity, it helps to fire the managers. More often than not, this layer has to act in its own self-interest, which means maintaining large head counts to justify its existence.
Crazy automation and productivity has been possible for like 50 years now. It's just that nobody wants it.
The death of languages like Perl, Lisp and Prolog only proves this point.
... until reality catches up with a software engineer's inability to see outside the narrow engineering field of view, neglecting most things that end-users care about. Millions if not billions are wasted, and leadership sees that checks and balances for the engineering team might be warranted after all: the velocity was there, but you now have an over-engineered product nobody wants to pay for.
One of the eternal struggles of BigCo is there are structural incentives to make organizations big and slow. This is basically a bureaucratic law of nature.
It's often possible to get promoted by leading "large efforts" where large is defined more or less by headcount. So if a hot new org has unlimited HC budget, all the incentives push managers to complicate things as much as possible to create justification for more heads. Good for savvy managers, bad for the company and the overall effort. My impression is this is what happened at Meta's AI org, and VR/AR before that.
Pournelle's law of bureaucracy: any sufficiently large organization will have two kinds of people, those devoted to the org's goals and those devoted to the bureaucracy itself, and if you don't stop it, the second group will take control, to the point that the bureaucracy itself becomes the goal and everything else becomes secondary.
Self-preservation takes over at that point, and the bureaucratic org starts prioritizing its own survival over anything else. Product work instead becomes defensive operations, decision-making slows, and innovation starts being perceived as a risk instead of a benefit.
> By reducing the size of our team, fewer conversations will be required to make a decision
This was noted a long time ago by Brooks in the Mythical Man-Month. Every person added to a team increases the communication overhead (n(n − 1)/2). Teams should only be as big as they absolutely need to be. I've always been amazed that big tech gets anything done at all.
The other option would be to have certain people just do the work they're told, but that's hard in knowledge-based jobs.
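To put numbers on that n(n − 1)/2 overhead, here's a quick illustrative sketch of how pairwise communication channels grow with team size:

    # Brooks: a team of n people has n(n - 1) / 2 potential
    # pairwise communication channels.
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for n in (5, 10, 50, 100):
        print(f"{n:>3} people -> {channels(n):>4} channels")
    # Output:
    #   5 people ->   10 channels
    #  10 people ->   45 channels
    #  50 people -> 1225 channels
    # 100 people -> 4950 channels

Doubling the team from 50 to 100 roughly quadruples the channels, which is why the communication overhead dominates long before the headcount looks absurd.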
A solution to that scaling problem is to have most of the n not actually doing anything. Sitting there and getting paid but adding no value or overhead.
I just assume they over-hired. Too much hype for AI. Everyone wants to build the framework people use for AI; nobody wants to build the actual tools that make AI useful.
They’ve done this before with their metaverse stuff. You hire a bunch, don’t see progress, let go of people in projects you want to shut down and then hire people in projects you want to try out.
Why not just move people around you may ask?
Possibly: different skill requirements
More likely: people in charge change, and they usually want “their people” around
Most definitely: the people being let go were hired when stock price was lower, making their compensation much higher. Getting new people in at high stock price allows company to save money
Integrating LLMs with the actual business is not a fun time. There are many cases where it simply doesn't make sense. It's hard to blame the average developer for not enduring the hard things when nobody involved seems truly concerned with the value proposition of any of this.
This issue can be extended to many areas in technology. There is a shocking lack of effective leadership when it comes to application of technology to the business. The latest wave of tech has made it easier than ever to trick non-technical leaders into believing that everything is going well. There are so many rugs you can hide things under these days.
There is a real question of whether a more productive developer with AI is actually what the market wants right now. It may actually want something else entirely: people who can innovate with AI. Just about everyone can be "better" with AI, so I'm not sure this is actually an advantage (the baseline just got lifted for everyone).
I haven't even thought of Meta as a competitor when it comes to AI. I'm a semi-pro user, and all I think of when I think of AI is OpenAI, Claude, Gemini, and DeepSeek/Qwen, plus all the image/video models (Flux, Seedance, Veo, Sora).
My voice activated egg timer is amazing. There are millions of useful small tools that can be built to assist us in a day-to-day manner... I remain skeptical that anyone will come up with a miracle tool that can wholesale replace large sections of the labor market and I think that too much money is chasing after huge solutions where many small products will provide the majority of the gains we're going to get from this bubble.
"Load bearing." Isn't this the same guy that sold his company for $14B. I hope his "impact and scope" are quantifiably and equivalently "load bearing" or is this a way to sacrifice some of his privileged former colleagues at the Zuck altar.
Seems like a purge: new management comes in and purges anyone not loyal to it. Standard playbook. Happens in every org. Instead of euphemisms like "load-bearing" they could have straight-out called it eliminating the old guard.
Also, why go through a layoff and then reassign staff to other roles? Is it to first disgrace people, and then offer straws to grasp at? This reflects their culture, and sends a clear warning to those joining.
Our economy is being propped up by this. From manufacturing to software engineering, this is how the US economy is continuing to "flourish" from a macroeconomic perspective. Margin is being preserved by reducing liabilities and relying on a combination of increased workload and automation that is "good enough" to get to the next step—but assumes there is a next step and we can get there. Sustainable over the short term. Winning strategy if AGI can be achieved. Catastrophic failure if it turns out the technology has plateaued.
Maximum leverage. This is the American way, honestly. We are all kind of screwed if AI doesn't pan out.
Maybe I’m not understanding, but why is that wild? Is it just the fact that those people lost jobs? If it were a justification for a re-org I wouldn’t find it objectionable at all
Having worked at Meta, I wish they did this when I was there. Way too many people not agreeing on anything and having wildly different visions for the same thing. As an IC below L6 it became really impossible to know what to do in the org I was in. I had to leave.
They could do like in the Manhattan Project: have different teams competing on similar products. Apparently Meta is willing to throw away money; it could be better than giving the talent to their competitors.
They properly fucked FAIR. It was a leading AI lab, if not the leading one.
Then they gave it to Chris Cox, the Midas of shit. It languished in "product" trying to do applied research. The rot had set in by mid-2024, if not earlier.
Then someone convinced Zuck that he needed whatever that new kid is, and the rest is history.
Meta has too many staff, exceptionally poor leadership, and a performance system that rewards bullshitters.
The thing that many so-called smart people don't realise is that leadership and vision are incredibly scarce traits.
Pure technologists and MBA folks don't have a visionary bone in their body. I always find the Steve Jobs criticism re. his technical contributions hilarious. That wasn't his job. It's much easier to execute on the technical stuff when there's someone there who is leading the charge on the vision.
> Meta has too many staff, exceptionally poor leadership, and a performance system that rewards bullshitters.
To be fair, almost every company has a performance system that rewards bullshitters. You’re rewarded on your ability to schmooze and talk confidently and write numerous great-sounding docs about all the great things you claim to be doing. This is not unique to one company.
I might have seen it on HN, but I recall a study of what made teams very effective. What it found was that a rare few people could, just by their involvement, make a team more effective. So rare that you may as well assume you won't ever see one.
But rather than finding magic to make teams better, they did find that there were types of people who make teams worse regardless of anyone else on the team, and they're not all that uncommon.
I think of those folks when I read that quote. That person who clearly doesn't understand, but is in a position where their ignorant opinion is a go/no-go gate.
My tin-foil-hat-theory is that the most valuable things many programmers do at their company is not working for a competitor.
A small team is not only more efficient, but is overall more productive.
The 100-person team produces 100 widgets a day, and the 10-person team produces 200 widgets a day.
But, if the industry becomes filled with the knowledge of how to produce 200 widgets a day with 10 people, and there are also a lot of unemployed widget makers looking for work, and the infrastructure required to produce widgets costs approximately 0 dollars, then suddenly there is no moat for the big widget making companies.
What's wild about this? They're saying that they're streamlining the org by reducing decision-makers so that everything isn't design-by-committee. Seems perfectly reasonable, and a common failure mode for large orgs.
Anecdotally, this is a problem at Meta as described by my friends there.
Maybe they shouldn't have hired and put so many cooks in the kitchen. Treating workers like pawns is wild and you should not be normalizing the idea that it's OK for Big Tech to hire up thousands, find out they don't need them, and lay them off to be replaced by the next batch of thousands by the next leader trying to build an empire within the company. Treating this as SOP is a disservice to your industry and everyone working in it who isn't a fat cat.
Sounds to me like the classic communication problems you see everywhere: 1) people don't listen, 2) people can't explain in general terms, 3) while 2 is happening, so is 1, and as that repeats over and over, people get frustrated and give up.
I can actually relate to that, especially in a big co where you hire fast. I think it's shitty to over-hire and lay off, but I've definitely worked in many teams where there were just too many people (many very smart) with their own sense of priorities and goals, and it makes it hard to get anything done. This is especially true when you over-divide areas of responsibility.
I mean, I guess it makes sense if they had a particularly Byzantine decision-making structure and all those people were in roles that amounted to bureaucracy in that structure and not actually “doers”.
I imagine there are some people who might like the idea that, with fewer people and fewer stakeholders around, the remaining team now has more power to influence the org compared to before.
(I can see why someone might think that’s a charitable interpretation)
I personally didn’t read it as “everyone will now work more hours per day”. I read it as “each individual will now have more power in the org” which doesn’t sound terrible.
> while the company continues to hire workers for its newly formed superintelligence team, TBD Lab.
It's coming any day now!
> "... each person will be more load-bearing and have more scope and impact,” Wang writes
It's only a matter of time before the superintelligence decides to lay off the managers too. Soon Mr. Wang will be gone and we'll see press releases like:
> "By reducing the size of our team, fewer conversations will be required to make a decision, so the logical step I took was to reduce the team size to 0" ... AI superintelligence, which now runs Meta, declared in an interview with Axios.
I'm loving this juxtaposition of companies hyping up imminent epoch-defining AGI, while simultaneously dedicating resources to building TikTok But Worse or adding erotica support to ChatGPT. Interesting priorities.
> "By reducing the size of our team, fewer conversations will be required to make a decision, ..."
I got serious uncanny valley vibes from that quote as well. Can anyone prove that "Alexandr Wang" is an actual human, and not just a server rack with a legless avatar in the Metaverse?
I will accept the Chief Emergency Shutoff Activator Officer role; my required base comp is $25M. But believe me, nobody can trip over cables or run multiple microwaves simultaneously like I can.
Guaranteed this is them cleaning out the old guard. It's either axe them, or watch a brutal political game between legacy employees and the new LLM AI talent.
Cutting people at FAIR is a real shame, though. Great models like DINO and SAM have had massive positive impact; hopefully that work doesn't slow in favour of LLM-only development at MSL.
While ex-FAIR people should have little problem finding a job, the market that pays research folks that level of TC to work on ambitious research projects, unless you're in a very LLM-specific space, is absolutely shrinking.
It certainly feels like the end of an era to see Meta increasingly diminishing the role of FAIR. Strategically it might not have been ideal for LeCun to be so openly and aggressively critical of this current generation of AI (even if history will very likely prove him correct).
Every time I see news like this, I just try to focus more on working on things I think are meaningful and contributing positively to the world... there is so much out of our control but what is in our control is how we use our minds and what we believe in.
Meta is fumbling hard. Winning the AI race is about marketing at this point; the difference between the models is negligible.
ChatGPT is the one on everyone's lips outside of technology, and in the media. They have a platform by which to push some kind of assistant, but where is it? I log into Facebook and it's buried in the sidebar as Meta AI. Why aren't they shoving it down my throat? They have a huge platform of advertisers who'd be more than happy to inject ads into the AI. (I should note I hope they don't do this, but it's inevitable.)
Surely winning the AI race means finding secret techniques that allow development of superior models, and it's not apparent that anyone has anything special enough to actually be winning?
I think there's some firms with special knowledge: Google, possibly OpenAI/Anthropic, possibly the Chinese firms, possibly Mistral too, but no one has enough unique stuff to really stand out.
The biggest things were those six months before people figured out how o1 worked, and the short time before people figured out how Google and possibly OpenAI solved 5/6 of the 2025 IMO problems.
I think that depends on how optimistic/pessimistic one is on how much more superior the models are going to get. If you're really pessimistic then there isn't all too much one company could do to be 2x or more ahead already. If you're really optimistic then it doesn't matter what anyone is doing today because it's about who finds the next 100x leap.
They are shoving it down our throats: WhatsApp has two entry points on the main view. I've received multiple requests for tips on how to hide them; I don't think people are interested. And I'd hide them too if I could.
Winning the AI race is winning the application war. Similar to how the internet and operating systems had been around for a long time, but the ecosystem took years to build.
But application work is toil, and it means knowing the question set even with AI help. That doesn't bode well for teams whose goal is owning and profiting from a super AI that can do everything.
But maybe something will change? Maybe adversarial agents will see improvements like the AlphaGo moment?
Meta is the worst at building platforms out of the big players. If you're not building to Facebook or Metaverse, what would you be building for if you were all-in on Meta AI? Instagram + AI will be significant, but not Meta-level significant, and it's closed. Facebook is a monster but no one's building to it, and even Mark knows it is tomorrow's Yahoo.
Microsoft has filled in their entire product line with Copilot, Google is filling everything with Gemini, Apple has platforms but no AI, and OpenAI is firing on all cylinders... at least in terms of mindshare and AUMs.
> Winning the AI race is winning the application war
This. 100% This.
As an early-stage VC, I'd say the foundational model story is largely over; understanding how to apply models to applications, or how to protect applications that leverage models, is the name of the game now.
> Maybe adversarial agents will see improvements...
There is increased appetite now to invest in models that are tackling reasoning and RL problems.
I mostly agree with this, but make an exception for Meta AI, which seems egregiously bad compared to the others I use regularly (Anthropic's, Google's, OpenAI's).
>Winning the AI race is about marketing at this point - the difference between the models is negligible.
Meta is paying Anthropic to give its devs access to Claude because it's that much better than their internal models. You think that's a marketing problem?
Lots of companies spun up giant AI teams over the last 48 months. I wouldn’t be surprised at all if 50+% of these roles are eliminated in the next 48 months.
The AI party is coming to an end. Those without clear ROI are ripe for the chopping block.
Tbh yes. I like AI, but I'm getting a bit sick of the hype. All our top dogs want AI in everything, whether or not it actually benefits the product. They even know it's senseless, but they need to show the shareholders that they are all-in on AI.
It's really time for this bubble to collapse so we can go back to working on things that actually make sense rather than ticking boxes.
If this impacted you: we are hiring at Magnetic (AI doc scanning and workflow automation for CPA firms). Cool technical problems, and we're building a senior, co-located team in SF to have fun and build a great product from scratch.
I'm kind of surprised Wang is leading AI at Meta. His knowledge is around data labeling, which is important, sure, but is he really the guy to take this to the next level?
A skim of his Wikipedia bio suggests that he's smart, but mostly just interested in making money for himself. Since high school, he's spent time at: some fintech place, a Q-and-A site, MIT briefly, another fintech place, then data labeling and defense contracting. He sounds like a cash-seeking missile to me.
> Isn't that "move fast and break things" by another name?
lol, that works well until a big issue occurs in production
Few tools are ok with sometimes right, sometimes wrong output.
Meta is not even in the picture
Maybe they should reduce it all to Wang; he can make all decisions with the impact and scope he is truly capable of.
"We want to cut costs and increase the burden on the remaining high-performers"
Why? Being transparent about these decisions is a good thing, no?
If they want to innovate then they need to have small teams of people focused on the same problem space, and very rarely talking to each other.
Alas, the burden falls on the little guys. Especially in this kind of labor market.
Coming soon to your software development team.
New leader comes in and gets rid of the old team, putting his own preferred people in positions of power.
> I personally didn't read it as "everyone will now work more hours per day". I read it as "each individual will now have more power in the org"...
Why not both?
On what planet is it OK to describe your employees as "load-bearing"?
It's a good way to get your SLK keyed.
> ... simultaneously dedicating resources to building TikTok But Worse or adding erotica support to ChatGPT.
Well, all the people with no jobs are going to need something to fill their time.
They really need that business model.
> Can anyone prove that "Alexandr Wang" is an actual human, and not just a server rack with a legless avatar in the Metaverse?
Add that to “corporate personhood” and what do we get?
Probably automated themselves out of their roles, as "AGI" and now superintelligence ("ASI") have been "achieved internally".
The billion-dollar question is... where is it?
https://www.datacenterdynamics.com/en/news/meta-brings-data-...
But maybe not:
https://open.substack.com/pub/datacenterrichness/p/meta-empt...
Other options are Ohio or Louisiana.
> It certainly feels like the end of an era to see Meta increasingly diminishing the role of FAIR.
More like "scientific research regurgitators".
Just like Adam Neumann, who was reinventing the concept of workspaces as a community.
Just like Elizabeth Holmes, who was revolutionizing blood testing.
Just like SBF, who pioneered a new model for altruistic capitalism.
And so many others.
Beware of prophets selling you on the idea that they alone can do something nobody has ever done before.
Oh, wow. I think you meant altruistic capitalism.
https://bookface.ycombinator.com/company/30776/jobs
https://www.ycombinator.com/companies/magnetic/jobs/77FvOwO-...