The only way to understand this is by knowing that Meta already has two (!!) AI labs which are at existential odds with one another, and both are in the process of failing spectacularly.
One (FAIR) is led by Rob Fergus (who? exactly!) because the previous lead quit. Relatively little gossip on that one, other than that top AI labs have their pick of the outgoing talent.
The other (GenAI) is led by Ahmad Al-Dahle (who? exactly!) and mostly comprises director-level rats who jumped off the RL/metaverse ship when it was clear it was going to sink, and who then moved the centre of genAI gravity from Paris, where a lot of Llama 1 was developed, to MPK, where they could secure political and actual capital. They've since been caught with their pants down cheating on objective and subjective public evals, have cancelled the rest of Llama 4, and the org lead is in the process of being demoted.
Meta are paying absolute top dollar (exceeding OAI) trying to recruit superstars into GenAI and they just can't. Basically no one is going to re-board the Titanic and report to Captain Alexandr Wang, of all people. It's somewhat telling that they tried to get Koray from GDM and Mira from OAI, and that this was their third pick. Rumoured comp for the top positions is well into the tens of millions. The big names who are joining are likely to stay just long enough for their stock to vest and then boomerang L+1 to an actual frontier lab.
not affiliated with meta or fair.
I wouldn't categorize FAIR as failing. Their job is indeed fundamental research, and they are still a leading research lab, especially in perception and vision. See SAM2, DINOv2, V-JEPA-2, etc. The "fair" (hah) comparisons for FAIR are not to DeepMind/OAI/Anthropic, but to other publishing research labs like Google Research and NVIDIA Research, and they are doing great by that metric. It does seem that, for whatever reason, FAIR resisted productization, unlike DeepMind, which is not necessarily a bad thing if you care about open research culture (see [1]). GenAI was supposed to be the "product lab" but failed for many reasons, including the ones you mentioned. Anyway, Meta does have a reputation problem that they are struggling to solve with $$ alone, but it's somewhat of a category error to deem it FAIR's fault when FAIR is not a product LLM lab. Also, Rob Fergus is a legit researcher; he published regularly with people like Ilya and Pushmeet (VP of DeepMind Research), he just didn't get famous :P
[1] https://docs.google.com/document/d/1aEdTE-B6CSPPeUWYD-IgNVQV...
FAIR is failing. DINO and JEPA, at least, are irrelevant in this age. This is why GenAI exists. GenAI took the good people, the money, the resources and the scope. Zuck entertains ideas until he doesn't. It's clear blue-sky research is going to be pushed even further into the background. For perception reasons you can't fire AI researchers or disband an AI research org, but it's clear which way this is headed.
As for your comparisons, well Google Research doesn’t exist anymore (to all intents and purposes) for similar reasons.
This is exactly why Zuck feels he needs a Sam Altman type in charge. They have the labs, the researchers, the GPUs, and unlimited cash to burn. Yet it takes more than all that to drive outcomes. Llama 4 is fine but still a distant 6th or 7th in the AI race. Everyone is too busy playing corporate politics. They need an outsider to come shake things up.
The corporate politics at Meta is the result of Zuck's own decisions. Even in big tech, Meta is (along with Amazon) rather famous for its highly political and backstabby culture.
This is because these two companies have extremely performance-review-oriented cultures where results need to be proven every quarter or you become grounds for a layoff.
Labs known for being innovative all share the same trait of allowing researchers to go YEARS without high impact results. But both Meta and Scale are known for being grind shops.
These people had better make a lot of money while they can, because for most of them their careers may be pretty short. The half-life of AI technologies is measured in months.
Not any advantage in virtue (or vices, for that matter). In national politics, Sam is toe to toe with Elon, which is to say, not great, not terrible. Even if you're giving massive cash and stock comp, OpenAI has a lot more upside potential than Meta.
This is wrong. OpenAI has almost no upside now at these valuations and there is a >2 year effective cliff on any possibility of liquidity whereas Meta is paying 7-8 figures liquid.
Meta's problem is that everyone knows it's a dumpster fire, so you will only attract people who only care about comp, which is typically not the main motivation for the best people.
Anyone know what Scale does these days, beyond labeling tools, that would make them this interesting to Meta? Data labeling tools seem like a fairly traditional software application, not much to do with the AI models themselves, and something that could be somewhat easily replicated, but I'm guessing my impression is out of date. Also, apparently their CEO is now leaving [1], so the idea that they were super impressed with him doesn't seem to be the explanation.
[1] https://techcrunch.com/2025/06/13/scale-ai-confirms-signific...
Meta, Google, OpenAI, Anthropic, etc. all use Scale data in training.
So, the play I’m guessing is to shut that tap off for everyone else now, and double down on using Scale to generate more proprietary datasets.
OpenAI and Anthropic rely on multiple data vendors for their models so that no outside company is aware of how they train their proprietary models. Forbes reported the other day that OpenAI had been winding down their usage of Scale data: https://www.forbes.com/sites/richardnieva/2025/06/12/scale-a...
But then huge revenue streams for Scale basically disappear immediately.
Is it worth Meta spending all that money just to stop competitors using Scale? There are competitors who I am sure would be very eager to get the money from Google, OpenAI, Anthropic, etc. that was previously going to Scale. So Meta spends all that money for basically nothing, because the competitors will just fill the gap if Scale is wound down.
I am guessing they are just buying stuff to try to be more "vertically integrated" or whatever (remember that Facebook recently got caught pirating books etc).
Wouldn’t Scale’s board/execs still have a fiduciary duty to existing shareholders, not just Meta?
It's a smart purchase, it's just that I don't see how these datasets factor into super-intelligence. I don't think you can create a super-intelligent AI with more human data, even if it's high-quality data from paid human contributors.
Unless we water down the definition of super-intelligent AI. To me, super-intelligence means an AI whose intelligence dwarfs anything theoretically possible from a human mind. Borderline God-like. I've noticed that some people refer to super-intelligent AI as simply AI that's about as intelligent as Albert Einstein in effectively all domains. In the latter case, maybe you could get there with a lot of very, very good data, but that's still a leap of imagination for me.
It seems very short-sighted given how far Meta's latest model release was behind Qwen and DeepSeek, both of which relied heavily on automatically generated reasoning/math/coding data to achieve impressive results, not human annotated data. I.e. Scale's data is not going to help Meta build a decent reasoning model.
This is by all indications the world's most expensive acquihire of a single person. Reporting has been that Zuckerberg sees Wang as a confidant of sorts, and that Wang has pitched a vision of AI that's said to be non-consensus.
It looks like a security/surveillance play more than anything. Scale has strong relationships with the US MIC, the current administration (predating Zuck's rebranding), and gulf states.
Their Wikipedia history section lists accomplishments that align closely with the DoD's vision for GenAI. The current admin, and the western political elite generally, are anxious about GenAI developments and social unrest; the pairing of Meta and Scale addresses those anxieties directly.
I doubt Scale is interesting by itself. This is all about Alexandr Wang. The guy is in his mid-20s and has somehow worked his way up in Silicon Valley to the same stature as CEOs of multi-trillion-dollar companies. Got a front-row seat at Trump's inauguration. Advises the DoD. Routinely rubs shoulders with world leaders. I can't say whether there's actual substance or not, but clearly Zuck sees something in him (probably a bit of himself).
It's a wild story for sure. He dropped out of MIT after freshman year and started Scale to do data labeling. Three years later Scale had a $1B valuation, and two years after that Wang was the world's youngest billionaire. Nine years after Scale's founding they're still doing less than $1B in annual revenue. Yet Meta is doing a $14B acquihire. There's definitely more than meets the eye. I suspect it involves multiple world governments, including the US.
https://x.com/boztank/status/1933512877140316628?s=46
Leaving to join "Meta's super intelligence efforts", whatever that means.
Meta buys a non-controlling stake and says no customers will be affected but the CEO and others are leaving Scale for Meta. Meta also says they won’t have access to competitor data but at 49% ownership they get major investor rights?
Sounds like an acqui-kill to me?
The host on this podcast[0] had a good point about the "investment". It was really a merger, but framed as an investment to sidestep regulators. Key attributes:
* CEO works for meta
* almost but not quite a majority stake taken
0: https://podcasts.apple.com/us/podcast/world-bank-cuts-u-s-gr...
>The structure was intentional. Executives at Meta and Scale AI were worried about drawing the attention of regulators.
These types of "Aaackshually" business strategies are repulsive, and are evidence that these people who wield immense responsibility do not deserve it.
The combined stake of FB and the people now employed at FB at the executive level is clearly over 50%; it seems very odd that they are convincing anyone this is a minority position.
Reverse acquisition? I.e., similar to how Disney "bought" Pixar, but much of Pixar's IP overshadows Disney's IP; or how Apple bought NeXT and the current macOS is basically NeXTSTEP under the hood.
It's a technique companies use to avoid disruption: buy early-stage startups, and by the time they could "disrupt" the parent company, the parent company's management is ready to retire and the former startup's management is ready to take their place.
In what way does Pixar's IP overshadow Disney's? Listing the highest-grossing media franchises [1], Mickey Mouse, Winnie the Pooh, Star Wars, and Disney Princesses are on #2-#5 respectively, while Pixar's top spot is #16 with Cars.
[1] https://en.wikipedia.org/wiki/List_of_highest-grossing_media...
It might reduce scrutiny, but not completely prevent it.
The Clayton Act says:
"No person engaged in commerce or in any activity affecting commerce shall acquire, directly or indirectly, the whole or any part of the stock or other share capital and no person subject to the jurisdiction of the Federal Trade Commission shall acquire the whole or any part of the assets of another person engaged also in commerce or in any activity affecting commerce, where in any line of commerce or in any activity affecting commerce in any section of the country, the effect of such acquisition may be substantially to lessen competition, or to tend to create a monopoly."
If this is marketed as a strategic acquisition in the national interest of the US tech industry, in order to counteract the Chinese trying to catch up on AI, then nothing of the sort will happen.
That's the MO for all these big players. They don't shell out merely for the marketplace advantage; there's always some meta (no pun intended) Gordon Gekko corpo-warfare schtick going on in the background.
To quote Peter Thiel, "competition is for losers".
Of all the tech companies, Meta is the most ruthless and shameless. You'd have to be a total fool to trust Zuck, especially Zuck who put billions into AR for not much return and now billions into AI to create lackluster lagging models.
Off base when considering the likes of Palantir and many others.
Not a fan of the person or many of Meta's business practices. But Meta has given a lot back with Llama and PyTorch, among many other open source contributions. Which others in the space are not doing.
I'd trust Zuck if I had a signed, airtight agreement for a large amount of money he paid into an escrow account for something I owned or was transferring ownership of.
He's very close to peak homo economicus. (EDIT: this next point is wrong, the oral history I heard referred to Winklevoss pops, not Zuckerberg, and I misremembered) Which makes sense, given his father is deep in actuarial services.
Have you seen Oracle?
But also for a long time the best available open-weights models on the market - this investment has done a lot to kickstart open AI research, which I am grateful for no matter the reasons.
> especially Zuck who put billions into AR for not much return
While the current state of AR/VR is indisputably underwhelming, Zuck faces a large existential risk from Microsoft/Apple/Google. If those companies want to revoke access to Meta's apps (e.g. [1]), they can, and Zuck is in trouble. At one point Google was trying to compete with Facebook with Google+, and while that didn't work, it's still a large business risk.
Putting billions into trying to get a moat for your product seems like prudent business sense when you're raking in hundreds of billions.
[1]: https://techcrunch.com/2019/02/01/facebook-google-scandal/
This is a very interesting buy, because Scale AI has been spamming anyone and everyone on freelancer platforms, and so far they don't have a very good reputation online among the people they have contracted with.
Just go look at what people say about them on Reddit. It’s rare to find anything positive, or even a single brand champion that had some sort of great experience with them.
Just like Uber, Doordash & co don't have a good reputation among their contract workers. The entire business model is based on exploitation of labor. That doesn't mean it isn't valuable (in a capitalist sense).
No, those were entirely different user experiences when the services you mentioned were gaining traction and finding product market fit.
UberCab and Palo Alto Delivery were both services that delivered great experiences for everyone involved: drivers, riders, small businesses, people ordering food. These experiences created brand champions who went out and raved about these technological innovations nonstop.
I don't see any mentions of positive experiences with Scale AI here on HN or on Reddit... maybe that's the reason behind the acquisition?
I just don't get why Scale and/or Alexandr Wang are so important to Meta. Like sure, data is good and all, but does Scale really bring something so unique and valuable to the table? What vision or insight does Wang offer that's worth so much?
Until now I've actually been a believer in the amount of money that Zuck has poured into metaverse investments. I'm not a believer in the metaverse per se, but a believer that innovation takes unafraid capex. The last thing you want to be is scared money like Microsoft, who chose to scuttle the HoloLens project over the thought of spending a couple extra billion dollars on it.
But this deal really has left me scratching my head. Scale is, to put it charitably, a glorified wrapper over workers in the Philippines. What Meta gets in this deal is, in effect, Alexandr Wang. This is the same Wang who has said enough in public for me to think, "huh?" He has said a lot of revealing stuff, like at Davos (don't have the pull quotes off the top of my head), that made me realize he's just kind of a faker. A very good salesman who ultimately gets his facts off the same Twitter feed we all do.
On top of that, what makes this baffling is that Meta has very publicly faced numerous issues and setbacks due to very poor data from Scale, which caused public fires at both companies. So you're bringing in a guy whose company has caused grief for your researchers, who is neither research- nor product-oriented, and you expect to galvanize talent from both the inside and the outside to move towards GAI? What is Mark thinking?
Zuckerberg seems to have had all the pieces to make this work but I'm a lot less confident if I'm a shareholder now than a week ago. This is a huge miss.
That is my impression of his Twitter feed from what I remember.
Sam Altman is a huge risk to META. He has similar morals to Zuck and a much better technical team. If OpenAI turns on the slop generator, they could hit Facebook and Instagram hard. Wang is probably smart enough to help navigate that risk.
I love this phrasing
Well said!
$14.3 billion seems excessive for it to be a pure acquihire play. There's undoubtedly some IP acquisition (or at least exclusive access to certain IP) involved.
It's about 0.85% of Meta's market cap - less than the 1% they paid for (granted, all of) Instagram. They also paid about 1% of market cap for Oculus ($2b into a ~$220b market cap)
Seems about par for Facebook when it comes to company-shifting acquisitions.
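As a rough sanity check on those ratios, here is a minimal sketch; the market caps below are only the approximate figures implied by the percentages quoted above, not authoritative numbers.

    # Rough check of the "percent of market cap" comparisons above.
    # Deal sizes and market caps are approximations implied by the thread.
    deals = {
        "Scale AI stake (2025)": (14.3e9, 1.68e12),  # ~$14.3B vs ~$1.7T Meta market cap
        "Instagram (2012)": (1e9, 1e11),             # ~$1B vs ~$100B market cap
        "Oculus (2014)": (2e9, 2.2e11),              # ~$2B vs ~$220B market cap
    }
    for name, (price, mcap) in deals.items():
        print(f"{name}: {price / mcap:.2%} of market cap")
    # Prints roughly 0.85%, 1.00%, 0.91% -- all in the same ~1% band.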
> the weird setup where they only buy non voting shares is to not trigger any regulatory review
Do regulators actually fall for this sort of thing in the US? One would expect companies to be judged based on following the spirit of the law, rather than nitpicking and allowing wide holes like this.
>One would expect companies to be judged based on following the spirit of the law, rather than nitpicking and allowing wide holes like this.
The letter of the law is what people follow. The spirit, or intent, of the law is what they argue about in court cases.
If the regulation says 49% and a company follows it, who's to say they're exploiting a loophole? They're literally following the law. Until there is a court case and precedent is set.
Judging by the "spirit" of the law is how you get kangaroo courts.