SamvitJ · 6 months ago
One way to make sense of this specific case at least.

- He's on track to becoming a top-tier AI researcher. Despite having only one year of a PhD under his belt, he already received two top awards as a first-author at major AI conferences [1]. Typically, it takes many more years of experience to do research that receives this level of recognition. Most PhDs never get there.

- Molmo, the slate of open vision-language models that he built & released as an academic [2], has direct bearing on Zuck's vision for personalized, multimodal AI at Meta.

- He had to be poached from something, in this case, his own startup, where in the best case, his equity could be worth a large multiple of his Meta offer. $250M likely exceeded the expected value of success, in his view, at the startup. There was also probably a large premium required to convince him to leave his own thing (which he left his PhD to start) to become a hired hand for Meta.

Sources:

[1] https://mattdeitke.com/

[2] https://allenai.org/blog/molmo

tantalor · 6 months ago
> could be worth...

Exactly. What's the likelihood of that?

aleph_minus_one · 6 months ago
> What's the likelihood of that?

Sufficiently high that Meta is willing to pay such an amount of money. :-)

Infinity315 · 6 months ago
It might make more sense to think of it in terms of expected value. Whilst the probability may be low, the payoff is probably many times the $250M if their startup becomes successful.
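That back-of-envelope can be sketched in a few lines. The success probability and payoff below are purely illustrative assumptions, not figures from the article:

```python
# Compare a guaranteed offer against the expected value of staying at a
# startup. All numbers are made-up assumptions for illustration.

def expected_value(p_success: float, payoff_if_success: float,
                   payoff_if_failure: float = 0.0) -> float:
    """EV = p * payoff_on_success + (1 - p) * payoff_on_failure."""
    return p_success * payoff_if_success + (1 - p_success) * payoff_if_failure

meta_offer = 250e6  # treated as guaranteed, ignoring vesting risk
startup_ev = expected_value(p_success=0.05, payoff_if_success=4e9)

print(f"Startup EV: ${startup_ev / 1e6:.0f}M vs. offer: ${meta_offer / 1e6:.0f}M")
# With these assumptions the startup EV is $200M: below the offer, even
# though the payoff on success is 16x the package.
```

Under these invented numbers the offer beats the startup's EV; the comparison flips as soon as the assumed success probability passes about 6%.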
gsf_emergency_2 · 6 months ago
It's strange that Zuck didn't just buy options on that guy? (Or did he? Would love to see the terms.)

Zuck's advantage over Sir Isaac (Newton) is that the market for top AI researchers is much more volatile than in South Sea tradeables pre-bubble burst?

Either that or 250M is cheap for cognitive behavior therapy

cweld510 · 6 months ago
What matters is whatever he believes the likelihood to be, not what it actually is.
Hamuko · 6 months ago
In this AI hype environment? Seems doable to me.
exasperaited · 6 months ago
This is IMO a comical, absurd, Beeple NFT type situation, which should point us to roughly where we are in the bubble.

But if he's getting real, non-returnable actual money from Meta on the basis of a back of envelope calculation for his own startup, from Meta's need to satiate Mark Zuckerberg's FOMO, then good for him.

This bubble cannot burst soon enough, but I hope he gets to keep some of this; he deserves it simply for the absurd comedy it has created.

tracerbulletx · 6 months ago
Professional athletes get paid on that scale, CEOs get paid on that scale. A top researcher in a burgeoning technology should get paid that much. Bubbles don't mean every company fails; they mean most of them do and the winner takes all, and if someone thinks hiring this guy will make them the winner, then it's not remotely unusual.
bbor · 6 months ago
I'm not a conspiracy person, but it's hard not to believe that some cruel god sent us crypto just a few years before we accidentally achieved AGI just to mess with us. So many people are confident that AGI is impossible and LLMs are a passing fad based to a large degree on the idea that SV isn't trustworthy -- I'd probably be there too, if I wasn't in the field myself. It's a hard pattern not to recognize, even if n~=2.5 at most!

I hope for all our sakes that you're right. I feel confident that you're not :(

mensetmanusman · 6 months ago
The bubble might not burst; if we can improve productivity in every field by even 0.5%, that's massive.
bloodyplonker22 · 6 months ago
I agree on all points. However, if he already had several millions like Mira and Ilya, his choice to work for Zuckerberg would likely be different. Where is the glory in bending the knee to Meta and Zuckerberg?
ceejayoz · 6 months ago
I’d forgo a lot of glory for $250M. I suspect I’m not rare in that.
overfeed · 6 months ago
> Where is the glory in bending the knee to Meta and Zuckerberg?

Meta will have more AI-compute than he ever hoped to get at his - and most other - startups.

Rover222 · 6 months ago
This is such a weird sentiment to me, comparing taking one of the highest paying corporate jobs in the history of humanity to bowing down to some dictator.
zurfer · 6 months ago
He probably heard the memes, didn't really want to work for Meta, and named a "ridiculously" high number, and Meta was like: we can do that.
tom_m · 6 months ago
[flagged]
tomhow · 6 months ago
Please don't post shallow dismissals... - https://news.ycombinator.com/newsguidelines.html

It's fine to think that we're in a bubble and to post a comment explaining your thoughts about it. But a comment like this is a low-effort, drive-by shoot-down of a comment that took at least a bit of thought and effort, and that's exactly what we don't want on HN.

trhway · 6 months ago
Various BigCos have been reporting good results recently and highlighted a significant AI role in that. That may be BS, of course, yet even if it is half-true we're talking about tens or hundreds of billions in revenue. With a 10x multiple, that supports something like a trillion-dollar valuation of AI in business right now.
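The arithmetic behind that comment, written out. The revenue figure and the "half-true" discount are assumptions picked to match the comment's rough ranges:

```python
# Revenue-multiple valuation sketch: AI-attributed revenue times a 10x
# multiple. All figures are illustrative assumptions, not reported data.
ai_revenue_claimed = 200e9                # assume ~$200B/yr claimed as AI-driven
ai_revenue_real = ai_revenue_claimed / 2  # "even if it is half-true"
multiple = 10                             # generic 10x revenue multiple
valuation = ai_revenue_real * multiple

print(f"Implied AI valuation: ${valuation / 1e12:.1f}T")
# $100B of "real" revenue at a 10x multiple supports a ~$1.0T valuation.
```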
naveen99 · 6 months ago
Molmo caught my eye also a while ago.
andyjensen · 6 months ago
Same except a bit longer ago than that, but otherwise I agree
mizzao · 6 months ago
It's also possible, based on what happens to those who win the lottery, that his life could become a lot harder. It's not great to have the fact that you're making $2M a week plastered all over the Internet.
habosa · 6 months ago
I don’t like this because it inspires my relatives to keep sending me links to these stories and asking why I’m not going to work at Meta and getting my billions. Mark, please do this stuff quietly so I can continue in my quiet mediocrity.
ornornor · 6 months ago
Or you can tell them that zuck is making the world way worse overall and you don’t want to enable that, regardless.

apparent · 6 months ago
This is the result of the winner-take-all (most) economy. If the very best LLM is 1.5x as good as the next-best, then pretty much everyone in the world will want to use the best one. That means billions of dollars of profit hang in the balance, so companies want to make sure they get the very best people (even if they have to pay hundreds of millions to get them).

It's the same reason that sports stars, musicians, and other entertainers that operate on a global scale make so much more money now than they did 100 years ago. They are serving a market that is thousands of times larger than their predecessors did, and the pay is commensurately larger.

TrackerFF · 6 months ago
I don't think AI will be a winner-take-all scenario. If that is to happen, I think the following assumptions must hold:

1) The winner immediately becomes a monopoly

2) All investment is redirected from competitors to the winner

3) Research on AGI/ASI ceases

I don't see how any of these would be viable. Right now there's an incremental model arms race, with no companies holding a secret sauce so powerful that they're miles above the rest.

I think it will continue like it does today. Some company will break through with some sort of AGI model, and the competitors will follow. Then open source models will be released. Same with ASI.

The things that will be important and guarded are: data and compute.

apparent · 6 months ago
Yeah, this is why I said "(most)". But regardless, I think it's pretty uncontroversial that not all companies currently pursuing AI will ultimately succeed. Some will give up because they aren't in the top few contenders, who will be the only ones that survive in the long run.

So maybe the issue is more about staying in the top N, and being willing to pay tons to make sure that happens.

andrewrn · 6 months ago
I agree with this comment.

Maybe it's just me, but I haven't been model-hopping one bit. For my daily chatbot usage, I just don't feel inclined to model-hop to squeeze out some tiny improvement. All the models are way beyond "good enough" at this point, so I just continue using ChatGPT, switching back and forth between o3 and 4o. I would love to hear if others are different.

Maybe others are doing some hyper-advanced stuff where the edging out makes a difference, but I just don't buy it.

A good example is search engines. Google is a pseudo-monopoly because Google search gives obviously better results than Bing or DuckDuckGo. In my experience this just isn't the case for LLMs. It's more nuanced than better or worse. LLMs are more like car models, where everyone makes a personal choice about which they like best.

nsriv · 6 months ago
I agree with you, and think we are in the heady days where moat building hasn't quite begun. Regarding 1) and 3), most models have API access to facilitate quick switching and agentic AI middleware reaps the benefits of new models being better at some specific use-case than a competitor. In the not-so-distant future, I can see the walls coming up, with some version of white-listed user-agent access only. At the moment, model improvement hype and priority access are the product, but at some point capability and general access will be the product.

We are already seeing diminishing returns from compute and training costs going up, but as more and more AI is used in the wild and pollutes training data, having validated data becomes the moat.

groestl · 6 months ago
> Right now there's an incremental model arms race

Yes, but just like in an actual arms race, we don't know if this can evolve into a winner-takes-all scenario very quickly and literally.

AbstractH24 · 6 months ago
The problem is that models are decaying at incredible speed, and being the leader today offers little guarantee you'll be the leader tomorrow.

OpenAI has a limited protective moat because ChatGPT is synonymous with generative AI at the moment, but that isn’t any more baked in than MySpace (certainly not in the league of Twitter or Facebook).

specialist · 6 months ago
> I don't think AI will be a winner-take-all scenario.

AI? Do you mean LLMs, GPTs, both, or other?

Why won't AI follow the technology life cycle?

It'll always be stuck in the R&D phase, never reach maturity?

It's on a different life cycle?

Once AI matures, something prevents consolidation? (eg every nation protects its champions)

jjmarr · 6 months ago
The actual OpenRouter data says otherwise.[1] Right now, Google leads with only 28.4% marketshare. Anthropic (24.7%), Deepseek (15.4%), and Qwen (10.8%) are the runners-up.

If this were a winner-take-all market with low switching costs, we'd be seeing instant majority market domination whenever a new SOTA model comes out every few weeks. But this isn't happening in practice, even though it's much easier to switch models on OpenRouter than on many other inference providers.

I get the perception of "winner-take-all" is why the salaries are shooting up, but it's at-odds with the reality.

[1] https://openrouter.ai/rankings

nextworddev · 6 months ago
Openrouter data is skewed toward 1) startups, 2) cost sensitive workloads, and generally not useful as a gauge of enterprise adoption
Mistletoe · 6 months ago
I and most normal people can’t even tell the difference between models. This is less like an arms race and more like an ugly baby contest.
chychiu · 6 months ago
Actually that is decent data and reflective of current SOTA in terms of cost performance tradeoff
UmGuys · 6 months ago
I don't think so. It's just a bubble. There's no AI, we have fancy chatbots. If someone were to achieve AGI, maybe they win, but it's unlikely to exist. Or if it does, we can't define it.
nopinsight · 6 months ago
What would you say if the IMO Gold Medal models from DeepMind and OpenAI turn out to be generalizable to other domains including those with hard-to-verify reward signals?

Hint: Researchers from both companies said publicly they employ generalized reasoning techniques in these IMO models.

cesarb · 6 months ago
> There's no AI, we have fancy chatbots.

"Fancy chatbots" is a classic AI use case. ELIZA is a well-known example of early AI software.

beAbU · 6 months ago
If the next best model is 0.5x as expensive, then many might opt for that in use cases where the results are already good enough.

At work we are optimising cost by switching in different models for different agents based on use case, and where testing has demonstrated a particular model's output is sufficient.

toddmorey · 6 months ago
I don’t see a winner takes all moat forming. If anything, the model providers are almost hot-swappable. And it seems the lifespan for being the best SOTA model is now measured in weeks.
apparent · 6 months ago
It's true they are currently quite interchangeable these days. But the point is that if one can pull far enough ahead, it will get a much bigger share of the market.
mi_lk · 6 months ago
> winner-take-all (most)

> If the very best LLM is 1.5x as good as the next-best, then pretty much everyone in the world will want to use the best one

Is it? Gemini is arguably better than OAI in most cases but I'm not sure it's as popular among general public

apparent · 6 months ago
I don't think there's a consensus on this. I have found Gemini to be so-so, and the UX is super annoying when you run out of your pro usage. IME, there's no way to have continuity to a lower-tier model, which makes it a huge hassle. I basically never use it anymore.
esafak · 6 months ago
It's multivariate; better for what? None of them are best across the board.

I think what we're seeing here is superstar economics, where the market believes the top players are disproportionately more valuable than average. Typically this is bad, because it leads to low median compensation but in this rare case it is working out.

polski-g · 6 months ago
> If the very best LLM is 1.5x as good as the next-best, then pretty much everyone in the world will want to use the best one.

Well only if the price is the same. Otherwise people will value price over quality, or quality over price. Like they do for literally every other product they select...

redox99 · 6 months ago
Most likely won't be a winner-take-all, but something like it is right now, a never ending treadmill where the same 3 or 4 players release a SOTA model every year or so.
adammarples · 6 months ago
Who the heck is making profit?

stefan_ · 6 months ago
That is exactly how all of this has played out so far (beep, not at fucking all!)

You are not one random hyperparameter away from the SciFi singularity. You are making iterative improvements and throwing more compute at the problem, as are all your competitors, all of which are to some degree utterly exchangeable.

croes · 6 months ago
There is no LLM 1.5x as good as the next best.

I tried multiple and they all fail at some point, so I let another LLM take over.

As soon as it's not some boilerplate thing, it becomes harder to get the correct result.

fooker · 6 months ago
You’re right.

It would be unfortunate if something like Grok takes the cake here.

apparent · 6 months ago
Why?
BSOhealth · 6 months ago
These figures apply to a very small number of people. What gets left out is that frontier AI is being developed by an incredibly small number of extremely smart people who have migrated between big tech, frontier AI labs, and elsewhere.

Yes, the figures are nuts. But compare them to F1 or soccer salaries for top athletes. A single big name can drive billions in that context at least, and much more in the context of AI. $50M-$100M/year, particularly when some or most is stock, is rational.

stocksinsmocks · 6 months ago
It’s just a matter of taste, but I am pleased to see publicity on people with compensation packages that greatly exceed actors and athletes. It’s about time the nerds got some recognition. My hope is that researchers get the level of celebrity that they deserve and inspire young people to put their minds to building great things.
godelski · 6 months ago
I think I'm mostly with you but it also depends how it exactly plays out.

Like I definitely think it is better for society if the economic forces are incentivizing pursuit of knowledge more than pursuit of pure entertainment[0]. But I think we also need to be a bit careful here. You need some celebrities to be the embodiment of an idea but the distribution can be too sharp and undermine, what I think we both agree on is, the goal.

Yeah, I think, on average, a $100M researcher is generating more net good for society (and the world) than a $100M sports player or actor. Maybe not in every instance, but I feel pretty confident about this on average. But at the same time, do we get more with one $100M researcher or 100 $1M researchers? It's important to recognize that we're talking about such large sums of money that at any of these levels people would be living in extreme luxury. Even in SV the per capita income is <$150k/yr, while the median income is about half that. You'd easily be in the top 1%. (The top 10% for San Jose is $275k/yr.)

I think we also need to be a bit careful in recognizing how motivation can misalign incentives and goals. Is the money encouraging more people to do research and push humanity's knowledge forward? Or is the money now just another means for people who only want money to exploit, who have no interest in advancing humanity's knowledge? Obviously it is a lot more complicated and both are happening, but I think it is worth recognizing that if things shift towards the latter, then they actually make it harder to achieve the original goals.

So on paper, I'm 100% with you. But I'm not exactly sure the paper is matching reality.

[0] To be clear, I don't think entertainment has no value. It has a lot and it plays a critical role in society.

yreg · 6 months ago
Intel made an ad series based on a similar idea in ~2010.

“Our Rock Stars Aren't Like Your Rock Stars”

https://youtu.be/7l_oTgKMi-s

moomin · 6 months ago
It’s closer to actors and athletes than we’d all hope, in that most people get a pittance or are out of work while a select few make figures that hit newspapers.
gherkinnn · 6 months ago
Sounds vindictive. And yet. According to Forbes, the top 8 richest people have a tech background, most of whom are "nerdy" by some definition.
layer8 · 6 months ago
The money these millions are coming from is already based on nerds having gotten incredibly rich (i.e. big tech). The recognition is arguably yet to follow.
vitaflo · 6 months ago
Nerds run the entire world; how much recognition do they need?!
xorcist · 6 months ago
Not really the same, is it? Actors are hired to act. Athletes get paid to improve the sport. It's not like nerds are poached to do academic research or nerd out to their heart's desire. This is a business transaction that Zuck intends to make money from.

Locking up more of the world's information behind their login wall, or increasing their ad sales slightly, is not enough to make that kind of money. We can only speculate, of course, but at the same time I think the general idea is pretty clear: AI will soon have a lot of power, and control over that power is thought to be valuable.

The bit about "building great things" certainly rings true. Just not in the same way artists or scientists do.

dbacar · 6 months ago
how do you know they are nerds?
cm2187 · 6 months ago
What I don't understand in this AI race is that the #2 or #3 is not years behind #1; I understand it is months behind at worst. Does that head start really matter enough to justify those crazy comps? It will take years for large corporations to integrate those things. It also takes years for the general public to change their habits. And if the .com era taught us anything, it is that none of the ultimate winners were the first to market.
peterlk · 6 months ago
There is a group of wealthy individuals who have bought in to the idea that the singularity (AIs improving themselves faster than humans can) is months away. Whoever gets there first will get compound growth first, and no one will be able to catch up.

If you do not believe this narrative, then your .com era comment is a pretty good analysis.

godelski · 6 months ago
What I don't understand is with such small of a gap why this isn't a huge boon for research.

While there's a lot of money going towards research, there's less than there was years ago. There's been a shift towards engineering research and ML Engineer hiring. Fewer positions for lower level research than there were just a few years ago. I'm not saying don't do the higher level research, just that it seems weird to not do the lower level when the gap is so small.

I really suspect that the winner is going to be the one that isn't putting speed above all else. Like you said, first to market isn't everything. But if first to market is all that matters, then you're also more likely to just be responding to noise in the system, the noisy signal of figuring out what that market is in the first place. It's really easy to get off track with that and lose sight of the actual directions you need to pursue.

storus · 6 months ago
LLaMA 4 is barely better than LLaMA 3.3 so a year of development didn't bring any worthy gains for Meta, and execs are likely panicking in order not to slip further given what even a resource-constrained DeepSeek did to them.
IshKebab · 6 months ago
Yeah this makes zero sense. Also unlike a pop star or even a footballer who are at least reasonably reliable, AI research is like 95% luck. It's very unlikely that any AI researcher that has had a big breakthrough will have a second one.

Remember capsule networks?

bbminner · 6 months ago
Hm, I thought that these salaries were offered to actual "giants" like Jeff Dean or someone extremely knowledgeable in the specifics of how the "business side" of AI might look (CEOs, etc.). Can someone clarify what is so special about this specific person? He is not a "top tier athlete" - I looked at his academic profile and it does not seem impressive to me by any measure. He'd make an alright (not even particularly great) assistant professor at a second-tier university - which is impressive, but is by no means unique enough to explain this compensation.
sbinnee · 6 months ago
I think the key was multimodality. Meta made a big move in combining texts, audio, images. I remember imagebind was pretty cool. Allen AI has published some notable models, and Matt seems to have expertise in multimodal models. Molmo looks really cool.
bbminner · 6 months ago
A PhD dropout with an alright (passable) academic record, who worked in a 1.5-tier lab on a fairly pedestrian project (multimodal LLMs and agents, sure), and started a startup. Really trying not to sound bitter - good for him, I guess - but does it indicate that there's something really fucked up with how talent is being acquired?
TrackerFF · 6 months ago
Frontier AI that scales – these people all have extensive experience with developing systems that operate with hundreds of millions of users.

Don’t get me wrong, they are smart people - but so are thousands of other researchers you find in academia etc. - difference here is scale of the operation.

torginus · 6 months ago
Yeah, I guess if you have a datacenter that costs $100B, even hiring a humble CUDA assembly wizard that can optimize your code to run 10% faster is worth $10B to the company.
perks_12 · 6 months ago
I can print a jersey with Neymar's name on it and drive revenue. I can't do that with some AI researcher. They have to actually deliver, and I don't see how a person with a $100M net worth will do anything other than coast.
magic_man · 6 months ago
Top athletes have stats to measure. For these researchers, I guess there are papers? How do you know who did what with multiple authors? How do you figure out who is Jordan vs. Steve Kerr?
thefaux · 6 months ago
Yeah, who knew that Kerr would have the more successful overall career in basketball?

AIPedant · 6 months ago
A very major difference is that top athletes bring in real tangible money via ticket / merch sales and sponsorships, whereas top AI researchers bring in pseudo-money via investor speculation. The AI money is far more likely to vanish.
brandall10 · 6 months ago
It's best to look at this as expected value. A top AI researcher has the potential to bring in a lot more $$ than a top athlete, but of course there is a big risk factor on top of that.
ojbyrne · 6 months ago
My understanding is that the bulk of revenue comes from television contracts. There has been speculation that that could easily shrink in the future if the charges become more granular and non-sports watching people stop subsidizing the sports watching people. That seems analogous to AI money.
ignoramous · 6 months ago
Another major difference is, BigTech is bigger than these global sporting institutions.

How much revenue does Google make in a day? £700m+.

positron26 · 6 months ago
Oof. Trying awfully hard to have a bad day there, eh?
user____name · 6 months ago
Rational inside a deeply oligopolistic and speculative market.
ulfw · 6 months ago
F1 or soccer salaries are high because these are MARKETABLE people. The people themselves are a marketable brand.

They're not high because of performance/results alone.

TheAceOfHearts · 6 months ago
Wonder what their contracts look like. Are these people gonna be grinding their ass off at the Meta offices working crazy hours? Does Zucc have a strong vision, leadership, and management skills to actually push and enable these people to achieve maximum success? And if so, what does that form of success look like? So far the vision that Zucc has outlined has been rather underwhelming, but maybe the vision which he shares with insiders is different from his public persona.

I can't help but think that the structure of this kinda hints at there being a bit of a scam-y element, where a bunch of smart people are trying to extract as much money as possible from some rich people, with questionable chances of making it back. Imagine that the people on The List already had all the keys needed to build AGI if they put their knowledge together - what action do you think they would take?

xorcist · 6 months ago
What I most would like to understand is how you intend to motivate someone you just gave a quarter of a billion dollars to? Especially a young person. They never have to work again, and neither do their grandchildren.

You can just doodle away with whatever research interests you the most, there's no need to deliver a god mode AI to the great leader even if you had the ability to.

JonChesterfield · 6 months ago
Plenty of people are motivated by the work, not the money.

Given more money they just subcontract out increasing fractions of the overhead of life in order to do more of the work.

satrxrer · 6 months ago
I think Dario Amodei said as much on a podcast, and that is why he isn't worried.

That they are building a team with a selection bias for this too.

elbear · 6 months ago
Seems like their only motivation is their desire to succeed, glory, etc.
lionkor · 6 months ago
I'm sure in the US you can never have enough money.
jodrellblank · 6 months ago
> "Does Zucc have a strong vision, leadership, and management skills to actually push and enable these people to achieve maximum success?"

I suggest we saw a clear demonstration of that with the Metaverse and the answer is no, but more intensely than two letters can communicate.

baobabKoodaa · 6 months ago
I agree with you, BUT most of these AI folks have intrinsic motivation for this AI stuff that is on a whole different level compared to their motivation for building the Metaverse. So even though Zuck isn't particularly effective at motivating people or whatnot, these people will be motivated regardless.
walterbell · 6 months ago
> Imagine.. had all the keys needed

.. that had already leaked and would later plummet in value.

Voultapher · 6 months ago
Slightly fixed title: "One A.I. researcher is negotiating a $250M pay package in made up money that Meta can produce by typing zeros into a spreadsheet."

Paying with stock is a neat option for companies that are projected to grow (the very reason all of big tech desperately wants to be perceived as growing), since it costs them nothing upfront: they can dilute existing holdings at will by claiming the thing they're buying with newly issued stock will make the company that much richer.

So you as a potential investor should ask yourself, will this one employee make Meta worth $255M more? Assuming they are paying $5M in cash and the rest in stock.

tonyhart7 · 6 months ago
Yeah, but it's not made-up money though; Meta must buy back stock at a certain point, and while it's true that much of it would vest over a period of time, it still costs them in the future.

What's more embarrassing is that they have to do this to poach AI talent because they're massively behind in the AI race, even though they're literally still the kings of social media (FB, Instagram, WhatsApp, etc.).

They should do better given how much data, money, and resources they have, tbh.

Voultapher · 6 months ago
I think you are under a misconception as to how this works. Let's say Meta wants to give this person $100M in stock; they don't have to buy $100M worth of stock when the grant vests. What they do is go into their accounting software, look at how many shares they have (call this amount X), and increase the total number of shares from X to X+Y, where Y is worth $100M at the current valuation. They never need to spend any cash to do this. It's literally money they can print whenever they want.
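A sketch of that issuance mechanic. The share count, price, and grant size below are entirely hypothetical:

```python
# Stock-based pay via new share issuance: no cash leaves the company;
# existing holders are diluted instead. All numbers are hypothetical.

def issue_shares_for_grant(shares_outstanding: float, share_price: float,
                           grant_value: float) -> tuple[float, float]:
    """Return (new shares issued, dilution borne by existing holders)."""
    new_shares = grant_value / share_price
    dilution = new_shares / (shares_outstanding + new_shares)
    return new_shares, dilution

# Hypothetical: 2.5B shares outstanding at $500/share, a $100M grant.
new_shares, dilution = issue_shares_for_grant(2.5e9, 500.0, 100e6)
print(f"Issue {new_shares:,.0f} shares; existing holders diluted {dilution:.4%}")
# 200,000 new shares; dilution on the order of 0.008% of the company.
```

The point the sketch makes: the grant's cost shows up as a tiny dilution of every existing share, not as a cash outflow.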
meekaaku · 6 months ago
Why the negativity? No one bats an eye when Ronaldo/Messi or Steph Curry or other top athletes get insane salaries.

These AI researchers will probably have far more impact on society (good or bad, I don't know) than the athletes, and the people who pay them (i.e. Zuck et al.) certainly think it's worth paying them this much because they provide value.

prewett · 6 months ago
I'm going with envy. Athletics is a completely different skill from software, and one that is looked down on by posters here, judging by the frequent use of "sportsball". "Sportsball" players make huge salaries? Whatever, not my thing, that's for normies. But when software researchers make 1000x my salary? Now it's more personal. Surely they are not 1000x as good as me. It seems unlikely that this guy is 1000x as skilled as the average senior developer, so there's some perceived unfairness, too.

But I counsel a different perspective: it's quite remunerative to be selling tulips when there's a mania on!

cosmic_cheese · 6 months ago
It may be envy, but I’m still not sure a direct comparison makes much sense, given how much of a different creature engineering LLMs is from what most devs are doing.

I think negative feelings are coming from more of a “why are they getting paid so much to build a machine that’s going to wreck everything” sort of angle, which I find understandable.

wiseowise · 6 months ago
> Surely they are not 1000x as good as me. It seems unlikely that this guy is 1000x as skilled as the average senior developer

Will never understand the logic. They are literally better than an average senior dev if they've been offered a $250M package.

gherkinnn · 6 months ago
My personal negativity stems from Meta in particular having a negative net impact on society. And no small one either. Everything Zuckerberg touches turns to poison (basically King Midas in reverse). And all that money, all that progress, is directed towards the detriment of everyone but a few.

In contrast, a skilled football player lands somewhere between neutral and positive, as at the very least they entertain millions of people. And I'm saying that as someone who finds football painfully dull.

ehnto · 6 months ago
They do bat an eyelid, many leagues even introduce salary caps in order to quell the negative side effects of insane salaries in sports.
IncreasePosts · 6 months ago
Salary caps are more about keeping smaller clubs competitive. Is that really the case here? I think if this guy's company was acquired for $1B and he made $250M from the sale, people wouldn't be surprised at all.

meekaaku · 6 months ago
OK, maybe they bat an eyelid,

but I don't see news articles treating athletes with such negativity, citing their young age, etc.

etempleton · 6 months ago
Sports teams pay Ronaldo, Messi, and Curry because they win games and that puts fans in seats and attracts sponsors that pay those teams money and turn a profit.

When someone has a successful business model that offsets the incredible costs, let me know; for now it is all hypothetical.

akra · 6 months ago
I think the reason for the negativity in this forum (and other threads I've seen over the past few months) is that people are engaged with AI but, deep down, seem unhappy with its direction even as they feel forced to adapt. That negativity spreads to the people winning at this, which is common human nature. At least that's the impression I'm getting here and elsewhere. The most commented articles on HN these days are about AI (e.g. an OpenAI model, or some blogger writing about Claude Code getting 500+ comments), which shows a very high level of emotional engagement, with the typical offensive and defensive attitudes between people who benefit or lose from this. General old-school software tech articles are drowned out in comparison; AI is taking all the oxygen out of the room.

My anecdotal observation from talking to people: most tech cycles I've seen have hype/excitement, but this is the first one where I've seen a large amount of fear/despair. From loss of jobs, to automating all the "good stuff", to enriching only the privileged, people are worried. As loss-averse animals, fear is usually more effective for engagement, especially if it means losing what came before; people are engaged but, I suspect, negative towards the whole AI thing in general even if they won't say so on the record. Fear also creates a singular focus: when you are threatened/anxious it's harder to engage with other topics, and it makes you want to see the AI trend fail. That paints AI researchers not just negatively, but as people changing their own profession/world for the worse, which doesn't elicit a positive response.

And for the others, even if they don't have this emotional engagement, the fact that AI is drowning out everything else can be annoying to some tech workers as well. Other tech talks, articles, and research are just silent in comparison.

YMMV; this is just my current anecdotal observations in my limited circle but I suspect others are seeing the same.

jdcasale · 6 months ago
Anyone on earth can completely and totally ignore football and it will have zero consequences for their life.

The money here (in the AI realm) is coming from a handful of oligarchs who are transparently trying to buy control of the future.

The difference between the two scenarios is... kinda obvious don't you think?

quonn · 6 months ago
Ronaldo competes in a sport that has 250 million players worldwide (mostly for leisure purposes), many of whom have practiced daily since childhood, and he still comes out on top.

Are there 250 million AI specialists and the ones hired by Meta still come out on top?

therealdrag0 · 6 months ago
Huh, the pool being so small is exactly why they're fought over. There's tiering in research through papers and products built. Even if the tiering is wrong, if you can monopolize the talent you strike a blow against competitors.

meekaaku · 6 months ago
I bet there are more professional footballers than AI researchers, hence AI researchers will tend to get paid more.

Also, many more people are affected by whatever AI is being developed/deployed than there are worldwide football viewers.

Top 5 football leagues have about 1.5 billion monthly viewers. Top 5 AI companies (Google, OpenAI, Meta, etc.) have far more monthly active users.

mutatio · 6 months ago
Crab mentality: the closer the proximity to your own profession or place in society, the more resentment/envy. This is a win for some of us in tech, but it's not us, so we cannot allow it! The article even mentions the age of "24" as if someone that age is inherently undeserving.