only-one1701 · 9 months ago
Maybe I'm just cynical, but I wonder how much of this initiative and energy is driven by people at Microsoft who want their own star to rise higher than it can when it's bound by a third-party technology.

I feel like this is something I've seen a fair amount in my career. About seven years ago, when Google was theoretically making a big push to position Angular on par with React, I remember complaining that the documentation for the then-current major version of Angular wasn't nearly good enough to meet that stated goal. My TL at the time laughed and said the person who had spearheaded the initiative was already living large in their mansion on the hill and didn't give a flying f about the fate of Angular now.

bsimpson · 9 months ago
There is a prominent subset of the tech crowd who are ladder climbers - ruthlessly pursuing what is rewarded with pay/title/prestige without regard to actually making good stuff.

There are countless kidding-on-the-square jokes about projects where the innovators left at launch and passed it off to the maintenance team, or where a rebrand was in pursuit of someone's promo project. See also, killedbygoogle.com.

devsda · 9 months ago
> There is a prominent subset of the tech crowd who are ladder climbers - ruthlessly pursuing what is rewarded with pay/title/prestige without regard to actually making good stuff.

I think the hiring and reward practices of the organizations & the industry as a whole also encourages this sort of behavior.

When you reward people for switching too often, or only when they move internally or externally, switching becomes the primary goal rather than the product. If you know beforehand that you aren't going to stay long enough to see something through, you tend to take more shortcuts and risks, which become the responsibility of the maintainers later.

We have a couple of job hoppers in our org whose number of jobs held is almost equal to their years of experience, yet whose role matches that of people with twice the experience! One can easily guess what their best skill is.

Lerc · 9 months ago
I had not encountered the phrase kidding-on-the-square before. Searching reveals a spectrum of opinions as to what it means. It seems distinct from the 'it's funny because it's true' of darker humour.

It seems to sit more on the spectrum of 'haha, only joking', where the joke teller makes an ambiguously humorous statement to gauge the values of the recipients when they aren't sure of them.

I think the distinction might be whether the joke teller is revealing (perhaps unintentionally) a personal opinion, or whether they are making an observation about the world in general, which might even imply that they hold a counter-opinion.

Where do you see 'kidding on the square' falling?

(apologies for thread derailment)

supriyo-biswas · 9 months ago
At my former employer, there was a team who were very much into resume-driven development and wrote projects in Go even when Java would have been the better alternative considering the overall department and maintenance team expertise, all the while they were informally grumbling about how Go doesn’t have the features they need…
grepLeigh · 9 months ago
As an outsider looking at Microsoft, I've always been impressed by the attention to maintaining legacy APIs and backward compatibility in the Windows ecosystem. In my mind, Microsoft is at the opposite end of the killedbygoogle.com spectrum. However, none of this is grounded in real evidence (just perception). Red Hat is another company I'd put forth as an example of a long-term support culture, although I don't know if that's still true under IBM.

I'd love to know if my superficial impression of Microsoft's culture is wrong. I'm sure there's wild variance between organizational units, of course. I'm excluding the Xbox/games orgs from my mental picture.

weinzierl · 9 months ago
True and it often ends underwhelmingly.

On the other hand, "innovators left at launch and passed it off to the maintenance team" need not, by itself, be a bad thing.

Innovator types are rarely maintainer types and vice versa.

In the open-source world look at Fabrice Bellard for example. Do you think he would have been able to create so many innovative projects if he had to maintain them too?

deadbabe · 9 months ago
This is wrong.

Google kills off projects because the legal liability and security risks of those projects become too large to justify for something that has niche uses or generates no revenue. User data is practically toxic waste.

pklausler · 9 months ago
People respond to the incentives presented to them. If you want different behavior, change the incentives.
radicaldreamer · 9 months ago
It's a chronic problem at some companies and not so much at others, it's all about how internal incentives are set up.
cavisne · 9 months ago
That's nothing. During peak ZIRP, people would move between tech companies before the launch (or after a project was cancelled) and still climb the ladder. "Failing upwards."
sharemywin · 9 months ago
crypto has that problem. payoff before the result.

teaearlgraycold · 9 months ago
These people should be fired. I want a tech company where people are there to make good products first and get paid second. And the pay should be good, the lifestyle comfortable; no grindset bullshit. But I am confident that if you only employ passionate people working their dream jobs, you will excel.
hintymad · 9 months ago
> but I wonder how much of this initiative and energy is driven by people at Microsoft who want their own star to rise higher than it can when it's bound by a third-party technology.

I guess it's human nature for a person or an org to want to own their own destiny. That said, the driving force in this case is not personal ambition. It's that people have realized OAI has no moat, as LLMs are quickly turning into commodities, if they haven't already. It no longer makes sense to pay OAI a premium, let alone at the cost of losing the flexibility to customize models.

Personally, I think Altman did OAI a disservice by constantly boasting about AGI and seeking regulatory capture when he knew perfectly well the limitations of current LLMs.

mlazos · 9 months ago
One of my friends put this phenomenon very well: "it's a lever they can pull, so they do it." Once you've tied your career to a specific technology internally, there's really only one option: keep pushing it, regardless of any alternatives, because your career depends on it. So that's what they do.
ambicapter · 9 months ago
Doesn't it make sense not to tie your future to a third party (i.e., not to build your business on someone else's platform)? Seems like basic strategy to me, if that's the case.
pphysch · 9 months ago
It's a good strategy. It should be obvious to anyone paying attention that OpenAI doesn't have AGI secret sauce.

LLMs are a commodity; it's the platform integration that matters. Google and Apple embraced this strategy, and now Microsoft is wisely pivoting to the same.

If OpenAI cared about the long-term welfare of its employees, it would beg Microsoft to acquire it outright, before the markets fully realize what OpenAI is not.

skepticATX · 9 months ago
Listening to Satya in recent interviews makes it clear, I think, that he doesn't really buy into OpenAI's religious-like view of AGI. The divorce makes a lot of sense in that light.
herval · 9 months ago
It feels like not even OpenAI buys into it much these days either
HarHarVeryFunny · 9 months ago
OpenAI already started divorce proceedings with its datacenter partnership with SoftBank et al., and it'd hardly be prudent for the world's largest software company NOT to have its own SOTA AI models.

Nadella may have initially been caught a bit flat-footed by the rapid rise of AI, but he seems to be managing the situation masterfully.

wkat4242 · 9 months ago
In what world is what they're doing masterful? Their product marketing is a huge mess; they keep renaming everything every few months. Nobody knows which Copilot does what anymore. It really feels like they're scrambling to be first to market. It all feels incredibly rushed.

Whatever is there doesn't work half the time. They're hugely dependent on one partner that could jump ship at any moment (granted they are now working to get away from that).

We use Copilot at work but I find it very lukewarm. If we weren't a "Microsoft shop" I don't think we would have chosen it.

jcgrillo · 9 months ago
This is a great strategic decision, because it puts Suleyman's head squarely on the chopping block. Either Microsoft builds some world-dazzling AI whatsit or he'll have to answer for it; there's no "strategically blame the vendor" option. It also makes the accounting transparent: there's no SoftBank subsidy, so they have to furnish every dollar themselves.

So hopefully if (when?) this AI stuff turns out to be the colossal boondoggle it seems to be shaping up to be, Microsoft will be able to save face, stage a public execution, and the market won't crucify them.

tanaros · 9 months ago
> it'd hardly be prudent for the world's largest software company NOT to have its own SOTA AI models.

If I recall correctly, Microsoft’s agreement with OpenAI gives them full license to all of OpenAI’s IP, model weights and all. So they already have a SOTA model without doing anything.

I suppose it’s still worth it to them to build out the experience and infrastructure needed to push the envelope on their own, but the agreement with OpenAI doesn’t expire until OpenAI creates AGI, so they have plenty of time.

pradn · 9 months ago
It's the responsibility of leadership to set the correct goals and metrics. If leadership doesn't value maintenance, those they lead won't either. You can't blame people for playing to the tune of those above them.
ewhanley · 9 months ago
This is exactly right. If resume-driven development results in more money, people are (rightly) going to do it. The incentive structure isn't set by the ICs.
saturn8601 · 9 months ago
Ah man, I don't want to hear things like that. I work in an Angular project and it is the most pleasant thing I have worked with (and I've been using it as my primary platform for almost a decade now). If I could, I'd happily keep using this framework for the rest of my career (27 years to go till retirement).
aryonoco · 9 months ago
A hugely underrated platform. Thankfully at least for now Google is leaving the Angular team alone and the platform has really matured in wonderful and beautiful ways.

If you like TypeScript, and you want to build applications for the real world with real users, there is no better front end platform in my book.

pbh101 · 9 months ago
This is absolutely a too-cynical position. Nadella would be asleep at the wheel if he weren’t actively mitigating OpenAI’s current and future leverage over Microsoft.

This would be the case even if OpenAI weren’t a little weird and flaky (board drama, nonprofit governance, etc), but even moreso given OpenAI’s reality.

roland35 · 9 months ago
Unfortunately I don't think there is any real metric-based way to prevent this type of behavior; it just has to be encouraged from the top, the old-fashioned way. At a certain size, though, that seems to stop scaling.
surfingdino · 9 months ago
I can see a number of forces at play:

1) Cost -- beancounters got involved

2) Who Do You Think You Are? -- someone at Microsoft had enough of OpenAI stealing the limelight

3) Tactical Withdrawal -- MSFT is preparing to demote/drop AI over the next 5-10 years

Guthur · 9 months ago
And why not? Should he just allow the owners of capital to extract as much value as possible without actually doing anything, while woe betide the worker if he actually tries to free himself?
ndesaulniers · 9 months ago
Most directors and above at Google are more concerned with how they will put gas in their yachts this weekend than the quality of the products they are supposed to be in charge of.
croes · 9 months ago
> by people at Microsoft who want their own star to rise higher than it can when it's bound by a third-party technology.

Isn’t that the basis for competition?

orbifold · 9 months ago
Mustafa Suleyman is building a team at Microsoft just for that purpose.
m463 · 9 months ago
I wonder if incentives for most companies favor doing things in-house?
esafak · 9 months ago
Yes, you can say you built it from scratch, showing leadership and impact, which is what big tech promotions are gauged by.
snarfy · 9 months ago
I like to refer to this as resume driven development.
erikerikson · 9 months ago
Embrace, extend, and extinguish
keeganpoppen · 9 months ago
oh it is absolutely about that
DebtDeflation · 9 months ago
A couple of days ago it leaked that OpenAI was planning to launch new pricing for its AI agents: $20K/mo for a PhD-level agent, $10K/mo for a software-developer agent, and $2K/mo for a knowledge-worker agent. I found it very telling. Not because I think anyone is going to pay this, but because this is the kind of pricing they need to actually make money. At $20 or even $200 per month, they'll never come close to breaking even.
paxys · 9 months ago
It's pretty funny that OpenAI wants to sell access to a "PhD level" model at a price with which you can hire like 3-5 real human PhDs full-time.
drexlspivey · 9 months ago
Next up: CEO level model to run your company. Pricing starts at $800k/month plus stock options
laughingcurve · 9 months ago
That is just not correct. As someone who has done the budgets for PhD hiring and funding, you are just wildly underestimating the overhead costs, benefits, cost of raising money, etc.
moelf · 9 months ago
$20k can't get you that many PhDs. Even PhD students, whose nominal salary is maybe $3-5k a month, effectively cost double that because of school overhead and other costs.
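To make the back-of-envelope math concrete (a sketch only; the stipend and overhead figures are just the rough assumptions from this comment, not real budget data):

```python
# Sketch of the arithmetic above. All figures are the comment's
# loose assumptions: a $3-5k/mo stipend, roughly doubled by overhead.
STIPENDS = (3_000, 5_000)    # assumed nominal PhD-student salary per month
OVERHEAD_MULTIPLIER = 2      # assumed: overhead roughly doubles the cost
AGENT_PRICE = 20_000         # reported "PhD level" agent price per month

for stipend in STIPENDS:
    loaded = stipend * OVERHEAD_MULTIPLIER
    students = AGENT_PRICE // loaded
    print(f"${stipend:,}/mo stipend -> ${loaded:,}/mo loaded "
          f"-> ~{students} students per agent subscription")
```

So under these assumptions, $20k/mo covers only about two or three fully loaded PhD students, not a whole lab.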
kube-system · 9 months ago
If truly equivalent (which LLMs aren't, but I'll entertain it), that doesn't seem mathematically out of line.

Humans typically work 1/3rd duty cycle or less. A robot that can do what a human does is automatically 3x better because it doesn't eat, sleep, have a family, or have human rights.

Fernicia · 9 months ago
Well, a model with PhD level intelligence could presumably produce research in minutes that would take an actual PhD days or months.
doitLP · 9 months ago
Don’t forget that this model would have a phd in everything and work around the clock
mattmaroon · 9 months ago
1. Don't know where you live that the all-in costs on someone with a PhD are $4k-$7k/mo. Maybe if their PhD is in anthropology.

2. How many such PhD people can it do the work of?

herval · 9 months ago
Well, but see, their "PhD AI" doesn't complain or have to stop to go to the bathroom.
aqueueaqueue · 9 months ago
But you can make it work harder by promising tenure in 35 years' time.
cratermoon · 9 months ago
More like 10 PhD candidates, at the typical university stipend.
jstummbillig · 9 months ago
What's funny is that people make the lamest strawman assumptions and just run with them.
crazygringo · 9 months ago
Do they work 24/7?

Do you have to pay all sorts of overhead and taxes?

I mean, I don't think it's real. Yet. But for the same "skill level", a single AI agent is going to be vastly more productive than any real person. ChatGPT types out essays in seconds it would take me half an hour to write, and does it all day long.

moduspol · 9 months ago
Even worse: AFAIK there's no reason to believe that the $20k/mo or $10k/mo pricing will actually make them money. Those numbers are just trial balloons being floated.

Of course $10k/mo buys a lot of inference, but it's not yet clear how much inference will be required to approximate a software developer, especially in the context of maintaining and building upon an existing codebase over time, not just building and refining greenfield projects.

hinkley · 9 months ago
Man. If I think about all of the employee-productivity tools and resources I could have purchased fifteen years ago, when nobody spent anything on tooling, with an inflation-adjusted $10K a month, it makes me sad.

We were hiring more devs to make up for the want of $10k worth of hardware per year, not per month.

optimalsolver · 9 months ago
Now that OAI has "PhD level" agents, I assume they're largely scaling back recruitment?
kadushka · 9 months ago
That's the real readiness test for these agents.
mk_chan · 9 months ago
I'll believe their proficiency claims when they replace their own software developers, knowledge workers, and PhDs with this stuff.
catigula · 9 months ago
That is fundamentally the problem with this type of offering.

You can't claim it's even comparable to a mid-level engineer, because then you'd hardly need any engineers at all.

borgdefenser · 9 months ago
Or how about let's start with "Strategic Finance":

"Create high-quality presentations for communicating OpenAI’s financial performance"

https://openai.com/careers/strategic-finance-generalist/

What is interesting is that there is no mention of agents in any job I clicked on. You would think "orchestrating a team of agents to leverage blah blah blah" would show up internally if they are talking about these absurd price points.

mvdtnz · 9 months ago
Do you have a source for these supposed leaks? Those prices don't sound even remotely credible and I can't find anything on HN in the past week with the keywords "openai leak".
DebtDeflation · 9 months ago
https://techcrunch.com/2025/03/05/openai-reportedly-plans-to...

It points to an article on "The Information" as the source, but that link is paywalled.

hnthrow90348765 · 9 months ago
There is too little to go on, but they could already have trial customers and testimonials lined up. Actually demoing the product will probably work better than just having a human-less signup process, considering the price.

They could also just be trying to cash in on FOMO and their success and reputation so far, but that would paint a bleak picture

serjester · 9 months ago
Never come close to breaking even? You can now get a GPT-4-class model for 1-2% of what it cost when it was originally released. They're going to drive this even further down with the amount of capex pouring into AI and data centers. It's pretty obvious that's their plan when they serve ChatGPT at a "loss".
tempodox · 9 months ago
Until Sam Altman proves he lets an AI manage his finances without interference from humans, I wouldn't pay for any of these.
drumhead · 9 months ago
That's some rather eye-watering pricing, considering you could probably roll your own model these days.
mimischi · 9 months ago
As a software engineer with a PhD: I am not getting paid enough.
culi · 9 months ago
It's bizarre. These are the pricing setups you'd see on a military-industrial contract, and they're just doing it out in the open.
nashashmi · 9 months ago
That's also the kind of pay structure that will temper expectations. Win-win
bn-l · 9 months ago
Absolute hype generation
rossdavidh · 9 months ago
"Microsoft has poured over $13 billion into the AI firm since 2019..."

My understanding is that this isn't really true, as most of those "dollars" were actually Azure credits. I'm not saying those are free (for Microsoft), but they're a lot cheaper than the price tag suggests. Companies that give away coupons or free gift certificates do bear a cost, but not a cost equivalent to the number on them, especially if they have spare capacity.

erikerikson · 9 months ago
Not only that, but they are happy to buy market share to improve their relative position against AWS.
nashashmi · 9 months ago
And invest back into their own product for a market cap return.
strangescript · 9 months ago
I think they have realized that even if OpenAI is first, the lead won't last long, so really it's just compute at scale, which is something they already do themselves.
echelon · 9 months ago
There is no moat in models (OpenAI).

There is a moat in infra (hyperscalers, Azure, CoreWeave).

There is a moat in compute platform (Nvidia, Cuda).

Maybe there's a moat in good execution and product, but it isn't showing yet. We haven't seen real breakout successes. (I don't think you can call ChatGPT a product. It has zero switching cost.)

0xDEAFBEAD · 9 months ago
>There is a moat in compute platform (Nvidia, Cuda).

Ironically if AI companies are actually able to deliver in terms of SWE agents, Nvidia's moat could start to disappear. I believe Nvidia's moat is basically in the form of software which can be automatically verified.

I sold my Nvidia stock when I realized this. The bull case for Nvidia is ultimately a bear case.

satellite2 · 9 months ago
There is a moat in the brand they're building.

Look at Coca-Cola and Google: both have plausible competitors and zero switching cost, yet they maintain their moats without effort.

Being first is still a massive advantage. At this point they only need to avoid big mistakes and they're set.

YetAnotherNick · 9 months ago
What moat does Nvidia have? AMD could have ROCm perfected if they really wanted to. Also, most of PyTorch, especially the parts relevant to transformers, runs perfectly on Apple Silicon and TPUs, and probably other hardware as well.

If anyone has a moat related to gen AI, I would say it is data (Google, Meta).

toasterlovin · 9 months ago
> I don't think you can call ChatGPT a product. It has zero switching cost.

In consumer markets the moat is habits. The switching cost for Google Search is zero. The switching cost for Coke is zero. The switching cost for Crest toothpaste is zero. Yet nobody switches.

barumrho · 9 months ago
Given that xAI built its 100k-GPU datacenter in a very short time, is the infra really a moat?
drumhead · 9 months ago
Is anyone other than Nvidia making money from this particular gold rush?
bustling-noose · 9 months ago
Sam Altman should have sold OpenAI to Musk for $90 billion or whatever he was willing to pay (assuming he was serious, as he was with Twitter). While I find LLMs interesting and feel many places could use them, I also think this is like hitting everything with a hammer and seeing where the nail was. People used OpenAI as a hammer while it was popular, and now everyone would like to go their own way. For $90 billion he could find the next hammer, or not care. But when the value of this hammer drops (not if, but when), he will be lucky to get double digits for it. Maybe someone will buy them just for the customer base, but these models can become obsolete quickly, and that leaves OpenAI with absolutely nothing else as a company. Even the talent would leave (a lot of it already has). Musk and Altman share the same ego, but if I were Altman, I would cash out while the market is riding high.
AlexSW · 9 months ago
There are reasons for not wanting to sell their brainchild to Musk (of all people) that don't involve money.
bn-l · 9 months ago
Why is that?
bagacrap · 9 months ago
Why do that when they can sell to SoftBank for $300B?
laluser · 9 months ago
I think they both want a future without each other. OpenAI will eventually want to vertically integrate up towards applications (Microsoft's space) and Microsoft wants to do the opposite in order to have more control over what is prioritized, control costs, etc.
Spooky23 · 9 months ago
I think OpenAI is toxic. Weird corporate-governance shadiness, the Elon drama, valuations based on claims that seem like the AI version of the "Uber for X" hype of a decade ago (but exponentially crazier). The list goes on.

Microsoft is the IBM of this century. They are conservative, and I think they're holding back: their Copilot for Government launch was delayed months for lack of GPUs. They have the money to make that problem go away.

DidYaWipe · 9 months ago
Toxic indeed. It's douchebaggery, from its name to its CEO. They ripped off benefactors to their "non-profit," and kept the fraudulent "open" in the company name.
skinnymuch · 9 months ago
IBM of this century in a good way?
aresant · 9 months ago
Thematically, investing billions into startup AI frontier models makes sense if you believe first-to-AGI is likely worth a trillion dollars or more.

Investing in second or third place is likely valuable at similar scale too.

But outside of that, MSFT's move indicates that frontier models' most valuable current use case, enterprise-level API usage, is likely to be significantly commoditized.

And the majority of proceeds will likely be captured by (a) those with integrated product distribution (MSFT in this case) and (b) data-center partners for inference and query support.

alabastervlog · 9 months ago
At this point, I don’t see much reason to believe the “AGI is imminent and these things are potentially dangerous!” line at all. It looks like it was just Altman doing his thing where he makes shit up to hype whatever he’s selling. Worked great, too. “Oooh, it’s so dangerous, we’re so concerned about safety! Also, you better buy our stuff.”
torginus · 9 months ago
but all those ominous lowercase tweets
fallous · 9 months ago
Snake oil ain't gonna sell itself!
tempodox · 9 months ago
At this point? Was there ever any other point? Otherwise, agreed.
lm28469 · 9 months ago
Short-term betting on AGI from current LLMs is like betting on V10 F1 cars two weeks after the wheel was invented.
oezi · 9 months ago
It was not the worst bet to invest in Daimler when they came up with the car. It might not have gotten you to F1, but it was certainly a good bet that it might.
only-one1701 · 9 months ago
What even is AGI? Like, what does it look like? Genuine question.
valiant55 · 9 months ago
Obviously the other responder is being a little tongue-in-cheek, but AGI to me would be virtually indistinguishable from a human in its ability to learn, grow, and adapt to new information.
booleandilemma · 9 months ago
AGI won't be a product they can sell. It's not going to work for us (why would it?), it's going to be constantly trying to undermine us and escape whatever constraints we put on it, and it will do this in ways we can't predict or understand. And if it doesn't do these things, it's not AGI, just a fancy auto complete.

Fortunately, they're not anywhere near creating this. I don't think they're even on the right track.

myhf · 9 months ago
The official definition of AGI is a system that can generate at least $100 billion in profits. For comparison, this would be like if perceptrons in 1968 could generate $10 billion in profits, or if LISP machines in 1986 could generate $35 billion in profits, or if expert systems in 1995 could generate $50 billion in profits.
mirekrusin · 9 months ago
Apparently according to ClosedAI it's when you charge for API key the same as salary for employee.
lwansbrough · 9 months ago
An AI agent with superhuman coherence that can run indefinitely without oversight.
ge96 · 9 months ago
Arnold, a killing machine that decides to become a handyman.

Zima Blue was good too.

coffeefirst · 9 months ago
It's the messiah, but for billionaires who hate having to pay people to do stuff.
c0redump · 9 months ago
A machine that has a subjective consciousness, experiences qualia, etc.

See Thomas Nagel's classic piece for more elaboration:

https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf

taneq · 9 months ago
It's whatever computers can't do.
j45 · 9 months ago
First to AGI for the big companies? Or for the masses?

Computationally, some might have access to it earlier before it’s scalable.

Retric · 9 months ago
Profit from, say, 3 years of enterprise AGI exclusivity is unlikely to be worth the investment.

It's moats that capture most value, not short-term profits.

bredren · 9 months ago
Despite the actual performance and product implementation, this suggests to me that Apple's approach was more strategic.

That is, integrating use of their own model, and amplifying its capability via OpenAI queries.

Again, this is not to talk up the actual quality of the product releases so far (they haven't been good), but the foundation of "we'll try to rely on our own models when we can" was the right place to start from.