HelloUsername · 7 months ago
Related:

Watching AI drive Microsoft employees insane https://news.ycombinator.com/item?id=44050152 21-may-2025 544 comments

zihotki · 7 months ago
It's not related, it's actually the original source of this article. The submitted article doesn't add anything to the source apart from advertisements.
csallen · 7 months ago
AI is a tool.

Just like any other tool, there are people who use it poorly, and people who use it well.

Yes, we're all tired of the endless parade of people who exaggerate the abilities of (current day) AI and claim it can do more than it can do.

But I'm also getting tired of people writing articles that showcase people using AI poorly as if that proves some sort of point about its inherent limitations.

Man hits thumb with hammer. Article: "Hammers can't even drive a simple nail!"

nessbot · 7 months ago
It's not a tool, it's a SaaS. I own and control my tools. I think a John Deere tractor loses its "tool" status when you can't control it. Sure, there are the local models, but those aren't what the vast majority of folks are using or pushing.
steventruong · 7 months ago
This is an incredibly weird view to me. If I borrow a hammer from my neighbor, although I don’t own the hammer, it doesn’t suddenly make the hammer not a tool. Associating a tool with the concept of ownership feels like an odd argument to make.
bcyn · 7 months ago
Many SaaS products are tools. I'm sure when tractors were first invented, people felt that they didn't "control" them compared to directly holding shovels and manually doing the same work.

Not to say that LLMs are at the same level of reliability as tractors vs. manual labor, but I just think your classification of what is and isn't a tool is not a fair argument.

eddd-ddde · 7 months ago
Is your email not a tool since you likely pay some cloud provider for it and the way it works is largely outside your control?

Something can be a SaaS, and a useful tool, at the same time.

hoppp · 7 months ago
A tool and a SaaS. It can be both, they are not mutually exclusive.
johnisgood · 7 months ago
You can use it locally. FWIW many tools are SaaS, yet people have no trouble with that.
Jackson__ · 7 months ago
More like:

Man hits thumb with hammer. Hammer companies proclaim that hammers will be able to build entire houses on their own within the next few years [0].

[0] https://www.nytimes.com/2025/05/23/podcasts/google-ai-demis-...

csallen · 7 months ago
Okay, so write an article about how the hammer companies are dumb. Don't write an article about how hammers can't drive nails.

Why is this complex?

pempem · 7 months ago
This is a way better version of my comment.
JHer · 7 months ago
The television, the atom bomb, the cigarette rolling machine, and penicillin are also "just tools". They nevertheless changed our world entirely, for better or worse. If you ascribe the impact of AI to the people using AI, you will be utterly, completely bewildered by what is happening and what is going to happen.

croes · 7 months ago
> Yes, we're all tired of the endless parade of people who exaggerate the abilities of (current day) AI

You mean the people who create and sell these AIs.

You would blame the hammer or at least the manufacturer if they claimed the hammer can do it all by itself.

This is more a your-car-can-drive-without-supervision-but-it-hit-another-car case.

baxtr · 7 months ago
I think the difference is that a hammer manufacturer wouldn’t suggest that the hammer will replace the handyman with its next update.
deadlydose · 7 months ago
> You would blame the hammer or at least the manufacturer if they claimed the hammer can do it all by itself.

I wouldn't because I'm not stupid and I know what a hammer is and isn't capable of despite any claims to the contrary.

tedunangst · 7 months ago
What's the best way to insulate myself from the output of people using AI poorly?
benreesman · 7 months ago
It's increasingly a luxury to be a software engineer who is able to avoid some combination of morally reprehensible leadership harming the public, quality craftsmanship in software being in freefall, and ML proficiency being defined downwards to admit terrible uses of ML.

AI coding stuff is a massive lever on some tasks when used by experts. But it's not self-driving, and the capabilities of the frontier vendor stuff might be trending down; they're certainly not skyrocketing.

It's like any other tool: a compiler, an editor, a shell, even a browser, though I'd say build tools are the best analogy. Either you have chosen to become proficient or even expert, or you haven't and you rely on colleagues or communities that provide that expertise. Pick a project or a company: you know if you should be messing around with the build or asking a build person.

AI is no different. Claude 4 Opus just went GA and it's still in power-user tune; they don't have the newb/cost-control defaults dialed in yet, so it's really useful and probably will be for a few days, until they get the PID controller wired up to whatever a control vector is these days, and then it will tank to useless slop just like 3.7.

For a week I'll get a little boost in my output and pay them a grand and be glad I did, and then it will go back to worse than useless.

These guys only know one business plan.

OutOfHere · 7 months ago
> there are people who use it poorly, and people who use it well.

Precisely. AI needs appropriate and sufficient guidance to be able to write code that does the job. I make sure my prompts have all of the necessary implementation detail that the AI will need. Without this guidance, the expected result is not a good one.
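
As a rough illustration, a prompt with that level of detail might look something like this (an invented example, not from a real session):

    Add a retry_with_backoff decorator to utils/http.py.
    - Retry on ConnectionError and Timeout only, max 3 attempts.
    - Exponential backoff starting at 0.5s, doubling each attempt.
    - Log each retry at WARNING level with the attempt number.
    - Do not change the signatures of any existing functions.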

spookie · 7 months ago
This often doesn't scale well. And writing out all the necessary context is a bit harder than just programming it yourself.
dinfinity · 7 months ago
> AI needs appropriate and sufficient guidance to be able to write code that does the job.

Note that the example (shitty Microsoft) implementation was not able to properly run tests during its work, not even tests it had written itself.

If you have an existing codebase that already has plenty of tests and you ask AI to refactor something whilst giving it the access it needs to run those tests, it can already sometimes do a great job all by itself.

Good specification and documentation also do a lot, of course, but the iterative approach, with feedback on whether things are actually working as intended, is a game changer. Not surprisingly, it's also a lot closer to how humans do things.
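
For concreteness, here's a minimal sketch of that loop in Python. ask_model and apply_patch are hypothetical stand-ins for the LLM call and the patch application, not any particular product's implementation:

    import subprocess

    def run_tests():
        # Run the suite quietly; capture output so failures can be fed back.
        result = subprocess.run(["pytest", "-x", "-q"],
                                capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def refactor_with_feedback(task, ask_model, apply_patch, max_rounds=5):
        feedback = ""
        for _ in range(max_rounds):
            # Ask the model for a patch, given the task and any test failures.
            apply_patch(ask_model(task, feedback))
            ok, output = run_tests()
            if ok:
                return True    # tests pass; done
            feedback = output  # feed the failures into the next attempt
        return False           # give up after max_rounds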

despera · 7 months ago
A tool can very well be broken, though, or simply useless for anything but the most lightweight job (despite all the PR nonsense).

They call it a tool, and so people leave reviews like they would for any other tool.

dvfjsdhgfv · 7 months ago
> Just like any other tool, there are people who use it poorly, and people who use it well.

We are not talking about random folks here but about the largest software company, with a large stake in the most popular LLM, trying to show off how good it is. Stephen Toub is hardly a newbie either.

pier25 · 7 months ago
I'd agree with you if AI companies weren't relentlessly overhyping AI.
mplanchard · 7 months ago
Isn’t this exactly what the article says? It’s the entire thesis of the second half.
ruraljuror · 7 months ago
AI can be used as a tool, sure, but it is distinct from other technologies in that it is an agent, that is: it has or operates with agency.

There are many prominent people out there who are saying that AI will replace SWEs, not just saying that AI will be a tool added to the SWE tool belt.

Although it seems like you agree with the conclusion of the article you are criticizing, the context is much more complex than "AI is a tool like a hammer."

-__---____-ZXyw · 7 months ago
It is not "an agent" in the sense you are implying here: it does not will, want, or plan; none of those words apply meaningfully. It doesn't reason, or think, either.

I'll be excited if that changes, but there is absolutely no sign of it changing. I mean, explicitly: the possibility of thinking machines is where it was before this whole thing started. Maybe slightly higher, but more so because a lot of money is being pumped into research.

LLMs might still replace some software workers, or lead to some reorganising of tech roles, but for a whole host of reasons, none of which are related to machine sentience.

As one example: software quality matters less and less as users get locked in. If some juniors get replaced by LLMs and code quality plummets, causing major headaches and higher workloads for senior devs, managers will be skipping around happily as long as sales don't dip.

itishappy · 7 months ago
Improve your efficiency with this double headed hammer!

I drove 200 screws in one weekend using this hammer!

With hammers like these, who needs nails?

Hammers are all you need

Hammers deemed harmful

zkmon · 7 months ago
AI adoption is mostly driven from the top. What this means is that shareholders and regulators push CEOs to claim that the company is using AI. CEOs trickle this grand vision and its goals down, allocating funds and asking for immediate reports showing evidence of AI everywhere in the company. One executive went to the extent of saying that anyone not using AI in their work would face disciplinary action.

So the point is, it is not about whether AI can fix a bug or do something useful. It is about reporting and staying competitive by claiming. Just like many other reports, which don't have any specific purpose other than the reporting itself.

A few years back, I asked an architect who was authoring an architecture document who the audience for the document was. She replied that the target audience was the reviewers. I asked, does anyone use it after the review? She said she wasn't sure. And not surprisingly, the project, which took 3 years and a large budget to develop, was shelved after being live in prod for an hour, because the whole thing was done only for a press release saying the company had gone live with a new tech. They didn't lie.

sixtram · 7 months ago
Yesterday, I asked AI for help:

"Check my SQL stored procedure for possible logical errors." It found a join error that I didn't remember including in my SQL. After double-checking, I found that it had hallucinated a join that wasn't there at all and reported it as a bug. When I asked for more information, it apologized for adding it.

I also asked for some C# code with a regex. It compiled, but it didn't work: it had swapped the order of two string parameters. I had to copy and paste the code back to show why it didn't work, and only then did it realize that it had swapped the parameters.
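
That class of bug is easy to reproduce. Translated to Python for illustration (the original was C#, and this is a hypothetical snippet, not the model's actual output): re.sub takes (pattern, repl, string), and swapping the last two arguments still runs, just silently wrong.

    import re

    text = "phone: 555-1234"
    pattern = r"\d{3}-\d{4}"

    # Correct argument order: (pattern, replacement, input string)
    print(re.sub(pattern, "REDACTED", text))  # phone: REDACTED

    # Last two arguments swapped: no error, just useless output
    print(re.sub(pattern, text, "REDACTED"))  # REDACTED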

I asked for a command-line option to zip files in a certain way. It hallucinated a nonexistent option that would have been crucial. In the end, it turned out that it was not possible to zip the files the way I wanted.

My manager plans to open our Git repository for AI code and pull request (PR) review. I already anticipate the pain of reviewing nonsensical bug reports.

aerhardt · 7 months ago
I'm an experienced programmer, but I'm currently picking up C# for a master's course on Game AI. I appreciate having the LLMs at hand, but I am surprised by how much of a step down the quality of the code output is compared to Python.
NBJack · 7 months ago
A step down in terms of LLM performance? I think that is easily explained by the sheer bulk of articles, blogs, and open-source projects in Python compared to C#. I actually prefer the latter to the former, but I know it is still not as widely adopted.
craftkiller · 7 months ago
One of my teammates recently decided to use AI to explain a config option instead of reading the 3 sentences in the actual documentation. The AI told him the option did something that the documentation explicitly stated the option did not do. He then copied the AI's lies as a comment into our code base. If I hadn't read the actual documentation like our forefathers used to, that lie would have been copied around from one project to the next. I miss having coworkers who are more than a thin facade over an LLM.
yetihehe · 7 months ago
I will say it again and again: AI will replace developers like Excel and accounting programs replaced accountants.
tocs3 · 7 months ago
An old (2015) NPR Planet Money story about VisiCalc.

Episode 606: Spreadsheets! https://www.npr.org/sections/money/2015/02/25/389027988/epis...

The 1984 story it was inspired by (according to the episode description):

https://medium.com/backchannel/a-spreadsheet-way-of-knowledg...

There are of course still accountants.

mllev · 7 months ago
Didn't they, though? I'm sure accounting firms hired way more accountants back in the days of paper records. AI definitely won't rid society of the developer role, but there will certainly be fewer employment opportunities.
yetihehe · 7 months ago
There are already fewer employment opportunities, because big tech firms hired anyone they could and let them sit on unimportant things. At this moment, AI is used mostly as a convenient excuse to "trim the fat". Of course it's a catastrophe for the fired programmers, but the good ones will find a job or create a new one.
namaria · 7 months ago
Are there fewer accountants now than there were in 1985?

Also relevant, do companies find it hard to hire accountants now?

Joeboy · 7 months ago
Which is how? This is an honest question; I genuinely don't know what happened to all the people who used to be employed to do manual calculations. Or all the people who worked as typists, for that matter.
bgwalter · 7 months ago
I took it as irony, that is, accountants weren't replaced by Excel. There are of course thousands of articles right now saying that accountants will be replaced by AI.
sotix · 7 months ago
Do you mind clarifying your point? Are you implying Excel didn't replace accountants? As a CPA and software engineer, I can tell you that I left the accounting industry because accountants had been replaced. Wages are abysmally low for a field that required a graduate degree to sit for the CPA exam, and the industry has been in crisis mode with a shortage of entrants.

Accounting has probably been hit significantly harder by offshoring than by tools like Excel, but the market for it is not what I would consider healthy. Excel making it easier to use offshore talent is also a possibility. Further, the industry has been decimated by a lack of unions / organization. The AICPA allowing people in India and the Philippines to become CPAs has been devastating to US CPA wages.

Accordingly I find your original comparison of software engineering to accounting a bit concerning!

yetihehe · 7 months ago
I clarified in another comment[0].

> and the industry has been in a crisis mode with a shortage of entrants.

The accountants I know are well paid. Not every one of them, of course; some are bad at their job and some are good. Many are essentially working like freelance programmers. There will still be a lot of clients who can't even get AI to write simple programs for them, but who have some small programming work and some money to pay a freelancing programmer to create a small solution for them.

> Accordingly I find your original comparison of software engineering to accounting a bit concerning!

In my original post, I was saying that some programmers will indeed be replaced. The programming field will probably shift the way the accounting field did. It WILL be concerning for a lot of programmers, but programming as a field is here to stay, just like accounting.

[0] https://news.ycombinator.com/item?id=44086476

eastbound · 7 months ago
1000x more reporting requirements?
data-ottawa · 7 months ago
My experiences with Copilot are exactly like these threads: once it's wrong, you have to almost start fresh or take over, which can be a huge time sink.

Claude 4 Opus and Sonnet seem much better to me. The models needed alignment and feedback but worked fairly well. I know Copilot uses Claude, but for whatever reason I don't get nearly the same quality as when using Claude Code.

Claude is expensive: $10 to implement a feature, $2 to add some unit tests to my small personal project. I imagine large apps, or apps without a clear division of modules/code, will burn through tokens.

It definitely works as an accelerator but I don't think it's going to replace humans yet, and I think that's still a very strong position for AI to be in.

fuzzzerd · 7 months ago
Are those costs from the pay-as-you-go credit system, or is that on top of a Max subscription?

I've tinkered with pay-as-you-go, but I wonder if the higher cap on Max at $100/month would be worth it?

data-ottawa · 7 months ago
I bought $20 of pay as you go API credits to test out the models and how to use them.

I have not tried the $100/month subscription. If it's net cheaper than buying credits I would consider it, since that's basically 10 features per month.

NBJack · 7 months ago
Note we aren't really seeing the price reflect the true costs of using LLMs yet. Everyone is prioritizing adoption over sustainable business models. Time will tell how this pans out.
bee_rider · 7 months ago
The line:

> Become the AI expert on your team. Don't fight the tools, master them. Be the person who knows when AI helps and when it hurts.

Is something I've been wondering about. I haven't played with this AI stuff much at all, despite thinking it is probably going to be a genuinely interesting tool at some point. It just seems like it is currently a bit bad, and multiple companies have bet millions of dollars on the idea that it will eventually be quite good. I think that's a self-fulfilling prophecy; they'll probably get around to making it useful. But I wonder if it is really worthwhile to learn how to work around the limitations of the currently bad version?

Like, I get that we don’t want to become buggywhip manufacturers. But I also don’t want to specialize in hand-cranking cars and making sure the oil is topped off in my headlights…

blooalien · 7 months ago
If you want to play with "this A.I. stuff" and you have a halfway modern-ish graphics card in your PC or laptop (or even a somewhat modern-ish phone), there are a fair few ways to install and run smaller(ish) models locally on your own hardware, and a host of fancy graphical interfaces to them. I personally use ChatBox and Ollama with the Qwen2.5 models, IBM's Granite series models, and the Gemma models fairly successfully on a reasonably decent (couple of years old now) consumer-class gaming rig with an NVIDIA GeForce GTX 1660 Ti and 64 GB of RAM. There are also code editors like Zed or VSCode that can connect to Ollama and other local model runners, and if you wanna get really "nerdy" about it, you can pretty easily interface with all that fun junk from Python and script up your own interfaces and tools.
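
For the scripting side, here's a minimal sketch of talking to a local Ollama server from Python. It assumes the Ollama daemon is running on its default port and that you've already pulled the model (e.g. with "ollama pull qwen2.5"):

    import requests

    # Ollama's HTTP API listens on localhost:11434 by default.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen2.5",  # any model you've pulled locally
            "prompt": "Explain what a context window is, in one sentence.",
            "stream": False,     # one JSON response instead of a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])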
AstralStorm · 7 months ago
Except your toy model, or toy version of the model, will barely manage to talk to you, much less write code. I've done this experiment with a much beefier set of GPUs (3080 10 GB + 3060 12 GB), allowing me to run a model one step up in size.

It's not even comparable to the free tiers. I have no idea how big the machines or clusters running those are, but they must be huge.

I was very unimpressed with the local models I could run.

bee_rider · 7 months ago
This seems like becoming an expert at hand-cranking engines or writing your own Forth compiler back when compilers were just getting started.

My point of view, I guess, is that we might want to wait until the field is developed to the point where chauffeurs or C programmers (in this analogy) become a thing.

grogenaut · 7 months ago
It's rapidly evolving. If you were to master just one explicit revision of, say, Cursor, then I think your analogy would be correct. To me, though, it's more like: keep abreast of the new things, try new stuff, don't settle on a solution, let the wave push you forward. Don't be the coder who doesn't turn their camera on during meetings, coding away in a corner for another year or two before trying AI tools because "they're bad".

But this is the same for any tech that will span the industry. You have people who want to stay in their ways and those who are curious and moving forward, and at various times in their careers people may be one or the other. This is one of those changes where I don't think you get the option to defer.

cjalmeida · 7 months ago
Definitely don't dismiss it. While there are limitations, it's already very capable for a number of tasks. Tweaking it to be more effective is a skill in itself.
blooalien · 7 months ago
> Tweaking it to be more effective is a skill in itself.

^^^ This is actually one of the currently "in-demand" skills in "The Industry" right now... ;)

abletonlive · 7 months ago
It is worth it, and anybody who hasn't spent the past month using it has nothing useful to say or contribute; their knowledge of what these tools are capable of is already outdated.

I would bet about 90% of the people commenting on how useless LLMs are for their job are people who installed Copilot and tried it out for a few days at some point, not in the last month. They haven't even come close to exploring the ecosystem that's being built up right now by the community.

bee_rider · 7 months ago
The issue (if you aren't interested in AI in and of itself, but just in the final version of the tool it will produce) is that the AI companies advertise, on like a monthly basis, that they've come up with an incredible new version that will obsolete any skills you learned working around the previous version's limitations.

Like, you say these folks clearly tried the tool out too long ago… but, I mean, at the time they tried it, they could have found other comments just like yours, advertising that the tool really was revolutionary right then. So we can see where the skepticism comes from, right?