MorehouseJ09 · 4 months ago
“If it takes longer to explain to the system all the things you want to do and all the details of what you want to do, then all you have is just programming by another name,”

I think this is going to make the difference between junior and senior engineers even more drastic than it is today. It's really hard to know what/how to even describe real problems to these tools, and the people who invest the most in their tooling now are going to be the most successful. It's hard for someone who hasn't designed a large codebase already to do this in an AI-native way, because they don't yet have the experience of abstracting at the right level and things like that.

Today's equivalent: I've often found that some of the best engineers I know have insane setups with nvim or emacs. They invest in their toolchain, and are now bringing AI into it.

roxolotl · 4 months ago
That quote really perfectly encapsulates the challenge with these tools. There’s an assumption that code is inherently hard to write, and so if you could code in natural language it would save time. But code isn’t actually that hard to write. Sure, some people are genuinely bad at it, just like I’m genuinely bad at drawing, but with a bit of practice most people can be perfectly competent at it.

The hard part is the engineering: understanding and breaking down the problem, and then actually solving it. If all we gain from these tools is that we don’t have to write code by hand anymore, they are moderately useful, but they won’t really be a step change in software development speed.

elcritch · 4 months ago
It's not too different, in my opinion, from the skills needed to build complicated machinery like Boeing 747s, despite how much Wall Street and PHBs want to believe it's fungible. Having competent, experienced engineers on the ground level watching these processes and constantly revising and adapting to everything from personnel to material to vendor changes is so far irreplaceable.

Maybe if we get super AGI one day. Even then I suspect that, from a thermodynamics perspective, it might not be cost-effective, as you often need localized, on-site intelligence.

It's an interesting question, but I bet humans combined with AI tooling will remain cost-competitive for a long time, barring leaps in, say, quantum compute. After all, organic brains already operate at the atomic level and were honed in an extremely competitive environment for billions of years. The calories and resources required to create highly efficient, massively powerful neural compute had incredibly thin resource "margins", with huge advantages for any species that could exploit them.

anon7000 · 4 months ago
You hit the nail on the head too. Coding itself is very easy for anyone halfway decent in this career — and yet there were a ton of people in CS101 and even in later courses who struggled with things like for loops. It was very hard for them to succeed in this career.

What’s hard is coming up with the algorithm/system design, making the right choices that will scale and won’t become a maintenance nightmare, etc. And yeah, after almost a decade, I have picked up enough that I can at least write an outline of a solution that will work alright. But there are still so many tricky edge cases and scaling problems that make it hard to turn “alright” into “really good!”

Sure, AI can help… but it mostly helps with greenfield projects. It doesn’t know about the conversations on Slack & Jira from a year ago. It doesn’t know about the dozens of other systems and ways the project interacts with other parts of the business. It doesn’t know why whatever regurgitated approach won’t be a good fit for our specific use case. And elaborating all of that detail is not easy! Part of what makes you a good employee is the shit you picked up over the past several months & years that is now ingrained in your mind when you start working on new projects.

sublinear · 4 months ago
> some of the best engineers I know have insane setups with nvim or emacs. They invest in their toolchain, and are now bringing AI into it.

I find this likely, but totally irrelevant to their success now and in the future. AI tools are as much of a trivial choice as any other text editor feature. The resulting time spent and code quality are the same regardless of personal preferences.

thunky · 4 months ago
> It's really hard to know what/how to even describe real problems to these tools

I would argue that if you can't describe the problem in plain language then you don't have a very good chance of solving it with code or otherwise.

Personally I find that the act of describing the problem will often reveal a good solution... then it's just a matter of seeing if the LLM agrees with me or if it has a different idea (for better or worse).

majormajor · 4 months ago
As software grows, the problems grow such that the effort to completely describe the necessary changes in plain language is often not all that much shorter than the code, especially if you have a really, really good autocomplete for the boilerplate parts of the code. So an LLM being really good at tedious autocomplete leaves it with less marginal utility as a "write the whole thing" tool for certain types of work.

But the sneakier part of the problem is that as business rules get more complex it's usually harder to completely describe the problem in plain language than in a more formally specified language. For instance, plain-language "apply the user's promo code" doesn't capture the nasty if/else tree you might hit when you're deep in the codebase and see that there's already a bunch of restrictions on other types of promo codes and the product manager didn't think about which of those restrictions should apply to this new promo code. And at this point you're gonna need to use plain language to describe and refine the problem with the product manager, but if instead you were relying on an LLM to turn your short sentence into the right code, it might pick the wrong thing when it comes to modifying that existing code.
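
To make that concrete, here's a sketch of the tree hiding behind that one sentence. This is Go with hypothetical types and rules, purely for illustration, not any real promo system:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Hypothetical cart and promo types, just to make the hidden if/else tree visible.
type Cart struct {
	Subtotal     float64
	HasSaleItems bool
	IsFirstOrder bool
}

type Promo struct {
	ExpiresAt       time.Time
	MinSubtotal     float64
	FirstOrderOnly  bool
	StacksWithSales bool
}

// applyPromo is what "apply the user's promo code" actually expands to once
// the restrictions already living in the codebase are accounted for.
func applyPromo(cart Cart, p Promo) error {
	if time.Now().After(p.ExpiresAt) {
		return errors.New("promo expired")
	}
	if cart.Subtotal < p.MinSubtotal {
		return errors.New("subtotal below promo minimum")
	}
	if p.FirstOrderOnly && !cart.IsFirstOrder {
		return errors.New("promo valid on first order only")
	}
	if cart.HasSaleItems && !p.StacksWithSales {
		return errors.New("promo cannot be combined with sale items")
	}
	// Does the new promo inherit the gift-card exclusion? The per-region
	// rules? One use per account? The one-sentence spec never said.
	return nil
}

func main() {
	cart := Cart{Subtotal: 20, HasSaleItems: true}
	promo := Promo{ExpiresAt: time.Now().Add(24 * time.Hour), MinSubtotal: 10}
	if err := applyPromo(cart, promo); err != nil {
		fmt.Println("rejected:", err) // rejected: promo cannot be combined with sale items
	}
}
```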

Verdex · 4 months ago
> ...describe the problem in plain language...

Oh, I get it. They're saying you should be able to write it in C.

[Jokes aside, I would be interested in hearing from bridge builders, aerospace engineers, or nuclear scientists. Pretty sure they're using math and not 'plain language'.]

stocksinsmocks · 4 months ago
So the author is providing some personal annotations and opinions on a summary of a “new paper” which was actually published five months ago, which itself was a summary of research with the author’s personal annotations and opinions added? These are exactly the kind of jobs that I want AI to automate.
majormajor · 4 months ago
It's more likely that AI will let more people "write" random blogs and articles about things they haven't actually researched sufficiently... you're gonna get more spam, not less.
zaphirplane · 4 months ago
The solution is more AI: AI in the browser to tell you this link is a rubbish AI blog, AI in the mail client to tell you which email content to ignore, and AI in the fridge and in the bathroom. Until each person is so tightly locked into a group bubble that they have no idea how colourful the world is, or how many different personalities exist that, for the most part, bring something to the table.
tempodox · 4 months ago
Having your opinions and personal remarks automated by “AI” sounds really smart.
pamelafox · 4 months ago
Both humans and coding agents have their strengths and weaknesses, but I've been appreciating help from coding agents, especially with languages or frameworks where I have less expertise, and the agent has more "knowledge", either in its weights or in its ability to more quickly ingest documentation.

One weakness of coding agents is that sometimes all they see is the code, and not the outputs. That's why I've been working on agent instructions/tools/MCP servers that empower the agent with all the same access that I have. For example, this is a custom chat mode for GitHub Copilot in VS Code: https://raw.githubusercontent.com/Azure-Samples/azure-search...

I give it access to run code, run tests and see the output, run the local server and see the output, and use the Playwright MCP tools on that local server. That gives the agent almost every ability that I have - the only tool that it lacks is the breakpoint debugger, as that is not yet exposed to Copilot. I'm hoping it will be in the future, as it would be very interesting to see how an agent would step through and inspect variables.

I've had a lot more success when I actively customize the agent's environment, and then I can collaborate more easily with it.

crooked-v · 4 months ago
For me it's simple: even the best models are "lazy" and will confidently declare they're finished when they're obviously not, and the immensely increased amount of training effort to get ChatGPT 5's mild improvements on benchmarks suggests that that quality won't go away anytime soon.
worldsayshi · 4 months ago
Sounds like it's partially about a nuanced trade-off. It can just as well be too eager and add changes I didn't ask for. Being lazy is better than continuing on a bad path.
crooked-v · 4 months ago
There's a long distance between "nuanced behavior" and what it actually does now, which is "complete 6 items of an explicit 10-item task list and then ask the user again if they want to continue".
anthonypasq · 4 months ago
GPT-5 is extremely cheap; what makes you think they couldn't produce a larger, smarter, more expensive model?

GPT-5 was created to be able to service 200M daily active users.

bakugo · 4 months ago
> what makes you think they couldn't produce a larger, smarter, more expensive model?

Because they already did try making a much larger, more expensive model; it was called GPT-4.5. It failed: it wasn't actually that much smarter despite being insanely expensive, and they retired it after a few months.

jjangkke · 4 months ago
I'm fatigued by these articles that just broadly claim AI can't code, because it's painting with a broad brush across a widely diverse set of uses of AI for different stacks.

It's a horribly outdated way of thinking to expect a singular AI entity to handle all stacks and all problems directed at it, because no developer is using it that way.

AI is a great tool for both coders and artists, and these outlandish attention-grabbing titles really seem to be echo chambers aimed at people who are convinced that AI isn't going to replace them, which is true, but the opposite is also true.

tjr · 4 months ago
A lot of comments here seem to be similar. I see people claiming that AI has all but taken over doing their work for them, and others claiming that it's almost useless. But usually, nobody even briefly mentions what the work is (other than, presumably, something related to programming).

I imagine there's a big difference in using AI for building, say, an online forum vs. building a flight control system, both in terms of what the AI theoretically can do, and in terms of what we maybe should or should not be letting the AI do.

lanstin · 4 months ago
Yeah. I use it for analytics/dataviz stuff (which involves a lot of Python to run Spark jobs, gluing different APIs together to get some extra column of data, making PNG or SVG pictures, and making D3-based web sites in HTML and JavaScript). That all works pretty well.

I also write high-performance Go server code, where it works a lot less well. It doesn't follow my rules for pointer APIs or for using sync mutexes versus atomic operations across a codebase. It (probably a slightly older version than SOTA) didn't read deep call chains accurately for refactoring. It's still worth trying, but if that were my sole work it would probably not be worth it.
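
To give a flavor of what those rules are about: a Go codebase usually standardizes on one concurrency idiom per kind of state, and the failure mode is the agent mixing them at random. A minimal sketch (the names are mine, not from any real codebase):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Two interchangeable-looking ways to guard a counter. A codebase typically
// commits to one per use case; an agent that mixes them breaks the convention.
type mutexCounter struct {
	mu sync.Mutex
	n  int64
}

func (c *mutexCounter) Inc() {
	c.mu.Lock()
	c.n++
	c.mu.Unlock()
}

type atomicCounter struct {
	n atomic.Int64 // lock-free; the usual pick for a hot-path counter
}

func (c *atomicCounter) Inc() {
	c.n.Add(1)
}

func main() {
	var m mutexCounter
	var a atomicCounter
	m.Inc()
	a.Inc()
	fmt.Println(m.n, a.n.Load()) // 1 1
}
```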

On the other hand, for personal productivity (emacs functions and config, getting a good .mcp.json) it is also very good, and generates code that partakes in the exponential growth of good code. (Unlike data viz, where there is a tendency to build something and then the utility declines over time.)

jjangkke · 4 months ago
I can confidently state that for CRUD web apps it's truly over, as in those jobs are never going to command the same wages they once did.

With the recent models it's now encroaching similarly on all fronts. I think within the next few iterations we'll see LLMs solidify themselves as meta-compilers that will be deployed locally for more FCS-type systems.

At the end of the day the hazards are still the same with or without AI: you need checks and bounds, you need proper vetting of code and quality. But overall it probably doesn't make sense to charge an hourly rate, because an AI would drastically cut down such billing schemes.

For me "replacement" is largely a 70~80% reduction in either hourly wages, job positions or both and from the job market data I see it can get there.

9rx · 4 months ago
Well, AI really can't code any more than a compiler can. They all require a human to write the original code, even if the machine does translate it into other code.

And until the day that humans are no longer driving the bus, that will remain the case.

lanstin · 4 months ago
You can say "generate a C program that uses GCC 128-bit floats and systematically generates all quadratic roots in order of the max size of their minimal polynomial coefficients, then sorts them and calculates the distribution of the intervals between adjacent numbers", and it just does it. That's qualitatively different from the compilers I have used. Now, I was careful to use properly technical words to pull in the world of numeric computation and C programming. But it still saved me a lot of time. It was even able to bolt on multithreaded parallelism to speed it up, using C stuff I had never heard of.
xaindume · 4 months ago
Using a calculator won't make you a mathematician, but a mathematician with a calculator can show you amazing things.
leptons · 4 months ago
Calculators won't give you completely wrong results, not even once, whereas "AI" does that way too often. If calculators did too, mathematicians simply would not use them.
b112 · 4 months ago
More specifically, random wrong results. Some calculators have issues with rounding, but if you understand those issues, the behavior is consistent.
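
Floating point is the classic case: the result can be surprising, but it's surprising in exactly the same way on every run. A quick Go illustration:

```go
package main

import "fmt"

func main() {
	// Surprising, but deterministic: the same "wrong" answer every time.
	a, b := 0.1, 0.2
	fmt.Println(a + b)      // 0.30000000000000004
	fmt.Println(a+b == 0.3) // false, on every run, on every machine
}
```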

Imagine driving your car: you turn right, but today turning right slams on the brakes, and 10 people rear-end you! That's current AI.

RamtinJ95 · 4 months ago
Yes, in terms of the raw LLM, but with some tools or an MCP the “AI” will never be wrong.
kazinator · 4 months ago
Nope, that's not the reason. It's because it's just a query that probabilistically creates a garden path of tokens out of a compressed form of the training data, requiring a real coder to evaluate which parts of it are useful.

Amazing how someone writing for an IEEE website can't keep their eyes on the fundamentals.

manoDev · 4 months ago
I'm tired of the anthropomorphization marketing behind AI driving this kind of discussion. In a few years, all this talk will sound as dumb as stating "MS Word spell checker will replace writers" or "Photoshop will replace designers".

We'll reap the productivity benefits from this new tool, create more work for ourselves, output will stabilize at a new level and salaries will stagnate again, as it always happens.

kazinator · 4 months ago
Generative AI is replacing writers, designers, actors, ... it is nothing like just a spell checker or Photoshop.

Every day, I see ads on YouTube with smooth-talking, real-looking AI-generated actors. Each one represents one less person that would have been paid.

There is no exact measure of correctness in design; one bad bit does not stop the show. The clients don't even want real art. Artists sometimes refer to commercial work as "selling out", referring to hanging their artistic integrity on the hook to make a living. Now "selling out" competes with AI which has no artistic integrity to hang on the hook, works 24 hours a day for peanuts and is astonishingly prolific.

mjr00 · 4 months ago
> Every day, I see ads on YouTube with smooth-talking, real-looking AI-generated actors. Each one represents one less person that would have been paid.

Were AI-generated actors chosen over real actors, or was the alternative using some other low-cost method for an advertisement like just colorful words moving around on a screen? Or the ad not being made at all?

The existence of ads using generative AI "actors" doesn't prove that an actor wasn't paid. This is the same logical fallacy as claiming that one pirated copy of software represents a lost sale.

frank_nitti · 4 months ago
I agree with your sentiment. But where I struggle is: to what degree does each of those ads “represent one less person who would have been paid”, versus represent one additional person who would not otherwise have been able to afford to advertise in that medium?

Of course, that line of reasoning reduces to something similar to other automation / minimum wage / etc. discussions.

biophysboy · 4 months ago
YouTube has the lowest-quality ads of any online platform I use, by several orders of magnitude. AI being used for belly fat and erectile dysfunction ads is not exactly good for its creative reputation.
burnte · 4 months ago
There's a difference between taking one thing and putting something else in its spot, and truly REPLACING something. Yes, some ads have AI-generated actors. You know because you can tell they're "not quite right", and you end up focusing on that rather than on the message of the ad. Noticing AI in ads turns more people off than on, so AI ads are treated by a lot of people as an easy "avoid this company" signal. So those AI ads are in lieu of real actors, but not actually replacing them, because people don't want to watch AI actors in an ad. The ad ceases to be effective. The "replacement" failed.
whatever1 · 4 months ago
“Every day, I see ads on YouTube with smooth-talking, real-looking AI-generated actors. Each one represents one less person that would have been paid.”

The thing is that they would not have paid for an actor anyway. It’s that having an “actor” and special effects for your ads now costs nothing, so why not?

The quality of their ads went up; the money changing hands did not change.

grugagag · 4 months ago
> Generative AI is replacing writers, designers, actors, ... it is nothing like just a spell checker or Phtoshop.

For cheap stuff it’s true. However, nobody wants to watch or listen to generated content, and this will wear thin outside of the niches where it takes hold and permanently replaces humans.

exe34 · 4 months ago
> Each one represents one less person that would have been paid

or equally, one more advert which (let's say rightly) wouldn't have been made.

seriously though, automation allows us to do things that would not have been possible or affordable before. some of these are good things.

jvanderbot · 4 months ago
Anecdata: I know writers, editors, and white collar non-tech workers of all kinds who use AI daily and like it.

When GPT-3.5 first landed, a lifelong writer/editor saw a steep decrease in jobs. A year later the jobs changed to "can you edit this AI-generated text to sound human", and now they continue to work doing normal editing for human or human-ish writing, while declining the slop-correction deluge because it is terrible work.

I can't help but see the software analogy for this.

anthem2025 · 4 months ago
And as people get more used to the patterns of AI it’s getting called out more and more.
z2 · 4 months ago
I'm not a "real coder" either, but it sounds like the "No True Scotsman" trap when people say, “AI can’t be a real coder,” and then redefine “real coder” to mean something AI can’t currently do (like full autonomy or deep architectural reasoning). This makes the claim unfalsifiable and ignores the fact that AI already performs several coding tasks effectively. Yeah, I get it, context handling, long-horizon planning, and intent inference all stink, but the tools are all 'real' to me.
eMPee584 · 4 months ago
That's based on the assumption that models won't soon cross the threshold of autonomy and self-reflection that suddenly makes an escalating number of jobs (with cheap humanoids, even physical ones) automatable at ridiculous prices. Even if this isn't certain, the likelihood could be considered quite high, and thus we urgently need a public debate / design process for the peaceful, post-commercial, post-competitive, open-access post-scarcity economy some (the RBE / commoning community) have been sketching for years and years. It seems this development defies most people's sense of imagination, and that's precisely why we need to raise public awareness of the freedom and fun OPEN SOURCE EVERYTHING & Universal Basic Services could bring to our tormented world. 2 billion people without access to clean water? We can do much better if we break free from our collective fixation on money as the only means and way to deal with things ever.
MintPaw · 4 months ago
You say it as a joke, but spell check has replaced certain tiers of editors. And Photoshop has replaced certain tiers of designers.
manoDev · 4 months ago
Not a joke.

Proofreaders still exist, despite spell checkers. Art assistants still exist, despite Photoshop. There's always more work to do; you just incorporate the new tools and bump up productivity, until it gets so commoditized that it stops being a competitive advantage.

Saying AI "replaces" anyone is just a matter of rhetoric to justify lower salaries, as always.

flappyeagle · 4 months ago
Bad ones
the_af · 4 months ago
> all this talk will sound as dumb as stating "MS Word spell checker will replace writers" or "Photoshop will replace designers".

You cannot use just a spell checker to write a book (no matter how bad), or (non-AI) Photoshop plugins to automatically create meaningful artwork, replacing human intervention.

Business people "up the ladder" are already threatening with reducing the workforce and firing people because they can (allegedly) be replaced by AI. No writer was ever threatened by a spellchecker.

Hollywood studio execs are putting pressure on writers, and now they can leverage AI as yet another tool against them.

throwboy2047 · 4 months ago
People are stupid, always have been. It took thousands of years to accept the brain as the seat of thought because “heart beat faster when excited, means heart is source of excitement”.

Heck, people literally used to think the eyes were the source of light, since everything is dark when you close them.

People are immensely, incredibly, unimaginably stupid. It has taken a lot of miracles put together to get us where we are now…but the fundamentals of what we are haven’t changed.

krapp · 4 months ago
You're confusing ignorance with stupidity. People at the time were coming to the best conclusions they could based on the evidence they had. That isn't stupid. If humans were truly "incredibly, unimaginably stupid" we wouldn't have even gotten to the point of creating agriculture, much less splitting the atom. We didn't get here through "miracles," we got here through hard work and intelligence.

Stupid is people in 2025 believing the world is flat and germ theory is a hoax. Ignorance becomes stupidity when our species stands on the shoulders of giants but some people simply refuse to open their eyes.

stripe_away · 4 months ago
> took thousands of years to accept the brain as the seat of thought because “heart beat faster when excited, means heart is source of excitement”

So what you are saying is that beings without a central nervous system cannot experience "excitement"?

Or perhaps the meaning of too many words has changed, and their context. When Hippocrates claimed that the brain was an organ to cool the blood, perhaps he meant that we use our thought to temper our emotions, i.e. what he said agrees with our modern understanding.

However, many people read Hippocrates and laugh at him, because they think he meant the brain was some kind of radiator.

Maybe because we stopped talking about “excitable” people as being “hot-blooded”.

tim333 · 4 months ago
In a few years AI will have progressed a fair bit in a way that MS spell checker didn't.
amelius · 4 months ago
> tired of anthropomorphization

The thing is trained on heaps and heaps of human output. You better anthropomorphize if you want to stay ahead of the curve.

ACCount37 · 4 months ago
I'm tired of all the "yet another tool" reductionism. It reeks of cope.

It took under a decade to get AI to this stage - where it can build small scripts and tiny services entirely on its own. I see no fundamental limitations that would prevent further improvements. I see no reason why it would stop at human level of performance either.

tashoecraft · 4 months ago
There’s this saying that humans are terrible at predicting exponential growth. I believe we need another saying: those who expect exponential growth have a tough time not expecting it.

It didn’t take under a decade for AI to get to this stage, but multiple decades of work, with algorithms finally able to take advantage of GPU hardware to massively excel.

There’s already a feeling that growth has slowed; I’m not seeing the rise in performance on coding tasks that I saw over the past few years. I see no fundamental improvements that would suggest exponential growth or human-level performance.

luqtas · 4 months ago
> ... entirely on its own

ok, ok! Just like what you can find, with much less computation power involved, using a search engine: forums/websites having, if not your exact question, something similar, or a snippet [0] helping you solve your doubt... all of that free of tokens and of companies profiting over what the internet has built! Even FOSS generative AI can give billions of USD to GPU manufacturers.

[0] just a silly script that can lead a bunch of logic: https://stackoverflow.com/questions/70058132/how-do-i-make-a...

biophysboy · 4 months ago
You can’t see any bottlenecks? Energy? Compute power? Model limitations? Data? Money?
apercu · 4 months ago
So maybe the truth is somewhere in between: there is no way AI is not going to have a major societal impact, like social media.

If we don't see some serious fencing, I will not be surprised by some spectacular AI-caused failures in the next 3 years that wipe out companies.

Business typically follows a risk-based approach to things, and in this case entire industries are yolo'ing.

phailhaus · 4 months ago
> I see no fundamental limitations

How about the fact that AI is only trained to complete text and literally has no "mind" within which to conceive or reason about concepts? Fundamentally, it is only trained to sound like a human.

hitarpetar · 4 months ago
your comment reeks of hype. no evidence whatsoever for your prediction, just an assertion that you personally don't see it not coming true
codingdave · 4 months ago
It took closer to 100 years for AI to get to this stage. Check out: https://en.wikipedia.org/wiki/History_of_artificial_intellig...

I suspect once you have studied how we actually got to where we are today, you might see why your lack of seeing any limitations may not be the flex you think it is.

bakugo · 4 months ago
> I see no fundamental limitations that would prevent further improvements

How can you say this when progress has so clearly stagnated already? The past year has been nothing but marginal improvements at best, culminating in GPT-5, which can barely be considered an upgrade over 4o in terms of pure intelligence despite the significant connotation attached to the number.

pcthrowaway · 4 months ago
When are you starting time from? AI has been a topic of research for over 70 years
anthem2025 · 4 months ago
We constantly see massive initial growth followed by a slowdown.

There is zero reason to think AI is some exception that will continue to improve exponentially without limit. We already seem to be at the point of diminishing returns, sinking absurd amounts of money and resources into training models that show incremental improvements.

To get this far they have had to spend hundreds of billions and have used up the majority of the data they have access to. We are at the point of trying to train AI on generated data and hoping that it doesn’t just cause the entire thing to degrade.

SKILNER · 4 months ago
>> It reeks of cope.

haha, well said, I've got to remember that one. HN is a smelly place when it comes to AI coping.