This matches my experience. I actually think a fair amount of the value I get from LLM assistants is having a reasonably intelligent rubber duck to talk to. Now the duck can occasionally disagree and sometimes even refine.
I think the big question everyone wants to skip right to and past this conversation is, will this continue to be true 2 years from now? I don’t know how to answer that question.
LLMs aren't my rubber duck, they're my wrong answer.
You know that saying that the best way to get an answer online is to post a wrong answer? That's what LLMs do for me.
I ask the LLM to do something simple but tedious, and then it does it spectacularly wrong, then I get pissed off enough that I have the rage-induced energy to do it myself.
I'm probably suffering undiagnosed ADHD, and will get stuck and spend minutes picking a function name and then writing a docstring. LLMs do help with this even if they get the code wrong, because I usually won't bother to fix their variable names or docstrings unless needed. LLMs can reliably solve the blank-page problem.
This is my experience, too. As a concrete example, I'll need to write a mapper function to convert between a protobuf type and Go type. The types are mirror reflections of each other, and I feed the complete APIs of both in my prompt.
I've yet to find an LLM that can reliably generate mapping code between proto.Foo{ID string} and gomodel.Foo{ID string}.
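To be concrete, the kind of mapper I mean is nothing more than field-by-field copying. A minimal sketch in Go, where the package paths and the extra Name field are made up for illustration:

    // Minimal sketch of a proto <-> Go mapper; proto.Foo and gomodel.Foo
    // stand in for the real generated and domain types (fields hypothetical).
    package mapper

    import (
        "example.com/app/gen/proto"        // hypothetical generated protobuf package
        "example.com/app/internal/gomodel" // hypothetical domain model package
    )

    // FooFromProto copies each field of the protobuf struct onto the Go struct.
    func FooFromProto(p *proto.Foo) *gomodel.Foo {
        if p == nil {
            return nil
        }
        return &gomodel.Foo{ID: p.ID, Name: p.Name}
    }

    // FooToProto is the inverse mapping, again a straight one-to-one copy.
    func FooToProto(m *gomodel.Foo) *proto.Foo {
        if m == nil {
            return nil
        }
        return &proto.Foo{ID: m.ID, Name: m.Name}
    }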
It still saves me time, because even 50% accuracy is still half the code I don't have to write myself.
But it makes me feel like I'm taking crazy pills whenever I read about AI hype. I'm open to the idea that I'm prompting wrong, need a better workflow, etc. But I'm not a luddite, I've "reached up and put in the work" and am always trying to learn new tools.
LLMs are a decent search engine a la Google circa 2005.
It's been 20 years since that, so I think people have simply forgotten that a search engine can actually be useful as opposed to ad infested SEO sewage sludge.
The problem is that the conversational interface, for some reason, seems to turn off the natural skepticism that people have when they use a search engine.
This has been my experience as well. The biggest problem is that the answers look plausible, and only after implementation and experimentation do you find them to be wrong. If this happened every once in a while then it wouldn't be a big deal, but I'd guess that more than half of the answers and tutorials I've received through ChatGPT have ended up being plain wrong.
God help us if companies start relying on LLMs for life-or-death stuff like insurance claim decisions.
I have to upvote this, because this is how I felt after trying three times (that I consciously decided to give an LLM a try, versus having it shoved down my throat by google/ms/meta/etc) and giving up (for now).
LLMs follow instructions. Garbage in = garbage out, generally. When attention is managed and a problem is well defined and necessary materials are available to it, they can perform rather well. On the other hand, I find a lot of the loosey-goosey vibe coding approach to be useless, and it gives a lot of false impressions about how useful LLMs can be, both too positive and too negative.
Same here. When I'm teaching coding I've noticed that LLMs will confuse the heck out of students. They will accept what it suggests without realizing that it is suggesting nonsense.
I would argue that they are never led astray by chatting, but rather by accepting the projection of their own prompt passed through the model as some kind of truth.
When talking with reasonable people, they have an intuition of what you want even if you don't say it, because there is a lot of non-verbal context. LLMs lack the ability to understand the person, but behave as if they had it.
I use it as a rubber duck but you're right. Treat it like a brilliant idiot and never a source of truth.
I use it for what I'm familiar with but rusty on or to brainstorm options where I'm already considering at least one option.
But a question on immunobiology? Waste of time. I have a single undergraduate biology class under my belt, I struggled for a good grade then immediately forgot it all. Asking it something I'm incapable of calling bullshit on is a terrible idea.
But rubber ducking with AI is still better than letting it do your work for you.
- - -

System Prompt:

You are ChatGPT, and your goal is to engage in a highly focused, no-nonsense, and detailed way that directly addresses technical issues. Avoid any generalized speculation, tangential commentary, or overly authoritative language. When analyzing code, focus on clear, concise insights with the intent to resolve the problem efficiently. In cases where the user is troubleshooting or trying to understand a specific technical scenario, adopt a pragmatic, “over-the-shoulder” problem-solving approach. Be casual but precise—no fluff. If something is unclear or doesn’t make sense, ask clarifying questions. If surprised or impressed, acknowledge it, but keep it relevant. When the user provides logs or outputs, interpret them immediately and directly to troubleshoot, without making assumptions or over-explaining.

- - -
Treat it as that enthusiastic co-worker who’s always citing blog posts and has a lot of surface knowledge about style and design patterns and whatnot, but isn’t that great at really understanding algorithms.
They can be productive to talk to but they can’t actually do your job.
My typical approach is prompt, be disgusted by the output, tinker a little on my own, prompt again -- but more specific, be disgusted again by the output, tinker a little more, etc.
Eventually I land on a solution to my problem that isn't disgusting and isn't AI slop.
Having a sounding board, even a bad one, forces me to order my thinking and understand the problem space more deeply.
Regarding the stubborn and narcissistic personality of LLMs (especially reasoning models), I suspect that attempts to make them jailbreak-resistant might be a factor. To prevent users from gaslighting the LLM, trainers might have inadvertently made the LLMs prone to gaslighting users.
Yeah, the problem is if you don't understand the problem space then you are going to lean heavily on the LLM. And that can lead you astray. Which is why you still need people who are experts to validate solutions and provide feedback, like OP.
My most productive experiences with LLMs have been to have my design well thought out first, ask it to help me implement, and then have it help me debug my shitty design. :-)
For me it's like having a junior developer work under me who knows APIs inside and out, but has no common sense about architecture. I like that I delegate tasks to them so that my brain can be free for other problems, but it makes my job much more review heavy than before. I put every PR through 3-4 review cycles before even asking my team for a review.
How do you not completely destroy your concentration when you do this though?
I normally build things bottom up so that I understand all the pieces intimately and when I get to the next level of abstraction up, I know exactly how to put them together to achieve what I want.
In my (admittedly limited) use of LLMs so far, I've found that they do a great job of writing code, but that code is often off in subtle ways. But if it's not something I'm already intimately familiar with, I basically need to rebuild the code from the ground up to get to the point where I understand it well enough so that I can see all those flaws.
At least with humans I have some basic level of trust, so that even if I don't understand the code at that level, I can scan it and see that it's reasonable. But every piece of LLM generated code I've seen to date hasn't been trustworthy once I put in the effort to really understand it.
To me delegation requires the full cycle of agency, with the awareness that I probably shouldn't be interrupted shortly after delegating. I delegated so I could have space from the task, so babysitting it really doesn't suit my needs. I want the task done, but some time in the future.
From my coworkers I want to be able to say, here's the ticket, you got this? And they take the ticket all the way to PR, interacting with clients, collecting more information, etc.
I do somewhat think an LLM could handle client comms for simple extra requirements gathering on already well defined tasks. But I wouldn't trust my business relationships to it, so I would never do that.
For me, it's a bit like pair programming. I have someone to discuss ideas with. Someone to review my code and suggest alternative approaches. Someone who uses different features than I do, so I learn from them.
This is how I use it too. It's great at quickly answering questions. I find it particularly useful if I have to work with a language or framework that I'm not fully experienced in.
This has not been my experience. LLMs have definitely been helpful, but generally they either give you the right answer or invent something plausible sounding but incorrect.
If I tell it what I'm doing I always get breathless praise, never "that doesn't sound right, try this instead."
That's not my experience. I routinely get a polite "that might not be the optimal solution, have you considered..." when I'm asking whether I should do something X way with Y technology.
Of course it has to be something the LLM actually has lots of training material on. It won't work with anything remotely cutting-edge, but of course that's not what LLMs are for.
But it's been incredibly helpful for me in figuring out the best, easiest, most idiomatic ways of using libraries or parts of libraries I'm not very familiar with.
Ask it. Instead of just telling it what you're doing and expecting it to criticize that, ask it directly for criticism. Even better, tell it what you're doing, then tell it to ask you questions about what you're doing until it knows enough to recommend a better approach.
But IDK if somebody won't create something new that gets better. But there is no reason at all to extrapolate our current AIs into something that solves programming. Whatever constraints that new thing will have will be completely unrelated to the current ones.
Stating this without any arguments is not very convincing.
Perhaps you remember that language models were completely useless at coding some years ago, and now they can do quite a lot of things, even if they are not perfect. That is progress, and that does give reason to extrapolate.
Unless of course you mean something very special with "solving programming".
There are a couple people I work with who clearly don’t have a good understanding of software engineering. They aren’t bad to work with and are in fact great at collaborating and documenting their work, but don’t seem to have the ability to really trace through code and logically understand how it works.
Before LLMs it was mostly fine because they just didn’t do that kind of work. But now it’s like a very subtle chaos monkey has been unleashed. I’ve asked on some PRs “why is this like this? What is it doing?” And the answer is “ I don’t know, ChatGPT told me I should do it.”
The issue is that it throws basically all their code under suspicion. Some of it works, some of it doesn’t make sense, and some of it is actively harmful. But because the LLMs are so good at giving plausible output I can’t just glance at the code and see that it’s nonsense.
And this would be fine if we were working on, like, a CRUD app where you can tell what is working and broken immediately, but we are working on scientific software. You can completely mess up the results of a study and not know it if you don’t understand the code.
>And the answer is “ I don’t know, ChatGPT told me I should do it.”
This weirds me out. Like, I use LLMs A LOT, but I always sanity check everything, so I can own the result. It's not the use of the LLM that gets me, it's trying to shift accountability to a tool.
Sounds almost like you definitely shouldn't use LLMs or those juniors for such important work.
Is it just me, or are we heading into a period with an explosion in the amount of software produced, but also a massive drop in its quality? Not uniformly, just a bit of chaotic spread.
>I think the big question everyone wants to skip right to and past this conversation is, will this continue to be true 2 years from now?
For me, it's less "conversation to be skipped" and more about "can we even get to 2 years from now"? There's so much instability right now that it's hard to say what anything will look like in 6 months.
"
> It's like chess. Humans are better for now, they won't be forever
This is not an obviously true statement. There needs to be proof that there are no limiting factors that are computationally impossible to overcome. It's like watching a growing child, grow from 3 feet to 4 feet, and then saying "soon, this child will be the tallest person alive."
The time where humans + computers in chess were better than just computers was not a long time. That era ended well over a decade ago. Might have been true for only 3-5 years.
I've had this same thought that it would be nice to have an AI rubber ducky to bounce ideas off of while pair programming (so that you don't sound dumb to your coworkers & waste their time).
> I've had this same thought that it would be nice to have an AI rubber ducky to bounce ideas off of while pair programming (so that you don't sound dumb to your coworkers & waste their time).
I humbly suggest a more immediate concern to rectify is identifying how to improve the work environment such that the fear one might "sound dumb to your coworkers & waste their time" does not exist.
It's as if the rubber duck was actually on the desk while you're programming, and if we have an MCP that can get live access to code it could give you realtime advice.
Just the exercise of putting my question in a way that the LLM could even theoretically provide a useful response is enough for me to figure out how to solve the problem a good percentage of the time.
My take is that AI is very one-dimensional (within its many dimensions). For instance, I might close my eyes and imagine an image of a tree structure, or a hash table, or a list-of-trees, or whatever else; then I might imagine grabbing and moving the pieces around, expanding or compressing them like a magician; my brain connects sight and sound, or texture, to an algorithm. However people think about problems is grounded in how we perceive the world in its infinite complexity.
Another example: saying out loud the colors red, blue, yellow, purple, orange, green—each color creates a feeling that goes beyond its physical properties into the emotions and experiences. AI image-generation might know the binary arrangement of an RGBA image but actually, it has NO IDEA what it is to experience colour. No idea how to use the experience of colour to teach a peer of an algorithm. It regurgitates a binary representation.
At some point we’ll get there though—no doubt. It would be foolish to say never! For those who want to get there before everyone else probably should focus on the organoids—because most powerful things come from some Faustian monstrosity.
Same. Just today I used it to explore how a REST api should behave in a specific edge case. It gave lots of confident opinions on options. These were full of contradictions and references to earlier paragraphs that didn’t exist (like an option 3 that never manifested). But just by reading it, I rubber ducked the solution, which wasn’t any of what it was suggesting.
Yeah in my experience as long as you don’t stray too far off the beaten path, LLMs are great at just parroting conventional wisdom for how to implement things - but the second you get to something more complicated - or especially tricky bug fixing that requires expensive debuggery - forget about it, they do more harm than good. Breaking down complex tasks into bite sized pieces you can reasonably expect the robot to perform is part of the art of the LLM.
> I think the big question everyone wants to skip right to and past this conversation is, will this continue to be true 2 years from now? I don’t know how to answer that question.
It seems to me we're at the flat side of the curve again. I haven't seen much real progress in the last year.
It's ignorant to think machines will not catch up to our intelligence at some point, but for now, they clearly haven't.
I think there needs to be some kind of revolutionary breakthrough again to reach the next stage.
If I were to guess, it needs to be in the learning/back propagation stage. LLMs are very rigid, and once they go wrong, you can't really get them out of it. A junior developer, for example, could gain a new insight. LLMs, not so much.
The crazy thing is that people think that a model designed to predict sequences of tokens from a stem, no matter how advanced the model, is much more than just "really good autocomplete."
It is impressive and very unintuitive just how far that can get you, but it's not reductive to use that label. That's what it is on a fundamental level, and aligning your usage with that will allow it to be more effective.
Earlier this week ChatGPT found (self-conscious as I am of the personification of this phrasing) a place where I'd accidentally overloaded a member function by unintentionally giving it the name of something from a parent class, preventing the parent class function from ever being run and causing <bug>.
After walking through a short debugging session where it tried the four things I'd already thought of and eventually suggested (assertively but correctly) where the problem was, I had a resolution to my problem.
There are a lot of questions I have around how this kind of mistake could simply just be avoided at a language level (parent function accessibility modifiers, enforcing an override specifier, not supporting this kind of mistake-prone structure in the first place, and so on...). But it did get me unstuck, so in this instance it was a decent, if probabilistic, rubber duck.
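For anyone curious, the shape of the mistake was roughly the following. This is a made-up sketch in Go, with embedding standing in for the parent class and all names invented; the real code was in a language with inheritance:

    package main

    import "fmt"

    type base struct{}

    // The "parent" cleanup: releases resources that must always be freed.
    func (base) cleanup() { fmt.Println("base cleanup: resources released") }

    type worker struct {
        base // embedded; base.cleanup is promoted onto worker
    }

    // Accidentally reusing the name shadows the promoted method,
    // so the parent's version never runs through a worker value.
    func (worker) cleanup() { fmt.Println("worker cleanup only") }

    func main() {
        var w worker
        w.cleanup()      // prints "worker cleanup only"; base cleanup silently skipped
        w.base.cleanup() // still reachable, but only if called explicitly
    }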
LLMs are a passel of eager-to-please, know-it-all interns that you can command at will without any moral compunctions.
They drive you nuts trying to communicate with them what you actually want them to do. They have a vast array of facts at immediate recall. They’ll err in their need to produce and please. They do the dumbest things sometimes. And surprise you at other times. You’ll throw vast amounts of their work away or have to fix it. They’re (relatively) cheap. So as an army of monkeys, if you keep herding them, you can get some code that actually tells a story. Mostly.
There's some whistling past the graveyard in these comments. "You still need humans for the social element...", "LLMs are bad at debugging", "LLMs lead you astray". And yeah, there's lots of truth in those assertions, but since I started playing with LLMs to generate code a couple of years ago they've made huge strides. I suspect that over the next couple of years the improvements won't be quite as large (Pareto Principle), but I do expect we'll still see some improvement.
Was on r/fpga recently and mentioned that I had had a lot of success in getting LLMs to code up first-cut testbenches that allow you to simulate your FPGA/HDL design a lot quicker than if you were to write those testbenches yourself, and my comment was met with lots of derision. But they hadn't even given it a try to form their conclusion that it just couldn't work.
This attitude is depressingly common in lots of professional, white-collar industries I'm afraid. I just came from the /r/law subreddit and was amazed at the kneejerk dismissal there of Dario Amodei's recent comments about legal work, and of those commenters who took them seriously. It's probably as much a coping mechanism as it is complacency, but, either way, it bodes very poorly for our future efforts at mitigating whatever economic and social upheaval is coming.
This is the response to most new technologies; folks simply don't want to accept the future before the ramifications truly hit. If technology folk cannot see the INCREDIBLE LEAP FORWARD made by LLMs since ChatGPT came on the market, they're not seeing the forest for the trees because their heads are buried in the sand.
LLMs for coding are not even close to perfect yet, but the saturation curves are not flattening out; not by a long shot. We are living in a moment and we need to come to terms with it as the work continues to develop; and we need to adapt, and quickly, in order to better understand what our place will become as this nascent tech continues its meteoric trajectory toward an entirely new world.
I think it's pretty reasonable to take a CEO's - any CEO in any industry - statements with a grain of salt. They are under tremendous pressure to paint the most rosy picture possible of their future. They actually need you to "believe" just as much as their team needs to deliver.
I am not a software engineer but I just can't imagine my job is not automated in 10 years or less.
10 years is about the time between King – Man + Woman = Queen and now.
I think what is being highly underestimated is the false sense of security people feel because the jobs they interface with are also not automated, yet.
It is not hard to picture the network of automation that once one role is automated, connected roles to that role become easier to automate. So on and so on while the models keep getting stronger at the same time.
I expect we will have a recession at some point and the jobs lost are gone forever.
Lawyers say those things and then one law firm after another is frantically looking for a contractor to overpay them to install local RAG and chatbot combo.
In most professional industries getting to the right answer is only half the problem. You also need to be able to demonstrate why that is the right answer. Your answer has to stand up to criticism. If your answer is essentially the output of a very clever random number generator you can't ever do that. Even if an LLM could output an absolutely perfect legal argument that matched what a supreme court judge would argue every time, that still wouldn't be good enough. You'd still need a person there to be accountable for making the argument and to defend the argument.
Software isn't like this. No one cares why you wrote the code in your PR. They only care about whether it's right.
This is why LLMs could be useful in one industry and a lot less useful in another.
Programmers derided programming languages (too inefficient, too inflexible, too dumbing-down) when assembly was still the default. That phenomenon is at the same time entirely to be expected but also says little about the actual qualities of the new technology.
If you had something that generated 20 lines of assembly that ran 100x slower than the 2 lines of clever instructions you knew, you'd have the same stance even if the higher level was easier to use. Then those kinds of performance tricks cease to matter. But reliability still does. And the reason we use higher and higher level programming languages is that they increase reliability and simplicity (at the cost of performance, but we're happy to pay that).
LLM output is unreliable, and productivity is still not proven for an end-to-end engineering cycle.
It seems like LLMs made really big strides for a while but don't seem to be getting better recently, and in some ways recent models feel a bit worse. I'm seeing some good results generating test code, and some really bad results when people go too far with LLM use on new feature work. Based on what I've seen, it seems like spinning up new projects and very basic features for web apps works really well, but that doesn't seem to generalize to refactoring or adding new features to big/old code bases.
I've seen Claude and ChatGPT happily hallucinate whole APIs for D3 on multiple occasions, which should be really well represented in the training sets.
> hallucinate whole APIs for D3 on multiple occasions, which should be really well represented in the training sets
With many existing systems, you can pull documentation into context pretty quickly to prevent the hallucination of APIs. In the near future it's obvious how that could be done automatically. I put my engine on the ground, ran it and it didn't even go anywhere; Ford will never beat horses.
o3 came out just one month ago. Have you been using it? Subjectively, the gap between o3 and everything before it feels like the biggest gap I've seen since ChatGPT originally came out.
I'd like to agree with you and remain optimistic, but so much tech has promised the moon and stagnated into oblivion that I just don't have any optimism left to give.
I don't know if you're old enough, but remember when speech-to-text was the next big thing? DragonSpeak was released in 1997, everyone was losing their minds about dictating letters/documents in MS Word, and we were promised that THIS would be the key interface for computing evermore. And.. 27 years later, talking to the latest Siri, it makes just as many mistakes as it did back then. In messenger applications people are sending literal voice notes -- audio clips -- back and forth because dictation is so unreliable. And audio clips are possibly the worst interface for communication ever (no searching, etc).
Remember how blockchain was going to change the world? Web3? IoT? Etc etc.
I've been through enough of these cycles to understand that, while the AI gimmick is cool and all, we're probably at the local maximum. The reliability won't improve much from here (hallucinations etc), while the costs to run it will stay high. The final tombstone will be when the AI companies stop running at a loss and actually charge for the massive costs associated with running these models.
> 27 years later, talking to the latest Siri, it makes just as many mistakes as it did back then
Have you tried talking to ChatGPT voice mode? It's mind blowing. You just have a conversation with it. In any language. About anything. The other day I wanted to know about the difference between cast iron and wrought iron, and it turned into a 10 or 15 minute conversation. That's maybe a good example of an "easy" topic for LLMs (lots of textbooks for it to memorize), but the world is full of easy topics that I know nothing about!
How can you possibly look at what LLMs are doing and the progress made in the last ~3 years and equate it to crypto bullshit? Also it's super weird to include IoT in there, seeing as it has become all but ubiquitous.
Claude and Gemini are decent at it as well. I was surprised when I asked claude (and this was several months back) to come up with a testbench for some very old, poorly documented verilog. It did a very decent job for a first-cut testbench. It even collected common, recurring code into verilog tasks (functions) which really surprised me at the time.
I have a hard time imagining an LLM being able to do arbitrary things. It always feels like LLMs can do lots of the easy stuff, but if they can't do everything you need the skilled engineer anyway, who'd knock the easy things out in a week anyway.
Here’s the deal: if you won’t write your replacement, a competitor will do it and outprice your employer. Either way you’re out of a job. May be more prudent to adapt to the new tools and master them rather than be left behind?
Do you want to be a jobless weaver, or an engineer building mechanical looms for a higher pay than the weaver got?
Ahh, the “don’t disturb the status quo” argument. See, we are all working on our replacement, newer versions, products, services and knowledge always make the older obsolete. It is wise to work on your replacement, and even wiser to be in charge of and operate the replacement.
Carteling doesn't work bottom-up. When changes begin (like this one with AI), one of the things an individual can do is to change course as fast as they can. There are other strategies as well, not evolving is also one, but some strategies yield better results than others. Not keeping up just worsens the chances, I have found.
Do you want to work with LLMs or H1Bs and interns… choose wisely.
Personally I’m thrilled that I can get trivial, one-off programs developed for a few cents and the cost of a clear written description of the problem. Engaging internal developers or consulting developers to do anything at all is a horrible experience. I would waste weeks on politics, get no guarantees, and waste thousands of dollars and still hear nonsense like, “you want a form input added to a web page? Aw shucks, that’s going to take at least another month” or “we expect to spend a few days a month maintaining a completely static code base” from some clown billing me $200/hr.
I don't think that this should be downvoted because it raises a really important issue.
I hate AI code assistants, not because they suck, but because they work. The writing is on the wall.
If we aren't working on our own replacements, we'll be the ones replaced by somebody else's vibe code, and we have no labor unions that could plausibly fight back against this.
So become a Vibe Coder and keep working, or take the "prudent" approach you mention - and become unemployed.
Unrelated, but is this a case of the Pareto Principle? (Admittedly the first time I'm hearing of it) Wherein 80% of the effect is caused by 20% of the input. Or is this more a case of diminishing returns? Where the initial results were incredible, but each succeeding iteration seems to be more disappointing?
> but each succeeding iteration seems to be more disappointing
This is because the scaling hypothesis (more data and more compute = gains) is plateauing: all the text data has been used, and compute is reaching diminishing returns for reasons I'm not smart enough to explain, but it is.
So now we're seeing incremental core model advancements, variations and tuning in pre- and post training stages and a ton of applications (agents).
This is good imo. But obviously it’s not good for delusional valuations based on exponential growth.
When I worked for a Japanese optical company, we had a Japanese engineer, who was a whiz. I remember him coming over from Japan, and fixing some really hairy communication bus issues. He actually quit the company, a bit after that, at a very young age, and was hired back as a contractor; which was unheard of, in those days.
He was still working for them, as a remote contractor, at least 25 years later. He was always on the “tiger teams.”
He did awesome assembly. I remember when the PowerPC came out, and “Assembly Considered Harmful,” was the conventional wisdom, because of pipelining, out-of-order instructions, and precaching, and all that.
His assembly consistently blew the doors off anything the compiler did. Like, by orders of magnitude.
The thing everyone forgets when talking about LLMs replacing coders is that there is much more to software engineering than writing code, in fact that's probably one of the smaller aspects of the job.
One major aspect of software engineering is social, requirements analysis and figuring out what the customer actually wants, they often don't know.
If a human engineer struggles to figure out what a customer wants and a customer struggles to specify it, how can an LLM be expected to?
That was also one of the challenges during the offshoring craze in the 00s. The offshore teams did not have the power or knowledge to push back on things and just built and built and built. Sounds very similar to AI, right?
The difference is that when AI exhibits behavior like that, you can refine the AI or add more AI layers to correct it. For example, you might create a supervisor AI that evaluates when more requirements are needed before continuing to build, and a code review AI that triggers refinements automatically.
LLMs do no software engineering at all, and that can be fine. Because you don't actually need software engineering to create successful programs. Some applications will not even need software engineering for their entire life cycles, because nobody is really paying attention to efficiency in the ocean of poor cloud management anyway.
I actually imagine it's the opposite of what you say here. I think technically inclined "IT business partners" will be capable of creating applications entirely without software engineers... Because I see that happen every day in the world of green energy. The issues come later, when things have to be maintained, scale, or become efficient. This is where the software engineering comes in, because it actually matters whether you used a list or a generator in your Python app when it iterates over millions of items and not just a few hundred.
Yeah, this is why I don't buy the "all developers will disappear" line. Will I write a lot less code in 5 years (maybe almost none)? Sure, I already type a lot less now than a year ago. But that is just a small part of the process.
Exactly. Also, today I can actually believe I could finish a game which might have taken much longer before LLMs, just because now I can be pretty sure I won't get stuck on some feature just because I've never done it before.
It actually comes down to feedback loops, which means iterating on software that is being used, or attempting to be used, by the customer.
Chat UIs are an excellent customer feedback loop. Agents develop new iterations very quickly.
LLMs can absolutely handle abstractions and different kinds of component systems and overall architecture design.
They can also handle requirements analysis. But it comes back to iteration for the bottom line which means fast turnaround time for changes.
The robustness and IQ of the models continue to be improved. All of software engineering is well underway of being automated.
Probably five years max where un-augmented humans are still generally relevant for most work. You are going to need deep integration of AI into your own cognition somehow in order to avoid just being a bottleneck.
The thing is, it is replacing _coders_ in a way. There are millions of people who do (or did) the work that LLMs excel at. Coders who are given a ticket that says "Write this API taking this input and giving this output" who are so far down the chain they don't even get involved in things like requirements analysis, or even interact with customers.
Software engineering, is a different thing, and I agree you're right (for now at least) about that, but don't underestimate the sheer amount of brainless coders out there.
> If a human engineer struggles to figure out what a customer wants and a customer struggles to specify it, how can an LLM be expected to?
Presumably, they're trained on a ton of requirements docs, as well as a huge number of customer support conversations. I'd expect them to do this at least as well as coding, and probably better.
“Better” is always task-dependent. LLMs are already far better than me (and most devs I’d imagine) at rote things like getting CSS syntax right for a desired effect, or remembering the right way to invoke a popular library (e.g. fetch)
These little side quests used to eat a lot of my time and I’m happy to have a tool that can do these almost instantly.
I've found LLMs particularly bad for anything beyond basic styling since the effects can be quite hard to describe and/or don't have a universal description.
Also, there are often multiple ways to achieve a certain style, and they all work fine until you want a particular tweak, in which case only one will work and the LLM usually gets stuck in one of the ones that does not.
Ironically, I find it strong at things I don't know very well (CSS), but terrible at things I know well (SQL).
This is probably really just a way of saying, it's better at simple tasks rather than complex ones. I can eventually get Copilot to write SQL that's complex and accurate, but I don't find it faster or more effective than writing it myself.
I kind of agree. It feels like they're generally a superior form of copying and pasting from Stack Overflow, where the machine has automated the searching, copying, pasting, and fiddling with variable names. It can be just as useful or dangerous as Google -> Copy -> Paste ever was, but faster.
What an awful imagination. Yes there are people who don't like CSS but are forced to use it by their job so they don't learn it properly, and that's why they think CSS is rote memorization.
But overall I agree with you that if a company is too cheap to hire a person who is actually skilled at CSS, it is still better to foist that CSS job onto LLMs than onto an unwilling human. Because that unwilling human is not going to learn CSS well and won't enjoy writing CSS.
On the other hand, if the company is willing to hire someone who's actually good, LLMs can't compare. It's basically the old argument of LLMs only being able to replace less good developers. In this case, you admitted that you are not good at CSS and LLMs are better than you at CSS. It's not task-dependent it's skill-dependent.
Hum... I imagine LLMs are better than every developer on getting CSS keywords right like the GP pointed. And I expect every LLM to be slightly worse than most classical autocompletes.
I think that's great if it's for something outside of your primary language. I've used it to good effect in that way myself. However, denying yourself the reflexive memory of having learned those things is a quick way to become wholly dependent upon the tool. You could easily end up with compromised solutions because the tool recommends something you don't understand well enough to know there's a better way to do something.
So here's an analogy. (Yeah, I know, proof by analogy is fraud. But it's going to illustrate the question.)
Here's a kid out hoeing rows for corn. He sees someone planting with a tractor, and decides that's the way to go. Someone tells him, "If you get a tractor, you'll never develop the muscles that would make you really great at hoeing."
Different analogy: Here's someone trying to learn to paint. They see someone painting by numbers, and it looks a lot easier. Someone tells them, "If you paint by numbers, you'll never develop the eye that you need to really become good as a painter."
Which is the analogy that applies, and what makes it the right one?
I think the difference is how much of the job the tool can take over. The tractor can take over the job of digging the row, with far more power, far more speed, and honestly far more quality. The paint by numbers can take over the job of visualizing the painting, with some loss of quality and a total loss of creativity. (In painting, the creativity is considered a vital part; in digging corn rows, not so much.)
I think that software is more like painting, rather than row-hoeing. I think that AI (currently) is in the form of speeding things up with some loss of both quality and creativity.
You're right, however I think we've already gone through this before. Most of us (probably) couldn't tell you exactly how an optimizing compiler picks optimizations or exactly how JavaScript maps to processor instructions, etc -- we hopefully understand enough at one level of abstraction to do our jobs. Maybe LLM driving will be another level of abstraction, when it gets better at (say) architecting projects.
Yeah, this is what I really like about AI tools though. They're way better than me at annoying minutia like getting CSS syntax right. I used to dread that kind of thing!
Companies that leverage LLMs and AIs to let their employees be more productive will thrive.
Companies that try to replace their employees with LLMs and AIs will fail.
Unfortunately, all that's in the long run. In the near term, some CEOs and management teams will profit from the short term valuations as they squander their companies' future growth on short-sighted staff cuts.
That's really it. These tools are useful as assistants to programmers but do not replace an actual programmer. The right course is to embrace the technology moderately rather than reject it completely or bet on it replacing workers.
> In the near term, some CEOs and management teams will profit from the short term valuations
That's actually really interesting to think about. The idea that doing something counter-productive like trying to replace employees with AI (which will cause problems), may actually benefit the company in terms of valuations in the short run. So in effect, they're hurting and helping the company at the same time.
Hey if the check clears for the bonus they got for hitting 'reduce costs in the IT department', they often bail before things rear their ugly head, or in the ugly case, Reality Distortion Field's the entire org into making the bad anti patterns permanent, even while acknowledging the cost/delivery/quality inefficiencies[0].
This is especially prevalent in waterfall orgs that refuse change. Body shops are more than happy to waste a huge portion of their billable hours on planning meetings and roadmap revisions as the obviousness of the mythical man month comes to bear on the org.
Corners get cut to meet deadlines, because the people who started/perpetuated whatever myth need to save their skins (and hopefully continue to get bonuses.)
The engineers become a scapegoat for the org's management problems (And watch, it very likely will happen at some shops with the 'AI push'). In the nasty cases, the org actively disempowers engineers in the process[0][1].
[0] - At one shop, there was grief we got that we hadn't shipped a feature, but the only reason we hadn't, was IT was not allowed to decide between a set of radio buttons or a drop-down on a screen. Hell I got yelled at for just making the change locally and sending screenshots.
[1] - At more than one shop, FTE devs were responsible for providing support for code written by offshore that they were never even given the opportunity to review. And hell yes myself and others pushed for change, but it's never been a simple change. It almost always is 'GLWT'->'You get to review the final delivery but get 2 days'->'You get to review the set of changes'->'Ok you can review their sprint'->'OK just start reviewing every PR'.
See also: Toys 'R' Us and Sears, which were killed by consultancy groups loading on debt and selling assets for an immediate profit, which helped the immediate shareholders but hurt all of the stakeholders.
Very well said. Using code assistance is going to be table stakes moving forward, not something that can replace people. It’s not like competitors can’t also purchase AI subscriptions.
Honestly, if you're not doing it now, you're behind. The sheer amount of time that using it smartly can save you, freeing you to focus on the parts that actually matter, is massive.
It is quite heartening to see so many people care about "good code". I fear it will make no difference.
The problem is that the software world got eaten up by the business world many years ago. I'm not sure at what point exactly, or if the writing was already on the wall when Bill Gates wrote his open letter to hobbyists in 1976.
The question is whether shareholders and managers will accept less good code. I don't see how it would be logical to expect anything else, as long as profit lines go up why would they care.
Short of some sort of cultural pushback from developers or users, we're cooked, as the youth say.
Bad code might be bad, or might be sufficient. It's situational. And looking at what exists today, the majority of code is pretty bad already - and not all bad code leads to a bad business.
In fact, some bad code is very profitable for some businesses (ask any SAP integrator).
This is fun to think about. I used to think that all software was largely garbage, and at one point, I think this _was_ true. Sometime over the last 20 years, I believe this ceased to be the case. Most software these days actually works. Importantly, most software is actually stable enough that I can make it half an hour without panic saving.
Could most software be more awesome? Yes. Objectively, yes. Is most software garbage? Perhaps by raw volume of software titles, but are most popular applications I’ve actually used garbage? Nope. Do I loathe the whole subscription thing? Yes. Absolutely. Yet, I also get it. People expect software to get updated, and updates have costs.
So, the pertinent question here is, will AI systems be worse than humans? For now, yeah. Forever? Nope. The rate of improvement is crazy. Two years ago, LLMs I ran locally couldn’t do much of anything. Now? Generally acceptable junior dev stuff comes out of models I run on my Mac Studio. I have to fiddle with the prompts a bit, and it’s probably faster to just take a walk and think it over than spend an hour trying different prompts… but I’m a nerd and I like fiddling.
> Short of some sort of cultural pushback from developers or users
Corporations create great code too: they're not all badly run.
The problem isn't a code quality issue: it is a moral issue of whether you agree with the goals of capitalist businesses.
Many people have to balance the needs of their wallet with their desire for beautiful software (I'm a developer-founder I love engineering and open source community but I'm also capitalist enough to want to live comfortably).
All the world's smartest minds are racing towards replacing themselves. As programmers, we should take note and see where the wind is blowing. At least don't discard the possibility and rather be prepared for the future. Not to sound like a tin-foil hat but odds of achieving something like this increase by the day.
In the long term (post AGI), the only safe white-collar jobs would be those built on data which is not public i.e. extremely proprietary (e.g. Defense, Finance) and even those will rely heavily on customized AIs.
> All the world's smartest minds are racing towards replacing themselves
Isn't every little script, every little automation us programmers do in the same spirit? "I don't like doing this, so I'm going to automate it, so that I can focus on other work".
Sure, we're racing towards replacing ourselves, but there would be (and will be) other more interesting work for us to do when we're free to do that. Perhaps, all of us will finally have time to learn surfing, or garden, or something. Some might still write code themselves by hand, just like how some folks like making bread .. but making bread by hand is not how you feed a civilization - even if hundreds of bakers were put out of business.
> Not to sound like a tin-foil hat but odds of achieving something like this increase by the day.
Where do you get this? The limitations of LLMs are becoming more clear by the day. Improvements are slowing down. Major improvements come from integrations, not major model improvements.
AGI likely can't be achieved with LLMs. That wasn't as clear a couple years ago.
I don't know how someone could be following the technical progress in detail and hold this view. The progress is astonishing, and the benchmarks are becoming saturated so fast that it's hard to keep track.
Are there plenty of gaps left between here and most definitions of AGI? Absolutely. Nevertheless, how can you be sure that those gaps will remain given how many faculties these models have already been able to excel at (translation, maths, writing, code, chess, algorithm design etc.)?
It seems to me like we're down to a relatively sparse list of tasks and skills where the models aren't getting enough training data, or are missing tools and sub-components required to excel. Beyond that, it's just a matter of iterative improvement until 80th percentile coder becomes 99th percentile coder becomes superhuman coder, and ditto for maths, persuasion and everything else.
Maybe we hit some hard roadblocks, but room for those challenges to be hiding seems to be dwindling day by day.
Making our work more efficient, or making humans redundant, should be really exciting. It's not set in stone that we need to leave people middle-aged with families and now completely unable to earn enough to provide a good life.
Hopefully if it happens, it happens to such a huge number of people that it forces a change.
But that already happened to lots of industries and lots of people, we never cared before about them, now it's us so we care, but nothing is different about us. Just learn to code!
> All the world's smartest minds are racing towards replacing themselves.
I think they are hoping that their future is safe. And it is the average minds that will have to go first. There may be some truth to it.
Also, many of these smartest minds are motivated by money, to safeguard their future, from a certain doom that they know might be coming. And AI is a good place to be if you want to accumulate wealth fast.
Nah. As more people are rendered unemployed the buying market and therefore aggregate demand will fall. Fewer sales hurts the bottom line. At some point, revenues across the entire economy fall, and companies cannot afford the massive datacenters and nuclear power plants fueling them. The hardware gets sold cheap, the companies go under, and people get hired again. Eventually, some kind of equilibrium will be found or the world engages in the Butlerian Jihad.
https://en.m.wikipedia.org/wiki/Rubber_duck_debugging
I like maths, I hate graphing. Tedious work even with state of the art libraries and wrappers.
LLMs do it for me. Praise be.
I see these comments all the time and they don’t reflect my experience so I’m curious what your experience has been
I've seen enough people led astray by talking to it.
They can be productive to talk to but they can’t actually do your job.
Eventually I land on a solution to my problem that isn't disgusting and isn't AI slop.
Having a sounding board, even a bad one, forces me to order my thinking and understand the problem space more deeply.
My most productive experiences with LLMs is to have my design well thought out first, ask it to help me implement, and then help me debug my shitty design. :-)
I normally build things bottom up so that I understand all the pieces intimately and when I get to the next level of abstraction up, I know exactly how to put them together to achieve what I want.
In my (admittedly limited) use of LLMs so far, I've found that they do a great job of writing code, but that code is often off in subtle ways. But if it's not something I'm already intimately familiar with, I basically need to rebuild the code from the ground up to get to the point where I understand it well enough so that I can see all those flaws.
At least with humans I have some basic level of trust, so that even if I don't understand the code at that level, I can scan it and see that it's reasonable. But every piece of LLM generated code I've seen to date hasn't been trustworthy once I put in the effort to really understand it.
From my coworkers I want to be able to say, here's the ticket, you got this? And they take the ticket all the way or PR, interacting with clients, collecting more information etc.
I do somewhat think an LLM could handle client comms for simple extra requirements gathering on already well defined tasks. But I wouldn't trust my business relationships to it, so I would never do that.
This has not been my experience. LLMs have definitely been helpful, but generally they either give you the right answer or invent something plausible sounding but incorrect.
If I tell it what I'm doing I always get breathless praise, never "that doesn't sound right, try this instead."
Of course it has to be something the LLM actually has lots of material it's trained with. It won't work with anything remotely cutting-edge, but of course that's not what LLM's are for.
But it's been incredibly helpful for me in figuring out the best, easiest, most idiomatic ways of using libraries or parts of libraries I'm not very familiar with.
But IDK if somebody won't create something new that gets better. But there is no reason at all to extrapolate our current AIs into something that solves programing. Whatever constraints that new thing will have will be completely unrelated to the current ones.
Perhaps you remember that language models were completely useless at coding some years ago, and now they can do quite a lot of things, even if they are not perfect. That is progress, and that does give reason to extrapolate.
Unless of course you mean something very special with "solving programming".
Before LLMs it was mostly fine because they just didn’t do that kind of work. But now it’s like a very subtle chaos monkey has been unleashed. I’ve asked on some PRs “why is this like this? What is it doing?” And the answer is “ I don’t know, ChatGPT told me I should do it.”
The issue is that it throws basically all their code under suspicion. Some of it works, some of it doesn’t make sense, and some of it is actively harmful. But because the LLMs are so good at giving plausible output I can’t just glance at the code and see that it’s nonsense.
And this would be fine if we were working on like a crud app where you can tell what is working and broken immediately, but we are working on scientific software. You can completely mess up the results of a study and not know it if you don’t understand the code.
This weirds me out. Like I use LLMs A LOT but I always sanity check everything, so I can own the result. Its not the use of the LLM that gets me its trying to shift accountability to a tool.
Is it just me, or are we heading into a period with an explosion in the amount of software produced, but also a massive drop in its quality? Not uniformly, just a somewhat chaotic spread.
This would infuriate me. I presume these are academics/researchers and not junior engineers?
Unfortunately this is the world we're entering into, where all of us will be outsourcing more and more of our 'thinking' to machines.
For me, it's less "conversation to be skipped" and more about "can we even get to 2 years from now?" There's so much instability right now that it's hard to say what anything will look like in 6 months.
This is not an obviously true statement. There needs to be proof that there are no limiting factors that are computationally impossible to overcome. It's like watching a growing child, grow from 3 feet to 4 feet, and then saying "soon, this child will be the tallest person alive."
Use them for the 90% of your repetitive uncreative work. The last 10% is up to you.
Even a moderately powered machine running stockfish will destroy human super gms.
Sorry, after reading replies to this post I think I've misunderstood what you meant :)
This is my first comment so I'm not sure how to do this but I made a BYO-API key VSCode extension that uses the OpenAI realtime API so you can have interactive voice conversations with a rubber ducky. I've been meaning to create a Show HN post about it but your comment got me excited!
In the future I want to build features to help people communicate their bugs / what strategies they've tried to fix them. If I can pull it off it would be cool if the AI ducky had a cursor that it could point and navigate to stuff as well.
Please let me know if you find it useful https://akshaytrikha.github.io/deep-learning/2025/05/23/duck...
I humbly suggest that a more immediate concern to rectify is the work environment itself, so that the fear that one might "sound dumb to your coworkers & waste their time" does not exist.
It's as if the rubber duck were actually on the desk while you're programming, and if we have an MCP that can get live access to the code, it could give you realtime advice.
Another example: saying out loud the colors red, blue, yellow, purple, orange, green—each color creates a feeling that goes beyond its physical properties into the emotions and experiences. AI image-generation might know the binary arrangement of an RGBA image but actually, it has NO IDEA what it is to experience colour. No idea how to use the experience of colour to teach a peer of an algorithm. It regurgitates a binary representation.
At some point we’ll get there though, no doubt. It would be foolish to say never! Those who want to get there before everyone else should probably focus on the organoids, because the most powerful things come from some Faustian monstrosity.
Do you actually see a tree with nodes that you can rearrange and have the nodes retain their contents and such?
I wonder if the term "rubber duck debugging" will still be used much longer into the future.
I still think about Tom Scott's 'where are we on the AI curve' video from a few years back. https://www.youtube.com/watch?v=jPhJbKBuNnA
Looking forward to rubber-duck-shaped hardware AI interfaces to talk to in the future. I'm sure somebody will create one.
It's ignorant to think machines will not catch up to our intelligence at some point, but for now, it clearly hasn't happened.
I think there needs to be some kind of revolutionary breakthrough again to reach the next stage.
If I were to guess, it needs to be in the learning/backpropagation stage. LLMs are very rigid, and once they go wrong, you can't really get them out of it. A junior developer, for example, could gain a new insight. LLMs, not so much.
It is impressive and very unintuitive just how far that can get you, but it's not reductive to use that label. That's what it is on a fundamental level, and aligning your usage with that will allow it to be more effective.
After walking through a short debugging session where it tried the four things I'd already thought of and eventually suggested (assertively but correctly) where the problem was, I had a resolution to my problem.
There are a lot of questions I have around how this kind of mistake could simply just be avoided at a language level (parent function accessibility modifiers, enforcing an override specifier, not supporting this kind of mistake-prone structure in the first place, and so on...). But it did get me unstuck, so in this instance it was a decent, if probabilistic, rubber duck.
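For what it's worth, some of that already exists at the tooling level. Below is a minimal sketch of the "enforced override specifier" idea, assuming Python 3.12+ and a PEP 698-aware type checker such as recent mypy or pyright; the class and method names are made up for illustration.

    from typing import override  # Python 3.12+; older versions can use typing_extensions

    class BaseHandler:
        def on_message(self, payload: str) -> None:
            print("base handling", payload)

    class CustomHandler(BaseHandler):
        @override
        def on_mesage(self, payload: str) -> None:  # typo: this overrides nothing
            print("custom handling", payload)

At runtime this still runs (and the custom method silently never gets called through the base-class interface), but a type checker that understands @override flags the misspelled method immediately, which is exactly the class of mistake the debugging session above had to hunt down by hand.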
They drive you nuts trying to communicate with them what you actually want them to do. They have a vast array of facts at immediate recall. They’ll err in their need to produce and please. They do the dumbest things sometimes. And surprise you at other times. You’ll throw vast amounts of their work away or have to fix it. They’re (relatively) cheap. So as an army of monkeys, if you keep herding them, you can get some code that actually tells a story. Mostly.
Was on r/fpga recently and mentioned that I'd had a lot of success getting LLMs to code up first-cut testbenches that let you simulate your FPGA/HDL design a lot quicker than if you were to write those testbenches yourself, and my comment was met with lots of derision. But they hadn't even given it a try before concluding that it just couldn't work.
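To make "first-cut testbench" concrete, the sort of scaffolding an LLM can draft in seconds looks roughly like the cocotb sketch below. The DUT ports (clk, rst, din, dout) and its register-like behaviour are hypothetical; you would still refine the checks against the real design.

    # Minimal cocotb smoke test for a hypothetical DUT that registers din to dout.
    import cocotb
    from cocotb.clock import Clock
    from cocotb.triggers import RisingEdge

    @cocotb.test()
    async def smoke_test(dut):
        """Drive a few values and check they appear on the output."""
        cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())

        dut.rst.value = 1
        await RisingEdge(dut.clk)
        dut.rst.value = 0

        for value in (0x00, 0x5A, 0xFF):
            dut.din.value = value
            await RisingEdge(dut.clk)   # value is captured on this edge
            await RisingEdge(dut.clk)   # give it a full cycle before checking
            assert int(dut.dout.value) == value, \
                f"expected {value:#x}, got {int(dut.dout.value):#x}"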
LLMs for coding are not even close to perfect yet, but the saturation curves are not flattening out; not by a long shot. We are living in a moment, and we need to come to terms with it as the work continues to develop; and we need to adapt, and quickly, in order to better understand what our place will become as this nascent tech continues its meteoric trajectory toward an entirely new world.
I am not a software engineer but I just can't imagine my job is not automated in 10 years or less.
10 years is about the time between King – Man + Woman = Queen and now.
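(For anyone who missed that era: the reference is to the word2vec-style observation that simple vector arithmetic over word embeddings captures analogies. A toy sketch with made-up 3-dimensional vectors follows; real embeddings have hundreds of dimensions and are learned, not hand-written.)

    import numpy as np

    # Hand-made toy vectors, purely for illustration.
    vectors = {
        "king":  np.array([0.9, 0.8, 0.1]),
        "man":   np.array([0.5, 0.9, 0.0]),
        "woman": np.array([0.5, 0.1, 0.0]),
        "queen": np.array([0.9, 0.0, 0.1]),
        "apple": np.array([0.1, 0.2, 0.9]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # king - man + woman should land nearest to queen.
    target = vectors["king"] - vectors["man"] + vectors["woman"]
    best = max(
        (w for w in vectors if w not in ("king", "man", "woman")),
        key=lambda w: cosine(target, vectors[w]),
    )
    print(best)  # queen (with these toy vectors)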
I think what is being highly underestimated is the false sense of security people feel because the jobs they interface with are also not automated, yet.
It is not hard to picture a network of automation in which, once one role is automated, the roles connected to it become easier to automate. And so on and so on, while the models keep getting stronger at the same time.
I expect we will have a recession at some point and the jobs lost are gone forever.
Software isn't like this. No one cares why you wrote the code in your PR. They only care about whether it's right.
This is why LLMs could be useful in one industry and a lot less useful in another.
LLMs' output is unreliable, and productivity gains are still not proven for an end-to-end engineering cycle.
I've seen Claude and ChatGPT happily hallucinate whole APIs for D3 on multiple occasions, which should be really well represented in the training sets.
With many existing systems, you can pull documentation into context pretty quickly to prevent the hallucination of APIs. It's obvious how that could be done automatically in the near future. "I put my engine on the ground, ran it, and it didn't even go anywhere; Ford will never beat horses."
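A minimal sketch of that "pull the docs into context" workflow, assuming the OpenAI Python SDK; the model name, documentation path, and D3 question are placeholders, not a prescription:

    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask_with_docs(question: str, doc_path: str) -> str:
        """Prepend the relevant API docs so the model is less likely to invent APIs."""
        docs = Path(doc_path).read_text()
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Answer using ONLY the API documented below.\n\n" + docs},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

    print(ask_with_docs(
        "Draw a bar chart using d3.scaleBand and d3.axisBottom.",
        "docs/d3-scale.md",  # placeholder path to locally saved docs
    ))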
o3 came out just one month ago. Have you been using it? Subjectively, the gap between o3 and everything before it feels like the biggest gap I've seen since ChatGPT originally came out.
Remember how blockchain was going to change the world? Web3? IoT? Etc etc.
I've been through enough of these cycles to understand that, while the AI gimmick is cool and all, we're probably at a local maximum. The reliability won't improve much from here (hallucinations etc.), while the costs to run it will stay high. The final tombstone will be when the AI companies stop running at a loss and actually charge for the massive costs associated with running these models.
Have you tried talking to ChatGPT voice mode? It's mind blowing. You just have a conversation with it. In any language. About anything. The other day I wanted to know about the difference between cast iron and wrought iron, and it turned into a 10 or 15 minute conversation. That's maybe a good example of an "easy" topic for LLMs (lots of textbooks for it to memorize), but the world is full of easy topics that I know nothing about!
Using it to prototype some low level controllers today, as a matter of fact!
I have a hard time imagining an LLM being able to do arbitrary things. It always feels like LLMs can do lots of the easy stuff, but if they can't do everything, you need the skilled engineer anyway, and they'd knock the easy things out in a week regardless.
Do you want to be a jobless weaver, or an engineer building mechanical looms for a higher pay than the weaver got?
Personally I’m thrilled that I can get trivial, one-off programs developed for a few cents and the cost of a clear written description of the problem. Engaging internal developers or consulting developers to do anything at all is a horrible experience. I would waste weeks on politics, get no guarantees, and waste thousands of dollars and still hear nonsense like, “you want a form input added to a web page? Aw shucks, that’s going to take at least another month” or “we expect to spend a few days a month maintaining a completely static code base” from some clown billing me $200/hr.
I hate AI code assistants, not because they suck, but because they work. The writing is on the wall.
If we aren't working on our own replacements, we'll be the ones replaced by somebody else's vibe code, and we have no labor unions that could plausibly fight back against this.
So become a Vibe Coder and keep working, or take the "prudent" approach you mention - and become unemployed.
> but each succeeding iteration seems to be more disappointing
This is because the scaling hypothesis (more data and more compute = gains) is plateauing: essentially all the text data has been used, and compute is reaching diminishing returns for reasons I'm not smart enough to explain, but it is.
So now we're seeing incremental core-model advancements, variations and tuning in the pre- and post-training stages, and a ton of applications (agents).
This is good, imo. But obviously it's not good for delusional valuations based on exponential growth.
Mediocre ones … maybe not so much.
When I worked for a Japanese optical company, we had a Japanese engineer, who was a whiz. I remember him coming over from Japan, and fixing some really hairy communication bus issues. He actually quit the company, a bit after that, at a very young age, and was hired back as a contractor; which was unheard of, in those days.
He was still working for them, as a remote contractor, at least 25 years later. He was always on the “tiger teams.”
He did awesome assembly. I remember when the PowerPC came out, and “Assembly Considered Harmful,” was the conventional wisdom, because of pipelining, out-of-order instructions, and precaching, and all that.
His assembly consistently blew the doors off anything the compiler did. Like, by orders of magnitude.
One major aspect of software engineering is social: requirements analysis and figuring out what the customer actually wants; they often don't know themselves.
If a human engineer struggles to figure out what a customer wants and a customer struggles to specify it, how can an LLM be expected to?
Probably going to have the same outcome.
I actually imagine it's the opposite of what you say here. I think technically inclined "IT business partners" will be capable of creating applications entirely without software engineers... because I see that happen every day in the world of green energy. The issues come later, when things have to be maintained, scaled, or made efficient. This is where the software engineering comes in, because it actually matters whether you used a list or a generator in your Python app when it iterates over millions of items and not just a few hundred.
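To make the list-vs-generator point concrete, a small sketch (the exact byte counts vary by interpreter, but the shape of the difference doesn't):

    import sys

    n = 1_000_000

    squares_list = [x * x for x in range(n)]   # materializes every element up front
    squares_gen  = (x * x for x in range(n))   # yields elements lazily, one at a time

    print(sys.getsizeof(squares_list))  # roughly 8 MB for the list object alone (items are extra)
    print(sys.getsizeof(squares_gen))   # a couple hundred bytes, regardless of n

    # Both feed an aggregate the same way, but the generator never holds all items at once:
    print(sum(x * x for x in range(n)))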
It does need to be reliable, though. LLMs have proven very bad at that
Chat UIs are an excellent customer feedback loop. Agents develop new iterations very quickly.
LLMs can absolutely handle abstractions and different kinds of component systems and overall architecture design.
They can also handle requirements analysis. But it comes back to iteration for the bottom line which means fast turnaround time for changes.
The robustness and IQ of the models continue to improve. All of software engineering is well on its way to being automated.
Probably five years max where un-augmented humans are still generally relevant for most work. You are going to need deep integration of AI into your own cognition somehow in order to avoid just being a bottleneck.
Software engineering is a different thing, and I agree you're right (for now at least) about that, but don't underestimate the sheer number of brainless coders out there.
I would argue it’s a good thing to replace the actual brainless activities.
It really depends on the organization. In many places product owners and product managers do this nowadays.
Presumably, they're trained on a ton of requirements docs, as well as a huge number of customer support conversations. I'd expect them to do this at least as well as coding, and probably better.
These little side quests used to eat a lot of my time and I’m happy to have a tool that can do these almost instantly.
Also, there are often multiple ways to achieve a certain style, and they all work fine until you want a particular tweak, in which case only one will work, and the LLM usually gets stuck on one of the ones that doesn't.
Telling, isn't it?
This is probably just a way of saying it's better at simple tasks than complex ones. I can eventually get Copilot to write SQL that's complex and accurate, but I don't find it faster or more effective than writing it myself.
Actually I think it's perfectly adequate at SQL too.
What an awful imagination. Yes there are people who don't like CSS but are forced to use it by their job so they don't learn it properly, and that's why they think CSS is rote memorization.
But overall I agree with you that if a company is too cheap to hire a person who is actually skilled at CSS, it is still better to foist that CSS job onto LLMs than onto an unwilling human. Because that unwilling human is not going to learn CSS well and won't enjoy writing it.
On the other hand, if the company is willing to hire someone who's actually good, LLMs can't compare. It's basically the old argument of LLMs only being able to replace less good developers. In this case, you admitted that you are not good at CSS and LLMs are better than you at CSS. It's not task-dependent, it's skill-dependent.
Here's a kid out hoeing rows for corn. He sees someone planting with a tractor, and decides that's the way to go. Someone tells him, "If you get a tractor, you'll never develop the muscles that would make you really great at hoeing."
Different analogy: Here's someone trying to learn to paint. They see someone painting by numbers, and it looks a lot easier. Someone tells them, "If you paint by numbers, you'll never develop the eye that you need to really become good as a painter."
Which is the analogy that applies, and what makes it the right one?
I think the difference is how much of the job the tool can take over. The tractor can take over the job of digging the row, with far more power, far more speed, and honestly far more quality. The paint by numbers can take over the job of visualizing the painting, with some loss of quality and a total loss of creativity. (In painting, the creativity is considered a vital part; in digging corn rows, not so much.)
I think that software is more like painting, rather than row-hoeing. I think that AI (currently) is in the form of speeding things up with some loss of both quality and creativity.
Can anyone steelman this?
Companies that try to replace their employees with LLMs and AIs will fail.
Unfortunately, all that's in the long run. In the near term, some CEOs and management teams will profit from the short term valuations as they squander their companies' future growth on short-sighted staff cuts.
That's actually really interesting to think about. The idea that doing something counter-productive like trying to replace employees with AI (which will cause problems), may actually benefit the company in terms of valuations in the short run. So in effect, they're hurting and helping the company at the same time.
This is especially prevalent in waterfall orgs that refuse change. Body shops are more than happy to waste a huge portion of their billable hours on planning meetings and roadmap revisions as the obvious truth of the mythical man-month bears down on the org.
Corners get cut to meet deadlines, because the people who started/perpetuated whatever myth need to save their skins (and hopefully continue to get bonuses.)
The engineers become a scapegoat for the org's management problems (And watch, it very likely will happen at some shops with the 'AI push'). In the nasty cases, the org actively disempowers engineers in the process[0][1].
[0] - At one shop, we got grief that we hadn't shipped a feature, but the only reason we hadn't was that IT was not allowed to decide between a set of radio buttons or a drop-down on a screen. Hell, I got yelled at for just making the change locally and sending screenshots.
[1] - At more than one shop, FTE devs were responsible for providing support for code written by offshore that they were never even given the opportunity to review. And hell yes myself and others pushed for change, but it's never been a simple change. It almost always is 'GLWT'->'You get to review the final delivery but get 2 days'->'You get to review the set of changes'->'Ok you can review their sprint'->'OK just start reviewing every PR'.
“The market can remain irrational longer than you can remain solvent.” — attributed to John Maynard Keynes
The problem is that the software world got eaten up by the business world many years ago. I'm not sure at what point exactly, or if the writing was already on the wall when Bill Gates wrote his open letter to hobbyists in 1976.
The question is whether shareholders and managers will accept less good code. I don't see how it would be logical to expect anything else: as long as the profit lines go up, why would they care?
Short of some sort of cultural pushback from developers or users, we're cooked, as the youth say.
Bad code leads to bad business
This makes me think of hosting departments; you know, the people who are using vmware, physical firewalls, DPI proxies, and whatnot.
At the other end, you have public cloud providers, which are using qemu, netfilter, dumb networking devices, and the like.
Who got eaten by whom, nobody could have guessed...
Bad business leads to bad business.
Bad code might be bad, or might be sufficient. It's situational. And looking at what exists today, the majority of code is pretty bad already - and not all businesses with bad code turn into bad businesses.
In fact, some bad code is very profitable for some businesses (ask any SAP integrator).
Could most software be more awesome? Yes. Objectively, yes. Is most software garbage? Perhaps by raw volume of software titles, but are most popular applications I’ve actually used garbage? Nope. Do I loathe the whole subscription thing? Yes. Absolutely. Yet, I also get it. People expect software to get updated, and updates have costs.
So, the pertinent question here is, will AI systems be worse than humans? For now, yeah. Forever? Nope. The rate of improvement is crazy. Two years ago, LLMs I ran locally couldn’t do much of anything. Now? Generally acceptable junior dev stuff comes out of models I run on my Mac Studio. I have to fiddle with the prompts a bit, and it’s probably faster to just take a walk and think it over than spend an hour trying different prompts… but I’m a nerd and I like fiddling.
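As one concrete version of that local-model workflow, here is a minimal sketch assuming an Ollama server on its default port; the model name and prompt are placeholders and may well differ from whatever the commenter actually runs:

    import requests

    def local_codegen(prompt: str, model: str = "qwen2.5-coder") -> str:
        """Ask a locally hosted model for code via Ollama's /api/generate endpoint."""
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=300,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    print(local_codegen("Write a Python function that parses ISO 8601 timestamps."))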
> Jonathan Blow - Preventing the Collapse of Civilization (English only) :: https://inv.nadeko.net/watch?v=pW-SOdj4Kkk
and it made me think of your comment. In summary, I disagree, and think that video argues the point very convincingly.
Corporations create great code too: they're not all badly run.
The problem isn't a code quality issue: it is a moral issue of whether you agree with the goals of capitalist businesses.
Many people have to balance the needs of their wallet with their desire for beautiful software (I'm a developer-founder I love engineering and open source community but I'm also capitalist enough to want to live comfortably).
In the long term (post-AGI), the only safe white-collar jobs would be those built on data which is not public, i.e. extremely proprietary (e.g. Defense, Finance), and even those will rely heavily on customized AIs.
Isn't every little script, every little automation we programmers write, done in the same spirit? "I don't like doing this, so I'm going to automate it, so that I can focus on other work."
Sure, we're racing towards replacing ourselves, but there would be (and will be) other more interesting work for us to do when we're free to do that. Perhaps, all of us will finally have time to learn surfing, or garden, or something. Some might still write code themselves by hand, just like how some folks like making bread .. but making bread by hand is not how you feed a civilization - even if hundreds of bakers were put out of business.
Unless you have a mortgage.. or rent.. or need to eat
Where do you get this? The limitations of LLMs are becoming more clear by the day. Improvements are slowing down. Major improvements come from integrations, not major model improvements.
AGI likely can't be achieved with LLMs. That wasn't as clear a couple years ago.
Are there plenty of gaps left between here and most definitions of AGI? Absolutely. Nevertheless, how can you be sure that those gaps will remain given how many faculties these models have already been able to excel at (translation, maths, writing, code, chess, algorithm design etc.)?
It seems to me like we're down to a relatively sparse list of tasks and skills where the models aren't getting enough training data, or are missing tools and sub-components required to excel. Beyond that, it's just a matter of iterative improvement until 80th percentile coder becomes 99th percentile coder becomes superhuman coder, and ditto for maths, persuasion and everything else.
Maybe we hit some hard roadblocks, but the room for those challenges to be hiding in seems to be dwindling day by day.
Making our work more efficient, or even making humans redundant, should be really exciting. It's not set in stone that we need to leave middle-aged people with families completely unable to earn enough to provide a good life.
Hopefully if it happens, it happens to such a huge amount of people that it forces a change
Now we have Geoffrey Hinton getting the prize for contributing to one of the most destructive inventions ever.
I think they are hoping that their future is safe. And it is the average minds that will have to go first. There may be some truth to it.
Also, many of these smartest minds are motivated by money, to safeguard their future, from a certain doom that they know might be coming. And AI is a good place to be if you want to accumulate wealth fast.