> [Harper Reed] cautioned against being overly precious about the value of deeply understanding one’s code, which is no longer necessary to ensure that it works.
That just strikes me as an odd thing to say. I’m convinced that this is the dividing line between today’s software engineers and tomorrow’s AI engineers (in whatever form that takes - prompt, vibe, etc.) Reed’s statement feels very much like a justification of “if it compiles, ship it!”
> “It would be crazy if in an auto factory people were measuring to make sure every angle is correct,” he said, since machines now do the work. “It’s not as important as when it was a group of ten people pounding out the metal.”
Except that the machines doing that work aren’t regularly hallucinating angles, spurious welding joints, etc.
Also, you know who did measure every angle to make sure it was correct? The engineers who put together the initial design. They sure as hell took their time getting every detail of the design right before it ever made it to the assembly line.
An entire generation of devs grew up using unaudited, unverified, unknown-license code. Code which, at a moment's notice, can be sold to a threat actor.
And I've seen devs try to add packages to the project without even considering the source. Using forks of forks of forks, without considering the root project, or examining whether it's just a private fork, or which fork is most active and updated.
If you don't care about that code, why care about AI code? Or even your own?
This is an underrated comment. Whose job is it to do the thinking? I suppose it's still the software engineer, which means the job comes down to "code prompt engineer" and "test prompt engineer".
The whole auto factory thing sounds completely misinformed to me. Just because a machine made it does not mean the output isn't checked in a multitude of ways.
Any manufacturing process is subject to quality controls. Machines are maintained. Machine parts are swapped out long before they lead to out-of-tolerance work. Process outputs are statistically characterised, measured and monitored. Measurement equipment is recalibrated on a schedule. 3D-printed parts are routinely X-rayed to check for internal residue. If something can go wrong, it sure as hell is checked.
Maybe things that can't possibly fail are not checked, but the class of software that can't possibly fail is currently very small, no matter who or what generates it.
Additionally, production lines are all about doing the same thing over and over again, with fairly minimal variations.
Software isn't like that. Because code is relatively easy to reuse, novelty tends to dominate new code written. Software developers are acting like integrators in at least partly novel contexts, not stamping out part number 100k of 200k that are identical.
I do think modern ML has a place as a coding tool, but these factory-like conceptions are very off the mark imo.
On the auto factory side, the Toyota stuck gas pedal comes to mind, even if it can happen only under worst-case circumstances. But that's the (1 - 0.[lots of nines]) case.
On the software side, the Therac-25 story is absolutely terrifying - you replace a physical interlock with a software-based one that _can't possibly go wrong_ and you get a killing machine that would probably count as unethical for executions of convicted terrorists.
A buddy of mine was a director in a metrology integration firm that did nothing but install lidar, structured light and other optical measurement solutions for auto assembly plants. He had a couple dozen people working full time on new model line build outs (every new model requires substantial refurb and redesign to the assembly line) and ongoing QA of vehicles as they were being manufactured at two local Honda plants. The precision they were looking for is pretty remarkable.
> Any manufacturing process is subject to quality controls.
A few things on this illusion:
* Any manufacturer will do everything in their power to avoid meeting anything but the barest minimums of standards due to budget concerns
* QA workers are often pressured to let small things fly and cave easily because they simply do not get paid enough to care and know they won't win that fight unless their employer's product causes some major catastrophe that costs lives
* Most common goods and infrastructure are built by the lowest bidder with the cheapest materials using underpaid labor, so as for "quality" we're already starting at the bottom.
There is this notion that because things like ISO and QC standards exist, people follow them. The enforcement of quality is weak and the reach of any enforcing bodies is extremely short when pushed up against the wall by the teams of lawyers afforded to companies like Boeing or Stellantis.
I see it too regularly at my job not to call out this idea that quality control is anything but smoke and mirrors, deployed with minimal effort and maximum reluctance. Hell, it's arguably the reason why I have a job, since about 75% of the machines I walk through their doors to fix broke because they were improperly maintained, poorly implemented or sabotaged by an inept operator. It leaves me embittered, to be honest, because it doesn't have to be this way, and the only reason why it is boils down to greed and mismanagement.
It’s been said that every jetliner in the air right now has a 1-inch hairline fracture in it somewhere - but that the plane is designed to tolerate the failure of any one or two parts.
Software doesn’t exactly work the same way. You can make “AI” that operates more like the continuous [0,1], but at the end of the day the computer is still going to execute a discrete {0,1}.
Something I've been thinking about is that most claims of AI productivity apply just as well (and more concretely and reliably) to just... better tooling and abstractions.
Code already lets us automate work away! I can stamp out ten instances of a component or call a function ten times and cut my manual labor by 90%
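To make that concrete, a trivial sketch (made-up form fields; the function plus a data-driven loop is the whole trick):

```python
# Write the repetitive part once; data stamps out the ten instances.
def render_field(name: str, label: str) -> str:
    return f'<label>{label}<input name="{name}"></label>'

fields = [("email", "Email"), ("first_name", "First name"), ("last_name", "Last name")]
form = "\n".join(render_field(name, label) for name, label in fields)
print(form)
```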
I'm not saying AI has nothing to add, but the "assembly line" analogy - where we precisely factor out mundane parts of the process to be automated - is what we've been doing this whole time.
AI demands a whole other analogy. The intuitions from automating factories really don't apply, imo.
Here's one candidate: AI is like gaining access to a huge pool of cheap labor, doing tasks that don't lend themselves to normal automation. Something like when manufacturing got offshored to China in the late 20th century
If you're chronically doing something mundane in software development, you're doing something wrong. That was true even before AI.
100%. I keep thinking this, and sometimes saying it.
Sure, if you're stuck in a horrible legacy code base, it's harder. But you can _still_ automate tedious work, given you can manage to put in the proverbial stop for gas. I've seen loads of developers just happily copy paste things together, not stopping to wonder if it was perhaps time to refactor.
Exactly that. Software development isn't about writing code, and never was; it's about deciding what code to write. It doesn't matter if I type in the code or tell an AI what code it should type.
I'll admit that assuming it's correct, an AI can type faster than me. But time spent typing represents only a fraction of the software development cycle.
But, it'll take another year or two on the hype cycle for the gullible managers being sold AI to realise this fully.
The correct analogy is that software engineers design and build the _factory_. The software performs the repeatable process as defined by code, and no person sits and watches if each machine instruction is executed correctly.
Do you really want your auto tool makers to not ensure the angle of the tools is correct _before_ you go and build 10,000 (misshapen) cars?
I’m not saying we don’t embrace tooling and automation as appropriate at the next level up, but sheesh that is a misguided analogy.
> It would be crazy if in an auto factory people were measuring to make sure every angle is correct
They are.
Mechanical engineers measure more angles and dimensions than a consultant might guess - it's a standard part of quality control, although machines often do the measuring, with occasional human sampling as a back-up. You'd be surprised just how much effort goes into getting things like _packs of KitKats_ or _cans of Coke_ correct.
If getting your angles wrong risks human lives, the threat of prosecution usually makes the angles turn out right, but if all else fails, recalls can happen because the gas pedal can get stuck in the driver-side floor carpet.
Assembly-line engineering has in its favour that (A) CNC machines don't randomly hallucinate - they can fail or go out of tolerance, but usually in predictable ways - and (B) you can measure a lot of things on an assembly line with lasers as the parts roll through.
It was thankfully a crazy one-off that someone didn't check that _the plugs were put back into the door_, but that could be a sign of bad engineering culture.
>It would be crazy if in an auto factory people were measuring to make sure every angle is correct
As someone who used to automate assembly plants, this sounds to me like a rationalization from someone who has never worked in manufacturing. Quality people rightly obsess over whether or not the machine is making “every angle” correct. Imagine trying to make a car where parts don’t fit together well. Software tends to have even more interfaces, and more failure modes.
I’ve also worked in software quality, and people are great at rationalizing reasons for not doing the hard stuff, especially if that means confronting an undesired aspect of their identity (like maybe they aren’t as great a programmer as they envision). We should strive to build processes that protect us from our own shortcomings.
What strikes me the most is not even that people are willing to do that, to just fudge their work until everything is green and call it a day.
The thing that gets me is how everyone is attaching subsidized GPU farms to their workflows, organizations and code bases like this is just some regulated utility.
Sooner or later this whole LLM thing will get monetized or die. I know that people are willing to push below-par work. I didn't know people were ready to put on the leash of some untested new sort of vendor lock-in so willingly, and even argue this is the way. Some may even get the worst of both worlds: on the hook for a new class of sticker shock, paying it down, and later having these products fail out from under them, left out to dry.
Someone will pay for these models: either the investors, or users so dependent they'll pay whatever price is charged.
Harper talks a lot about using defensive coding (tests, linters, formal verification, etc.) so that it's not strictly required to craft and understand everything.
This article (and several that follow) explains his ideas better than this out-of-context quote.
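For a flavor of what that defensive layer can look like, here's a minimal sketch using a property-based test (the hypothesis library; `encode`/`decode` are hypothetical stand-ins for whatever got generated) - the invariant holds no matter who or what wrote the implementation:

```python
# A property-based test pins down an invariant of the code without anyone
# re-reading every generated line; the roundtrip property survives any
# reimplementation underneath.
from hypothesis import given, strategies as st

def encode(s: str) -> str:
    # Escape backslashes first, then the delimiter.
    return s.replace("\\", "\\\\").replace(",", "\\,")

def decode(s: str) -> str:
    out, i = [], 0
    while i < len(s):
        if s[i] == "\\" and i + 1 < len(s):
            out.append(s[i + 1])
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

@given(st.text())
def test_roundtrip(s):
    assert decode(encode(s)) == s
```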
The issue is that (the way I see it happening more and more in the real world):
- tests are run by machines
- linters are (being) replaced by machines/SaaS services (so.. machines)
- formal verification: yes - sure, 5 people will review the thousands of lines of code written every day, in a variety of languages/systems/stacks/scripts/RPAs/etc., or they will simply check that the machines return "green-a-OK" and ask the release team to push it to production.
The other thing that I have noticed is that '(misplaced) trust erodes controls'. "Hey, the code hasn't broken for 6 months, so let's remove ABC and DEF controls", and then boom goes the app (because we used to test integration, but 'come on - no need for that').
Now.. this is probably the paranoid (audit/sec) in me, but stuff happens, and history repeats itself.
Also... devs are a cost center, not a profit center. They are "value enablers", not "value adders". Like everything and everyone else, if something can be replaced with something 'equally effective' and cheaper, it is simply a matter of time.
I feel that companies want to both run toward this new gold rush and, at the same time, go slowly and see if this monster bites (someone else first).
More to the point, there are people who carefully ensure that those angles are correct, and that all of the angles result in the car’s occupants arriving at their destination instead of turning into a ball of fire. It’s just that this process takes place at design time, not assembly time.
Software is the same way. It’s even more automated than auto factories: assembly is 100% automated. Design is what we get paid to do, and that requires understanding, just like the engineers at Ford need to understand how their cars work.
Seems like the more distanced we get from actual engineering methods, the more fucked up our software and systems become. Not really surprising, to be honest. Just look at the web as an example. Almost everyone is just throwing massive frameworks and a hundred libraries together as dependencies and calling it a day. No wait, must apply the uglifier! OK, now call it a day.
There's no incentive for engineering methods because there's no liability for software defects. "The software is provided “as is”, without warranty of any kind, express or implied, including but not limited to the warranties of merchantability." It's long overdue but most people in the software industry don't want liability because it would derail their gravy train.
I'd love to blame shitty web apps on outsourcing of computing power to your users... But you'll hear countless stories of devs blowing up their AWS or GCP accounts with bad code or architecture decisions (who cares if this takes a lot of CPU to run, throw another instance at it!) , so maybe it's just a lazy/bad dev thing.
Is it though? It could be interpreted as an acknowledgement. Five years from now, testing will be further improved, yet the same people will be able to take over your iPhone by sending you a text message that you don't have to read. It's like expecting AI to solve the spam email problem, only to learn that it does not.
It's possible to say "we take the security and privacy of our customers seriously" without knowing how the code works. That's the beauty of AI. It legitimizes and normalizes stupid humans without measurably changing the level of human stupidity or the quality or efficiency of the product.
And beyond that, the factory analogy of software delivery has always been utterly terrible.
If you want to draw parallels between software delivery and automotive delivery then most of what software engineers do would fall into the design and development phases. The bit that doesn’t: the manufacturing phase - I.e., creating lots of copies of the car - is most closely modelled by deployment, or distribution of deliverables (e.g., downloading a piece of software - like an app on your phone - creates a copy of it).
The “manufacturing phase” of software is super thin, even for most basic crud apps, because every application is different, and creating copies is practically free.
The idea that because software goes through a standardised workflow and pipeline over and over and over again as it’s built it’s somehow like a factory is also bullshit. You don’t think engineers and designers follow a standardised process when they develop a new car?
It would be crazy for auto factory workers to check every angle. It is absolutely not crazy for designers and engineers to have a deep understanding of the new car they’re developing.
The difference between auto engineering and software engineering is that in one your final prototype forms the basis for building out manufacturing to create copies of it, whereas in the other your final prototype is the only copy you need and becomes the thing you ship.
(Shipping cadence is irrelevant: it still doesn’t make software delivery a factory.)
This entire line of reasoning is… not really reasoning. It’s utterly vacuous.
It’s not only vacuous. It’s incompetent, in that it fails to recognize the main value-add mechanisms - in software delivery, manufacturing, or both. I’m not sure if the Dunning-Kruger model is scientifically valid or not, but this would be a very Dunning-Kruger thing to say (high confidence, total incompetence).
> The “manufacturing phase” of software is super thin, even for most basic crud apps, because every application is different, and creating copies is practically free.
This is not true from a manager's perspective (indoctrinated by Taylorism). From a manager's perspective, development is manufacturing, and underlying business process is the blueprint.
> The idea that because software goes through a standardised workflow and pipeline over and over and over again as it’s built it’s somehow like a factory is also bullshit.
I don't think it's bs. The pipeline system is almost exactly like a factory. In fact, the entire system we've created is probably what you get when cost of creating a factory approaches instantaneous and free.
The compilation step really does correspond to the "build" phase in the project lifecycle. We've just completely automated it by this point.
What's hard for people to understand is that the bit right before the build phase, the bit that takes all the man-hours, isn't part of the build phase. This is an understandable mistake, as the build phase in physical projects takes most of the man-hours, but it doesn't make it any more correct.
It's a reckless stance that should never ever come from a software professional. "Let's develop modern spy^H^H^Hsoftware in the same way as the 737 Max, what could possibly go wrong?"
One of the reasons outsourcing for software fizzled out some compared to manufacturing is because in a factory you don't have "measuring to make sure every angle is correct" because (a) the manufacturing tools are tested and repeatable already and (b) the heavy lifting of figuring out all the angles was done ahead of time. So it was easy to mechanize and send to wherever the labor was cheapest since the value was in the tools and in the plans.
The vast majority of software, especially since waterfall methods were largely abandoned, has the planning being done at the same time as the "execution". Many edge cases aren't discovered until the programmer says "oh, huh, what about this other case that the specs didn't consider?" And outsourcing then became costly because that feedback loop for the spec-refinement ran really slowly, or not at all. Spend lots of money, find out you got the wrong thing later. So good luck with complex, long-running projects without deeply understanding the system.
Alternately, compare to something more bespoke and manual like building a house, where the tools are less precise and more of the work is done in the field. If you don't make sure all those angles are correct, you're gonna get crappy results.
(The most common answer here seems to be "just tell the agent what was wrong and let it iterate until it fixes it." I think it remains to be seen how well "find out everything that is wrong after all the code is written, and then tell the coding agent(s) to fix all of those" will work in practice. If nothing else, it will require a HUGE shift in manual testing appetite. Maybe all the software engineers turn into QA engineers + deployment engineers.)
Any data on that? I see everyone trying to outsource as much as they can. Sure, now it is moving toward AI, but every company I walk into has 10s-1000s of FTEs in outsourcing countries.
I see most Fortune 1000 companies here doing some type of agile planning/execution which is in fact more waterfall. The people here in the west are more management- and client-facing; the rest is 'thrown over the fence'.
The lead poisoning of our time: companies getting high on hype tech, killed off because the "freedom from programmers" no-code tools create Gordian project knots.
And all because the MBAs yearn for freedom from dependencies and thus reality.
That's wild. You can't say this statistical, hyper-generalised system of "AI" is in any way comparable to the outputs of highly specific, highly deterministic machinery. It's like comparing a dice roll to a clock. If anything reviewing engineers now need to "measure the angles" more closely than ever.
> It would be crazy if in an auto factory people were measuring to make sure every angle is correct
That could not be any further from the truth.
Take a decent enterprise CNC machine (look on YouTube, lots of videos) that is based on servos, not the stepper-motor amateur machines. That servo-based machine is measuring distances and angles hundreds of times per second, because that is how it works. Your average factory has a bunch of those.
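A toy sketch of that point (invented gains and timings, nothing like real motion-control firmware) - the control loop only works because it measures on every tick:

```python
# A servo axis converges only by comparing the commanded position against
# the encoder reading on every control-loop tick, hundreds of times a second.
target, position = 100.0, 0.0   # commanded vs. encoder position (mm)
dt, ticks_per_s = 0.002, 500    # 500 control-loop ticks per second
gain = 0.08                     # toy proportional response per tick

for tick in range(2 * ticks_per_s):
    error = target - position   # the "measurement" happens here, every tick
    position += gain * error    # correction derived from the measurement
    if abs(error) < 0.001:
        break

print(f"settled after {tick} ticks ({tick * dt:.3f}s), error {target - position:.6f} mm")
```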
Whoever said that should try getting their head out of their ass at least every other year.
> Reed’s statement feels very much like a justification of "if it compiles, ship it!"
Not really. More like: if fopen works fine, don't bother looking at how it does so.
SWE is going to look more like QA. I mean, as a SWE if I use the webrtc library to implement chat and it works almost always but just this once it didn't, it is likely my manager is going to ask me to file a bug and move on.
>It would be crazy if in an auto factory people were measuring to make sure every angle is correct
Yeah, but there's still something checking the angles. When an LLM writes code, if it's not the human checking the angles, then nothing is, and you just hope that the angles are correct, and you'll get your answer when you're driving 200 km/h on the Autobahn.
The ultimate irony being that they actually do measure all kinds of things against datum points as the cars move down the line as the earlier you junk a faulty build the less it's cost you.
To me the biggest difference between a software engineer and an assembly worker is the worker makes cars, the software engineer makes the assembly line.
"The fact that they used the auto industry as an example is funny, because the Toyota way and six sigma came out of that industry."
It's even funnier when you consider that Toyota has learned how bad an idea lean manufacturing/Six Sigma/5S can be, thanks to the pandemic - they're moving away from it to some degree now.
It's wild to think that someone who purports to be an expert would compare an assembly line to AI, where the first is the ultra-optimization of systems management with human-centric processes thoughtfully layered on top and the latter a non-deterministic black box to everybody. It's almost like they are willfully lying...
I keep feeling like there's a huge disconnect between the owners/CTOs/managers with how useful they think LLMs _are supposed to be_, vs the people working on the product and how useful they think LLMs _actually are_. The article describes Harper Reed as a "longtime programmer", so maybe he falsifies my theory? From Wikipedia:
>Harper Reed is an American entrepreneur
Ah, that's a more realistic indicator of his biases. Either there's some misunderstanding, or he's incorrect, or he's being dishonest; it's my job to make sure the code that I ship is correct.
This is along the same lines as why I don't expect syntactical features to break. I assume they work. You have to accept that some abstractions work, and build on top of them.
We will reach some point where we will have to assume AI is generating the correct code.
Car factory is a funny example because that's also one of the least likely places you will see AI assisted coding. Safety critical domains are a whole different animal with SysML everywhere and thousand page requirements docs.
Also, software is a factory and the LLM is a workshop building factories. And I strongly believe that people building factories still do a lot of a) ten people pounding out the metal AND b) measuring to check.
Companies with a lack of engineering rigor are basically pre-filtered customers, packaged for a buyout by a company/VC able to afford a more rigorous company structure.
To me it feels like part of the hype train, like crypto & VR.
I recently had the (dis)pleasure of fixing a bug in a codebase that was vibe coded.
It ends up being a collection of disorganized business problems converted into code, without any kind of structure.
Refinements are implemented as super-narrow patches, resulting in complex and unorganized code, whereas a human developer might take a step back to try and extract more common patterns.
And once you reach the limit of the context window you're essentially stuck, as the LLM can no longer keep track of its patches.
English (or any spoken human language) is not precise enough to articulate what you want your code to do; more importantly, a lot of time and experience precedes the code that a senior developer writes.
If you want to have this senior developer 'vibe' code, then you'll need to have a way to be more precise in your prompts, and be able to articulate all learnings from your past mistakes and experience.
And that is incredibly heavy. Remember, this is the opposite of answering 'why did you write it like this'. This is an endless list of items that say 'don't do this, but this, in this highly specific context'.
Counterpoint: AI has helped me refactor things where I normally couldn’t. Things like extracting some common structure that’s present in a slightly different way in 30 places, where Cursor detects it, or suggests the potential for a certain pattern.
The problem with vibe coding is more behavioral, I think: the person most likely to jump on the bandwagon to avoid writing some code themselves is probably not the one thinking about long-term architecture and craftsmanship. It’s a laziness enhancer.
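A hypothetical miniature of that kind of extraction (names and domain invented) - the same shape repeated with small variations, collapsed into one helper parameterized by what varied:

```python
from dataclasses import dataclass

@dataclass
class Order:
    amount: float
    currency: str

# Before (imagine ~30 near-copies of this line, one per currency):
#   total_eur = sum(o.amount for o in orders if o.currency == "EUR") * 1.08

def total_in_usd(orders: list[Order], currency: str, fx_rate: float) -> float:
    """The extracted common structure; currency and rate are what varied."""
    return sum(o.amount for o in orders if o.currency == currency) * fx_rate

orders = [Order(10.0, "USD"), Order(20.0, "EUR")]
print(total_in_usd(orders, "EUR", 1.08))  # 21.6
```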
> AI has helped me refactor things where I normally couldn’t.
Reading "couldn't" as, you would technically not be able to do it because of the complexity or intricacy of the problem, how did you guarantee that the change offered by the AI made proper sense and didn't leave out critical patterns that were too complex for you to detect ?
Your comment makes it sound like you're now dependent on AI to refactor again if dire consequences are detected way down the line (in a few months for instance), and the problem space is already just not graspable by a mere human. Which sounds really bad if that's the case.
I have the same observation. I've been able to improve things I just didn't have the energy to do for a while. But if you're gonna be lazy, it will multiply the bad.
> Counterpoint: AI has helped me refactor things where I normally couldn’t. Things like extracting some common structure that’s present in a slightly different way in 30 places, where Cursor detects it, or suggests the potential for a certain pattern.
I have a feeling this post is going to get a lot of backlash, but I think this is a very good counterpoint. To be clear: I am not here to shill for LLMs nor vibe coding. This is a good example where an "all seeing" LLM can be helpful.
Whether or not you choose to accept the recommendation from the LLM isn't my main point. The LLM making you aware is the key value here.
Recently, I was listening to a podcast about realistic real-world uses for an LLM. One of them was a law firm trying to review details of a case to determine a strategy. One of the podcasters recoiled in horror: "An LLM is writing your briefs?" They replied: "No, no. We use it to generate ideas. Then we select the best." It was experts (lawyers, in this case) using an LLM as a tool.
If you couldn't do the task yourself (a very loaded statement which I honestly don't believe), how could you even validate that what the LLM did was correct, didn't miss anything, didn't introduce a nasty corner-case bug, etc.?
In any case, you mention a very rare and specific corner case; a dev can go a decade or two (or a lifetime or two) without ever encountering a similar requirement. If it's meant to be a convincing argument for the almighty LLM, it certainly isn't.
I've found it can help one get the confidence to get started, but there are limits, and there is a point where a lack of domain knowledge and of a desired target (sane) architecture will bite you hard.
You can have AI generate almost anything, but even AI is limited in understanding requirements; if you cannot articulate what you want very precisely, it's difficult to get "AI" to help you with that.
Is Cursor _that_ good on a monorepo? My use of AI so far has been the chat interface: I provide a clear description of what I want and manually copy-paste it. Using Copilot, I couldn't buy into their agentic mode, nor into adding files to the chat session's context.
Gemini's large context window has been really good at handling large contexts, but it still doesn't help much with refactoring.
> English (or all spoken human language) is not precise enough to articulate what you want your code to do
Exactly. And this is why I feel like we are going to go full circle on this. We've seen this cycle in our industry a couple times now:
"Formal languages are hard, wouldn't it be great if we could just talk in English and the computer would understand what we mean?" -> "Natural languages are ambiguous and not precise, wouldn't it be great if we could use a formal langue so that the computer can understand precisely what we mean?"
The eternal hope is that someday, somehow, we will be able to invent a natural language way of communicating something precise like a program, and it's just not going to happen.
Why do I think this? Because we can't even use natural language to communicate unambiguously between intelligent people. Our most earnest attempt at this, the law, is so fraught with ambiguity there's an entire profession dedicated to arguing in the gray area. So what hope do we have controlling machines precisely in this way? Are future developers destined to be equivalent to lawyers, who have to essentially debate the meaning of a program before it's compiled, just to resolve the ambiguities? If that's where this ends up, I will be very sad indeed.
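A stock illustration: even a spec as small as "remove duplicate entries from the list" names two different programs, both defensible readings of the same English:

```python
# "Remove duplicate entries from the list" - two defensible readings,
# two different programs. A formal signature forces the choice.
from collections import Counter

def keep_first_occurrence(xs):
    """Reading 1: keep one copy of each value."""
    seen, out = set(), []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def drop_all_duplicated(xs):
    """Reading 2: drop every value that occurs more than once."""
    counts = Counter(xs)
    return [x for x in xs if counts[x] == 1]

data = [1, 2, 2, 3]
print(keep_first_occurrence(data))  # [1, 2, 3]
print(drop_all_duplicated(data))    # [1, 3]
```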
> The eternal hope is that someday, somehow, we will be able to invent a natural language way of communicating something precise like a program, and it's just not going to happen
My take is more nuanced.
First, there is some evidence [1] that human language is neither necessary nor sufficient to enable what we experience as "thinking".
Second, our intuition, thinking etc are communicated via natural languages and imagery which form the basis for topics in the humanities.
Third, from communication via natural language slowly emerges symbolism, and then formalism, which codifies intuitions in a manner that is operational and useful.
As an example, Socratic dialogue was a precursor to Euclidean geometry, which operationally codifies our intuitions of the space around us in a manner which becomes useful.
However, formalism is stale, as there are always new worlds we experience which cannot be captured by any formalism. The genius of the human brain, not yet captured in LLMs, is being able to create symbolisms of these worlds almost on demand.
i.e., if we were to order these in terms of expressive power, it would be something like:
1) perception, cognition, thinking, imagination
2) human language
3) formal languages and computers codifying worlds experienced via 1) and 2)
Meanwhile, there is a provocative hypothesis [2] which argues that our "thinking" process lies outside computation as we know it.
> Are future developers destined to be equivalent to lawyers, who have to essentially debate the meaning of a program before it's compiled, just to resolve the ambiguities?
Future developers? You sound like you've never programmed in C++.
> The eternal hope is that someday, somehow, we will be able to invent a natural language way of communicating something precise like a program, and it's just not going to happen
What we operationally mean by "precise" involves formalism; i.e., there is an inherent contradiction between precision and natural languages.
>> The eternal hope is that someday, somehow, we will be able to invent a natural language way of communicating something precise like a program, and it's just not going to happen.
I think the closest might be the constructed language "Ithkuil". Learning it is... difficult, to put it mildly.
It doesn't have to be precise enough anymore, though, because a perfect AI can get all the details from the existing code and understand your intent from your prompt.
While I have a similar experience with, hurm, "legacy" codebases, I gotta say, LLMs (in my experience) have made the "legacification" of the codebase way, way faster.
One thing especially is the loss of knowledge about the codebase. While there was always some stackoverflow-coding, when seeing a weird / complicated piece of code, I used to be able to ask the author why it was like that. Now I sometimes get the answer "idk, it's what chatgpt gave me".
"Code increases in complication to the first level where it is too complicated to understand. It then hovers around this level of complexity as developers fear to touch it, pecking away here and there to add needed features."
You're not wrong that AI is hyped just like crypto and VR were. But it's also true that automation will increasingly impact your job, even for jobs that we have considered to be highly technical like software engineering.
I've noticed this over the last decade where tech people (of which I am one) have considered themselves above the problems of ordinary workers such as just affording to live. I really started to notice this in the lead up to the 2016 election where many privileged people did not recognize or just immediately dismissed the genuine anger and plight of working people.
This dovetails into the myth of meritocracy and the view that not having enough money or lacking basic necessities like food or shelter or a personal, moral failure and not a systemic problem.
Tech people in the 2010s were incredibly privileged. Earnings kept going up. There was seemingly infinite demand for our services. Life was in many ways great. The pandemic was the opportunity for employers to reign in runaway (from their perspective) labor costs.
Permanent layoff culture is nothing more than suppressing wages. The facade of the warm, fuzzy Big Tech employer is long gone. They are defense contractors now. Google, Microsoft and Amazon are indistinguishable from Boeing, Lockheed Martin and Northrop Grumman.
So AI won't immediately replace you. It'll start by 4 engineers with AI being able to do the job that was previously done by 5. Laying off that one person saves that money directly but also suppresses the wages of the other 4 who won't be asking for raises. They're too afraid of losing their jobs. Then it'll be 3. Then 2.
A lot of people, particularly here on HN, are going to find out just how replaceable they are and how aligning with the interests of the very wealthiest was a huge mistake. You might get paid $500K+ a year but you are still a worker. Your interests align with nurses, teachers, baristas, fast food workers and truck drivers, not the Peter Thiels of the world.
I think engineers are more like doctors / lawyers, who are also both contracted labor, whose wages can be (and have been) suppressed as automations and tactics to suppress wages arrived.
But these groups also don't have strong unions and generally don't have the class consciousness you are talking about, especially as the pay increases.
There have always been segments of the working class, which have deluded themselves into believing that by co-operating with the capitalists, they could shield themselves from the adverse effects of what is happening to the rest of the working class.
And it's an understandable impulse, but at some point you'd think people would learn instead of being mesmerized by the promise of slightly better treatment by the higher classes in exchange for pushing down the rest of the working class.
Now it's our turn as software engineers to swallow that bitter pill.
Well, in all honesty, a lot of code that actually works in production, and delivers goods, services, etc., is an over-patched mess of code snippets. The sad part is that CPUs are so powerful that it works.
In 20 years in this market I've seen a lot of this. About 6 years back it was the blockchain craze.
My boss wanted me to put blockchain in everything (so he could market this to our clients). I printed a small sign and left it on my desk. Every time someone asked me about blockchain, I would point at the sign: "We don't need blockchain!"
Every 5 years or so, there's some new thing that every uncreative product lead decides they absolutely must have, for no other reason than that it's the "in" thing. Web is hot. We need to put our product on the web. Mobile is hot. We need a mobile app. 3D is hot. We need to put 3D in our product. VR is hot. We need to put VR in our product. IoT is hot. We need to put IoT in our product. Blockchain is hot. We need to put blockchain in our product. AI is hot. We need to put AI in our product. It'll keep going on long after we're out of the business.
If you have to be so precise in your prompts to the point you're almost using a well specified, domain-specific language, you might as well program again because that's effectively the same thing.
One simile I've heard describing the situation where fancy autocomplete can no longer keep track of its patches is that you'll be sloshing back and forth between bugs. I thought it was quite poetic.
Makes me wonder if we’ll see more emphasis on loosely coupled architecture as a result of this. Software engineers maintain the structure, and AI codes the chaos at the leaves. Similar to how data engineers commoditized data via the warehouse.
You can see that in this article referencing Jassy's 4,500 years of effort, which he said "sounds crazy but it's true!"
It isn't true.
The tool he's bragging about mostly went about changing JDK8 to JDK17 in a build config file and, if you're lucky, tweaking log4j versions. 4,500 years my ass. It was more regex than AI.
Agreed. I can't think of another area where AI could amplify the effect of my existing level of knowledge as it does with coding. It's far exceeding my expectations.
Yes, it's new and impressive and changing things, but nevertheless it's still underdelivering because the over-promising is out of control. They are selling these things as Ph.D. level researchers that can ace the SAT, pass the MCAT, pass the Bar, yet it still has trouble counting the R's in "strawberry".
Not directly related, but an anecdote: well before AI, I was talking to a Portfolio Solutions Manager or something from JP Morgan. He was an MD at the firm and very full of himself. He told me, "You guys, your job is....you just Google search your problem and copy paste a solution, right?". What I found hilarious is that he also told me, "The quants, I hate that they keep their C++ code secret. I opened up the executable in Notepad to read it and it was just gibberish". Lesson: people with grave incompetence at programming feel completely competent to judge what programming is and should be.
My own tangential gripe (a bit related to yours though): the factory work began when Agile crept into the workplace. Additionally, lint, unit tests, code reviews... all this crap just piled on making programming worse still.
It stopped being fun to code around that point. Too many i's to dot to make management happy.
If you give up on unit tests and code review then the code is "yours" instead of "ours" and your coworkers will not want to collaborate on it with you.
However, this has to be substantive code review by technical peers who actually care.
Unit tests also need to be valued as integral to the implementation task. The author writes the unit tests. It helps guide the thought process. You should not offload unit tests to an intern as "scutwork".
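As a minimal sketch of that guiding effect (with `top_n` as a hypothetical function being specified), writing the tests first forces the edge-case questions - ties? short input? negative n? - before the implementation exists:

```python
import pytest

def top_n(scores: dict[str, int], n: int) -> list[str]:
    # The tests below pinned down these decisions before this body existed.
    if n < 0:
        raise ValueError("n must be non-negative")
    return sorted(scores, key=lambda k: (-scores[k], k))[:n]

def test_ties_broken_by_name():
    assert top_n({"a": 2, "b": 3, "c": 2}, 2) == ["b", "a"]

def test_short_input_is_fine():
    assert top_n({"a": 1}, 5) == ["a"]

def test_negative_n_rejected():
    with pytest.raises(ValueError):
        top_n({"a": 1}, -1)
```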
If your code is sloppy, a stylistic mess, and unreviewed, then I am going to put it behind an interface as best I can, refer to it as "legacy", rely on you for bugfixes (I'm not touching that stinking pile), and will probably try to rally people behind a replacement.
I never found linting or writing unit tests to be particularly un-fun, but I generally really really value correctness in my code, and both of those things tend to help on that front.
I used to work in aerospace R&D. The number of times I heard some variant of “it’s just software” to disregard a safety critical concern was mind boggling. My favorite is a high-level person equating it to writing directions on a napkin.
It doesn't help that most "tech visionaries", or the people considered tech bros these days, more often come from an accounting or legal background than anything technical. They are widely perceived as authorities but come without the expertise. This is why it's so perplexing for the techies when the industry gets caught up in some ridiculous hype cycle, apparently neglecting the physical realities.
With all the changes coming up, I am happy that I am retiring soon. Since I started in the 90s, SW dev has become more and more tightly controlled and feels more like an assembly line. When I started, you could work for weeks and months without much interruption. You had plenty of time for experimentation and creativity. Now everything is ticket-based and you constantly have to report status and justify what you are doing. I am sure there will always be devs who are doing interesting work, but I feel these opportunities will become fewer and fewer.
In a way it's only fair. Automation has made a lot of jobs obsolete or miserable. Software devs are a big contributor to automation, so we shouldn't be surprised that we are finally managing to automate our own jobs away.
> Since I started in the 90s, SW dev has become more and more tightly controlled and feels more like an assembly line. When I started, you could work for weeks and months without much interruption. You had plenty of time for experimentation and creativity. Now everything is ticket-based and you constantly have to report status and justify what you are doing
Yeah the consistent "reporting" of "status" on "stand-ups" where you say some filler to get someone incapable of understanding what it is that you're doing off your back for 24 more hours has consistently been one of the most useless and unpleasant parts of the job.
> you say some filler to get someone incapable of understanding what it is that you're doing off your back for 24 more hours has consistently been one of the most useless and unpleasant parts of the job
This sucks for the 50% or so who are like you, but there's another 50% who won't really get much done otherwise, either because they don't know what to do and aren't self-motivated or capable enough to figure it out (common), or because they're actively cheating you and barely working (less common).
>> Yeah the consistent "reporting" of "status" on "stand-ups" where you say some filler to get someone incapable of understanding what it is that you're doing off your back
In my experience it is human nature to think you are doing something that people around you can't or don't understand. "The graveyard is full of irreplaceable people", as the old saying goes. Sometimes the people you report to are morons, but if you consistently report to morons, it's time for introspection. There's more that you can do than just suffer. One place to start is to have some charity for the people you work with.
I also started in the 1990s and agree the evolution has been as you describe it. It does highly depend on where you work, but tightly managed JIRA-driven development seems awfully popular.
But I fall short of declaring the 1990s or 2000s or 2010s the glory days and saying that now things suck. I think part of it is nostalgia bias. I can think of a job where I spent 4 years and list all the good parts of the experience. But I suspect I'm forgetting a lot of mediocre or negative stuff.
At any rate I still like the work today. There are still generally hard challenges that you can overcome, people that depend on you, new technologies to learn about. Generically good stuff.
Thanks for pointing out JIRA. I think the problem comes from needing to keep the codebase running next month while trying to push up the numbers - not thinking years ahead, or about how to improve both the inside culture and the outside image of a company, which are more complex structures, with lots of little metrics and interdependent components, than a win/loss output or an issue tracker that ignores the fact that issues solved != issues prevented.
I guess these strategies boil down to having some MBA on top, or an engineer that has no board of MBAs to bow down to. I strive to stay with privately owned companies for this reason, but ofc these are less loud on the internet, so you can easily miss them while job hunting.
> Now everything is ticket-based and you constantly have to report status and justify what you are doing.
The weird thing about this is that many developers wanted this. They wanted the theater of Agile, JIRA tickets, points, etc.
I'm in the same boat, yet I still need to squeeze out another 10 years or so, but I'm personally working on multiple side projects so I can get out of this boring, mundane shit.
I think what you are describing is the difference between being among the people who engineered the first car and being in the factory trying to make the 100 millionth car as cheaply as possible. Forty years ago people mostly learned to program because they were interested in it. People starting software / tech companies were taking a chance on something most didn't understand. That self-selected for a type. Now it's the most common major in school. I am sure it's possible to recreate what you felt in the '90s, but probably in a different field or a subset of this one.
Knowing Amazon's backends, this is just their technical debt creeping up. Coders do repetitive work there because extending from their architecture means lots of code simply to keep the machine running. I would be surprised if they did not assign ASINs (which come from the need to keep ISBN compatibility everywhere) not only to apps but to other virtual goods, lol.
From my integrations POV, eBay was ahead of their time with their data structure and pushed for deprecation fast so as not to keep the debt. Amazon OTOH only looks more modern by instantly acquiring new market fields, followed by throwing a lot of money at facading up the mess. Every contact there, like the key account managers, was usually pushed for numbers; this has nothing to do with coders being coders.
Bosses always look for ways to instantly measure coders' output, which is just a short-sighted way of thinking. My coworkers were measured by lines of code, obviously. I wonder how you measure great engineering.
So no, this has not changed; you can still work uninterrupted on stuff for months or years if you want and skip these places, maybe even prove over your career that previous designs are stable for years to come.
Is it really that LLM-based tools make developers so much more productive, or rather that organizations have found out they can do with fewer - and less privileged - developers?
What I don't really see, especially not big-tech-internally, are stories of teams that have become amazingly more productive. For now it feels like we get some minor productivity improvements that probably do not offset the investment and are barely enough to keep the narrative alive.
A lot of it is perception. Writing software was long considered somewhat difficult and that it required smart people to do so.
AI changes this perception and coding starts to be perceived as a low level task that anyone can do easily with augmentation from AI tools.
I certainly agree that writing software is turning more into a factory job and is less intellectually rewarding now.
When I started working in the field (1996), I was told that I would receive detailed specs from an analyst that I would then "translate" into code. At that time this idea was already out of fashion, things worked this way for the core business team (COBOL on the AS/400) but in my group (internal tools, Delphi mostly) I would get only the most vague requirements.
Eventually everyone was expected to understand a good deal of the code they were working on. The analyst and the coder became the same person.
I'm deeply skeptical that the kind of people who enjoy software development are the same kind of people who enjoy steering and proofing LLM-generated code. Unlike the analyst and the coder, this strikes me as a very different skill set.
Not everyone gets to code the next groundbreaking algorithm at some R&D department.
Most programming tasks are rather repetitive, and in many countries there is hardly anything to look up to in being a software developer; it is just another blue-collar job.
And in many cultures, if you don't go into management after about five years, it is usually seen as a failure to grow in one's career.
It's been like this for a while now. Aside from companies like Google and Facebook, most companies are building some CRUD web app where the development consists of gluing together code for multiple third-party services and libraries.
It's these sorts of jobs that will be replaced by AI and a vibe coder, which will cost much less because you don't need as much experience or expertise.
Even before AI, I've always had the perception that writing software felt intellectually more like plumbing. AI just feels like having one of those fancy new tools that tradespeople may use.
Organizations have long had a preference for 'deskilling' to something reliable through bureaucratic procedures, regardless of the side effects, and even if it costs more by needing three people where one talented one could do it before. Because it is more dependable, even if it is dependably mediocre. Even though this technique may lead to their long-term doom and irrelevance.
The number of organizations that continue to use tedious languages like Java 8 and Golang...
Like, they hadn't realized they were turning humans into compilers for abstract concepts, yet now they are telling humans to get tf out of the way of AI.
I wonder about codebase maintainability over time.
I hypothesize that it takes some period of time for vibe-coding to slowly "bit rot" a complex codebase with abstractions and subtle bugs, slowly making it less robust and more difficult to maintain, and more difficult to add new features/functionality.
So while companies may be seeing what appears to be increases in output _now_, they may be missing the increased drag on features and bugfixes _later_.
Up until now, large software systems required thousands of hours of work and the efforts of bright engineers. We take established code as something to be preserved because it embeds so much knowledge and took so long to develop. If it rots, then it takes too long to repair or never gets repaired.
Imagine a future where the prompts become the precious artifact. That we regularly `rm -rf *` the entire code base and regenerate it with the original prompts, perhaps when a better model becomes available. We stop fretting about code structure or hygiene because it won't be maintained by developers. Code is written for readability and auditability. So instead of finding the right abstractions that allow the problem to be elegantly implemented, the focus is on allowing people to read the code to audit that it does what it says it does. No DSLs, just plain readable code.
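A hypothetical sketch of that workflow, with `llm_generate` standing in for whatever model API you'd actually call:

```python
# Hypothetical "prompts as the artifact" workflow: the prompt file is under
# version control; the source tree is a disposable build product.
import pathlib
import shutil

def llm_generate(prompt: str) -> dict[str, str]:
    """Stand-in: returns {relative_path: contents} for the whole code base."""
    raise NotImplementedError("call your model of choice here")

def rebuild(prompt_file: str = "PROMPTS.md", out_dir: str = "src") -> None:
    prompt = pathlib.Path(prompt_file).read_text()
    shutil.rmtree(out_dir, ignore_errors=True)  # the `rm -rf *` step
    for rel_path, contents in llm_generate(prompt).items():
        dest = pathlib.Path(out_dir) / rel_path
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(contents)
# Auditing then happens on the regenerated tree, not on handcrafted diffs.
```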
Yes, big-tech-internally I also see a lot of desire to get us to come up with some great AI achievements, but so far they are not achieving much more than already existing automations and bots and code generators can do for us.
Right. What the article is unsurprisingly glossing over (per usual) is that just because AI is perceived (by higher-ups that don’t actually do the work) to speed up coding work doesn't mean it actually does.
And probably, to some extent, all involved (depending on how delusional they are) know that it's simply an excuse to do layoffs (replacing people with offshoring) by artificially "raising the bar" to what is unrealistic for most people.
For this narrative to make sense you would have to believe that Amazon management cares more about short-term profit than the long-term quality of their work.
The narrative reflects a broader cultural shift, from "we are all in this together" (pandemic) to "our organizations are bloated and people don't work hard enough" (already pre-LLM hype post-pandemic). The observation that less-skilled people can, with the help of LLMs, take the work of traditionally more-skilled people fits this narrative. In the end, it is about demoting some types of knowledge workers from the skilled class to the working class. Apparently, important people believe that this is a long-term sustainable narrative.
Management has different layers with different goals.
A middle manager and a director certainly care a lot about accomplishing short term goals and are ok with tech debt to meet the goals.
Caring is part of it. Having good measures is another. Older measures that worked might need updating to reflect the new, higher spaghetti risk. I expect Amazon to figure it out but I don't see why they necessarily already would have.
> “It’s more fun to write code than to read code,” said Simon Willison, an A.I. fan who is a longtime programmer and blogger, channeling the objections of other programmers. “If you’re told you have to do a code review, it’s never a fun part of the job. When you’re working with these tools, it’s most of the job.”
> This shift from writing to reading code can make engineers feel as if they are bystanders in their own jobs. The Amazon engineers said that managers have encouraged them to use A.I. to help write one-page memos proposing a solution to a software problem and that the artificial intelligence can now generate a rough draft from scattered thoughts.
> They also use A.I. to test the software features they build, a tedious job that nonetheless has forced them to think deeply about their coding.
I was just thinking about this the other day (after spending an extended session talking to an LLM about bugs in its code), and I realized that when I was just starting out, I enjoyed writing code, but now the fun part is actually fixing bugs.
Maybe I'm weird, but chasing down bugs is like solving a puzzle. Writing green-field code is maybe a little bit enjoyable, but especially in areas I know well, it's mostly boring now. I'd rather do just about anything than write another iteration of a web form or connect some javascript widget to some other javascript widget in the framework flavor of the week. To some extent, then, working with LLMs has restored some of the fun of coding because it takes care of the tedious part, and I get to solve the interesting problems.
I'm with you. I love solving puzzles to make something go. In the past that involved writing code, but it's not the code writing that I love, it's the problem solving and building. And I get plenty of that now, in a post-LLM world.
I spend all day fixing bugs and that's why they pay me -- because for most people it's not an enjoyable task. I'm not denying your experience but I will tell you I think you're an outlier. For most people, fixing bugs they didn't create is the worst part of the job.
Part of the fun is also figuring out the “best” way to achieve a thing. LLMs don’t often propose the best way and will happily propose convoluted ways. Clean approaches are still hard to come up with, but LLMs certainly help implement them once thought up.
I think for me, the fun comes from preventing bugs; being able to draw on my experience to foresee common classes of bugs, and being able to prevent them through smart code architecture, making it easier for future contributors/readers to avoid walking into those traps. I'm hoping I'll keep being able to do that.
I find both fun. Writing a web form is still kinda fun even after 20 years. I guess it’s like how some people still play the same video games after years and years while others want something new.
> Andy Jassy, the chief executive, wrote that generative A.I. was yielding big returns for companies that use it for “productivity and cost avoidance.” He said working faster was essential because competitors would gain ground if Amazon doesn’t give customers what they want “as quickly as possible” and cited coding as an activity where A.I. would “change the norms.”
Like, what exactly is Amazon giving us here? I don't get it. Also, I want to see Andy Jassy start writing some code or fixing issues/bugs for the next 5-10 years and have it reviewed by anonymous engineers before I take any word from him. These sleazy marketing/sales dudes claim garbage about things they do not do or know how to do, but the media picks up everything they say. It is like my grandmother, who never went to school, telling me how brain surgery is slow and needs more productivity or else more people will die and doctors need to adapt. The shameless behavior of these marketing/sales idiots, as well as the dark side of the media, has reached a new extreme in this AI bubble.
Meanwhile, I can see from the comments how a lot of HNers treat everything this salesy guy says as holy Bible verse. My colleague was sending me freaked-out texts about how he was planning to switch careers because the Amazon Super Boss is talking about vibe coding now, but he calmed down after I told him these dudes are mostly sales/MBA types who never wrote code or fixed issues, the same way our PO doesn't know the diff between var and const.
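(For anyone outside JS-land, the var/const jab refers to a real, observable difference; a tiny illustration:

    // var is function-scoped and re-assignable; const is block-scoped and not.
    for (var i = 0; i < 3; i++) { /* ... */ }
    console.log(i);   // prints 3 — `i` leaked out of the loop
    const limit = 3;
    // limit = 4;     // TypeError: Assignment to constant variable.

Not knowing this is forgivable in a PO; the point is what it signals about executives opining on coding.)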
Who's filling that role in this brave new world?
Us?
(Yeah, we’re fucked)
A few things on this illusion:
* Any manufacturer will do everything in their power to avoid meeting anything but the barest minimums of standards due to budget concerns
* QA workers are often pressured to let small things fly and cave easily because they simply do not get paid enough to care and know they won't win that fight unless their employer's product causes some major catastrophe that costs lives
* Most common goods and infrastructure are built by the lowest bidder with the cheapest materials using underpaid labor, so as for "quality" we're already starting at the bottom.
There is this notion that because things like ISO and QC standards exist, people follow them. The enforcement of quality is weak and the reach of any enforcing bodies is extremely short when pushed up against the wall by the teams of lawyers afforded to companies like Boeing or Stellantis.
I see it too regularly at my job not to call this out: quality control is largely smoke and mirrors, deployed with minimal effort and maximum reluctance. Hell, it's arguably the reason I have a job, since about 75% of the machines I'm called in to fix broke because they were improperly maintained, poorly implemented, or sabotaged by an inept operator. It leaves me embittered, to be honest, because it doesn't have to be this way, and the only reason it is comes down to greed and mismanagement.
Software doesn’t exactly work the same way. You can make “AI” that operates more like the continuous [0,1], but at the end of the day the computer still works in discrete {0,1}.
Code already lets us automate work away! I can stamp out ten instances of a component or call a function ten times and cut my manual labor by 90%.
I'm not saying AI has nothing to add, but the "assembly line" analogies - where we precisely factor out mundane parts of the process to be automated - are what we've been doing this whole time.
AI demands a whole other analogy. The intuitions from automating factories really don't apply, imo.
Here's one candidate: AI is like gaining access to a huge pool of cheap labor, doing tasks that don't lend themselves to normal automation. Something like when manufacturing got offshored to China in the late 20th century
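To make the "stamp out ten instances" point above concrete, here's a minimal sketch (widget names hypothetical) of the kind of automation plain code has always given us:

    // Instead of copy-pasting ten near-identical blocks, describe the
    // variation as data and stamp the instances out in a loop.
    type WidgetConfig = { id: string; label: string };

    function renderWidget(cfg: WidgetConfig): string {
      return `<div id="${cfg.id}">${cfg.label}</div>`;
    }

    const configs: WidgetConfig[] = Array.from({ length: 10 }, (_, i) => ({
      id: `widget-${i}`,
      label: `Widget #${i}`,
    }));

    const html = configs.map(renderWidget).join("\n");

The mundane part gets factored out precisely because it is mechanical - which is the only sense in which the assembly-line framing applied before AI, too.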
If you're chronically doing something mundane in software development, you're doing something wrong. That was true even before AI.
Sure, if you're stuck in a horrible legacy code base, it's harder. But you can _still_ automate tedious work, provided you manage to make the proverbial stop for gas. I've seen loads of developers just happily copy-paste things together, never stopping to wonder if it was perhaps time to refactor.
I'll admit that assuming it's correct, an AI can type faster than me. But time spent typing represents only a fraction of the software development cycle.
But, it'll take another year or two on the hype cycle for the gullible managers being sold AI to realise this fully.
Do you really want your auto tool makers to not ensure the angles of the tools are correct _before_ you go and build 10,000 (misshapen) cars?
I’m not saying we don’t embrace tooling and automation as appropriate at the next level up, but sheesh that is a misguided analogy.
This is, I think, very important, especially for non-technical managers to grasp (lol, good luck with that).
They are.
Mechanical engineers measure more angles and dimensions than a consultant might guess - it's a standard part of quality control, although machines often do the measuring, with occasional human sampling as a back-up. You'd be surprised just how much effort goes into getting things correct, even _packs of kitkats_ or _cans of coke_.
If getting your angles wrong risks human lives, the threat of prosecution usually makes the angles turn out right, but if all else fails, recalls can happen because the gas pedal can get stuck in the driver-side floor carpet.
Assembly-line engineering has in its favour that (A) CNC machines don't randomly hallucinate - they can fail or go out of tolerance, but usually in predictable ways - and (B) you can measure a lot of things on an assembly line with lasers as the parts roll through.
It was thankfully a crazy one-off that someone didn't check that _the plugs were put back into the door_, but that could be a sign of bad engineering culture.
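For a sense of what "measuring with lasers as the parts roll through" means in software terms, here's a hedged sketch (nominal value, tolerance, and stop rule all hypothetical):

    // Flag parts whose measured angle drifts out of tolerance, and stop
    // the line on a trend rather than a single outlier (crude SPC-style rule).
    const NOMINAL_DEG = 90.0;
    const TOLERANCE_DEG = 0.05;

    function inTolerance(measuredDeg: number): boolean {
      return Math.abs(measuredDeg - NOMINAL_DEG) <= TOLERANCE_DEG;
    }

    function shouldStopLine(recentMeasurements: number[]): boolean {
      // e.g. stop if 3 of the last 5 measurements are out of tolerance
      const bad = recentMeasurements.slice(-5).filter((m) => !inTolerance(m)).length;
      return bad >= 3;
    }

The point is that the checking is continuous and automated, not skipped.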
As someone who used to automate assembly plants, this sounds to me like a rationalization from someone who has never worked in manufacturing. Quality people rightly obsess over whether or not the machine is making “every angle” correct. Imagine trying to make a car whose parts don't fit together well. Software tends to have even more interfaces, and more failure modes.
I’ve also worked in software quality and people are great at rationalizing reasons for not doing the hard stuff, especially if that means confronting an undesired aspect of their identity (like maybe they aren’t as great of a programmer as they envision). We should strive to build processes that protect us from our own shortcomings.
Don't have to imagine, just walk over to your local Tesla dealership.
The thing that gets me is how everyone is attaching subsidized GPU farms to their workflows, organizations and code bases like this is just some regulated utility.
Sooner or later this whole LLM thing will get monetized or die. I knew people were willing to push below-par work. I didn't know people were ready to put on the leash of some untested new sort of vendor lock-in so willingly, and even argue that this is the way. Some may even get the worst of both worlds: on the hook for a new class of sticker shock, paying it down, only to have these products fail out from under them and be left out to dry.
Someone will pay for these models: the investors, or users so dependent they'll pay whatever price is charged.
This article (and several that follow) explain his ideas better than this out of context quote.
https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/
The other thing that I have noticed is that (misplaced) trust erodes controls. "Hey, the code hasn't broken for 6 months, so let's remove controls ABC and DEF" - and then boom goes the app (because we used to test integration, but "come on - no need for that").
Now.. this is probably the paranoid (audit/sec) in me, but stuff happens, and history repeats itself.
Also.. Devs are a cost center, not a profit center. They are "value enablers", not "value adders". Like everything and everyone else, if something can be replaced with something 'equally effective' and cheaper, it is simply a matter of time.
I feel that companies want to race toward this new gold rush while at the same time moving slowly, to see if this monster bites (someone else first).
Software is the same way. It's even more automated than auto factories: assembly is 100% automated. Design is what we get paid to do, and that requires understanding, just like the engineers at Ford need to understand how their cars work.
Is it though? It could be interpreted as an acknowledgement. Five years from now, testing will be further improved, yet the same people will be able to take over your iPhone by sending you a text message that you don't have to read. It's like expecting AI to solve the spam email problem, only to learn that it does not.
It's possible to say "we take the security and privacy of our customers seriously" without knowing how the code works. That's the beauty of AI. It legitimizes and normalizes stupid humans without measurably changing the level of human stupidity or the quality or efficiency of the product.
Sold! Sold! Sold!
If you want to draw parallels between software delivery and automotive delivery then most of what software engineers do would fall into the design and development phases. The bit that doesn’t: the manufacturing phase - I.e., creating lots of copies of the car - is most closely modelled by deployment, or distribution of deliverables (e.g., downloading a piece of software - like an app on your phone - creates a copy of it).
The “manufacturing phase” of software is super thin, even for most basic crud apps, because every application is different, and creating copies is practically free.
The idea that software delivery is somehow like a factory because it goes through a standardised workflow and pipeline over and over again as it's built is also bullshit. You don't think engineers and designers follow a standardised process when they develop a new car?
It would be crazy for auto factory workers to check every angle. It is absolutely not crazy for designers and engineers to have a deep understanding of the new car they’re developing.
The difference between auto engineering and software engineering is that in one your final prototype forms the basis for building out manufacturing to create copies of it, whereas in the other your final prototype is the only copy you need and becomes the thing you ship.
(Shipping cadence is irrelevant: it still doesn’t make software delivery a factory.)
This entire line of reasoning is… not really reasoning. It’s utterly vacuous.
This is not true from a manager's perspective (indoctrinated by Taylorism). From a manager's perspective, development is manufacturing, and underlying business process is the blueprint.
I don't think it's bs. The pipeline system is almost exactly like a factory. In fact, the entire system we've created is probably what you get when cost of creating a factory approaches instantaneous and free.
The compilation step really does correspond to the "build" phase in the project lifecycle. We've just completely automated it by this point.
What's hard for people to understand is that the bit right before the build phase that takes all the man-hours isn't part of the build phase. This is an understandable mistake, as the build phase in physical projects takes most of the man-hours, but it doesn't make it any more correct.
Apparently the Cybertruck did not. And that sort of speaks for itself.
The vast majority of software, especially since waterfall methods were largely abandoned, has the planning being done at the same time as the "execution". Many edge cases aren't discovered until the programmer says "oh, huh, what about this other case that the specs didn't consider?" And outsourcing then became costly because that feedback loop for the spec-refinement ran really slowly, or not at all. Spend lots of money, find out you got the wrong thing later. So good luck with complex, long-running projects without deeply understanding the system.
Alternately, compare to something more bespoke and manual like building a house, where the tools are less precise and more of the work is done in the field. If you don't make sure all those angles are correct, you're gonna get crappy results.
(The most common answer here seems to be "just tell the agent what was wrong and let it iterate until it fixes it." I think it remains to be seen how well "find out everything that is wrong after all the code is written, and then tell the coding agent(s) to fix all of those" will work in practice. If nothing else, it will require a HUGE shift in manual testing appetite. Maybe all the software engineers turn into QA engineers + deployment engineers.)
Any data on that? I see everyone trying to outsource as much as they can. Sure, now it is moving toward AI, but every company I walk into has tens to thousands of FTEs in outsourcing countries.
I see most Fortune 1000 companies here doing some type of agile planning/execution which is in fact more waterfall. The people here in the west are more management- and client-facing; the rest is 'thrown over the fence'.
And all because the MBAs yearn for freedom from dependencies and thus reality.
That couldn't be any further from the truth.
Take a decent enterprise CNC machine (look on YouTube, there are lots of videos) that is based on servos, not the amateur stepper-motor machines. That servo-based machine is measuring distances and angles hundreds of times per second, because that is how it works. Your average factory has a bunch of those.
Whoever said that should try getting their head out of their ass at least every other year.
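The "measuring hundreds of times per second" point can be sketched as a toy closed-loop controller (gain, rate, and units entirely hypothetical; real servo drives do this and much more in firmware at kHz rates):

    // Toy proportional control loop: measure, compare to target, correct.
    const KP = 0.5;        // proportional gain (hypothetical)
    const target = 10.0;   // target position, mm
    let position = 0.0;    // measured position, mm

    function readEncoder(): number {
      return position;     // stand-in for the real sensor read
    }

    function tick(): void {
      const error = target - readEncoder();
      position += KP * error;   // correction proportional to the error
    }

    for (let i = 0; i < 100; i++) tick();   // position converges toward 10.0

Measurement isn't an optional QA step bolted on at the end; it's how the machine moves at all.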
Not really. More like: if fopen works fine, don't bother looking at how it does so.
SWE is going to look more like QA. I mean, as a SWE, if I use the webrtc library to implement chat and it works almost always, but just this once it didn't, it is likely my manager is going to ask me to file a bug and move on.
Yeah, but there's still something checking the angles. When an LLM writes code, if it's not the human checking the angles, then nothing is, and you just hope that the angles are correct, and you'll get your answer when you're driving 200 km/h on the Autobahn.
They need to read “the code is the design”. When you are cutting the door for the thousandth car, you are past design and into building.
For us, building is automatic - take code, turn it into binary.
The reason we measure the door is that we are at the design stage, and you need to make sure everything fits.
Agree, this comment makes no sense.
So you are telling me, your AI code passes a six sigma grade of quality control?
I have a bridge to sell you. No, Bridges!
It's even funnier when you consider that Toyota has learned how bad an idea lean manufacturing/6-Sig/5S can be, thanks to the pandemic - they're moving away from it to some degree now.
Six Sigma came out of Motorola, who still practice it today.
It was then adopted by the likes of GE, before finding its way into the automotive and many other manufacturing industries.
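For scale, "six sigma" is conventionally quoted with a 1.5σ shift, so defects live in the normal tail beyond 4.5σ - roughly 3.4 defects per million opportunities. A back-of-envelope check using the standard large-z tail approximation Q(z) ≈ φ(z)/z:

    // Upper tail of the standard normal at z = 4.5 (first-order asymptotic).
    const z = 4.5;
    const phi = Math.exp(-(z * z) / 2) / Math.sqrt(2 * Math.PI);
    const tailPerMillion = (phi / z) * 1e6;
    console.log(tailPerMillion.toFixed(1)); // ≈ 3.6; the exact tail is ≈ 3.4

Which is the point of the bridge joke above: very little shipped software, AI-written or not, comes anywhere near that defect rate.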
>Harper Reed is an American entrepreneur
Ah, that's a more realistic indicator of his biases. Either there's some misunderstanding, or he's incorrect, or he's being dishonest; it's my job to make sure the code that I ship is correct.
This is along the same lines as why I don't expect syntactic language features to break. I assume they work. You have to accept that some abstractions work, and build on top of them.
We will reach some point where we will have to assume AI is generating the correct code.
You are correct tho. I do think that we are approaching the point of "If it compiles, ship it"
Without proper understanding of CS, this is what we get. Lack of rigour.
This is frankly the most idiotic statement I have heard about programming yet.
That quote really misrepresents his writing.
I recently had the (dis)pleasure of fixing a bug in a codebase that was vibe coded.
It ends up being a collection of disorganized business problems converted into code, without any kind of structure.
Refinements are implemented as super-narrow patches, resulting in complex and unorganized code, whereas a human developer might take a step back to try and extract more common patterns.
And once you reach the limit of the context window you're essentially stuck, as the LLM can no longer keep track of its patches.
English (or any spoken human language) is not precise enough to articulate what you want your code to do, and more importantly, a lot of time and experience precedes the code that a senior developer writes.
If you want to have this senior developer 'vibe' code, then you'll need to have a way to be more precise in your prompts, and be able to articulate all learnings from your past mistakes and experience.
And that is incredibly heavy. Remember, this is the opposite of answering 'why did you write it like this'. This is an endless list of items that say 'don't do this, but this, in this highly specific context'.
The problem with vibe coding is more behavioral, I think: the person most likely to jump on the bandwagon to avoid writing some code themselves is probably not the one thinking about long-term architecture and craftsmanship. It's a laziness enhancer.
Reading "couldn't" as, you would technically not be able to do it because of the complexity or intricacy of the problem, how did you guarantee that the change offered by the AI made proper sense and didn't leave out critical patterns that were too complex for you to detect ?
Your comment makes it sound like you're now dependent on AI to refactor again if dire consequences are detected way down the line (in a few months for instance), and the problem space is already just not graspable by a mere human. Which sounds really bad if that's the case.
It only looks effective if you remove learning from the equation.
It's the wrong tool for the job, that's what it is.
Recently, I was listening to a podcast about realistic real-world uses for an LLM. One of them was a law firm reviewing details of a case to determine a strategy. One of the podcasters recoiled in horror: "An LLM is writing your briefs?" They replied: "No, no. We use it to generate ideas. Then we select the best." It was experts (lawyers, in this case) using an LLM as a tool.
"Technology acts as an amplifier of human intentions"
...so, if someone is just doing a sloppy job, AI-assisted vibe coding will enable them to do it faster.
In any case, the corner case you mention is very rare and specific; a dev can go a decade or two (or a lifetime or two) without ever encountering a similar requirement. If it was meant as a convincing argument for the almighty LLM, it certainly isn't one.
You can have AI generate almost anything, but even AI is limited in its understanding of requirements; if you cannot articulate what you want very precisely, it's difficult to get "AI" to help you with it.
And it’s better in the long run.
Exactly. And this is why I feel like we are going to go full circle on this. We've seen this cycle in our industry a couple times now:
"Formal languages are hard, wouldn't it be great if we could just talk in English and the computer would understand what we mean?" -> "Natural languages are ambiguous and not precise, wouldn't it be great if we could use a formal langue so that the computer can understand precisely what we mean?"
The eternal hope is that someday, somehow, we will be able to invent a natural language way of communicating something precise like a program, and it's just not going to happen.
Why do I think this? Because we can't even use natural language to communicate unambiguously between intelligent people. Our most earnest attempt at this, the law, is so fraught with ambiguity there's an entire profession dedicated to arguing in the gray area. So what hope do we have controlling machines precisely in this way? Are future developers destined to be equivalent to lawyers, who have to essentially debate the meaning of a program before it's compiled, just to resolve the ambiguities? If that's where this ends up, I will be very sad indeed.
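A concrete (hypothetical) example of that gray area: take the spec "discount orders over $100 by 10%". The prose leaves open at least two questions that code cannot leave open:

    // English left ambiguous what the code must decide:
    // 1) Is an order of exactly $100 "over $100"?
    // 2) Does the 10% apply to the whole total, or only the excess above $100?
    function discountedTotal(total: number): number {
      if (total <= 100) return total;   // decision 1: "over" means strictly over
      return total * 0.9;               // decision 2: discount the whole total
    }

Every such decision is invisible in the prose and binding in the code.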
My take is more nuanced.
First, there is some evidence [1] that human language is neither necessary nor sufficient to enable what we experience as "thinking".
Second, our intuition, thinking etc are communicated via natural languages and imagery which form the basis for topics in the humanities.
Third, from communication via natural language slowly emerge symbolism and formalism, which codify intuitions in a manner that is operational and useful.
As an example, Socratic dialogue was a precursor to Euclidean geometry, which operationally codifies our intuitions of the space around us in a manner that becomes useful.
However, formalism is stale, as there are always new worlds we experience which cannot be captured by any formalism. The genius of the human brain, not yet captured in LLMs, is the ability to create symbolisms for these worlds almost on demand.
I.e., if we were to order in terms of expressive power, it would be something like:
1) perception, cognition, thinking, imagination
2) human language
3) formal languages and computers codifying worlds experienced via 1) and 2)
Meanwhile, there is a provocative hypothesis [2] which argues that our "thinking" process lies outside computation as we know it.
[1] https://www.nature.com/articles/s41586-024-07522-w
[2] https://www.amazon.com/Emperors-New-Mind-Concerning-Computer...
[3] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
Future developers? You sound like you've never programmed in C++.
What we operationally mean by "precise" involves formalism; i.e., there is an inherent contradiction between precision and natural languages.
I think the closest might be the constructed language "Ithkuil". Learning it is... difficult, to put it mildly.
https://ithkuil.net/
This matches the description of every codebase (except one) I've come across in my 30-year career.
One thing especially is the loss of knowledge about the codebase. While there was always some stackoverflow-coding, when I saw a weird / complicated piece of code, I used to be able to ask the author why it was like that. Now, I sometimes get the answer "idk, it's what chatgpt gave me".
"Code increases in complication to the first level where it is too complicated to understand. It then hovers around this level of complexity as developers fear to touch it, pecking away here and there to add needed features."
http://h2.jaguarpaw.co.uk/posts/peter-principle/
Seems like I'm not the first one to notice this either:
https://nigeltao.github.io/blog/2021/json-with-commas-commen...
LLMs will produce code with good structure, as long as you provide that architecture beforehand.
I've noticed this over the last decade where tech people (of which I am one) have considered themselves above the problems of ordinary workers such as just affording to live. I really started to notice this in the lead up to the 2016 election where many privileged people did not recognize or just immediately dismissed the genuine anger and plight of working people.
This dovetails into the myth of meritocracy and the view that not having enough money, or lacking basic necessities like food or shelter, is a personal, moral failure and not a systemic problem.
Tech people in the 2010s were incredibly privileged. Earnings kept going up. There was seemingly infinite demand for our services. Life was in many ways great. The pandemic was the opportunity for employers to rein in runaway (from their perspective) labor costs.
Permanent layoff culture is nothing more than wage suppression. The facade of the warm, fuzzy Big Tech employer is long gone. They are defense contractors now. Google, Microsoft or Amazon are indistinguishable from Boeing, Lockheed Martin and Northrop Grumman.
So AI won't immediately replace you. It'll start by 4 engineers with AI being able to do the job that was previously done by 5. Laying off that one person saves that money directly but also suppresses the wages of the other 4 who won't be asking for raises. They're too afraid of losing their jobs. Then it'll be 3. Then 2.
A lot of people, particularly here on HN, are going to find out just how replaceable they are and how aligning with the interests of the very wealthiest was a huge mistake. You might get paid $500K+ a year but you are still a worker. Your interests align with nurses, teachers, baristas, fast food workers and truck drivers, not the Peter Thiels of the world.
In the future no one will have to code. We'll compile the business case from UML diagrams!
But these groups also don't have strong unions and generally don't have the class consciousness you are talking about, especially as the pay increases.
And it's an understandable impulse, but at some point you'd think people would learn instead of being mesmerized by the promise of slightly better treatment by the higher classes in exchange for pushing down the rest of the working class.
Now it's our turn as software engineers to swallow that bitter pill.
It’s like comparing a machine gun to a matchlock pistol and saying they’re the same thing.
My boss wanted me to put blockchain in everything (so he could market this to our clients). I printed a small sign and left it on my desk. Every time someone asked me about blockchain, I would point to the sign: "We don't need blockchain!"
AI is extremely new, impressive and is already changing things.
How can that be in any way be similar to crypto and VR?
It isn't true.
The tool he's bragging about most went around changing JDK8 to JDK17 in a build config file and, if you're lucky, tweaking log4j versions. 4,500 years my ass. It was more regex than AI.
Indeed, which is exactly what's happening with AI.
It stopped being fun to code around that point. Too many i's to dot to make management happy.
However, this has to be substantive code review by technical peers who actually care.
Unit tests also need to be valued as integral to the implementation task. The author writes the unit tests. It helps to guide the thought process. You should not offload unit tests to an intern as "scutwork".
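As a minimal sketch of the kind of author-written test meant here (the parsePriceCents helper is hypothetical), writing the cases forces the edge behavior to be decided up front:

    import test from "node:test";
    import assert from "node:assert/strict";

    // Hypothetical unit under test: parse "$1,234.56" into integer cents.
    function parsePriceCents(s: string): number {
      const value = Number(s.replace(/[$,]/g, ""));
      if (!Number.isFinite(value)) throw new Error(`bad price: ${s}`);
      return Math.round(value * 100);
    }

    test("parses dollars and cents", () => {
      assert.equal(parsePriceCents("$1,234.56"), 123456);
    });

    test("rejects non-numeric input", () => {
      assert.throws(() => parsePriceCents("N/A"));
    });

Deciding what "rejects non-numeric input" should mean is exactly the thinking that gets lost when the tests are offloaded.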
If your code is sloppy, a stylistic mess, and unreviewed, then I am going to put it behind an interface as best I can, refer to it as "legacy", rely on you for bugfixes (I'm not touching that stinking pile), and will probably try to rally people behind a replacement.
As an employee at a company with a similar attitude, I cannot agree more with this.
A burning need to dominate in a misguided attempt to fill the gaping void inside
Broken and hurting people spreading their pain as they flail blindly for relief
Our world creaks, cracks splintering out like spider thread
The foundations tremble
In a way it's only fair. Automation has made a lot of jobs obsolete or miserable. Software devs are a big contributor to automation, so we shouldn't be surprised that we are finally managing to automate our own jobs away.
Yeah the consistent "reporting" of "status" on "stand-ups" where you say some filler to get someone incapable of understanding what it is that you're doing off your back for 24 more hours has consistently been one of the most useless and unpleasant parts of the job.
This sucks for the 50% or so who are like you, but there's another 50% who won't really get much done otherwise, either because they don't know what to do and aren't self-motivated or capable enough to figure it out (common) or because they're actively cheating you and barely working (less common)
In my experience it is human nature to think you are doing something that people around you can't or don't understand. "The graveyard is full of irreplaceable people," as the old saying goes. Sometimes the people you report to are morons, but if you consistently report to a moron, it's time for introspection. There's more that you can do than just suffer. One place to start is to have some charity for the people you work with.
But I fall short of declaring the 1990s or 2000s or 2010s the glory days and saying things suck now. I think part of it is nostalgia bias. I can think of a job where I spent 4 years and list all the good parts of the experience. But I suspect I'm glossing over a lot of mediocre or negative stuff.
At any rate I still like the work today. There are still generally hard challenges that you can overcome, people that depend on you, new technologies to learn about. Generically good stuff.
I guess these strategies boil down to having some MBA on top, or an engineer with no board of MBAs to bow down to. I strive to stay with privately owned companies for this reason, but ofc these are less loud on the internet, so you can easily miss them while jobhunting.
The weird thing about this is, many developers wanted this. They wanted the theater of Agile, JIRA tickets, points etc.
I'm in the same boat, yet still need to squeeze out another 10 years or so, but I'm personally working on multiple side projects so I can get out of this boring, mundane shit.
From my integrations POV, eBay was ahead of their time with their data structure and pushed for fast deprecation so as not to keep the debt. Amazon OTOH only looks more modern by instantly acquiring new market fields, followed by throwing a lot of money at facading up the mess. Every contact there, like key account managers, was usually pushed for numbers; this has nothing to do with coders being coders.
Bosses always look for ways to instantly measure coders' output, which is just a short-sighted way of thinking. My coworkers were measured by lines of code, obviously. I wonder how you measure great engineering.
So no, this has not changed: you can still work uninterrupted on stuff for months or years if you want and skip these places, perhaps having proved over your career that your previous designs remain stable for years to come.
Eventually everyone was expected to understand a good deal of the code they were working on. The analyst and the coder became the same person.
I'm deeply skeptical that the kind of people who enjoy software development are the same kind of people who enjoy steering and proofreading LLM-generated code. Unlike the analyst and the coder, this strikes me as a very different skill set.
Not everyone gets to code the next ground breaking algorithm at some R&D department.
Most programming tasks are rather repetitive, and in many countries software developers are hardly looked up to; it is just another blue-collar job.
And in many cultures, if you don't go into management after about five years, it is usually seen as a failure to grow in your career.
It's these sorts of jobs that will be replaced by AI and a vibe coder, which will cost much less because you don't need as much experience or expertise.
Seeing Like a State by James Scott
https://en.wikipedia.org/wiki/Seeing_Like_a_State
Explains a lot of the confusing stuff I've experienced, in that eureka sort of way.
Like, they hadn't realized they were turning humans into compilers for abstract concepts, yet now they are telling humans to get tf out of the way of AI
I'm not sure what: "'deskilling' to something reliable through bureaucratic procedures" ... means.
I'm the Managing Director of a small company and I'm pretty sure you are digging at the likes of me (inter alia) - so what am I doing wrong?
I hypothesize that it takes some period of time for vibe-coding to slowly "bit rot" a complex codebase with abstractions and subtle bugs, slowly making it less robust and more difficult to maintain, and more difficult to add new features/functionality.
So while companies may be seeing what appears to be increases in output _now_, they may be missing the increased drag on features and bugfixes _later_.
Imagine a future where the prompts become the precious artifact. We regularly `rm -rf *` the entire code base and regenerate it with the original prompts, perhaps when a better model becomes available. We stop fretting about code structure or hygiene because it won't be maintained by developers. Code is written for readability and auditability. So instead of finding the right abstractions that allow the problem to be elegantly implemented, the focus is on allowing people to read the code to audit that it does what it says it does. No DSLs, just plain readable code.
People may worry that the "ASM" codebase will bit-rot, and that no one will understand the compiler output or be able to add new features to that codebase.
And probably, to some extent, all involved (depending on how delusional they are) know that it's simply an excuse to do layoffs (replacing people with offshoring) by artificially "raising the bar" to something unrealistic for most people.
Please don't put others down like that on HN. It's mean and degrades the community.
https://news.ycombinator.com/newsguidelines.html
> “It’s more fun to write code than to read code,” said Simon Willison, an A.I. fan who is a longtime programmer and blogger, channeling the objections of other programmers. “If you’re told you have to do a code review, it’s never a fun part of the job. When you’re working with these tools, it’s most of the job.”
I solve a problem, let the AI mull on the next bit, solve another problem etc.