There are different groups with vested interests that color a lot of AI discourse
You have Tech CEOs that want work done cheaper and AI companies willing to sell it to them. They will give you crazy alarming narratives around AI replacing all developers, etc.
Then you have tech employees who want to believe they’re irreplaceable. It’s easy to want to keep working how we’ve always worked with hope of getting back to pre 2022 levels of software hiring and income. AI stands in the way of that.
I don’t think people are doing this intentionally all the time. But there is so much money and social status on the line for all these groups that there are very few people able to give a neutral, disinterested perspective on a topic like AI coding.
And to add to that, reasoned, boring, thoughtful middle of the road takes are just naturally going to get fewer eyeballs than extreme points of view.
Or, am I a "devops engineer"? Or... an "SRE"? Or... a platform engineer? You know what, I don't know.

What I do know is that people keep trying to make my job obsolete, only to hire me under a different title for more money later. The practices and the tools are the same too, yet to justify it to themselves they'll make shit up about how the work is "actually, a material difference from what came before" (except it's really not).
I'm still employable after 20 years of this.
I'm not a software engineer, and I'm not a tech CEO - I don't care, but people have been trying to replace me my whole career (even myself: "Automate yourself out of a job" is a common sysadmin mantra after all). Yet, somehow, I'm still here.
> "Automate yourself out of a job” is a common sysadmin mantra after all
Not just sysadmin. I've been automating the hell out of tedious mundane tasks that are done by error prone humans, and only become more error prone as they get tired/bored of these mundane tasks. The automation essentially just becomes a new tool/app for the humans to use.
At this point in my experience doing this, employees scared of automation are probably employees that aren't very good at their job. Employees that embrace this type of automation are the ones that tend to be much better at their job.
I really loathe that part of this industry; I spent over a decade in it, and the only tangible thing I've taken away from it is that the problems these philosophies and practices set out to solve derive from two things:
- Infrastructure is very often fundamentally disconnected from the product roadmap and thus is under constant cycles of cost control
- The culture of systems administration does not respond well to change or evolution. This is a function that is built around shepherding systems - whether at scale or as pets.
Long way of saying corporations will never be free of systems administration, any more than they'll be freed from software engineering. It is far easier to optimize software than it is to optimize infrastructure, though, which is why it feels like someone is coming after you. To make matters more complex, a lot of software today overlaps with the definition of infrastructure.
Some of the easy jobs will be taken away, and your job is not under threat. Right? But some of the people who were doing low-skilled jobs will grow to compete with you. More supply, less demand. Either the pay will come down or there will be very few jobs. Fingers crossed.
Many jobs aim to solve problems so well that there’s nothing left to fix — doctors curing illnesses, firefighters preventing fires, police reducing crime, pest control eliminating infestations, or electricians making lasting repairs. And that’s totally fine — people still have jobs, and when it works, it’s actually great for everyone.
Thank you for this, not only for the AI-related discussion but for conveying the sense that sysadmin work is still as complex and finicky as it ever was. All we did with that role is let Amazon evangelize some weird, taxing "standard" for hosting.
If you're like me, though, your day-to-day has probably shifted a lot.
My day used to be making sure desktops worked and we had a repeatable process to make new good desktops out of all the complex client software they needed.
Then I made sure servers got upgraded and patched and taken care of on a day-to-day level, although it was still someone else's job to keep desktops running. At home I compiled my own kernels and used tarballs to install and update packages. Desktop hardware support was iffy in Linux.
Then I jumped to Borg and Tupperware and Kubernetes, where hardware never mattered and it almost didn't matter what clients had, because the browser they used auto-updated. At home I switched to distros where package management was automatic and rarely, if ever, broke.
I don't even know the hostnames or the network addresses of the hardware that runs my services, and AWS or GCP SREs probably rarely need to know either. Now I care about an abstract thing called a service that is instrumented with logs, metrics, and traces that would put the best local development tools of 20 years ago to shame. CI/CD and infrastructure-as-code pipelines actually did automate away much of the checklist-style sysadmin work of the past. At home I could run Talos and Ceph and Crossplane if I wanted to, but so far I've dragged the old days of individual hosts along mostly for nostalgia.
I expect to eventually end up caring about systems at an even more abstract level once something like Crossplane becomes as universal as Terraform and GitHub Actions. They'll probably run on something like WebAssembly on bare metal, because no one actually cares what's underneath their containers as long as they keep working.
The stack of technology gets taller and more abstract and as it does so the job of caring about the lower layers gets automated and the lower layers get, if not simplified, standardized to the point that automation is more reliable than human intervention.
Humans will only get squeezed out of the loop when superhuman artificial intelligence arrives and our abstract design and management of systems becomes less reliable than the automation. Then hopefully we get a nice friendly button to push for more automatically-human-aligned utility.
EDIT: That's not to say that the lower levels can be automatically designed; not yet at least. Eventually once AI is good enough at formal design then quite likely. We still need low-level software engineering to keep building the stack but it is vastly more commoditized (the one open source developer in the xkcd cartoon keeping 99% of the world's infrastructure running on a 20-year old tool/utility)
> Then you have tech employees who want to believe they’re irreplaceable. It’s easy to want to keep working how we’ve always worked with hope of getting back to pre 2022 levels of software hiring and income. AI stands in the way of that.
Eventually, things will stabilize to a point where we will know what the boundaries of all the new LLM based tooling is good for, and where humans add lots of value to that.
That will then drive a maximalist hiring spree: the more people you have working at the increased velocity, the faster you ship. And for once, perhaps the quality of code won't substantially decrease, assuming LLMs improve another few leaps in output quality and engineering workflows adjust in a normalized way.
That's the hopeful side of the equation anyway.

That sounds like a contradiction of Brooks's law, which I don't see being invalidated by AI tooling.
I feel incredibly bad for customer-oriented jobs, like white-glove customer service. I've already seen a trend (especially since 2020, but certainly before as well) of AI chat bots and AI support lines decimating that job category. These are pretty common white-collar jobs; lots of people even in our industry got their start on support lines.
I don't think any normal customer is going to want to talk to AI, once the novelty wears off. It's going to be "Let me talk to your supervisor" almost every time.
Also, judging by my LinkedIn, all the senior tech execs on the job market are now Generative AI Experts, which is funny because I thought they were all Cryptocurrency Experts when I last saw them posting this much in 2020.
The best is when you work with one at a small enough company, where they extol some currently-hot grift, and then after the company goes under (maybe not due to their poor leadership) you read about their huge successes there! Much win.
Those are groups defined by something other than actual LLM usage, which makes them both not particularly interesting. What is interesting:
You have people who've tried using LLMs to generate code and found it utterly useless.
Then you have people who've tried using LLMs to generate code and believe that it has worked very well for them.

I think this is an easy thing to wrap my mind around (since I have been in both camps):
AI can generate lots of code very quickly.
AI does not generate code that follows taste or best practices.
So in cases where the task is small, easily plannable, within the training corpus, or for a project that doesn't have high stakes, it can produce something workable quickly.
In larger projects, or anything that needs to stay maintainable in the future, code generation can fall apart or produce subpar results.
I think there is a strong case that experienced developers cannot be replaced by AI anytime soon. Where the danger lies is for junior developers fresh out of college. How are they supposed to become experienced developers if an AI can do the grunt work they normally get assigned?
It really doesn't help that AI companies are hype machines full of salesmen trying to hype up their product so you buy it. There have been a lot of amazing AI "success stories" that don't hold up under scrutiny.
I think the problem with this one is that LLMs are somehow both unreasonably effective and unreasonably ineffective. Letting the code-editor LLMs make their suggestions, I’ve gotten a ton of useless boilerplate garbage, but periodically, a couple of times a day, one suggests a block of code that is far more complete, comprehensive, and correct than it has any right to be.
Whether you’ll get the high IQ or low IQ LLM on the next suggestion is a crapshoot; how much consideration you give to either outcome (focus on the random instances of brilliance, or the constant stream of bullshit) drives the final perception.
Or are those people biased by what they want to be true because of their current situation (the dev who doesn't want to change how they work and therefore wants AI to not work, or the non-technical person who doesn't want to learn to code or be dependent on a developer and therefore wants it to work)?
And then you have research that says they're both full of shit. The article is perhaps a bit shallow, but it's spiritually correct: there's a lot of uncertainty and people who claim they figured it all out are mostly spewing nonsense.
Maybe it's too soon to say that autonomous LLM agents are the wave of the future and always will be, but that's basically where I'm at.
AI code completion is awesome, but it's essentially a better Stack Overflow, and I don't remember people worrying that Stack Overflow was going to put developers out of work, so I'm not losing sleep that an improved version will.
The problem with the "agents" thing is that it's mostly hype, and doesn't reflect any real AI or model advances that makes them possible.
Yes, there's a more streamlined interface to allow them to do things, but that's all it is. You could accomplish the same by copy-and-pasting a bunch of context into the LLM before and asking it what to do. MCP and other agent-enabling data channels now allow it to actually reach out and do that stuff, but this is not in itself a leap forward in capabilities, just in delivery mechanisms.
I'm not saying it's irrelevant or doesn't matter. However, it does seem to me that as we've run out of low-hanging fruit in model advances, the hype machine has pivoted to "agents" and "agentic workflows" as the new VC-whetting sauce to keep the bubble growing.
AI is glorified autocomplete. Look at what happens when AI tries its hand at writing legal briefs, and you'll understand why it cannot possibly replace software developers.
As with all uses of current AI (meaning generative AI LLMs) context is everything. I say this as a person who is both a lawyer and a software engineer. It is not surprising that the general purpose models wouldn't be great at writing a legal brief -- the training data likely doesn't contain much of the relevant case law because while it is theoretically publicly available, practicing attorneys universally use proprietary databases like Lexis and WestLaw to surface it. The alternative is spelunking through public court websites that look like they were designed in the 90s or even having to pay for case records like on PACER.
At the same time, even if you have access to proper context like if your model can engage with Lexis or WestLaw via tool-use, surfacing appropriate matches from caselaw requires more than just word/token matching. LLMs are statistical models that tend to reduce down to the most likely answer. But, typically, in the context of a legal brief, a lawyer isn't attempting to find the most likely answer or even the objectively correct answer, they are trying to find relevant precedent with which they can make an argument that supports the position they are trying to advance. An LLM by its nature can't do that without help.
Where you're right, then, is that law and software engineering have a lot in common when it comes to how effective baseline LLM models are. Where you're wrong is in calling them glorified auto-complete.
In the hands of a novice they will, yes, generate answers that are plausible but mostly incorrect, or technically correct but unusable in some way. Properly configured with access to appropriate context, in the hands of an expert who understands how to communicate what they want the tool to give them? Oh, that's quite a different matter.
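A sketch of what "properly configured" might mean in practice - a hypothetical search_caselaw tool description handed to a function-calling model. The name, fields, and schema style here are invented for illustration, not any particular vendor's API:

    // A hypothetical tool definition in the style of common function-calling
    // APIs (names, fields, and wording invented for illustration). The model
    // never "knows" the case law; it only learns how to ask a real research
    // database for it, on the expert's terms.
    const searchCaselaw = {
      name: "search_caselaw",
      description:
        "Search a licensed case-law database (e.g. an internal Lexis/Westlaw " +
        "integration). Returns citations and headnotes, never finished analysis.",
      parameters: {
        type: "object",
        properties: {
          query: { type: "string", description: "The issue, framed as a legal question" },
          jurisdiction: { type: "string", description: "e.g. '9th Cir.' or 'Cal.'" },
          favoring: {
            type: "string",
            enum: ["movant", "respondent"],
            description: "Which side the precedent should support",
          },
        },
        required: ["query", "jurisdiction", "favoring"],
      },
    } as const;

The last parameter is the point: the expert encodes "find precedent that supports my argument," which is exactly the part a bare LLM won't do on its own.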
Look, maybe I’m saying the quiet part out loud, but if software engineering isn’t where the money is at anymore, and those jobs go away, I’d be competent at something else. I’m a smart person. I work hard. I’m confident I’d be able to displace someone who isn’t as smart as me and just “take” their job.
There’s going to be a need for smart people with a good work ethic unless literally everyone loses their jobs, and at that point we’re living past the singularity event horizon as far as I’m concerned and all bets are off.
Why don't Tech CEOs and CTOs see how AI is going to "disrupt" their jobs? If anyone can write code, why will I hire your company to create the software I use?
I also see the non-development jobs as much more in peril from AI; I can go generate marketing copy, HR policies, and project plans of comparable (or better!) quality today.
> It’s easy to want to keep working how we’ve always worked
There is no “how we’ve always worked”; there was no steady state. Constant evolution and progressive automation of the simpler parts has been the norm for software development forever.
> with hope of getting back to pre 2022 levels of software hiring and income. AI stands in the way of that.
It doesn't, though. Productivity multipliers don't reduce demand for the affected field or income in it. (Tight money policies and economic slowdowns, especially when they co-occur, do, though, especially in a field where much of the demand and high income levels are driven by speculative investment, either in startups or new ventures by established firms.)
it has devalued the labor. i scoped a contract but they want me to do it in less time now that we have AI. this lets them pay me less for the same work. and AI CEOs sell it as "society will do less work" but instead now i'm expected to do more work because the work takes less time. same as it ever was with technology advances.

That's why the sentiment among most people and also HN is highly negative against AI. If you're reading this you likely have that bias.
There are also a large number of people who have a deep-seated philosophical objection to the entire project of AI. It's not just about their job, it's about their sense of who and what they are as a human being, their soul or whatever. They will insist that AIs do not think or know anything, no matter what evidence there is to the contrary.
Everyone has an agenda. Similarly, groups like r/singularity and the rationalists have spent years predicting the oncoming advent of a machine god, and are desperately reading too much into every LLM advancement.
(Note: This is an American take, and uses software jobs as the primary example, but a lot of this applies to other jobs)
Yes, well said - with one caveat: "Money and social status" is subtly but crucially different from "livelihood". You correctly identify the group of people whose lives are affected, but lump those people along with CEOs and AI companies under the umbrella of "money and social status" in your summarization, which maybe undersells the role of the masses in this equation.
The software job - traditionally one of the few good career options remaining for a large chunk of Americans - is falling. There are many different reasons, and AI, while not the apocalypse, is a small but crucial part of it. We need to get over the illusion that a large piece of that fall consists of wildly luxurious incomes reducing to simply cushy incomes - "boo hoo", we say, sarcastically. But the majority of software folks are going from "comfortable" to "unhappy but livable", while some are going from "livable" to "not livable", while others are no longer employed at all. There were already too many people across the job spectrum in these buckets, and throwing another gigantic chunk of citizens in there is going to eventually cause big, bad things to happen.
We need to start giving a shit about our citizens, and part of that is recognizing that just because something disruptive is inevitable doesn't mean the effect isn't devastating, or that we should do absolutely nothing about the situation. Another part of that is avoiding the implication that the average person can just successfully change careers without enormous suffering. We can ease, assist, REGULATE (which the party in power would like to make illegal), etc. It's important to understand that none of that means "stopping" AI or something ridiculous like that.
We need to start giving a shit about our citizens.
A third time: We need to start giving a shit about our citizens.
And sorry, most of this was not directed personally at you. Just the first note about your wording.
The key is to look at the long-term structural changes the industry is going through, and whether AI helps or hinders them.
In general, the industry has been making huge efforts to push errors from runtime to compile time. If you imagine the points where we can catch errors laid out from left to right, we have the following:

Caught by: Compiler -> code review -> tests -> runtime checks -> 'caught' in prod
The industry is trying to push errors leftwards. Rust, heavier review, safety in general - it's all about cutting down costs by eliminating expensive errors earlier in the production chain. Every industry does this; it's much less costly to catch a defective oxygen mask in the factory than when it sets a plane on fire. It's also better to catch a defective component in the design phase than when you're doing tests on it.
AI is all about trying to push these errors rightwards. The only way it can save engineer time is if it goes through inadequate testing, validation, and review. 90% of the complexity of programming is building a mental model of what you're doing and ensuring that it meets the spec of what you want to do. A lot of that work is currently pure mental work with no physical component - we try to offload it increasingly to compilers in safe languages, and add tests and review to minimise the slippage. But even in a safe language, a very high amount of mental work is still required to make sure that everything is correct. Tests and review are a stopgap to try to cover the fallibility of the human brain.
So if you chop down on that critical mental work by using something probabilistically correct, you're introducing errors that will be more costly down the line. It'll be fine in the short term, but in the long term it'll cost you more money. That's the primary reason why I don't think AI will catch on - it's short-termist thinking from people who don't understand what makes software complex to build, or how to actually produce software that's cheap in the long term. It's also exactly the same reason that Boeing is getting its ass absolutely handed to it in the aviation world. Use AI if you want to go bankrupt in 5 years but be rich now
> It'll be fine in the short term, but in the long term it'll cost you more money. That's the primary reason why I don't think AI will catch on - it's short-termist thinking from people who don't understand what makes software complex to build, or how to actually produce software that's cheap in the long term. It's also exactly the same reason that Boeing is getting its ass absolutely handed to it in the aviation world. Use AI if you want to go bankrupt in 5 years but be rich now
I think your analysis is sound from a technical perspective, but your closing statement is why AI is going to be mass adopted. The people who want to be rich now and don't care about what will happen in 5 years have been calling the shots for a very long time now, and as much as we technical folks insist this can't possibly keep going on forever, it's probably not going to stop sometime soon.
I’m not sure this follows, as SOTA LLMs are pretty good at writing Rust, so wouldn’t this also make it easier for codebases to move leftward in your analogy? For example, I was resistant to using Rust for a lot of things because (a) the community is somewhat annoying and pedantic, even by software engineering standards, and (b) the overhead of getting colleagues up to speed on Rust code was too much of a time suck. LLMs solve both those problems, and we’re now migrating lots of stuff to Rust; my colleagues can ask lots of questions (of Google Gemini Pro 2.5) without burdening anyone or being met with disdain, and seem generally more curious and positive about these moves/Rust overall.
> it's short-termist thinking from people who don't understand what makes software complex to build
Ironically you don't need AI to see this pattern. Maybe AI makes it a little bit more obvious who's thinking long term and who's not (both at the top and in the trenches)
> Use AI if you want to go bankrupt in 5 years but be rich now
Or, as some would put it "Use AI if you want to be rich now, exit, and have someone else go bankrupt in 5 years"
You are looking at LLMs for code generation exclusively, but that is not the only application within software engineering.
In my company some people are using LLMs to generate some of their code, but more are using them to get a first code review, before requesting a review by their colleagues.
This helps get the easy/nitpicky stuff out of the way and thereby often saves us one feedback+fix cycle.
Examples would be "you changed this unit test, but didn't update the unit test name", "you changed this function but not the doc string", or "if you reorder these if statements you can avoid deep nesting".
Nothing groundbreaking, but nice things.
We still review like we did before, but can often focus a little more on the "what" instead of the "how".
In this application, the LLM is kind of like a linter with fuzzy rules. We didn't stop reviewing code just because many languages come with standard formatters nowadays, either.
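As a rough sketch of that first-pass step (assuming Node 18+ for global fetch and an OpenAI-compatible chat-completions endpoint; the URL and model name below are placeholders, not a recommendation of any vendor):

    // pre-review.ts - send the staged diff to an LLM for a first-pass review.
    // LLM_URL and LLM_MODEL are placeholders for whatever you actually run.
    import { execSync } from "node:child_process";

    const diff = execSync("git diff --staged", { encoding: "utf8" });

    const prompt = [
      "You are a nitpicky but polite first-pass code reviewer.",
      "Flag renamed tests, stale doc strings, and avoidable deep nesting.",
      "Leave the 'what' (product and design questions) to the human reviewer.",
      "",
      diff,
    ].join("\n");

    const res = await fetch(process.env.LLM_URL ?? "http://localhost:8080/v1/chat/completions", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: process.env.LLM_MODEL ?? "placeholder-model",
        messages: [{ role: "user", content: prompt }],
      }),
    });

    const body = await res.json();
    console.log(body.choices?.[0]?.message?.content ?? "(no review produced)");

Run it before opening the merge request; the human reviewer still sees the same diff afterwards, just with fewer nits left in it.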
While the whole code generation aspect of AI is all the rage right now (and to quote the article):
> Focus on tangible changes in areas that you care about that really do seem connected to AI
So while I don't disagree with you at all, in terms of AI being a bubble, none of that is why the tech is being so hyped up. The current speculative hype push is being driven by two factors:
1. The promise that AI will replace most if not all developers
2. Alternatively, that AI will turn every developer into a 10-100x developer
My personal opinion is that it'll end up being one of many tools that's situationally useful, eg you're 100% right in that having it as an additional code review step is a great idea. But the amount of money being pumped into the industry isn't enough to sustain mild use cases like that and that isn't why the tech is being pushed. The trillions of dollars being dumped into improving clang tidy isn't sustainable if that's the end use case
If AI tools allow for better rapid prototyping, they could help catch "errors" in the conception and design phases. I don't know how useful this actually is, though.
One of the problems with using AI for prototyping (or just in general), is that the act of creating the prototype is what's valuable, not the prototype itself. You learn lessons in trying to build it that you use to build the real product. Using the AI to skip the learning step and produce the prototype directly would be missing the point of prototyping at all
That's true up to a certain threshold on 'probabilistically correct', right? At a certain number of 9s, it's fine. And increasingly I use AI to help ask me questions, refine my understanding of problem spaces, do deep research on existing patterns or trends in a space and then use the results as context to have a planning session, which provides context for architecture, etc.
So, I don't know that the tools are inherently rightward-pushing
The problem is, given the inherent limitations of natural language as a format to feed to an AI, it can never have enough information to be able to solve your problem adequately. Often the constraints of what you're trying to solve only crop up during the process of trying to solve the problem itself, as it was unclear that they even existed beforehand
An AI tool that could have a precise enough specification fed into it to produce the result that you wanted with no errors, would be a programming language
I don't disagree at all that AI can be helpful, but there's a huge difference between using it as a research tool (which is very valid), and the folks who are trying to use it to replace programmers en masse. The latter is what's driving the bubble, not the former
This has been a huge frustration for me, but the wild thing is that we've built up so many tools over time that help humans only for AI coding tools to wild west it and not use them. The best AI coding tools will read docs websites, terminal error messages, write/run tests, etc. But we have so many better tools that none of them seem to use:
* profilers
* debuggers
* linters
* static analyzers
* language server protocol
* wire protocol analyzers
* decompilers
* call graph analyzers
* database structure crawlers
In the absence of models that can do perfect oneshot software engineering, we're gonna have to fall back on well-integrated tool usage, and nobody seems to do that well yet.
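For what it's worth, wiring even one of these in is not much code. A minimal sketch, assuming a TypeScript repo with eslint installed, that turns linter output into structured data an agent can act on (the Finding shape is just an illustration):

    // lint-tool.ts - expose an existing linter to an agent as structured data
    // instead of making it squint at raw terminal text.
    import { execFileSync } from "node:child_process";

    export interface Finding {
      file: string;
      line: number;
      rule: string | null;
      message: string;
    }

    export function runEslint(paths: string[]): Finding[] {
      let stdout: string;
      try {
        // eslint exits non-zero when it finds problems; we still want the JSON.
        stdout = execFileSync("npx", ["eslint", "--format", "json", ...paths], {
          encoding: "utf8",
        });
      } catch (err: any) {
        stdout = err.stdout ?? "[]";
      }
      const results = JSON.parse(stdout) as Array<{
        filePath: string;
        messages: Array<{ line: number; ruleId: string | null; message: string }>;
      }>;
      return results.flatMap((r) =>
        r.messages.map((m) => ({ file: r.filePath, line: m.line, rule: m.ruleId, message: m.message }))
      );
    }

The same pattern works for profilers, analyzers, or LSP servers that can emit machine-readable output; the hard part is doing it consistently, not the plumbing.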
I think a lot of these use cases for AI are incidental byproducts of the actual goal, which is to replace software developers. They're trying to salvage some kind of utility. Because I agree that the AI tools in use are marginal improvements, or downgrades in a lot of cases
I've heard people say they use AI agents to set up a new project with git. Just use TortoiseGit or something; it's free and takes one click - it's just using AI for the sake of it.
Vibe coding pushes errors rightward, but using AI to speed up typing or summarizing documentation doesn’t. Vibe coding will fail, but that doesn’t mean using AI to code will fail. You’re looking at one (admittedly stupid) use case and generalizing too hastily.
If I have an LLM fix a bug where it gets the feedback from the type checker, linter and tests in realtime, no errors were pushed rightward.
It’s not a free lunch though. I still have to refactor afterwards or else I’ll be adding tech debt. To do that, I need to have an accurate mental model of the problem. I think this is where most people will go wrong. Most people have a mindset of “if it compiles and works, it ships.” This will lead to a tangled mess.
Basically, if people treat AI as a silver bullet for dealing with complexity, they’re going to have a bad time. There still is no silver bullet.
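A rough sketch of that feedback loop under those assumptions - tsc --noEmit and npm test as the existing gates, with askLlmForPatch and applyPatch as hypothetical stand-ins for whatever agent tooling is in use:

    // fix-loop.ts - only accept a model-proposed patch once the same gates a
    // human would face (type checker, tests) come back clean.
    import { execSync } from "node:child_process";

    function run(cmd: string): { ok: boolean; output: string } {
      try {
        return { ok: true, output: execSync(cmd, { encoding: "utf8", stdio: "pipe" }) };
      } catch (err: any) {
        return { ok: false, output: `${err.stdout ?? ""}${err.stderr ?? ""}` };
      }
    }

    export async function fixUntilGreen(
      bugReport: string,
      askLlmForPatch: (feedback: string) => Promise<string>, // hypothetical agent call
      applyPatch: (patch: string) => void,                   // hypothetical patch applier
      maxRounds = 5,
    ): Promise<boolean> {
      let feedback = bugReport;
      for (let round = 0; round < maxRounds; round++) {
        applyPatch(await askLlmForPatch(feedback));
        const typecheck = run("npx tsc --noEmit");
        const tests = typecheck.ok ? run("npm test") : typecheck;
        if (typecheck.ok && tests.ok) return true; // nothing pushed rightward
        feedback = typecheck.ok ? tests.output : typecheck.output; // feed the errors back
      }
      return false; // hand it back to a human
    }

Note the loop never weakens the gates; it only decides who has to satisfy them. The refactoring afterwards is still on you.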
Is this true? Most software devs would like to. But I think business is more interested in speed, which pushes errors to the right. Which seems to be more profitable in most software, even stuff that's been around for a decade (or decades).
nah, in general there's a serious industry-wide push for this. software testing is changing (involve QAs early so they can help with the spec so when they get the software they know what to test against), agile is about delivering small valuable parts of the product as soon as possible, VC investing (lean startups!) is about testing business ideas as soon as possible, etc.
it's all part of the shift left ideology. (same with security, you cannot really add it later, same with GDPR and other data protection stuff, you cannot track consent for various data processing purposes after you already have a lot of users onboarded - unless you want to do the sneaky very not nice "ToS updated, pay or die, kthxbai" thing [which is what Meta did], etc.)
... of course this usually means that many times people want to go from the "barely idea as a Figma proto" to "mature product maintained by distributed high-velocity teams" without realizing that there are trade-offs.
shift left makes good business and engineering sense and all, as it allows you to focus on the things that work, but it requires more iterations to go from that to something mature.
I disagree entirely, and I can convince you I'm right with one sentence: More lines of JS/TS are written by AI than lines of Rust are written at all. We don't have the data to assert this as true, but I think the vast majority of people would agree with that statement.
This statement being true disproves the statement "the industry has been making huge efforts to push errors from runtime, to compile time." The industry is not a monolith. Different actors have different, individualized goals.
it's absolutely not a problem that people are writing more JS/TS than Rust, if that Rust lasts for decades and the JS/TS gets thrown out in 1 year.
there's a lot of shitty C being written still around the world yet the Rust that goes into the kernel has real value long term.
there's an adoption cycle. Rust is probably already well over the hype peak, and now it's slowly climbing upward to its "plateau of productivity".
(and I would argue that yes there are pretty good things happening nowadays in the industry. for many domains efficient and safe libraries, frameworks, and platforms are getting to be the norm. for example see how Blender became a success story. how deterministic testing is getting adopted for distributed databases.)
We still run tests (before code review, obviously; don't know why it's listed like this).
We still do QA at runtime.
I feel like the anti-AI people are the ones who actually treat AI as magic, not the AI users. AI doesn't magically prevent you from doing the things that helped you in the pre-AI era.
This is the best and most enlightening take I've heard in a good while.
I have articulated this to friends and colleagues who are on the LLM hype train somewhat differently, in terms of the unwieldiness of accumulated errors and entropy, disproportionate human bottlenecks when you do have to engage with the code but didn't write any of it and don't understand it, etc.
However, your formulation really ties a lot of this together better. Thanks!
Exabytes of code are being written in Python and JS though… I don’t think that fits with your narrative that everything is being pushed to compile-time. C#, Java and Go remain popular sure but have they grown that much relative to other languages? Rust is being adopted primarily in projects that used to be in C or C++ if I’m not mistaken.
Those languages have been going through exactly the same evolution though, like the JS -> typescript migration is one of the most direct practical examples of this imo
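For example (a minimal TypeScript sketch, assuming "strict": true in tsconfig): the missing-field bug that plain JS only surfaces at runtime is rejected by tsc before review or tests ever see it.

    interface User { name: string; email?: string }

    // In plain JS, `user.email.split("@")` only blows up when a user without an
    // email shows up in production. With "strict": true, tsc rejects that line at
    // compile time ("'user.email' is possibly 'undefined'"), so the guard below is
    // forced on you before review, tests, or runtime ever see the bug.
    function emailDomain(user: User): string {
      return user.email ? user.email.split("@")[1] : "";
    }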
thanks for articulating so nicely what needs to be said in this debate.
Pushing errors leftward vs rightwards is such a nice metaphor, not to mention the metaphor on mental models.
Also, your comment (later in this thread) on why natural language is unable to describe the problem adequately is very nice: sometimes constraints are only discovered during the solution process, and a specification precise enough to describe the problem adequately is what we call a programming language - i.e., for natural language to adequately describe a problem, it has to become a formal language.
Only experienced engineers who have been through failed projects will understand what you are saying; the rest of those in the grip of the AI mania will come to terms with it soon.
> I expected to find vastly differing views of what future developments might look like, but I was surprised at just how much our alums differed in their assessment of where things are today.
> We found at least three factors that help explain this discrepancy. First was the duration, depth, and recency of experience with LLMs; the less people had worked with them and the longer ago they had done so, the more likely they were to see little value in them (to be clear, “long ago” here may mean a matter of just a few months). But this certainly didn’t explain all of the discrepancy: The second factor was the type of programming work people cared about. By this we mean things like the ergonomics of your language, whether the task you’re doing is represented in model training data, and the amount of boilerplate involved. Programmers working on web apps, data visualization, and scripts in Python, TypeScript, and Go were much more likely to see significant value in LLMs, while others doing systems programming in C, working on carbon capture, or doing novel ML research were less likely to find them helpful. The third factor was whether people were doing smaller, more greenfield work (either alone or on small teams), or on large existing codebases (especially at large organizations). People were much more likely to see utility in today’s models for the former than the latter.
Anecdotal: definitely a long way to go for systems programming, non-trivial firmware, and critical systems in general. And I say this as a huge fan of LLMs in general.
I work as a firmware engineer, and while they've been of immense value in scripting especially (fuck you, PowerShell), I can only use them as a better autocomplete on our C codebase. Sometimes I'll chat with the codebase, but that's a huge hit or miss.
The value of AI is easy to see personally, concretely. But there is always a gap between concrete value in your hands and how that plays out in larger systems. The ability to work remotely could intuitively project to outsourcing of almost all knowledge work to cheaper labor markets, and yet that has only happened at the margins. The world is complex and complicated, reserve a measure of doubt.
In-person work has higher bandwidth and lower latency than remote work, so for certain roles it makes sense you wouldn't want to farm it out to remote workers. The quality of the work can degrade in subtle ways that some people find hard to work with.
Similarly, handing a task to a human versus an LLM probably comes with a context penalty that's hard to reason about upfront. You basically make your best guess at what kind of system prompt an LLM needs to do a task, as well as the ongoing context stream. But these are still relatively static unless you have some complex evaluation pipeline that can improve the context in production very quickly.
So I think human workers will probably be able to find new context much faster when tasks change, at least for the time being. Customer service seems to be the frontline example. Many customer service tasks can be handled by an LLM, but there are probably lots of edge cases at the margins where a human simply outperforms because they can gather context faster. This is my best guess as to why Klarna reversed their decision to go all-in on LLMs earlier this year.
> In-person work has higher bandwidth and lower latency than remote work, so for certain roles it makes sense you wouldn't want to farm it out to remote workers
This is just not true. Especially if your team exists within an organization that works on worldwide solutions and interacts with the rest of the world.
Remote can be so much faster and more efficient because it's decentralized by nature and it can make everyone's workflows as optimized as possible.
Just because companies push for "onsite" work to justify their downtown real estate doesn't mean it's more productive.
The people who are saying AI will replace everyone are people who don't actively deploy code anymore. People like CEOs and VPs.
People who are actively deploying code are well aware of the limitations of AI.
A good prompt with some custom context will get you maybe 80% of the way there. Then iterating with the AI will get you about 90% of the way, assuming you're a senior engineer with enough experience to know what to ask. But you will still need to do some work at the end to get it over the line.
And then you end up with code that works but is definitely not optimal.
Fred Brooks told us to "plan to throw one version away, because you will."
What he missed is that the one version that is not thrown away will have to be maintained pretty much for ever.
(Can we blame him for not seeing SaaS coming?)
What if the real value of AI was at the two sides of this:
* to very quickly build the throwaway version that is just used during demos, to gather feedback from potential customers, and see where things break?
That can probably be a speed-up of 10x, or 100x, and an incredible ROI if you avoid building a "full" version that's useless
* then you create the "proper" system the "old" way, using AI as an autocomplete on steroids (and maybe get a 1.5x, 2x speedup, etc...)
* then you use LLMs to do the things you would not do anyway for lack of time (testing, docs, etc...). Here the speedup is effectively infinite, since otherwise it would not have been done at all, and it still has some value.
But the power that be will want you to start working on the next feature, by this time...
* I don't know about how LLMs would help to fix bugs
So basically, two codebase "lanes", evolving in parallel, one where the AI / human ratio is 90/10, one where it's maybe 30/70 ?
Maybe, but any time someone keeps doing mental gymnastics and theorizing that there are new forces at play, something comes out and says no, it was something very straightforward. Hammock Driven Development describes a zen, internalized way an expert does exactly what you describe, and it's nicer because you don't have to pay per token. To be clear, I think this all falls again under the rubber-duck umbrella, which is fine, but seemingly impossible to design a controlled study for?
The real beneficiaries of AI in software development are senior devs who’ve had enough of boilerplate, framework switching, and other tedious low-value tasks. You cut down on the old, laborious tradition of picking through Stack Overflow for glimmers of hope.
Yet when you look beyond boilerplate code generation, it's not at all clear that LLMs increase experienced developers' productivity (even when they believe that they do): https://arxiv.org/abs/2507.09089
Edit: Hello downvoters, would love to know if you found any flawed argument, is this just because this study/comment contradicting the common narrative on HN or something else entirely?
Is there literally anything other than this single, 16-participant study that validates the idea that leveraging AI as an assistant reduces completion time in general?
Unless those participants were just complete idiots, I simply cannot square this with my last few weeks absolutely barnstorming on a project using Claude Code.
One of the problems with this study is that the field is moving so very fast.
6 months in models is an eternity. Anthropic has better models out since this study was done. Gemini keeps getting better. Grok / xAI isn’t a joke anymore. To say nothing of the massive open source advancements released in just the last couple weeks alone.
This is all moving so fast that one already out of date report isn’t definitive. Certainly an interesting snapshot in time, but has to be understood in context.
Hackernews needs to get better on this. The head in the sand vibe here won’t be tenable for much longer.
Since you asked, I downvoted you for asking about why you're being downvoted. Don't waste brain cells on fake internet points - it's bad for your health.
AI is more of a force multiplier than a replacement. If you rated programmers from 0 to 100, AI can take you from 0 to 80, but it can't take you from 98 to 99.
I'd love to record these AI CEOs statements about what's going to happen in the next 24 months and look back at that time -- see how "transformed" the world is then.
My guess is more of the same (i.e. mostly crap), but faster.
We still create software largely the same as we did in the 1980s. Developers sitting at keyboards writing code, line by line. This despite decades of research and countless attempts at "expert systems", "software through pictures" and endless attempts at generating code from various types of models or flowcharts or with different development methodologies or management.
LLMs are like scaffolding on steroids, but aren't fundamentally transforming the process. Developers still need the mental model of what they are building and need to be able to verify that they have actually built it.
> We still create software largely the same as we did in the 1980s. Developers sitting at keyboards writing code, line by line.
That's because the single dimension of code fits how the computer works and we can project any higher order dimension on it. If you go with 2 dimensions like pictures, it no longer fits the computer model, and everything becomes awkward with the higher dimensions of the domain. The only good 2d representation is the grid (spreadsheet, relation dbs, parallel programming..) and even they can be unwieldy.
The mental model is the higher-dimensional structure that we project onto lines of code. Having an LLM generate it is like throwing paint on a canvas and hoping for the Mona Lisa.
While I read this article, Claude code was fixing a bug for me.
I agree with Cal that we basically don’t know what happens next. But I do know that the world needs a lot more good software and expanding the scope of what good software professionals can do for companies is positive.
I wonder if you would say this if, say, in a year you're laid off because AI got good enough to write your code. Would you be happy that there is better software in the world at the expense of your job?
I’m optimistic - perhaps naively - about my ability to retool within work.
10 years ago, I was building WordPress websites for motivational speakers. Today, I build web apps for the government. Certainly in 10 years we will be in a different place than we are today.
Your argument, taken in a broader sense, would have us tending to corn fields by hand to avoid a machine taking our job.
If you're so scared then you were probably too sheltered thinking you'd have "guaranteed security" in a shitty unfair world.
Some of us weren't that lucky and always had to stay creative, even when we lacked resources that privileged 1st world people had.
It's a shift. An exciting and also, yes, dangerous one, but if you focus on being able to produce true value you are bound to be able to make a living out of it. Whether you're employed by someone or create a company yourself.
Find something you love and be scared at times, but don't let it stop you and you'll succeed.
You have Tech CEOs that want work done cheaper and AI companies willing to sell it to them. They will give you crazy alarming narratives around AI replacing all developers, etc.
Then you have tech employees who want to believe they’re irreplaceable. It’s easy to want to keep working how we’ve always worked with hope of getting back to pre 2022 levels of software hiring and income. AI stands in the way of that.
I don’t think people are doing this intentionally all the time. But there is so much money and social stature from all these groups on the line, there are very few able to give a neutral, disinterested perspective on a topic like AI coding.
And to add to that, reasoned, boring, thoughtful middle of the road takes are just naturally going to get fewer eyeballs than extreme points of view.
Or, am I a "devops engineer"?
or.. a "SRE"...
or... am I a platform engineer?
You know what, I don't know.
What I do know, is that people keep trying to make my job obsolete, only to hire me under a different title for more money later. The practices and the tools are the same too, yet to justify it to themselves they'll make shit up about how the work is "actually, a material difference from what came before" (except, it's really not).
I'm still employable after 20 years of this.
I'm not a software engineer, and I'm not a tech CEO - I don't care, but people have been trying to replace me my whole career (even myself: "Automate yourself out of a job” is a common sysadmin mantra after all). Yet, somehow, I'm still here.
Not just sysadmin. I've been automating the hell out of tedious mundane tasks that are done by error prone humans, and only become more error prone as they get tired/bored of these mundane tasks. The automation essentially just becomes a new tool/app for the humans to use.
At this point of my experience doing this, employees scared of automation are probably employees that aren't very good at their job. Employees that embrace this type of automation are the ones that tend to be much better at their job.
- Infrastructure is very often fundamentally disconnected from the product roadmap and thus is under constant cycles of cost control
- The culture of systems administration does not respond well to change or evolution. This is a function that is built around shepherding systems - whether at scale or as pets.
Long way of saying the gamut isn't that corporations will ever be free of systems administration any more than they'll be freed from software engineering. It is far easier to optimize software than it is to optimize infrastructure though, which is why it feels like someone is coming after you. To make matters more complex, a lot of software today overlaps with the definition of infrastructure.
Some of the easy jobs will be taken away and your job is not under threat. Right? But some of the people who were doing low skilled jobs will grow to compete with you. Less supply more demand. Either the pay will come down or there are very few jobs. Fingers crossed.
Deleted Comment
My day used to be making sure desktops worked and we had a repeatable process to make new good desktops out of all the complex client software they needed.
Then I made sure servers got upgraded and patched and taken care of on a day-to-day level, although it was still someone else's job to keep desktops running. At home I compiled my own kernels and used tarballs to install and update packages. Desktop hardware support was iffy in Linux.
Then I jumped to borg and tupperware and kubernetes where hardware never mattered and it almost didn't matter what clients had because the browser they used auto-updated. At home I switched to distros where package management was automatic and rarely, if ever, broke.
I don't even know the hostnames or the network addresses of the hardware that runs my services, and AWS or GCP SREs probably rarely need to know either. Now I care about an abstract thing called a service that is instrumented with logs, metrics, and traces that would put the best local development tools of 20 years ago to shame. CI/CD and infrastructure as code pipelines actually did automate away many of the checklist-style sysadmin work of the past. At home I could run Talos and Ceph and Crossplane if I wanted to but so far I've dragged the old days of individual hosts along mostly for nostalgia.
I expect to eventually end up caring about systems at an even more abstract level once something like Crossplane becomes as universal as Terraform and GitHub actions. They'll probably run on something like web assembly on bare metal because no one actually cares what's underneath their containers if they keep working.
The stack of technology gets taller and more abstract and as it does so the job of caring about the lower layers gets automated and the lower layers get, if not simplified, standardized to the point that automation is more reliable than human intervention.
Humans will only get squeezed out of the loop when superhuman artificial intelligence arrives and our abstract design and management of systems becomes less reliable than the automation. Then hopefully we get a nice friendly button to push for more automatically-human-aligned utility.
EDIT: That's not to say that the lower levels can be automatically designed; not yet at least. Eventually once AI is good enough at formal design then quite likely. We still need low-level software engineering to keep building the stack but it is vastly more commoditized (the one open source developer in the xkcd cartoon keeping 99% of the world's infrastructure running on a 20-year old tool/utility)
Eventually, things will stabilize to a point where we will know what the boundaries of all the new LLM based tooling is good for, and where humans add lots of value to that.
That will then drive a maximalist hiring spree, as the more people you have working at an increase in velocity, the faster you ship, and for once, perhaps the quality of code won't substantially decrease, assuming LLMs improve another few leaps in output quality and engineering workflows adjust in a normalized way.
Thats the hopeful side of the equation anyway.
I feel incredibly bad about customer oriented jobs, like white glove customer service. I already saw a trend (especially since 2020 but certainly before as well) where these AI chat bots and AI support lines will decimate that job category. These are pretty common white collar jobs, lots of people even in our industry got their starts on support lines.
That sounds like a contradiction to Brooks's law, which I don’t see being invalidated by AI tooling.
Those are groups defined by something other than actual LLM usage, which makes them both not particularly interesting. What is interesting:
You have people who've tried using LLMs to generate code and found it utterly useless.
Then you have people who've tried using LLMs to generate code and believe that it has worked very well for them.
AI can generate lots of code very quickly.
AI does not generate code that follows taste and or best practices.
So in cases where the task is small, easily plannable, within the training corpus, or for a project that doesn't have high stakes it can produce something workable quickly.
In larger projects or something that needs maintainability for the future code generation can fall apart or produce subpar results.
It really doesn't help that AI companies are hype machines full of salesmen trying to hype up their product so you buy it. There have been a lot of amazing AI "success stories" that don't hold up under scrutiny.
Whether you’ll get the high IQ or low IQ LLM on the next suggestion is a crapshoot; how much consideration you give to either outcome (focus on the random instances of brilliance, or the constant stream of bullshit) drives the final perception.
Deleted Comment
AI code completion is awesome, but it's essentially a better Stack Overflow, and I don't remember people worrying that Stack Overflow was going to put developers out of work, so I'm not losing sleep that an improved version will.
Yes, there's a more streamlined interface to allow them to do things, but that's all it is. You could accomplish the same by copy-and-pasting a bunch of context into the LLM before and asking it what to do. MCP and other agent-enabling data channels now allow it to actually reach out and do that stuff, but this is not in itself a leap forward in capabilities, just in delivery mechanisms.
I'm not saying it's irrelevant or doesn't matter. However, it does seem to me that as we've run out of low-hanging fruit in model advances, the hype machine has pivoted to "agents" and "agentic workflows" as the new VC-whetting sauce to keep the bubble growing.
At the same time, even if you have access to proper context like if your model can engage with Lexis or WestLaw via tool-use, surfacing appropriate matches from caselaw requires more than just word/token matching. LLMs are statistical models that tend to reduce down to the most likely answer. But, typically, in the context of a legal brief, a lawyer isn't attempting to find the most likely answer or even the objectively correct answer, they are trying to find relevant precedent with which they can make an argument that supports the position they are trying to advance. An LLM by its nature can't do that without help.
Where you're right, then, is that law and software engineering have a lot in common when it comes to how effective baseline LLM models are. Where you're wrong is in calling them glorified auto-complete.
In the hands of a novice they will, yes, generate plausible but mostly incorrect or technically correct but unusable in some way answers. Properly configured with access to appropriate context in the hands of an expert who understands how to communicate what they want the tool to give them? Oh that's quite a different matter.
There’s going to be a need for smart people with a good work ethic unless literally everyone loses their jobs, and at that point we’re living past the singularly event horizon as far as I’m concerned and all bets are off.
I also see the non-development jobs as much more in peril from AI; I can go generate marketing copy, HR policies, and project plans of comparable (or better!) quality today.
There is no “how we’ve always worked”; there was no steady state. Constant evolution and progressive automation of the simpler parts has been the norm for software development forever.
> with hope of getting back to pre 2022 levels of software hiring and income. AI stands in the way of that.
It doesn't, though. Productivity multipliers don't reduce demand for the affected field or income in it. (Tight money policies and economic slowdowns, especially when they co-occur, do, though, especially in a field where much of the demand and high income levels are driven by speculative investment, either in startups or new ventures by established firms.)
That’s why the sentiment among most people and also HN is highly negative against AI. If you’re reading this you likely have that bias.
Dead Comment
Yes, well said - with one caveat: "Money and social status" is subtly but crucially different from "livelihood". You correctly identify the group of people whose lives are affected, but lump those people along with CEOs and AI companies under the umbrella of "money and social status" in your summarization, which maybe undersells the role of the masses in this equation.
The software job - traditionally one of the few good career options remaining for a large chunk of Americans - is falling. There are many different reasons, and AI, while not the apocalypse, is a small but crucial part of it. We need to get over the illusion that a large piece of that fall consists of wildly luxurious incomes reducing to simply cushy incomes - "boo hoo", we say, sarcastically. But the majority of software folks are going from "comfortable" to "unhappy but livable", while some are going from "livable" to "not livable", while others are no longer employed at all. There were already too many people across the job spectrum in these buckets, and throwing another gigantic chunk of citizens in there is going to eventually cause big, bad things to happen.
We need to start giving a shit about our citizens, and part of that is avoiding the implication that just because something disruptive is inevitable, doesn't mean the affect isn't devastating and we should just do absolutely nothing about the situation. Another part of that is avoiding the implication that the average person can just successfully change careers without enormous suffering. We can ease, assist, REGULATE (which the party in power would like to make illegal), etc. It's important to understand that none of that means "stopping" AI or something ridiculous like that.
We need to start giving a shit about our citizens.
A third time: We need to start giving a shit about our citizens.
And sorry, most of this was not directed personally at you. Just the first note about your wording.
In general, the industry has been making huge efforts to push errors from runtime, to compile time. If you imagine points where we can catch errors being laid out from left to right, we have the following:
Caught by: Compiler -> code review -> tests -> runtime checks -> 'caught' in prod
The industry is trying to push errors leftwards. Rust, heavier review, safety in general - its all about cutting down costs by eliminating expensive errors earlier in the production chain. Every industry does this, its much less costly to catch a defective oxygen mask in the factory, than when it sets a plane on fire. Its also better to catch a defective component in the design phase, than when you're doing tests on it
AI is all about trying to push these errors rightwards. The only way it can save engineer time is if it goes through inadequate testing, validation, and review. 90% of the complexity of programming is building a mental model of what you're doing and ensuring that it meets the spec of what you want to do. A lot of that work is currently pure mental work with no physical component - we increasingly offload it to compilers in safe languages, and add tests and review to minimise the slippage. But even in a safe language, a very high amount of mental work is still required to make sure everything is correct. Tests and review are a stopgap to cover the fallibility of the human brain.
So if you cut down on that critical mental work by using something that's only probabilistically correct, you're introducing errors that will be more costly down the line. It'll be fine in the short term, but in the long term it'll cost you more money. That's the primary reason why I don't think AI will catch on - it's short-termist thinking from people who don't understand what makes software complex to build, or how to actually produce software that's cheap in the long term. It's also exactly the same reason Boeing is getting its ass absolutely handed to it in the aviation world. Use AI if you want to go bankrupt in 5 years but be rich now.
I think your analysis is sound from a technical perspective, but your closing statement is why AI is going to be mass adopted. The people who want to be rich now and don't care about what will happen in 5 years have been calling the shots for a very long time now, and as much as we technical folks insist this can't possibly keep going on forever, it's probably not going to stop anytime soon.
Ironically you don't need AI to see this pattern. Maybe AI makes it a little bit more obvious who's thinking long term and who's not (both at the top and in the trenches)
> Use AI if you want to go bankrupt in 5 years but be rich now
Or, as some would put it "Use AI if you want to be rich now, exit, and have someone else go bankrupt in 5 years"
In my company some people are using LLMs to generate some of their code, but more are using them to get a first code review, before requesting a review by their colleagues.
This helps get the easy/nitpicky stuff out of the way and thereby often saves us one feedback+fix cycle.
Examples would be "you changed this unit test, but didn't update the unit test name", "you changed this function but not the doc string", or "if you reorder these if statements you can avoid deep nesting". Nothing groundbreaking, but nice things.
We still review like we did before, but can often focus a little more on the "what" instead of the "how".
In this application, the LLM is kind of like a linter with fuzzy rules. We didn't stop reviewing code just because many languages come with standard formatters nowadays, either.
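A rough sketch of how such a pre-review pass can be wired up (the `review-llm` command below is a hypothetical stand-in for whatever model CLI or API a team actually uses, not a real tool):

```rust
// Rough sketch of an LLM "pre-review" pass over the current branch's diff.
// `review-llm` is a placeholder, not a real command.

use std::io::Write;
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // Collect the diff that would go up for human review anyway.
    let diff = Command::new("git")
        .args(["diff", "origin/main...HEAD"])
        .output()?;

    // Hand it to the model with a narrow, linter-like brief.
    let mut llm = Command::new("review-llm") // hypothetical command
        .arg("--prompt")
        .arg("Flag nitpicks only: stale names, outdated docstrings, avoidable nesting.")
        .stdin(Stdio::piped())
        .spawn()?;

    llm.stdin
        .as_mut()
        .expect("stdin was requested above")
        .write_all(&diff.stdout)?;
    llm.wait()?;
    Ok(())
}
```

The point is only the shape of the workflow: the model does a nitpick pass on the same diff the humans will review, before they see it.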
The whole code generation aspect of AI is all the rage right now, and to quote the article:
> Focus on tangible changes in areas that you care about that really do seem connected to AI
The hype rests on two promises:
1. That AI will replace most if not all developers
2. Alternatively, that AI will turn every developer into a 10-100x developer
My personal opinion is that it'll end up being one of many tools that are situationally useful - e.g. you're 100% right that having it as an additional code-review step is a great idea. But mild use cases like that can't sustain the amount of money being pumped into the industry, and they aren't why the tech is being pushed. Trillions of dollars dumped into improving clang-tidy isn't sustainable if that's the end use case.
Conception -> design -> compiler -> code review ...
If AI tools allow for better rapid prototyping, they could help catch "errors" in the conception and design phases. I don't know how useful this actually is, though.
So, I don't know that the tools are inherently rightward-pushing
An AI tool that could have a precise enough specification fed into it to produce the result you wanted with no errors would be a programming language.
I don't disagree at all that AI can be helpful, but there's a huge difference between using it as a research tool (which is very valid), and the folks who are trying to use it to replace programmers en masse. The latter is what's driving the bubble, not the former
* profilers
* debuggers
* linters
* static analyzers
* language server protocol
* wire protocol analyzers
* decompilers
* call graph analyzers
* database structure crawlers
In the absence of models that can do perfect oneshot software engineering, we're gonna have to fall back on well-integrated tool usage, and nobody seems to do that well yet.
I've heard people say they use AI agents to set up a new project with git. Just use TortoiseGit or something - it's free and takes one click. It's just using AI for the sake of it.
If I have an LLM fix a bug and it gets feedback from the type checker, linter, and tests in real time, no errors are pushed rightward.
It’s not a free lunch though. I still have to refactor afterwards or else I’ll be adding tech debt. To do that, I need to have an accurate mental model of the problem. I think this is where most people will go wrong. Most people have a mindset of “if it compiles and works, it ships.” This will lead to a tangled mess.
Basically, if people treat AI as a silver bullet for dealing with complexity, they’re going to have a bad time. There still is no silver bullet.
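A minimal sketch of that loop, assuming a hypothetical `propose_fix` wrapper around whatever model is in use; the compiler, linter, and test suite gate every iteration, which is what keeps errors from drifting rightward:

```rust
// Sketch of a gated fix loop: the model proposes a patch, but the compiler,
// linter, and test suite must all pass before anything goes to human review.

use std::process::Command;

/// Run a command and return its stderr as feedback if it fails.
fn gate(cmd: &str, args: &[&str]) -> Result<(), String> {
    let out = Command::new(cmd)
        .args(args)
        .output()
        .map_err(|e| e.to_string())?;
    if out.status.success() {
        Ok(())
    } else {
        Err(String::from_utf8_lossy(&out.stderr).into_owned())
    }
}

/// Placeholder: send the failure output back to the model and apply its patch.
fn propose_fix(_feedback: &str) {}

fn main() {
    for _attempt in 0..5 {
        // Run the gates in order; stop at the first failure and feed it back.
        let result = gate("cargo", &["check"])
            .and_then(|_| gate("cargo", &["clippy", "--", "-D", "warnings"]))
            .and_then(|_| gate("cargo", &["test"]));

        match result {
            Ok(()) => {
                println!("all gates green; ready for human review and refactoring");
                return;
            }
            Err(feedback) => propose_fix(&feedback),
        }
    }
    eprintln!("still failing after 5 attempts; time for a human to rebuild the mental model");
}
```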
Is this true? Most software devs would like to. But I think business is more interested in speed, which pushes errors to the right. That seems to be more profitable in most software, even stuff that's been around for a decade (or decades).
it's all part of the shift-left ideology. (Same with security - you can't really add it later. Same with GDPR and other data protection stuff - you can't track consent for various data-processing purposes after you've already onboarded a lot of users, unless you want to do the sneaky, very-not-nice "ToS updated, pay or die, kthxbai" thing [which is what Meta did], etc.)
... of course this usually means that people often want to go from "barely an idea in a Figma proto" to "mature product maintained by distributed high-velocity teams" without realizing that there are trade-offs.
shift left makes good business and engineering sense, as it allows you to focus on the things that work, but it requires more iterations to go from that to something mature.
This statement being true disproves the statement "the industry has been making huge efforts to push errors from runtime, to compile time." The industry is not a monolith. Different actors have different, individualized goals.
if that Rust is for decades and the JS/TS gets thrown out in 1 year.
there's still a lot of shitty C being written around the world, yet the Rust that goes into the kernel has real long-term value.
there's an adoption cycle. Rust is probably already well over the hype peak, and now it's slowly climbing upward to its "plateau of productivity".
(and I would argue that yes, there are pretty good things happening in the industry nowadays. For many domains, efficient and safe libraries, frameworks, and platforms are becoming the norm - see, for example, how Blender became a success story, or how deterministic testing is getting adopted for distributed databases.)
> Compiler -> code review -> tests -> runtime checks -> 'caught' in prod
With AI we still compile the code.
We still do code review.
We still run tests (before code review, obviously; don't know why it's listed like this).
We still do QA at runtime.
I feel like the anti-AI people are the ones who actually treat AI as magic, not the AI users. AI doesn't magically prevent you from doing the things that helped you in the pre-AI era.
I have articulated this to friends and colleagues who are on the LLM hype train somewhat differently, in terms of the unwieldiness of accumulated errors and entropy, disproportionate human bottlenecks when you do have to engage with the code but didn't write any of it and don't understand it, etc.
However, your formulation really ties a lot of this together better. Thanks!
Pushing errors leftward vs. rightward is such a nice metaphor, not to mention the one about mental models. Your comment later in this thread on why natural language can't describe the problem adequately is also very nice: sometimes constraints are only discovered during the solution process, and if problems could be described adequately, that description is what we call a programming language - i.e., for natural language to adequately describe a problem, it has to become a formal language.
Only experienced engineers who have been through failed projects will understand what you are saying; the rest, still in the grip of AI mania, will come to terms with it soon.
It's almost enraging that with the hype around LLMs, the development of real automatic programming AI seems to have halted.
> I expected to find vastly differing views of what future developments might look like, but I was surprised at just how much our alums differed in their assessment of where things are today.
> We found at least three factors that help explain this discrepancy. First was the duration, depth, and recency of experience with LLMs; the less people had worked with them and the longer ago they had done so, the more likely they were to see little value in them (to be clear, “long ago” here may mean a matter of just a few months). But this certainly didn’t explain all of the discrepancy: The second factor was the type of programming work people cared about. By this we mean things like the ergonomics of your language, whether the task you’re doing is represented in model training data, and the amount of boilerplate involved. Programmers working on web apps, data visualization, and scripts in Python, TypeScript, and Go were much more likely to see significant value in LLMs, while others doing systems programming in C, working on carbon capture, or doing novel ML research were less likely to find them helpful. The third factor was whether people were doing smaller, more greenfield work (either alone or on small teams), or on large existing codebases (especially at large organizations). People were much more likely to see utility in today’s models for the former than the latter.
— https://www.recurse.com/blog/191-developing-our-position-on-...
I work as a FW engineer, and while they've been of immense value for scripting especially (fuck you, PowerShell), I can only use them as a better autocomplete on our C codebase. Sometimes I'd chat with the codebase, but that's a huge hit or miss.
In-person work has higher bandwidth and lower latency than remote work, so for certain roles it makes sense you wouldn't want to farm it out to remote workers. The quality of the work can degrade in subtle ways that some people find hard to work with.
Similarly, handing a task to a human versus an LLM probably comes with a context penalty that's hard to reason about upfront. You basically make your best guess at what kind of system prompt an LLM needs to do a task, as well as the ongoing context stream. But these are still relatively static unless you have some complex evaluation pipeline that can improve the context in production very quickly.
So I think human workers will probably be able to find new context much faster when tasks change, at least for the time being. Customer service seems to be the frontline example. Many customer service tasks can be handled by an LLM, but there are probably lots of edge cases at the margins where a human simply outperforms because they can gather context faster. This is my best guess as to why Klarna reversed their decision to go all-in on LLMs earlier this year.
This is just not true, especially if your team exists within an organization that builds worldwide solutions and interacts with the rest of the world.
Remote can be so much faster and efficient because it's decentralized by nature and it can make everyone's workflows as optimized as possible.
Just because companies push for "onsite" work to justify their downtown real estate doesn't mean it's more productive.
People who are actively deploying code are well aware of the limitations of AI.
A good prompt with some custom context will get you maybe 80% of the way there. Then iterating with the AI will get you about 90% of the way, assuming you're a senior engineer with enough experience to know what to ask. But you will still need to do some work at the end to get it over the line.
And then you end up with code that works but is definitely not optimal.
What he missed is that the one version that is not thrown away will have to be maintained pretty much forever.
(Can we blame him for not seeing SaaS coming ?)
What if the real value of AI were at the two ends of this:
* to very quickly build the throwaway version that is just used during demos, to gather feedback from potential customers, and see where things break?
That can probably be a speed-up of 10x, or 100x, and an incredible ROI if you avoid building a "full" version that's useless
* then you create the "proper" system the "old" way, using AI as an autocomplete on steroids (and maybe get a 1.5x or 2x speedup, etc...)
* then you use LLMs to do the things you would not do anyway for lack of time (testing, docs, etc...) Here the speedup is infinite if you did not do it, and it had some value.
But the powers that be will want you to start working on the next feature by this time...
* I don't know about how LLMs would help to fix bugs
So basically, two codebase "lanes", evolving in parallel, one where the AI / human ratio is 90/10, one where it's maybe 30/70 ?
AI for fast accretion, human for weathering ?
Edit: Hello downvoters, I'd love to know whether you found a flawed argument, whether this is just because the study/comment contradicts the common narrative on HN, or whether it's something else entirely.
Unless those participants were just complete idiots, I simply cannot square this with my last few weeks absolutely barnstorming on a project using Claude Code.
> We do not provide evidence that:
> AI systems do not currently speed up many or most software developers
> We do not claim that our developers or repositories represent a majority or plurality of software development work
Six months is an eternity for models. Anthropic has released better models since this study was done. Gemini keeps getting better. Grok / xAI isn't a joke anymore. To say nothing of the massive open-source advancements released in just the last couple of weeks.
This is all moving so fast that one already out of date report isn’t definitive. Certainly an interesting snapshot in time, but has to be understood in context.
Hacker News needs to get better on this. The head-in-the-sand vibe here won't be tenable for much longer.
Since you asked, I downvoted you for asking about why you're being downvoted. Don't waste brain cells on fake internet points - it's bad for your health.
I'd love to record these AI CEOs statements about what's going to happen in the next 24 months and look back at that time -- see how "transformed" the world is then.
We still create software largely the same way we did in the 1980s: developers sitting at keyboards writing code, line by line. This despite decades of research and countless attempts at "expert systems", "software through pictures", and generating code from various types of models or flowcharts, or with different development methodologies or management.
LLMs are like scaffolding on steroids, but aren't fundamentally transforming the process. Developers still need the mental model of what they are building and need to be able to verify that they have actually built it.
That's because the single dimension of code fits how the computer works, and we can project any higher-order dimension onto it. If you go with two dimensions, like pictures, it no longer fits the computer model, and everything becomes awkward with the higher dimensions of the domain. The only good 2D representation is the grid (spreadsheets, relational DBs, parallel programming...), and even those can be unwieldy.
The mental model is the higher-dimensional structure that we project onto lines of code. Having an LLM generate it is like throwing paint at a canvas and hoping for the Mona Lisa.
In the case of the internet, it ended up going both ways. We overestimated it in the near term and underestimated its impact in the long term.
They could very well be right. I don't think they are. But I've also never seen anything that can scale quite like AI.
I agree with Cal that we basically don’t know what happens next. But I do know that the world needs a lot more good software and expanding the scope of what good software professionals can do for companies is positive.
10 years ago, I was building WordPress websites for motivational speakers. Today, I build web apps for the government. Certainly in 10 years we will be in a different place than we are today.
Your argument, taken in a broader sense, would have us tending to corn fields by hand to avoid a machine taking our job.
Some of us weren't that lucky and always had to stay creative, even when we lacked resources that privileged 1st world people had.
It's a shift - an exciting and also, yes, dangerous one - but if you focus on being able to produce true value, you are bound to be able to make a living out of it. Whether you're employed by someone or create a company yourself.
Find something you love and be scared at times, but don't let it stop you and you'll succeed.