I build accounting automation tools and this resonates hard. The codebase has ~60 backend services handling things like pattern matching, VAT classification, invoice reconciliation - stuff where a subtle bug doesn't crash anything, it just silently posts the wrong number to someone's accounts.
Vibe coding would be catastrophic here. Not because the AI can't write the code - it usually can - but because the failure mode is invisible. A hallucinated edge case in a tax calculation doesn't throw an error. It just produces a slightly wrong number that gets posted to a real accounting platform and nobody notices until the accountant does their review.
Where I've found AI genuinely useful is as a sophisticated autocomplete. I write the architecture, define the interfaces, handle the domain logic myself. Then I'll use it to fill in boilerplate, write test scaffolding, or explore an API I'm not familiar with. The moment I hand it the steering wheel on anything domain-specific, things go sideways fast.
The article's point about understanding your codebase is spot on. When something breaks at 2am in production, "the AI wrote that part" isn't an answer. You need to be able to trace through the logic yourself.
> Vibe coding would be catastrophic here. Not because the AI can't write the code - it usually can - but because the failure mode is invisible. A hallucinated edge case in a tax calculation doesn't throw an error. It just produces a slightly wrong number that gets posted to a real accounting platform and nobody notices until the accountant does their review.
How is that different from handwritten code? Sounds like stuff you deal with architecturally (audit trails, review, rollback) and with tests.
With handwritten code, the humans know what they don't know. If you need a constant or a formula, you don't invent or guess it; you ask the domain expert.
I think the point he is trying to make is that you can't outsource your thinking to an automated process and simultaneously trust it to make the right decisions.
In places where a number, a fraction, or some other non-binary outcome is involved, there is an aspect of growing the codebase over time with human knowledge and failure.
You could argue that speed of writing code isn't everything; often, being correct and stable is more important. A banking app, for example, doesn't have to be written and shipped fast, but it has to be done right. ECG machines, money, and meatspace safety automation all fall under this.
I think it all boils down to, which is higher risk, using AI too much, or using AI too little?
Right now I see the former as hugely risky: hallucinated bugs, being coaxed into dead-end architectures, security concerns, not being familiar with the code when a bug shows up in production, less sense of ownership, less hands-on learning, etc. This is true both at the personal level and at the business level. (And it's astounding that CEOs haven't made that connection yet.)
With the latter, you may be less productive than optimal, but might the hands-on training and fundamental understanding of the codebase make up for it in the long run?
Additionally, I personally find my best ideas often happen when I'm knee deep in some codebase, hitting some weird edge case that doesn't fit, the kind of thing that would probably never come up if I were just reviewing an already-completed PR.
It's very interesting to me how many people presume that if you don't learn how to vibecode now you'll never ever be able to catch up. If the models are constantly getting better, won't these tools be easier to use a year from now? Will model improvements not obviate all the byzantine prompting strategies we have to use today?
In the early days of music synthesizers, the interfaces were so complex and technical that only engineers could use them.
Some of these early musicians were truly amazing individuals; real renaissance people. They understood the theory and had true artistic vision. They knew how to ride the tiger, and could develop great music fairly efficiently.
A lot of others, not so much. They twiddled knobs at random, and spent a lot of effort, panning for gold dust. Sometimes, they would have a hit, but they wasted a lot of energy on dead ends.
Once the UI improved (like with the release of the Korg M1 workstation), real artists could enter the fray, and that's when the hockey stick bent.
Not exactly sure what AI’s Korg M1 will be, but I don’t think we’re there, yet.
I do think that there's some meta-skills involved here that are useful, in the same way that some people have good "Google-fu". Some of it is portable, some of it isn't.
I think if you orient your experimentation right, you can come up with some good tactics that are helpful even when you're not using AI assistance. "Making this easier for the robot" can often align with "making this easier for the humans" as well. It's a decent forcing function.
Though I agree with the sentiment. People who have been doing this for less than a year are convinced that they have some permanent lead over everyone.
I think a lot about my years of being self-taught in programming. Years spent spinning my wheels. I know people who, after 3 months of a coding bootcamp, were much further along than I was after ... 6 years of struggling through material.
I think so, that's why I think that the risk of pretty much ignoring the space is close to zero. If I happen to be catastrophically wrong about everything then any AI skills I would've learned today will be completely useless 5 years from now anyway, just like skills from early days of ChatGPT are completely useless today.
I think the AI-coding skill that is likely to remain useful is the ability (and discipline) to review and genuinely understand the code produced by the AI before committing it.
I don't have that skill; I find that if I'm using AI, I'm strongly drawn toward the lazy approach. At the moment, the only way for me to actually understand the code I'm producing is to write it all myself. (That puts my brain into an active coding/puzzle solving state, rather than a passive energy-saving state.)
If I could have the best of both worlds, that would be a genuine win, and I don't think it's impossible. It won't save as much time as pure vibe coding promises to, of course.
I think there's something to this, but I also think there's something to the notion that it'll get easier and easier to do mass-market work with these tools, while at the same time they become greater and greater force multipliers for more and more nuanced power users.
It is strange because the tech now moves much faster than the development of human expertise. Nobody on earth achieved Sonnet 3.5 mastery, in the 10k hours sense, because the model didn't exist long enough.
Prior intuitions about skill development, and indeed prior scientifically based best practices, do not cleanly apply.
Yup, this is why even though I like ai coding a lot, and am pretty enthusiastic about it, and have fun tinkering with it, and think it will stick around and become part of everyday proper software development practice (with guardrails in place), I at least don't go telling people they need to learn it now or they'll be obsolete or whatever. Sitting back and seeing how this all works out — nobody really knows imo, I could be wrong too! — is a valid choice and if ai does stick around you can just hop in when the landscape is clearer!
That's my take. I know LLMs aren't going away even if the bubble pops. I refuse to become a KPI in some PM's promotion case used to justify pushing this tech even further, so for now I don't use it (unless work mandates it).
Until then, I keep up and add my voice to the growing number who oppose this clear threat on worker rights. And when the bubble pops or when work mandates it, I can catch up in a week or two easy peasy. This shit is not hard, it is literally designed to be easy. In fact, everything I learn the old way between now and then will only add to the things I can leverage when I find myself using these things in the future.
Wait around five years and then prompt: "Vibe me Windows" and then install your smart new double glazed floor. There is definitely something useful happening in LLM land but it is not and will never be AGI.
Oooh, let me dive in with an analogy:
Screwdriver.
Metal screws needed inventing first - they augment or replace dowels, nails, glue, "joints" (think tenon/dovetail etc), nuts and bolts, and many more fixings. Early screws were simply slotted. PH (Phillips cross head) and PZ (Pozidriv) came rather later.
All of these require quite a lot of wrist effort. If you have ever driven a few hundred screws in a session, then you know it is quite an effort.
Drill driver.
I'm not talking about one of those electric screwdriver thingies, but say a De W or Maq or whatever jobbies. They will have a Li-ion battery and a chuck capable of holding something like a 10mm shank, round or hex. It'll have around 15 torque settings, two or three speed settings, and drill and hammer-drill modes. Usually you have two - one to drill and one to drive. I have one that will seriously wrench your wrist if you allow it to. You need to know how to use your legs or whatever to block the handle from spinning when the torque gets a bit much.
...
You can use a modern drill driver to drive anything from a small screw (PZ1, 2.5mm) to a PZ3, 20+cm effort. It can also drill with a long auger bit, or hammer-drill, up to around 20mm diameter and 400mm deep. All jolly exciting.
I still use an "old school" screwdriver or twenty. There are times when you need to feel the screw (without deploying an inadvertent double entendre).
I do find the new search engines very useful. I will always put up with some mild hallucinations to avoid social.microsoft and nerd.linux.bollocks and the like.
> I think it all boils down to, which is higher risk, using AI too much, or using AI too little?
This framing is exactly how lots of people in the industry are thinking about AI right now, but I think it's wrong.
The way to adopt new science, new technology, new anything really, has always been that you validate it for small use cases, then expand usage from there. Test on mice, test in clinical trials, then go to market. There's no need to speculate about "too much" or "too little" usage. The right amount of usage is knowable - it's the amount which you've validated will actually work for your use case, in your industry, for your product and business.
The fact that AI discourse has devolved into a Pascal's Wager is saddening to see. And when people frame it this way in earnest, 100% of the time they're trying to sell me something.
Those of us working from the bottom, looking up, do tend to take the clinical progressive approach. Our focus is on the next ticket.
My theory is that executives must be so focused on the future that they develop a (hopefully) rational FOMO. After all, missing some industry shaking phenomenon could mean death. If that FOMO is justified then they've saved the company. If it's not, then maybe the budget suffers but the company survives. Unless of course they bet too hard on a fad, and the company may go down in flames or be eclipsed by competitors.
Ideally there is a healthy tension between future looking bets and on-the-ground performance of new tools, techniques, etc.
To be fair, that's what I have done. I try to use AI every now and then for small, easy things. It isn't yet reliable for those things, and always makes mistakes I have to clean up. Therefore I'm not going to trust it with anything more complicated yet.
We should separate doing science from adopting science.
Testing medical drugs is doing science. They test on mice because it's dangerous to test on humans, not to restrict scope to small increments. In doing science, you don't always want to be extremely cautious and incremental.
Trying to build a browser with 100 parallel agents is, in my view, doing science, more than adopting science. If they figure out that it can be done, then people will adopt it.
Trying to become a more productive engineer is adopting science, and your advice seems pretty solid here.
> Test on mice, test in clinical trials, then go to market.
You're neglecting the cost of testing and validation. This is the part that's quite famous for being extremely expensive and a major barrier to developing new therapies.
> my best ideas often happen when knee deep in some codebase
I notice that I get into this automatically during AI-assisted coding sessions if I don't lower my standards for the code. Eventually, I need to interact very closely with both the AI and the code, which feels similar to what you describe when coding manually.
I also notice I'm fresher because I'm not using many brain cycles on legwork, so maybe I'm actually getting into more situations where I come up with good ideas because I'm tackling hard problems.
So maybe the key to using AI and staying sharp is to refuse to sacrifice your good taste.
Yeah, it's frustrating that it seems most AI conversations devolve into straw men of either zero AI or one shot apps. There's a huge middle ground where I, and it seems like many others, have found AI very useful. We're still at the stage where it's somewhat unique for each person where AI can work for them (or not).
Or just wait for things to settle. As fast as the field is moving, staying ahead of the game is probably high investment with little return, as the things you spend a ton of time honing today may be obsolete tomorrow, or simply built into existing products with much lower learning cost.
Note, if staying on the bleeding edge is what excites you, by all means do. I'm just saying for people who don't feel that urge, there's probably no harm just waiting for stuff to standardize and slow down. Either approach is fine so long as you're pragmatic about it.
Very reasonable take. The fact that this is being downvoted really shows how poor HN's collective critical thinking has become. Silicon Valley is cannibalizing itself and it's pretty funny to watch from the outside with a clear head.
Interesting analogy, but I'd say it's kind of the opposite. In the two you mentioned, the cost of inaction is extremely high, so they reach one conclusion, whereas here the argument is that the cost of inaction is pretty low, and reaches the opposite conclusion.
It definitely comes up if you're just reviewing an already-"completed" PR. Even if you're not going to ship AI-generated code to prod (and I think that's a reasonable choice), it's often informative to give a high-level description of what you want to accomplish to a coding agent and see what it does in your codebase. You might find that the AI covered a particular edge case that you would have missed, even if the PR as a whole is slop.
> I think it all boils down to, which is higher risk, using AI too much, or using AI too little?
It's both. It's using the AI too much to code, and too little to write detailed plans of what you're going to code. The planning stage is by far the easiest to fix if the AI goes off track (it's just writing some notes in plain English) so there is a slot-machine-like intermittent reinforcement to it ("will it get everything right with one shot?") but it's quite benign by comparison with trying to audit and fix slop code.
Even if you believe that many are too far on one side now, you have to account for the fact that AI will get better rapidly. If you're not using it now, you may end up lacking preparation when it becomes more valuable.
But as it gets better, it'll also get easier, be built into existing products you already use, etc. So I wouldn't worry too much about that aspect. If you enjoy tinkering, or really want to dive deep into fundamentals, that's one thing, but I wouldn't worry too much about "learning to use some tool", as fast as things are changing.
The bit about "we have automated coding, but not software engineering" matches my experience. LLMs are good at writing individual functions but terrible at deciding which functions should exist.
My project has a C++ matching engine, Node.js orchestration, Python for ML inference, and a JS frontend. No LLM suggested that architecture - it came from hitting real bottlenecks. The LLMs helped write a lot of the implementation once I knew what shape it needed to be.
Where I've found AI most dangerous is the "dark flow" the article describes. I caught myself approving a generated function that looked correct but had a subtle fallback to rate-matching instead of explicit code mapping. Two different tax codes both had an effective rate of 0, so the rate-match picked the wrong one every time. That kind of domain bug won't get caught by an LLM because it doesn't understand your data model.
Architecture decisions and domain knowledge are still entirely on you. The typing is faster though.
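A minimal, invented sketch of the kind of fallback bug described above; the tax codes, rates, and function names here are hypothetical, not taken from the commenter's system:

```python
# Hypothetical illustration of a "rate-matching fallback" bug.
# Nothing throws; the wrong classification just flows downstream silently.

TAX_CODES = {
    "GB_STANDARD": 0.20,
    "GB_ZERO_RATED": 0.00,   # zero-rated: VAT applies at 0%, still reportable as taxable
    "GB_EXEMPT": 0.00,       # exempt: outside the VAT system entirely
}

def classify_by_code(source_code: str) -> str:
    """Explicit mapping from the upstream system's code to our tax code."""
    mapping = {"S": "GB_STANDARD", "Z": "GB_ZERO_RATED", "E": "GB_EXEMPT"}
    return mapping[source_code]

def classify_by_rate(rate: float) -> str:
    """The subtle fallback: pick the first tax code whose rate matches.
    For rate == 0.0 this silently conflates zero-rated and exempt items."""
    for code, code_rate in TAX_CODES.items():
        if abs(code_rate - rate) < 1e-9:
            return code
    raise ValueError(f"no tax code with rate {rate}")

# An exempt line item arrives with rate 0.0 but no explicit source code:
print(classify_by_rate(0.0))   # -> "GB_ZERO_RATED" (wrong for an exempt item, every time)
print(classify_by_code("E"))   # -> "GB_EXEMPT" (correct, explicit mapping)
```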
> LLMs are good at writing individual functions but terrible at deciding which functions should exist.
Have you tried explicitly asking them about the latter? If you just tell them to code, they aren't going to work on figuring out the software engineering part: it's not part of the goal that was directly reinforced by the prompt. They aren't really all that smart.
> However, it is important to ask if you want to stop investing in your own skills because of a speculative prediction made by an AI researcher or tech CEO.
I don't think these are exclusive. Almost a year ago, I wrote a blog post about this [0]. I spent the time since then both learning better software design and learning to vibe code. I've worked through Domain-Driven Design Distilled, Domain-Driven Design, Implementing Domain-Driven Design, Design Patterns, The Art of Agile Software Development (2nd Edition), Clean Architecture, Smalltalk Best Practice Patterns, and Tidy First?. I'm a far better software engineer than I was in 2024. I've also vibe coded [1] a whole lot of software [2], some good and some bad [3].
You can choose to grow in both areas.
[0]: https://kerrick.blog/articles/2025/kerricks-wager/
[1]: As defined in Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond by Gene Kim and Steve Yegge, wherein you still take responsibility for the code you deliver.
[2]: https://news.ycombinator.com/item?id=46702093
[3]: https://news.ycombinator.com/item?id=46719500
I personally found out that knowing how to use AI coding assistants productively is a skill like any other: a) it requires a significant investment of time, b) it can be quite rewarding to learn, just like any other skill, c) it might be useful now or in the future, and d) it doesn't negate the usefulness of any skills acquired in the past, nor diminish the usefulness of learning new skills in the future.
Agreed, my experience and code quality with Claude Code and agentic workflows have improved dramatically since I invested in learning how to properly use these tools. Ralph Wiggum-based approaches and HumanLayer's agents/commands (in their .claude/) have boosted my productivity the most. https://github.com/snwfdhmp/awesome-ralph https://github.com/humanlayer
On the topic of using AI assistants: everything is moving so fast that I constantly feel like "I'm doing this wrong". Is the answer simply "dedicate time to experimenting"? I keep hearing about "spec-driven design" or "Ralph"; maybe I should learn those? Genuine thoughts and questions, btw.
The addictive nature of the technology persists, though. So even if we say certain skills are required to use it, it must also come with a warning label and be avoided by people with addictive personalities, substance abuse issues, etc.
> knowing how to use ai coding assistants productively is a skill like any other
No, it's different from other skills in several ways.
For one, the difficulty of this skill is largely overstated. All it requires is basic natural language reading and writing, the ability to organize work and issue clear instructions, and some relatively simple technical knowledge about managing context effectively, knowing which tool to use for which task, and other minor details. This pales in comparison with the difficulty of learning a programming language and classical programming. After all, the entire point of these tools is to lower the skill required for tasks that were previously inaccessible to many people. The fact that millions of people are now using them, with varying degrees of success for various reasons, is a testament to this.
I would argue that the results depend far more on the user's familiarity with the domain than their skill level. Domain experts know how to ask the right questions, provide useful guidance, and can tell when the output is of poor quality or inaccurate. No amount of technical expertise will help you make these judgments if you're not familiar with the domain to begin with, which can only lead to poor results.
> might be useful now or in the future
How will this skill be useful in the future? Isn't the goal of the companies producing these tools to make them accessible to as many people as possible? If the technology continues to improve, won't it become easier to use, and be able to produce better output with less guidance?
It's amusing to me that people think this technology is another layer of abstraction, and that they can focus on "important" things while the machine works on the tedious details. Don't you see that this is simply a transition period, and that whatever work you're doing now, could eventually be done better/faster/cheaper by the same technology? The goal is to replace all cognitive work. Just because this is not entirely possible today, doesn't mean that it won't be tomorrow.
I'm of the opinion that this goal is unachievable with the current tech generation, and that the bubble will burst soon unless another breakthrough is reached. In the meantime, your own skills will continue to atrophy the more you rely on this tech, instead of on your own intellect.
I'm doing a similar thing. Recently, I got $100 to spend on books. The first two books I got were A Philosophy of Software Design, and Designing Data-Intensive Applications, because I asked myself, out of all the technical and software engineering related books that I might get, given agentic coding works quite well now, what are the most high impact ones?
And it seemed pretty clear to me that they would have to do with the sort of evergreen software engineering and architecture concepts that you still need a human to design and think through carefully today, because LLMs don't have the judgment or the high-level view for that. They would not be about the specific API surface area or syntax of particular frameworks, libraries, or languages, which LLMs, IDE completion, and online documentation mostly handle.
Especially since well-designed software systems, with deep and narrow module interfaces, maintainable and scalable architectures, well-chosen underlying technologies, clear data flow, and so on, are all things that can vastly increase the effectiveness of an AI coding agent: they mean it needs less context to understand things, can reason more locally, etc.
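As a rough, invented sketch of the "deep and narrow interface" point (the class, method, and endpoint names are hypothetical): a module that exposes one small method while hiding paging and caching behind it gives both humans and a coding agent far less surface to hold in their heads than one that leaks those details to every call site.

```python
# Sketch of a "deep" module with a narrow interface (names are invented).
# Callers, human or LLM, only ever see fetch_invoices(); paging and caching
# live behind it and can change without touching call sites.

from dataclasses import dataclass
from typing import Iterator

@dataclass
class Invoice:
    invoice_id: str
    total_cents: int

class InvoiceGateway:
    """Narrow interface: one method, no transport details leaking out."""

    def __init__(self, client):
        self._client = client          # any object with a .get(path) -> list[dict]
        self._cache: dict[str, list[Invoice]] = {}

    def fetch_invoices(self, customer_id: str) -> Iterator[Invoice]:
        if customer_id not in self._cache:
            self._cache[customer_id] = list(self._fetch_all_pages(customer_id))
        return iter(self._cache[customer_id])

    # Hidden depth: pagination is an internal concern.
    def _fetch_all_pages(self, customer_id: str) -> Iterator[Invoice]:
        page = 1
        while True:
            rows = self._client.get(f"/customers/{customer_id}/invoices?page={page}")
            if not rows:
                return
            for row in rows:
                yield Invoice(row["id"], row["total_cents"])
            page += 1

# Usage with a fake client, just to show the narrow call site:
class FakeClient:
    def get(self, path):
        return [{"id": "INV-1", "total_cents": 12500}] if "page=1" in path else []

gateway = InvoiceGateway(FakeClient())
print(list(gateway.fetch_invoices("cust-42")))
```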
To be clear, this is not about not understanding the paradigms, capabilities, or affordances of the tech stack you choose, either! The next books I plan to get are things like Modern Operating Systems, Data-Oriented Design, Communicating Sequential Processes, and The Go Programming Language, because low-level concepts, too, are things you can direct an LLM to optimize for, if you give it the algorithm, but which it won't do very well on its own, and they are generally also evergreen and not subsumed in the "platform minutiae" described above.
Likewise, stretching your brain with new paradigms (actor-oriented, Smalltalk OOP, Haskell FP, Clojure FP, Lisp, etc.) gives you new ways to conceptualize and express your algorithms and architectures, and to judge and refine the code your LLM produces. And ideas like BDD, PBT (property-based testing), and lightweight formal methods (like model checking) all provide direct tools for modeling your domain, specifying behavior, and testing it far better, which then lets you use agentic coding tools with more safety and confidence (and gives them a better feedback loop). At the limit, this almost creates a way to program declaratively in executable specifications, convert those to code via LLM, and then test the latter against the former!
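For instance, a property-based test can act as a small executable specification that LLM-generated code has to satisfy. This is a minimal sketch using the Hypothesis library; the calculate_vat function and the flat 20% rate are hypothetical stand-ins, not anyone's real implementation:

```python
# A property-based "executable specification" sketch (run with pytest).
# calculate_vat() stands in for a possibly LLM-generated implementation;
# the properties encode what any correct implementation must satisfy.

from decimal import Decimal, ROUND_HALF_UP
from hypothesis import given, strategies as st

RATE = Decimal("0.20")  # assumed flat 20% rate for the sake of the example

def calculate_vat(net: Decimal) -> Decimal:
    """VAT on a net amount, rounded half-up to whole pence/cents."""
    return (net * RATE).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

amounts = st.decimals(min_value=Decimal("0.00"), max_value=Decimal("1000000.00"), places=2)

@given(amounts)
def test_vat_is_nonnegative_and_bounded(net):
    vat = calculate_vat(net)
    assert Decimal("0.00") <= vat <= net   # never negative, never exceeds the net amount

@given(amounts)
def test_vat_has_at_most_two_decimal_places(net):
    vat = calculate_vat(net)
    assert vat == vat.quantize(Decimal("0.01"))
```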
Agreed. I find most design patterns end up as a mess eventually, at least when followed religiously. DDD being one of the big offenders. They all seem to converge on the same type of "over engineered spaghetti" that LOOKS well factored at a glance, but is incredibly hard to understand or debug in practice.
DDD is quite nice as a philosophy: group state based on behavioral similarity and keep mutation and query functions close together, model data structures from domain concepts rather than the inverse, and pay attention to domain boundaries (an entity may be read-only in one domain and have fewer state transitions than in another).
But it should be a philosophy, not a directive. There are always tradeoffs to be made, and DDD may be the one to be sacrificed in order to get things done.
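A tiny sketch of the "read-only in one domain" point above (the contexts and fields are invented for illustration): the same underlying invoice is a mutable aggregate with state transitions in a billing context, but only a frozen projection in a reporting context.

```python
# Invented illustration of modeling the same concept differently per bounded
# context, rather than sharing one class everywhere.

from dataclasses import dataclass

# Billing context: the invoice is mutable and has real state transitions.
@dataclass
class BillingInvoice:
    invoice_id: str
    total_cents: int
    status: str = "draft"          # draft -> issued -> paid

    def issue(self) -> None:
        assert self.status == "draft"
        self.status = "issued"

    def mark_paid(self) -> None:
        assert self.status == "issued"
        self.status = "paid"

# Reporting context: the same invoice is a read-only projection with no
# transitions at all; frozen=True makes accidental mutation an error.
@dataclass(frozen=True)
class ReportedInvoice:
    invoice_id: str
    total_cents: int
    status: str
```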
I was going to ask the same thing. I'm self taught but I've mainly gone the other way, more interested in learning about lower level things. Bang for buck I think I might have been better reading DDD type books.
https://www.amazon.com/Learning-Domain-Driven-Design-Alignin...
It presents the main concepts like a good lecture and is a more modern take than the blue book. Then you can read the blue book.
But DDD should be taken as a philosophy rather than a pattern. Trying to follow it religiously tends to result in good software, but it's very hard to nail the domain well, and if refactoring is no longer an option, you will be stuck with a suboptimal system. It's more something you want to converge toward in the long term rather than getting it right early. Always start with a simpler design.
I don't know, this feels extremely wrong. I've put out more things (including open source for the first time in a long while) that I still feel proud of, since at the end of the day I manually review everything and fix whatever I don't like.
But I think this only works because I have a decade of experience in basically every field of the programming space, and I had to learn it all without AI. I know exactly what I want from the AI, and Opus 4.6 and Codex 5.3 understand that and execute on it faster than I could ever write it myself.
The #1 predictor of success here is being able to define what success looks like in an obnoxiously detailed manner. If you have a strong vision about the desired UI/UX and you constantly push for that outcome, it is very unlikely you will have a bad time with the current models.
The workflow that seems more perilous is the one where the developer fires up gas town with a vague prompt like "here's my crypto wallet please make me more money". We should be wielding these tools like high end anime mech suits. Serialized execution and human fully in the loop can be so much faster even if it consumes tokens more slowly.
Everyone seems to have different ways to deal with AI for coding and have different experiences. But Armin's comment quoted in the article is spot on. I have seen a friend do exactly the same thing, vibe coded an entire product hooked to Cursor over three months. Filled with features no one uses, feeling very good about everything he built. Ultimately it's his time and money, but I would never want this in my company. While you can get very far with vibe coding, without the guiding hands and someone who understands what's really going on with the code, it ends up in a disaster.
I use AI for the mundane parts and for brainstorming bugs. It is actually more consistent than me in covering corner cases, making sure guard conditions exist, etc. So I now focus more on design/architecture and what to build, not minutiae.
It's like saying if you don't learn to use a smartphone you'll be left behind. Even babies can use it now.
I can confidently say that being able to prompt and train LoRAs for Stable Diffusion makes zero difference for your ability to prompt Nano Banana.
When people talk about this stuff they usually mean very different techniques. And last months way of doing it goes away in favor of a new technique.
I think the best you can do now is try lots of different new ways of working and keep an open mind.
Another similar wager I remember is: "What if climate change is a hoax, and we invested in all this clean energy infrastructure for nothing?"
There is zero evidence that LLMs improve software developer productivity.
Any data-driven attempt to measure this gives ambiguous results at best.
How's that? If it ever gets good, it seems rather implausible that today's tool-of-the-month will turn out to be the winner.
You think it's going to get harder to use as time goes on?
That's nowhere near guaranteed.
Also, it prevents repetitive strain injury. At least, it does for me.
You'll probably be forming some counter-arguments in your head.
Skip them, throw the DDD books in the bin, and do your co-workers a favour.
I have like 15 personalized apps now, mostly Chrome extensions.