It's completely absurd how wrong this article is. Development speed is 100% the bottleneck.
Just to quote one little bit from the piece regarding Google: "In other words, there have been numerous dead ends that they explored, invalidated, and moved on from. There's no knowing up front."
Every time you change your mind or learn something new and you have to make a course correction, there's latency. That latency is just development velocity. The way to find the right answer isn't to think very hard and miraculously come up with the perfect answer. It's to try every goddamn thing that shows promise. The bottleneck for that is 100% development speed.
If you can shrink your iteration time, then there are fewer meetings trying to determine prioritization. There are fewer discussions and bargaining sessions you need to do. Because just developing the variations would be faster than all of the debate. So the amount of time you waste in meetings and deliberation goes down as well.
If you can shrink your iteration time between versions 2 and 3, between versions 3 and 4, etc., the advantage compounds over your competitors. You find promising solutions earlier, which lead to new promising solutions earlier. Over an extended period of time, this is how you build a moat.
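To make the compounding claim concrete, here's a toy model (all numbers are made up for illustration; a sketch, not data): two teams explore the same idea space, each iteration has some chance of producing a validated win, and each win sharpens the next bet. Only the iteration time differs.

```python
def hits_after(weeks, iteration_weeks, hit_rate=0.2):
    """Expected validated wins, assuming wins compound: each one
    makes the next iteration a bit more likely to land."""
    hits, rate = 0.0, hit_rate
    for _ in range(int(weeks // iteration_weeks)):
        hits += rate
        rate = min(0.9, rate * 1.1)  # learning compounds, capped
    return hits

fast = hits_after(52, iteration_weeks=1)  # one-week loop
slow = hits_after(52, iteration_weeks=2)  # two-week loop
print(fast, slow)  # fast ends up well over 2x slow
```

In this model, halving the loop time more than doubles the expected wins over a year, because the extra iterations also compound the learning rate. That's the moat argument in miniature.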
This article is right insofar as "development velocity" has been redefined to be "typing speed."
With LLMs, you can type so much faster! So we should be going faster! It feels faster!
(We are not going faster.)
But your definition, the right one, is spot on. The pace of learning and decisions is exactly what drives development velocity. My one quibble is that if you want to learn whether something is worth doing, implementing it isn't always the answer. Prototyping vs. production-quality implementation is different, even within that. But yeah, broadly, you need to test and validate as many _ideas_ as possible, in order to make as many correct _decisions_ as possible.
That's one place I'm pretty bullish on AI: using it to explore/test ideas, which otherwise would have been too expensive. You can learn a ton by sending the AI off to research stuff (code, web search, your production logs, whatever), which lets you try more stuff. That genuinely tightens the feedback loop, and you go faster.
Naur’s theory of programming has always felt right to me. Once you know everything about the current implementation, planning and decision making can be done really fast, and there’s not much time lost on actually implementing prototypes and dead ends (learning with extra steps).
It’s very rare to not touch up code, even when writing new features. Knowing where to do so in advance (and planning to not have to do that a lot) is where velocity is. AI can’t help.
I can agree with this sentiment. It does not matter how insanely good LLMs become if you cannot assess their output quickly enough. You will ALWAYS want a human to verify, validate, and test the software. There could be a ticking time bomb in there somewhere.
Maybe the real skynet will kill us with ticking time bomb software bugs we blindly accepted.
I think people are largely split on LLMs based on whether they've reached a point of mastery where they can work close to as fast as they think and the tech would therefore slow them down rather than accelerate them.
It's like the speed of light in different mediums. It's not that photons slow down. They just hit more stuff and spend more time getting absorbed and re-emitted.
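The analogy can be made literal with standard optics (n = 1.5 for glass is the usual textbook figure): light always propagates at c between interactions; a medium's refractive index just measures how much the absorb/re-emit delays add up.

```python
C = 299_792_458  # speed of light in vacuum, m/s

def transit_time_ns(distance_m, n=1.0):
    """Time for light to cross a medium of refractive index n."""
    return distance_m * n / C * 1e9

vacuum = transit_time_ns(1.0)         # ~3.34 ns per metre
glass = transit_time_ns(1.0, n=1.5)   # ~5.00 ns: 50% longer, same photons
```

Same with a developer: raw thinking speed is unchanged, there's just more stuff to hit along the way.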
A better developer wastes less time solving the wrong problem.
> It's completely absurd how wrong this article is. Development speed is 100% the bottleneck.
The current trend in anti-vibe-coding articles is to take whatever the vibe coding maximalists are saying and then stake out the polar opposite position. In this case, vibe coding maximalists are claiming that LLM coding will dramatically accelerate time to market, so the anti-vibe-coding people feel like they need to claim that development speed has no impact at all. Add a dash of clickbait (putting "development speed" in the headline when they mean typing speed) and you get the standard LLM war clickbait article.
Both extremes are wrong, of course. Accelerating development speed is helpful, but it's not the only factor that goes into launching a successful product. If something can accelerate development speed, it will accelerate time to market and turnaround on feature requests.
I also think this mentality appeals to people who have been stuck in slow moving companies where you spend more time in meetings, waiting for blockers from third parties, writing documents, and appeasing stakeholders than you do shipping code. In some companies, you really could reduce development time to 0 and it wouldn't change anything because every feature must go through a gauntlet of meetings, approvals, and waiting for stakeholders to have open slots in their calendars to make progress. For anyone stuck in this environment, coding speed barely matters because the rest of the company moves so slow.
For those of us familiar with faster moving environments that prioritize shipping and discourage excessive process and meetings, development speed is absolutely a bottleneck.
Since I haven't mentioned it in the article: the context is a small agency whose target customers are early-stage (ideally earliest-stage) product startups.
We have literally one half-hour-long sync meeting a week. The rest is as lightweight as possible, typically averaging below 10 minutes daily with clients (when all the decisions happen on the fly).
I've worked in the corpo world, too, and it is anything but.
We do use vibe coding a lot in prototyping. Depending on the context, we sometimes have a lot of AI-agent-generated code, too.
What's more, because we work on multiple projects, we have a fairly decent pool of data points. And we don't see much of a speed improvement at the level of a whole project (I wrote more on it here: https://brodzinski.com/2025/08/most-underestimated-factor-es...).
So, I don't think I'm biased toward bureaucratic environments, where developers code in MS Word rather than VS Code.
But these are all just one dimension of the discussion. The other is a simple question: are there ways of validating ideas before we turn them into implemented features/products?
The answer has always been a wholehearted "yes".
If development pace were all that counted, the Googles and Amazons of this world would be beating the crap out of every aspiring startup in any niche big tech cared about, even remotely. And that simply is not happening.
Incumbents are known to be losing ground, and the old-school behemoths that still kick butt (such as IBM) do so because they continuously reinvent their businesses.
> so the anti-vibe-coding people feel like they need to claim that development speed has no impact at all
Strange, I'd been more of the impression that this is an argument from pro-vibe-coders. As more data comes in, the "productivity increases" of AI are not showing up as expected. So when people ask how come things are not getting done faster even though you say you are 10x faster at coding, the vibe-coders answer by saying that coding isn't the bottleneck, as opposed to capitulating and admitting that maybe they're not that much faster at coding after all.
Sure, but the actual lag from "I have an idea worth trying" to "here's a working version people can interact with" is one of the larger pieces of latency in that entire process.
You can't test or evaluate something that doesn't work yet.
Exactly the comment I came to make after reading this article. The article is basically claiming that "trying different things until something works" is what takes time, but the actual act of "trying things" requires development time. I can't see how someone can think about this topic this long, which the author clearly has, and come to this conclusion.
Perhaps I've just misunderstood the point, but it seems like a nonsensical argument.
This, so much. As an engineer turned PM, I am usually sympathetic to the idea that doing more discovery up front leads to better outcomes, but the simple reality is that it's hard to try anything, make any bets, or even do sure wins when the average development lifecycle is 12-18 months to get something released in a large organization and they're allergic to automation, hiring higher quality engineers, and hiring more engineers to improve velocities. Development velocity basically trumps everything, after basic sanity checks on the cost/benefit tradeoffs, because you can just try things and if it doesn't work you try something else.
This is /especially/ true in software in 2025, because most products are SaaS or subscription based, so you have a consistent revenue stream that can cover ongoing development costs which gives you the necessary runway to iterate repeatedly. Development costs then become relatively stable for a given team size and the velocity of that team entirely determines how often you can iterate, which determines how quickly you find an optimal solution and derive more value.
> and they're allergic to automation, hiring higher quality engineers, and hiring more engineers to improve velocities
I think there's another issue, though it could relate to your first two statements here. Even to try ideas, to explore the space of solutions, you need to have ideas to try. When entering development, you need clarity on what you're trying. It's very hard to make decisions on even a single attempt. I see engineers working a task the entire time while simply not sure what the task is really about.
And in a way, the coding agents need even more clarity in what you ask of them to deliver good results.
So even inside of what we consider "development" or "coding", the bottleneck is often: "what am I supposed to do here?" and not so much "I don't know how to do this" or "I have so much to implement".
This becomes obvious once you throw more engineers at a project and can't break up the work, because you have no clue what so many people could even all do. Knowing what all the needed tasks even are is hard, and a big bottleneck.
> it's hard to try anything, make any bets, or even do sure wins when the average development lifecycle is 12-18 months to get something released in a large organization and they're allergic to automation, hiring higher quality engineers, and hiring more engineers to improve velocities.
I think you and the article actually agree and you are arguing only with their use of the word "development."
The article uses "development" to refer only to the part where code is generated, while you are saying "development" is the process as a whole.
You both agree that latency in the real-world validation feedback loop leads to longer cycles and fewer promising solutions and that is the bottleneck.
I would agree if the only way to achieve (digital product) success were to implement as many versions of software as possible. That's not true.
The whole Lean Startup was about figuring out how to validate ideas without actually developing them. And it is as relevant as ever, even with AI (maybe, especially with AI).
In fact, it's enough to look at the appalling rate of product success. We commonly agree that 90% of startups fail. The majority of that cohort built things that shouldn't have been built in the first place. That's utter waste.
If only, instead of focusing on building more, they stopped and reevaluated whether they were building the right thing in the first place. Yet most startups are completely immersed in the "development as a bottleneck" principle. And I say that from our own experience of 20+ years of helping such companies build their early-stage products. The biggest challenge? Convincing them to build less, validate, learn, and only then go back to further development.
When it comes to existing products, it gets even more complex. The quote from Leah Tharin explicitly mentions waiting weeks or months until they were able to get statistically significant data. What follows is that within that part of experimentation, they were blocked.
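To give a feel for why that wait is weeks rather than days, here's a standard two-proportion sample-size calculation (the baseline rate, lift, and traffic figures below are mine, purely for illustration):

```python
from statistics import NormalDist

def users_per_arm(p_base, lift, alpha=0.05, power=0.8):
    """Users needed per variant to detect `lift` over a baseline
    conversion rate at the usual 5% significance / 80% power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p1, p2 = p_base, p_base + lift
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / lift ** 2

# Detecting a 1-point lift on a 5% conversion rate:
n = users_per_arm(0.05, 0.01)  # ~8,000 users per arm
```

At a few hundred eligible users per arm per day, that's a month of being blocked on a single experiment, regardless of how fast the variant itself was coded.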
Another angle to take a look at it is the fundamental difference in innovation between Edison/Dyson and Tesla.
The first duo was known for "I have not failed. I found 10,000 ways that don't work." They were flailing around with ideas till something eventually clicked.
Tesla, in contrast, would be at the Einstein end of the spectrum, with "If I had an hour to solve a problem, I'd spend 55 minutes thinking about the problem and 5 minutes thinking about [or in Tesla's case, making] solutions."
While most of the product companies would be somewhere in between, I'd argue that development is a bottleneck only if we are very close to Edison/Dyson's approach.
> The way to find the right answer isn't to think very hard and miraculously come up with the perfect answer. It's to try every goddamn thing that shows promise.
I have found that spending more time thinking generally reduces the number of failed attempts. It's amazing what "thinking hard" beforehand can do to eliminate reprioritization scrambling.
> Because just developing the variations would be faster than all of the debate. So the amount of time you waste in meetings and deliberation goes down as well.
Thank you for articulating something I knew but haven't been able to express as eloquently.
It frustrates me to no end to watch half a dozen non-technical bureaucrats argue for days about something that can be tried (and discarded) in a few hours with zero consequences.
"Let's write a position paper so that everyone involved can agree before we do anything."
Noooo! Just do it! See if it works in practice! Validate the marketing! Kick the tyres! Go for a test drive. Just. Get. Behind. The. Wheel.
About a decade ago, I was the sole developer for a special project. The code took 2 weeks to complete (a very simple Java servlet + JDBC app) but an entire year to actually deliver due to indecisive leadership, politics, and extremely overzealous security policies. By the time it was successfully deployed to prod, I had been chewed out by management countless times, who usually asked questions like “how on Earth can it take so long to do this one simple thing??”.
I saw two projects in a row at a German fintech (the one with AI in its name that forbids usage of AI) go exactly the same way.
Two to three months to code everything ("It's maximum priority!"), about four to QA, and then about a year to deploy to individual country services by the ops team.
During the test and deploy phases, the developers were just twiddling their thumbs because ops refused to allow them access, and product refused to take in new projects due to the possibility of developers having to go back to the code.
It took the CEO intervening and investigating the issues; in the end, the CTO's college best friend who was running DevOps was demoted.
I see that a lot too. Something is super urgent, you work your ass off to deliver and then somebody sits on it for months before actually shipping. If ever.
I don’t actually mind (because I won’t work my ass off). So when enthusiasm fizzles out, I just take a lot of notes (to onboard myself quickly later) and shelve the project.
IME, in most cases, it's the dickhead's fault in the first place.
This is often a CTO putting pressure on a dev manager when the bottleneck is ops, or product, or putting pressure on product when the bottleneck is dev.
The normal rationalization is that "you should be putting pressure on them".
The actual reason is that they are putting pressure on you as a show of force, rather than actually wanting it to go faster.
This is why the only response to a bad manager is to run away.
You have the unbelievably productive programmers - we all know their names, we use the code they wrote every day. Then you have the programmers who want to be there and will try everything they can to be there - except gain depth of knowledge. They tend to be shallow programmers. If you give them a task and spell it out, they can knock out code for it at a really good pace and wow upper management. But they will always lack the ability to take a task not spelled out and complete it. Vibe-coding is like sugar and crack mixed together for these people.
It’s infecting expectations too, I’ve noticed. The thing LLM coding tools expose very plainly, if someone wasn’t already aware, is that management would rather ship with bugs or missing features - no matter how many - as long as the “happy path” works.
The vibe coders can deliver happy-path results pretty fast, but I have already seen that within 2 months it starts to fall apart quickly and has to be extensively refactored, which ultimately ends up taking more time than if it had been done with quality in mind in the first place.
And supposedly the free market makes companies “efficient and logical”
We’ve only had these tools for, less than 2 years?
I think those “fall apart in 2 months” kinds of projects will still keep happening, but some of us had that experience and are refining our use of the tools. So I think in the future we will see a broader spread of “percent generated code” and degrees of success
> If you give them a task and spell it out, they can knock out code for it at a really good pace and wow upper management.
This is so true. I sometimes spend entire days, even weeks, where all I do is give those types of engineers the clarity to "unblock" them. Yet I always wonder: if I had just spent that time coding myself, I might have gotten more done.
But it's also this that I think bottlenecks development. The set of people who really know what needs to be done, at the level of detail that these developers will need to be told, or that coding agents will need to be told, is very small, and that's your bottleneck: you have like 1 or 2 devs on a whole project who know what to do, and everyone else needs a Standard Operating Procedure handed to them for every task. And now everyone is always just waiting on the same 2 devs to tell them what to do.
I think we should put this title-based distinction to rest.
Whether you call yourself an engineer, developer, programmer, or even a coder is mostly a localized thing, not an evaluation of expertise.
We're confusing everyone when we pretend a title reflects how good we are at the craft, especially titles we already use to refer to ourselves without judgement. At least use script kiddie or something.
I would reconcile the seeming paradox (AI-assisted coding produces more code faster, yet doesn't seem to produce products or features much faster) by considering that AI code generation, and in particular Copilot-style code suggestions, means the programmer is constantly invalidating and rebuilding their mental model of the code. That is not only slow but exhausting (and a tired programmer makes more errors in judgement).
It's basically the wetware equivalent of page thrashing.
My experience is that I write better code faster by turning off the AI assistants and configuring the IDE to produce suggestions that are as deterministic and fast as possible, so that they become a rapid shorthand. This makes for a fast way of writing code that doesn't lead to mental-model thrashing, since the model can be updated incrementally as I go.
The exception is using LLMs to straight up generate a prototype that can be refined. That also works pretty well, and largely avoids the expensive exchanges of information back and forth between human and machine.
Whenever a development effort involves a lot of AI-generated code, the nature of the task shifts from typing-heavy to code-review-heavy.
Cognitively, these are very different tasks. With the former, we actively drive technical decisions (decide on architecture, implementation details, even naming). The latter offers all these decisions made, and we first need to untangle them all before we can scrutinize the details.
What's more, often AI-generated code results in bigger PRs, which again adds to the cognitive load.
And some developers fall into the rabbit hole of starting another thing while they wait for their agent to produce the code. Adding context switching to an already taxing challenge basically fries brains. There's no way for such a code review to consistently catch the issues.
I see development teams define healthy routines around working with generated code, especially around limiting context switching, but also taking some tasks back to do by hand.
I’m moving this way as well after about 6 months of generating 95% of my code with Cursor/Claude.
My new paradigm is something like:
- write a few paragraphs about what is needed
- have the bot take in the context and produce a prototype solution outside of the main application
- have the bot describe main integration challenges
- do that integration myself — although I’m still somewhat lazy about this and keep trying to have the bot do it after the above steps; it seems to only have maybe 50% success rate
Validation is definitely the bottleneck if you make all your product decisions through A/B tests and wait for a statistically significant result for each feature.
But there are people with great product taste who can know by trying a product whether it meets a real user need - some of these are early-adopter customers, sometimes they are great designers, sometimes PMs. And they really do need to try a product (or prototype) to really know whether it works. I was always frustrated as a junior engineer when the PM would design a feature in a written spec, we would implement it, and then when trying it out before launch, they would want to totally redesign it, often in ways which required either terrible hacks or significant technical design changes to meet the new requirements. But after 15 years of seeing some great ideas on paper fall flat with our users, and noticing that truly exceptional product people could tell exactly what was wrong after the feature was built but before it was released to users, I learned to be flexible about those sorts of rewrites. And it’s exactly that sort of thing that vibecoding can accelerate
It's interesting how frustrating it can feel to backtrack, even when it's the right move. I definitely have felt this too.
Also, in the past I've done interactive maps and charts for different media organizations, and people would often debate for a considerable amount of time whether to, for example, make a bar or line chart (the actual questions and visualizations themselves were usually more sophisticated).
I remember occasionally suggesting prototyping both options and trying them out, and intuitively that usually struck people as impractical, even though it would often take less time than the discussions and yield more concrete results.
We have this saying:
Our clients always know what they want. Until they get it. Then they know they wanted something different.
And don't take that as a complaint. It's a basic behavioral observation. What we say we do is different from what we really do. By the same token, what we say we want is different from what we really want.
At the risk of being a bit sarcastic: we say we want regular exercise to keep fit, but what we really want is doomscrolling on a sofa with a beer in hand.
In the product development context, we have a very different attitude towards an imagined (hell, even wireframed) solution than an actual working piece of software. So it's kinda obvious we can't get it right on the first attempt.
We can at least be working in the right direction, and many product teams don't even do that. For them, development speed is only a clock counting down the time remaining before VCs pull the plug.
Development is always a bottleneck. Writing lines of code usually isn’t. I end up pumping out more leetcode during an interview than I do during a week or two on real products. No one has meaningfully measured lines of code as a metric of productivity since my career began in the mid-2000s.
On the other hand, there are tons of people here on HN who will claim that there’s zero connection between lines of code written and developer productivity. Obviously, deleting bad/unused code is good. And obviously, some tricky bugs are fixed in one line. But you can’t build something new without some (usually, very many) lines of code.
What would the function mapping lines of code to "value" even look like? Most agile teams aim to deliver "value" these days. We can't put a number on value. We most certainly can't say that, on average, adding a single line of code adds 0.01 units of value for a certain project.
The article sort of glosses over this, but to me the real question is delivering value over the long run. This takes patience and tenacity not just from developers but also from management. Making a product that lasts, evolves, and delivers for your clients is definitely a lot more challenging (and ultimately rewarding) than vibe-coding an MVP in a couple of weeks. I have the impression that in that regard AI coding tools are quite inadequate and don't really deliver the value they purport to.
That's just another great vantage point to consider when looking at product development.
Accompanying many early-stage startups in their journey, I see how often the development (which we're responsible for) takes a back seat. Sometimes the pivotal role will be customer support, sometimes it will be business development, and often product management will drive the whole thing.
And there's one more follow-up thought to this observation. Products that achieve success inevitably get into a spiral of gaining more features. That, in turn, makes them more clunky and less usable, and ultimately opens the way for new players who disrupt the niche.
At some point, adding more features makes things worse: too complicated, too overwhelming, making it harder to accomplish the core task. And yet, adding new stuff never ceases.
In the long run, the best tactic may actually be to go slower (and stop at some point), but focus on the meaningful changes.
Yeah, “development speed” is almost never the real blocker. I’ve worked on teams where folks shipped code at lightning speed… straight into the wrong direction. Turns out it’s way slower to undo that than to just move carefully with clarity.
I wrote a bit more about that here: https://tern.sh/blog/you-have-to-decide/
However, developers sure report their perception of being more productive. We do discuss how much these perceptions are grounded in reality, though. See this: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o... and this: https://substack.com/home/post/p-172538377
Strange, I'd been under the impression that this is an argument from pro-vibe-coders. As more data comes in, the "productivity increases" from AI are not showing up as expected. So when people ask how things are not getting done faster even though you say you are 10x faster at coding, the vibe-coders answer that coding isn't the bottleneck, as opposed to capitulating and admitting that maybe they're not that much faster at coding after all.
It's agreed that testing, evaluating, learning, and course-correcting are what take the time. That's the entire point being made.
You can't test or evaluate something that doesn't work yet.
Perhaps I've just misunderstood the point, but it seems like a nonsensical argument.
Do we always have to build it before we know that it will work (or, in 9 cases out of 10, that it will not work)?
Even more so, do we have to build a fully-fledged version of it to know?
If yes, then I agree, development is the bottleneck.
That sounds like an awful way to design software. Trial and error isn't engineering, but it does explain the current state of software security.
This is /especially/ true in software in 2025, because most products are SaaS or subscription based, so you have a consistent revenue stream that can cover ongoing development costs, which gives you the runway to iterate repeatedly. Development costs then stay relatively stable for a given team size, and that team's velocity entirely determines how often you can iterate, which determines how quickly you find an optimal solution and derive more value.
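A back-of-the-envelope sketch of the compounding claim (these are my own illustrative numbers and assumptions, not figures from the thread): with a fixed team cost, cycle time alone determines how many experiments a team can run per year, so shrinking the cycle compounds into more validated wins.

```python
# Toy model: fixed team, assumed fixed hit rate; only cycle time varies.
# All numbers below are illustrative assumptions, not measurements.

def experiments_per_year(cycle_days: float) -> int:
    """How many full build-measure-learn cycles fit in a year."""
    return int(365 // cycle_days)

def expected_wins(cycle_days: float, hit_rate: float = 0.1) -> float:
    """If roughly 1 in 10 experiments pans out, faster cycles mean more wins."""
    return experiments_per_year(cycle_days) * hit_rate

for days in (30, 14, 7):
    print(f"{days:2d}-day cycle: {experiments_per_year(days):2d} experiments, "
          f"~{expected_wins(days):.1f} expected wins/year")
```

Halving the cycle roughly doubles the expected wins per year at the same burn rate, which is the "moat" argument in miniature.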
I think there's another issue, though it may relate to your first two statements. Even to try ideas, to explore the space of solutions, you need ideas to try. When entering development, you need clarity on what you're trying. It's very hard to make decisions on even a single attempt. I see engineers spend an entire task not really sure what the task is even about.
And in a way, coding agents need even more clarity in what you ask of them to deliver good results.
So even inside what we consider "development" or "coding", the bottleneck is often "what am I supposed to do here?" and not so much "I don't know how to do this" or "I have so much to implement".
This becomes obvious once you throw more engineers at a project and can't break up the work, because you have no clue what so many people could even all do. Just knowing what all the needed tasks are is hard, and a big bottleneck.
This has been my experience as well :/
The article uses "development" to refer only to the part where code is generated, while you are using "development" for the process as a whole.
You both agree that latency in the real-world validation feedback loop leads to longer cycles and fewer promising solutions and that is the bottleneck.
Prototyping was never the issue.
The lessons you're talking about come from stressing applications and their design, which requires users to stress it.
The whole Lean Startup was about figuring out how to validate ideas without actually developing them. And it is as relevant as ever, even with AI (maybe, especially with AI).
In fact, it's enough to look at the appalling rate of product success. We commonly agree that 90% of startups fail. The majority of that cohort built things that shouldn't have been built in the first place. That's utter waste.
If only, instead of focusing on building more, they had stopped and reevaluated whether they were building the right thing in the first place. Yet most startups are completely immersed in the "development as the bottleneck" mindset. I say that from our own 20+ years of experience helping such companies build their early-stage products. The biggest challenge? Convincing them to build less, validate, learn, and only then go back to further development.
When it comes to existing products, it gets even more complex. The quote from Leah Tharin explicitly mentions waiting weeks or months until they were able to get statistically significant data. It follows that within that part of the experimentation, they were blocked.
Another angle to take a look at it is the fundamental difference in innovation between Edison/Dyson and Tesla.
The first duo was known for "I have not failed. I found 10,000 ways that don't work." They flailed around with ideas until something eventually clicked.
Tesla, in contrast, would be at the Einstein end of the spectrum: "If I had an hour to solve a problem, I'd spend 55 minutes thinking about the problem and 5 minutes thinking about [or, in Tesla's case, making] solutions."
While most product companies fall somewhere in between, I'd argue that development is a bottleneck only if we are very close to the Edison/Dyson approach.
I have found that spending more time thinking generally reduces the number of failed attempts. It's amazing what "thinking hard" beforehand can do to eliminate the reprioritization scramble.
Thank you for articulating something I knew but haven't been able to express as eloquently.
It frustrates me to no end to watch half a dozen non-technical bureaucrats argue for days about something that can be tried (and discarded) in a few hours with zero consequences.
"Let's write a position paper so that everyone involved can agree before we do anything."
Noooo! Just do it! See if it works in practice! Validate the marketing! Kick the tyres! Go for a test drive. Just. Get. Behind. The. Wheel.
Two to three months to code everything ("It's maximum priority!"), about four to QA, and then about a year for the ops team to deploy to individual country services.
During the test and deploy phases, the developers were just twiddling their thumbs, because ops refused to allow them access and product refused to take in new projects due to the possibility of developers having to go back to the code.
It took the CEO intervening and investigating the issues, and the CTO's college best friend who was running DevOps was demoted.
This is often a CTO putting pressure on a dev manager when the bottleneck is ops, or product, or putting pressure on product when the bottleneck is dev.
The normal rationalization is that "you should be putting pressure on them".
The actual reason is that they are putting pressure on you as a show of force, rather than actually wanting it to go faster.
This is why the only response to a bad manager is to run away.
The vibe coders can deliver happy-path results pretty fast, but I've already seen that within two months it starts to fall apart quickly and has to be extensively refactored, which ultimately ends up taking more time than if it had been done with quality in mind in the first place.
And supposedly the free market makes companies “efficient and logical”
I think those “fall apart in 2 months” kinds of projects will still keep happening, but some of us had that experience and are refining our use of the tools. So I think in the future we will see a broader spread of “percent generated code” and degrees of success
> If you give them a task and spell it out, they can knock out code for it at a really good pace and wow upper management.
This is so true. I sometimes spend entire days, even weeks, doing nothing but providing that type of engineer the clarity to "unblock" them. Yet I always wonder: if I had just spent that time coding myself, I might have gotten more done.
But I also think it's this that bottlenecks development. The set of people who really know what needs to be done, at the level of detail these developers (or coding agents) need to be told, is very small, and that's your bottleneck: you have maybe one or two devs on the whole project who know what to do, and everyone else needs a Standard Operating Procedure handed to them for every task. So everyone is always waiting on the same two devs to tell them what to do.
Whether you call yourself an engineer, developer, programmer, or even a coder is mostly a localized thing, not an evaluation of expertise.
We're confusing everyone when we pretend a title reflects how good we are at the craft, especially titles we already use to refer to ourselves without judgement. At least use script kiddie or something.
It's basically the wetware equivalent of page thrashing.
My experience is that I write better code faster by turning off the AI assistants and configuring the IDE to produce suggestions that are as deterministic and fast as possible, so that they become a rapid shorthand. That's a fast way of writing code that doesn't lead to mental-model thrashing, since the model can be updated incrementally as I go.
The exception is using LLMs to straight up generate a prototype that can be refined. That also works pretty well, and largely avoids the expensive exchanges of information back and forth between human and machine.
Cognitively, these are very different tasks. In the former, we actively drive the technical decisions (architecture, implementation details, even naming). The latter arrives with all those decisions already made, and we first need to untangle them before we can scrutinize the details.
What's more, often AI-generated code results in bigger PRs, which again adds to the cognitive load.
And some developers fall into the rabbit hole of starting another task while they wait for their agent to produce the code. Adding context switching to an already taxing challenge basically fries brains. There's no way such a code review can consistently catch the issues.
I see development teams defining healthy routines around working with generated code, especially around limiting context switching, but also reclaiming some tasks to be done by hand.
My new paradigm is something like:
- write a few paragraphs about what is needed
- have the bot take in the context and produce a prototype solution outside of the main application
- have the bot describe main integration challenges
- do that integration myself, although I'm still somewhat lazy about this and keep trying to have the bot do it after the above steps; it seems to have only maybe a 50% success rate
- obviously test thoroughly
But there are people with great product taste who can know by trying a product whether it meets a real user need - some of these are early-adopter customers, sometimes they are great designers, sometimes PMs. And they really do need to try a product (or prototype) to really know whether it works. I was always frustrated as a junior engineer when the PM would design a feature in a written spec, we would implement it, and then when trying it out before launch, they would want to totally redesign it, often in ways which required either terrible hacks or significant technical design changes to meet the new requirements. But after 15 years of seeing some great ideas on paper fall flat with our users, and noticing that truly exceptional product people could tell exactly what was wrong after the feature was built but before it was released to users, I learned to be flexible about those sorts of rewrites. And it’s exactly that sort of thing that vibecoding can accelerate
Also, in the past I've done interactive maps and charts for different media organizations, and people would often debate for a considerable amount of time whether to, for example, make a bar or line chart (the actual questions and visualizations themselves were usually more sophisticated).
I remember occasionally suggesting prototyping both options and trying them out, and intuitively that usually struck people as impractical, even though it would often take less time than the discussions and yield more concrete results.
And don't take that as a complaint. It's a basic behavioral observation. What we say we do is different from what we really do. By the same token, what we say we want is different from what we really want.
At the risk of being a bit sarcastic: we say we want regular exercise to keep fit, but what we really want is doomscrolling on the sofa with a beer in hand.
In the product development context, we have a very different attitude towards an imagined (hell, even wireframed) solution than an actual working piece of software. So it's kinda obvious we can't get it right on the first attempt.
We can be working toward the right direction, and many product teams don't even do that. For them, development speed is only a clock counting time remaining before VCs pull the plug.
No code -> no software.
Accompanying many early-stage startups in their journey, I see how often the development (which we're responsible for) takes a back seat. Sometimes the pivotal role will be customer support, sometimes it will be business development, and often product management will drive the whole thing.
And there's one more follow-up thought to this observation. Products that achieve success inevitably get into a spiral of adding more features. That, in turn, makes them clunkier and less usable, and ultimately opens the way for new players who disrupt the niche.
At some point, adding more features in general makes things worse--too complicated, too overwhelming, making it harder to accomplish the core task. And yet, adding new stuff never ceases.
In the long run, the best tactic may actually be to go slower (and stop at some point), but focus on the meaningful changes.