A quick scan of various Reddit forums reveals how "vibe coders" experience exponential levels of difficulty past the simple landing page. Setting up basic auth, for example, is non-trivial (outside of PaaS), and there are many other aspects that require at least some understanding of what is going on.
My only concern with this type of programming is when it starts involving end users, specifically around privacy and security. It is not that AI-written software is less or more secure. It is about the whole life cycle of software development and maintenance. Vibe coders might not visit this forum, for example, and might not be aware of a security exploit that must be fixed ASAP... or even be aware that they need to perform a basic level of software support to avoid costly intervention in the future. It is not going to happen - not today.
I am thinking that this type of coding is only going to increase the demand for professional services. The closest analogy I can provide is that almost anyone can perform a basic level of DIY and many do - but when it comes to more serious work you rely on contractors to get the job done.
> Vibe-coders might not visit this forum for example and might not be aware of a security exploit that must be fixed asap.
This is true of "regular" coders too. I've worked with/for extremely experienced programmers who, while knowing the security risks, implemented (or even worse, asked me to implement) flaky authentication systems, exposed databases, etc. I don't think this is an AI issue, it's a "care about your users" issue.
> is only going to increase the demand for professional services
Then shouldn't we be happy that more people are embracing vibe coding?
> Then shouldn't we be happy that more people are embracing vibe coding?
I don’t know about you, but I’m not looking forward to reverse engineering and maintaining someone else’s vibe coded mess.
It’s like when someone makes a quick and dirty proof of concept to impress management and then hands it off to another team to make it usable in production. They did 20% of the work but took 80% of the credit. Now you have to do the remaining 80% of the work to make it into what it needs to be, but management is only going to be disappointed because you’re moving so slowly relative to the proof-of-concept creator.
There's a big difference between someone knowing about security and taking a calculated risk, and someone who understands neither the risks nor the semantics of code handed to them. There are also legal issues. Even in the most optimistic cases, I think we're 50 years away from this making sense for most software.
Accenture, IBM and other big companies have employed vibe coders for years.
Java is a language designed to make this possible. Java-style classes make it possible for someone to design the overall structure and pass it to someone else who just vomits something inside it.
They have teams where maybe 1 in 3 consultants have programming knowledge and the rest just vibe something into the Java API that is given to them. Back and forth until it passes, and then off to the customers. LLMs just make this faster and maybe even better.
The code will be bloated, slow, and hard to maintain.
> Accenture, IBM and other big companies have employed vibe coders for years.
Having worked in a few codebases/projects by these companies (and some other "big name" consultancy companies in this same "space"), I cannot imagine copypasta from Claude/Copilot would be anything other than a massive improvement.
What companies have paid these firms to do is nothing short of appalling, and I'm not even referring to the notorious Hertz lawsuit, just the average run of the mill junk.
Java classes are no more enabling of shit code than any other language. Idiots can and do “vomit” code into pre-canned structures built for them in any language.
Java was, as Gosling says in the first Java white paper, designed for average programmers.
Java classes and types are strict and "safe" enough that they make it possible to put a straitjacket on the coder, but loose enough that they can still make the code compile. You can't do that with C++ or Python because they are too loose, and you can't do it with Rust or a strict functional programming language because the vibe programmer can't make the code compile.
It's perfect when you have a software architect who does the intelligent work and code monkeys who fill in the blanks.
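A minimal sketch of that division of labour, with made-up names (ReportExporter and CsvReportExporter are purely illustrative, not from any real project): the architect pins down the interface and the types, and whoever fills in the body, human or LLM, can only ship something that matches that signature, or it simply won't compile.

    import java.util.List;

    // Architect's side: the contract is fixed by the interface and its types.
    interface ReportExporter {
        String export(List<String> rows);
    }

    // Fill-in-the-blanks side: the body can be as vibed as you like, but it
    // cannot deviate from the signature the architect wrote and still compile.
    class CsvReportExporter implements ReportExporter {
        @Override
        public String export(List<String> rows) {
            StringBuilder sb = new StringBuilder("value\n");
            for (String row : rows) {
                sb.append(row.replace(",", ";")).append('\n');
            }
            return sb.toString();
        }
    }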
> non-programmers can now build functional software by working with AI rather than writing code directly.
I’m not sure about that; it may be true, but you can also build an e-commerce store without coding knowledge using WordPress or Shopify and you still face a lot of limitations.
Tools to build web apps and web pages without coding knowledge have been around for over two decades. Microsoft FrontPage allowed users to build a website using a WYSIWYG editor.
That's very true. But programmers are extremely expensive. I don't think these tools or vibe coding replace expensive programmers, but they do make it possible to noodle where once it would have taken a potentially very risky commitment for a small business / solopreneur.
One thing that AI made possible for me (amongst many other benefits) is to frequently build auxiliary development tooling that I would otherwise not bother with, with almost no required time investment.
Tools and visualizations for debugging that I might need only for a week, local dev automations, linters, etc.
As long as I can eye-ball correctness, it's a huge time-saver and help.
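For a concrete (entirely hypothetical) example of the kind of week-long throwaway tool meant here - the file name, format, and bucket boundaries are made up for illustration - a single-file helper that turns a dump of request timings into an ASCII histogram you can eyeball:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    // Disposable debugging aid: read one latency value (in ms) per line
    // and print a crude histogram, just enough to see where the time goes.
    public class LatencyHistogram {
        public static void main(String[] args) throws Exception {
            List<String> lines = Files.readAllLines(Path.of(args.length > 0 ? args[0] : "timings.txt"));
            int[] limits = {10, 50, 100, 500, 1000};
            String[] labels = {"<10ms", "<50ms", "<100ms", "<500ms", "<1s", ">=1s"};
            int[] buckets = new int[labels.length];
            for (String line : lines) {
                if (line.isBlank()) continue;
                double ms = Double.parseDouble(line.trim());
                int b = 0;
                while (b < limits.length && ms >= limits[b]) b++;
                buckets[b]++;
            }
            for (int i = 0; i < buckets.length; i++) {
                System.out.printf("%7s | %s (%d)%n", labels[i], "#".repeat(buckets[i]), buckets[i]);
            }
        }
    }

Whether the generated version looks exactly like this doesn't matter much; as long as the output can be eyeballed for correctness, it does its job.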
Some commenters are likening this to no-code/low-code. However, there is an important difference: When you prompt an LLM to generate some code, then for any further modification, the generated code becomes part of the software’s specification instead of the original prompt. In other words, as soon as you have some history of “coding” the software, you can’t experiment with modifying previous prompts to see how it changes the software. (You can in theory replay the whole history, but then subsequent existing prompts won’t make sense anymore in the context of the changes made to preceding prompts.)
This is unlike no-code/low-code, where the “coder” still has full control and visibility over the specification and can tinker with it. With LLM-generated code, an increasing part of the “specification” is constituted by the code that has been generated so far, into which the “coder” has no insight. The “coder” can only try to run it to see how it behaves, but cannot reason about it like a no-code/low-code coder can about their specification.
It may not be auditable in the same way as a block-based workflow, but there's nothing stopping you from saying "hey, this code doesn't do what I want, please improve it" to the LLM.
The whole point of the article is that reasoning about the code isn't always required anymore; reasoning about the output is enough.
My point is that it is not the same, in a similar sense in which testing (as a software engineering discipline) is not a substitute for proving the correctness of code. Another way in which it is different is that the “coder” cannot test hypotheses of the kind “when I prompt x, then y will happen” other than as a one-off, because that conversion is lossy, whereas in no-code/low-code “when I specify x, then y will happen” is deterministic. In other words, with no-code/low-code, the “coder” can with time learn to predict precisely and reliably what their change will do, which is not really the case with LLM-based coding, where you have to go by the “vibe”.
I see many gatekeepers in the comments. I've made hundreds of small apps and scripts for myself with vibe coding; I don't want them maintainable, I don't want them safety critical, I don't want to launch a rocket with them.
I just want a free script or addon that will do its job instead of paying someone on Upwork to do it for me. Simple as.
Additionally it's only up from here. The agents will be able to do maintainable code and safety critical code at some point in the future. It's not all gonna happen overnight.
I don’t see anyone “gatekeeping” but I do see warnings. I don’t know what you’re good at, to give a good example. But if you have a hobby or job, imagine the new guy coming in, knowing absolutely nothing and doing shit they technically can do but shouldn’t. That’s what this is. Just like a new cashier can technically just not give correct change, everyone knows this will end badly eventually, even though everything looks fine from the outside.
That’s what AI code tends to do. It looks fine to the outsiders, it might even function correctly MOST of the time. Programmers are trained, mostly through experience but also classically, to think through every exceptional case and figure out how to handle it (or return an error so a human can handle it).
That isn’t to say this is ALWAYS the case with AI code, but rather it TENDS to be the case. YMMV, which is where these warnings are coming from.
> Programmers are trained, mostly through experience but also classically, to think through every exceptional case and figure out how to handle it (or return an error so a human can handle it).
Unfortunately, vibe coding is seen as acceptable to many specifically because what you say here hasn't been the norm for like 15 years now.
The rise of bootcamps, dependency-driven development, and "move fast and break things" culture convinced a whole generation of programmers that all that matters is the happy path, and even then, only within the context of today's specific task. The ocean of garbage they produced during that time is both one of the reasons why LLMs can produce code in the first place and one of the reasons why that code is so consistently and irrecoverably poor by traditional standards. Aspiring vibe coders see their peers earning absurd six-figure salaries to produce the exact same sort of unstable, short-sighted noise that they see their LLM literally reproduce for pennies now.
In many ways, we should be gatekeeping by asserting a higher standard of foresight and quality for redistributed code, but we completely lost that battle many, many years ago already.
> It looks fine to the outsiders, it might even function correctly MOST of the time.
Most code (even written by good programmers) rarely functions correctly most of the time. Most code is broken. This is not the problem with AI. Unless I am using the tools wrong, LLMs can generate fully functioning scripts (and some of them are good), but they break once the context grows past ~50k tokens and start doing insane things that not even juniors would do (like randomly removing code).
If you want to see a shit-show, go to the Bolt Discord channel. Some users are able to get a very simple and rough, kinda-single-script app running. Everything else breaks once they start making simple amendments. This is not fixed by Claude 3.7 or o1 Pro or whatever. This is a fundamental issue in all of the LLMs and a local maximum of the current tech.
Not that the current tech is not amazing. It is, and there is a lot of value to be extracted from it. But everyone and their investor thinks they are about to reach nirvana and wants to replace everything with "AI", where "AI" is a 100k-context LLM.
I feel comfortable calling a lot of the comments gatekeeping "adjacent".
When a post is a "warning" about users' credit cards etc. and that was in none of the "vibe coding" examples, that feels like someone deflecting (but with a gatekeeping mindset).
As others have said here, I too have knocked out little apps and sites vibin'. At first (1) it was to see if what everyone was saying about LLMs was true. Then (2) I wanted to see if LLMs could help in languages I was not familiar with. Then, even for languages I knew well enough, I (3) wanted to see an LLM's code to get the modern method of modularization for the language (JavaScript is one that has gone through phases). Finally I came to trust the machine and (4) have vibed code just for my own small projects.
What you said applies to programmers who are actually good.
Consider that the world is full of programmers who are objectively shit (I live in India, and have personal experience). The only difference is that they’ll be copy pasting from ChatGPT instead of StackOverflow, and honestly I’d rather have ChatGPT.
So this is about the ego of professionals? As the other person explained, there are many use cases for disposable software, so it's a good thing that there's a new tech enabling non-programmer to create these on their own.
> if you have a hobby or job, imagine the new guy coming in, knowing absolutely nothing and doing shit they technically can do, but shouldn’t be done
Isn't that the definition of gatekeeping? And even so, who cares? The new guy coming in, creating/doing something in a way that "shouldn't be done", will eventually fail and notice the shortcomings of what they did. Then they will either drop their project or learn the way "it should be done" (like we all did, except now with AI).
To be fair, lots of "actual" programmers who don't know good from bad have been shipping insecure code to prod for decades.
AI is just another vector for this, not something entirely new.
When you have your amazing idea, instead of hiring an inexpensive low-skill developer (whose work you are also incapable of evaluating) to build and ship your idea in a low quality way, you're just paying AI to do it.
It's just putting the money into different (centralized) pockets.
Anyone can play the violin. Anyone can run a marathon. Anyone can …
People who spent their lifetime never quite able to sit down and write programs, for whatever reasons (time, focus, foundational knowledge, available mentors), have in the last year shipped working apps/scripts, by just saying in plain English what they wanted. That's exciting.
There’s some gatekeeping, but many of us are also worried about the inevitable future where we have to take over someone else’s “vibe coding” mess of code.
The vibe coding style translates to trying a lot of different prompts and small adjustments until it looks like it works. In the past these people copied from StackOverflow and poked at lines until it compiled and appeared to work, but that only gets you so far. Now those same people can bang away at an LLM assistant all day long and produce volumes of code that appear to kind of work.
I’m in another forum dedicated to programming careers. Every day there’s a new thread from someone asking how to deal with all of their junior employees spamming code review with obvious LLM generated code that they don’t even understand.
A lot of the defenses of vibe coding rely on the assumption that it’s in the hands of someone knowledgeable who only wants to save a little time for something inconsequential. That’s fine. What’s worrying is that vibe coding is being used as a replacement for understanding code for many juniors and lazy seniors across the industry as long as they think they can get away with it.
> ... many of us are also worried about the inevitable future where we have to take over someone else’s “vibe coding” mess of code.
Another potential alternative is that these things progress quickly and soon can code and review code at or above the level of most SWEs. I suspect that's driving a non-zero amount of anxiety in the comments.
I'm not an SWE, and frankly, they're above my level today. Saying "plz don't code if you can't understand it" applies to my code today without AI assistance. Are you suggesting I shouldn't release anything because others might need to read it? The way to prevent this within an organization is smart hiring practices, not restrictions on tool use.
> The agents will be able to do maintainable code and safety critical code at some point in the future. It's not all gonna happen overnight.
Don't hold your breath. If the AI was good enough to do all that, "maintainability" would look very different from what it does today. What does it mean to be maintainable when an AI can completely rewrite the software in every iteration? If AI ever gets good enough to do all this, for safety-critical applications no less, probably 95% of the white collar jobs that exist today will be gone. There may also be robots to do 90% of all other work too.
I think the issue is Hacker News users are so deep in the space that they're surrounded by AI noise at work every day and might even be forced to use it every day. So they've essentially accepted that it's a useful invention. There's this foundational, overwhelmingly positive assumption that goes unsaid, while 100% of their comment text is pure criticism and negativity. But in their brain, it comes across as maybe 1% criticism and negativity and 99% positive lived experience. So they don't realize how bizarrely negative they all sound about a good thing. Of course, there are some people who really do completely hate AI coding, but I don't think the general sentiment one gets from reading Hacker News comments literally is accurate. It has to be read in context. At my work, people prefix their critical comments with "nit -" to reassure the reader that it's overall good.
The article itself doesn't have comments. On HN, there are two posts that are against vibe coding, which could be seen as gatekeeping. What are you referring to when you say "I see many gatekeepers in the comments"?
I'm not sure it's explicitly gatekeeping, more a lack of perspective.
The benefit and excitement is most felt by people with little to no experience writing code themselves. The fear seems to come from building... what, code for critical infrastructure? That's just not what we're talking about here.
When I started programming, I didn’t have an AI to help me.
I wrote a lot of spaghetti and I confused myself a lot. And it was a lot of fun.
I think the doomsayers ITT are wrong. I think you’ve forgotten what it was like to go from “how do you even make a program” to “I put something on the screen and it’s amazing that I did that”.
I think AI will help a lot of people get over the bump from not even comprehending how software works, to putting something on their screen and evolving their skills from there.
Who cares if they make some spaghetti along the way. That’s necessary for learning. AI or not.
You call learning, making mistakes and fixing them, and improving "a bump"? That's the whole point.
> That’s necessary for learning
You haven't learned anything in the end. I read a lot of programming books in the past thinking I would be a computer god at the end, and I realized I learned nothing because "I did nothing" exactly like what we have with ChatGPT.
What skills? If you are just asking a computer for what you want you're not developing any skills, apart from maybe how to describe your requirements better†.
If you take the code the LLM outputs and use that as a basis to be able to write your own code I would call that "learning to program" and I applaud it whether you learn from adapting LLM code or by reading K&R cover-to-cover before you even touch a keyboard. But that's not what this article describes—what this article describes is the very deliberate act of not learning anything.
†Technically just describing your requirements in a way the particular LLM you're using responds best to, which is not necessarily "better" in an objective sense.
Everyone knows that copy-pasting code without understanding it is exactly how you don't learn. Every programming class you're in has an underperforming student who does exactly this. Then they barely graduate, get interviewed, and they're somehow worse than the bootcamp grads.
Cursor is that, but prompted by you instead of a Stack Overflow question.
It's nice that non-technical people can also create something new, but it's more or less the same as DIY approaches vs. hiring professionals to do something.
As for the software: sure, it works, but do you ever deal with maintenance, security patches and so on?
That's a very valid concern. What kind of maintenance, though? Most of the examples in this article are very simple. Maintenance might involve iterating on features; there are no real live services that would require ongoing work.
Security patches though, yeah, that's tougher. My position is that if security is a concern, you need to hire someone. As much as many of these tools can integrate with databases and set up auth, I'm not sure how much I trust it personally. Especially if the actual code is hidden.
> you can also build an e-commerce store without coding knowledge using WordPress or Shopify and you still face a lot of limitations
We're talking about building useful tools. Small things, that work in a way that makes sense to the individual.
Something that could be churned out in a day or two by a competent dev, but is completely out of reach for Joe public.
That's where we're at.
But to build something which could handle a customer’s credit card, password, or other PII and charge them for it, you better know what you’re doing.
It’s all fun and games until you’re the cause of someone’s identity or password getting stolen.
Anyone can use CAD software, but if you’re designing a public space, you better know something about safety.
> ... many of us are also worried about the inevitable future where we have to take over someone else’s “vibe coding” mess of code.
worried? we're going to make a goddamned fortune
> I don't want them maintainable, I don't want them safety critical
Then hopefully they run on a computer without network access, otherwise everything is safety critical.
> I just want a free script or addon that will do its job instead of paying someone on Upwork to do it for me.
So, you pay the billionaires to rent their graphics cards so you can avoid paying a normal person? -_-