That said, this article is very obviously not rhetoric. It seems almost dumb to argue the point. Maybe we should ask an AI whether it is or not. I mean, I don’t know the author, nor do I have anything to gain from debating this, but you can’t just go calling everything “rhetoric” when it’s clearly not. Yes, there’s plenty of negative rhetoric about LLMs out there, but that doesn’t make everything critical of LLMs negative rhetoric. I’m very much pro-AI, btw.
Anyways, it doesn't matter that much :) we could both be right.
Those of us who consider software development to be “typing until you more or less get the outcome you want” love LLMs. Non-deterministic vibes all around.
This is also why executives love LLMs: executives speak words and little people do what was asked of them (generally, sometimes wrong, but later corrected). An LLM likewise takes instructions and does what was asked (generally, sometimes wrong, later corrected), but much faster than unreliable human plebs who get sick all the time and demand vacations and time to mourn the deaths of other plebs.
If you choose to accept bad code, that's on you. But I am not seeing that in practice, especially if you learn how to give quality prompts with proper rules (see the sketch below for what I mean by rules). You have to get good at prompts - there is no escaping that. Now, programmers do sometimes suck at communicating, and that might be an issue. But in my experience, an LLM can write far higher-quality code than most programmers if used correctly.
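To make "proper rules" concrete: Cursor can pick up project-level instructions from a rules file (e.g. a .cursorrules file). This is just a hypothetical sketch of the kind of rules I mean, not my actual setup:

```
# Hypothetical project rules (illustrative only, not a real config)
- Prefer small, pure functions; keep new functions under ~40 lines.
- Add type annotations and docstrings to all new code.
- Never introduce a new dependency without asking first.
- Match the existing error-handling style; do not swallow exceptions.
- Write or update a unit test for every behavior change.
```

The point is less the exact rules than that the model stops re-litigating your conventions on every prompt.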
I can give you a concrete example, since things can sometimes get so philosophical. The other day I needed an LIS (longest increasing subsequence) implementation with some very specific constraints. It would've honestly taken me a few hours to get right, as it's been a while since I coded that kind of thing. I was able to generate the solution with o3 in around 10 minutes, with some back and forth. It wasn't one-shot, but took 2-3 iteration cycles. I got highly performant code that worked for a very specific constraint. It used Fenwick trees (https://en.wikipedia.org/wiki/Fenwick_tree), which I honestly hadn't programmed myself before. It felt like a science-fiction moment to me, as the code certainly wasn't trivial. In fact, I'm pretty sure most senior programmers would fail at this task, let alone be fast at it.
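For anyone curious what that technique looks like in its plain form (this is not the code o3 produced, and my constraints were more specific than the vanilla problem), here's a minimal O(n log n) LIS sketch in Python using a Fenwick tree that tracks prefix maxima:

```python
def longest_increasing_subsequence(nums):
    """Length of the longest strictly increasing subsequence, O(n log n)."""
    if not nums:
        return 0

    # Coordinate-compress values into ranks 1..m so they can index the tree.
    sorted_unique = sorted(set(nums))
    rank = {v: i + 1 for i, v in enumerate(sorted_unique)}
    m = len(sorted_unique)

    tree = [0] * (m + 1)  # Fenwick tree over maxima, 1-indexed

    def update(i, length):
        # Record an LIS of `length` ending at value-rank i.
        while i <= m:
            tree[i] = max(tree[i], length)
            i += i & (-i)

    def query(i):
        # Best LIS length among subsequences ending at any rank in 1..i.
        best = 0
        while i > 0:
            best = max(best, tree[i])
            i -= i & (-i)
        return best

    best = 0
    for v in nums:
        r = rank[v]
        # Strict increase: only extend subsequences ending strictly below v.
        length = query(r - 1) + 1
        update(r, length)
        best = max(best, length)
    return best


print(longest_increasing_subsequence([10, 9, 2, 5, 3, 7, 101, 18]))  # -> 4
```

The Fenwick tree replaces the usual "scan all smaller elements" step with a logarithmic prefix-max lookup, which is what makes the tight time budget possible.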
As a professional programmer, I hit 20 examples every day where using a quality LLM saves me significant time, sometimes hours per task. I still do manual surgery a bunch of times every day, but I see no need to write most functions anymore or do multi-file refactors myself. In a few weeks, you get very good at applying Cursor and all its various features intelligently; it's like an amazing pair programmer with different strengths than you. I'll go so far as to say I wouldn't hire an engineer who isn't very adept at utilizing the latest LLMs. The difference is just so stark - it really is like science fiction.
Cursor is popular for a reason. A lot of excellent programmers still get incredible value out of it; it isn't just for vibe coding. Implying that Cursor can be a net negative for programmers based on a single example is fear mongering.
I think part of that comes from the difficulty of working with probabilistic tools that need plenty of prompting to get things right, especially for more complex tasks. To me, that's a training issue for programmers, not a fundamental flaw in the approach. These tools have different strengths than we do, and it can take a few weeks of working closely with them before it starts feeling natural. I personally can't imagine going back to the pre-LLM era of coding, for me or my team.