About two months ago my company got a license for Cursor/Claude AI access.
At first it was really cool getting an understanding of what it can do. It can be really powerful, especially for things like refactoring.
Then I found it getting in the way. First, I had to rebind the auto-insert from Tab to Ctrl+Space, because I would try tabbing code over and blamo: lines inserted, leaving me more work deleting them.
Second, I found that I'd spend more time reading the AI-generated autocomplete that pops up. It would pop up, I'd shift focus to read what it generated, decide if it was what I wanted, then try to remember what the hell I was typing.
So I turned it all off. I still have access to the context-aware chats, but not the autocomplete.
I have found that I'm remembering more and understanding the code more (shocking). I also find that I'm engaging with the code more, making more of an effort to understand it.
Maybe some people have the memory, attention span, or context-switching ability to handle it better than me. Maybe younger people, more used to distractions and attention-stealing content, do.
I remember discussing autocomplete vs. chat with some coworkers a year(?) ago, and we were basically in agreement that autocomplete was the better feature of the two.
Since we've had Claude Code for a few months, I think our opinions have shifted in the opposite direction. I believe my preference for autocomplete was driven by the weaknesses of chat/agent mode plus Claude 3.5 Sonnet at the time, rather than by the strengths of autocomplete itself.
At this point, I write the code myself without any autocomplete. When I want help, Claude Code is open in a terminal to lend a hand. As you mentioned, autocomplete has this weird effect where instead of considering the code, you're sort of subconsciously trying to figure out what the LLM is trying to tell you with its suggestions, which is usually a waste of time.
LSPs giving us high-quality autocomplete for nearly every language has made simple LLM-driven autocomplete less magical. Yes, it has good suggestions some of the time, but it's not really revolutionary.
On the other hand, I love Cursor's autocomplete implementation. It doesn't just provide suggestions for the current cursor location; it also suggests where the cursor should jump next within the file. You change a function name and just press Tab a couple of times to change the name in the docstring and everywhere else. Granted, refactoring tools have done that forever for function names, but now it works for everything. And if you do something repetitive, it picks up on what you're doing and turns it into a couple of quick keypresses.
It's still annoying sometimes.
I think the worst part of the autocomplete is when you actually just want to tab to indent a line and it tries to autocomplete something at the end of the line.
OK, call me a spoiled Go programmer, but I've had an allergy to manually formatting code since getting used to gofmt on save. I highly recommend setting up an autoformatter so you can write nasty, unindented code down the left margin and have it snap into place when you save the file, like I do, and never touch Tab for indentation. Unless you're writing Python, of course, haha.
"AI" autocomplete has become a damn nuisance. It always wants to second-guess what I've already done, often making it worse. I try to hit escape to make it go away, but it just instantly suggests yet another thing I don't want. It's cumbersome. It gets in the way to an annoying extent. It's caused so many problems, I am about to turn it off.
The only time it helps is when I have several similar lines: I make a change to the first line and it offers to change all the rest. It's almost always correct, but sometimes it is subtly not, and then I waste five minutes trying to figure out why something doesn't work, only to notice the subtle bug it introduced. I'm not sure how anyone thinks this is better than just knowing what you're doing and doing it yourself.
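To make that failure mode concrete, a made-up example: you edit the first line, the suggested rewrites of the remaining lines look symmetric at a glance, and one of them quietly breaks the pattern.

    // Hypothetical sketch of the subtle copy-pattern bug described above.
    const rect = { innerWidth: 800, innerHeight: 600 };
    const scale = 2;
    const offset = 10;

    const x0 = rect.innerWidth * scale;
    const y0 = rect.innerHeight * scale;
    const x1 = rect.innerWidth * scale + offset;
    const y1 = rect.innerWidth * scale + offset; // bug: kept the wrong axis,
                                                 // should be rect.innerHeight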
It's like what I felt with adaptive cruise control.
Instead of watching my speed, I was watching traffic flow, watching cars way up ahead.
The syntax part of my brain is turned off, but the "data flow" part is 100% on when reading the code instead.
Wait, really? This is kind of surprising to me. Even without adaptive cruise control, I generally spend very few brain cycles paying attention to speed. My speed just varies based on conditions and the traffic flow around me, and I'm virtually never concerned with the number on the dial itself.
As a result I've never found adaptive cruise control (or self-driving) to be all that big a deal for me. But hearing your perspective suddenly makes me realize why it is so compelling for so many others.
Autocomplete is a totally different thing from what this article is talking about. The article refers to the loop of prompt refinement, which by definition means agent-mode-style integrations. Autocomplete involves no prompting.
I agree autocomplete kinda gets in the way, but don't confuse that with all AI coding being bad; they're two totally distinct functions.
What you can do is create a hotkey to toggle autocomplete on and off.
Yeah, I also have the autocomplete disabled. To me it's most useful when I'm working in an area I know, but not the details. For example, I know cryptography, but I don't know the cryptography APIs in Node.js, so Claude is very helpful when writing code for that.
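As a concrete illustration, here's a minimal sketch using Node's built-in node:crypto module (worth double-checking against the Node docs); this is exactly the kind of API glue an assistant is good at recalling:

    // AES-256-GCM round trip with node:crypto.
    import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

    const key = randomBytes(32); // 256-bit key
    const iv = randomBytes(12);  // 96-bit nonce, the usual choice for GCM

    const cipher = createCipheriv("aes-256-gcm", key, iv);
    const ciphertext = Buffer.concat([cipher.update("hello", "utf8"), cipher.final()]);
    const tag = cipher.getAuthTag(); // must be stored alongside the ciphertext

    const decipher = createDecipheriv("aes-256-gcm", key, iv);
    decipher.setAuthTag(tag);
    const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]);
    console.log(plaintext.toString("utf8")); // "hello"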
I kinda agree with the author: as a person with more than enough coding experience, I don't get much value (and certainly not much enjoyment) from using AI to write code for me. However, it's invaluable when you're operating in even a slightly unfamiliar environment. By providing (usually incorrect or incomplete) examples of code that could solve the problem, it helps me overcome the main "energy barrier": navigating, say, the vast standard library of a new programming language, or finding idiomatic examples of how to do things. I usually know _what_ I want to do, but not the exact syntax to express it in a given framework or language.
Yeah, I don't leverage LLMs much, but I have used them to look up APIs for writing VS Code extensions. The code wasn't usable as-is, but it gave me an example that I could turn into working code, without looking up all of the individual API calls.
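Roughly the shape of what comes back for a minimal extension entry point (the vscode API calls are real; the command ID and message are placeholders):

    import * as vscode from "vscode";

    export function activate(context: vscode.ExtensionContext) {
      // Register a command; the ID must match a "contributes.commands"
      // entry in package.json.
      const disposable = vscode.commands.registerCommand("demo.hello", () => {
        vscode.window.showInformationMessage("Hello from the extension!");
      });
      context.subscriptions.push(disposable);
    }

    export function deactivate() {}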
I've also used it in the past to look up the Windows API, since I haven't coded for Windows in decades (for the equivalent of pipe, fork, exec). The generated code had a resource leak, which I recognized, but it was enough to get me going. I suspect Stack Overflow also had the answer to that one.
And for fun, I've had Copilot generate a monad implementation for a parser type in my own made-up language (similar to Idris/Agda), and it got fairly close.
It's like car navigation or Google Maps: annoying and not much use in your hometown, very helpful when traveling or in unfamiliar territory.
There's a product called Context7 which, among other things, provides succinct examples of how to use an API in practice (example of what it does: https://context7.com/tailwindlabs/tailwindcss.com).
It's meant to be consumed by LLMs, to prepare them to provide better examples, for instance for a newer version of a library than the one in the model's training data.
I've often thought that rather than an MCP server my LLM agent can query, maybe I just want to query this high-signal-to-noise resource myself instead of trawling the documentation.
What additional value does an LLM provide when a good documentation resource exists?
The approach of treating LLMs like a junior engineer who is uninterested in learning seems to be the best advice, and it correctly leverages the existing intuitions of experienced engineers.
Spend more time on interfaces and test suites, and let the AI toil away making the implementation work according to your spec; a sketch of this follows below. Not implementing the interface is a wrong answer; not passing the tests is a wrong answer.
If you've worked in software long enough, you will have encountered people who are uninterested in learning or uncoachable for whatever reason. That describes all of the LLMs too. If the LLM doesn't get it, don't waste your time; it will probably never get it. You need to try a different model or get another human involved, same as you would for an incompetent and uncoachable human.
As an aside: my advice to junior engineers is to show off your wetware, demonstrate learning and adaptation at runtime. The models can't do that yet.
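A minimal sketch of that spec-first workflow, with all names hypothetical: the human writes the interface and the executable contract, and the only acceptable output from the model is an implementation that satisfies both.

    // Human-written spec: the interface...
    interface RateLimiter {
      /** Returns true if the call is allowed, false if throttled. */
      tryAcquire(): boolean;
    }

    // ...and the contract, as executable checks (swap in your test runner).
    function testTokenBucket(make: (capacity: number) => RateLimiter): void {
      const limiter = make(2);
      console.assert(limiter.tryAcquire() === true, "first call allowed");
      console.assert(limiter.tryAcquire() === true, "second call allowed");
      console.assert(limiter.tryAcquire() === false, "third call throttled");
    }

    // AI-written part: any implementation that passes is acceptable.
    function createTokenBucket(capacity: number): RateLimiter {
      let tokens = capacity;
      return {
        tryAcquire() {
          if (tokens <= 0) return false;
          tokens -= 1;
          return true;
        },
      };
    }

    testTokenBucket(createTokenBucket);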
What's really funny is, if you copy its output, start a new prompt, ask "From the perspective of a Senior/Staff-level engineer, what is wrong with this code?", and paste the code you got from the LLM, it will trash its own code with a fresh mind. Technically you can do it in the existing prompt, but sometimes LLMs get a bug up their butts about what they've decided is reality.
When switching context in any way, I start a new prompt. Taking a step back and reviewing all my changes gives a different perspective, and I often find things I didn't see when I was in the weeds.
"From the perspective of a Senior/Staff-level engineer, what is good about this code?"
Does it praise it?
IMO no one is taking even the first bit of software development advice with LLMs.
Today my teammate laughed off generating UI components to quickly close a ticket, knowing full well that no one will review it now that it's LLM-generated, and that it will probably make our application slower because the unreviewed code gets merged. The consensus is that anything they make worse can be pushed off onto me to fix, because I'm the expert on our small team. I have been extremely vocal about this. But it's more important to push stuff through for release and make my life miserable than to make sure the code is right.
Today I refused to fix any more problems for this team, and I might quit tomorrow. This person tells me weekly that they always want to spend more time writing and learning good code, and then always gets upset when I block a PR merge.
Today I realized I might hate my current job. I think all LLMs have done is enable my team to collect a paycheck and embrace disinterest.
The job market is really bad right now; it has never been worse. Two years ago, it was almost impossible to find an expert for a specialized domain like computer vision or RTOS. Now it's impossible not to receive applications from multiple experts for a single role that isn't even specialized and is at best "just" a senior role (and that's only counting the experts; regular senior and junior software developers and architects aren't even included).
I am in the minority who agrees with you that the code should be right.
Don't quit; get fired instead (strictly without cause). That way you can at least collect some severance and unemployment, and you'll absolve yourself of any regrets about having quit. Actually, just keep doing what you're doing, and you will get fired soon enough.
The other thing you can try is to ask for everyone to have their own project that they own, with the assigned owner fully responsible for it, so you can stop reviewing other people's work.
If you're not in step with where you're at, and you can find other employment where you'll be happier, why not change?
You could apply the same logic to a relationship with a significant other: "Don't break up with them... get them to break up with you! You will absolve yourself of any regrets about dumping them." Yes, and you will have wasted both your time and theirs.
And the same goes for working at a company that you feel isn't good for you.
Sorry to hear about your situation, but that doesn't really sound like the LLM's fault (it's a tool, in the end); it's more that poor ways of working are the norm at the company you work for. Not much would change if you replaced "LLM" with "consultancy" in your post.
And it's hard to really connect the dots between "generated by LLM" and "slow": code performance doesn't really depend on whether it was generated or typed out.
I use AI as a pairing buddy who can look up APIs and algorithms very quickly, or as a very smart text editor that understands refactoring, DRY, etc., but I still decide the architecture and write the tests. Works well for me.
Apparently what the article argues against is using it like a software factory: give it a prompt of what you want, and when it gets it wrong, iterate on the prompt.
I understand why this can be a waste of time: if programming is a specification problem [1], just shifting from a programming language to natural language doesn't solve it.
[1] https://pages.cs.wisc.edu/~remzi/Naur.pdf
Yes, but…
The AI has way more context on our industry than the raw programming language does. I can say things like "add a Stripe webhook processor for the purchase event" and it's going to know which library to import, how to structure the API calls, the shape of the event, the database tables that people usually back Stripe stuff with, the idempotency concerns of the API, etc.
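For instance, a sketch of what such a prompt tends to produce, assuming the official stripe npm package and Express (the route, event type, and dedup store here are illustrative, not a canonical implementation):

    import express from "express";
    import Stripe from "stripe";

    const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
    const app = express();

    // Illustrative in-memory dedup store; in practice back this with a DB table.
    const seen = new Set<string>();

    // Stripe signature verification needs the raw request body, not parsed JSON.
    app.post("/webhooks/stripe", express.raw({ type: "application/json" }), (req, res) => {
      let event: Stripe.Event;
      try {
        event = stripe.webhooks.constructEvent(
          req.body,
          req.headers["stripe-signature"] as string,
          process.env.STRIPE_WEBHOOK_SECRET!,
        );
      } catch {
        return res.status(400).send("invalid signature");
      }

      // Idempotency: Stripe may deliver the same event more than once.
      if (seen.has(event.id)) return res.sendStatus(200);

      if (event.type === "checkout.session.completed") {
        // Record the purchase, grant access, fire follow-up jobs, etc.
      }

      seen.add(event.id);
      return res.sendStatus(200);
    });

    app.listen(3000);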
So yes, you have to specify things, but there's a lot more implicit understanding and knowledge relevant to the task that can be retrieved than a plain programming language would provide.
Can you show it to us?
Unless you're solving the same old problem for the Nth time for a new customer, you don't really understand the problem fully until you write the code.
If it's a new problem, you need to write the code so that you discover all the peculiar corner cases and understand them.
If it's the (N+M)th time, and you've been using AI to write the code for the last M times, you may find you no longer understand the problem.
Fair warning. Write the damn code.
> Ask AI for an initial version and then refactor it to match your expectations.
> Write the initial version yourself and ask AI to review and improve it.
> Write the critical parts and ask AI to do the rest.
> Write an outline of the code and ask AI to fill the missing parts.
So well put. I'm writing these on a Post-it note and sticking it above my monitor. I held off on using agents to generate code for a long time and finally was forced to really make use of them, and this is so in line with my experience.
My biggest surprise has been how little the model seems to matter (?) when I'm making the prompts appropriately narrow. I've also been surprised at how hard it is to pair program in something like Cursor. If your prompting is even slightly off, it can go from 10x-ing a build process to making it a complete waste of time, with nothing to show but spaghetti code at the end.
Anyway, long live the revolution. Glad this was so technically on point and not just a no-AI rant (love those too, tho).