If you're offended that the author isn't claiming that AI is the biggest productivity gain of the last N years, consider that perhaps you could do these 5 things and still use your AI tools.
The article is about the effect of these tools on an organization. If your org isn't doing these 5 things and thinks "adding AI" will finally make them more productive than ever... they might see modest gains, but the article argues the gains won't be as big as if they'd adopted these practices.
It's hard to measure which has more impact: changing your management style and organization structure or using AI.
I'm willing to bet they both have some impact. From experience I believe the former has a bigger impact. But I'm not sure it's true industry-wide.
I've done some of these recently:
- Smaller teams are better value/$ spent [Confirmed]
- More frequent releases accelerate learning what has real value [No improvement]
- Limiting work in progress, solving one problem at a time, increases delivery throughput [Continued]
- Cross-functional teams experience fewer bottlenecks and blockers than specialised teams [Confirmed]
- Empowered, self-organising teams spend less time waiting for decisions and more time getting sh*t done [Confirmed]
Additionally, smaller teams (1-3 engineers per project) that are empowered are much happier. A side effect was that time spent on process, tickets, and communication dropped dramatically, while time spent on creating and confirming increased.
In a large organization solving your own blockers can be the difference between releasing next week and releasing next quarter. More frequent releases only help in a business where users adopt new features quickly.
As someone in a work environment that does NOT have these 5 boring things in place, I can certainly speak to how much that slows productivity. So I think there's some value there, but it doesn't go into much detail, nor give you a game plan for implementing those things in your workplace. (And in many cases, you aren't empowered to implement them, so you either attempt to manage up or you find a new job.)
Empty is not the same as succinct & straight to the point. I'd say strong title, succinct content. Each of the points invites as much thought as you can give it.
Any time critics single out “autocomplete” in AI coding tooling, I know they haven’t really played around with this stuff. Autocomplete is barely useful with or without AI. The real game changers are “chat-with-codebase” or agentic development tools (do these have a better name?)
When someone throws out a whole article because of a word in a meme image the author used to add some levity to the discussion, I can tell they either didn't read the article, or they just want to discredit the author without addressing the content of the article.
The "article" (the article scarequotes "A.I.", so fair game) is a data-free, extremely low-effort tosser blog post that could have been a tweet. It's pandering to the incredibly boring "ha ha AI is bad! Look, distraction!" head-in-the-sand approach that is far too common.
Maybe. A bigger impact factor imo is the distance between the tech you're using and the center of mass of the training data. Showcase site in NextJS with a CSS gradient hero banner and a newsletter? AI will be amazing. But then again, so will many no-code solutions. The sweet spot is probably custom enough that you're just outside of Wordpress but within well-trained domains like the web and its popular frameworks. If you venture outside the mainstream, quality degrades rapidly.
I use Cursor as an agent, and sometimes I use autocomplete. Which tool fits depends on what you're doing at the moment. I like autocomplete when I'm focused in tight on one file: I spell things out in depth in the file, start the code myself, and hand-pick completions. It keeps my mind sharp on what I'm doing. Agent mode is for big but simple stuff where I'm crafting less detail: refactoring, setting up tests, and basic shallow framework code.
But this article is on point. All of the things listed are more impactful than LLMs.
I've been using vim forever, and often use "advanced" editing techniques: macros, .-repeat, :g/.../norm ..., occasional templates.
For certain projects I'd used `vscode` (with the vim plugin!), and there's definitely some helpful bits. The biggest helper for me is/was the `F2-rename-symbol` capability. Being able to "rename" securely in the whole file (or function), and across the project is super-useful.
Working with Cursor and the autocomplete is (often) pretty shockingly good. eg: when I go to rename `someVar` to `someOtherVar`, it'll prompt to `<tab>` and:
* rename the function call
* edit the log lines
* rename the return object value
* ...etc...
In vim, I'd `*` to automatically search for `someVar`, then `cwsomeOtherVar`, (change-word), then `n.n.n.` (next, repeat, etc.)
...so my overhead (by keystrokes) is `*` (search), `cw` (change word), (`n.`) next-and-change. Five "vim" characters, and I mentally get to (or have to) review each change place.
In straight `vscode`, I can do `F2-rename` and that'll get me replace _some_ of the variables (then I still have to rename the log lines, etc).
With Cursor, I make the `cw...` and it's 90%+ accurate in "doing what I probably also want to do" with the single `<tab>` character.
It gets even more intriguing where you'll say `s/foo/fooSorted/` and it automatically inserts the `.sort()` call, or changes it to call `this.getFooSorted()` or `this.getSorted( foo )` or whatever.
For "cromulent" code, cursor autocomplete is "faster than vim". For people that can't type that good, or even that can't program that good, it's a freaking god-send. Adding in the `Agent...` capabilities (again, for "cromulent" code)... if you're just guiding it along: "Now, add more tests" => "Now 50% more cowbell!" => "Whoops, that section would be more efficient if you cached stuff outside the loop."
Even then, I have to have some empathy with the AI/Agent coding, "Hey... you messed up that part (btw, I probably would have messed up that part the first time through as well...)". We can't hold them to gold standards that we wouldn't meet either, but treating them as "helpful partners" really reduces the mental burden of typing in EVERY SINGLE CHARACTER by yourself.
they are "game changers" if you are a mediocre software dev thats churning out crud widgets .
Any decent software developer uses and creates abstractions instead of generating reams of code using AI.
From what I've seen at work, AI is a "game changer" for coding in the worst sense: reams and reams of duplicated code that looks slightly different from other generated code doing similar things. Before AI, people used to stop and create a library; now they just generate shit because it's so easy. AI is the death of software engineering.
Any good engineer enjoys creating abstractions too instead of hoping a machine trained on code golf will solve the problems.
My manager probably adds 10+ hours to my week by pushing LLM code at our projects, only to follow up with several merge requests to fix his own work. I just approve whatever he pushes because he isn't interested in actually solving the problem. He's interested in seeing if he can fiddle the solution out of an LLM. Each time it involves me telling him the answer. His boss is the same way. Literally dragging the company's efficiency down and proving the efficiency gains are meaningless.
Sometimes it's not simply a lack of curiosity, but having the space to build these tools into your workflow. As a solo dev responsible for all development, project management, support, & infra, I've gotten as far as trying to use Aider for doing some features in our legacy codebase but not having it break the time-cost-benefit barrier.
Now I feel like there's probably other workflows out there that I'm ignorant to that could be better, but keeping up feels impossible. Is there a particular approach/tool that you're finding to be really beneficial?
The cynical question that I ask is: Will AI tools make it possible to finish projects? Show me a project that goes from being 3 years late to only 2 years late, thanks to AI tools.
Author hand waves at "organizations not wanting to open a can of worms" when I wanted to examine each squirmy helminth.
The point is that these practices imply other practices and propagate their own culture. It's simple and not new but still unreasonably effective.
Whilst I agree, the mainstream is also where the vast majority of software development occurs. CRUD apps and enterprise workloads, etc.
Saying that current LLMs are only useful for the mainstream is saying that they're incredibly useful.
> I'd like to output lines where .stack_trace is non empty with JQ
I vaguely remembered that it sometimes has "null" and sometimes has empty strings
Time gained: ~60s over looking it up in the docs
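(For the record, a jq filter covering both the null and empty-string cases might look like this; the sample log lines are illustrative:)

```shell
# sample.jsonl: one JSON object per line; .stack_trace may be null, "", or set
printf '%s\n' \
  '{"msg":"a","stack_trace":null}' \
  '{"msg":"b","stack_trace":""}' \
  '{"msg":"c","stack_trace":"Error at line 3"}' \
  > sample.jsonl

# keep only the lines whose .stack_trace is neither null nor an empty string
jq -c 'select(.stack_trace != null and .stack_trace != "")' sample.jsonl
# → {"msg":"c","stack_trace":"Error at line 3"}
```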
> Here is a Jira ticket's description: ```....``` Please rephrase this more clearly and make the text flow better?
^ Then I picked and chose the improvements
No time gained but quality improved
> Critique this for accuracy: (A long comment about the properties of randomness)
No time gained but quality improved
> How do I make excel prompt me to select the goddamn delimiter when I open a CSV file instead of just picking a random fucking one that never works
Question was filtered due to content policy, because apparently they don't want you to offend the robot
https://chatgpt.com/share/682ddb36-50f4-8004-b54d-3e41a10ab8...