wolframhempel · 9 months ago
I believe there are two kinds of skill: standalone and foundational.

Over the centuries we’ve lost and gained a lot of standalone skills. Most people throughout history would scoff at my poor horse-riding and sword fighting, or my inability to navigate by the stars.

My logic, reasoning and oratory abilities on the other hand, as well as my understanding of fundamental mechanics and engineering principles would probably hold up quite well (language barrier notwithstanding) back in ancient Greece or in 18th century France.

I believe AI is fine to use for standalone skills in programming. Having it write isolated bits of logic, e.g. a getRandomHexColor() function in JavaScript or a query in an SQL dialect you’re not deeply familiar with, is a great help and time saver.
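
A minimal sketch (my own illustration) of the kind of self-contained helper I mean:

```javascript
// Return a random CSS hex color string like "#3fa2c7".
function getRandomHexColor() {
  // Random integer in [0, 0xFFFFFF], zero-padded to six hex digits.
  const n = Math.floor(Math.random() * 0x1000000);
  return '#' + n.toString(16).padStart(6, '0');
}
```

Nothing about it touches the rest of the codebase, which is exactly what makes it safe to delegate.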

On the other hand, handing over the fundamental architecture of your project to an AI will erode your foundational problem solving and software design abilities.

Fortunately, AI is quite good at the former, but still far from being able to do the latter. So, to me at least, AI-based code editors are helpful without the risk of long-term skill degradation.

globular-toast · 9 months ago
This is a great comment and says what I've been thinking but hadn't put into words yet.

Too many people think what I do is "write code". That is incorrect. What I do is listen, read, watch and think. If code needs writing then it already basically writes itself because at that point I've already done the thinking. The typing part is an inconvenience that I'd happily give up if I could get my thoughts into the computer directly somehow.

AI tools make the easy stuff easier. They don't help much with hard stuff. The most useful thing I've found them for is getting an initial orientation in a completely unfamiliar area. But after that, when I need hard details, it's books, manuals, blogs etc just like before. I find juniors are already lacking in their ability to find and assimilate knowledge and I feel like having AI isn't going to help here.

namaria · 9 months ago
Abstracting away the software paraphernalia makes this clearer, in my view: our job is to understand and specify abstract symbolic systems. Making them work with current computer architectures is incidental.

This is why I don't see LLM assisted coding as revolutionary. At best I think it's a marginal improvement on indexing, search and code completion as they have existed for at least a decade now.

NLP is a poor medium for specifying abstract symbolic systems. And LLMs work by finding patterns in latent space, I think. But the latent space doesn't represent reality; it represents language as recorded in the training data. It's easy to underestimate just how much training data was used for the current state-of-the-art foundational models. And it's easy to overestimate these tools' ability to weave language and, by induction, to attribute reasoning abilities to them.

The intuition I have about these LLM-driven tools is that we're adding degrees of freedom to the levers we use. When you're near an attractor congruent with your goals, it feels like magic. But I think this is overfitting: the things we do now are closely mirrored by the data used to train these models. As we move forward in tooling, domains, technology, culture, etc., the available data will become increasingly obsolete, and relevant data increasingly scarce.

Besides, there's the problem of unknown unknowns: lots of people using these tools assume that the attractor they see pulling on their outcome is adequate, because they can only see some arbitrary surface of it. And since they don't know what geometries lie beneath, they end up creating and exposing systems with unknown issues that might have implications for security, legality, morality, etc. And since there's a time delay between the feeling of accomplishment and the surfacing of issues, and people are likely to keep using the same approach, we might be heading for one hell of a bullwhip effect across dimensions we can't anticipate at all.

Arisaka1 · 9 months ago
>The typing part is an inconvenience that I'd happily give up if I could get my thoughts into the computer directly somehow

I understand what you mean, but for some reason I cannot imagine my younger self, getting into his first programming practice, going "ugh, why do I have to type this? Why can't I just think and let the computer do it for me?" I don't think I would've reached where I am if I had seen the act of practice as a tedium I wished removed.

You probably see it like that because you're not that kid anymore, and for today's "you" code is just a means to provide and nothing more.

flowerthoughts · 9 months ago
I'd classify this as theoretical skills vs tool skills.

Even your engineering principles are probably superior to those of the ancient Greeks, since you can simulate bridges before laying the first stone. "It worked the last time" is still a viable strategy, but the models we have today mean we can often say "it will work the first time we try."

My point being that theory (and thus what is considered foundational) has progressed as well.

politelemon · 9 months ago
> horse-riding, sword fighting or my inability to navigate by the stars.

Some more suitable examples would be warranted here; none of these were as widespread or common as you'd assume, so little to no metaphorical scoffing would happen over them. Sewing and darning, on the other hand, and subsistence skills, while mundane, are uncommon for many of us now.

sshine · 9 months ago
For some strange reason, I’m better at sewing than both my wife and mother-in-law. I learned it in public school, when both genders learned both woodworking and sewing, and maintained an interest so that I could wear “grunge” in the 1990s. The teachers I had could still remember when, during their own careers, those classes were gendered.
codebra · 9 months ago
> still far from being able to do the latter

These models have been in wide use for under three years, and AI IDEs for barely a year. Gemini 2.5 Pro is shockingly good at architecture if you make it into a conversation rather than expecting a one-shot exercise. I share your native skepticism, but the pace of improvement has taken me aback and made me reluctant to stake much on what LLMs can’t do. Give it six months.
sceptic123 · 9 months ago
Taking your SQL example: if you don't properly understand the SQL dialect, how can you know that what the AI gives you is correct?
LiKao · 9 months ago
I'd say because, psychologically (and also based on CS theory), creating something and verifying it draw on similar but distinct skills.

It's like NP: solving an NP-complete problem is, as far as we know, very hard, but verifying that a proposed solution is correct is easy.
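
A toy sketch of that asymmetry, using subset-sum as an example: checking a proposed subset is one linear pass, while finding one by brute force enumerates exponentially many subsets.

```javascript
// Verifying a certificate: one linear pass over the picked elements.
function verifySubsetSum(nums, target, pick) {
  let sum = 0;
  for (let i = 0; i < nums.length; i++) if (pick[i]) sum += nums[i];
  return sum === target;
}

// Finding a certificate by brute force: tries all 2^n subsets.
function findSubsetSum(nums, target) {
  for (let mask = 0; mask < (1 << nums.length); mask++) {
    const pick = nums.map((_, i) => (mask & (1 << i)) !== 0);
    if (verifySubsetSum(nums, target, pick)) return pick;
  }
  return null;
}
```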

You might not know the statements required, but once the AI reminds you of which statements are available, you can check that the logic built from those statements makes sense.

Yes, there is a pitfall of being lazy and forgetting to verify the output. That's where a lot of vibe coding problems come from in my opinion.

satvikpendem · 9 months ago
I do the same now. I don't use Cursor or similar edit-level AI tools anymore; I just use inline text completions and chat to talk through a problem, and then copy-paste anything needed (or rather, type it in manually just to have more control).

I literally felt myself getting AI brain rot, as one Ask HN put it recently: it felt like I was losing brain cells, depending too much on the AI over my own thinking, and feeling my skills atrophy. At the end of the day, I sense there will be a much wider gap in the future between those who truly know how to code and those who, well, don't, due to such over-reliance on AI.

greyman · 9 months ago
I also stopped using Cline, as well as Claude Desktop + MCPs. Gemini, for example, is rushing forward, and Google is surely putting huge resources into developing it. If, in a matter of months, AI will be able to implement an additional feature itself zero-shot, why bother with an IDE?
satvikpendem · 9 months ago
And what will you do when that zero-shot attempt doesn't work, and continues not to work? It will always be necessary to dig in and change things manually, so an editor or IDE will continue to be needed.
mentalgear · 9 months ago
I also do most of my coding artisanal, but use LLMs for semantic search, to enrich the research part.

Definitely never trust an LLM to write entire files for you, at least not if you don't want to spend more time in code review than in writing, or if you expect to maintain it.

Also, a good quote regarding the AI tools market:

> A lot of companies are creating FOMO as a sales tactic to get more customers, to show traction to their investors, to get another round of funding, to generate the next model that will definitely revolutionize everything.

alfiedotwtf · 9 months ago
> I also do most of my coding artisanal

Off-topic, but I just wanted to say I love this as a statement!

specproc · 9 months ago
Nicholas Carr has a nice book on the dynamic the author is describing [0], i.e. that our skills atrophy the more we rely on automation.

Like a lot of others in the thread, I've also turned off Copilot and have been using chat a lot less during coding sessions.

There are two reasons for this decision, actually. Firstly, as noted above, in the original post and throughout this thread, it's making my already fair-to-middling skills worse.

The more important thing is that coding feels less fun. I think there are two reasons for this:

- Firstly, I'm not doing so much of the thinking for myself, and you know what? I really like thinking.

- Secondly, as a corollary to the skill loss, I really enjoy improving. I got back into coding again later in life, and it's been a really fun journey. It's so satisfying feeling an incremental improvement with each project.

Writing code "on my own" again has been a little slower (line by line), but it's been a much more pleasant experience.

[0]: https://www.nicholascarr.com/?page_id=18

lucianonooijen · 8 months ago
Hey! Article author here! I read Carr's book a few years ago, and I think it has influenced my opinions about AI, as well as computers as a whole, quite a bit!
nesk_ · 9 months ago
I've recently disabled code completions; it's too much mental workload to read all those suggestions for so little quality in return.

I still use the chat whenever I need it.

rahkiin · 9 months ago
I only use the line-completion AI that comes with Rider. I think it's a reasonable mix of classic code completion with a bit more smarts to it, like suggesting a string for a Console.Write call. But it does not write whole new lines, as described by the author.
acron0 · 9 months ago
This feels similar to articles with titles such as "Why every developer should learn Assembly" or "Relying on NPM packages considered harmful". I appreciate the core of truth inside the sentiment, and the author isn't _wrong_, but it won't matter over time. AI coding ability will improve, whether it's writing, debugging or planning. It will be good enough to produce 90% of the solution with very little input, and 90% is more than enough to go to market, so it will. And yes, it won't be optimal or totally secure, and the abstractions might be questionable, but... how is that really different from most real software projects anyway?
fieldcny · 9 months ago
Software is the connective tissue of the world. Generating mediocre-quality results (which will be the best outcome if you don't really understand what you are looking at) is not just lazy; it can be dangerous. Do the world's best engineers make mistakes? Of course they do, but that's why building high-quality software is a collaborative process: you have to work with others to build better systems. If you aren't, you are wasting your time.

As of now (and this could change, but that doesn't change the moral and ethical obligations), software engineers are richly rewarded specifically because they should be able to write and understand high-quality code; the code they write is the foundation on which our entire modern world is built.

tauchunfall · 9 months ago
>It will be good enough to produce 90% of the solution with very little input, and 90% is more than enough to go to market, so it will.

What backs up this claim? And when will it reach that point?

We could very well have reached a plateau right now, which would mean that looking at previous trends in improvement does not allow us to predict future improvements, if I understand it correctly.

yapyap · 9 months ago
That is a hellish look toward the future. To be clear, I don't think you're wrong: if companies can squeeze more out of devs by forcing them to use AI, I bet they will (move fast and break stuff and all that), but it's still quite the bummer.
futuraperdita · 9 months ago
I'd argue it's a hell many other people see daily, and we've been privileged to have the space to care about craft. Corporations have never cared about the craft. The business is paying me to make, and the moment they can get my level of quality and expertise from someone much cheaper, or a machine itself, I'm gone. That dystopia has always been present and we just haven't had to stare it down as much as some other industries have.
satvikpendem · 9 months ago
I don't think it's really any different from how most products are made currently. Do you think most startups care about security and other things that would slow down their initial release? All the rest is tech debt that can be dealt with once product-market fit is found.

The only thing I'd worry about is when no one knows how to solve these problems because everyone relies on AI.

ghaff · 9 months ago
I don't have a real opinion of the value at this point, but to the degree that there are significant productivity-enhancing tools available for developers (or many other functions) and people refuse to use them, companies should properly mark those folks down as low performers, with the associated consequences.

"I don't want to use the web."

didip · 9 months ago
Why? Clearly AI tools make life easier.

I could drive a manual car, but why? An automatic transmission is so much more convenient. Furthermore, for some use cases, FSD is even more convenient.

Another example: I don't want to think about the gait movement of my robots; I just want them to move from A to B.

With programming, same thing: I don't want to waste time typing `if err != nil {}`; I want to think about the real problem. Ditto for happy-case unit tests. I don't want to waste my carpal-tunnel-prone wrists on those.

So on and so forth. Technology exists to make life more convenient. So why reject technology?

satvikpendem · 9 months ago
> Why? Clearly AI tools make life easier.

That is not clear, actually.

JohnKemeny · 8 months ago
> Technology exists to make life more convenient. So why reject technology?

Some technology exists to drive sales.

alfiedotwtf · 9 months ago
The fun factor: as an example, I specifically bought a manual car because automatics are boring to drive :)