onion2k · a month ago
Learning what though? When I write software I learn the domain, the problem space, the architecture, the requirements, etc., as well as how to turn those things into code. I don't actually care about the code though - as soon as something changes I'll throw the code out or change it. It's an artefact of the process of solving a problem, which isn't the important bit. The abstract solution is what I care about.

LLMs only really help to automate the production of the least important bit. That's fine.

Calavar · a month ago
> Learning what though? When I wrote software I learn the domain, the problem space, the architecture, the requirements, etc

You don't learn these things by writing code? This is genuinely interesting to me because it seems that different groups of people have dramatically different ways of approaching software development.

For me, the act of writing code reveals places where the requirements were underspecified or the architecture runs into a serious snag. I can understand a problem space at a high level based on problem statements and UML diagrams, but I can only truly grok it by writing code.


yannyu · a month ago
You're right, but also coding 10 years ago, 20 years ago, and 30 years ago looked very different to coding today in most cases. In every decade, we've abstracted away things that were critical and manual before. Is having LLMs write the code that much different from pulling in libraries rather than rolling your own? Or automating memory management instead of manually holding and releasing? Or using if/else/for instead of building your own logic for jumping to a subroutine?
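That last shift is easy to see side by side. Here's a minimal C sketch (the function names are mine, purely for illustration) of the same loop written with explicit jumps, as it looked before structured control flow, and with a for loop that abstracts the jumps away:

```c
/* The same summation written twice: once with explicit jumps
   (hand-rolled control flow), once with a for loop that leaves
   the jump logic to the compiler. */
int sum_with_goto(int n) {
    int total = 0, i = 0;
loop:
    if (i >= n) goto done;
    total += i;
    i++;
    goto loop;
done:
    return total;
}

int sum_with_for(int n) {
    int total = 0;
    for (int i = 0; i < n; i++)  /* the compiler emits the jumps for us */
        total += i;
    return total;
}
```

Both functions return 45 for n = 10; the for version simply lets the compiler generate the jump logic the goto version spells out by hand, which is exactly the kind of abstraction being described.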
orev · a month ago
Writing the code is like writing an essay—maybe you have some ideas in your head, but the act of writing them down forces you to interrogate and organize them into something cohesive. Without that process, those ideas remain an amorphous cloud that, as far as you're concerned, is perfect. The process of forcing those thoughts into a linear stream is what exposes the missing pieces and errors in the logic.
onion2k · a month ago
> Without that process, those ideas remain an amorphous cloud that, as far as you're concerned, is perfect.

This is absolutely not the case. My first startup was an attempt to build requirements management software for small teams. I am acutely aware that there is a step between "an idea" and "some code" where you have to turn the idea into something cohesive and structured that you can then turn into language a computer can understand. The bit in the middle where you write down what the software needs to do in human language is the important part of the process - you will throw the code away by deleting it, refactoring it, improving it, etc. What the code needs to do doesn't change anywhere near as fast.

Any sufficiently experienced developer who's been through the fun of working on an application that's been in production for more than a decade where the only way to know what it does is by reading the code will attest to the fact that the code is not the important part of software development. What the code is supposed to do is more important, and the code can't tell you that.

pvelagal · a month ago
I totally agree. Trusting the LLM means you are not thinking anymore and are content with the high-level ideas you had before you started coding, which may be incomplete. The missing pieces stay missing until you see issues in production; I have seen this happen.
dangets · a month ago
Or similarly the difference between reading/listening to a foreign language vs. writing/speaking one. Knowing how to read code or learn algorithms or design is different than actually writing it. The difference between the theory and practice.
pvelagal · a month ago
LLMs are doing more than that. They are doing so much that I have seen bad ideas creeping into the codebase. I used to trust some engineers' code, but with the introduction of LLMs, I am spending more time on code reviews and am unable to trust significant portions of the code checked in.
dionian · a month ago
I use an LLM to generate a lot of code, but a large part of what I use it for is orchestration, testing, and validation. That's not always 'learning', and by the way, I learn by watching the LLM decide and execute, as it draws from knowledge pools faster than I can.
LtWorf · a month ago
You're not learning
CharlieDigital · a month ago
Good taste in how to build software.
aeturnum · a month ago
The way I talk about it is that the value you deliver as a software "engineer" is: taste and good guesses. Anyone can bang out code given enough time. Anyone can read docs on how to implement an algorithm and implement it eventually. The way you deliver value is by having a feel for the service and good instincts about where to look first and how to approach problems. The only way to develop that taste and familiarity is to work on stuff yourself.

Once you can show, without doubt, what you should do, software engineers have very little value. The reason they are still essential is that product choices are generally made under very ambiguous conditions. John Carmack said "If you aren't sure which way to do something, do it both ways and see which works better."[1] This might seem like it goes against what I am saying, but narrowing "everything possible" down to two options is actually huge value! That is a lot of what you provide as an engineer, and the only way you are going to hone that sense is by working on your company's product in production.

[1] https://aeflash.com/2013-01/john-carmack.html

Terr_ · a month ago
> Software development has always resisted the idea that it can be turned into an assembly line.

This is... only true in a very very narrow sense. Broadly, it's our job to create assembly lines. We name them and package them up, and even share them around. Sometimes we even delve into FactoryFactoryFactory.

> The people writing code aren't just 'implementers'; they are central to discovering the right design.

I often remember the title of a paper from 40 years ago, "Programming as Theory Building". (And comparatively recently discussed here [0].)

This framing also helps highlight the strengths and dangers of LLMs. The same aspects that lead internet philosophers into crackpot theories can affect programmers creating their not-so-philosophical ones. (Sycophancy, the false appearance of authoritative data, etc.)

[0] https://news.ycombinator.com/item?id=42592543

shadowgovt · a month ago
Hm... I think I get what Mr. Joshi is saying, but the headline clashes with the notion that the essence of what we do is automation, and that includes automating the automation.

This at first blush smells like "Don't write code that writes code," which... Some of the most useful tools I've ever written are macros to automate patterns in code, and I know that's not what he means.
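As a small illustration of that kind of macro (the names here are hypothetical, not from any particular codebase), the C preprocessor lets one pattern stamp out a family of near-identical functions:

```c
/* DEFINE_MAX expands into a full function definition each time it
   is used -- a small, concrete case of code that writes code. */
#define DEFINE_MAX(type, name)        \
    type name(type a, type b) {       \
        return a > b ? a : b;         \
    }

DEFINE_MAX(int, max_int)       /* defines int max_int(int, int)             */
DEFINE_MAX(double, max_double) /* defines double max_double(double, double) */
```

Calling max_int(3, 7) yields 7; the macro spares you from writing each variant by hand, which is exactly the pattern-automation being described.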

Perhaps a better way to say it is "Automating writing software doesn't remove the need to understand it?"

Jtsummers · a month ago
> I think I get what Mr. Fowler is saying

Martin Fowler isn't the author, though. The author is Unmesh Joshi.

shadowgovt · a month ago
Thank you! Corrected.
crabmusket · a month ago
"Programming as theory building" still undefeated.

Also, fun to see the literal section separator glyphs from "A Pattern Language" turn up.

Jtsummers · a month ago
The actual title is: The Learning Loop and LLMs

For some reason johnwheeler editorialized it, and most of the comments are responding to the title and not the contents of the article (though that's normal regardless of whether the correct title or a different one is used; it's HN tradition).

sedatk · a month ago
Does the editorialized title contradict the article?
Jtsummers · a month ago
Yes. The editorialized title includes a statement not present in the article at all, "Don't automate". Joshi actually describes how he has used LLMs and his experience with them, and he never says not to use them at all which is what the editorialized title suggested. The bulk of the article is describing how LLMs can break the learning loop (as hinted at in the original title) which is a much more interesting topic than HTML code generation a bunch of people are talking about.

[The title has been changed, presumably by a mod. For anyone coming later it was originally incorrect and included statements not present in the article.]

johannes1234321 · a month ago
There are parts of software development which require understanding purpose and code, making good decisions, or having in-depth understanding to optimize. And there are parts that are just boring ceremony for using a library or doing some refactorings.

The first one mostly requires experienced humans; the latter one is boring and good to automate.

The problem is with all the in between, and with getting people to the point where they can do the first. There, AI can be both a tool and a distraction.

MarsIronPI · a month ago
> There are parts of software development which require understanding purpose and code, making good decisions, or having in-depth understanding to optimize. And there are parts that are just boring ceremony for using a library or doing some refactorings.

I feel like maybe I'm preaching to the choir by saying this on HN, but this is what Paul Graham means when he says that languages should be as concise as possible, in terms of number of elements required. He means that the only thing the language should require you to write is what's strictly necessary to describe what you want.

AnIrishDuck · a month ago
The most critical skill in the coming era, assuming that AI follows its current trajectory and there are no research breakthroughs in, e.g., continual learning, is going to be delegation.

The art of knowing what work to keep, what work to toss to the bot, and how to verify it has actually completed the task to a satisfactory level.

It'll be different from delegating to a human; as the technology currently stands, there is no point in giving out "learning tasks". I also imagine it'll be a good idea to keep enough tasks for yourself to keep your own skills sharp, so if anything it's kinda the reverse.

xnx · a month ago
I'm happy to learn the essential complexity (e.g. business logic) but see low/no value in learning incidental complexity (code implementation details).
waynesonfire · a month ago
I am completely the opposite. I couldn't care less about what's in that packet of data. But I deeply care about how I move it from A to B and whether it gets there according to specifications.
ares623 · a month ago
Spoken like a true CEO. LLMs makes everyone feel like CEOs. Imagine a world where everyone thinks they're CEOs.
tharne · a month ago
This is one of the things that frightens me about LLMs. Like MBA programs, they seem to make people dumber over time, while simultaneously making them feel smarter and more confident in their abilities.
xnx · a month ago
Not CEO level at all, just one layer up from coding. Just as coding is one layer up from assembly, machine code, binary, logic gates and registers, etc.