"Some people might not enjoy writing their own code. If that’s the case, as harsh as it may seem, I would say that they’re trying to work in a field that isn’t for them."
I've tolerated writing my own code for decades. Sometimes I'm pleased with it. Mostly it's the abstraction standing between me and my idea. I like to build things, the faster the better. As I have the ideas, I like to see them implemented as efficiently and cleanly as possible, to my specifications.
I've embraced working with LLMs. I don't know that it's made me lazier. If anything, it inspires me to start when I feel in a rut. I'll inevitably let the LLM do its thing, and then, LLMs being what they are, I'll take over and finish the job my way. I seem to be producing more than I ever have.
I've worked with people and am friends with a few of these types; they think their code and methodologies are sacrosanct, and that if the AI moves in there will be no place for them. I got into the game for creativity, it's why I'm still here, and I see no reason to select myself for removal from the field. The tools, the syntax, it's all just a means to an end.
This is something that I struggle with for AI programming. I actually like writing the code myself. Like how someone might enjoy knitting or model building or painting or some other "tedious" activity. Using AI to generate my code just takes all the fun out of it for me.
This so much. I love coding. I might be the person that still paints stuff by hand long after image generation has made actual paintings superfluous, but it is what it is.
I don’t enjoy writing unit tests, but fortunately this is one task LLMs seem to be very good at, and it isn’t high stakes: they can exhaustively create test cases for all kinds of conditions and torture-test your code without mercy. This is the only true improvement LLMs have made to my enjoyment.
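For anyone who hasn't tried it, the output tends to look something like the sketch below: a parameterized table of edge cases around a small function. The function (parse_size) and every case here are made up, just to show the shape of what an LLM will happily churn out.

```python
# A sketch of the kind of exhaustive, edge-case-heavy tests an LLM will produce.
# The function under test (parse_size) and every case below are hypothetical.
import pytest


def parse_size(text: str) -> int:
    """Parse a human-readable size like '10KB' into bytes."""
    units = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3}
    text = text.strip().upper()
    for unit in sorted(units, key=len, reverse=True):  # match 'KB' before 'B'
        if text.endswith(unit):
            return int(float(text[: -len(unit)]) * units[unit])
    return int(text)  # bare number of bytes


@pytest.mark.parametrize(
    "raw,expected",
    [
        ("0", 0),
        ("512", 512),
        ("1KB", 1024),
        ("1kb", 1024),              # lower-case unit
        (" 2 MB ", 2 * 1024**2),    # stray whitespace
        ("1.5KB", 1536),            # fractional value
        ("3GB", 3 * 1024**3),
    ],
)
def test_parse_size_valid(raw, expected):
    assert parse_size(raw) == expected


@pytest.mark.parametrize("raw", ["", "KB", "ten", "-1PB"])
def test_parse_size_garbage(raw):
    with pytest.raises(ValueError):
        parse_size(raw)
```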
There were seamstresses who enjoyed sewing prior to the industrial revolution, and who continued doing so afterwards. We still have people with those skills now, but often in very different contexts. But a completely new garment industry became possible because of the scale machines enabled. The same goes for most artisanal crafts.
The industry will change drastically, but you can still enjoy your individual pleasures. And there will be value in unique, one-off and very different pieces that only an artisan can create (though there will now be a vast number of "unique" screen-printed tees on the market as well).
The only reason I got sucked into this field was because I enjoyed writing code. What I "tolerated" (professionally) was having to work on other people's code. And LLM code is other people's code.
I've accepted this way of working too. There is some code that I enjoy writing. But what I've found is that I actually enjoy just seeing the thing in my head actually work in the real world. For me, the fun part was finding the right abstractions and putting all these building blocks together.
My general way of working now is that I'll write some of the code in the style I like. I won't trust an LLM to come up with the right design, so I still rely on my knowledge and experience to come up with a design which is maintainable and scalable. But I might just stub out the detail. I'm focusing mostly on the higher-level stuff.
Once I've designed the software at a high level, I can point the LLM at this using specific files as context. Maybe some of them have the data structures describing the business logic and a few stubbed out implementations. Then Claude usually does an excellent job at just filling in the blanks.
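Concretely, the stubs I hand over look roughly like the sketch below. The domain (invoice pricing) and all names are made up; the point is the shape: data structures and signatures pinned down by me, bodies left as blanks for the LLM.

```python
# A rough sketch of the "design first, stub the detail" workflow described
# above. The domain and all names here are hypothetical.
from dataclasses import dataclass
from decimal import Decimal


@dataclass(frozen=True)
class LineItem:
    sku: str
    quantity: int
    unit_price: Decimal


@dataclass(frozen=True)
class Invoice:
    customer_id: str
    items: list[LineItem]
    discount_code: str | None = None


def subtotal(invoice: Invoice) -> Decimal:
    """Sum of quantity * unit_price across all line items."""
    raise NotImplementedError  # TODO: let the LLM fill this in


def apply_discount(amount: Decimal, discount_code: str | None) -> Decimal:
    """Apply the discount rules for the given code, if any."""
    raise NotImplementedError  # TODO: let the LLM fill this in


def total(invoice: Invoice) -> Decimal:
    """Discounted subtotal, rounded to two decimal places."""
    raise NotImplementedError  # TODO: let the LLM fill this in
```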
I've still got to sanity check it. And I still find it doing things which look like they came right from a junior developer. But I can suggest a better way and it usually gets it right the second or third time. I find it a really productive way of programming.
I don't want to be writing the data layer of my application. It's not fun for me. LLMs handle that for me and let me focus on what makes my job interesting.
The other thing I've kinda accepted is to just use it or get left behind. You WILL get people who use this and become really productive. It's a tool which enables you to do more. So at some point you've got to suck it up. I just see it as a really impressive code generation tool. It won't replace me, but not using it might.
what's the largest (traffic, revenue) product you've built? quantity >>>> quality of code is a great trade-off for hacking things together but doesn't lend itself to maintainable systems, in my experience.
Sure, but the vast majority of the time in greenfield situations, it's entirely unclear whether what is being built is useful, even when people think otherwise. So the question of "maintainable" or not is frequently not the right consideration.
To be fair, this person wasn’t claiming they’re making a trade off on quality, just that they prefer to build things quickly. If an AI let you keep quality constant and deliver faster, for example.
I don’t think that’s what LLMs offer, mind you (right now anyway), and I often find the trade offs to not be worth it in retrospect, but it’s hard to know which bucket you’re in ahead of time.
I resonate so strongly with this. I’ve been a professional software engineer for almost twenty years now. I’ve worked on everything from my own solo indie hacker startups to now getting paid a half million per year to sling code for a tech company worth tens of billions. I enjoy writing code sometimes, but mostly I just want to build things. I’m having great fun using all these AI tools to build things faster than ever. They’re not perfect, and if you consider yourself to be a software engineer first, then I can understand how they’d be frustrating.
But I’m not a software engineer first, I’m a builder first. For me, using these tools to build things is much better than not using them, and that’s enough.
I don't think the author is saying it's a dichotomy. Like, you're either a disciple of doing things "ye olde way" or allowing the LLM to do it for you.
I find his point to be that there is still a lot of value in understanding what is actually going on.
Our business is one of details and I don't think you can code strictly having an LLM doing everything. It does weird and wrong stuff sometimes. It's still necessary to understand the code.
I like coding on private projects at home; that is fun and creative. The coding I get to do at work, in between waiting for CI, scouring logs, monitoring APM dashboards and reviewing PRs, in a style and at an abstraction level I find inappropriate, is not interesting at all. A type of change that might take 10 minutes at home might take 2 days at work.
> "as harsh as it may seem, I would say that they’re trying to work in a field that isn’t for them."
I find this statement problematic for a different reason: we live in a world where minimum wages (if they exist) are lower than living wages, and mean wages are significantly lower than the point at which well-being indices plateau. In that context, calling people out for working in a field that "isn't for them" is pointless - if you can get by in the field, then leaving it simply isn't logical.
THAT SAID, I do find the above comment incongruent with reality. If you're in a field that's "not for you" for economic reasons that's cool but making out that it is in fact for you, despite "tolerating" writing code, is a little different.
> I got into the game for creativity
Are you confusing creativity with productivity?
If you're productive that's great; economic imperative, etc. I'm not knocking that as a positive basis. But nothing you describe in your comment would fall under the umbrella of what I consider "creativity".
We've seen this happen over and over again, when a new leaky layer of abstraction is developed that makes it easier to develop working code without understanding the lower layer.
It's almost always a leaky abstraction, because sometimes you do need to know how the lower layer really works.
Every time this happens, developers who have invested a lot of time and emotional energy in understanding the lower level claim that those who rely on the abstraction are dumber (less curious, less effective, and they write "worse code") than those who have mastered the lower level.
Wouldn't we all be smarter if we stopped relying on third-party libraries and wrote the code ourselves?
Wouldn't we all be smarter if we managed memory manually?
Wouldn't we all be smarter if we wrote all of our code in assembly, and stopped relying on compilers?
Wouldn't we all be smarter if we were wiring our own transistors?
It is educational to learn about lower layers. Often it's required to squeeze out optimal performance. But you don't have to understand lower layers to provide value to your customers, and developers who now find themselves overinvested in low-level knowledge don't want to believe that.
(My favorite use of coding LLMs is to ask them to help me understand code I don't yet understand. Even when it gets the answer wrong, it's often right enough to give me the hints I need to figure it out myself.)
LLMs don't create an abstraction. They generate code. If you are thinking about LLMs as a layer of abstraction, you are going to have all kinds of problems.
My C compiler has been generating assembly code for me for 30 years. And people were saying the same thing even earlier about how compilers and HLLs made developers dumb because they couldn't code in asm.
They can also generate documentation for code you've written. So it is very useful, if leveraged correctly, for understanding what the code is doing. Eventually you learn all of the behaviors of that code and are able to write it yourself or improve on it.
I would consider it a tool to teach and learn code if used appropriately. However, LLMs are bullshit if you ask them to write something whole: pieces, yes, but good luck having one maintain consistency and comprehension of what the end goal is across an entire codebase. The reason it works great for reading existing code is that the input becomes a context it can refer back to; but because an LLM is just weighted values, it has no way to visualize the final output without significant input.
The point is LLMs may allow developers to write code for problems they may not fully understand at the current level or under the hood.
In a similar way using a high level web framework may allow a developer to work on a problem they don’t fully understand at the current level or under the hood.
There will always be new tools to “make developers faster” usually at a trade off of the developer understanding less of what specifically they’re instructing the computer to do.
Sometimes it’s valuable to dig and better understand, but sometimes not. And always responding to new developer tooling (whether LLMs or Web Frameworks or anything else) by saying they make developers dumber can be naive.
Leaky abstractions is a really appropriate term for LLM-assisted coding.
The original "law of leaky abstractions" talked about how the challenge with abstractions is that when they break you now have to develop a mental model of what they were hiding from you in order to fix the problem.
Having LLMs write code for you has a similar effect: the moment you run into problems, you're going to have to make sure you deeply understand exactly what they have done.
> Having LLMs write code for you has a similar effect: the moment you run into problems, you're going to have to make sure you deeply understand exactly what they have done.
I'm finding that, if I don't have solid mastery of at least one aspect of generated code, I won't know that I have problems until they touch a domain I understand.
Those aren't the same thing at all, and you already mentioned why in your comment: leakiness. The higher up you go on the abstraction chain, the leakier the abstractions become, and the less viable it is to produce quality software without understanding the layer(s) below.
You can generally trust that transistors won't randomly malfunction and give you wrong results. You can generally trust that your compiler won't generate the wrong assembly, or that your interpreter won't interpret your code incorrectly. You can generally trust that your language's automatic memory management won't corrupt memory. It might be useful to understand how those layers work anyway, but it's usually not a hard requirement.
But once you reach a certain level of abstraction (usually 1 level above programming language), you'll start running into more and more issues resulting from abstraction leaks that require understanding the layer below to properly fix. Probably the most blatant example of this nowadays are "React developers" who don't know JS/CSS/HTML and WILL constantly be running into issues that they can't properly solve as a result, and are forced to either give up or write the most deranged workarounds imaginable that consist of hundreds of lines of unintelligible spaghetti.
AI is the highest level of abstraction so far, and as a result, it's also the leakiest abstraction so far. You CANNOT write proper functional and maintainable code using an LLM without having at least a decent understanding of what it's outputting, unless you're writing baby's first todo app or something.
> But once you reach a certain level of abstraction (usually 1 level above programming language), you'll start running into more and more issues resulting from abstraction leaks that require understanding the layer below to properly fix. Probably the most blatant example of this nowadays are "React developers" who don't know JS/CSS/HTML and WILL constantly be running into issues that they can't properly solve as a result, and are forced to either give up or write the most deranged workarounds imaginable that consist of hundreds of lines of unintelligible spaghetti.
I want to frame this. I am sick to death of every other website and application needing a gig of RAM and making my damn phone hot in my hand.
Not all of those abstractions are equally leaky though. Automatic memory management, for example, is leaky only for a very narrow set of problems; in many situations the abstraction works extremely well. It remains to be seen whether AI can be made to leak so rarely (which does not mean that it's not useful even in its current leaky state).
If we just talk in analogies: a cup is also leaky because fluid is escaping via vapours.
It's not the same as a cup with a hole in it.
LLMs currently have tiny holes and we don't know if we can fix them. Established abstractions are more like cups that may leak, but only in certain conditions (when it's hot).
> But you don't have to understand lower layers to provide value to your customers, and developers who now find themselves overinvested in low-level knowledge don't want to believe that.
This is the weakest point which breaks your whole argument.
I see it happening ALL the time: newer web developers enter the field from an angle of high abstraction, and whenever those abstractions don't work well, they are completely unable to proceed. They wouldn't be in that place if they knew the low level, and it DOES prevent them from delivering "value" to their customers.
What is even worse is that, since these developers don't understand exactly why some problem manifests, and they don't even understand exactly what their abstraction truly solves, they proceed to solve the problem using the wrong (high-level) tools.
That has some amount to do with the level of abstraction, but almost everything to do with inexperience. The lower level you get, the harder the impact of inexperience.
New web developers are still sorting themselves out and they are at a stage where they’ll suck no matter what the level of abstraction.
And every time the commentariat dismisses it with the trope that it’s the same as the other times.
It’s not the same as the other times. The naysayers might be the same elitists as the last time. But that’s irrelevant because the moment is different.
It’s not even an abstraction. An abstraction of what? It’s English/Farsi/etc. text input which gets translated into something that no one can vouch for. What does that abstract?
You say that they can learn about the lower layers. But what’s the skill transfer from the prompt engineering to the programming?
People who program in memory-managed languages are programming. There’s no paradigm shift when they start doing manual memory management. It’s more things to manage. That’s it.
People who write spreadsheet logic are programming.
But what are prompt engineers doing? ... I guess they are hoping for the best. Optimism is what they have in common with programming.
> (My favorite use of coding LLMs is to ask them to help me understand code I don't yet understand. Even when it gets the answer wrong, it's often right enough to give me the hints I need to figure it out myself.)
Agree, it's especially useful when you join a new company and have to navigate a large codebase (or a badly maintained codebase, which is even worse by several orders of magnitude). I had no luck asking an LLM to fix this or that, but it did mostly OK when I asked how something works and what the code is trying to do (it makes mistakes, but that's fine, I can see them, which is different from code that I just copy and paste).
This "it's the same as the past changes" analogy is lazy - everywhere it's reached for, not just AI. It's basically just "something something luddites".
Criticisms of each change are not somehow invalid just because the change is inevitable, like all the changes before it.
When a higher level of abstraction allows programmers to focus on the detail relevant to them they stop needing to know the low level stuff. Some programmers tend not to be a fan of these kinds of changes as we well know.
But do LLMs provide a higher level of abstraction? Is this really one of those transition points in computing history?
If they do, it's a different kind to compilers, third-party APIs or any other form of higher level abstraction we've seen so far. It allows programmers to focus on a different level of detail to some extent but they still need to be able to assemble the "right enough" pieces into a meaningful whole.
Personally, I don't see this as a higher level of abstraction. I can't offload the cognitive load of understanding, just the work of constructing and typing out the solution. I can't fully trust the output and I can't really assemble the input without some knowledge of what I'm putting together.
LLMs might speed up development and lower the bar for developing complex applications but I don't think they raise the problem-solving task to one focused solely on the problem domain. That would be the point where you no longer need to know about the lower layers.
Last year I learned a new language and framework for the first time in a while. Until I became used to the new way of thinking, the discomfort I felt at each hurdle was both mental and physical! I imagine this is what many senior engineers feel when they first begin using an AI programming assistant, or an even more hands-off AI tool.
Oddly enough, using an AI assistant, despite it guessing incorrectly as often as it did, helped me learn and write code faster!
I think it’s usually helpful if your knowledge extends a little deeper than the level you usually work at. You need to know a lot about the layer you work in, a good amount about the surrounding layers, and maybe just a little bit about more distant layers.
If you are writing SQL, it’s helpful to understand how database engines manage storage and optimize queries. If you write database engine code, it’s helpful to understand (among many other things of course) how memory is managed by the operating system. If you write OS code, it’s helpful to understand how memory hardware works. And so on. But you can write great SQL without knowing much of anything about memory hardware.
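As a toy illustration of that SQL point (made-up table, using Python's built-in sqlite3): the same query goes from a full table scan to an index search once you think about how the engine actually looks rows up, and EXPLAIN QUERY PLAN is how you see it.

```python
# Toy example: how knowing the layer below (indexes, query plans) changes the
# SQL you write. Table and column names are invented; sqlite3 is stdlib.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(10_000)],
)

query = "SELECT SUM(total) FROM orders WHERE customer_id = ?"

def show_plan(label: str) -> None:
    plan = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
    print(label, [row[-1] for row in plan])  # last column is the plan detail

show_plan("without index:")  # e.g. ['SCAN orders'] -- reads every row
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
show_plan("with index:")     # e.g. ['SEARCH orders USING INDEX idx_orders_customer (customer_id=?)']
```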
The reverse is also true in that it’s good to know what is going on one level above you as well.
Anyway my experience has been that knowledge of the adjacent stack layers is highly beneficial and I don’t think exaggerated.
When a lower layer fails, your ability to remedy a situation depends on your ability to understand that layer of abstraction. Now: if an LLM produces wrong code, how do you know why it did that?
Let me alter this perspective: you can use it to learn what parts of the code do, and use it for commenting. It can help you read what might otherwise be unreadable. LLMs and programming are good together, but not great. However, it can easily be something that teaches a developer about the parts of the code they are working on.
Sounds like you’re coping cause you have some type of investment in LLM “coding”, whether that is financial or emotional.
I won’t waste my time too much reacting to this nonsensical comment but I’ll just give this example, LLMs can hallucinate, where they generate code that’s not real, LLMs don’t work off straight rules, they’re influenced by a seed. Normal abstraction layers aren’t.
I dearly hope you’re arguing in bad faith, otherwise you are really deluded with either programming terms or reality.
Abstraction is fine, it allows you to work faster, or easier. Reliance that becomes dependency is the problem - when abstraction supersedes fundamentals you're no longer able to reason about the leaks and they become blindspots.
Don't confuse low-level tedium with CS basics. If you're arguing that knowing how computers work is not relevant to working as a SWE, then sure, but why would a company want a software dev who doesn't seem to know software? Usually your irreplaceable value as a developer is knowing and mitigating the leaks so they don't wind up threatening the business.
This is where the industry most suffers from not having a standardized-ish hierarchy. You're right that most shops don't need a trauma surgeon on call for treating headaches, but there are still many medical options before resorting to a random grifter who simply "watched some Grey's Anatomy" because "med school was a barrier to providing value to customers".
I've had a similar experience. I built out a feature using an LLM and then found the library it must have been "taking" the code from, so what I ended up was a much worse mangled version of what already existed, had I taken the time to properly research. I've now fully gone back to just getting it to prototype functions for me in-editor based off comments, and I do the rest. Setting up AI pipelines with rule files and stuff takes all the fun away and feels like extremely daunting work I can't bring myself to do. I would much rather just code than act as a PM for a junior that will mess up constantly.
When the LLM heinously gets it wrong 2, 3, 4 times in a row, I feel a genuine rage bubbling that I wouldn't get otherwise. It's exhausting. I expect within the next year or two this will get a lot easier and the UX better, but I'm not seeing how. Maybe I lack vision.
You’re exactly right on the rage part, and that’s not something I’ve seen discussed enough.
Maybe it’s the fact that you know you could do it better in less time that drives the frustration. For a junior dev, perhaps that frustration is worth it because there’s a perception that the AI is still more likely to be saving them time?
I’m only tolerating this because of the potential for long term improvement. If it just stayed like it is now, I wouldn’t touch it again. Or I’d find something else to do with my time, because it turns an enjoyable profession into a stressful agonizing experience.
It’s exponentially better for me to use AI for coding than it was two years ago. GPT-4 launched two years and two days ago. Claude 3.5 sonnet was still fifteen months away. There were no reasoning models. Costs were an order of magnitude or two higher. Cursor and Windsurf hadn’t been released.
The last two years have brought staggering progress.
LLMs also take away the motivation from students to properly concentrate and deeply understand a technical problem (including but not limited to coding problems); instead, they copy, paste and move on without understanding. The electronic calculator analogy might be appropriate: it's a tool appropriate once you have learned how to do the calculations by hand.
In an experiment (six months long, twice repeated, so a one-year study), we gave business students ChatGPT and a data science task to solve that they did not have the background for (develop a sentiment analysis classifier for German-language recommendations of medical practices). With their electronic "AI" helper, they could find a solution, but the scary thing is that they did not acquire any knowledge along the way, as exit interviews clearly demonstrated.
As a friend commented, "these language models should never have been made available to the general public", only to researchers.
> As a friend commented, "these language models should never have been made available to the general public", only to researchers.
That feels to me like a dystopian timeline that we've only very narrowly avoided.
It wouldn't just have been researchers: it would have been researchers and the wealthy.
I'm so relieved that most human beings with access to an internet-connected device have the ability to try this stuff and work to understand what it can and cannot do themselves.
I'm giving a programming class and students use LLMs all the time. I see it as a big problem because:
- it puts the focus on syntax instead of the big picture. Instead of finding articles or Stack Overflow posts explaining things beyond how to write them, the AI gives them the "how", so they don't think about the "why"
- students almost don't ask questions anymore. Why would they, when an AI gives them code?
- AI output contains notions, syntax and APIs not seen in class, adding to the confusion
Even the best students have a difficult time answering basic questions about what was covered in the last (3-hour) class.
The job market will verify those students, but the outcome may be disheartening for you, because those guys may actually succeed one way or another. Think punched cards: they are gone, along with the mindset of "need to implement it correctly on the first try".
I had this realization a couple weeks ago that AI and LLMs are the 2025 equivalent of what Wikipedia was in 2002. Everyone is worried about how all the kids are going to just use the "easy button" and get nonsense that's unchecked and probably wrong, and a whole generation of kids are going to grow up not knowing how to research, and trusting unverified sources.
And then eventually overall we learned what the limits of Wikipedia are. We know that it’s generally a pretty good resource for high level information and it’s more accurate for some things than for others. It’s still definitely a problem that Wikipedia can confidently publish unverified information (IIRC wasn’t the Scottish translation famously hilariously wrong and mostly written by an editor with no experience with the language?)
And yet, I think if these days people were publishing think pieces about how Wikipedia is ruining the ability of students to learn, or advocating that people shouldn’t ever use Wikipedia to learn something, we’d largely consider them crackpots, or at the very least out of touch.
I think AI tools are going to follow the same trajectory. Eventually we’ll gain enough cultural knowledge of their strengths and weaknesses to apply them properly and in the end they’ll be another valuable asset in our ever growing lists of tools.
it's particularly bad for students who should be trying to learn.
at the same time in my own life, there are tasks that I don't want to do, and certainly don't want to learn anything about, yet have to do.
For example, figuring out a weird edge case combination of flags for a badly designed LaTeX library that I will only ever have to use once. I could try to read the documentation and understand it, but this would take a long time. And, even if it would take no time at all, I literally would prefer not to have this knowledge wasting neurons in my brain.
Imagine a calculator that computes definite integrals, but gives nonsensical results on non-smooth functions for whatever reason (i.e., not an error, but an incorrect yet otherwise well-formed answer).
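To make that hypothetical concrete (a quick sketch, the point is the math rather than the code): naively applying the fundamental theorem of calculus to 1/x^2 over [-1, 1] gives a tidy, well-formed answer that is pure nonsense, because the integrand blows up at x = 0 and the integral actually diverges.

```python
# Naive definite integral of 1/x^2 over [-1, 1] via an antiderivative.
def naive_definite_integral() -> float:
    F = lambda x: -1.0 / x       # an antiderivative of 1/x^2, valid only away from 0
    return F(1.0) - F(-1.0)      # F(1) - F(-1) = -1 - 1 = -2

print(naive_definite_integral())  # -2.0: negative, despite a strictly positive integrand
```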
If there were a large number of people who didn't quite understand what it meant for a function to be continuous, let alone smooth, who were using such a calculator, I think you'd see similar issues to the ones that are identified with LLM usage: a large number of students wouldn't learn how to compute definite or indefinite integrals, and likely wouldn't have an intuitive understanding of smoothness or continuity either.
I think we don't see these problems with calculators because the "entry-level" ones don't have support for calculus-related functionality, and because people aren't taught how to set up the problems that you need calculus to solve until after they've been given some amount of calculus-related intuition. These conditions obviously aren't the case for LLMs.
What do you think is the big difference between these tools and *outsourcing*?
AI is far more comparable to delegating work to *people*.
Calculators and compilers are deterministic. Using them doesn't change the nature of your work.
AI, depending on how you use it, gives you a different role. So take that as a clue: if you are less interested in building things and more interested into getting results, maybe a product management role would be a better fit.
Fundamentally nothing, but everybody already knows that you shouldn't teach young kids to rely on calculators during the basic "four-function" stage of their mathematics education.
Calculators for the most part don't solve novel problems. They automate repetitive basic operations which are well-defined and have very few special cases. Your calculator isn't going to do your algebra for you, it's going to give you more time to focus on the algebraic principles instead of material you should have retained from elementary school. Algebra and calculus classes are primarily concerned with symbolic manipulation, once the problem is solved symbolically coming to a numerical answer is time-consuming and uninteresting.
Of course, if you have access to the calculator throughout elementary school then you're never going to learn the basics, and that's why schoolchildren don't get to use calculators until the tail end of middle school. At least that's how it worked in the early 2000s when I was a kid; from what I understand kids today get to use their phones and even laptops in class, so maybe I'm wrong here.
Previously I stated that calculators are allowed in later stages of education because they only automate the more basic tasks. Matlab can arguably be considered a calculator which does automate complicated tasks, and even when I was growing up the higher-end TI-89 series was available, which actually could solve algebra and even simple forms of calculus problems symbolically; we weren't allowed access to these when I was in high school because we wouldn't learn the material if there was a computer to do it for us.
So anyways, my point (which is halfway an agreement with the OP and halfway an agreement with you) is that AI and calculators are fundamentally the same. It needs to be a tool to enhance productivity, not a crutch to compensate for your own inadequacies[1]. This is already well-understood in the case of calculators, and it needs to be well-understood in the case of AI.
[1] actually now that I think of it, there is an interesting possibility of AI being able to give mentally impaired people an opportunity to do jobs they might never be capable of unassisted, but anybody who doesn't have a significant intellectual disability needs to be wary of over-dependence on machines.
There's a reason we don't let kids use calculators to learn their times tables. In order to be effective at more advanced mathematics, you need to develop a deep intuition for what 9 * 7 means, not just what buttons you need to push to get the calculator to spit out 63.
A junior developer was tasked with writing a script that would produce a list of branches that haven't been touched for a while. I've got the review request. The big chunk of it was written in awk -- even though many awk scripts are one-liners, they don't have to be -- and that chunk was kinda impressive, making some clever use of associative arrays, auto-vivification, and more pretty advanced awk stuff. In fact, it was actually longer than any awk that I have ever written.
When I asked them, "where did you learn awk?", they were taken by surprise -- "where did I learn what?"
Turns out they just fed the task definition to some LLM and copied the answer to the pull request.
I wonder if it would work to introduce a company policy that says you should never commit code if you aren't able to explain how it works?
I've been using that as my own personal policy for AI-assisted code and I am finding it works well for me, but would it work as a company policy thing?
I call this the "house scrabble rule" because I used to play regularly with a group who imposed a rule that said you couldn't play a word without being able to define it.
I assume that would be seen as creating unnecessary burden, provided that the script works and does what's required. Is it better than the code written by people who have departed, and now no one can explain how it works?
The developer in question was later promoted to team lead, and (among other things) this explains why it's "my previous place" :)
One of the advantages of working with people who are not native English speakers is that, if their English suddenly becomes perfect and they can write concise technical explanations in tasks, you know it's some LLM.
Then if you ask for some detail on a call, it's all uhm, ehm, ehhh, "I will send example later".
Plato, in the Phaedrus, 370BC: "They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks."
Has it? Or do we instead have vast overfilled palaces of the sum of human knowledge, often stored in pointers and our limited working memory readily available for things recently accessed?
I'd argue that our ability to recall individual moments has gone down, but the sum of what we functionally know has gone up massively.
I may be old-fashioned but I remember a time when silent failure was considered to be one of the worst things a system can do.
LLMs are silent failure machines. They are useful in their place, but when I hear about bosses replacing human labor with “AI” I am fairly confident they are going to get what they deserve: catastrophe.
> I got into software engineering because I love building things and figuring out how stuff works. That means that I enjoy partaking in the laborious process of pressing buttons on my keyboard to form blocks of code.
I think this is a mistake. Building things and figuring out how stuff works is not related to pressing buttons on a keyboard to form blocks of code. Typing is just a side effect of the technology used. It's like saying that in order to be a mathematician, you have to enjoy writing equations on a whiteboard, or to be a doctor you must really love filling out EHR forms.
In engineering, coming up with a solution that fits the constraints and requirements is typically the end goal, and the best measure of skill I'm aware of. Certainly it's the one that really matters the most in practice. When it is valuable to type everything by hand, then a good engineer should type it by hand. On the other hand, if the best use of your time is to import a third-party library, do that. If the best solution is to create a code base so large no single human brain can understand it all, then you'd better do that. If the easiest path to the solution is to offload some of the coding to an LLM, that's what you should do.
It’s the college’s responsibility now to teach students how to harness the power of LLMs effectively. They can’t keep their heads in the sand forever.
Calculators don't pretend to think or solve a class of problems. They are pure execution. The comparison in tech is probably compilers, not code.
But sometimes the new tech is a hot x-ray foot measuring machine.