> When I need to write an email, I often bullet-point what I want to write and ask the LLM to write out a coherent, cordial email. I’ve gotten worse at writing emails
Just send the bullet points! Nobody wants the prose. It’s a business email not an art. This is a hill I will die on.
Prose has its uses when you want to transmit vibes/feelings/... For actionable info communication between busy people, terse and to the point is better and more polite.
It’s bad enough when I have to read people waffling. Please don’t make me read LLM waffle.
A long time ago I would write these stupidly long, wordy emails to my manager summing up my work week. He finally told me, "Please, keep it short and sweet. I don't need to know every wire or line of code you touched. Just summarize it in a few sentences." Best conversation ever. Went from 2 hours of typing on Friday afternoon to 10 minutes or so. I'm stumped as to why we went backwards.
The exception is when you're sending emails to people who don't have the same background knowledge or assumptions as you do.
Imagine:
Write a coherent but succinct email to Ms Griffin, principal of the school where my 8yo son goes, explaining;
- Quizlet good for short term recollection
- no point memorising stuff if going to forget later
- better to start using anki, and only memorize stuff where worth remembering forever
That seems like an effective way to get Ms Griffin annoyed. Given the prevalence of cheating in education, she might be much more likely to identify that an LLM was used to generate the text, after which she labels the email as spam and the parent as someone who would send her such spam.
I think this is the author's point. The ability to write short and concisely is a skill. So goes the saying: "If I had more time, I would have written a shorter letter."
Using LLMs to do that shortening is potentially hindering that practice.
The author's point, I think, is less about sending LLM waffle; it's more that they can only send something indistinguishable from LLM waffle anyway, due to a skills issue, because the LLM is so often used instead of building that skill.
I think the question is largely, can the LLM results be used for human learning and human training, or is it purely a shortcut for skills - in which case those skills never form or atrophy.
I think it's a fair hill to die on; I'll join you. I'd go so far as to say that if I take a very direct tone with you after a formality and you keep up the formalities, it's a bit of a red flag. Gimme just the words with what you want, please.
It's interesting that he lists a number of historical precedents, like the invention of the calculator, or the mechanization of labor in the industrial revolution, and explains how they are different from AI. With the exception of chess, I think he's wrong about the effects of all of them.
For instance, people did lament the invention of calculators, saying it would harm kids' ability to do mental arithmetic. And it did. People also said that GPS navigation would hurt people's ability to use a map, or create one in their heads. And I'm old enough to say: it absolutely did. People (in aggregate) are worse at those skills now.
Fortunately for us, we replaced those skills with technology that allowed us to do something analogous to them, but faster and more easily.
The question is: what are the second- and third-order effects of losing those skills, or not learning them in the first place? Is it crazy to think that not memorizing things because we can access printed (and digitized) material might have larger, unforeseen consequences on our brains, or our societies? Could mechanizing menial labor have induced some change in how we think, or have any long term effects on our bodies?
I think we're seeing—and will continue to see—that there are knock-on effects to technology that we can't predict beforehand. We think we're making a simple exchange of an old, difficult skill for a new, easy one, but we're actually causing a more far-reaching cascade of changes that nobody can warn us of in advance.
And, to me, the even scarier thing is that those of us who don't live through those changes will have no basis for comparison to know whether the trade-off was worth it.
> "People also said that GPS navigation would hurt people's ability to use a map, or create one in their heads. And I'm old enough to say: it absolutely did."
Thing is, some people never were good at reading/using maps, much less creating them. Even with GPS at hand I still prefer seeing a map to know where I'm going. Anyway, retaining at least a modicum of "classic" skills is beneficial. After all, GPS isn't infallible. As with all complex technologies, the possibility of failure warrants having alternatives.
I was recently on a cruise, someone asked the ship's navigator whether officers were trained on using old instruments like the sextant. He replied that they were, and continue to drill on their use. Sure, the ship has up-to-date equipment, but knowing the "old ways" is potentially still relevant.
> "The question is: what are the second- and third-order effects of losing those skills, or not learning them in the first place?"
Naturally, old skills fade with the advent of newer methods. There's a shortage of farriers, people who shoe horses. Very few people are being apprenticed in the trade. (Though I'm told the work pays very well.) Owning horses is a niche but robust interest, so farriers have full workloads; the occupation is not disappearing.
Point is that in real-world terms, losing skills diminishes the richness of human lives, because there's value in all constructive human endeavor. Similarly, an individual's life is enriched by acquiring fundamental skills, even if they're seldom used. Of course we have to parcel out our time wisely, but sparing a bit of time to exercise basic capabilities is probably a good idea.
I recommend using Anki (or whatever software does the job) to commit everyday, normal stuff that comes up to long-term memory.
Anki has desktop and phone apps, and if you make an account online, you can connect both to it and sync across the two devices with no effort. I can do my daily review and add cards from laptop or phone whenever something comes up.
I use no subdecks and zero complex features. Add cards, edit in the "browser", delete sometimes if I have second thoughts. 40 new cards each day; reviewing is ~45 mins and a joy.
All that to say - it's a direct antidote to the issues being described here. I rush to new things less, and spend much more time consolidating and forming links between stuff I know or "knew".
It's directly pushing me towards behaviour that fits the reality of how my brain works. Tabs are being closed with me saying to myself - I'll learn the name of the author and book for now, that's a good start.
Great for birthdays, names, an anecdote you loved, a little idea you had, fleshing out your geography, history, knowledge of plants, lyrics, nuggets from the Common Lisp book you're doing, etc etc.
So for me one huge thing to reclaim your brain and get acquainted with your memory is - flashcards!
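For the curious, the kind of scheduling Anki does can be sketched as a toy. This is a simplified SM-2-style model, not Anki's actual algorithm, and the function name and constants here are illustrative assumptions:

```python
# Toy spaced-repetition scheduler, loosely modeled on SM-2.
# Anki's real scheduler (and the newer FSRS) is more elaborate;
# the constants here are illustrative, not Anki's.

def next_interval(interval_days: float, ease: float, grade: int) -> tuple[float, float]:
    """Return (new_interval_days, new_ease) after one review.

    grade: 0 = forgot, 1 = hard, 2 = good, 3 = easy.
    """
    if grade == 0:
        # Lapse: see the card again tomorrow and penalize its ease.
        return 1.0, max(1.3, ease - 0.2)
    # Nudge ease down for "hard", up for "easy".
    ease = max(1.3, ease + (grade - 2) * 0.1)
    return interval_days * ease, ease

# A card you keep grading "good" gets pushed out roughly geometrically:
interval, ease = 1.0, 2.5
for _ in range(4):
    interval, ease = next_interval(interval, ease, grade=2)
print(round(interval, 1))  # → 39.1 (days until the fifth review)
```

The point of the exponential growth is exactly the "forever" framing above: cards you reliably remember cost almost nothing to keep, while lapses reset cheaply.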
> For instance, people did lament the invention of calculators, saying it would harm kids' ability to do mental arithmetic. And it did.
> Fortunately for us, we replaced those skills with technology that allowed us to do something analogous to them, but faster and more easily.
Don't kids still learn to do arithmetic in their head first? I haven't been in a school in decades but I remember doing it all sans calculator in elementary school. When you move on up to higher level stuff you end up using a calculator, but it's not like we skip that step entirely, do we?
I'd argue that using calculators instead of learning how addition is done would hurt kids' ability to do mental arithmetic. It's an experiment we haven't tried, or at least not in places I've lived. Sure, once you get how addition is done, feel free to free up your mind by skipping 2+ digit arithmetic with a calculator. Same as: sure, once you've learned what caching is and implemented a small prototype, feel free to ask Claude to implement caching for you.
I wonder if in the place of many lower level skills one is then freed to explore higher order skills. We now have very fancy calculators, such as in the form of tools like notebooks that connect to data sources and run transformations and show visualizations.
Looking around, I don't see too many people exploring higher order skills using that spare brain power. I think at the margins, you've probably got really smart scientists/engineers/philosophers doing that, but what does the ordinary person in the street do? This is the grumpiest thing I'll say today, but it seems like they just scroll social media on their phones while streaming media plays in the background.
"While CS undergrads are still required to take classes on assembly, most productive SWEs never interact with assembly. Moving up the ladder of abstraction has consistently been good."
Gotta disagree. Adding abstraction has yielded benefits but it certainly hasn't been consistently good. For example, see the modern web.
The analogy likening LLMs to compilers is extremely specious. In both cases, the text written by the user/programmer is higher-level and thus "easier", but beyond that the analogy doesn't hold:
- Natural language is not precise and has no spec, unlike programming languages.
- The translation from C (or another higher-level language) to assembly by a given compiler is deterministic in a way that the behavior of an LLM is not.
- On the flip side, the amount of control given to the tool versus what is specified by the programmer is wildly different between the two.
Exactly. The industry has encouraged mediocrity and inefficiency with over-abstraction and by abusing technologies in areas where they don't make sense for basic software.
This is what you see with the rise of some of the worst technologies (JavaScript) being used in places where they shouldn't be, because some engineers want to keep using one language for everything.
Which is how you end up with basic desktop apps written in Electron taking up 500MB each and using 1.2GB of memory. That doesn't scale well on a typical user's 8GB laptop.
Not saying it should be written in assembly either (which also doesn't make sense), but "the SWE is used to one language" is a really poor excuse today.
Nothing wrong with using high-level compiled languages to write native desktop apps that compile to an executable.
>This is what you have seen with the rise with some of the worst technologies (Javascript) being used in places where it shouldn't because some engineers want to keep using one language for everything.
NodeJS was the biggest mistake our industry made, and I will die on this hill. It has taken the crown from null. People have been trying to claw it back with TypeScript, but the real solution was to drop JS altogether. JS becoming the language of the browser was an artifact of history, from when we didn't know where this internet thing was going. By the time NodeJS was invented, we should have known better.
Very subjective but IME, understanding assembly is correlated with being a skilled web developer. Even though you don't actually write assembly while doing web dev.
I love LLMs, and actually feel they are making me smarter.
I'll be thinking of something in the car, like: how do torque converters work? And then I start a live talk session with GPT and we start talking about it. Unlike a Wikipedia article that just straight tells you how it works, I can dive down into each detail that is confusing to me until I fully understand it. It's incredible, for the curious.
If you're curious about torque converters I suspect you're careful about this, but what's your information vetting process? I use LLMs via text, so I can verify info as it streams in. How do you verify what's spoken to you in a car?
I'd also rather use them as a tutor of sorts than as a "please do things for me" tool. I think they're quite useful in that regard, albeit I know not to trust them fully as the only source of information.
> When I need to write an email, I often bullet-point what I want to write and ask the LLM to write out a coherent, cordial email. I’ve gotten worse at writing emails.
Think I'd rather just have the bullet points in the first place, to be honest, has to be easier and quicker to read than an LLM soup of filler paragraphs.
For sure. If I get an email with 3 dense paragraphs, I'm more likely to mark it unread and come back to it later, after processing the other 20 emails in my inbox.
There’ll be a move to oral ability assessment across the board.
Oral exams, face to face interviews, etc.
If you think of the LLM as a tireless coach and instructor, and not a junior employee, you'll have a wonderful opportunity. LLMs have taught me so, so much in the last 12 months. They have also removed so many roadblocks and got me where I wanted to be quicker. E.g.: I don't particularly care about learning Make atm, but I do want to compile something to WASM.
"However… even this might still be too slow. Why understand every line of code deeply if you can just build and ship?"
Because the journey is the destination. Using AI extensively so far appears to be a path that mostly allows for a regression to the mean. Caring about what you're doing, being intentional, and having presence of mind is what leads to interesting outcomes, even if every step along the way isn't engaging or yielding the same output as telling an LLM to do it.
I suppose if you don't care about what you're doing, go ahead and get an LLM to do it. But if it isn't worth doing yourself... Why are you doing it?
Really, do you need those Chrome extensions?
Alternatively, though... If you do, but they aren't mission critical, maybe it's fine to have an LLM puke it out.
For something that really matters to you though, I'd recommend being deep in it and taking whatever time it takes.
Also the tutor approach seems great to me. I don't feel like it's making me dumber. Using LLMs to produce code seemed to make me lazy and dumber though, so I've largely backed off. I'll use it to scaffold narrow implementations, but that's it.
Which is why I try to treat LLMs like a “calculator” to check my work.
I do things myself, then after I do it myself - ask an LLM to do the same.
That way, I’m still critical thinking and as a result - I actually get more benefit from the LLM since I can be more specific in having it help me fill in gaps.
I use LLMs to shorten my emails.
But the recipient can just ask AI to convert the prose into bullet points.
> "what's your information vetting process?"

The vetting process is the same as if I were driving up I-5 with a gearhead friend of mine, having a conversation as we go.