cambaceres · a day ago
> “I think the skills that should be emphasized are how do you think for yourself? How do you develop critical reasoning for solving problems? How do you develop creativity? How do you develop a learning mindset that you're going to go learn to do the next thing?”

In the Swedish school system, the idea for the past 20 years has been exactly this, that is, to try to teach critical thinking, reasoning, problem solving etc. rather than hard facts. The results have been... not great. We discovered that reasoning and critical thinking are impossible without foundational knowledge of whatever it is you are supposed to be critical about. I think the same can be said about software development.

NalNezumi · a day ago
I'm glad my east Asian mother put me through Saturday school for natives during my school years in Sweden.

The most damning example I have of the Swedish school system is anecdotal: by attending Saturday school, I never had to study math in Swedish school (same for my Asian classmates). By the time I finished the 9th-grade Japanese school curriculum, taught ONLY one day per week (2h), I had covered all the advanced math in high school and never had to study math until college.

The focus on "no one left behind == no one allowed ahead" also meant that young me complaining that math was boring and easy didn't persuade teachers to let me move ahead; instead, they allowed me to sleep during the lecture.

StableAlkyne · a day ago
> no one left behind == no one allowed ahead

It's like this in the US too (or rather, it was 20 years ago, and I suspect it's worse now).

Teachers in my county were heavily discouraged from failing anyone, because pass rate became a target instead of a metric. They couldn't even give a 0 for an assignment that was never turned in without multiple meetings with the student and approval from an administrator.

The net result was classes always proceeded at the rate of the slowest kid in class. Good for the slow kids (that cared), universally bad for everyone else who didn't want to be bored out of their minds. The divide was super apparent between the normal level and honors level classes.

I don't know what the right answer is, but an insane amount of effort was spent on kids who didn't care, whose parents didn't care, who hadn't cared since elementary school, and who always ended up dropping out as soon as they hit 18. There was no differentiation between them and the ones who really did give a shit and were just a little slow (usually because of a bad home life).

It's hard to avoid leaving someone behind when they've already left themselves behind.

Epa095 · a day ago
And still (or maybe because of it?), the resulting adults in Sweden score above e.g. Korea in both numeracy and adaptive problem solving (though slightly below Japan). The race is not about being best at 16, after all.

https://gpseducation.oecd.org/CountryProfile?plotter=h5&prim...

https://gpseducation.oecd.org/CountryProfile?plotter=h5&prim...

kace91 · a day ago
>I'm glad my east Asian mother put me through Saturday school for natives during my school years in Sweden.

I'm curious, could you share your Saturday school's system? I'm very interested in knowing what a day of class was like, the general approach, etc.

JustExAWS · a day ago
I have as much of a fundamental issue with “Saturday school” for children as I do with professionals thinking they should be coding on their days off. When do you get a chance to enjoy your childhood?


siva7 · a day ago
It's better to leave no one behind than to focus solely on those ahead. Society needs a stable foundation and not more ungrateful privileged people.
rkomorn · a day ago
Most of what I remember of my high school education in France was: here are the facts, and here is the reasoning that got us there.

The exams were typically essay-ish (even in science classes) where you either had to basically reiterate the reasoning for a fact you already knew, or use similar reasoning to establish/discover a new fact (presumably unknown to you because not taught in class).

Unfortunately, it didn't work for me and I still have about the same critical thinking skills as a bottle of Beaujolais Nouveau.

jve · a day ago
I don't know if I have critical thinking or not. But I often ask: WHY is this better? IS there a better way? WHY must it be done this way, or WHY does this rule exist?

For example, in electrical wiring you need at least a certain cross-section when carrying X amps over Y length. I want to dig down and understand why. Ohh, the smaller the cross-section, the more it heats! Armed with this info I get many more "Ohhs": Ohh, that's why you must ensure the connections are not loose. Ohh, that's why an old extension cord where the plug no longer clicks solidly into place is a fire hazard. Ohh, that's why I must ensure the connection is solid when joining cables and doesn't reduce the cross-section. Ohh, that's why it's a very bad idea to join bigger cables with a smaller one. Ohh, that's why it's a bad idea to solve "my fuse keeps blowing" by inserting a bigger fuse; instead I must check whether the cabling can support the higher amperage (or whether the device should really be drawing that much).

And yeah, this "intuition" is kind of a discovery phase and I can check whether my intuition/discovery is correct.

Basically getting down to primitives lets me understand things more intuitively without trying to remember various rules or formulas. But I noticed my brain is heavily wired in not remembering lots of things, but thinking logically.
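That whole chain of "Ohhs" comes from one pair of primitives: resistance R = ρL/A and Joule heating P = I²R. A throwaway sketch, with my own illustrative numbers (a standard copper resistivity; nothing here is from any electrical code):

```python
# Why a smaller cross-section (or a pinched joint) runs hotter at the same
# current: resistance R = rho * L / A, dissipated heat P = I^2 * R.
RHO_COPPER = 1.68e-8  # resistivity of copper, ohm-metres (approximate)

def heat_watts(amps: float, length_m: float, cross_section_mm2: float) -> float:
    """Heat dissipated along a copper conductor carrying `amps`."""
    area_m2 = cross_section_mm2 * 1e-6
    resistance_ohm = RHO_COPPER * length_m / area_m2
    return amps ** 2 * resistance_ohm

# A 16 A load over 10 m of cable: shrinking the conductor from 2.5 mm^2
# to 1.0 mm^2 multiplies the heat by exactly 2.5 (the area ratio).
p_thick = heat_watts(16, 10, 2.5)
p_thin = heat_watts(16, 10, 1.0)
```

The same arithmetic explains the loose plug: a bad contact is a locally tiny cross-section, so the heat concentrates right at the joint.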

Saline9515 · a day ago
On the contrary, the French "dissertation" exercise requires you to articulate reasoning and facts and to come up with a plan for the explanation. It is the same kind of thinking you are required to produce when writing a scientific paper.

It is, however, not taught very well by some teachers, who skimp on explaining how to do it properly, which might have been your case.

darkwater · a day ago
> Unfortunately, it didn't work for me and I still have about the same critical thinking skills as a bottle of Beaujolais Nouveau.

Why do you say so? Even just stating this probably means you are one or a few steps further...

biztos · a day ago
I’ve heard many bad things said of the Beaujolais Nouveau, and of my sense of taste for liking it, but this is the first time I’ve seen its critical-thinking skills questioned.

In its/your/our defense, I think it’s a perfectly smart wine, and young at heart!

jech · a day ago
> the same critical thinking skills as a bottle of Beaujolais Nouveau

I'm loving this expression. May I please adopt it?

diggan · a day ago
> In the Swedish school system, the idea for the past 20 years has been exactly this, that is, to try to teach critical thinking, reasoning, problem solving etc. rather than hard facts. The results have been... not great.

I'm not sure I'd agree that it's been outright "not great". I myself am a product of that precise school system, born in 1992 in Sweden (though now living outside the country). I have vivid memories of classes where we talked about how to learn, how to solve problems, critical thinking, reasoning, being critical of anything you read in newspapers, the difference between opinions and facts, how propaganda works, and so on. This was probably through years 7-9 if I remember correctly. Both I and others picked it up relatively quickly, and I'm not sure I'd have the same mindset today if it weren't for those classes.

Maybe I was just lucky with good teachers, but surely there are others out there who also had a very different experience from what you outline? To be fair, I don't know how things work today, but at least at that time it actually felt like I got use out of what I was taught in those classes, compared to most other stuff.

dsign · a day ago
This is, in my opinion, quite accurate.

In the world of software development I meet a breed of Swedish devs younger than 30 who can't write code very well, but who can wax lyrical about Jira tickets and software methodologies and do all sorts of things to get into a management position without having to write code. The end result is toxic teams where the seniors and the devs brought in from India write all the code while the juniors all play software architect, scrum master and product owner.

Not everybody is like that: seniors tend to be reliable and practical, and some juniors with programming-related hobbies are extremely competent and reasonable. But the share of "waxers" is big enough to be worrying.


tmcdos · a day ago
I have heard that in the Netherlands there used to be (not sure if it still exists) a system where you have, for example, 4 rooms of children. Room A contains all the children who are ahead of rooms B, C and D. If a child in room B learns quickly, the child is moved to room A. However, if the child falls behind the other children in room B, that child is moved to room C. The same goes for room C: those who cannot catch up are moved to room D. In this way everyone learns at max capacity. Those who can learn faster and better are not slowed down by those who cannot (or do not want to) keep the pace. Everyone is happy: children, teachers, parents, community.
Frieren · a day ago
> The results have been... not great.

Sweden is 19th in the PISA rankings, and it is in the upper section of all education indexes. There has been a worldwide decline in scores, but that has nothing to do with the Swedish education system. (That does not mean Sweden should not continue monitoring it and bringing in improvements.)

From Swedish news: https://www.sverigesradio.se/artikel/swedish-students-get-hi...

- Swedish students' skills in maths and reading comprehension have taken a drastic downward turn, according to the latest PISA study.

- Several other countries also saw a decline in their PISA results, which are believed to be a consequence of the Covid-19 pandemic.

whizzter · a day ago
Considering our past, and the Finnish progress (in the 80s/90s they considered following us, as they had done before, but stopped), 19th is a disappointment.

Having teenagers who have been through most of primary and secondary school, I kind of agree with GP, especially when it comes to math etc.

Teaching concepts and ideas is _great_, and it's what we need in order to manage advanced topics as adults. HOWEVER, if the foundations are shaky due to too little repetition of the basics (which is seemingly frowned upon in the system), then being taught to think about abstract concepts doesn't help much, because the tools to understand them aren't good enough.

cess11 · a day ago
One should note that from the nineties onwards we put a large portion of our kids' education on the stock exchange and in the hands of upper class freaks instead of experts.
kace91 · a day ago
I think there’s a balance to be had. My country (Spain) is the very opposite, with everything from university access to civil service exams being memory focused.

The result is usually bottom-of-the-barrel performance in the subjects that don't fit that model well, mostly languages and math - the latter being the main issue, as it becomes a bottleneck for teaching many other subjects.

It also creates a tendency for people to take what they learn as truth, which becomes an issue when they rely on less reputable sources later in life - think, for example, of a person taking a homeopathy course.

Lots of parroting and cargo-culting, paired with limited cultural exposure due to monolingualism, is a bad combination.

oerdier · a day ago
Check out E.D. Hirsch Jr.'s work, e.g. 'Why Knowledge Matters'.
throw8349498 · a day ago
> what to be critical about

Media can fill that gap. People should be critical about global warming, antivax, anti-Israel, anti-communism, racism, hate, white men, anti-democracy, Russia, China, Trump...

This thing is bad, I hate it, problem solved! Modern critical thinking is pretty simple!

In the future the government can provide a daily RSS feed of things to be critical about. You can reduce the national schooling system to a single VPS server!

0points · a day ago
Indeed, the Swedish school system is an ongoing disaster.
Salgat · a day ago
The problem is: in a capitalist society, which company is going to donate its time and money to teaching a junior developer who will simply go to another company for double the pay after 2 years?
JackFr · a day ago
I think that’s a disingenuous take. Earlier in the piece the AWS CEO specifically says we should teach everyone the correct ways to build software despite the ubiquity of AI. The quote about creative problem solving was with respect to how to hire/get hired in a world where AI can let literally anyone code.
staticelf · a day ago
> The results have been... not great.

Well, I kind of disagree. The results are bad mainly because we have had mass immigration from low-education countries with extremely bad cultures.

If you look at the numbers, it's easy to say Swedes are stupid, when in reality ethnic Swedes do very well in school.

RcouF1uZ4gsC · a day ago
Here is the thing though.

You can’t teach critical thinking like that.

You need to teach hard facts and then people can learn critical thinking inductively from the hard facts with some help.

moi2388 · 2 days ago
I completely agree.

On a side note... y'all must be prompt wizards if you can actually use the LLM code.

I use it for debugging sometimes to get an idea, or for a quick sketch of a UI.

As for actual code... what it writes is a huge mess of spaghetti: overly verbose, with serious performance and security risks, and a complete misunderstanding of pretty much every design pattern I give it.

brushfoot · 2 days ago
I read AI coding negativity on Hacker News and Reddit with more and more astonishment every day. It's like we live in different worlds. I expect the breadth of tooling is partly responsible. What it means to you to "use the LLM code" could be very different from what it means to me. What LLM are we talking about? What context does it have? What IDE are you using?

Personally, I wrote 200K lines of my B2B SaaS before agentic coding came around. With Sonnet 4 in Agent mode, I'd say I now write maybe 20% of the ongoing code from day to day, perhaps less. Interactive Sonnet in VS Code and GitHub Copilot Agents (autonomous agents running on GitHub's servers) do the other 80%. The more I document in Markdown, the higher that percentage becomes. I then carefully review and test.

systemf_omega · 2 days ago
> B2B SaaS

Perhaps that's part of it.

People here work on all kinds of industries. Some of us are implementing JIT compilers, mission-critical embedded systems or distributed databases. In code bases like this you can't just wing it without breaking a million things, so LLM agents tend to perform really poorly.

malfist · 2 days ago
Perhaps the issue is that you were used to writing 200k lines of code. Most engineers would be aghast at that. Lines of code are a debit, not a credit.
Ballas · 2 days ago
There is definitely a divide between the users for whom it works and those for whom it doesn't. I suspect it comes down to what language and tooling you use. People doing web-related or Python work seem to be doing much better than people doing embedded C or C++. Similarly, doing C++ in a popular framework like Qt also yields better results. When the system design is not pre-defined or rigid like it is in Qt, you get completely unmaintainable code as a result.

If you are writing code that is/can be "heavily borrowed" - things that have complete examples on Github, then an LLM is perfect.

s1mplicissimus · 2 days ago
It's interesting how LLM enthusiasts will point to problems like IDE, context, model etc. but not the one thing that really matters:

Which problem are you trying to solve?

At this point my assumption is they learned that talking about this question will very quickly reveal that "the great things I use LLMs for" are actually personal throwaway pieces, not to be extended above triviality or maintained over longer than a year. Which, I guess, doesn't make for a great sales pitch.

codingdave · 2 days ago
And also ask: "How much money do you spend on LLMs?"

In the long run, that is going to be what drives their quality. At some point the conversation is going to evolve from whether or not AI-assisted coding works to what the price point is to get the quality you need, and whether or not that price matches its value.

tetha · 2 days ago
I deal with a few code bases at work and the quality differs a lot between projects and frameworks.

We have 1-2 small python services based on Flask and Pydantic, very structured and a well-written development and extension guide. The newer Copilot models perform very well with this, and improving the dev guidelines keep making it better. Very nice.

We also have a central configuration of applications in the infrastructure and what systems they need. A lot of similarly shaped JSON files, now with a well-documented JSON schema (which is nice to have anyway). Again, very high quality. Someone recently joked we should throw these service requests at a model and let it create PRs to review.
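A documented schema pays off even before the LLM angle: it makes the shape of those JSON files mechanically checkable. A minimal stdlib-only sketch of that idea (the field names are invented, not the commenter's actual config, and a real setup would use a proper JSON Schema validator such as the jsonschema package):

```python
import json

# Invented per-service config shape: required key -> expected Python type.
SERVICE_SCHEMA = {"name": str, "replicas": int, "needs": list}

def validate_service(doc: dict) -> list[str]:
    """Return a list of problems; an empty list means the doc fits the shape."""
    problems = []
    for key, expected in SERVICE_SCHEMA.items():
        if key not in doc:
            problems.append(f"missing key: {key}")
        elif not isinstance(doc[key], expected):
            problems.append(f"{key}: expected {expected.__name__}")
    return problems

good = json.loads('{"name": "billing", "replicas": 2, "needs": ["postgres"]}')
bad = json.loads('{"name": "billing", "replicas": "two"}')
```

The same schema text then doubles as context for the model, which is presumably why the similarly shaped, well-documented files work so well here.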

But currently I'm working in Vector and its Vector Remap Language (VRL)... it's enough of a mess that I'm faster working without any Copilot "assistance". I think the main issue is that there is very little VRL code out in the open, and the remaps depend on a lot of unseen context, which one would have to work on feeding to the LLM. I've had similar experiences with OPA and a few more of these DSLs.

lcnPylGDnU4H9OF · 2 days ago
> It's like we live in different worlds.

There is the huge variance in prompt specificity as well as the subtle differences inherent to the models. People often don't give examples when they talk about their experiences with AI so it's hard to get a read on what a good prompt looks like for a given model or even what a good workflow is for getting useful code out of it.

skydhash · 2 days ago
> Personally, I wrote 200K lines of my B2B SaaS

That would probably be 1000 line of Common Lisp.

albrewer · 2 days ago
My AI experience has varied wildly depending on the problem I'm working on. For web apps in Python, they're fantastic. For hacking on old engineering calculation code written in C/C++, it's an unmitigated disaster and an active hindrance.
haburka · 2 days ago
It’s not just you, I think some engineers benefit a lot from AI and some don’t. It’s probably a combination of factors including: AI skepticism, mental rigidity, how popular the tech stack is, and type of engineering. Some problems are going to be very straightforward.

I also think it’s that people don’t know how to use the tool very well. In my experience I don’t guide it to do any kind of software pattern or ideology. I think that just confuses the tool. I give it very little detail and have it do tasks that are evident from the code base.

Sometimes I ask it to do rather large tasks and occasionally the output is like 80% of the way there and I can fix it up until it’s useful.

aDyslecticCrow · 2 days ago
I think it's down to language and domain more than tools.

No model I've tried can write, usefully debug, or even explain CMake. (It invents new syntax if it gets stuck; I often have to prompt multiple AIs to know whether even the first response in the context was made up.)

My luck with embedded C has been atrocious for existing codebases (burning millions of tokens), but passable for small scripts (Arduino projects).

My experience with Python is much better: suggesting relevant libraries and functions, debugging odd errors, even writing small scripts on its own. Even the original GitHub Copilot, which I got access to early, was excellent at Python.

A lot of the people who seem to have fully embraced agentic vibe-coding are in the web or node.js domain, which I haven't worked in myself since pre-AI times.

I've tried most (free or trial) major models and schemes in the hope of finding any of them useful, but haven't found much use yet.

johnnyanmac · 2 days ago
> It's like we live in different worlds.

We probably do, yes. The web domain, a cybersecurity firm, and embedded will have very different experiences, because there is clearly much more code to train on in some domains than in others (for obvious reasons). Colleagues at the same company, or even on the same team, can have drastically different experiences because they are in the weeds on different parts of the tech.

> I then carefully review and test.

If most people did this, I would have 90% fewer issues with AI. But as you'd expect, people see shortcuts and use them to cut corners, not to buy more time to polish the edges.

oblio · 2 days ago
What tech stack do you use?

Betting in advance that it's JavaScript or Python, probably with very mainstream libraries or frameworks.

LauraMedia · a day ago
As a practical example, I've recently tried out v0's new updated systems to scaffold a very simple UI where I can upload screenshots from videogames I took and tag them.

The resulting code included an API call that ran arbitrary SQL queries against the DB. Even after I pointed this out, the API call was not removed or at least secured with authentication rules, but instead /just/hidden/through/obscure/paths...

rozgo · 2 days ago
It could be the language. Almost 100% of my code is written by AI; I supervise as it creates and steer it in the right direction. I configure the code agents with examples of all the frameworks I'm using. My choice of Rust might be disproportionately improving the results: cargo, the expected code structure, the examples, docs and error messages are so well thought out in Rust that the coding agents can get very far. I work on 2-3 projects at once, cycling through them to supervise their work. Most of my work is simulation, physics and complex robotics frameworks. It works for me.
Fergusonb · a day ago
I agree. It's like they looked at GPT-3.5 one time and said "this isn't for me".

The big 3 - Opus 4.1, GPT-5 High and Gemini 2.5 Pro - are astonishing in their capabilities; it's just a matter of providing the right context and instructions.

Basically, "you're holding it wrong"

physicsguy · 2 days ago
Do you not think part of it is just whether employers permit it or not? My conglomerate employer took a long time to get started and has only just rolled out agent mode in GH Copilot, but even that is in some reduced/restricted mode vs the public one. At the same time we have access to lots of models via an internal portal.
abm53 · a day ago
I am also constantly astonished.

That said, observing attempts by skeptics to "unsuccessfully" prompt an LLM has been illuminating.

My reaction is usually either:

- I would never have asked that kind of question in the first place.

- The output you claim is useless looks very useful to me.

bubblyworld · a day ago
I think people react to AI with strong emotions, which can come from many places: anxiety or uncertainty about the future is a common one, and strong dislike of change is another (especially among autists, who, judging by me and my friend circle, are quite common around here). Maybe that explains a lot of the spicy hot takes you see here and on lobsters? People are unwilling to think clearly or argue in good faith when they are emotionally charged (see any political discussion). You basically need to ignore the extremist takes entirely, both positive and negative, to get a pulse on what's going on.

If you look, there are people out there approaching this stuff with more objectivity than most (mitsuhiko and simonw come to mind, have a look through their blogs, it's a goldmine of information about LLM-based systems).

mirkodrummer · 2 days ago
B2B SaaS in most cases is a sophisticated mask over some structured data, perhaps with great UX, automation and convenience, so I can see LLMs being more successful there, all the more because there is more training data and many processes are streamlined. Not all domains are equal: try to develop a serious game with LLMs - not yet another simple, broken arcade - and you'll have a different take.
cobbzilla · a day ago
It really depends, and can be variable, and this can be frustrating.

Yes, I’ve produced thousands of lines of good code with an LLM.

And also yes, yesterday I wasted over an hour trying to define a single service block for my docker-compose setup. Constant hallucination; eventually I had to cross-check everything and discovered it had no idea what it was doing.

I’ve been doing this long enough to be a decent prompt engineer. Continuous vigilance is required, which can sometimes be tiring.

moi2388 · 2 days ago
GitHub copilot, Microsoft copilot, Gemini, loveable, gpt, cursor with Claude models, you name it.
deterministic · 17 hours ago
Lines of code is not a useful metric for anything. Especially not productivity.

The less code I write to solve a problem the happier I am.

kortilla · 2 days ago
It could be because your job is boilerplate derivatives of well solved problems. Enjoy the next 1 to 2 years because yours is the job Claude is coming to replace.

Stuff Wordpress templates should have solved 5 years ago.

typpilol · 2 days ago
Honestly, the best way to get good code, at least with TypeScript and JavaScript, is to have like 50 ESLint plugins.

That way the linter constantly yells at Sonnet 4 and forces the code into at least a better state.

If anyone is curious, I have a massive ESLint config for TypeScript that really gets good code out of Sonnet.

But before I started doing this, the code it wrote was so buggy, and it was constantly trying to duplicate functions into separate files, etc.


eloisius · 2 days ago
I agree. AI is a wonderful tool for making fuzzy queries over vast amounts of information. More and more I'm finding that Kagi's Assistant is my first stop before an actual search. It can supply vocabulary I'm lacking, which I can then use to comb more pages until I find what I need.

But I have not yet been able to consistently get value out of vibe coding. It's great for one-off tasks. I use it to create matplotlib charts just by telling it what I want and showing it the schema of the data I have. It nails that about 90% of the time. I have it spit out close-ended shell scripts, like recently I had it write me a small CLI tool to organize my Raw photos into a directory structure I want by reading the EXIF data and sorting the images accordingly. It's great for this stuff.
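A close-ended tool like the photo organizer above is mostly date plumbing. Here is a hypothetical sketch of just the sorting logic (the filenames and EXIF-style date strings are invented; a real version would read DateTimeOriginal via something like Pillow or exifread and then actually move the files):

```python
from pathlib import Path

def target_folder(root: Path, capture_date: str) -> Path:
    """Map an EXIF-style date ('2024:07:01 10:30:00') to root/YYYY/MM."""
    date_part = capture_date.split(" ")[0]  # '2024:07:01'
    year, month, _day = date_part.split(":")
    return root / year / month

def plan_moves(photos: dict[str, str], root: Path) -> dict[str, Path]:
    """Compute filename -> destination folder without touching the disk."""
    return {name: target_folder(root, date) for name, date in photos.items()}

moves = plan_moves(
    {"IMG_0001.RAF": "2024:07:01 10:30:00", "IMG_0002.RAF": "2023:12:24 09:00:00"},
    Path("sorted"),
)
```

Keeping the "plan" step pure like this makes the script easy to dry-run and verify before it rearranges anything, which is exactly the kind of quickly-checkable output these tools are good at.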

But anything bigger and it seems to do useless crap: it creates data models that already exist in the project, makes unrelated changes, hallucinates API functions that don't exist. It's just not worth it to me to have to check its work. By the time I've done that, I could have written it myself, and writing the code is usually the most pleasurable part of the job to me.

I think the way I'm finding LLMs to be useful is that they are a brilliant interface to query with, but I have not yet seen any use cases I like where the output is saved, directly incorporated into work, or presented to another human that did not do the prompting.

nwienert · 2 days ago
Have you tried Opus? It's what got me past using LLMs only marginally. Standard disclaimers apply in that you need to know what it's good for and guide it well, but there's no doubt at this point it's a huge productivity boost, even if you have high standards - you just have to tell it what those standards are sometimes.
kstenerud · a day ago
I just had Claude Sonnet 4 build this for me: https://github.com/kstenerud/orb-serde

Using the following prompt:

    Write a rust serde implementation for the ORB binary data format.

    Here is the background information you need:

    * The ORB reference material is here: https://github.com/kstenerud/orb/blob/main/orb.md
    * The formal grammar describing ORB is here: https://github.com/kstenerud/orb/blob/main/orb.dogma
    * The formal grammar used to describe ORB is called Dogma.
    * Dogma reference material is here: https://github.com/kstenerud/dogma/blob/master/v1/dogma_v1.0.md
    * The end of the Dogma description document has a section called "Dogma described as Dogma", which contains the formal grammar describing Dogma.

    Other important things to remember:

    * ORB is an extension of BONJSON, so it must also implement all of BONJSON.
    * The BONJSON reference material is here: https://github.com/kstenerud/bonjson/blob/main/bonjson.md
    * The formal grammar describing BONJSON is here: https://github.com/kstenerud/bonjson/blob/main/bonjson.dogma
Is it perfect? Nope, but it's 90% of the way there. It would have taken me all day to build all of these ceremonious bits, and Claude did it in 10 minutes. Now I can concentrate on the important parts.

WA · a day ago
First and foremost, it's a 404. Probably a mistake, but I chuckled a bit at someone saying "AI built this thing and it's 90% there" and then posting a dead link.
JeremyNT · 2 days ago
What tooling are you using?

I use aider, and your description doesn't match my experience, even with a relatively bad-at-coding model (GPT-5). It does actually work, and it generates "good" code - it even matches the style of the existing code.

Prompting is very important, and in an existing code base the success rate is immensely higher if you can hint at a specific implementation - i.e. something a senior who is familiar with the codebase somewhat can do, but a junior may struggle with.

It's important to be clear-eyed about where we are. I think overall I am still faster doing things manually than iterating with aider on an existing code base, but the margin is not very much, and it's only going to get better.

Even though it can do some work a junior could do, it can't ever replace a junior human, because a junior human also goes to meetings, drives discussions, and eventually becomes a senior! But management may not care about that fact.

foxyv · 2 days ago
The one thing I've found AI is good at is parsing through the hundreds of ad ridden, barely usable websites for answers to my questions. I use the Duck Duck Go AI a lot to answer questions. I trust it about as far as I can throw the datacenter it resides in, but it's useful for quickly verifiable things. Especially stuff like syntax and command line options for various programs.
oblio · 2 days ago
> The one thing I've found AI is good at is parsing through the hundreds of ad ridden, barely usable websites for answers to my questions.

One thing I can guarantee you is that this won't last. No sane MBA will ignore that revenue stream.

Image hosting services, all over again.

lbrito · 2 days ago
It's one of those deals where you get out what you put in.

If you spend a lot of time thinking about what you want, describing the inner workings, edge cases, architecture and library choices, and put that into a thoughtful markdown file, then maybe after a couple of iterations you will get half-decent code. It certainly makes a difference between that and a short "implement X" prompt.

But it makes one think: at that point (writing a good prompt that is basically a spec), you've basically solved the problem already. The LLM in this case is little more than a glorified electric typewriter. It types faster than you, but you did most of the thinking.

jeremyjh · 2 days ago
Right, and then after you do all the thinking and the specs, you have to read, understand, and own every single line it generated. And speaking for myself, I am nowhere near as good at thinking through code I am reviewing as code I am writing.

Other people will put up PRs full of code they don't understand. I'm not saying everyone who reports success with LLMs is doing that, but I hear about it a lot. I call those people clowns, and I'd fire anyone who did that.

AIorNot · 2 days ago
I’ve built 2 SaaS applications with LLM coding, one of which was expanded and released to enterprise customers and is in good use today. Note I’ve got years of dev experience, I follow context and documentation prompts, and I’m using common LLM languages like TypeScript, Python, and React on AWS infra.

Now it requires me to fully review all code and understand what the LLM is doing at the functional, class, and API level. In fact it works better at the method or component level for me, and I had a lot of cleanup work (and lots of frustration with the models) on the codebase, but overall there’s no way I could equal the velocity I have now without it.

bmcahren · 2 days ago
For a large enterprise SaaS with millions of lines of code, I think the other important step is to reject code your engineers submit that they can't explain. I myself reject, I'd say, 30% of the code the LLMs generate, but the power is in being able to stay focused on larger problems while rapidly implementing the smaller accessory functions that enable that continued work, without stopping to add another engineer to the task.

I've definitely 2-4X'd depending on the task. For small tasks I've definitely 20X'd myself for some features or bugfixes.

conradfr · 2 days ago
After all, the exciting part of coding has always been code reviews.
dolebirchwood · a day ago
I do frontend work (React/TypeScript). I barely write my own code anymore, aside from CSS (the LLMs have no aesthetic sensibilities). Just prompting with Gemini 2.5 Pro. Sometimes Sonnet 4.

I don't know what to tell you. I just talk to the thing in plain but very specific English and it generally does what I want. Sometimes it will do stupid things, but then I either steer it back in the direction I want or just do it myself if I have to.

infecto · 2 days ago
I agree with the article but also believe LLM coding can boost my productivity and ability to write code over long stretches. Sure, getting it to write a whole feature is a high-risk proposition. But getting it to build out a simple API with examples above and below it? Piece of cake; it takes a few seconds and would have taken me a few minutes.
herpdyderp · 2 days ago
The bigger the task, the more messy it'll get. GPT5 can write a single UI component for me no problem. A new endpoint? If it's simple, no problem. The risk increases as the complexity of the task does.
JustExAWS · 2 days ago
I break complex task down into simple tasks when using ChatGPT just like I did before ChatGPT with modular design.
richardlblair · a day ago
AI is really good at writing tests.

AI is also pretty good if you get it to do small chunks of code for you. This means you come with the architecture, the implementation details, and how each piece is structured. When I walk AI through each unit of code I find the results are better, and it's easier for me to address issues as I progress.

This may seem somewhat redundant, though. Sometimes it's faster to just do it yourself. But with a toddler who hates sleep, I've found I've been able to maintain my velocity... even on days I get 3 hrs of sleep.

chrischen · 2 days ago
The AI agents tend to fail for me with open ended or complex tasks requiring multiple steps. But I’ve found it massively helpful if you have these two things: 1) a typed language… better if strongly typed 2) your program is logically structured and follows best practices and has hierarchical composition.

The agents are able to iterate and work with the compiler until it gets it right and the combination of 1 and 2 means there’s fewer possible “right answers” to whatever problem I have. If i structure my prompte to basically fill in the blanks of my code in specific areas it saves a lot of time. Most of what I prompt is something already done, and usually 1 google search away. This saves me the time to search it up, figure out whatever syntax I need, etc.
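As a hypothetical sketch of the "fill in the blanks" style described above (every name here is invented for illustration, not taken from the commenter's codebase): a typed stub constrains what counts as a valid completion, so an agent can iterate against the type checker until the blank is filled. A plain reference body is shown where the agent's completion would go.

```python
from dataclasses import dataclass


@dataclass
class Order:
    order_id: str
    amount_cents: int


def total_by_customer(orders: dict[str, list[Order]]) -> dict[str, int]:
    """Sum order amounts per customer.

    The signature and dataclass narrow the space of "right answers":
    an agent prompted to fill in this body must produce something the
    type checker accepts, which is the effect described above.
    """
    return {
        customer: sum(o.amount_cents for o in batch)
        for customer, batch in orders.items()
    }
```

The stricter the types, the fewer plausible-but-wrong completions survive compilation, which is why point 1 (a typed language) compounds with point 2 (hierarchical structure).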

phatfish · a day ago
I don't code every day and am not an expert. Supposedly the sort of casual coder that LLMs are supposed to elevate into senior engineers.

Even I can see they have big blind spots. As the parent said, I get overly verbose code that does run but is nowhere near the best solution. For really common problems and patterns I usually get a good answer. Need a more niche problem solved? You better brush up your Googling skills and do some research if you care about code quality.

randomjoe2 · 20 hours ago
If you actually believe this, you're either using bad models or just terrible at prompting and giving proper context. Let me know if you need help, I use generated code in every corner of my computer every day
gspencley · a day ago
My favourite code smell that LLMs love to introduce is redundant code comments.

// assign "bar" to foo

const foo = "bar";

They love to do that shit. I know you can prompt it not to. But the amount of PRs I'm reviewing these days that have those types of comments is insane.

burnte · a day ago
I see LLM coding as hinting on steroids. I don't trust it to actually write all of my code, but sometimes it can get me started, like a template.
Kiro · a day ago
The code LLMs write is much better than mine. Way less shortcuts and spaghetti. Maybe that means that I am a lousy coder but the end result is still better.
bdcravens · 2 days ago
I haven't had that experience, but I tend to keep my prompts very focused with a tightly limited scope. Put a different way, if I had a junior or mid level developer, and I wanted them to create a single-purpose class of 100-200 lines at most, that's how I write my prompts.
IT4MD · 2 days ago
Likewise with Powershell. It's good to give you an approach or some ideas, but copy/paste fails about 80% of the time.

Granted, I may be an inexpert prompter, but at the same time, I'm asking for basic things, as a test, and it just fails miserably most of the time.

jimbo808 · a day ago
I've been pondering this for a while. I think there's an element of dopamine that LLMs bring to the table. They probably don't make a competent senior engineer much more productive if at all, but there's that element of chance that we don't get a lot of in this line of work.

I think a lot of us eventually arrive at a point where our jobs get a bit boring and all the work starts to look like some permutation of past work. If instead of going to work and spending two hours adding some database fields and writing some tests, you had the opportunity to either:

A) Do the thing as usual in the predictable two hours

B) Spend an hour writing a detailed prompt as if you were instructing a junior engineer on a PIP to do it, doing all the typical cognitive work you'd have done normally and then some. Then, instead of typing out the code in the next hour, you press enter and, tada, the code has been typed and even kinda sorta works, after this computer program was "flibbertigibbeting" for just 10 minutes. Wow!

Then you get that sweet dopamine hit that tells you you're a really smart prompt engineer who did a two hour task in... cough 10 minutes. You enjoy your high for a bit, maybe go chat with some subordinate about how great your CLAUDE.md was and if they're not sure about this AI thing it's just because they're bad at prompt engineering.

Then all you have to do is cross your t's and dot your i's and it's smooth sailing from there. Except it's not. Because you (or another engineer) will probably find architectural/style issues when reviewing the code, issues you explicitly told it to avoid but it ignored, and you'll have to fix those. You'll also probably be sobering up from your dopamine rush by now, and realize that you have to review all the other lines of AI-generated code, which you could have just correctly typed once.

But now you have to review with an added degree of scrutiny, because you know it's really good at writing text that looks beautiful but is ever so slightly wrong, in ways that might even slip through code review and cause the company to end up in the news.

Alternatively, you could yolo and put up an MR after a quick smell test, making some other poor engineer do your job for you (you're a 10x now, you've got better things to do anyway). Or better yet, just have Claude write the MR, and don't even bother to read it. Surely nobody's going to notice your "acceptance criteria" section says to make sure the changes have been tested on both Android and Apple, even though you're building a microservice for an AI-powered smart fridge (mostly just a fridge, except every now and then it starts shooting ice cubes across the room at mach 3). Then three months later, someone who never realized there are three identical "authenticate" functions spends an hour scratching their head about why the code they're writing is not doing anything (because it's actually calling another, redundant function that nobody ever catches in MR review, since it's not reflected in a diff).

But yeah, that 10 minute AI magic trick sure felt good. There are times when work is dull enough that option B sounds pretty good, and I'll dabble. But I'm not sure where this AI stuff leads, and I'm pretty confident it won't be taking over our jobs any time soon (an ever-increasing quota of H1Bs and STEM OPT student visas working for 30% less pay, on the other hand, might).

tempodox · 2 days ago
It's just that being the dumbest thing we ever heard still doesn't stop some people from doing it anyway. And that goes for many kinds of LLM application.
panny · 2 days ago
I think it has a lot to do with skill level. Lower skilled developers seem to feel it gives them a lot of benefit. Higher skilled developers just get frustrated looking at all the errors it produces.
platevoltage · 2 days ago
This is exactly how I use it.
larodi · 2 days ago
I must be a prompt wizard then.
threecheese · 2 days ago
I hate to admit it, but it is the prompt (call it context if ya like; includes tools). Model is important, window/tokens are important, but direction wins. Also codebase is important: greenfield gets much better results, so much so that we may throw away 40 years of wisdom designed to help humans code amongst each other and use design patterns that will disgust us.
shaunxcode · 2 days ago
“we”
dionian · a day ago
Could the quality of your prompt be related to our differing outcomes? I have decades of pre-AI experience and I use AI heavily. If I let it go off on its own, it's not as good as when I constrain and hand-hold it.
paulddraper · 2 days ago
> ya’ll must be prompt wizards

Thank you, but I don’t feel that way.

I’d ask you a lot of details…what tool, what model, what kind of code. But it’d probably take a lot to get to the bottom of the issue.

m3kw9 · 2 days ago
Not only a prompt wizard, you need to know what prompts are bad or good and also use bad/lazy prompts to your advantage
uh_uh · 2 days ago
Which model?
eatsyourtacos · 2 days ago
Sounds like you are using it entirely wrong then...

Just yesterday I uploaded a few files of my code (each about 3000+ lines) into a GPT-5 project and asked for assistance in changing a lot of database calls into a caching system, and it proceeded to create a full 500-line file with all the caching objects and functions I needed. Then we went section by section through the main 3000+ line file to change parts of the database queries into the cached version. [I didn't even really need to do this, it basically detected everything I would need to change at once and gave me most of it, but I wanted to do it in smaller chunks so I was sure what was going on]

Could I have done this without AI? Sure.. but this was basically like having a second pair of eyes and validating what I'm doing. And saving me a bunch of time so I'm not writing everything from scratch. I have the base template of what I need then I can improve it from there.

All the code it wrote was perfectly clean.. and this is not a one off, I've been using it daily for the last year for everything. It almost completely replaces my need to have a junior developer helping me.
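For illustration only (this is not the commenter's actual code, and the class name and TTL policy are assumptions): the kind of refactor described, replacing direct database calls with a cached lookup, can be sketched as a minimal in-process read-through cache.

```python
import time
from typing import Any, Callable

class ReadThroughCache:
    """Wraps a database query function so repeated reads hit memory.

    A sketch of the simplest possible policy: time-based expiry plus
    explicit invalidation after writes.
    """

    def __init__(self, fetch: Callable[[str], Any], ttl_seconds: float = 60.0):
        self._fetch = fetch          # the original database call
        self._ttl = ttl_seconds
        self._store: dict[str, tuple[float, Any]] = {}

    def get(self, key: str) -> Any:
        entry = self._store.get(key)
        if entry is not None and time.monotonic() - entry[0] < self._ttl:
            return entry[1]          # cache hit: skip the database
        value = self._fetch(key)     # cache miss: fall through to the DB
        self._store[key] = (time.monotonic(), value)
        return value

    def invalidate(self, key: str) -> None:
        self._store.pop(key, None)   # call after writes so reads stay fresh
```

Note this only helps within one process; the reply below raises exactly the harder question of invalidating across multiple instances, which a sketch like this does not address.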

jayd16 · 2 days ago
You mean like it turned on Hibernate or it wrote some custom rolled in app cache layer?

I usually find these kinds of caching solutions to be extremely complicated (well the cache invalidating part) and I'm a bit curious what approach it took.

You mention it only updated a single file, so I guess it's not using any updates to the session handling, so either sticky sessions are not assumed or something else is going on. So then how do you invalidate the app-level cache for a user across all machine instances? I have a lot of trauma from the old web days of people figuring this out, so I'm really curious to hear how this AI one-shot it in a single file.

lubesGordi · 2 days ago
I know, I don't understand what problems people are having with getting usable code. Maybe the models don't work well with certain languages? Works great with C++. I've gotten thousands of lines of clean code, compiling on the first try and obviously correct, from ChatGPT, Gemini, and Claude.

I've been assuming the people who are having issues are junior devs, who don't know the vocabulary well enough yet to steer these things in the right direction. I wouldn't say I'm a prompt wizard, but I do understand context and the surface area of the things I'm asking the llm to do.

rootnod3 · 2 days ago
How large is that code-base overall? Would you be able to let the LLM look at the entirety of it without it crapping out?

It definitely sounds nice to go and change a few queries, but did it also consider the potential impacts in other parts of the source or in adjacent running systems? The query itself here might not be the best example, but you get what I mean.

pico303 · 2 days ago
At least one CEO seems to get it. Anyone touting this idea of skipping junior talent in favor of AI is dooming their company in the long run. When your senior talent leaves to start their own companies, where will that leave you?

I’m not even sure AI is good for any engineer, let alone junior engineers. Software engineering at any level is a journey of discovery and learning. Any time I use it I can hear my algebra teacher telling me not to use a calculator or I won’t learn anything.

But overall I’m starting to feel like AI is simply the natural culmination of US economic policy for the last 45 years: short term gains for the top 1% at the expense of a healthy business and the economy in the long term for the rest of us. Jack Welch would be so proud.

bigbadfeline · 2 days ago
> When your senior talent leaves to start their own companies, where will that leave you?

The CEO didn't express any concerns about "talent leaving". He is saying "keep the juniors" but he's implying "fire the seniors". This is in line with long-standing industry trends and it's confirmed by the following quote from the OP:

>> [the junior replacement] notion led to the “dumbest thing I've ever heard” quote, followed by a justification that junior staff are “probably the least expensive employees you have” and also the most engaged with AI tools.

He is pushing for more of the same, viewing competence and skill as threats and liability to be "fixed". He's warning the industry to stay the course and keep the dumbing-down game moving as fast as possible.

johnnyanmac · 2 days ago
Well that's even stupider. What do you do when your juniors get better at using your tools?

The 2010s tech boom happened because big tech knew a good engineer is worth their weight in gold, and not paying them well meant they'd be headhunted after as little as a year of work. What's gonna happen when this repeats (if we're assuming AI makes things much more efficient)?

----

And that's my kindest interpretation. One that assumes that a junior and a senior using a prompt will have a very close gap to begin with. Even seniors seem to struggle right now with current models working at scale on legacy code.

threecheese · 2 days ago
100%, and this is him selling the new batch of AWS agent tools. If your product requirements + “Well Architected” NFRs are expressed as input, AWS wants to run it and extract your cost of senior engineers as value for him.
Nicook · a day ago
Also fits in very well with Amazon's famously low average tenure and hiring practices.
tombert · 2 days ago
I think AI has overall helped me learn.

There are lots of personal projects that I have wanted to build for years but have pushed off because the “getting started cost” is too high, I get frustrated and annoyed and don’t get far before giving up. Being able to get the tedious crap out of the way lowers the barrier to entry and I can actually do the real project, and get it past some finish line.

Am I learning as much as I would had I powered through it without AI assistance? Probably not, but I am definitely learning more than I would if I had simply not finished (or even started) the project at all.

skydhash · 2 days ago
What was your previous approach? From what I've seen, a lot of people are very reluctant to pick up a book or read through documentation before they try stuff. And then they get exposed to a "cryptic" error message and throw in the towel.
latexr · 2 days ago
> At least one CEO seems to get it.

> (…)

> I’m not even sure AI is good for any engineer

In that case I’m not sure you really agree with this CEO, who is all-in on the idea of LLMs for coding, going so far as to proudly say 80% of engineers at AWS use it and that that number will only rise. Listen to the interview, you don’t even need ten minutes.

sky2224 · 2 days ago
> I’m not even sure AI is good for any engineer, let alone junior engineers. Software engineering at any level is a journey of discovery and learning.

Yes, but when there are certain mundane things in that discovery that are hindering my ability to get work done, AI can be extremely useful. It can be incredibly helpful in giving high level overviews of code bases or directing me to parts of codebases where certain architecture lives. Additionally, it exposes me to patterns and ideas I hadn't originally thought of.

Now, if I just take whatever is spit out by AI as gospel, then I'd be inclined to agree with you in saying AI is bad, but if you use it correctly, like any other tool, it's fantastic.

bushbaba · a day ago
Cause Matt comes from a technical background. Most CEOs dont.
devoutsalsa · a day ago
The whole premise of thinking we don't need juniors is just silly. If there are no juniors, eventually there will be no seniors. AI slop ain't gonna un-slop itself.
JustExAWS · 2 days ago
> When your senior talent leaves to start their own companies, where will that leave you?

In the case of Amazon with a shit ton of money to throw at a team of employees to crush your little startup?

murukesh_s · 2 days ago
Imagine you have a shit ton of money but only agents that generate 10% bad code? You aren't crushing or beating anyone.
sumoboy · 2 days ago
You also risk the senior talent who stay but don't want to change or adopt, at least with any urgency. AI will accelerate that journey of discovery and learning, so juniors are going to learn super fast.
latexr · 2 days ago
That’s still to be determined. Blindly accepting code suggestions thrown at you without understanding them is not the same thing as learning.
johnnyanmac · 2 days ago
>will accelerate that journey of discovery and learning,

Okay, but what about work output? That's seems to be the only thing business cares about.

Also, maybe it's the HN bias but I don't see this notion where old engineers are rejecting this en masse. More younger people will embrace it. But most younger people haven't mucked in legacy code yet (the lifeblood of any businesses).

jryio · 2 days ago
In the last few months we have worked with startups who have vibe coded themselves into an abyss. Either because they never made the correct hires in the first place or they let technical talent go. [1]

The thinking was that they could iterate faster, ship better code, and have an always on 10x engineer in the form of Claude code.

I've observed perfectly rational founders become addicted to the dopamine hit as they see Claude code output what looks like weeks or years of software engineering work.

It's overgenerous to allow anyone to believe AI can actually "think" or "reason" through complex problems. Perhaps we should be measuring time saved typing rather than cognition.

[1] vibebusters.com

gregoryl · 2 days ago
Shush please. I wasn't old enough to cash in on the Y2K contracting boons; I'm hoping the vibe coding 200k LOC b2b AI slop "please help us scale to 200 users" contracting gigs will be lucrative.
s_dev · a day ago
Completely agree, software developers need to be using agentic coding as a writing tool not as a thinking tool.
JustExAWS · 2 days ago
As if startups before LLMs were creating great code. Right now on the front page, a YC company is offering a “Founding Full Stack Engineer” $100K-$150K. What quality of code do you think they will end up with?

https://www.ycombinator.com/companies/text-ai/jobs/OJBr0v2-f...

zdragnar · 2 days ago
Notably, that is a company that... adds AI to group chats. Startups offering crap salaries with a vague promise of equity in a vague product idea with no moat are a dime a dozen, and have been well before LLMs came around.
DirkH · 2 days ago
Give it a year or 2. Its not like 2 years ago everyone wasn't saying it would be 10+ years before AI can do what it does now.
johnnyanmac · 2 days ago
>Its not like 2 years ago everyone wasn't saying it would be 10+ years before AI can do what it does now.

So far I don't see that notion disproved. AI still doesn't truly "reason with" nor understand the data it outputs.

lazarus01 · a day ago
> I think the skills that should be emphasized are how do you think for yourself?

Independent thinking is indeed the most important skill to have as a human. However, I sympathize with the younger generations, as they have become the primary target of this new technology, which looks to make money by completely replacing some of their thinking.

I have a small child and took her to see a Disney film. Google showed a very high quality long-form advert during the previews. The ad portrays a lonely young man looking for something to do in the evening that meets his explicit preferences. The AI suggests a concert; he gets there and locks eyes with an attractive young woman.

Sending a message to lonely young men that AI will help reduce loneliness. The idea that you don't have to put any effort into gaining adaptive social skills to cure your own loneliness is scary to me.

The advert is pure survivorship bias. For each success in curing boredom, how many failures are there, with lonely young depressed men talking to their phones instead of friends?

Critical thinking starts at home with the parents. Children will develop beliefs from their experience and confirm those beliefs with an authority figure. You can start teaching mindfulness to children at age 7.

Teaching children mindfulness requires a tremendous amount of patience. Now the consequence of lacking patience is outsourcing your child's critical thinking to AI.

petralithic · a day ago
You should read the story The Perfect Match from the book Paper Menagerie and other stories by Ken Liu, it goes into what you mentioned about Google.
lazarus01 · a day ago
Thanks for sharing.

There is also a movie called Her, with Joaquin Phoenix and ScarJo. Absolutely brilliant.

Forricide · 2 days ago
> “How's that going to work when ten years in the future you have no one that has learned anything,”

Pretty obvious conclusion that I think anyone who's thought seriously about this situation has already come to. However, I'm not optimistic that most companies will be able to keep themselves from doing this kind of thing, because I think it's become rather clear that it's incredibly difficult for most leadership in 2025 to prioritize long-term sustainability over short-term profitability.

That being said, internships/co-ops have been popular from companies that I'm familiar with for quite a while specifically to ensure that there are streams of potential future employees. I wonder if we'll see even more focus on internships in the future, to further skirt around the difficulties in hiring junior developers?

tonymet · a day ago
If AI is truly this effective, we would be selling 10x-10Kx more stuff, building 10x more features (and more quickly), improving quality & reliability 10x. There would be no reason to fire anyone because the owners would be swimming in cash. I'm talking good old-fashioned greed here.

You don't fire people if you anticipate a 100x growth. Who cares about saving 0.1% of your money in 10 years? You want to sell 100x / 1000x/ 10000x more .

So the story is hard to swallow. The real reason is as usual, they anticipate a downturn and want to keep earnings stable.

Viliam1234 · a day ago
Exactly. If the AI can multiply everyone's power by hundred or thousand, you want to keep all people who make a positive contribution (and only get rid of those who are actively harmful). With sufficiently good AI, perhaps the group of juniors you just fired could have created a new product in a week.
tonymet · a day ago
Even within the AI paradigm, you could keep the juniors to validate and test the AI-generated code. You still need some level of acceptance testing for the increased production. And the juniors could be producing automation engineering at or above the level of the product code they were producing prior to AI. A win-win (more production & more career growth).

In other words, none of these stories make any sense, even if you take the AI superpower at face value.

jqpabc123 · 2 days ago
He wants educators to instead teach “how do you think and how do you decompose problems”

Amen! I attend this same church.

My favorite professor in engineering school always gave open book tests.

In the real world of work, everyone has full access to all the available data and information.

Very few jobs involve paying someone simply to look up data in a book or on the internet. What they will pay for is someone who can analyze, understand, reason and apply data and information in unique ways needed to solve problems.

Doing this is called "engineering". And this is what this professor taught.

simpaticoder · 2 days ago
In undergrad I took an abstract algebra class. It was very difficult and one of the things the teacher did was have us memorize proofs. In fact, all of his tests were the same format: reproduce a well-known proof from memory, and then complete a novel proof. At first I was aghast at this rote memorization - I maybe even found it offensive. But an amazing thing happened - I realized that it was impossible to memorize a proof without understanding it! Moreover, producing the novel proofs required the same kinds of "components" and now because they were "installed" in my brain I could use them more intuitively. (Looking back I'd say it enabled an efficient search of a tree of sequences of steps).

Memorization is not a panacea. I never found memorizing l33t code problems to be edifying. I think it's because those kinds of tight, self-referential, clever programs are far removed from the activity of writing applications. Most working programmers do not run into a novel algorithm problem but once or twice a career. Application programming has more the flavor of a human-mediated graph-traversal, where the human has access to a node's local state and they improvise movement and mutation using only that local state plus some rapidly decaying stack. That is, there is no well-defined sequence for any given real-world problem, only heuristics.

bitexploder · 2 days ago
Memorizing is a super power / skill. I work in a ridiculously complex environment and have to learn and know so much. Memorizing and spaced repetition are like little islands my brain can start building bridges between. I used to think memorizing was anti-first principles, but it is just good. Our brains can memorize so much if we make them. And then we can connect and pattern matching using higher order thinking.
MattPalmer1086 · 2 days ago
Hmmm... It's the other way around for me. I find it hard to memorise things I don't actually understand.

I remember being given a proof of why RSA encryption is secure. All the other students just regurgitated it. It made superficial sense I guess.

However, I could not understand the proof and felt quite stupid. Eventually I went to my professor for help. He admitted the proof he had given was incomplete (and showed me why it still worked). He also said he hadn't expected anyone to notice it wasn't a complete proof.

raincole · 2 days ago
During my elementary school years, there was a teacher who told me that I didn't need to memorize things as long as I understood them. I thought he was the coolest guy ever.

Only when I got into my late twenties did I realize how wrong he was. Memorization and understanding go hand in hand, but if one of them has to come first, then it's memorization. He probably said that because that was what kids (who were forced to do rote memorization) wanted to hear.

Aurornis · 2 days ago
My controversial education hot take: Pointless rote memorization is bad and frustrating, but early education could use more directed memorization.

As you discovered: A properly structured memorization of carefully selected real world material forces you to come up with tricks and techniques to remember things. With structured information (proofs in your case) you start learning that the most efficient way to memorize is to understand, which then reduces the memorization problem into one of categorizing the proof and understanding the logical steps to get from one step to another. In doing so, you are forced to learn and understand the material.

Another controversial take (for HN, anyway) is that this is what happens when programmers study LeetCode. There’s a meme that the way to prep for interviews is to “memorize LeetCode”. You can tell who hasn’t done much LeetCode interviewing if they think memorizing a lot of problems is a viable way to pass interviews. People who attempt this discover that there are far too many questions to memorize, and the best jobs have already written their own questions that aren’t out of LeetCode. Even if you do get a direct LeetCode problem in an interview, a good interviewer will expect you to explain your logic, describe how you arrived at the solution, and might introduce a change if they suspect you’re regurgitating memorized answers.

Instead, the strategy that actually works is to learn the categories of LeetCode style questions, understand the much smaller number of algorithms, and learn how to apply them to new problems. It’s far easier to memorize the dozen or so patterns used in LeetCode problems (binary search, two pointers, greedy, backtracking, and so on) and then learn how to apply those. By practicing you’re not memorizing the specific problems, you’re teaching yourself how to apply algorithms.

Side note: I’m not advocating for or against LeetCode, I’m trying to explain a viable strategy for today’s interview format.
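To make the "learn the categories" point concrete, here is a small sketch of one of the patterns named above, two pointers, on a classic problem (does any pair in a sorted list sum to a target?). The transferable part is the pattern, not this particular problem.

```python
def has_pair_with_sum(sorted_nums: list[int], target: int) -> bool:
    """Two-pointer scan over a sorted list, O(n) time, O(1) space."""
    lo, hi = 0, len(sorted_nums) - 1
    while lo < hi:
        s = sorted_nums[lo] + sorted_nums[hi]
        if s == target:
            return True
        if s < target:
            lo += 1              # need a larger sum: advance the left pointer
        else:
            hi -= 1              # need a smaller sum: retreat the right pointer
    return False
```

Once the invariant clicks (the discarded pointer positions can never be part of a valid pair), dozens of superficially different problems reduce to the same scan, which is exactly the category-over-instance memorization being described.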

dunham · 2 days ago
Fortunately that was not my experience in abstract algebra. The tests and homework were novel proofs that we hadn't seen in class. It was one of my favorite classes / subjects. Someone did tell me in college that they did the memorization thing in German Universities.

Code-wise, I spent a lot of time in college reading other people's code. But no memorization. I remember David Betz advsys, Tim Budd's "Little Smalltalk", and Matt Dillon's "DME Editor" and C compiler.

taeric · 2 days ago
I would wager some folks can memorize without understanding? I do think memorization is underrated, though.

There is also something to the practice of reproducing something. I always took this as a form of "machine learning" for us. Just as you get better at juggling by actually juggling, you get better at thinking about math by thinking about math.

johntarter · 2 days ago
Interesting, I had the same problem and suffered in grades back in school simply because I couldn't memorize much without understanding. However, I seemed to be the only one, because every single other student, including those with top grades, was happy to memorize and regurgitate. I wonder how they're doing now.
bwfan123 · 2 days ago
My abstract algebra class had it exactly backwards. It started with a lot of needless formalism culminating in Galois theory. This was boring to most students, as they had no clue why the formalism was invented in the first place.

Instead, I wish it had shown how the sausage was actually made in the original writings of Galois [1]. This would have been far more interesting to students, as it shows the struggles that went into making the product - not to mention the colorful personality of the founder.

The history of how concepts were invented for the problems faced is far more motivating to students to build a mental model than canned capsules of knowledge.

[1] https://www.ams.org/notices/201207/rtx120700912p.pdf

eithed · 2 days ago
Depends on the subject - I can remember multiple subjects where the teacher would give you a formula to memorise without explaining why, or where it came from. You had to take it as an axiom. The teachers also didn't say: hey, if you want to know how we arrived at this, have a read here. No, it was just given.

Ofc you could also say that's for the student to find out, but I've had other things on my mind

stonemetal12 · 2 days ago
>Memorization is not a panacea.

It is what you memorize that is important: you can't have a good discussion about a topic if you don't have the facts and logic of the topic in memory. On the other hand, using memory to paper over bad design instead of simplifying or properly modularizing it leads to that "the worst code I have seen is code I wrote six months ago" feeling.

WhitneyLand · 2 days ago
Your comment about memorizing as part of understanding makes a lot of sense to me, especially as one possible technique to get unstuck in grasping a concept.

If it doesn’t work for you on l33t code problems, what techniques are you finding more effective in that case?

__alexs · 2 days ago
Is it the memorisation that had the desired effect or the having to come up with the novel proofs? Many schools seem to do the memorising part, but not the creating part.
ghelmer · 2 days ago
I find it's helpful to have context to frame what I'm memorizing to help me understand the value.
glitchc · 2 days ago
Indeed, not just math. Biology requires immense amounts of memorization. Nature is littered with exceptions.
tshaddox · 2 days ago
> But an amazing thing happened - I realized that it was impossible to memorize a proof without understanding it!

This may be true of mathematical proofs, but it surely must not be true in general. Memorizing long strings of digits of pi probably isn’t much easier if you understand geometry. Memorizing famous speeches probably isn’t much easier if you understand the historical context.

ian-g · 2 days ago
It's funny, because I had the exact opposite experience with abstract algebra.

The professor explained things, we did proofs in class, we had problem sets, and then he gave us open-book semi-open-professor take-home exams that took us most of a week to do.

Proof classes were mostly fine. Boring, sometimes ridiculously shit[0], but mostly fine. Being told we have a week for this exam that will kick our ass was significantly better for synthesizing things we'd learned. I used the proofs we had. I used sections of the textbook we hadn't covered. I traded some points on the exam for hints. And it was significantly more engaging than any other class' exams.

[0] Coming up with novel things to prove that don't require some unrelated leap of intuition that only one student gets is really hard to do. Damn you Dr. B, needing to figure out that you have to define a third equation h(x) as (f(x) - g(x))/(f(x) + g(x)) as the first step of a proof isn't reasonable in a 60 minute exam.

thoughtlede · 2 days ago
memorization + application = comprehension. Rinse and repeat.

Whether leet code or anything else.

trod1234 · 2 days ago
Mathematics pedagogy today is in a pretty sorrowful state due to bad actors and willful blindness at all levels that require public trust.

A dominant majority in public schools starting in the late 1970s seems to follow the "Lying to Children" approach, which is often mistaken for by-rote teaching but is based on Paulo Freire's works, in turn based on Mao's torture discoveries from the 1950s.

This approach contrary to classical approaches leverages torturous process which seems to be purposefully built to fracture and weed out the intelligent individual from useful fields, imposing sufficient thresholds of stress to impose PTSD or psychosis, selecting for and filtering in favor of those who can flexibly/willfully blind/corrupt themselves.

Such sequences include Algebra->Geometry->Trigonometry where gimmicks in undisclosed changes to grading cause circular trauma loops with the abandonment of Math-dependent careers thereafter, similar structures are also found in Uni, for Economics, Business, and Physics which utilize similar fail-scenarios burning bridges where you can't go back when the failure lagged from the first sequence, and you passed the second unrelated sequence. No help occurs, inducing confusion and frustration to PTSD levels, before the teacher offers the Alice in Wonderland Technique, "If you aren't able to do these things, perhaps you shouldn't go into a field that uses it". (ref Kubark Report, Declassified CIA Manual)

Have you been able to discern whether these "patterns" as you've called them aren't just the practical reversion to the classical approach (Trivium/Quadrivium)? Also known as the first-principles approach after all the filtering has been done.

To compare: Classical approaches start with nothing but a useful real system and observations which don't entrench false assumptions as truth, which are then reduced to components and relationships to form a model. The model is then checked for accuracy against current data to separate truth from false in those relationships/assertions in an iterative process with the end goal being to predict future events in similar systems accurately. The approach uses both a priori and a posteriori components to reasoning.

Lying to Children reverses and bastardizes this process. It starts with a single useless system which contains equal parts true and false principles (as misleading assumptions) which are tested and must be learned to competency (growing those neurons close together). Upon the next iteration one must unlearn the false parts while relearning the true parts (but we can't really unlearn, we can only strengthen or weaken) which in turn creates inconsistent mental states imposing stress (torture). This is repeated in an ongoing basis often circular in nature (structuring), and leveraging psychological blindspots (clustering), with several purposefully structured failings (elements) to gatekeep math through torturous process which is the basis for science and other risky subject matter. As the student progresses towards mastery (gnosis), the systems become increasingly more useful. One must repeatedly struggle in their sessions to learn, with the basis being if you aren't struggling you aren't learning. This mostly uses a faux a priori reasoning without properties of metaphysical objectivity (tied to objective measure, at least not until the very end).

If you don't recognize this, an example would be the electrical water pipe pressure analogy. Diffusion of charge in-like materials, with Intensity (Current) towards the outermost layer was the first-principled approach pre-1978 (I=V/R). The Water Analogy fails when the naive student tries to relate the behavior to pressure equations that ends up being contradictory at points in the system in a number of places introducing stumbling blocks that must be unlearned.

Torture being the purposefully directed imposition of psychological stress beyond an individual's capacity to cope, towards physiological stages of heightened suggestibility and mental breakdown (where rational thought is reduced or non-existent in the intelligent).

It is often recognized by its characteristic subgroups of Elements (cognitive dissonance, a lack of agency to remove oneself and coercion/compulsion with real or perceived loss or the threat thereof), Structuring (circular patterns of strictness followed by leniency in a loop, fractionation), and Clustering (psychological blindspots).

PhantomHour · 2 days ago
It's the core problem facing the hiring practices in this field. Any truly competent developer is a generalist at heart. There is value to be had in expertise, but unless you're dealing with a decade(s) old hellscape of legacy code or are pushing the very limits of what is possible, you don't need experts. You'd almost certainly be better off with someone who has experience with the tools you don't use, providing a fresh look and cover for weaknesses your current staff has.

A regular old competent developer can quickly pick up whatever stack is used. After all, they have to; Every company is their own bespoke mess of technologies. The idea that you can just slap "15 years of React experience" on a job ad and that the unicorn you get will be day-1 maximally productive is ludicrous. There is always an onboarding time.

But employers in this field don't "get" that. For regular companies they're infested by managers imported from non-engineering fields, who treat software like it's the assembly line for baking tins or toilet paper. Startups, who already have fewer resources to train people with, are obsessed with velocity and shitting out an MVP ASAP so they can go collect the next funding round. Big Tech is better about this, but has its own problems going on, and it seems that the days of Big Tech being the big training houses are also over.

It's not even a purely collective problem. Recruitment is so expensive, but all the money spent chasing unicorns & the opportunity costs of being understaffed just get handwaved. Rather spend $500,000 on the hunt than $50,000 on training someone into the role.

And speaking of collective problems. This is a good example of how this field suffers from having no professional associations that can stop employers from sinking the field with their tragedies of the commons. (Who knows, maybe unions will get more traction now that people are being laid off & replaced with outsourced workers for no legitimate business reason.)

mschuster91 · 2 days ago
> Rather spend $500,000 on the hunt than $50,000 on training someone into the role.

Capex vs opex, that's the fundamental problem at heart. It "looks better on the numbers" to have recruiting costs than to have to set aside a senior developer plus paying the junior for a few months. That is why everyone and their dog only wants to hire seniors, because they have the skillset and experience that you can sit their ass in front of any random semi fossil project and they'll figure it out on their own.

If the stonk analysts would go and actually dive deep into the numbers to look at hiring side costs (like headhunter expenses, employee retention and the likes), you'd see a course change pretty fast... but this kind of in-depth analysis, that's only being done by a fair few short-sellers who focus on struggling companies and not big tech.

In the end, it's a "tragedy of the commons" scenario. It's fine if a few companies do that, it's fine if a lot of companies do that... but when no one wants to train juniors any more (because they immediately get poached by the big ones), suddenly society as a whole has a real and massive problem.

Our societies are driven into a concrete wall at full speed by the financialization of every tiny aspect of our lives. All that matters these days are the gods of the stonk market - screw the economy, screw the environment, screw labor laws, all that matters is appearing "numbers go up" on the next quarterly.

no_wizard · 2 days ago
I can’t think of another career where management continuously does not understand the realities of how something gets built. Software best practices are on their face orthogonal to how all other parts of a business operate.

How does marketing operate? In a waterfall like model. How does finance operate? In a waterfall like model. How does product operate? Well you can see how this is going.

Then you get to software and it’s 2 week sprints, test driven development etc. and it decidedly works best not on a waterfall model, but shipping in increments.

Yet the rest of the business does not work this way, it’s the same old top down model as the rest.

This I think is why so few companies or even managers / executives “get it”

absurdistan · 2 days ago
That we talk about "building" software doesn't help.
jofla_net · 2 days ago
>For regular companies they're infested by managers imported from non-engineering fields

Someone's cousin, lets leave it at that, someones damn cousin or close friend, or anyone else with merely a pulse. I've had interviews where the company had just been turned over from people that mattered, and you. could. tell.

One couldn't even tell me why the project I needed to do for them ::rolleyes::, their own code boilerplate (which they said would run), had runtime issues, and I needed to self-debug it just to get it to a starting point.

It's like, Manager: Oh, here's this non-tangential thing that they tell me you need to complete before I can consider you for the position.... Me: Oh, can I ask you anything about it?.... Manager: No

Aperocky · 2 days ago
Could not agree more. Whenever I hear monikers like "Java developer" or "python developer" as a job description I roll my eyes slightly.
qsort · 2 days ago
Isn't that happening already? Half the usual CS curriculum is either math (analysis, linear algebra, numerical methods) or math in anything but name (computability theory, complexity theory). There's a lot of very legitimate criticism of academia, but most of the times someone goes "academia is stupid, we should do X" it turns out X is either:

- something we've been doing since forever

- the latest trend that can be picked up just-in-time if you'll ever need it

Loughla · 2 days ago
I've worked in education in some form or another for my entire career. When I was in teacher education in college . . . some number of decades ago . . . the number one topic of conversation and topic that most of my classes were based around was how to teach critical thinking, effective reasoning, and problem solving. Methods classes were almost exclusively based on those three things.

Times have not changed. This is still the focus of teacher prep programs.

fumeux_fume · 2 days ago
Parent comment is literally praising an experience they had in higher education, but your only takeaway is that it must be facile ridicule of academia.
ghaff · 2 days ago
In CS, it's because it came out of math departments in many cases and often didn't even really include a lot of programming because there really wasn't much to program.

Deleted Comment

roxolotl · 2 days ago
When I was in college the philosophy program had the marketing slogan: “Thinking of a major? Major in thinking”.

Now as a hiring manager I'll say I regularly find that those who've had humanities experience are way more capable at the hard parts of analysis and understanding. Of course I'm biased as a dual cs/philosophy major, but it's very rare that I'm looking for someone who can just write a lot of code. Especially juniors, as analytical thinking is way harder to teach than how to program.

hodgesrm · 2 days ago
> Now as a hiring manager I'll say I regularly find that those who've had humanities experience are way more capable at the hard parts of analysis and understanding.

The humanities, especially the classic texts, cover human interaction and communication in a very compact form. My favorite sources are the Bible, Cicero, and Machiavelli. For example Machiavelli says if you do bad things to people do them at once, while good things you should spread out over time. This is common sense. Once you catch the flavor of his thinking it's pretty easy to work other situations out for yourself, in the same why that good engineering classes teach you how to decompose and solve technical problems.

cmrdporcupine · 2 days ago
The #1 problem in almost all workplaces is communication related. In almost all jobs I've had in 25-30 years, finding out what needs to be done and what is broken -- is much harder than actually doing it.

We have these sprint planning meetings and the like where we throw estimates on the time some task will take but the reality is for most tasks it's maybe a couple dozen lines of actual code. The rest is all what I'd call "social engineering" and figuring out what actually needs to be done, and testing.

Meanwhile upper management is running around freaking out because they can't find enough talent with X years of Y [language/framework] experience, imagining that this is the wizard power they need.

The hardest problem at most shops is getting business domain knowledge, not technical knowledge. Or at least creating a pipeline between the people with the business knowledge and the technical knowledge that functions.

Anyways, yes I have 3/4 a PHIL major and it actually has served me well. My only regret is not finishing it. But once I started making tech industry cash it was basically impossible for me to return to school. I've met a few other people over the years like me, who dropped out in the 90s .com boom and then never went back.

maxsilver · 2 days ago
This is also why I went into the Philosophy major - knowing how to learn and how to understand is incredibly valuable.

Unfortunately in my experience, many, many people do not see it that way. It's very common for folks to think of philosophy as "not useful / not practical".

Many people hear the word "philosophy" and mentally picture "two dudes on a couch recording a silly podcast", and not "investigative knowledge and in-depth context-sensitive learning, applied to a non-trivial problem".

It came up constantly in my early career, trying to explain to folks, "no, I actually can produce good working software and am reasonably good at it, please don't hyper-focus on the philosophy major, I promise I won't quote Scanlon to you all day."

tzs · 2 days ago
Many top STEM schools have substantial humanities requirements, so I think they agree with you.

At Caltech they require a total of at least 99 units in humanities or social sciences. 1 Caltech unit is 1 hour of work a week for each week of the term, and a typical class is 9 units consisting of 3 hours of classwork a week and 6 hours of homework and preparation.

That basically means that for 11 of the 12 terms that you are there for a bachelor's degree, you need to be taking a humanities or social sciences class. They require at least 4 of those to be in humanities (English, history, history and philosophy of science, humanities, music, philosophy, and visual culture), and at least 3 to be in social sciences (anthropology, business economics and management, economics, law, political science, psychology, and social science).

At MIT they have similar, but more complicated, requirements. They require humanities, art, and social sciences, and they require that you pick at least one subject in one of those and take more than one course in it.

ghaff · 2 days ago
I worked for someone who I believe was undergrad philosophy and then got a masters in CS.
reaperducer · 2 days ago
On a related note, the most accomplished people I've met didn't have degrees in the fields where they excelled and won awards. They were all philosophy majors.

Teaching people to think is perhaps the world's most under-rated skill.

m463 · 2 days ago
I would say you have some bias.

yes, sometimes you need people who can grasp the tech and talk to managers. They might be intermediaries.

But don't ignore the nerdy guys who have been living deeply in a tech ecosystem all their lives. The ones who don't dabble in everything. (the wozniaks)

rodrigodlu · 2 days ago
A professor in my very first semester used "crazy finger syndrome" for attempts to go straight to the code without decomposing the problem from a business or user perspective. It was a long time ago. It was a CS curriculum.

I miss her jokes against anxious nerds that just wanted to code :(

Don't forget the rise of boot camps where some educators are not always aligned with some sort of higher ethical standards.

jawilson2 · 2 days ago
> "crazy finger syndrome" - the attempts to go straight to the code without decomposing the problem from a business or user perspective

Years ago I started on a new team as a senior dev, and did weeks of pair programming with a more junior dev to introduce me to the codebase. His approach was maddening; I called it "spray and pray" development. He would type out lines or paragraphs of the first thing that came to mind just after sitting down and opening an editor. I'd try to talk him into actually taking even a few minutes to think about the problem first, but it never took hold. He'd be furiously typing, while I would come up with a working solution without touching a keyboard, usually with a whiteboard or notebook, but we'd have to try his first. This was c++/trading, so the type-compile-debug cycle could be tens of minutes. I kept relaying this to my supervisor, and after a few months of this he was let go.

bob1029 · 2 days ago
I make a point to solve my more difficult problems with pen and paper drawings and/or narrative text before I touch the PC. The computer is an incredibly distracting medium to work with if you are not operating under clear direction. Time spent on this forum is a perfect example.
bluGill · 2 days ago
Memorization and closed-book tests are important for some areas. When seconds count, the ER doctor cannot go look up how to treat a heart attack. That doctor also needs to know not only how to treat the common heart attack, but how to recognize that this isn't the common heart attack but the 1-in-10,000 not-a-heart-attack that has exactly the same symptoms, and give it the correct treatment.

However most of us are not in that situation. It is better for us to just look up those details as we need them because it gives us more room to handle a broader variety of situations.

flatb · 2 days ago
Humans will never outcompete ai in that regard however. Industry will eventually optimize for humans and ai separately: ai will know a lot and think quickly, humans will provide judgement and legal accountability. We’re already on this path.
duxup · 2 days ago
Speaking with a relative who is a doctor recently it’s interesting how much each of our jobs are “troubleshooting”.

Coding, doctors, plumber… different information, often similar skill sets.

I worked a job doing tech support for some enterprise level networking equipment. It was the late 1990s and we were desperate for warm bodies. Hired a former truck driver who just so happened to do a lot of woodworking and other things.

Great hire.

bitwize · 2 days ago
Everyone going through STEM needs to see the movie Hidden Figures for a variety of reasons, but one bit stands out as poignant: I believe it was Katherine Johnson, who is asked to calculate some rocket trajectory to determine the landing coordinates, thinks on it a bit and finally says, "Aha! Newton's method!" Then she runs down to the library to look up how to apply Newton's method. She had the conceptual tools to find a solution, but didn't have all the equations memorized. Having all the equations in short term memory only matters in a (somewhat pathological) school setting.
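The anecdote captures the point: the method is simple enough to apply once you recall that it exists and look up the details. As a sketch of my own (not the film's actual calculation), Newton's method for root-finding fits in a few lines:

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method: iterate x -> x - f(x)/f'(x) until the step
    size falls below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Root of f(x) = x^2 - 2: converges to sqrt(2) in a handful of iterations.
sqrt2 = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

The conceptual tool ("this is a root-finding problem, Newton's method applies") is what has to live in your head; the iteration formula is exactly the kind of detail a library visit covers.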

Deleted Comment

mnky9800n · 2 days ago
My favorite professor in my physics program would say, "You will never remember the equations I teach. But if you learn how the relationships are built and how to ask questions of those relationships, then I have done my job." He died a few years ago. I never was able to thank him for his lessons.
sitkack · 2 days ago
You just did.
Workaccount2 · 2 days ago
Being resourceful is an extremely valuable skill in the real world, and basically shut out of the education world.

Unlike my teachers, none of my bosses ever put me in an empty room with only a pencil and a sheet of paper to solve given problems.

yodsanklai · 2 days ago
> My favorite professor in engineering school always gave open book tests.

My experience as a professor and a student is that this doesn't make any difference. Unless you can copy verbatim the solution to your problem from the book (which never happens), you better have a good understanding of the subject in order to solve problems in the allocated time. You're not going to acquire that knowledge during your test.

jqpabc123 · 2 days ago
My experience as a professor and a student is that this doesn't make any difference.

Exactly the point of his test methodology.

What he asked of students on a test was to *apply* knowledge and information to *unique* problems and create a solution that did not exist in any book.

I only brought 4 things to his tests --- textbook, pencil, calculator and a capable, motivated and determined brain. And his tests revealed the limits of what you could achieve with these items.

VBprogrammer · 2 days ago
Isn't this an argument for why you should allow open book tests rather than why you shouldn't? It certainly removes some pressure to remember some obscure detail or formula.
mh- · 2 days ago
Isn't that just an argument for always doing open book tests, then? Seems like there's no downside, and as already mentioned, it's closer to how one works in the real world.
tomrod · 2 days ago
During some of the earlier web service development days, one would find people at F500 companies skating by in low-to-mid level jobs just cutting and pasting between spreadsheets; things that took them hours could be done in seconds, and with lower error rates, with a proper data interface.

Very anecdotally, but I hazard that most of these types of low-hanging-fruit, low-value-add roles are much less common, since they tended to be blockers for operational improvement. Six Sigma, Lean, and various flavors of Agile would often surface these low performers, and they either improved or got shown the door between 2005 and 2020.

Not that everyone is 100% all the time, every day, but what we are left with is often people that are highly competent at not just their task list but at their job.
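For a sense of scale: the hours of cutting and pasting described above usually amount to a key-based join, which is a few lines once there's a data interface. A sketch with Python's stdlib `csv` module (the file contents and column names here are hypothetical):

```python
import csv
import io

def join_on_key(left_csv, right_csv, key):
    """Join two CSV sources on a shared key column - the work of a
    manual VLOOKUP/copy-paste session, done in one pass."""
    right = {row[key]: row for row in csv.DictReader(io.StringIO(right_csv))}
    joined = []
    for row in csv.DictReader(io.StringIO(left_csv)):
        match = right.get(row[key])
        if match:
            joined.append({**row, **match})
    return joined

orders = "order_id,customer_id\n1,42\n2,43\n"
customers = "customer_id,name\n42,Acme\n43,Globex\n"
rows = join_on_key(orders, customers, "customer_id")
```

The same dictionary-lookup join scales to hundreds of thousands of rows with no change, which is where the hours-to-seconds gap comes from.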

dirkc · 2 days ago
I had a like minded professor in university, ironically in AI. Our big tests were all 3 day take home assignments. The questions were open ended, required writing code, processing data and analyzing results.

I think the problem with this is that it requires the professor to mentally fully engage when marking assignments and many educators do not have the capacity and/or desire to do so.

michaelt · 2 days ago
Sadly, I doubt 3-day take-home assignments have much future as a means of assessment in the age of LLMs.
SkyBelow · 2 days ago
It depends what level the education is happening at. Think of it like students being taught how to do for loops but are just copying and pasting AI output. That isn't learning. They aren't building the skills needed to debug when the AI gets something wrong with a more complicated loop, or understand the trade offs of loops vs recursion.

Finding the correct balance for a given class it hard. Generally, the lower level the education, the more it should be closed books because the more it is about being able to manually solve the smaller challenges that are already well solved so you build up the skills needed to even tackle the larger challenges. The higher the education level, the more it is about being able to apply those skills to then tackle a problem, and one of those skills is being able to pull relevant formulas and such from the larger body of known formulas.

thewebguyd · 2 days ago
Agreed coming from the ops world also.

I've had a frustrating experience the past few years trying to hire junior sysadmins because of a real lack of problem solving skills once something went wrong outside of various playbooks they memorized to follow.

I don't need someone who can follow a pre-written playbook, I have ansible for that. I need someone that understands theory, regardless of specific implementations, and can problem solve effectively so they can handle unpredictable or novel issues.

To put another way, I can teach a junior the specifics of bind9 named.conf, or the specifics of our own infrastructure, but I shouldn't be expected to teach them what DNS in general is and how it works.

But the candidates we get are the opposite - they know specific tools, but lack more generalized theory and problem solving skills.
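To make "understanding DNS in general" concrete: the wire format itself is small enough to build by hand, with nothing bind9-specific involved. A sketch using only the standard library (constructing and decoding a query packet offline, without sending it):

```python
import struct

def encode_name(domain):
    # DNS encodes "example.com" as length-prefixed labels:
    # b"\x07example\x03com\x00"
    out = b""
    for label in domain.split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def build_query(domain, txid=0x1234):
    # 12-byte header: id, flags (recursion desired), QDCOUNT=1, rest zero.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question section: QNAME + QTYPE=A (1) + QCLASS=IN (1).
    return header + encode_name(domain) + struct.pack(">HH", 1, 1)

def decode_name(packet, offset=12):
    # Read the label sequence back out of the question section.
    labels = []
    while packet[offset] != 0:
        n = packet[offset]
        labels.append(packet[offset + 1:offset + 1 + n].decode("ascii"))
        offset += 1 + n
    return ".".join(labels)

query = build_query("example.com")
```

A candidate who has seen the protocol at this level can debug a resolver regardless of whether the site runs bind9, unbound, or something else entirely; that is the kind of general theory the comment is asking for.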

laveur · 2 days ago
Same here! I always like to say that software engineering is 50% knowing the basics (How to write/read code, basic logic) and 50% having great research skills. So much of our time is spent finding documentation and understanding what it actually means as opposed to just writing code.
kazinator · 2 days ago
You cannot teach "how to to think". You have to give students thinking problems to actually train thinking. Those kinds of problems can increasingly be farmed off to AI, or at least certain subproblems in them.

I mean, yes, to an extent you can teach how to think: critical thinking and logic are topics you can teach, and people who take their teaching to heart can become better thinkers. However, those topics cannot impart creativity. Critical thinking is called exactly that because it's about tools and skills for separating bad thinking from good thinking. The skill of generating good thinking probably cannot be taught; it can only be improved with problem-solving practice.

busyant · 2 days ago
> In the real world of work, everyone has full access to all of the available data and information.

In general, I also attend your church.

However, as I preached in that church, I had two students over the years.

* One was from an African country and told me that where he grew up, you could not "just look up data that might be relevant" because internet access was rare.

* The other was an ex US Navy officer who was stationed on a nuclear sub. She and the rest of the crew had to practice situations where they were in an emergency and cut off from the rest of the world.

Memorization of considerable amounts of data was important to both of them.

koliber · 2 days ago
Each one of us has a mental toolbox that we use to solve problems. There are many more tools that we don’t have in our heads that we can look up if we know how.

The bigger your mental toolbox the more effective you will be at solving the problems. Looking up a tool and learning just enough to use it JIT is much slower than using a handy tool that you already masterfully know how to use.

This is as true for physical tools as for programming concepts like algorithms and data structures. In the worst case you won’t even know to look for a tool and will use whatever is handy, like the proverbial hammer.

herval · 2 days ago
People have been saying that since the advent of formal education. Turns out standardized education is really hard to pull off and most systems focus on making the average good enough.

It’s also hard to teach people “how to think” while at the same time teaching them practical skills - there’s only so many hours in a day, and most education is setup as a way to get as many people as possible into shape for taking on jobs where “thinking” isn’t really a positive trait, as it’d lead to constant restructuring and questioning of the status quo

techpineapple · 2 days ago
While there’s no reasonable way to disagree with the sentiment, I don’t think I’ve ever met anyone who can “think and decompose problems” who isn’t also widely read and doesn’t know a lot of things.

Forcing kids to sit and memorize facts isn’t suddenly going to make them a better thinker, but much of my process of being a better thinker is something akin to sitting around and memorizing facts. (With a healthy dose of interacting substantively and curiously with said facts)

mvkel · 2 days ago
> Everyone has full access to all of the available data and information

Ahh, but this is part of the problem. Yes, they have access, but there is -so much- information, it punches through our context window. So we resort to executive summaries, or convince ourselves that something that's relevant is actually not.

At least an LLM can take in the full context in aggregate and pull out the signal. There is value there, but no jobs are being replaced.

bnug · 2 days ago
>but no jobs are being replaced

I agree that an LLM is a long way from replacing most any single job held by a human in isolation. However, what I feel is missed in this discussion is that it can significantly reduce the total manpower by making humans more efficient. For instance, the job of a team of 20 can now be done by 15 or maybe even 10 depending on the class of work. I for one believe this will have a significant impact on a large number of jobs.

Not that I'm suggesting anything be "stopped". I find LLM's incredibly useful, and I'm excited about applying them to more and more of the mundane tasks that I'd rather not do in the first place, so I can spend more time solving more interesting problems.

proee · 2 days ago
Also, some problems don't have enough data for a solution. I had a professor who gave tests where the answer was sometimes "not solvable." Taking these tests was like sweating bullets, because you weren't sure whether you were just too dumb to solve the problem or there wasn't enough data to solve it. Good times!
libraryatnight · 2 days ago
One of my favorite things about Feynman interviews/lectures is often his responses are about how to think. Sometimes physicists ask questions in his lectures and his answer has little to do with the physics, but how they're thinking about it. I like thinking about thinking, so Feynman is soothing.
movpasd · 2 days ago
I agree with the overall message, but I will say that there is still a great deal of value in memorisation. Memorising things gives you more internal tools to think in broader chunks, so you can solve more complicated problems.

(I do mean memorisation fairly broadly, it doesn't have to mean reciting a meaningless list of items.)

_mu · 2 days ago
Agree, hopefully this insight / attitude will become more and more prevalent.

For anyone looking for resources, may we recommend:

* The Art of Doing Science and Engineering by Richard Hamming (lectures are available on YouTube as well)

* Measurement by Paul Lockhart (for teaching mindset)

Deleted Comment

Herring · 2 days ago
Talk is cheap. Good educators cost money, and America famously underpays (and under-appreciates) its teachers. Does he also support increasing taxes on the wealthy?
turnsout · 2 days ago
Even more broadly, it's "critical thinking," which definitely seems to be on the decline (though I'm sure old people have said this for millennia)
mclau157 · 2 days ago
Have there been studies about abilities of different students to memorize information? I feel this is under-studied in the world of memorizing for exams
Waterluvian · 2 days ago
Yeah. Memorization and trivial knowledge is an optimization mechanism.
bakuninsbart · 2 days ago
It is tough though, I'd like to think I learnt how to think analytically and critically. But thinking is hard, and oftentimes I catch myself trying to outsource my thinking almost subconsciously. I'll read an article on HN and think "Let's go to the comment section and see what the opinions to choose from are," or one of my first instincts after encountering a problem is googling, and now asking an LLM.

Most of us are also old enough to have had a chance to develop taste in code and writing. Many of the young generation lack the experience to distinguish good writing from LLM drivel.

sim7c00 · 2 days ago
Wanted to chime in on the educational system. In the West, we have the "banking model," which treats a student as a bank account and knowledge as currency, hence the dump-more-info-into-people-to-make-them-sm0rt attitude.

In developing areas, they commonly implement more modern models, since the system is newer and free to adopt newer things.

Those newer models focus more on exactly this: teach a person how to go through the process of finding solutions, rather than "knowing a lot to enable the process of thinking."

Not saying which is better or worse, but reading this comment and article reminds me of this.

A lot of people I see know tons of interesting things, but anything outside of their knowledge is a complete mystery.

All the while, people from developing areas learn to solve issues. A lot of individuals from there also get out of their poverty and do really well for themselves.

Of course, this is a generalization and doesn't hold up in all cases. But I can't help thinking about it.

A lot of my colleagues don't know how to solve problems simply because they don't RTFM. They rely on knowledge from their education, which was already outdated before they even signed up. I try to teach them to RTFM. It seems hopeless. They look down on me because I have no papers. But if shit hits the fan, they come to me to solve the problem.

A wise guy I met once said (likely not his words): there are two types of people, those who think in problems and those who think in solutions.

I'd relate that to education, not prebaked human properties.