Readit News
auggierose · 3 months ago
> And as a last point, I fear very much what’s going on in schools. My son is in school. And the teacher literally said they need to use an LLM otherwise they won’t score high enough because the other pupils are using LLMs and the teacher is using an LLM to grade/check it. This is ridiculous. And if you think that’s okay or you don’t see a serious problem with this, then that’s an even greater problem.

If that is true, it is indeed a serious problem.

at_compile_time · 3 months ago
Any examination that isn't done in person on general course material (nothing an LLM could prepare for you) amounts to a stubborn refusal to protect students from themselves in an age when anything else can be faked. Graded homework and take-home assignments are dead as useful pedagogical tools.
xnx · 3 months ago
Death of homework could be a great thing. The education system teaching that schoolwork must be done at home is conditioning future workers to accept working late and taking their work home.
energy123 · 3 months ago
Students will need to use the same LLM as their teacher, since LLMs are biased to grade their own outputs higher.

Imagine getting downgraded because the substrings "Honestly?" and "—" didn't constitute 3.2% of your submission.
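
As a toy sketch of the stylometric check being joked about here (the marker substrings and the 3.2% cutoff are invented for illustration, not anything a real grader actually uses):

```python
# Toy illustration: what fraction of a submission's characters is covered
# by "LLM-flavored" marker substrings? The markers and threshold below are
# made up for the joke, not a real detection method.
MARKERS = ["Honestly?", "\u2014"]  # "\u2014" is the em-dash character
THRESHOLD = 0.032                  # the joke's 3.2%

def marker_share(text: str, markers=MARKERS) -> float:
    """Fraction of the text's characters covered by marker substrings."""
    if not text:
        return 0.0
    covered = sum(text.count(m) * len(m) for m in markers)
    return covered / len(text)

def looks_llm_written(text: str) -> bool:
    """Naive verdict: enough marker coverage means 'LLM-flavored'."""
    return marker_share(text) >= THRESHOLD

essay = "Honestly? The evidence \u2014 taken as a whole \u2014 suggests otherwise."
```

Measuring character coverage (rather than raw occurrence counts) keeps the score comparable across submissions of different lengths, which is the only way a fixed percentage threshold like 3.2% would even make sense.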

zveyaeyv3sfye · 3 months ago
The school situation is indeed in serious trouble.

I just read this piece the other day, with countless firsthand accounts from teachers in the US:

https://www.404media.co/teachers-are-not-ok-ai-chatgpt/

JustinCS · 3 months ago
This sounds like an assignment to learn to use LLMs, which as an isolated assignment sounds reasonable. Students should learn how to use tools of all kinds to maximize their effectiveness. It might be a bigger problem if all assignments are done like this but I doubt that's the case.
anal_reactor · 3 months ago
I attended one of the best high schools in the area.

One teacher, when I asked him what I could do about my failing grade, told me "you can go hang yourself". Why was I getting a failing grade in the first place? Well, it was normal for three quarters of the class to be failing his tests. That's just the kind of teacher he was.

The PE teacher thought his role was to teach us discipline through fear. Later I heard a fun story about him getting a new class and thinking that one of the girls was a student, while she was actually the mother of one of the students. She saw the students being yelled at for 45 minutes straight, was personally yelled at herself, and was called retarded. Of course nothing happened to him.

The literature teacher yelled at us so hard that we were literally afraid of talking to her. She hated us, and at some point made that openly clear by being mean on purpose. She never gave me more than "barely passing", even though I got a near-perfect score on the standardized test.

Once she gave a test, threw the papers away, and assigned us grades by how much she liked each student. I brought this story up during a reunion, and was told "she actually prepared us for how we'd be treated in college and adult life".

And that was one of the best schools that always took the most talented students from the region. In this context, having two LLMs talk to each other really isn't a bad thing.

yubblegum · 3 months ago
This is so over the top that you might as well name and shame here to lend credibility to your story. What school in what "area" are you talking about?

Deleted Comment

raverbashing · 3 months ago
Hence my sympathy for teachers is limited. Not zero, for sure. But there's a limit.
alainx277 · 3 months ago
I've heard this often recently, we jokingly call it the "dead classroom theory": the students use LLMs to solve the assignment and the teacher uses an LLM to grade it.
hackyhacky · 3 months ago
I hear you, but we also need to ask: why is it a problem?

Used to be, it was considered critically important that students learn to write in cursive and to multiply 3-digit numbers in their head. I can't do either, and I suspect many folks these days can't either. The world has not ended. I also can't tie a square knot, lasso a steer, or mend a fence.

School assignments have always been a waste of time. Essay-writing is not a critical skill, and I'm not sure much is lost if LLMs do it for us.

SomeoneOnTheWeb · 3 months ago
Actually, the world is more or less ending because of that. People increasingly lack critical thinking skills because they aren't taught them in school.

Sure, multiplying 3-digit numbers is not really useful in everyday life, but the important part is not the knowledge itself; it's the capacity to think and solve problems.

daledavies · 3 months ago
This is the digital divide, where students from more wealthy backgrounds can afford access to better LLM subscriptions, and are able to achieve more "academically".
thatcat · 3 months ago
Pretty sure forming a logical argument is a critical skill, just add it to the list tho.
xnx · 3 months ago
> Essay-writing is not a critical skill,

Essay writing is mainly about organizing thoughts logically. That's pretty important.

drewcoo · 3 months ago
> Essay-writing is not a critical skill

If not that then what? Prompt engineering?

skarlso · 3 months ago
WOW, this blew up. I didn't expect it at all... Thank you so much, everyone, for engaging with it. I know it's a difficult topic. And I might have used the term "skeptics" incorrectly? :)

The school LLM thing is absolutely real. And it's not even a cheap school; it's a school in Denmark. I was very disappointed to learn this and was not expecting it at all. And sadly, it's not an assignment to learn LLMs either. I wish it were something like that.

To the ones saying we've seen this before with Google search, IDEs, etc.: not on this scale. And even then, you didn't _completely_ outsource your ability to think. I _think_ that is super dangerous. Especially for young people, who get into the habit a lot faster. And suddenly, we have people unable to think without asking an LLM for advice. Other disruptive technologies didn't affect thinking on this massive a scale.

I'm not saying "stop AI, booo"; that's obviously not going to happen. And I don't want it to. Maybe the singularity is just an AI away, who knows? However, I'm asking that we absolutely put this thing behind some kind of oversight, especially in schools, for young people who haven't yet developed critical thinking skills. After all, you learned to count on paper before you started using a calculator for a reason.

Again, thank you for this discussion. I'm really grateful for the engagement.

NitpickLawyer · 3 months ago
I feel (hah!) that we've reached a point where instead of focusing on objective things that the LLM-based systems can do, we are wasting energy and "ink" on how they make us feel. And how others using them make us feel. And how Hollywood-style stories about "AI" make us feel. And how people commenting on these things make us feel. And so on.

IMO it's best to focus on objective things that the systems can do today, with maybe a look forward so we can prepare for things they'll be able to do "tomorrow". The rest is too noisy for me. I'm OK with some skepticism, but not outright denial. You can't take an unbiased look at what these things can do today and say "well, yes, but can they do x y z"? That's literally moving the goalposts, and I find it extremely counter productive.

In a way I see a parallel to the self driving cars discussions of 5 years ago. Lots of very smart people were focusing on silly things like "they have to solve the trolley problem in 0.0001 ms before we can allow them on our roads", instead of "let's see if they can drive from point A to point B first". We are now at the point where they can do that, somewhat reliably, with some degree of variability between solutions (waymos, teslas, mercedes, etc). All that talk 5 years ago was useless, IMO.

tw04 · 3 months ago
> instead of "let's see if they can drive from point A to point B first". We are now at the point where they can do that, somewhat reliably,

No, we really aren’t. Let me know when any of those systems can get me from Sioux Falls, South Dakota to Thunder Bay, Ontario without multiple disengagements and we can talk.

Based on what I’ve seen we’re still about 10 years away best case scenario and more likely 20+ assuming society doesn’t collapse first.

I think people in the Bay Area commenting on how well self driving works need to visit middle America and find out just how bad and dangerous it still is…

JustinCS · 3 months ago
When you put it like that, it makes me wonder if we can just stick to using the self-driving cars in the Bay Area and not go to these bad and dangerous places.
krysp · 3 months ago
I agree that a lot of the noise at the moment is an emotional reaction to LLMs, rather than a dispassionate assessment of how useful they are. It's understandable - they are changing the way we work, and for lots of us (software developers), the reason we chose this career was because we _enjoy_ writing code and solving problems.

As with a lot of issues in today's world, each side is talking past the other. It can simultaneously be true that LLMs make writing code less enjoyable / gratifying, and that LLMs can speed up our work.

seadan83 · 3 months ago
IDK, my impression of the self-driving car discussion 5 years ago was more akin to: "let us start designing AI-only roads, get ready for no human drivers - they won't exist in 5 years! AI-only cars will be so great, they will solve traffic congestion, pollution, noise, traffic deaths, and think of all the free time while you are lounging around on your commute!" Seemed like a conversation dominated by people gearing up for that VC money. Meanwhile, actual solutions for any of those problems seem to be languishing. My perspective: it was a lot of distraction away from real solutions, led by a tech-maximalist group that had a LOT to gain from hype.

> I feel (hah!) that we've reached a point where instead of focusing on objective things that the LLM-based systems can do, we are wasting energy and "ink" on how they make us feel.

> best to focus on objective things that the systems can do today, with maybe a look forward so we can prepare for things they'll be able to do "tomorrow".

These two together remind me of a type of sentiment that seems somewhat common here: 'I feel that AI is growing exponentially, therefore we should stop learning to code - because AI will start writing all code soon!'

I think this points to where a lot of skepticism comes from. From this perspective: the AI does barely a fraction of what is claimed, and has improved by even less than claimed, yet these 'feelings' that AI will change everything are driving tons of false predictions. IMO, those feelings are driving VC money to damn MBAs who are plastering AI on everything because they are chasing money.

There is an irony here too: skepticism is simply withholding belief in the absence of evidence. Belief without evidence is irrational. The skeptics are the ones simply asking for evidence, not feelings.

stavros · 3 months ago
I feel like this article makes the same tired point I see every time a new technology comes along: "but if we don't know how to shoe our own horses any more because we got cars, soon nobody will know how to shoe horses!"

Yeah. And that's OK. Because nobody will need to shoe horses any more!

If I forget how to write tests, what's the problem? It means that I never need to write tests any more. If I do need to write tests, for some reason (maybe because the LLM is bad at it) then I won't forget how to!

That's how atrophy works: the skills that atrophy are, by definition, the ones you no longer need at all, and this argument does this sleight of hand where it goes "don't let the skills you don't need atrophy, because you need them!".

Well, do I need them, or not?

ianks · 3 months ago
> I feel like this article makes the same tired point I see every time a new technology comes

I sympathize with this viewpoint, but I do think it’s important to recognize the differences here. One thing I’ve noticed from the vibe-code coalition is a push towards offloading _cognition_. I think this is a novel distinction from industrial innovations that more or less optimize manual labor.

You could argue that moving from assembly to python is a form of cognition offloading, but it’s not quite the same in my eyes. You are still actively engaged in thinking for extended periods of time.

With agentic code bots, active thinking just isn’t the vibe. I’m not so sure that style of half-engaged work has positive outcomes for mental health and personal development (if that’s one of your goals).

mixermachine · 3 months ago
The big problem is that LLMs don't just replace "shoeing your horse" or some other singular task. If you let them, they can replace every critical thought or mental effort you throw at them, often in a "good enough" (or convincing-enough) way. Especially for learners this is very bad, because they will never learn how to arrive at a proper thought process on their own. How would they even be able to check the output?

We basically are training prompt engineers without post validation now.

ImHereToVote · 3 months ago
You are both right. You won't need the skills. The problem is that you will want to eat and have housing.

Maybe we get UBI. But if Jeff Bezos and some friends own all of the production, what would they do with your UBI dollars? Where could he spend them? He can make his own yachts and soldiers.

stavros · 3 months ago
You're making a logical leap from "you don't need the skills" to "there's someone willing to pay you for them".

Who is this person who's paying for useless skills, and doesn't that go against the definition of "need"?

ang_cire · 3 months ago
> "but if we don't know how to shoe our own horses any more because we got cars, soon nobody will know how to shoe horses!"

No, this would be more akin to saying, "if we don't know how to change our car's oil anymore because we have a robot that does it, soon nobody will know, while still being reliant on our cars."

For your analogy to work, we would have to be moving away from code entirely, as we moved away from horses.

> It means that I never need to write tests any more. If I do need to write tests, for some reason (maybe because the LLM is bad at it) then I won't forget how to!

Except that once you forget, you now would have to re-learn it, and that includes potentially re-learning all the pitfalls and edge cases that aren't part of standard training manuals. And you won't be able to ask someone else, because now they all don't know either.

tl;dr coding is a key job function of software developers. Not knowing how to do any key part of your job without relying on an intermediary tool, is a very bad thing. This already happens too much, and AI is just firing the trend into the stratosphere.

stavros · 3 months ago
> we don't know how to change our car's oil anymore because we have a robot that does it

OK, are we worried that all the robots will somehow disappear? Why would I have to change my own oil, ever, if the robot did it as well as I did? If it doesn't do it as well as I did, I'm still doing it myself.

thefz · 3 months ago
> I forget how to write tests, what's the problem?

The day you need to troubleshoot one, or simply understand it, is the problem.

yubblegum · 3 months ago
Critical thinking ability is not remotely akin to a mode of transport from a menu of alternatives.

> do I need them, or not?

Based on this comment of yours, the answer is a resounding yes, you do.

stavros · 3 months ago
Then they will never atrophy and we have no problem.
sokoloff · 3 months ago
How many times per year do you hear people say something so obviously flawed that you first wonder how they feed themselves, only to realize with horror that no one else seems to notice, and instead everyone is transmitting and discussing the idea as if it were some grand insight they'd just never thought of before?

Critical thinking skills are incredibly useful, and if they're never taught, or are allowed to atrophy, you get, well, this...<gesturing dejectedly around with my cane>

People who don't understand basic science, arithmetic, and reading comprehension at what was once an 8th-grade level can be easily tricked in everyday situations, on matters that harm themselves, their family, and their community. If we dig deeply enough, I bet we could find some examples on this theme in modern times.

stavros · 3 months ago
If they're useful, they won't atrophy. It's as simple as that.
jplusequalt · 2 months ago
Had to drop in here to call you out for completely missing the point of the article.

The whole point is that these LLMs build up a dependence, because your critical thinking skills need not apply when a data center can average together a passable answer for you.

stavros · 2 months ago
Again, as I've said many times in the comments here, if you need something, you'll keep it. If you don't keep it, it means you don't need it. If your critical skills atrophy, it's because you never need to use them, and if you never need to use them, what's the problem?
JustinCS · 3 months ago
I agree with this, it reminds me of how most people don't need to write assembly anymore, but it still helps with certain projects to have that understanding of what's going on.

So some people do develop that deeper understanding, when it's helpful, and they go on to build great things. I don't see why this will be different with AI. Some will rely too much on AI and possibly create slop, and others will learn more deeply and get better results. This is not a new phenomenon.

stavros · 3 months ago
Indeed it's not a new phenomenon, so why are we fretting about it? The people who were going to understand (assembly|any code) will understand it, and go on to build great things, and everyone else will do what we've always done.
ImHereToVote · 3 months ago
This stops making sense as soon as the prompter can be automated. Who is going to pay for your artisanal software? Who will be able to afford it?
TOMDM · 3 months ago
I don't think the original post took issue with what people enjoyed doing, I think it took issue with people's understanding of what is even possible with the current tech.
NitpickLawyer · 3 months ago
I agree! I see a lot of people who "have tried them and they suck". But when you dig deep, they barely tried a web interface for programming, or the OG chatgpt for writing, and that's basically it. They aren't willing to try again, and keep up to date. Things are moving incredibly fast, and the skeptics are adding noise without even being informed about current SotA capabilities.
laserbeam · 3 months ago
Agreed. The previous article made me think “this tool is probably more potent than I thought and I should give it a try”. It did not make me drop my concerns about AI in general.
thefz · 3 months ago
I can't shake the belief that the more one finds LLMs useful, the less valuable their work already is.

Something that has no guarantee, not even a reassurance, of being correct should not be trusted with any meaningful work.

mexicocitinluez · 3 months ago
I need you to understand that software development with LLMs isn't about writing a prompt to spit out your entire app.
thefz · 3 months ago
I would not trust a single instruction that hasn't been reasoned over by a real person in my codebase. YMMV

Dead Comment

antics · 3 months ago
The legacy of the electric motor is not textile factories that are 30% more efficient because we point-replaced steam engines. It's the assembly line. The workforce "lost" the skills to operate the textile factories, but in turn, the assembly line made the workflow of goods production vastly more efficient. Industrial abstraction has been so successful that today, a small number of factories (e.g., TSMC) have become nearly-existential bottlenecks.

That is the aspiration of AI software tools, too. They are not coming to make us 30% more efficient, they are coming to completely change how the software engineering production line operates. If they are successful, we will write fewer tests, we will understand less about our stack, and we will develop tools and workflows to manage that complexity and risk.

Maybe AI succeeds at this stated objective, and maybe it does not. But let's at least not kid ourselves: this is how it has always been. We are in the business of abstracting things away so that we have to understand less to get things done. We grumbled when we switched from assembly to high-level languages. We grumbled when we switched from high-level languages to managed languages. We grumbled when we started programming enormous piles of JavaScript and traveled farther from the OS and the hardware. Now we're grumbling about AI, and you can be sure that we're going to grumble about whatever is next, too.

I understand this is going to ruffle a lot of feathers, but I don't think Thomas and the Fly team actually have missed any of the points discussed in this article. I think they fully understand that software production is going to change and expect that we will build systems to cope with abstracting more, and understanding less. And, honestly, I think they are probably right.

ordu · 3 months ago
These are just speculations. If we rely on LLMs, then something happens. True, something will happen, but will it be bad? A calculator ruins people's ability to do mental arithmetic; is that bad? I'm not sure why I'd need to do mental arithmetic. Writing ruins memory. Is that bad? Probably, but I know of no one who would reject writing in order to exercise their memory.

Moreover, if it turns out to be as bad as the author thinks, we can just spend something like 15 minutes per day training the abilities that go unused with LLMs. And LLMs could help us do it, so it would be more like 15 min/day, not 2 hours/day.

> My son is in school. And the teacher literally said they need to use an LLM and the teacher is using an LLM to grade/check it. This is ridiculous.

I don't know if it is ridiculous. I like LLMs; they really help with learning things. I mean, yes, LLMs can be detrimental to learning if, instead of doing the task at hand, the student asks the LLM to do it. But they can be a real help, because with an LLM you can do tasks you couldn't do without it. You can do tasks faster, and do more of them. An LLM can be bad for learning or it can be good; it depends on how it is used.

The teacher using an LLM is very interesting. When I was at school, I thought I was smarter than my teachers, and I didn't like them; some of them I just hated. If they had been using LLMs, I'm sure I would have figured out ways to do tasks in a way that would make their LLMs hallucinate and grade my work wrongly. You can always complain and make the teacher check it without an LLM. Let them spend an additional 10 minutes grading, and do it in front of the other pupils, so teachers would need to admit their mistakes publicly. I would definitely do it. I did something like that with math: I would go to any length to find some non-standard solution to a problem, one that couldn't be checked by looking at the numbers in specific places of the solution. It made the teacher make mistakes while grading, and I just loved it. But if I'd had an LLM... I would definitely have done it with literature and history too. My math teacher was not so bad after all, but I hated the literature and history ones.