dtnewman · 9 months ago
> A common question is: “how much are students using AI to cheat?” That’s hard to answer, especially as we don’t know the specific educational context where each of Claude’s responses is being used.

I built a popular product that helps teachers with this problem.

Yes, it's "hard to answer", but let's be honest... it's a very very widespread problem. I've talked to hundreds of teachers about this and it's a ubiquitous issue. For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".

I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot it (nevermind the fact that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but also hurts the learning process.
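
To give a sense of how little friction is left, here is a minimal sketch of roughly the kind of thing such a prompt hands back. This is purely illustrative, not actual ChatGPT output; the names (btree_node, btree_search, MIN_DEGREE) are my own, and it shows only the node layout and search, leaving out insertion with node splitting:

    /* Hypothetical sketch of a b-tree skeleton in C: node layout plus search. */
    #include <stdbool.h>
    #include <stddef.h>

    #define MIN_DEGREE 3                      /* each node holds MIN_DEGREE-1 .. 2*MIN_DEGREE-1 keys */

    struct btree_node {
        int keys[2 * MIN_DEGREE - 1];         /* keys kept in sorted order */
        struct btree_node *child[2 * MIN_DEGREE];
        int nkeys;                            /* number of keys currently stored */
        bool leaf;
    };

    /* Return the node containing k, or NULL if k is absent. */
    struct btree_node *btree_search(struct btree_node *x, int k)
    {
        while (x) {
            int i = 0;
            while (i < x->nkeys && k > x->keys[i])
                i++;                          /* find first key >= k */
            if (i < x->nkeys && k == x->keys[i])
                return x;                     /* found */
            if (x->leaf)
                return NULL;                  /* nowhere left to descend */
            x = x->child[i];                  /* descend into the matching subtree */
        }
        return NULL;
    }

Getting the skeleton handed to you like this is exactly the struggle being skipped: the splitting logic, the invariants, and the off-by-one mistakes are where the learning used to happen.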

enjo · 9 months ago
> it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".

My wife is an accounting professor. For many years her battle was with students using Chegg and the like. They would submit roughly correct answers, but because she would rotate the underlying numbers, their answers were always wrong in a way that proved they had cheated. This made up 5-8% of her students.

Now she receives a parade of absolutely insane answers from a much larger proportion of her students (she is working on some research around this, but it's definitely more than 30%). When she asks students to retrace how they arrived at these pretty wild answers, they are never able to articulate what happened. They are simply throwing her questions at LLMs and submitting the output. It's not great.

Zanfa · 9 months ago
ChatGPT is laughably terrible at double entry accounting. A few weeks ago I was trying to use it to figure out a reasonable way to structure accounts for a project given the different business requirements I had. It kept disappearing money when giving examples. Pointing it out didn’t help either, it just apologized and went on to make the same mistake in a different way.
samuel · 9 months ago
I guess these students don't pass, do they? I don't think that's a particularly pressing concern. It may take a bit longer, but they will learn the lesson (or drop out).

I'm more worried about those who will learn to solve the problems with the help of an LLM but can't do anything without one. Those will go under the radar, unnoticed, and the question is: how bad is that, actually? I would say very bad, but then I realize I'm a pretty useless driver without a GPS (once I get out of my hometown). That's the hard question, IMO.

DSingularity · 9 months ago
This is now reality -- fighting to change the students is a losing battle. Besides, in terms of normalizing grade distributions, this is not that complicated to solve.

Target the cheaters with pop quizzes. The prof can randomly choose 3 questions from the assignments. If students can't score enough marks on 2 of the 3, they are dealt a huge penalty. Students who actually work through the problems will have no trouble clearing that bar. Students who lean irresponsibly on LLMs will lose their marks.

el_benhameen · 9 months ago
I wonder to what extent this is students who would have stuck it out now taking the easy way and to what extent it’s students who would have just failed now trying to stick it out.
woodrowbarlow · 9 months ago
my partner teaches high school math and regularly gets answers with calculus symbols (none of the students have taken any calculus). these students aren't putting a single iota of thought into the answers they're getting back from these tools.
iNic · 9 months ago
The solution is making all homework optional and having an old-school end of semester exam.
bko · 9 months ago
When modern search became more available, a lot of people said there's no point in rote memorization as you can just do a Google search. That's more or less accepted today.

Whenever we have a new technology there's a response "why do I need to learn X if I can always do Y", and more or less, it has proven true, although not immediately.

For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc

Not that these aren't noble things or worth doing, but they won't impact your life too much if you're not interested in penmanship, spelling, or cartography.

I believe LLMs are different (I am still stuck in the moral panic phase), but I think my children will have a different perspective (similar to how I feel about memorizing poetry and languages without garbage collection). So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"

kibwen · 9 months ago
The irreducible answer to "why should I" is that not learning it makes you ever more reliant on a teetering tower of fragile, interdependent supply chains furnished by for-profit companies who are all too eager to rake you over the coals to fulfill basic cognitive functions.

Like, Socrates may have been against writing because he thought it made your memory weak, but at least I, an individual, am perfectly capable of manufacturing my own writing implements with a modest amount of manual labor and abundantly-available resources (carving into wood, burning wood into charcoal to write on stone, etc.). But I ain't perfectly capable of doing the same to manufacture an integrated circuit, let alone a digital calculator, let alone a GPU, let alone an LLM. Anyone who delegates their thought to a corporation is permanently hitching their fundamental ability to think to this wagon.

johndough · 9 months ago
Use it or lose it. With the invention of the calculator, students lost the ability to do arithmetic. Now, with LLMs, they lose the ability to think.

This is not conjecture by the way. As a TA, I have observed that half of the undergraduate students lost the ability to write any code at all without the assistance of LLMs. Almost all use ChatGPT for most exercises.

Thankfully, cheating technology is advancing at a similarly rapid pace. Glasses with integrated cameras, WiFi and heads-up display, smartwatches with polarized displays that are only readable with corresponding glasses, and invisibly small wireless ear-canal earpieces to name just a few pieces of tech that we could have only dreamed about back then. In the end, the students stay dumb, but the graduation rate barely suffers.

I wonder whether pre-2022 degrees will become the academic equivalent to low-background radiation steel: https://en.wikipedia.org/wiki/Low-background_steel

wrp · 9 months ago
"Technology can do X more conveniently than people, so why should children practice X?" has been a point of controversy in education at least since pocket calculators became available.

I try to explain by shifting the focus from neurological to musculoskeletal development. It's easy to see that physical activity promotes development of children's bodies. So although machines can aid in many physical tasks, nobody is suggesting we introduce robots to augment PE classes. People need to recognize that complex tasks also induce brain development. This is hard to demonstrate but has been measured in extensive tasks like learning languages and music performance. Of course, this argument is about child development, and much of the discussion here is around adult education, which has some different considerations.

OptionOfT · 9 months ago
The problem with GPS is that you never learn to orient yourself. You don't learn to have a sense of place, direction or elapsed distance. [0]

As to writing, just the action of writing something down with a pen, on paper, has been proven to be better for memorization than recording it on a computer [1].

If we're not teaching these basic skills because an LLM does them better, how do we learn to be skeptical of the LLM's output? How do we validate it?

How do we bolster ourselves against corporate influences when asking which of 2 products is healthier? How do we spot native advertising? [2]

[0]: https://www.nature.com/articles/531573a

[1]: https://www.sciencedirect.com/science/article/abs/pii/S00016...

[2]: Example: https://www.nytimes.com/paidpost/netflix/women-inmates-separ...

light_hue_1 · 9 months ago
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc.

I'm the polar opposite. And I'm an AI researcher.

The reason you can't answer your kid when he asks about LLMs is because the original position was wrong.

Being able to write isn't optional. It's a critical tool for thought. Spelling is very important because you need to avoid confusion. If you can't spell, no spell checker can save you when it inserts the wrong word. And this only gets far worse the more technical the language is. Maps are crucial too. Sometimes the best way to communicate is to draw a map. In many domains, like aviation, maps are everything; you literally cannot progress without them.

LLMs are no different. They can do a little bit of thinking for us and help us along the way. But we need to understand what's going on to ask the right questions and to understand their answers.

noitpmeder · 9 months ago
This is an insane take.

The issue is that, when presented with a situation that requires writing legibly, spelling well, or reading a map, WITHOUT their AI assistants, they will fall apart.

The AI becomes their brain, such that they cannot function without it.

I'd never want to work with someone who is this reliant on technology.

II2II · 9 months ago
Perhaps that mode of thinking is wrong, even if it is accepted.

Take rote memorization. It is hard. It sucks in so many ways (just because you memorized something doesn't mean you can reason using that information). Yet memorization also provides the foundations for growth. At a basic level, how can you perform anything besides trivial queries if you don't know what you are searching for? How can you assess the validity of a source if you don't know the fundamentals? How can you avoid falling prey to propaganda if your only knowledge of a subject is what is in front of your face? None of that is to say that we should dismiss search and depend upon memorization. We need both.

I can't tell you what to say to your children about LLMs. For one thing, I don't know what is important to them. Yet it is important to remember that it isn't an either-or thing. LLMs are probably going to be essential to manage the profoundly unmanageable amount of information our world creates. Yet it is also important to remember that they are like the person who memorizes but lacks the ability to reason. They may be able to impress people with their fountain of facts, yet they will be unable to make a mark on the world, since they lack the ability to create anything unique.

palmotea · 9 months ago
> When modern search became more available, a lot of people said there's no point in rote memorization as you can just do a Google search. That's more or less accepted today.

And those people are wrong, in a similar way to how it's wrong to say: "There's no point in having very much RAM, as you can just page to disk."

It's the cognitive equivalent of becoming morbidly obese (another popular decision in today's world).

ViscountPenguin · 9 months ago
I think the biggest issue with LLMs is basically just the fact that we're finally coming to the end of the long tail of human intellectual capability.

With previous technological advancements, humans had places to intellectually "flee", and in fact, previous advancements were often made for the express purpose of freeing up time for higher level pursuits. The invention of computers, for example, let mathematicians focus on much higher level skills (although even there an argument can be made that something has been lost with the general decrease in arithmetic abilities among modern mathematicians).

Large language models don't move humans further up the value chain, though. They kick us off of it.

I hear lots of people proselytizing wonderful futures where humans get to focus on "the problems that really matter", like social structures, or business objectives; but there's no fundamental reason that large language models can't replace those functions as well. Unlike, say, a Casio, which would never be able to replace a social worker no matter how hard you tried.

CivBase · 9 months ago
Why should you learn how to add when you can just use a calculator? We've had calculators for decades!

Because understanding how addition works is instrumental to understanding more advanced math concepts. And being able to perform simple addition quickly, without a calculator is a huge productivity boost for many tasks.

In the world of education and intellectual development, it's not about getting the right answer as quickly as possible. It's about mastering simple things so that you can understand complicated things. And oftentimes mastering a simple thing requires you to manually do things which technology could automate.

bcrosby95 · 9 months ago
> So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"

It's been my experience that LLMs are only better than me at stuff I'm bad at. They're noticeably worse than me at things I'm good at. So the answer to your question depends: can your child get good at things while leaning on an LLM?

I don't know the answer to this. Maybe schools need to expect more from their students with LLMs in the picture.

delusional · 9 months ago
"More or less" is doing a lot of work there. School, at least where I am, still spends the first year getting children to memorize the order of the numbers from 1-20 and if there's an even or odd number of a thing on a picture.

Do you google if 5 is less than 6 or do you just memorize that?

If you believe that creativity is not based on a foundation of memorization and experience (which is just memorization) you need to reflect on the connection between those.

malux85 · 9 months ago
> Why should I learn to do X if I can just ask an LLM and it will do it better than me

The same way you answer - "Why should I memorise this if I can always just look it up"

Because your perceptual experience is built upon your knowledge and experiences. The entire way you see the universe is altered based on these things, including what you see through your eyes, what you decide is important and what you decide to do.

The goal of life is not always "simply do as little as possible" or "offload as much work as possible". A lot of the time it includes struggling through the fundamentals so that you become a greater version of yourself. It is not the completed task that we desire; it is who you became while you did the work.

victor106 · 9 months ago
I thought the same as you. But I think not developing those skills will come back and bite you at some point.

For instance your point about: > reading a map to get around (GPS)

https://www.statnews.com/2024/12/16/alzheimers-disease-resea...

After reading the above it dawned on me that the human brain needs to develop spatial awareness and not using that capability of the brain very slowly shuts it off. So I purposefully turn off the gps when I can.

I think not fully developing each of those abilities might have some negative effects that will be hard to diagnose.

Wowfunhappy · 9 months ago
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc

However, I am going to hazard a guess that you still care about your child's ability to do arithmetic, even though calculators make that trivial.

And if I'm right, I think it's for a good reason—learning to perform more basic math operations helps build the foundation for more advanced math, the type which computers can't do trivially.

I think this applies to AI. The AI can do the basic writing for you, but you will eventually hit a wall, and if all you've ever learned is how to type a prompt into ChatGPT, you won't ever get past that wall.

----

Put another way:

> So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"

"Because eventually, you will be able to do X better than any LLM, but it will take practice, and you have to practice now."

_carbyau_ · 9 months ago
>>>For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc

Not that these aren't noble things or worth doing, but they won't impact your life too much if you're not interested in penmanship, spelling, or cartography. <<<

For me it is the second order benefits, notably the idea of "attention to detail" and "a feel for the principles". The principles of each activity being different: writing -> fine motor control, spelling -> word choice/connotation, map -> sense of direction, (my own insert here) money handling -> cost of things

All of them involve "attention to detail" because that's what any activity is - paying attention to it.

But having built up the experience in paying attention to [xyz], you can now be capable when things go wrong.

I.e. catching a disputable transaction on the credit card, noticing when the shop clerk says "No Returns" even though their policy says otherwise, or un-losting yourself when the phone runs out of battery in the city.

Notably, you don't have to be trained for the details in traditional ways like writing the same sentence 100 times on a piece of paper. Learning can be fun and interesting.

Children can write letters to their friends well before they get their own phone. Geocaching/treasure hunts(hand drawn mud maps!)/orienteering for map use.

As for LLMs ... well, currently "attention to detail" is vital to spot the (handwave number) 10% of the time when it goes wrong. In the future LLMs may be better.

But if you want to be better than your peers at any given thing - you will need an edge somewhere outside of using an LLM. Yet still, spelling/word choice/connotations are especially linked to using an LLM currently.

Knowing how to "pay attention to detail" when it counts - counts.

dylan604 · 9 months ago
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc

I don't know. I really feel like the auto-correct features are out to get me. So many times I want to say "in" yet it gets corrected to "on", or vice-versa. I also feel like it does the same to me with they're/their/there. Over the past several iOS/macOS updates, I feel like I've either gotten dumber and no longer do english gooder, or I'm getting tagged by predictive text nonsense.

quantumHazer · 9 months ago
Universities still teach you calculus and real analysis even though Wolfram Alpha exists. It boils down to your willingness to learn something. An LLM can't understand things for you. I'm "early gen Z" and I write code without LLMs because I find data structures and algorithms very interesting and I want to learn the concepts, not because I'm in love with the syntax of C or Rust (I love the syntax of C btw).
riohumanbean · 9 months ago
Why have children learn to walk? They're better off learning the newest technology of hoverboards and not getting left behind!
jplusequalt · 9 months ago
>children will have a different perspective

Children will lack the critical thinking for solving complex problems, and even worse, won't have the work ethic for dealing with the kinds of protracted problems that occur in the real world.

But maybe that's by design. I think the ownership class has decided productivity is more important than societal malaise.

nyeah · 9 months ago
Spell check isn't really adequate. You get a page full of correctly spelled words, but they're the wrong words.
globnomulous · 9 months ago
> When modern search became more available, a lot of people said there's no point in rote memorization as you can just do a Google search. That's more or less accepted today.

It absolutely isn't.

JSR_FDED · 9 months ago
Let your children watch the movie Idiocracy - it’s more eloquent than you’ll ever be in answering that question.
mock-possum · 9 months ago
Even if you use a tool to do work, you still have to understand how your work will be checked to see whether it meets expectations.

If the expectation is X, and your tool gives you Y, then you’ve failed - no matter if you could have done X by hand from scratch or not, it doesn’t really matter, because what counts is whether the person checking your work can verify that you’ve produced X. You agreed to deliver X, and you gave them Y instead.

So why should you learn to do X when the LLM can do it for you?

Because unless you know how to do X yourself, how will you be able to verify whether the LLM has truly done X?

Your kid needs to learn to understand what the person grading them is expecting, and deliver something that meets those expectations.

That sounds like so much bullshit when you’re a kid, but I wish I had understood it when I was younger.

AlexandrB · 9 months ago
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc

What I don't like are all the hidden variables in these systems. Even GPS, for example, is making some assumptions about what kind of roads you want to take and how to weigh different paths. LLMs are worse in this regard because the creators encode a set of moral and stylistic assumptions/dictates into the model and everybody who uses it is nudged into that paradigm. This is destructive to any kind of original thought, especially in an environment where there are only a handful of large companies providing the models everyone uses.

nkrisc · 9 months ago
> So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"

1. You won’t always have an LLM. It’s the same reason I still have at least my wife’s phone number memorized.

2. So you can learn to do it better. See point 1.

I wasn’t allowed to use calculators in first and second grade when memorizing multiplication tables, even though a calculator could have finished the exercise faster than me. But I use that knowledge to this day, every day, and often I don’t have a calculator (my phone) handy.

It’s what I tell my kids.

foxglacier · 9 months ago
Your child perhaps shouldn't learn things that computers can do. But they should learn something to make themselves more useful than every uneducated person. I'm not sure schools are doing much good anymore teaching redundant skills. Without any abilities beyond the default, they'll grow up to be poor. I don't know what that useful education is, but I expect something in the way of thinking skills, and perhaps even giant piles of knowledge to apply that thinking to.
aprilthird2021 · 9 months ago
> there's no point in rote memorization as you can just do a Google search. That's more or less accepted today.

It's not true even though it's accepted. Rote memorization has a place in an education. It does strengthen learning and allow one to make connections between the things seen presently and things remembered, among other things.

BOOSTERHIDROGEN · 9 months ago
you will benefit from the beauty of appreciation, lad, just hang on a little bit longer. It is beautifully explained in this essay https://www.astralcodexten.com/p/the-colors-of-her-coat
dingnuts · 9 months ago
> That's more or less accepted today.

Bullshit! You cannot do second order reasoning with a set of facts or concepts that you have to look up first.

Google Search made intuition and deep understanding and encyclopedic knowledge MORE important, not less.

People will think you are a wizard if you read documentation and bother to remember it, because they're still busy asking Google or ChatGPT while you're happily coding without pausing

fransje26 · 9 months ago
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc

That sounds like setting-up your child for failure, to put it bluntly.

How do you want to express a thought clearly if you already fail at the stage of thinking about words clearly?

You start with a fuzzy understanding of words, which you delegated to a spellchecker, added to a fuzzy understanding of writing, which you've delegated to a computer, combined with a fuzzy memory, which you've delegated to a search engine, and you expect that not to impact your child's ability to create articulate thoughts and navigate them clearly?

To add irony to the situation, the physical navigation skills have, themselves, been delegated to a GPS.

Brains are like muscles, they atrophy when not used.

Reverse that course before it's too late, or suffer (and have someone else suffer) the consequences.

andai · 9 months ago
>Why should I learn to do X if I can just ask an LLM and it will do it better than me

This may eventually apply to all human labor.

I was thinking, even if they pass laws to mandate companies employ a certain fraction of human workers... it'll be like it already is now: they just let AI do most of the work anyway!

HDThoreaun · 9 months ago
It’s all about critical thinking. The answer to your kid is that LLMs are a tool and until they run the entire economy there will still need to be people with critical thinking skills making decisions. Not every task at school helps hone critical thinking but many of them do.

Suppafly · 9 months ago
>So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"

Realistically it comes down to the idea that being an educated individual that knows how to think is important for being successful, and learning in school is the only way we know to optimize for that, even if it's likely not the most efficient way to do so.

Retric · 9 months ago
The scope of what’s useful to know changes with tools, but having a bullshit detector requires actually knowing some things and being able to reason about the basics.

It’s not that LLMs are particularly different; it’s that people are less able to determine when they are messing up. A search engine fails and you notice; an LLM fails and your boss, customer, etc. notices.

whatshisface · 9 months ago
I don't think memorizing poetry fits your picture. Nobody ever memorized poetry so that they could answer questions about it.
sorokod · 9 months ago
For the same reason you should learn how to walk in a world that has utility scooters.
milesrout · 9 months ago
>When modern search became more available, a lot of people said there's no point in rote memorization as you can just do a Google search. That's more or less accepted today.

Au contraire! It is quite wrong and was wrong then too. "Rote memorisation" is a slur for knowledge. Knowledge is still important.

Knowledge is the basis for skill. You can't have skill or understanding without knowledge because knowledge is illustrative (it gives examples) and provides context. You can know abstract facts like "addition is abelian" but that is meaningless if you can't add. You can't actually program if you don't know the building blocks of code. You can't write a C program if you have to look up the function signature of read(2) and write(2) every time you need to use them.

You don't always have access to Google, and its results have declined precipitously in quality in recent years. Someone relying on Google as their knowledge base will be kicking themselves today, I would claim.

It is a bit like saying you don't need to learn how to do arithmetic because of calculators. It misses that learning how to do arithmetic isn't just important for the sake of being able to do it, but for the sake of building a comfort with numbers, building numerical intuition, building a feeling for maths. And it will always be faster to simply know that 6x7 is 42 than to have to look it up. You use those basic arithmetical tasks 100 times every time you rearrange an equation. You have to be able to do them immediately. It is analogous.

Note that I have used illustrative examples. These are useful. Knowledge is more than knowing abstract facts like "knowledge is more than knowing abstract facts". It is about knowing concrete things too, which highlight the boundaries of those abstract facts and illustrate their cores. There is a reason law students learn specific cases and their facts and not just collections of contextless abstract principles of law.

>For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers),

Writing legibly is important for many reasons. Note taking is important and often isn't, and sometimes can't be, done with a computer. It is also part of developing fine motor skills generally.

>spell very well (spell check keeps us professional),

Spell checking can't help with confusables like to/two/too, affect/effect, etc. and getting those wrong is much more embarrassing than writing "embarasing" or "parralel". Learning spelling is also crucial because spelling is an insight into etymology which is the basis of language.

>reading a map to get around (GPS), etc

Reliance on GPS means never building a proper spatial understanding. Many people that rely on GPS (or being driven around by others) never actually learn where anything is. They get lost as soon as they don't have a phone.

>but I think my children will have a different perspective (similar to how I feel about memorizing poetry and languages without garbage collection).

Memorising poetry is a different sort of thing--it is a value judgment not a matter of practicality--but it is valuable in itself. We have robbed generations of children of their heritage by not requiring them to learn their culture.

an_aparallel · 9 months ago
This is how we end up with people who can't write legibly, can't smell bad maths (in the news/articles/ads), can't change tires, have no orienteering skills or sense of direction, and have memories like swiss cheese. Trust the oracle, son. /s

I think all of the above do one thing brilliantly: build self-confidence.

It's easy to get bullshitted if what you're able to hold in your head is effectively nothing.

srveale · 9 months ago
IMO it's so easy to ChatGPT your homework that the whole education model needs to flip on its head. Some teachers already do something like this, it's called the "Flipped classroom" approach.

Basically, a student's marks depend mostly (only?) on what they can do in a setting where AI is verifiably unavailable. It means less class time for instruction, but students have a tutor in their pocket anyway.

I've also talked with a bunch of teachers and a couple admins about this. They agree it's a huge problem. By the same token, they are using AI to create their lesson plans and assignments! Not fully of course, they edit the output using their expertise. But it's funny to imagine AI completing an AI assignment with the humans just along for the ride.

The point is, if you actually want to know what a student is capable of, you need to watch them doing it. Assigning homework has lost all meaning.

sixpackpg · 9 months ago
The education model at high school and undergraduate university has not changed in decades; I hope AI leads to a fundamental change. Homework being made easy by AI is a symptom of the real issues: being taught by uni students who learned the curriculum last year, lecturers who only lecture out of obligation and haven't changed a slide in years, lecturers who refuse to upload lecture recordings or slides. Those are just a few glaring issues, and the sad part is that these are rather superficial, easy-to-fix cases of poor teaching.

I feel AI has just revealed how poor the teaching is, though I don't expect any meaningful response from teaching establishments. If anything, AI will lead to bigger differences in student learning. Those who learn core concepts and to think critically will become more valuable, and the people who just AI everything will become nearly worthless.

Unis will release some handbook policy changes to the press and will continue to pump out the bell curve of students and get paid.

hackyhacky · 9 months ago
> it's called the "Flipped classroom" approach.

Flipped classroom just means the lecture content is consumed at home (e.g., recorded videos) and the exercises are done in class with the teacher present.

> Basically, a student's marks depend mostly (only?) on what they can do in a setting where AI is verifiably unavailable.

This is called "proctored exams" and it's been pretty common in universities for a few centuries.

None of this addresses the real issue, which is whether teachers should be preventing students from using AIs.

vonneumannstan · 9 months ago
>Not fully of course, they edit the output using their expertise

Surely this is sarcasm, but really your average schoolteacher is now a C student Education Major.

aj7 · 9 months ago
I’m a physicist. I can align and maximize ANY laser. I don’t even think when doing this task. Long hours of struggle, 50 years ago. Without struggle there is nothing. You can bullshit your way in. But you will be ejected.
ketzo · 9 months ago
barely related to your point but “I can align and maximize ANY laser” is such an incredibly specific flex, I love it
marksbrown · 9 months ago
A master blacksmith can shoe a horse an' all. Laser alignment is also a solved problem with a machine. Just because something can be done by hand does not mean it has any intrinsic value.
hobo_in_library · 9 months ago
The challenge is that while LLMs do not know everything, they are likely to know everything that's needed for your undergraduate education.

So if you use them at that level you may learn the concepts at hand, but you won't learn _how to struggle_ to come up with novel answers. Then later in life when you actually hit problem domains that the LLM wasn't trained in, you'll not have learned the thinking patterns needed to persist and solve those problems.

Is that necessarily a bad thing? It's mixed:

- You lower the bar for entry for a certain class of roles, making labor cheaper and problems easier to solve at that level.

- For more senior roles that are intrinsically solving problems without answers written in a book or a blog post somewhere, you need to be selective about how you evaluate the people who are ready to take on that role.

It's like taking the college weed-out classes and shifting them to people in the middle of their careers.

Individuals who can't make the cut will find themselves stagnating in their roles (but it'll also be easier for them to switch fields). Those who can meet the bar might struggle but can do well.

Business will also have to come up with better ways to evaluate candidates. A resume that says "Graduated with a degree in X" will provide less of a signal than it did in the past

psygn89 · 9 months ago
Agreed, the struggle often leads us to poke and prod an issue from many angles until things finally click. It lets us think critically. In that journey you might've learned other related concepts, which further solidify your understanding.

But when the answer flows out of thin air right in front of you with AI, you get the "oh duh" or "that makes sense" moments and not the "a-ha" moment that ultimately sticks with you.

Now does everything need an "a-ha" moment? No.

However, I think core concepts and fundamentals need those "a-ha" moments to build a solid and in-depth foundation of understanding to build upon.

porridgeraisin · 9 months ago
Yep. People love to cut down this argument by saying that a few decades ago, people said the same thing about calculators. But that was a problem too! People losing a large portion of their mental math faculty is definitely a problem. If mental math were exercised daily, we wouldn't see such obviously BS numbers in every kind of reporting (media, corporate, tech benchmarks) that people don't bat an eye at. How much the problem is _worth_, though, is what matters for adoption of these kinds of tech. Clearly, the problem above wasn't worth much. We now have to wait and see how much the "did not learn through cuts and scratches" problem is worth.
taftster · 9 months ago
Absolutely this. AI can help reveal solutions that weren't seen. An a-ha moment can be as instrumental to learning as the struggle that came before.

Academia needs to embrace this concept and not try to fight it. AI is here, it's real, it's going to be used. Let's teach our students how to benefit from its (ethical) use.

yapyap · 9 months ago
> I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot it (nevermind the fact that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but also hurts the learning process.

In the end, the willingness to struggle will set apart the truly great Software Engineer from the AI-crutched. Of course, this will mostly not be rewarded: when a company looks at two people and sees "passable" code from both, but one is way more "productive" with it (the AI-crutched engineer), they'll initially appreciate that one more.

But in the long run they won't be able to explain the choices made when creating the software. We will see a retreat from this type of coding when the first few companies' security falls apart like a house of cards due to AI reliance.

It’s basically the “instant gratification vs delayed gratification” argument but wrapped in the software dev box.

JohnMakin · 9 months ago
I don't wholly disagree with this post, but I'd like to add a caveat, observing my own workflow with these tools.

I guess I'd qualify to you as someone "AI-crutched", but I mostly use it for research and bouncing ideas (or code completion, which I've mentioned before - this is a great use of the tool and I wouldn't consider it a crutch, personally).

For instance, "parse this massive log output, and highlight anything interesting you see or any areas that may be a problem, and give me your theories."

Lots of times it's wrong. Sometimes it's right. Sometimes its response gives me an idea that leads to another direction. It's essentially how I was using Google + Stack Overflow ten years ago - see your list of answers, use your intuition, knowledge, and expertise to find the one most applicable to you, continue.

This "crutch" is essentially the same one I've always used, just in different form. I find it pretty good at doing code review for myself before I submit something more formal, to catch any embarrassing or glaringly obvious bugs or incorrect test cases. I would be wary of the dev that refused to use tools out of some principled stand like this, just as I'd be wary of a dev that overly relied on them. There is a balance.

Now, if all you know are these tools and the workflow you described, yea, that's probably detrimental to growth.

vunderba · 9 months ago
I've been calling this out since the rise of ChatGPT:

"The real danger lies in their seductive nature - over how tempting it becomes to immediately reach for the LLM to provide an answer, rather than taking a few moments to quietly ponder the problem on your own. By reaching for it to solve any problem at nearly an instinctual level you are completely failing to cultivate an intrinsically valuable skill - that of critical reasoning."

prawn · 9 months ago
I've had multiple situations where AI helped me get to a solution that it was unable to reach itself, and that I wouldn't have realised otherwise. In one case, looking for a plot, it delivered many woeful options, but one sparked an alternative thought that got me on track. In other cases, trying to debug code, by having it talk through the logic/flow and exhaust other fixes, I managed to solve the problem despite not being experienced at all with that language.

The dangers I've found personally are more around how it eases busywork, so I'm more inclined to be distracted doing that as though it delivers actual progress.

nonethewiser · 9 months ago
Somewhat agree.

I agree in principle - the process of problem solving is the important part.

However I think LLMs make you do more of this because of what you can offload to the LLM. You can offload the simpler things. But for the complex questions that cut across multiple domains and have a lot of ambiguity? You're still going to have to sit down and think about it. Maybe once you've broken it into sufficiently smaller problems you can use the LLM.

If we're worried about abstract problem-solving skills, those don't really go away with better tools. They go away when we aren't the ones using the tools.

Cthulhu_ · 9 months ago
> But that struggling was ultimately necessary to really learn the concepts.

This is what isn't explained to students or understood properly (...I think); on the surface you go to college/uni to learn a subject, but in reality, you "learn to learn". The output that you're asked to submit is just to prove that you can and have learned.

But you don't learn to learn by using AI tools. You may learn how to craft stuff that passes muster, gets you a decent grade and eventually a piece of paper, but you haven't learned to learn.

Of course, that isn't anything new, loads of people try and game the system, or just "do the work, get the paper". A box ticking exercise instead of something they actually want to learn.

whatever1 · 9 months ago
The counterargument is that now you can skip boilerplate code and focus on the overall design and the few points where brainpower is really needed.

The number of visualizations I have made since ChatGPT was released has increased exponentially. I loathe looking through the documentation again and again to make a slightly non-standard graph. Now all of that friction is gone! Graphs and visuals are everywhere in my code!

hansvm · 9 months ago
> focus on [...] the few points that brainpower is really needed

The person you're responding to is talking about it from an educational perspective though. If your fundamentals aren't solid, you won't know that exponentially smoothed reservoir sampling backed by a splay tree is optimal for your problem, and ChatGPT has no clue either. Trying things, struggling, and failing is crucial to efficient learning.

Not to mention, you need enough brain power or expertise to know when it's bullshitting you. Just today it was telling me that a packed array was better than my proposed solution, confidently explaining why, and not once saying anything correct. No prompt changes could fix it (whether restarting or replying), and anyone who tried to use less brainpower there would be up a creek when their solution sucked.

Mind you, I use LLMs a lot, including for code-adjacent tasks and occasionally for code itself. It's a neat tool. It has its place though, and it must be used correctly.

moltar · 9 months ago
I think it’s finally time to just stop the homework.

All school work must be done within the walls of the school.

What are we teaching our children? It’s ok to do more work at home?

There are countries that have no homework and they do just fine.

jplusequalt · 9 months ago
Homework helps reinforce the material learned in class. It's already a problem where there is too much material to be fit into a single class period. Trying to cram in enough time for homework will only make that problem worse.
oerdier · 9 months ago
There are such legal, cultural and economic differences between countries that no homework might work in one country but not at all in another.
stv_123 · 9 months ago
Yeah, the concept of "productive struggle" is important to the education process and having a way to short circuit it seems like it leads to worse learning outcomes.
umpalumpaaa · 9 months ago
I am not sure all humans work the same way though. Some get very very nervous when they begin to struggle. So nervous that they just stop functioning.

I felt that during my time in university. I absolutely loved reading and working through dense math textbooks, but the moment there was a time constraint, the struggle turned into chaos.

taftster · 9 months ago
I don't think asking "what's wrong with my code" hurts the learning process. In fact, I would argue it helps it. I don't think you learn when you have reached your frustration point and you just want the dang assignment completed. But before reaching that point, if you had a tutor or assistant you could ask, "hey, I'm just not seeing my mistake, do you have ideas?", that goes a long way toward fostering learning. ChatGPT, used in this way, can be extremely valuable and can definitely unlock learning in ways we probably haven't even seen yet.

That being said, I agree with you, if you just ask ChatGPT to write a b-tree implementation from scratch, then you have not learned anything. So like all things in academia, AI can be used to foster education or cheat around it. There's been examples of these "cheats" far before ChatGPT or Google existed.

SoftTalker · 9 months ago
No I think the struggle is essential. If you can just ask a tutor (real or electronic) what is wrong with your code, you stop thinking and become dependent on that. Learning to think your way through a roadblock that seems like a showstopper is huge.

It's sort of the mental analog of weight training. The only way to get better at weightlifting is to actually lift weights.

ryandrake · 9 months ago
I think teachers also need to reconsider how they are measuring mastery in the subject. LLMs exist. There is no putting the cat back into the bag. If your 1980s way to measure a student's mastery of a subject can be fooled by an LLM, then how effective is that measurement in 2020+? Maybe we need to stop using essays as a way to tell if the student has learned the material.

Don't ask me what the solution is. Maybe your product does it. If I knew, I'd be making a fortune selling it to universities.

teekert · 9 months ago
Students do something akin to vibe coding, I guess. It may seem impressive at first glance, but if anything breaks you are so, so lost. Maybe that's it: break the student's code and see how they fix it. The vibe-coding student is easily separated from the real one (of course the real coder can also use AI, just not by YOLOing it).

I guess you can apply similar mechanics to reports. Some deeper questions and you will know if the report was self written or if an AI did it.

0xffff2 · 9 months ago
>For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".

Does that actually work? I'm long past having easy access to college programming assignments, but based on my limited interaction with ChatGPT I would be absolutely shocked if it produced output that was even coherent, much less working code given such an approach.

izacus · 9 months ago
It doesn't matter how coherent the output is - the students will paste it anyway, then fail the assignment (and you need to deal with grading it), and then complain to parents and the school board that you're incompetent because you're failing the majority of the class.

Your post is based on the misguided idea that students actually care about the basic quality of their work.

rufus_foreman · 9 months ago
>> Does that actually work?

Sure. Works in my IDE. "Create a linked list implementation, use that implementation in a method to reverse a linked list and write example code to demonstrate usage".

Working code in a few seconds.
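
Purely as an illustration of how little is left for the student to do, here is a sketch along the lines of what that prompt returns (the structure and names like push/reverse are my own, not actual generated output):

    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        int value;
        struct node *next;
    };

    /* Prepend a value and return the new head. */
    struct node *push(struct node *head, int value)
    {
        struct node *n = malloc(sizeof *n);
        n->value = value;
        n->next = head;
        return n;
    }

    /* Reverse the list in place and return the new head. */
    struct node *reverse(struct node *head)
    {
        struct node *prev = NULL;
        while (head) {
            struct node *next = head->next;
            head->next = prev;
            prev = head;
            head = next;
        }
        return prev;
    }

    int main(void)
    {
        struct node *list = NULL;
        for (int i = 1; i <= 5; i++)
            list = push(list, i);             /* builds 5 4 3 2 1 */
        list = reverse(list);                 /* now 1 2 3 4 5 */
        for (struct node *p = list; p; p = p->next)
            printf("%d ", p->value);
        printf("\n");
        return 0;
    }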

I'm very glad I didn't have access to anything like that when I was doing my CS degree.

StefanBatory · 9 months ago
I have some subjects, at the Master's level, that are solvable by one prompt. One.

Quality of CS/Software Engineering programs vary that much.

bongodongobob · 9 months ago
Why are you asking? Go try it. And yes, depending on the task, it does.
currymj · 9 months ago
Since late 2024/early 2025 this has been the case, especially with a reasoning model like Sonnet 3.7, DeepSeek-R1, o3, or Gemini 2.5, and especially if you upload the textbook, slides, etc. alongside the homework to be cheated on.

Most normal-difficulty undergraduate assignments are now reliably doable by AI with little to no human oversight. This includes both programming and mathematical problem sets.

For harder problem sets that require some insight, or very unstructured larger-scale programming projects, it wouldn't work so reliably.

But easier homework assignments serve a valid purpose in checking understanding, and now they are no longer viable.

andai · 9 months ago
I spent much of the past year at public libraries, and I heard the word ChatGPT approximately once per minute, in surround sound. Always from young people, and usually in a hushed tone...
victorbjorklund · 9 months ago
In one way I'm glad I learned to code before LLMs. It would be so hard to push through the learning now when you are just a click away from building the app with AI...
dyauspitr · 9 months ago
I’m pretty sure you can assume close to 100% of students are using LLMs to do their homework.
ryandrake · 9 months ago
And if you're that one person out of 100,000 who is not using LLMs to do their homework, you are at a significant disadvantage on the grading curve.
ugh123 · 9 months ago
>I built a popular product that helps teachers with this problem.

Does your product help teachers detect cheating? Because I hear none of them are accurate, with many false positives and ruined academic careers.

Are you saying yours is better?

bboygravity · 9 months ago
I don't get this reasoning. Without LLMs I would learn how to write sub-optimal code that is somewhat functional. With LLMs I instantly see "how it's done" for my exact problem case, which makes me learn way faster. On top of that, it always makes dumb mistakes, which forces you to actually understand what it's spitting out to get it to work properly. Again: that helps with learning.

The fact that you can ask it for a solution for exactly the context you're interested in is amazing and traditional learning doesn't come close in terms of efficiency IMO.

dingnuts · 9 months ago
> With LLMs instantly see "how it's done" for my exact problem case which makes me learn way faster.

No, you see a plausible set of tokens that appear similar to how it's done, and as a beginner, you're not able to tell the difference between a good example and something that is subtly wrong.

So you learn something, but it's wrong. You internalize it. Later, it comes back to bite you. But OpenAI keeps the money for the tokens. You pay whether the LLM is right or not. Sam likes that.

layer8 · 9 months ago
It’s more like looking up the solution to the math problem you’re supposed to solve on your own. It can be helpful in some situations, but in general you don’t learn the problem-solving skills if you don’t do it yourself.
hackable_sand · 9 months ago
I would recommend programming, and designing your system, on a piece of paper instead.

It's the most efficient few-shot which beats the odds on any SotA model.

yamazakiwi · 9 months ago
I'm more interested in memory and knowledge retention in general and how AI can assist. How many times have you heard from people that they are doing rote memorization and will "data dump" test information once a course is over? These tools are less to blame than the motivators and systems that are supposed to be engaging students in real learning and the benefits of a struggle.

Another problem is there is so much in technology, I just can't remember everything after years of exposure to so many spaces. Not being able to recall information you used to know is frustrating and having AI to remind you of details is very useful. I see it as an amplifying tool, not a replacement for knowledge. I'm sure there are some prolific note taking memory tricksters out there but I'm not one of them.

I frequently forget information over time and it's nice to have a tool to remind me of how UDP, RTP, and SIP routing work when I haven't been in the comm or network space for a while.

nextos · 9 months ago
My CS undergrad school used to let students look up documentation during coding exams. Most courses had a 3-5 hour coding challenge where you had to make substantial changes to a course project you had developed. I think this could also be the right response to LLMs. Let students use whatever they want to use, and test true skills and understanding.

FWIW, exams testing rote learning without the ability to look up things would have been much easier. It was really stressful to sit down and make major changes to your project to satisfy new unit tests, which often targeted edge cases and big O complexity to crash your code.

sally_glance · 9 months ago
I think this is a structural issue. Universities right now are trying to justify their existence - universities of the past used to be sites of innovation.

Using ChatGPT doesn't dumb down your students. Not knowing how it works and where to use it does. Don't do silly textbook challenges for exams anymore - reestablish a culture of scientific innovation!

acbart · 9 months ago
Incorrect. Fundamentals must be taught in order to provide the context for the more challenging open-ended activities. Memorization is the base of knowledge, a starting point. Cheating (whether through an LLM or hiring someone or whatever) skips the journey. You can't just take them through the exciting routes, sometimes they have to go through the boring tedious repetitive stuff because that's how human brains learn. Learning is, literally, a stressful process on the brain. Students try to avoid it, but that's not good for them. At least in the introductory core classes.
never_inline · 8 months ago
> Using ChatGPT doesn't dumb down your students. Not knowing how it works and where to use it does.

LLMs can't produce intellectual rigour. They get fine details wrong every time. So indeed, using ChatGPT to do your reasoning for you produces inferior results. By normalising non-rigorous yet correct-sounding answers, we drive down expectations.

To take a concrete example: if you tell a student to implement memcpy with ChatGPT, it will just give an answer that uses uint64 copying. The student has not thought from first principles (copy byte by byte? improve performance? how to handle alignment?). This lack of insight, in return for immediate gratification, will bite later.
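
For illustration, a rough sketch of the two versions being contrasted. The function names are hypothetical, and the word-copying variant assumes 8-byte-aligned pointers (and glosses over strict-aliasing rules), which is exactly the kind of detail that never gets confronted when the answer is handed over:

    #include <stddef.h>
    #include <stdint.h>

    /* First-principles version: copy one byte at a time.
       Always correct regardless of alignment, but slow. */
    void *memcpy_bytes(void *dst, const void *src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;
        while (n--)
            *d++ = *s++;
        return dst;
    }

    /* The "uint64 copying" version: move 8 bytes per iteration, then
       finish the tail byte by byte. Assumes dst and src are both
       8-byte aligned; handling misaligned heads/tails is the part
       students skip when this is pasted in wholesale. */
    void *memcpy_words(void *dst, const void *src, size_t n)
    {
        uint64_t *d = dst;
        const uint64_t *s = src;
        while (n >= sizeof(uint64_t)) {
            *d++ = *s++;
            n -= sizeof(uint64_t);
        }
        unsigned char *db = (unsigned char *)d;
        const unsigned char *sb = (const unsigned char *)s;
        while (n--)
            *db++ = *sb++;
        return dst;
    }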

It's maybe not a problem for non-STEM fields, where this kind of rigor and insight is not required to excel. But in STEM fields, we write programs and prove theorems for insight. And that insight, and the process of obtaining it, is gone with AI.

nixpulvis · 9 months ago
You claim using AI tools doesn't dumb you down, but it very well can, and does. Take the calculator for example: I'm overly dependent on it. I'm slower to perform arithmetic than I would have been without it. But knowing how to use one allows me to do more complex math more quickly. So I'm "dumber" in one way and "smarter" in others. AI could be the same... except our education system doesn't seem ready for it. We still learn arithmetic, even if we later rely on tools to do it. Right now teachers don't know how to teach so that AI doesn't trivialize things.

You need to know how to do things so you know when the AI is lying to you.

tomxor · 9 months ago
> I think the issue is that it's so tempting to lean on AI.

This is not the root cause, it's a side effect.

Students cheat because of anxiety. Anxiety is driven by grades, because grades determine failure. Detecting cheating is solving the wrong problem. If most grades did not directly affect failure, students wouldn't be pressured to cheat. Evaluation and grades have two purposes:

1. Determine grade of qualification i.e result of education (sometimes called "summative")

2. Identify weaknesses to aid in and optimise learning (sometimes called "formative")

The problem arises when these two are conflated, either by combining them and littering them throughout a course, or when there is an imbalance in the ratio between them, i.e. too much of #1. Then the pressure to cheat arises, the measure becomes the target, and focus on learning is compromised. This is not a new problem; students already waste time trying to undermine grades through suboptimal learning activities like "cramming".

The funny thing is that everyone already knows how to solve cheating: controlled examination, which is practical to implement for #1, so long as you don't have a disruptive number of exams filling that purpose. This is even done in sci-fi: Spock takes a "memory test" in 2286 on Vulcan as a kind of "final exam" in a controlled environment with challenges from computers - it's still a combination of proxy knowledge-based questions and puzzles, but that doesn't matter, because it's a controlled environment.

What's needed is a separation and balance between summative and formative grading; then preventing cheating is almost easy, and students can focus on learning... cheating at tests throughout the course would actually have a negative effect on their final grade, because they would be undermining their own learning by breaking their own REPL.

LLMs have only increased the pressure, and this may end up being a positive thing for education.

casey2 · 9 months ago
>I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts.

This is entirely your opinion. We don't know how the brain learns, nor do we know if intelligence can be "taught".

Maskawanian · 9 months ago
Agreed, the only thing that is certain is that they are cheating themselves.

While it can be useful to use LLMs as a tutor if you're stuck, the moment you use one to provide a solution, you stop learning and the tool becomes a required stepping stone.

borg16 · 9 months ago
here is an idea, curious what others think of this:

split the entire coursework into two parts:

part 1 - students are prohibited from using AI. Have the exams be on physical paper rather than digital ones requiring a laptop/computer. I know this adds to the burden of correcting and evaluating answers, but I think it provides a raw measure of someone's understanding of the concepts taught in the class.

part 2 - students are allowed, and even encouraged, to use LLMs. They are evaluated on the overall quality of the answer, keeping in mind that a non-zero portion of it was generated by an LLM. Here, credit should go to the factual correctness of the answer (and to whether the student is capable of verifying the LLM output).

Have the final grade be some form of weighted average of a student's scores in these 2 parts.

note: This is a raw thought that just occurred to me while reading this thread, and I have not had the chance to ruminate on it.
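
To make the grading concrete, a minimal sketch of the weighted-average idea (the 60/40 split is purely an assumed example, not part of the proposal):

    /* Hypothetical weighting of the two parts; w1 + w2 = 1. */
    double final_grade(double part1_score, double part2_score) {
        const double w1 = 0.6;  /* closed-book, no-AI part (assumed weight) */
        const double w2 = 0.4;  /* AI-assisted part (assumed weight) */
        return w1 * part1_score + w2 * part2_score;
    }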

toxicdevil · 9 months ago
I once had an algorithms professor who would give us written home assignments and then, on the day of submission, give a quiz with identical questions. A significant portion of the class did poorly on these quizzes despite scoring well on the assignment.

I can't even imagine how learning is impacted by the (ab)use of AI.

musicale · 9 months ago
> “how much are students using AI to cheat?” That’s hard to answer

"It is difficult to get a man to understand something, when his salary depends on his not understanding it!"

chalst · 9 months ago
Students who do that risk submitting assignments that show they don’t understand the course so far.
chipsrafferty · 8 months ago
This is frequently stated, but is there any evidence that the "epiphany" is actually required for learning?
wordofx · 9 months ago
It’s not a widespread “problem”. It’s just education lagging behind technology.
defgeneric · 9 months ago
After reading the whole article I still came away with the suspicion that this is a PR piece designed to head off strict controls on LLM usage in education. There is a fundamental problem here beyond cheating (which is mentioned, to their credit, albeit little discussed). Some academic topics are only learned through sustained, even painful, sessions where attention has to be fully devoted, where the feeling of being "stuck" has to be endured, and where the brain is given space and time to do the real work of synthesizing, abstracting, and learning, or, in short, thinking. The prompt-chains where students are asking "show your work" and "explain" can be interpreted as the kind of back-and-forth that you'd hear between a student and a teacher, but they could also just be evidence of higher forms of "cheating". If students are not really working through the exercises at the end of each chapter, but instead offloading the task to an LLM, then we're going to have a serious competency issue, where nobody ever actually learns anything.

Even in self-study, where the solutions are at the back of the text, we've probably all had the temptation to give up and just flip to the answer. It would be more responsible of Anthropic to admit that the solution manual to every text ever made is now instantly and freely available. This has to fundamentally change pedagogy. No discipline is safe, not even those like music where you might think the end performance is the main thing (imagine a promising, even great, performer who cheats themselves in the education process by offloading any difficult work in their music theory class to an AI, coming away having learned essentially nothing).

P.S. There is also the issue of grading on a curve in the current "interim" period where this is all new. Assume a lazy professor, or one refusing to adopt any new kind of teaching/grading method: the "honest" students have no incentive to do it the hard way when half the class is going to cheat.

SamBam · 9 months ago
I feel like Anthropic has an incentive to minimize how much students use LLMs to write their papers for them.

In the article, I guess this would be buried in

> Students also frequently used Claude to provide technical explanations or solutions for academic assignments (33.5%)—working with AI to debug and fix errors in coding assignments, implement programming algorithms and data structures, and explain or solve mathematical problems.

"Write my essay" would be considered a "solution for academic assignment," but by only referring to it obliquely in that paragraph they don't really tell us the prevalence of it.

(I also wonder if students are smart, and may keep outright usage of LLMs to complete assignments on a separate, non-university account, not trusting that Anthropic will keep their conversations private from the university if asked.)

vunderba · 9 months ago
Exactly. There's a big difference between a student having a back-and-forth dialogue with Claude about "the extent to which feudalism was one of the causes of the French Revolution" and another student using their smartphone to take a snapshot of the actual homework assignment, pasting it into Claude, and calling it a day.
PeterStuer · 9 months ago
From what I could observe, the latter is endemic amongst high school students. And don't kid yourself. For many it is just a step up from copy/pasting the first Google result.

They never could be arsed to learn how to input their assignments into Wolfram Alpha. It was always the ux/ui effort that held them back.

radioactivist · 9 months ago
Most of their categories have straightforward interpretations in terms of students using the tool to cheat. They don't seem to want to/care to analyze that further and determine which are really cheating and which are more productive uses.

I think that's a bit telling on their motivations (esp. given their recent large institutional deals with universities).

SamBam · 9 months ago
Indeed. I called out the second-top category, but you could look at the top category as well:

> We found that students primarily use Claude to create and improve educational content across disciplines (39.3% of conversations). This often entailed designing practice questions, editing essays, or summarizing academic material.

Sure, throwing a paragraph of an essay at Claude and asking it to turn it into a 3-page essay could have been categorized as "editing" the essay.

And it seems pretty naked the way they lump "editing an essay" in with "designing practice questions," which are clearly very different uses, even in the most generous interpretation.

I'm not saying that the vast majority of students do use AI to cheat, but I do want to say that, if they did, you could probably write this exact same article and tell no lies, and simply sweep all the cheating under titles like "create and improve educational content."

ignoramous · 9 months ago
> feel like Anthropic has an incentive to minimize how much students use LLMs to write their papers for them

You're right.

Quite incredibly, they also do the opposite, in that they hype up / inflate the capability of their LLMs. For instance, they've categorised "summarisation" as "high-order thinking" ("Create", per Bloom's Taxonomy). It patently isn't. It's comical that they'd not only think so, but also publicly blog about it.

xpe · 9 months ago
> Bloom's taxonomy is a framework for categorizing educational goals, developed by a committee of educators chaired by Benjamin Bloom in 1956. ... In 2001, this taxonomy was revised, renaming and reordering the levels as Remember, Understand, Apply, Analyze, Evaluate, and Create. This domain focuses on intellectual skills and the development of critical thinking and problem-solving abilities. - Wikipedia

This context is important: this taxonomy did not emerge from artificial intelligence nor cognitive science. So its levels are unlikely to map to how ML/AI people assess the difficulty of various categories of tasks.

Generative models are, by design, fast (and often pretty good) at generation (creation), but this isn't the same standard that Bloom had in mind with his "creation" category. Bloom's taxonomy might be better described as a hierarchy, in which proper creation draws upon all the layers below it: remembering, understanding, application, analysis, and evaluation.

walleeee · 9 months ago
> Students primarily use AI systems for creating (using information to learn something new)

this is a smooth way to not say "cheat" in the first paragraph and to reframe creativity in a way that reflects positively on llm use. in fairness they then say

> This raises questions about ensuring students don’t offload critical cognitive tasks to AI systems.

and later they report

> nearly half (~47%) of student-AI conversations were Direct—that is, seeking answers or content with minimal engagement. Whereas many of these serve legitimate learning purposes (like asking conceptual questions or generating study guides), we did find concerning Direct conversation examples including:

- Provide answers to machine learning multiple-choice questions
- Provide direct answers to English language test questions
- Rewrite marketing and business texts to avoid plagiarism detection

kudos for addressing this head on. the problem here, and the reason these are not likely to be democratizing but rather wedge technologies, is not that they make grading harder or violate principles of higher education but that they can disable people who might otherwise learn something

walleeee · 9 months ago
I should say, disable you: the tone did not reflect that it can happen to anyone, and that it can be a wedge not only between people but also (and only by virtue of being) between personal trajectories, conditional on the way one uses it
zebomon · 9 months ago
The writing is irrelevant. Who cares if students don't learn how to do it? Or if the magazines are all mostly generated a decade from now? All of that labor spent on writing wasn't really making economic sense.

The problem with that take is this: it was never about the act of writing. What we lose, if we cut humans out of the equation, is writing as a proxy for what actually matters, which is thinking.

You'll soon notice the downsides of not-thinking (at scale!) if you have a generation of students who weren't taught to exercise their thinking by writing.

I hope that more people come around to this way of seeing things. It seems like a problem that will be much easier to mitigate than to fix after the fact.

A little self-promo: I'm building a tool to help students and writers create proof that they have written something the good ol fashioned way. Check it out at https://itypedmypaper.com and let me know what you think!

janalsncm · 9 months ago
How does your product prevent a person from simply retyping something that ChatGPT wrote?

I think the prevalence of these AI writing bots means schools will have to start doing things that aren’t scalable: in-class discussions, in-person writing (with pen and paper or locked down computers), way less weight given to remote assignments on Canvas or other software. Attributing authorship from text alone (or keystroke patterns) is not possible.

zebomon · 9 months ago
It may be possible that with enough data from the two categories (copied from ChatGPT and not), your keystroke dynamics will differ. This is an open question that my co-founder and I are running experiments on currently.

So, I would say that while I wouldn't fully dispute your claim that attributing authorship from text alone is impossible, it isn't yet totally clear one way or the other (to us, at least -- would welcome any outside research).
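
To make "keystroke dynamics" concrete, here is a minimal sketch of one naive feature you could start from (illustrative only, not the itypedmypaper.com implementation): the mean and spread of inter-key intervals, which might plausibly differ between composing text and retyping a ChatGPT answer.

    #include <stddef.h>

    typedef struct {
        double mean;      /* average pause between keystrokes, in ms */
        double variance;  /* burstiness; composing tends to be less uniform */
    } KeyStats;

    /* Compute mean/variance of inter-key intervals from keystroke timestamps. */
    KeyStats interkey_stats(const double *timestamps_ms, size_t n) {
        KeyStats s = {0.0, 0.0};
        if (n < 2) return s;
        double sum = 0.0, sum_sq = 0.0;
        for (size_t i = 1; i < n; i++) {
            double gap = timestamps_ms[i] - timestamps_ms[i - 1];
            sum += gap;
            sum_sq += gap * gap;
        }
        double m = (double)(n - 1);
        s.mean = sum / m;
        s.variance = sum_sq / m - s.mean * s.mean;
        return s;
    }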

Long-term -- and that's long-term in AI years ;) -- gaze tracking and other biometric tracking will undoubtedly be necessary. At some point in the near future, many people will be wearing agents inside earbuds that are not obvious to the people around them. That will add another layer of complexity that we're aware of. Fundamentally, it's more about creating evidence than creating proof.

We want to give writers and students the means to create something more detailed than they would get from a chatbot out-of-the-box, so that mimicking the whole act of writing becomes more complicated.

logicchains · 9 months ago
>I think the prevalence of these AI writing bots means schools will have to start doing things that aren’t scalable

It won't be long 'til we're at the point where embodied AI can be used for scalable face-to-face assessment that can't be cheated any more easily than with a human assessor.

ketzu · 9 months ago
> The writing is irrelevant.

In my opinion this is not true. Writing is a form of communicating ideas. Structuring and communicating ideas with others is really important, not just in written contexts, and it needs to be trained.

Maybe the way universities do it is not great, but writing in itself is important.

zebomon · 9 months ago
Kindly read past the first line, friend :)
knowaveragejoe · 9 months ago
Paul Graham had a recent blogpost about this, and I find it hard to disagree with.

https://www.paulgraham.com/writes.html

aprilthird2021 · 9 months ago
What we lose if we cut humans out of the equation is the soul and heart of reflection, creativity, drama, comedy, etc.

All those have, at the base of them, the experience of being human, something an LLM does not and will never have.

zebomon · 8 months ago
I agree!
jillesvangurp · 9 months ago
Students will work in a world where they have to use AI to do their jobs. This is not going to be optional. Learning to use AIs effectively is an important skill and should be part of their education.

And it's an opportunity for educators to raise the ambition level quite a bit. It indeed obsoletes some of the tests they've been using to evaluate students. But they too now have the AI tools to do a better job and come up with more effective tests.

Think of all the time freed up from having to actually read all those submitted papers. I can tell you from experience (I taught a few classes as a post doc way back): not fun. At minimum, you can instantly fail the ones that are obviously poorly written, are full of grammatical errors, and feature lots of flawed reasoning. Most decent LLMs do a good job of that. Is using an LLM for that cheating if a teacher does it? I think that should just be expected at this point. And if it is OK for the teacher, it should be OK for the student.

If you expect LLMs to be used, it raises the bar for the acceptable quality of submitted papers. They should be readable, well structured, well researched, etc. There really is no excuse for papers not being like that. The student needs to be able to tell the difference, and asking for the right things actually takes skill. And you can grill students on knowledge of their own work in a little 10-minute conversation, which should be about the amount of time a teacher would otherwise have spent evaluating the paper manually, and is definitely more fun (I used to do that: give people an opportunity to defend their work).

And if you really want to test writing skills, put students in a room with pen and paper. That's how we did things in the eighties and nineties. Most people did not have PCs and printers then. Poor teachers had to actually sit down and try to decipher my handwriting, which, even before that skill had atrophied over a few decades, wasn't great.

LLMs will force change in education one way or another. Most of that change will be good. People trying to cheat is a constant. We just need to force them to be smarter about it. Which at a meta level isn't that bad of a skill to learn when you are educating people.

spongebobstoes · 9 months ago
Writing is not necessary for thinking. You can learn to think without writing. I've never had a brilliant thought while writing.

In fact, I've done a lot more thinking and had a lot more insights from talking than from writing.

Writing can be a useful tool to help with rigorous thinking. In my opinion, it's mostly about augmenting the author's effective memory to make it larger and more precise.

I'm sure the same effect could be achieved by having AI transcribe a conversation.

Unearned5161 · 9 months ago
I'm not settled on transcribed conversation being an adequate substitute for writing, but maybe it's better than nothing.

There's something irreplaceable about the absoluteness of words on paper and the decisions one has to make to write them out. Conversational speech is, almost by definition, more relaxed and casual. The bar is lower, and as such, the bar for thoughts is lower; in order of ease of handwaving I think it goes: mental, speech, writing.

Furthermore, there's the concept of editing, which I'm unsure how to carry out gracefully in conversation. Revising words, deleting, and moving things around can't be done in conversation unless you count "forget I said that, it's actually more like this..." as suitable.

karn97 · 9 months ago
I literally never write while thinking lol stop projecting this hard
moojacob · 9 months ago
How can I, as a student, avoid hindering my learning with language models?

I use Claude, a lot. I’ll upload the slides and ask questions. I’ve talked to Claude for hours trying to break down a problem. I think I’m learning more. But what I think might not be what’s happening.

In one of my machine learning classes, cheating is a huge issue. People are using LMs to answer multiple-choice questions on quizzes that are on the computer. The professors somehow found out students would close their laptops without submitting, go out into the hallway, and use an LM on their phone to answer the questions. I've been doing worse in the class and chalked it up to it being grad level, but now I think it's the cheating.

I would never cheat like that, but when I'm stuck and use Claude for a hint on the HW, am I losing neurons? The other day I used Claude to check my work on a graded HW question (breaking down a binary packet) and it caught an error. I had done it on my own first and developed some intuition, but would I have learned more if I'd submitted that and felt the pain of losing points?

dwaltrip · 9 months ago
Only use LLMs for half of your work, at most. This will ensure you continue to solidify your fundamentals. It will also provide an ongoing reality check.

I’d also have sessions / days where I don’t use AI at all.

Use it or lose it. Your brain, your ability to persevere through hard problems, and so on.

rglynn · 8 months ago
I definitely catch myself reaching for the LLM because thinking is too much effort. It's quite a scary moment for someone who prides themself on their ability to think.
knowaveragejoe · 9 months ago
It's a hard question to answer and one I've been mindful of in using LLMs as tutoring aids for my own learning purposes. Like everything else around LLM usage, it probably comes down to careful prompting... I really don't want the answer right away. I want to propose my own thoughts and carefully break them down with the LLM. Claude is pretty good at this.

"productive struggle" is essential, I think, and it's hard to tease that out of models that are designed to be as immediately helpful as possible.

noisy_boy · 9 months ago
I don't think the pain of losing points is a good learning incentive; powerful, sure, but not effective.

You would learn more if you told Claude not to give outright answers, but instead to generate more problems in your weak areas for you to solve. The reduction in errors as you go along will be the positive reinforcement that works long term.

neves · 9 months ago
I don't know. I remember my failures much more than my successes. There are errors I made on important tests for which I'll remember the correct answer for life.
bionhoward · 9 months ago
IMHO yes you’re “losing neurons” and the obvious answer is to stop using Claude. The work you do with them benefits them more than it benefits you. You’re paying them to have conversations with a chatbot which has stricter copyright than you do. That means you’re agreeing to pay to train their bot to replace you in the job market. Does that sound like a good idea in the long term? Anthropic is an actual brain rape system, just like OpenAI, Grok, and all the rest, they cannot be trusted
azemetre · 9 months ago
Can you do all this without relying on any LLM usage? If so then you’re fine.
quantumHazer · 9 months ago
As a student, I use LLMs as little as possible and try to rely on books whenever possible. I sometimes ask LLMs questions about things that don't click, and I fact-check their responses. For coding, I'm doing the same. I'm just raw dogging the code like a caveman because I have no corporate deadlines, and I can code whatever I want. Sometimes I get stuck on something and ask an LLM for help, always using the web interface rather than IDEs like Cursor or Windsurf. Occasionally, I let the LLMs write some boilerplate for boring things, but it's really rare and I tend not to use them too much. This isn't due to Luddism but because I want to learn, and I don't want slop in my way.
lunarboy · 9 months ago
This sounds fine? Copy-pasting LLM output you don't understand is a short-term dopamine hit that only hurts you long term. If you struggle first, or strategically ping-pong with the LLM to arrive at the answer, and can ultimately understand the underlying reasoning... why not use it?

Of course the problem is the much lower barrier for that to turn into cutting corners or full on cheating, but always remember it ultimately hurts you the most long term.

namaria · 9 months ago
> can ultimately understand the underlying reasoning

This is at the root of the Dunning-Kruger effect. When you read an explanation you feel like you understand it. But it's an illusion, because you never developed the underlying cognition; you just saw the end result.

Learning is not about arriving at the result or knowing the answers. Those are by-products of the process of learning. If you just shortcut to the end by-products, you get the appearance of learning. And you might be able to play the system and come out with a diploma. But you didn't actually develop cognitive skills at all.

istjohn · 9 months ago
I believe conversation is a one of the best ways to really learn a topic, so long as it is used deliberately.

My folk theory of education is that there is a sequence you need to complete to truly master a topic.

Step 1: You start with receptive learning, where you take in information provided to you by a teacher, book, AI, or other resource. This doesn't have to be totally passive. For example, it could take the form of Socratic questioning to guide you towards an understanding.

Step 2: Then you digest the material. You connect it to what you already know. You play with the ideas. This can happen in an internal monologue as you read a textbook, in a question and answer period after a lecture, in a study group conversation, when you review your notes, or as you complete homework questions.

Step 3: Finally, you practice applying the knowledge. At this stage, you are testing the understanding and intuition you developed during digestion. This is where homework assignments, quizzes, and tests are key.

This cycle can occur over a full semester, but it can also occur as you read a single textbook paragraph. First, you read (step 1). Then you stop and think about what this means and how it connects to what you previously read. You make up an imaginary situation and think about what it implies (step 2). Then you work out a practice problem (step 3).

Note that it is iterative. If you discover in step 3 a misunderstanding, you may repeat the loop with an emphasis on your confusion.

I think AI can be extremely helpful in all three stages of learning--in particular, for steps 2 and 3. It's invaluable to have quick feedback at step 3 to understand if you are on the right trail. It doesn't make sense to wait for feedback until a teacher's aide gets around to grading your HW if you can get feedback right now with AI.

The danger is if you don't give yourself a chance to struggle through step 3 before getting feedback. The amount of struggle that is appropriate will vary and is a subtle question.

Philosophers, mathematicians, and physicists in training obviously need to learn to be comfortable finding their way through hairy problems without any external source of truth to guide them. But this is a useful muscle that arguably everyone should exercise to some extent. On the other hand, the majority of learning for the majority of students is arguably more about mastering a body of knowledge than developing sheer brain power.

Ultimately, you have to take charge of your own learning. AI is a wonderful learning tool if used thoughtfully and with discipline.

stv_123 · 9 months ago
Interesting article, but I think it downplays the incidence of students using Claude as an alternative to building foundational skills. I could easily see conversations that they outline as "Collaborative" primarily being a user walking Claude through multi-part problems or asking it to produce justifications for answers that students add to assignments.
tmpz22 · 9 months ago
Direct quote I heard from an undergrad taking statistics:

"Snapchat AI couldn't get it right so I skipped the assignment"

moffkalast · 9 months ago
Well if statistics can't understand itself, then what hope do the rest of us have?
dvngnt_ · 9 months ago
back in my day we used snap to send spicy photos; now they're using AI to cheat on homework. im not sure what's worse
mppm · 9 months ago
> Interesting article, but I think it downplays the incidence of students using Claude as an alternative to building foundational skills.

No shit. This is anecdotal evidence, but I was recently teaching a university CS class as a guest lecturer (at a somewhat below-average university), and almost all the students were basically copy-pasting task descriptions and error messages into ChatGPT in lieu of actually programming. No one seemed to even read the output, let alone be able to explain it. "Foundational skills" were near zero, as a result.

Anyway, I strongly suspect that this report is based on careful whitewashing and would reveal 75% cheating if examined more closely. But maybe there is a bit of sampling bias at play as well -- maybe the laziest students just never bother with anything but ChatGPT and Google Colab, while students using Claude have a little more motivation to learn something.

colonial · 9 months ago
CS/CE undergrad here who entered university right when ChatGPT hit. Things are bad at my large state school.

People who spent the past two years offloading their entry-level work onto LLMs are now taking 400-level systems programming courses and running face-first into a capability wall. I try my best to help, but there's only so much I can do when basic concepts like structs and pointer manipulation get blank stares.

> "Oh, the foo field in that struct should be signed instead of unsigned."

< "Struct?"

> "Yeah, the type definition of Bar? It's right there."

< "Man, I had ChatGPT write this code."

> "..."

yieldcrv · 9 months ago
> I think it downplays the incidence of students using Claude as an alternative to building foundational skills

I think people will get more utility out of education programs that allow them to be productive with AI, at the expense of foundational knowledge

Universities have a different purpose and are tone-deaf to why students have used them for the last century: the corporate sector decided university degrees were necessary, despite 90% of the cross-disciplinary learning being irrelevant.

It's not the university's problem, and they will outlive this meme of catering to the middle class's upward mobility. They existed before and will exist after.

The university may never be the place for a human to hone the skill of being augmented with AI, but a trade school, bootcamp, or other structured learning environment will be, for those not self-starting enough to sit through YouTube videos and trawl Discord servers.

fallinditch · 9 months ago
Yes, AI tools have shifted the education paradigm and cognition requirements. This is a 'threat' to universities, but I would also argue that it's an opportunity for universities to reinvent the experience of further education.
pugio · 9 months ago
I've used AI for one of the best studying experiences I've had in a long time:

1. Dump the whole textbook into Gemini, along with various syllabi/learning goals.

2. (Carefully) Prompt it to create Anki flashcards to meet each goal.

3. Use Anki (duh).

4. Dump the day's flashcards into a ChatGPT session, turn on voice mode, and ask it to quiz me.

Then I can go about my day answering questions. The best part is that if I don't understand something, or am having a hard time retaining some information, I can immediately ask it to explain - I can start a whole side tangent conversation deepening my understanding of the knowledge unit in the card, and then go right back to quizzing on the next card when I'm ready.

It feels like a learning superpower.

jay_kyburz · 9 months ago
This sounds great! If I were learning something I would also use something like this.

I would double check every card at the start though, to make sure it didn't hallucinate anything that you then cram in your brain.

azemetre · 9 months ago
FYI, flash cards are some of the least effective ways to learn and retain info.
ramblerman · 9 months ago
I'll bite. Would you care to back that up somehow? Or at least elaborate.

Spaced repetition, as it's more commonly known, has been studied quite a bit, and is anecdotally very popular on HN and Reddit, albeit more for some subjects than others.

tmpz22 · 9 months ago
My family member is a third year med student (US) near the top of their class and makes heavy heavy use of Anki (which is crowdsourced in the Med School community to create very very comprehensive decks).
rcxdude · 9 months ago
I've always viewed them as a good option if you just have a set of facts you need to lodge into your brain (especially with spaced repetition), not so good if you need to develop understanding.
bdangubic · 8 months ago
I've used flashcards with my daughter since she was 1.5 years old. She is 12 now and religiously uses flashcards for all learning, and I'd size her up against anyone using any other technique for learning whatsoever.