bradley13 · 18 days ago
ChatGPT didn't ruin anything. Lazy students did.

I'm a prof, and my experience so far is that - where AI is concerned - there are two kinds of students: (1) those who use AI to support their learning, and (2) those who use AI to do their assignments, thus avoiding learning altogether.

In some classes this has flipped the bell curve on its head: lots of students at either end, and no one in the middle

ViscountPenguin · 18 days ago
Quantity has a quality all its own: it's significantly easier to be lazy with LLMs than with old-school cheating methods, and that shifts the equilibrium point at which people will cheat. So if you're a student, you're now even more likely to be dragging along dead weight than you were a decade ago.
mattgreenrocks · 18 days ago
Yes, I think many aren't realizing this. It is much easier to be lazy, much faster to get something plausible, and not culturally looked down upon. It is also easier to say, “I’ll ask ChatGPT to help me with my main points,” and fall into having it write the essay for you. Changing the defaults (as it were) changes the system, given enough time.

Perhaps the worst aspect of LLMs is they can support us in our “productivity,” when we’re actually trying to avoid the hard work of thinking. This sort of technology-assisted self-delusion feels very similar to social media: “I’m not hiding from life and people, I’m talking to them on Twitter!”

Frieren · 18 days ago
> ChatGPT didn't ruin anything. Lazy students did.

That's a very unhelpful way of looking at it. It solves nothing.

ChatGPT has created a problem, and we need to look for ways of solving it. Unregulated technology is harmful; we need regulations that limit the bad use cases.

To just blame people so that, let's be real, your shares in big tech corporations go up is destructive for society. These tools, like social media, are harming people and should be regulated to work for us. If they cannot be, then shut them off.

_Algernon_ · 18 days ago
The solution is obvious:

Go back to pen-and-paper examinations at a location where students are watched. Do the same for assignments and projects.

veltas · 18 days ago
> That's a very unhelpful way of looking at it. It solves nothing.

They're not trying to give you facts that solve things; they're just giving you the facts as they see them. You can do with that info what you will.

nullc · 18 days ago
This article is about adults who are paying considerable amounts of money to be educated, significantly undermining the education they're paying for by offloading the exercises onto chatbots.

And your response is that we need regulations??

Institutional policy, changes to lesson practices, covering the risk of wasting your education in intro materials... sure!

But the state is not your parent, and it's certainly not mine. Geesh.

pipes · 18 days ago
And who gets to decide that?
PeterStuer · 18 days ago
"Unregulated technology is harmful, we need regulations that limit the bad use cases"

Unfortunately, most often the cure is worse than the poison.

bradley13 · 18 days ago
Seriously? Regulate use cases? That's sort of like the very short-lived attempt to regulate automobiles, by requiring someone to walk in front of every vehicle, ringing a bell.

Actual, useful AI is a disruptive technology, just as the automobile was. Trying to regulate use cases is the wrong solution. We need to find out how this technology is going to be integrated into our lives.

hvb2 · 18 days ago
You seem to think that before ChatGPT there were no students cheating?

There will always be people that try to outsmart everyone else and not do the work. The problem here is those people, nothing else.

throwaway290 · 18 days ago
No, now there are more people who are not learning. Previously, students had to learn. Group 2 was very small, because you had to be very lucky to get your assignments done for you without learning at least something yourself. And if you half-assed your assignment, the lecturer would notice. Now it is a no-brainer for many people to be in group 2.
meander_water · 18 days ago
Google, Anthropic and OpenAI are well aware of this. They've pushed out features like study mode and guided learning, but they're barely a band-aid fix. The people who want to bypass learning and just get the degree now have a cheaper way. Instead of paying someone else to do their homework, they pay a bot.
moffkalast · 18 days ago
Well given that a large portion of the jobs today are meaningless busywork that only exist so people have something to do, I guess we now also have the education level to match them.
BOOSTERHIDROGEN · 18 days ago
Students in category 1 can also fall into the temptation of not trying hard enough to solve the homework/assignment themselves. This is an interesting era, I think.
BrtByte · 16 days ago
Yep, the curious students suddenly have a supertool to dig deeper and experiment more, while the disengaged ones now have a shortcut that lets them skip thinking entirely
senectus1 · 18 days ago
yup, AI is next-level useful when treated as a rubber duck + search engine.

It's absolutely a backstabbing saboteur if you blindly trust everything it outputs.

Inquisitive skepticism is a superpower in the age of AI atm.

nomius10 · 18 days ago
> ChatGPT didn't ruin anything. Lazy students did.

This is a prime example of thinking exclusively along the lines of rugged individualism. It places all the blame on the individual, whilst ignoring any systemic or collective causes.

It ignores the socio-economic realities of the students, especially if they come from a challenged background. To them the important thing is getting the high-paying job that represents a ticket out of the lower class, and if that path can be optimized, it's a no-brainer that they would take that route.

It ignores the fact that the actual credential paper is more important to recruiters than the knowledge gained through the program. Or even that networking and referrals carry far more weight in recruiting than raw skill, more than we'd like to admit given our meritocratic self-perception.

It ignores the possibility that the module itself is not that valuable. We're talking about the US here, where people literally pay out of pocket for education, and yet they cheat or skip it in a heartbeat. The only rationale that fits is that, through an economic lens, there is no value there. They'd rather spend that time on extracurricular activities that actually improve their chances of getting employed.

It ignores the fact that since the industrial revolution the education system has not evolved at all (merely adding a computer lab does not mean the system was reworked; it's the other way around: the new technology was adapted into the existing system).

The education system has flaws. The incentives in the job marketplace have flaws. There are many factors at play here, and simply arguing that "it's the student's fault" is the equivalent of an ostrich sticking its head in the sand.

sudohalt · 18 days ago
The vast majority of people go to school to be "employable" (or because it's what you're "supposed to do"), not to learn. The only thing that matters is your grade, therefore people will optimize for that. With grades being the only thing that matters, plus time pressure (you must graduate by a certain date), it's no surprise that students offload their work to ChatGPT. They don't care about learning; they care about what is actually valued: their grade and graduating. If those things didn't matter, people wouldn't use ChatGPT, because their incentive would be to learn. You see this with older people who go back to college or take community college classes: they aren't cheating, because there is no incentive to cheat.
T4iga · 18 days ago
Except that since getting my degree, no one but my ego has ever cared about the grades I received (which weren't good, btw). This is my European perspective. I don’t know how it is in the US.
sudohalt · 18 days ago
Yeah, this is mostly true; grades rarely matter after you graduate. But you must maintain a "passing grade", and grades do matter if you are pursuing grad school or certain majors, and sometimes right after graduation or for internships.
BrtByte · 16 days ago
When the system rewards grades over understanding, it's completely rational for students to optimize for grades
lysecret · 18 days ago
I honestly think the times where you became employable simply with decent grades and a degree are over.
skeaker · 18 days ago
Not sure this was the fault of ChatGPT as much as it was the fault of disinterested students bullshitting a class for credits. I've seen similar bad work in group projects when I was a student well before ChatGPT was a thing.
augment_me · 18 days ago
I am a PhD student who teaches part-time in some courses, and a difference I have personally run into is that disinterested students still had agency over what they had written. When you present something, or hand in something that you have written, you still display the information that you know at that time. This makes feedback helpful, because you can latch it onto SOMETHING.

When I get obvious LLM hand-ins, who am I correcting? What is the point? I can say anything, and the student will not be able to integrate it, because they have no agency over or connection to "their" work. It's a massive waste of everyone's time: I have another five students in line who have actually engaged with their work and who will be able to integrate feedback, and their time is being taken by someone who will not be able to understand or use the feedback.

This is the big difference. For a bad student, failing and making errors is a way to progress and integrate feedback; for an LLM student, it's pointless.

what-the-grump · 18 days ago
I think only time will tell. Let me flip this on its head.

LLMs allow me to tackle stuff I have no business tackling because the support from the LLM for the task far exceeds google / stack overflow / [insert data source for industry or task].

Does the concept sink in? Yes and no; I am moving too fast most of the time to retain the solution.

When the task is complex enough and the LLM gets it wrong, oh boy is it educational: not only do I have to figure out why the LLM is wrong, I now have to correct my understanding and learn to reason against it.

I was a very bad student; most of the classes didn't make sense to me, they bored me out of my mind, and I failed a lot. Do I ever feel that way when talking to ChatGPT about a task I have no idea how to solve? No, and guess what: we figure it out together.

Another data point: my English writing has improved by using ChatGPT to refactor and reformat it, giving me more examples and mostly correct English structure. Over time stuff sinks in even if you are not writing it; you are still reading it, and editing.

Let's take code for a minute: is it easier to edit someone else's code or your own? So everyone who has to dive deep into troubleshooting ChatGPT's code is somehow dumb or lazy? I don't think so; they are at least as smart as the code.

What would happen if we made a curriculum around using ChatGPT? How far would I get in Chem 1 if I spent 90 minutes with ChatGPT prompts prepared by a professor and a machine that never gets tired of explaining and rephrasing until I get it?

tossandthrow · 18 days ago
ChatGPT introduces a new vector for being a bad collaborator.

Previously you could write a lot of text that was wrong, but claim that you at least tried.

Now we need to get used to putting "they contributed AI slop" in the same category as "they did not contribute anything at all".

This is one of the reasons why juniors will vanish - there is no room for "I tried my best".

Edit: clarity

tossandthrow · 18 days ago
This

> Some of the sections were written to answer a subtly different question than the one we were supposed to answer. The writing wasn’t even answering the correct question.

Is my absolute biggest issue with LLMs - and it is really well put.

It is like two concepts are really close in latent space, and the LLM projects the question to the wrong representation.
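A rough sketch of what I mean, assuming the sentence-transformers library is available (the model name below is just an arbitrary example): embed a question and a subtly different variant of it, and they typically land almost on top of each other, while an unrelated question does not.

    # Toy illustration, not a claim about any particular production model:
    # two subtly different questions tend to be near neighbours in embedding
    # space, while an unrelated one sits far away.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # example model, swap in any embedder

    asked    = "Why does method A outperform method B on this dataset?"
    answered = "Why does method B underperform method A on this dataset?"
    control  = "What is the capital of France?"

    vecs = model.encode([asked, answered, control])

    def cos(a, b):
        # cosine similarity between two embedding vectors
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    print("asked vs. answered:", round(cos(vecs[0], vecs[1]), 3))  # typically high
    print("asked vs. control: ", round(cos(vecs[0], vecs[2]), 3))  # much lower

If the two near-identical questions score much closer than the unrelated one, that is roughly the failure mode: the model slides toward the nearby question it "knows" how to answer instead of the one that was actually asked.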

jatins · 18 days ago
I am seeing this at work as well, where there is increasingly a trend of submitting code that the authors haven't thoroughly reviewed themselves and can't reason through.
wfhrto · 18 days ago
Why should that be a problem? Code no longer needs to be understood. If there are any problems, code can be trivially regenerated with updated descriptive prompts.
jeffhuys · 18 days ago
Oh the future is going to be BRIGHT for hackers!
sfn42 · 18 days ago
Is this sarcasm or are you really that naive?
nlitsme · 18 days ago
I think your 'group' is not communicating very well. Just telling a groupmate 'Now I will take over your work' is not very supportive. When people start editing each other's work out of the blue, there seems to be no healthy discussion at all.
michaelchicory · 18 days ago
This sucks, though it’s formative. An experience that I value highly from my time studying was working on a group project in a team with misaligned goals: it teaches you how much it matters to find good people to work with in the real world!
self_awareness · 18 days ago
If it weren't ChatGPT, those students would more than likely be the kind who buy solutions so that they still don't have to do the work.

Some people somehow think that having more while working less is an act of resourcefulness. To some extent it maybe is, because we shouldn't work for work's sake, but making "working less" a life goal doesn't seem right to me.

tossandthrow · 18 days ago
The difference is between $20 and $200 plus significantly more effort.

I don't think these students necessarily would have bought.

rekenaut · 18 days ago
The difference is much greater than that. It's $20/month for a machine that can provide instant answers to any prompt on any topic hundreds of times a month, vs. $200/assignment for something that may take days to receive and that you have to edit yourself if you want a change made.

I think it’s quite clear that most students who are using AI now to generate assignments would not have bought.

rrgok · 18 days ago
Why "working less" should not be a life goal? I got a undergraduate degree because I can earn more, but I don't need more money. I need more free time. With increased salary I can work less.
PeterStuer · 18 days ago
Before LLMs they copy-pasted straight from Google or Wolfram Alpha.