a_bonobo · a year ago
Turnitin's cheating detection tools have always been garbage - and I say that as an ex-academic who was 'forced' to use those tools extensively (the PDFs submitted by students are only visible in the Turnitin portal, with the 'flags' to the right).

Even before the 'AI detection' tool (IMHO snake-oil) the features around copy-pasting/plagiarism were so bad I always ignored them. The tool would flag things like commonly used coverpage forms. That universities pay a fortune for those tools is an indictment of the industry.

papreclip · a year ago
The one student who got caught for plagiarism when I was a TA was busted by turnitin. Maybe the tool isn't perfect but it caught this student who had painstakingly replaced every single word in a copied paper with a synonym. I'm talking "perhaps the utility is not flawless, but the use of this utility allowed for the apprehension of a pupil who..." etc.

I was surprised at the level of effort

joe5150 · a year ago
So it either only caught a single cheater among some unknown number, or only a single student cheated? Neither case seems to justify the expense and hassle of subjecting everybody to this process, not to mention everything else wrong with TurnItIn.
KittenInABox · a year ago
I'm always shocked at how much effort students will put in simply to avoid doing a given assignment. I guess this is just part of what you get when you attach cultural and economic worth to a set of hoops to jump through [even if the intent is to have institutions of knowledge, sadly I think many students just see it as a way to get economic stability].
falcor84 · a year ago
More likely that they used something like quillbot or grammarly for automatic paraphrasing.
w4der · a year ago
I never once submitted an assignment on a course with TurnitIn turned on. It makes you agree to their terms of service, which I disagree with, so I'd just send my work to the teacher through email, explaining what my issue with TurnitIn was. Most times they'd just turn it off, which I think speaks volumes about that "tool".
tarboreus · a year ago
They're 92% accurate. Sounds good until you realize that's more than one false plagiarism accusation in every 20-person class.
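A quick back-of-the-envelope sketch of that claim, assuming (as the comment implies) that "92% accurate" means roughly an 8% false-positive rate on honest papers:

```python
# Expected false accusations in a 20-student class of honest writers,
# assuming the "92% accurate" figure implies an 8% false-positive rate.
class_size = 20
false_positive_rate = 0.08  # assumption derived from the 92% figure

expected_false_flags = class_size * false_positive_rate
print(expected_false_flags)  # 1.6 honest students flagged per class, on average

# Probability that at least one honest student gets falsely flagged:
p_at_least_one = 1 - (1 - false_positive_rate) ** class_size
print(round(p_at_least_one, 3))  # ≈ 0.811
```

So under that reading, roughly four out of five all-honest classes would still produce at least one accusation.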
michaelmior · a year ago
As a current academic who uses Turnitin regularly, I find it works fairly well. There are a lot of false positives, but it also regularly identifies actual plagiarism for me.
Ekaros · a year ago
It is a tool like anything else. From student perspective I perfectly understood reasoning for it and how useful it would be when person doing review actually cares. Being able to easily find most obvious things and then cross-reference is very useful.

Problem comes when users do not care and blindly just believe in percentage value and some threshold

DonsDiscountGas · a year ago
"lots of false positives"

Maybe this is okay when we're talking about plagiarism and you can review manually, but humans can't reliably identify LLM written text

sabbaticaldev · a year ago
For what it's worth (to you), the results would be the same if it identified all of them as plagiarism.
krunck · a year ago
My wife is an English prof at a local university. She is tired of being put in the position of being an AI cop. She is tired of the 10% of students whose work was flagged wasting 50% of her office hours trying to convince her that their work is real.

For her literature classes - where the class grade is composed of a participation grade and a written work grade - she is strongly considering getting rid of all written homework. Instead all writing work will be hand written in class.

gizajob · a year ago
I've been saying for a couple of years now that academic essays are going to have to go back to being handwritten. It seems like possibly the only solution.

The job of essays isn't to produce a Word document of text that can get a grade above 70 - it's to demonstrate and improve the student's writing and thinking. The computer seems to have now completely messed this up.

zer8k · a year ago
If the goal is to improve something don't make it worth 50% of your grade.

When you attach a literal degree to something being done "right" rather than to "demonstrated considerable improvement", you're going to get people who game the system. When you then make me pay $30,000 or more for a piece of paper, you will get people who seek an edge. I did my fair share of underhanded tricks in school because I didn't want to pay another $1200; I had to maintain a certain GPA to get where I wanted. This was over 15 years ago. Focus on the classes you care about, game the classes you don't. It's a tale as old as education itself.

Blame the OP's wife and the faculty and most universities for not only supporting but encouraging this behavior. If universities are for learning and improving the weight of grades shouldn't be career-determining and the cost should be commensurate to the expectation - both from the professor and the students.

OutOfHere · a year ago
I would add a twist to it, which is to give each student ten printed articles from which to accumulate material to handwrite the essay in class. Five of the ten would be totally unrelated, two would be strongly related, two would have a medium relation, and one would be mildly related. It's the student's job to figure out the useful sources and write the essay, all within 120 minutes. In this way, the students don't have to hallucinate content for the essay.
marcus_holmes · a year ago
I think we're looking at a symptom not a cause.

Essays are only ever used in an academic context. No one outside of academia writes an essay. This use case has been replaced by PowerPoint presentations.

Essays are obviously not the only method of "demonstrating and improving the student's writing and thinking".

If we were to write an essay, we would use an LLM to do the actual writing, and feed it a set of bullet points for the points we want to make. Same as our parents moved from handwriting to word processing, we're moving to LLMs as a writing tool.

We're also seeing (something the article didn't mention) that LLMs are being used by staff to grade student essays. We've reached the ridiculous point of an LLM writing content that only an LLM will ever read.

So why not just abandon the essay as a teaching tool? I realise that academia is slow to change, but this might force them to.

Replace it with student presentations, or whatever the actual industry the students are heading for uses to communicate.

As for academia moving away from publishing papers (which are also increasingly being written by LLMs) into journals... that will need to happen too. What replaces that is going to be interesting.

Ancalagon · a year ago
You'll have to take away their phones/computers in this case too. ChatGPT can write an excellent essay in less than a minute.


nomat · a year ago
the authors of this paper[0] were able to use AI to figure out if a block of text was written by AI, so hopefully this becomes a tool available to educators very soon.

[0] https://arxiv.org/html/2410.16107v1

rinzero · a year ago
AI ripped the cover off of a dirty secret of higher education:

The bulk of assignments are useless busywork using outdated formats, and educators and educational institutions haven't been motivated to change it.

Instead of realizing this and embracing the theories and models of newer education sciences, most institutions have become near ideologues with their inertia.

shadowerm · a year ago
I would say it is an even deeper issue of credentialism vs education.

True education doesn't need busywork but credentialism does. The busywork is the filter for the credential.

Also, if a person doesn't write essays in college how are we going to have all these authors that turn 20 pages of ideas into 300 page books? It would ruin the whole book industry. It takes a ton of practice to uncompress an idea into a coherent structure using 10X more words than needed so the book is the optimal length for sale when printed.

spwa4 · a year ago
Either that, or, you don't have respect for the fact that one TA has to correct the work of >30 students per class, for perhaps 10 different classes PLUS other work (like actual research), including checking for plagiarism and the like. This only works with certain formats that are characterized by a large asymmetry of work between student and teacher.

You could, of course, pay 30x more and get better service.

And then there's the total disrespect students have for outright knowledge and practice. Being able to recite a math formularium from memory "is useless". It kind of is. But students who can do this work 5x faster than people who can't, and surely that's a good part of an education. The second thing that really matters for math is doing A LOT of exercises (like 10 intermediate to hard problems per day, ideally) over a prolonged period of time (months, minimum). That's every last exercise in a big calculus book. The exercises "are useless". Mathematica can do them, and in almost all cases, can do them better than a mathematician with 40 years of experience. But ...

The difference between students who've done both and students who haven't is night and day. Including the difference in using tools like Mathematica. Frankly, you can even see the difference clearly in other subjects.

MrMcCall · a year ago
In a world with honorable human beings raised in wisdom, we would understand that cheating only hurts ourselves in the long run, but that is not the majority of the world's ethical zeitgeist at the moment.

The lack of ethics, like all selfishness, propagates inefficiencies into the greater society, as exemplified here for our overworked and underpaid teachers.

Our son has become quite a good chess player over the past 6ish years, during which online chess cheating via engines has proliferated. I explained to him that he is completely responsible for being able to honestly say, "No." in answer to the question, "Have you ever cheated in online (or over-the-board) chess?".

Having a decision matrix that leads to ethical action confers a specific peace that accompanies knowing that one has done nothing wrong. In a world full of liars and cheats, being honest is its own very special and quite rare power, or so it looks to me.

mnky9800n · a year ago
It could represent that the current zeitgeist doesn't believe in the system any longer and thus feels like cheating is a valid answer.
antifa · a year ago
We live in an age when almost all our supposedly honest and self made heroes end up exposed as liars, grifters, cheaters, often with no shame and no consequences. The bad examples are coming from inside the house.
indymike · a year ago
This is an easy to solve problem: oral exams / interview exams.
metal_am · a year ago
That's how grad degrees are done. But there's simply not enough time to do this at an undergrad level with potentially hundreds of students.
bnchrch · a year ago
Unless... You have an AI listening and evaluating their oral exam.

I know it sounds like a farce. Because it is. But it might also be a proper solution.

User23 · a year ago
Today’s first year undergraduate is roughly as literate as a sixth grader was a century ago. College was never meant to be a mass remedial education program and by attempting to be one it’s failing at its core mission.

All this nannyware of various sorts is just more evidence, proof even, of how few of those students should even be there.

michaelmior · a year ago
Not really in my experience. That's how final PhD examinations are typically done. Outside of that, it's pretty rare. That said, PhD students are also regularly interacting with faculty members and discussing their work so it would be difficult to get by without actually knowing things.
gedy · a year ago
Agreed, but you know someone will shout bias or whatever
somnic · a year ago
Or just any normal proctored exam?
perrygeo · a year ago
Paper and pencil or an air-gapped computer lab. This ain't rocket science.
BrenBarn · a year ago
That's mentioned towards the end. The problem is it requires much more time and labor to give oral exams.
indymike · a year ago
If only there was a new labor saving technology that could be used...


UncleMeat · a year ago
Rough when you have 200 students.
tverbeure · a year ago
Rough, but perfectly doable. It's how pretty much all my finals were done way back when. Classes of 200 and more.
mupuff1234 · a year ago
Or just regular written exams.
s08148692 · a year ago
Imagine an oral C++ algorithms and data structures exam.

This isn't just essays, AI will happily output any known algorithm you ask for in a few seconds. CS coursework can be almost entirely automated in many cases

mmooss · a year ago
We need validation of GPT-detection tool accuracy (false/true positives and negatives), and in what circumstances, or these tools are useless. Without independent information about whether something is GPT-written [1], there is no way to verify a detector's verdict; in particular, they can't be validated on submitted student papers, where the ground truth comes from people (students) with very strong incentives to lie (not wanting to be caught).

Here are a couple of references from the article, but I would like to see a review of this question:

> Researchers at the University of Reading recently conducted a blind test in which ChatGPT-written answers were submitted through the university’s own examination system: 94% of the AI submissions went undetected and received higher scores than those submitted by the humans.

> One study at Stanford found that a number of AI detectors have a bias towards non-English speakers, flagging their work 61% of the time, as opposed to 5% of native English speakers (Turnitin was not part of this particular study).

[1] That needs to be defined: Every word? Mostly? GPT-written then edited? What about ideas from the human and writing by GPT? First draft by human and editing by GPT? What about using it as a grammar checker, or for suggestions on clarifying some tricky passages?
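A minimal sketch of the validation the parent asks for, on texts with known ground truth. Here `detector` is a hypothetical stand-in for any GPT-detection tool's yes/no output; the toy detector and corpus below are invented for illustration:

```python
def evaluate(detector, samples):
    """samples: list of (text, is_ai_written) pairs with known ground truth."""
    tp = fp = tn = fn = 0
    for text, is_ai in samples:
        flagged = detector(text)
        if flagged and is_ai:
            tp += 1
        elif flagged and not is_ai:
            fp += 1  # honest writer falsely accused
        elif not flagged and is_ai:
            fn += 1  # AI text that slips through
        else:
            tn += 1
    return {
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
        "false_negative_rate": fn / (fn + tp) if fn + tp else 0.0,
    }

# Toy run: a "detector" that flags any text containing the word "delve"
naive = lambda text: "delve" in text
corpus = [
    ("we delve into the topic", True),
    ("I wrote this myself", False),
    ("let us delve deeper", False),   # a human who just likes "delve"
    ("a fully AI-written answer", True),
]
print(evaluate(naive, corpus))  # both rates are 0.5 on this toy corpus
```

Without a corpus where `is_ai_written` is known independently of the detector, neither rate can be measured, which is exactly the problem with student submissions.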

me_me_me · a year ago
Is that even feasible? You can steer an LLM's output by asking it to respond in the style of some specific person.

Apart from some hidden secret sauce on the LLM provider's side, detection is always going to be an arms race.

mmooss · a year ago
> Is that even feasible?

If you mean, is it feasible to empirically test the accuracy of detection software when the LLM output is so varied? That's an interesting question.

Is there a way to create a representative sample of LLM output for testing purposes? That's an important question, not least because we'd also be able to define the distribution and range of outputs (unless there's a statistical way to create a sample without knowing those things?).

It goes to the question of how deterministic the output is. How predictable, even statistically?

Maybe we just work with a large collection of output. Or maybe we can create a sample of inputs (the prompts), and work from that.
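One way to sketch the "sample of inputs" idea: enumerate the dimensions that vary LLM output (prompt, requested persona, sampling temperature) and draw a stratified corpus across them. `generate` here is a hypothetical stand-in for whatever model API is under test, and the prompts/personas are invented examples:

```python
import itertools
import random

prompts = ["Summarize the causes of WWI", "Explain photosynthesis"]
personas = ["a first-year undergraduate", "a terse engineer"]
temperatures = [0.2, 0.7, 1.0]

def build_corpus(generate, n_per_cell=5, seed=0):
    """Stratified sample: n_per_cell outputs for every
    prompt x persona x temperature combination."""
    rng = random.Random(seed)
    corpus = []
    for prompt, persona, temp in itertools.product(prompts, personas, temperatures):
        styled = f"Answer as {persona}: {prompt}"
        for _ in range(n_per_cell):
            corpus.append(generate(styled, temperature=temp, seed=rng.random()))
    return corpus
```

Whether such a grid is *representative* of what students actually prompt is exactly the open question the parent raises; the stratification only makes the coverage explicit, it doesn't guarantee it matches real usage.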

codingwagie · a year ago
Maybe this will finally liberate humanity from educational prestige dominating life outcomes.
mmooss · a year ago
What about education?
spudlyo · a year ago
I’d never let school get in the way of my education!
whamlastxmas · a year ago
There are a hundred lifetimes of quality education available online for free. If you learn it and can demonstrate its value or use it to create value then awesome. Not sure why we need to involve a certification process that costs $100k
User23 · a year ago
If these students cared about education they wouldn’t be using AI to avoid learning how to do the work.
dhosek · a year ago
We’ll be free of that as well.
carabiner · a year ago
I am enormously glad I went to college before these tools existed. I don't know how I would handle myself with access to them.
ars · a year ago
It depends on your goal: Do you just need that piece of paper with your name on it? Or are you going to school to really learn material?
mmooss · a year ago
What we do depends on much more than our goals. There's also stress, time pressure and workload, emotional problems, money issues, competitive pressure, fears (of failure, of disappointing important people, etc.), exhaustion, etc.

People don't do bad things because that's their goal but because of these issues. If you give humans an easy way to do bad things, many people under pressure will do it.

carabiner · a year ago
We all start out with good intentions. Do we all know if we would follow them to the end?
mupuff1234 · a year ago
You also know that you probably shouldn't eat that extra cookie and yet...
proteal · a year ago
I did undergrad before ChatGPT and now I'm back in school for an MBA at a school that pays for all students to have access to GPT-4o. Maybe because we're all older, the tools are less of a problem, but I would say they have been a net positive. GPT-3 was ass and students could easily tell when a classmate used it for their group contributions. Profs seem to be about 6-12 months behind the curve in terms of the tech. All profs are aware and thinking deeply about how AI is changing how they teach.

Group papers are usually made into an outline at first and then we divvy up the responsibilities. If you use AI, most kids only care if your work sucks. AI consistently scores like an 80-85% (profs sometimes submit and blindly grade the responses), but almost always misses the core teaching points of classwork.

In my program, grades don’t matter (so employers can’t stack rank us). People who make extensive use of AI are really only cheating themselves. If you’re a big AI user, other kids generally know and try to avoid forming groups with you if they know in advance. You learn better when learning alongside others and if all someone does is dump AI slop in the google doc, you’re wasting my time in addition to yours.

I use AI to flesh out points, especially on assignments I don’t care about. It can help for idea generation and “connecting the dots” between ideas, but I always edit the output because the AI makes stylistic choices I don’t like. It’s definitely an accelerator for me when writing papers. I stand behind all the papers I’ve submitted, though some have sucked (regardless of AI usage or not).

In undergrad, when my priorities were less about learning and more towards dating/partying, I definitely would have abused this tool. At the end of the day, using the tool mainly cheats your own learning. I hope they transition to talking about using LLMs like one uses gambling - a little is fine here and there but if it’s all you do that’s a problem.

I'm not sure what to do about elementary-age kids, because the AI easily writes "better" essays. At least in college I could do better than an AI if I applied myself. But in sixth grade? Good luck to sixth-grade me. The cat's out of the bag now and we should be really empathetic to the younger generations. Imagine getting slammed with TikTok->Pandemic->ChatGPT in the span of like 6-7 formative years. They are growing up differently and I certainly have no clue what we need to do to help them be successful.

michaelmior · a year ago
> I hope they transition to talking about using LLMs like one uses gambling - a little is fine here and there but if it’s all you do that’s a problem.

If I hire someone to manage my money, I don't want them to do any gambling. Although given that investing is somewhat of a gamble, I at least want to set the terms and have them disclose to me exactly how they're gambling.

histriosum · a year ago
> In my program, grades don’t matter (so employers can’t stack rank us).

Can you expand a bit on how that works? I have limited academic experience, so I’m fascinated. Does everyone end up with a 4.0 if they pass, or..?

carabiner · a year ago
If I could wave a wand to wish away all LLM's I would do it in a heartbeat. More harm than good.