Readit News
capnrefsmmat commented on Researchers find evidence of ChatGPT buzzwords turning up in everyday speech   news.fsu.edu/news/educati... · Posted by u/giuliomagnifico
yesco · 6 days ago
LLMs write in a very coherent, easy to understand way. I see no reason why someone wouldn't want to copy their style or vocabulary if they want to improve their communication skills.

Despite all the complaints about AI slop, there is something ironic about the fact that simply being exposed to it might be a net positive influence for most of society. Discord often begins from the simplest of communication errors after all...

capnrefsmmat · 6 days ago
Sure, if you're learning to write and want lots of examples of a particular style, LLMs can generate that for you. Just don't assume that is a normal writing style, or that it matches a particular genre (say, workplace communication, or academic writing, or whatever).

Our experience (https://arxiv.org/abs/2410.16107) is that LLMs like GPT-4o have a particular writing style, including both vocabulary and distinct grammatical features, regardless of the type of text they're prompted with. The style is informationally dense, features longer words, and favors certain grammatical structures (like participles; GPT-4o loooooves participles).
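
(For anyone curious what counting a feature like that looks like: here's a minimal sketch with spaCy, not our actual tagger, and the two sample strings are invented.)

    import spacy

    # Assumes: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    def participle_rate(text: str) -> float:
        """Past/present participle tokens (VBG/VBN) per 100 tokens.
        Crude proxy: also counts progressive/perfect verb forms,
        not just participial phrases."""
        doc = nlp(text)
        tokens = [t for t in doc if not t.is_space and not t.is_punct]
        participles = [t for t in tokens if t.tag_ in ("VBG", "VBN")]
        return 100 * len(participles) / max(len(tokens), 1)

    human_text = "We measured the rates by hand and checked them twice."
    llm_text = "Leveraging curated data, the study, conducted rigorously, offers insights."
    print(participle_rate(human_text), participle_rate(llm_text))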

With Llama we're able to compare base and instruction-tuned models, and it's the instruction-tuned models that show the biggest differences. Evidently the AI companies are (deliberately or not) introducing particular writing styles with their instruction-tuning process. I'd like to get access to more base models to compare and figure out why.

capnrefsmmat commented on Don't Fall for AI: Reasons for Writers to Reject Slop   mythcreants.com/blog/dont... · Posted by u/BerislavLopac
JKCalhoun · 2 months ago
> I can spot AI writing very quickly now, after just a few sentences or paragraphs.

Not denying this is true — but like a lot of what we've seen with AI, let's see how you feel in two years' time when the models have improved as much.

I think it was actually Brian Eno who said it (essentially): whatever you laugh about with regard to LLMs today, watch out, because next year that funny thing they did will no longer be present.

capnrefsmmat · 2 months ago
I don't think the AI companies are systematically working to make their models sound more human. They're working to make them better at specific tasks, but the writing styles are, if anything, even more strange as they advance.

Comparing base and instruction-tuned models, the base models are vaguely human in style, while instruction-tuned models systematically prefer certain types of grammar and style features. (For example, GPT-4o loves participial clauses and nominalizations.) https://arxiv.org/abs/2410.16107

When I've looked at more recent models like o3, there are other style shifts. The newer OpenAI models increasingly use bold text, bulleted lists, and headings -- much more than, say, GPT-3.5 did.

So you get what you optimize for. OpenAI wants short, punchy, bulleted answers that sound authoritative, and that's what they get. But that's not how humans write, and so it'll remain easy to spot AI writing.
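
(If you wanted to quantify that formatting shift yourself, here's a crude sketch — not anything from our paper — that counts bold spans, bullet lines, and heading lines per response.)

    import re

    def markdown_features(text: str) -> dict:
        """Count crude markdown formatting signals in a model response."""
        lines = text.splitlines()
        return {
            "bold_spans": len(re.findall(r"\*\*[^*]+\*\*", text)),
            "bullet_lines": sum(bool(re.match(r"\s*[-*+]\s+", ln)) for ln in lines),
            "heading_lines": sum(bool(re.match(r"\s*#{1,6}\s+", ln)) for ln in lines),
        }

    print(markdown_features("## Summary\n- **Fast**\n- **Cheap**\nPlain sentence."))
    # -> {'bold_spans': 2, 'bullet_lines': 2, 'heading_lines': 1}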

capnrefsmmat commented on Lightfastness Testing of Colored Pencils   sarahrenaeclark.com/light... · Posted by u/picture
wzdd · 2 months ago
Tangent, but I'm curious about how your style feature tagger got "no contractions" when the article is full of them. Just in the first couple of paras we have it's, that's, I've, I'd...
capnrefsmmat · 2 months ago
Probably because the article uses the Unicode right single quotation mark instead of apostrophes, due to some automated smart-quote machinery. I'll have to adjust the tagger to handle those.
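
(Something like this is probably all it needs — my guess at the fix, not the tagger's actual code:)

    import re

    def normalize_apostrophes(text: str) -> str:
        """Map U+2019 (right single quotation mark) to a plain apostrophe."""
        return text.replace("\u2019", "'")

    def count_contractions(text: str) -> int:
        """Rough contraction count; also catches possessive 's,
        which is fine for a crude check."""
        text = normalize_apostrophes(text)
        return len(re.findall(r"\b\w+'(?:s|t|re|ve|ll|d|m)\b", text, flags=re.IGNORECASE))

    print(count_contractions("It\u2019s clear that\u2019s what I\u2019ve seen."))  # -> 3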
capnrefsmmat commented on Lightfastness Testing of Colored Pencils   sarahrenaeclark.com/light... · Posted by u/picture
humblebeekeeper · 2 months ago
> First, it just reads that way. It's the default style if you ask ChatGPT to write a couple of paragraphs that explain why lightfastness is important.

It doesn't read that way to me, and I've read lots of ChatGPT text. We've come to opposite conclusions, I'm curious what qualities you are identifying/keying off of?

capnrefsmmat · 2 months ago
In our studies of ChatGPT's grammatical style (https://arxiv.org/abs/2410.16107), it really loves past and present participial phrases (2-5x more usage than humans). I didn't see any here in a glance through the lightfastness section, though I didn't try running the whole article through spaCy to check. In any case it doesn't trip my mental ChatGPT detector either; it reads more like classic SEO writing you'd see all over blogs in the 20-teens.

edit: yeah, ran it through our style feature tagger and nothing jumps out. Low rate of nominalizations (ChatGPT loves those), only a few present participles, "that" as subject at a usual rate, usual number of adverbs, etc. (See table 3 of the paper.) No contractions, which is unusual for normal human writing but common when assuming a more formal tone. I think the author has just affected a particular style, perhaps deliberately.
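
(If anyone wants to poke at their own text: a crude stand-in for the nominalization check is a suffix heuristic over nouns; the real tagger is more careful than this.)

    import spacy

    nlp = spacy.load("en_core_web_sm")
    SUFFIXES = ("tion", "sion", "ment", "ness", "ity", "ance", "ence")

    def nominalization_rate(text: str) -> float:
        """Nouns ending in common derivational suffixes, per 100 tokens."""
        doc = nlp(text)
        tokens = [t for t in doc if not t.is_space and not t.is_punct]
        noms = [t for t in tokens
                if t.pos_ == "NOUN" and t.text.lower().endswith(SUFFIXES)]
        return 100 * len(noms) / max(len(tokens), 1)

    print(nominalization_rate("The implementation of the assessment required verification."))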

capnrefsmmat commented on OpenAI slams court order to save all ChatGPT logs, including deleted chats   arstechnica.com/tech-poli... · Posted by u/ColinWright
lcnPylGDnU4H9OF · 3 months ago
So then the courts need to find who is setting their chats to be deleted and order them to stop. Or find specific infringing chatters and order OpenAI to preserve those specified users' logs. OpenAI is doing the responsible thing here.
capnrefsmmat · 3 months ago
OpenAI is the custodian of the user data, so they are responsible. If you wanted the court (i.e., the plaintiffs) to find specific infringing chatters, first they'd have to get the data from OpenAI to find who it is -- which is exactly what they're trying to do, and why OpenAI is being told to preserve the data so they can review it.
capnrefsmmat commented on Differences in link hallucination and source comprehension across different LLM   mikecaulfield.substack.co... · Posted by u/hveksr
motorest · 3 months ago
> The approach of generating something and then looking for hallucinations is just stupid. To validate the output I have to be an expert.

No. You only need to check for sources, and then verify those sources exist and that they support the claims.

It's the very definition of "fact".

In some cases, all you need to do is check if a URL that was cited does exist.
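
(For the simple existence case, a quick sketch using the standard library; whether the page actually supports the claim is a separate question:)

    import urllib.error
    import urllib.request

    def url_exists(url: str, timeout: float = 10.0) -> bool:
        """True if the URL answers a HEAD request with a non-error status.
        Note: some servers reject HEAD even for pages that exist."""
        req = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status < 400
        except (urllib.error.URLError, ValueError):
            return False

    print(url_exists("https://arxiv.org/abs/2410.16107"))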

capnrefsmmat · 3 months ago
If the output is interpreting sources rather than just regurgitating quotes from them, you need to exert judgment to verify they support its claims. When the LLM output is about some highly technical subject, it can require expert knowledge just to judge whether the source supports the claims.
capnrefsmmat commented on OpenAI slams court order to save all ChatGPT logs, including deleted chats   arstechnica.com/tech-poli... · Posted by u/ColinWright
sinuhe69 · 3 months ago
Why would a court favor the interest of the New York Times in a vague accusation over the interests and rights of hundreds of millions of people?

Billions of people use the internet daily. If an organization suspects that some people use the Internet for illicit purposes that may eventually harm its interests, would a court order ISPs to log all activity of all people? Would Google be ordered to save the searches of all its customers because some might use it for bad things? And once we start, where will we stop? Crimes could have happened in the past or could happen in the future; will courts order ISPs and Google to retain logs for 10 years, 20 years? Why not 100 years? Who should bear the cost of such outrageous demands?

The consequences of such orders are of an enormity the puny judge cannot even begin to comprehend. The right to privacy is an integral part of freedom of speech, a core human right. If you cannot keep private thoughts and private information, anybody can be incriminated using that past information. We will cease to exist as individuals, and I argue we will cease to exist as humans as well.

capnrefsmmat · 3 months ago
Courts have always had the power to compel parties to a current case to preserve evidence. (For example, this was an issue in the Google monopoly case, since Google employees were using chats set to erase after 24 hours.) That becomes an issue in the discovery phase, well after the defendant has an opportunity to file a motion to dismiss. So a case with no specific allegation of wrongdoing would already be dismissed.

The power does not extend to any of your hypotheticals, which are not about active cases. Courts do not accept cases on the grounds that some bad thing might happen in the future; the plaintiff must show some concrete harm has already occurred. The only thing different here is how much potential evidence OpenAI has been asked to retain.

capnrefsmmat commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
jedberg · 3 months ago
If I were a professor, I would make my homework start the same -- here is a problem to solve.

But instead of asking for just working code, I would create a small wrapper for a popular AI. I would insist that the student use my wrapper to create the code. They must instruct the AI how to fix any non-working code until it works. Then they have to tell my wrapper to submit the code to my annotator. Then they have to annotate every line of code as to why it is there and what it is doing.

Why my wrapper? So that I can prevent them from asking it to generate the comments, and so that I know they had to formulate the prompts themselves.

They will still be forced to understand the code.

Then double the number of problems, because with the AI they should be 2x as productive. :)
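
(A minimal sketch of what that wrapper might look like; every name here, including call_model, is hypothetical and not tied to any real provider or grading system.)

    import json
    import time

    def call_model(prompt: str) -> str:
        """Placeholder for whichever hosted AI the course wraps."""
        raise NotImplementedError("wire this to the chosen provider")

    class HomeworkWrapper:
        """Logs every student-written prompt so the instructor sees the full interaction."""

        def __init__(self, student_id: str, log_path: str):
            self.student_id = student_id
            self.log_path = log_path

        def _log(self, record: dict) -> None:
            record["time"] = time.time()
            record["student"] = self.student_id
            with open(self.log_path, "a") as f:
                f.write(json.dumps(record) + "\n")

        def ask(self, prompt: str) -> str:
            reply = call_model(prompt)
            self._log({"prompt": prompt, "reply": reply})
            return reply

        def submit(self, code: str, annotations: dict) -> None:
            """Final code plus the student's line-by-line annotations."""
            self._log({"submission": code, "annotations": annotations})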

capnrefsmmat · 3 months ago
For introductory problems, the kind we use to get students to understand a concept for the first time, the AI would likely (nearly) nail it on the first try. They wouldn't have to fix any non-working code. And annotating the code likely doesn't serve the same pedagogical purpose as writing it yourself.

Students emerge from lectures with a bunch of vague, partly contradictory, partly incorrect ideas in their head. They generally aren't aware of this and think the lecture "made sense." Then they start the homework and find they must translate those vague ideas into extremely precise code so the computer can do it -- forcing them to realize they do not understand, and forcing them to make the vague understanding concrete.

If they ask an AI to write the code for them, they don't do that. Annotating has some value, but it does not give them the experience of seeing their vague understanding run headlong into reality.

I'd expect the result to be more like what happens when you show demonstrations to students in physics classes. The demonstration is supposed to illustrate some physics concept, but studies measuring whether that improves student understanding have found no effect: https://doi.org/10.1119/1.1707018

What works is asking students to make a prediction of the demonstration's results first, then show them. Then they realize whether their understanding is right or wrong, and can ask questions to correct it.

Post-hoc rationalizing an LLM's code is like post-hoc rationalizing a physics demo. It does not test the students' internal understanding in the same way as writing the code, or predicting the results of a demo.

capnrefsmmat commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
dimal · 3 months ago
> a novice who outsources their thinking to an LLM or an agent (or both) will never develop those skills on their own. So where will the experts come from?

Well, if you’re a novice, don’t do that. I learn things from LLMs all the time. I get them to solve a problem that I’m pretty sure can be solved using some API that I’m only vaguely aware of, and when they solve it, I read the code so I can understand it. Then, almost always, I pick it apart and refactor it.

Hell, just yesterday I was curious about how signals work under the hood, so I had an LLM give me a simple example, then we picked it apart. These things can be amazing tutors if you’re curious. I’m insatiably curious, so I’m learning a lot.

Junior engineers should not vibe code. They should use LLMs as pair programmers to learn. If they don’t, that’s on them. Is it a dicey situation? Yeah. But there’s no turning back the clock. This is the world we have. They still have a path if they want it and have curiosity.

capnrefsmmat · 3 months ago
> Well, if you’re a novice, don’t do that.

I agree, and it sounds like you're getting great results, but they're all going to do it. Ask anyone who grades their homework.

Heck, it's even common among expert users. Here's a study that interviewed scientists who use LLMs to assist with tasks in their research: https://doi.org/10.1145/3706598.3713668

Only a few interviewees said they read the code through to verify it does what they intend. The most common strategy was to just run the code and see if it appears to do the right thing, then declare victory. Scientific codebases rarely have unit tests, so this was purely a visual inspection of output, not any kind of verification.

capnrefsmmat commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
mgraczyk · 3 months ago
Deliberate practice, which may take a form different from productive work.

I believe it's important for students to learn how to write data structures at some point. Red black trees, various heaps, etc. Students should write and understand these, even though almost nobody will ever implement one on the job.

Analogously electrical engineers learn how to use conservation laws and Ohm's law to compute various circuit properties. Professionals use simulation software for this most of the time, but learning the inner workings is important for students.

The same pattern is true of LLMs. Students should learn how to write code, but soon the code will write itself and professionals will be prompting models instead. In 5-10 years none of this will matter though because the models will do nearly everything.

capnrefsmmat · 3 months ago
I agree with all of this. But it's already very difficult to do even in a college setting -- to force students to get deliberate practice, without outsourcing their thinking to an LLM, you need various draconian measures.

And for many professions, true expertise only comes after years on the job, building on the foundation created by the college degree. If students graduate and immediately start using LLMs for everything, I don't know how they will progress from novice graduate to expert, unless they have the self-discipline to keep getting deliberate practice. (And that will be hard when everyone's telling them they're an idiot for not just using the LLM for everything)

u/capnrefsmmat

Karma: 2050
Cake day: April 4, 2011
About
https://www.refsmmat.com

Associate teaching professor of Statistics & Data Science, Carnegie Mellon University

Author of Statistics Done Wrong (https://www.statisticsdonewrong.com), the woefully complete guide to statistical errors

email: alex at refsmmat dot com
