doe_eyes · a year ago
> We show that the appearance of LLM-based writing assistants has had an unprecedented impact in the scientific literature, surpassing the effect of major world events such as the Covid pandemic.

I'm not quite sure this follows. At the very least, I think they should also consider the possibility of social contagion: if some of your colleagues start using a new word in work-related writing, you usually pick that up. The spread of "delve" was certainly bootstrapped by ChatGPT, but I'm not sure that the use of LLMs is the only possible explanation for its growing popularity.

Even in the pre-ChatGPT days, it was common for a new term to come out of nowhere and then spread like wildfire in formal writing. "Utilize" for "use", etc.

lm28469 · a year ago
I have friends in academia all around Europe and in different fields (from Spain to Denmark, Slovakia, Sweden, Germany; in economics, solid-state physics, &c.), and I can 100% guarantee you're wrong: ChatGPT is now the default tool for the vast majority of people writing these papers.

It's not people picking up new words. You have to understand that academia is mostly about shitting out as many papers as you can and making them as verbose as possible; it's the perfect use case for LLMs. The field was rotten before, ChatGPT just makes it more obvious to the non-academia crowd.

For them, not using ChatGPT would be like sticking to sailboats while the world moves to steam engines.

mzl · a year ago
> academia is mostly about shitting out as many papers as you can

This is the classic case of publish-or-perish; unfortunately, publication metrics are ubiquitous in all aspects of academic life. Measuring true impact is the goal, but it is a hard problem to solve.

> and make them as verbose as possible

This is just laughably wrong. Page limits are always too low to fit all the information one wants to include, so padding a text is simply not of interest.

With that said, I wouldn't be surprised if people use ChatGPT a lot, if for no other reason than that most academics are writing in a language (English) that is not their native one, and that is hard to do. Anything that makes the process of communicating one's results easier and more efficient is a good thing. Of course, it can also be used to create incomprehensible word salads, but I saw a lot of those in pre-LLM times as well.

UncleMeat · a year ago
Shitting out as many papers as you can, yes.

Making them as verbose as possible? My experience from grad school and with friends who are now faculty is that literally everybody's first draft is above the page limit and content needs to be cut.

luyu_wu · a year ago
It sounds like you have anecdotal evidence from a bad side of academia. I also have many friends in academia across NA and Asia and have an impression closer to parent's.
ChainOfFools · a year ago
> chatgpt is now the default tool for the vast majority of people writing these papers.

I'm in a similar sort of circle, and though I've not measured rigorously, this strongly squares with my own anecdotal experience. It's especially prevalent among people whose first language is not English but who have no choice but to publish (frequently) and apply for grants in English.

So many of the writing-assistant tools customarily used for first-pass proofreading have gone straight into full LLM integration and no longer just check grammar and basic "elements of style" issues.

Also, professional (human) proofreaders are fantastically expensive for the very limited amount of assistance they provide. A couple of thousand dollars (the source for this number: a proofreader recommended by Oxford University Press) for someone to fix some minor semantic redundancies in the wording of your 35-page book chapter is financially unreasonable for a lot of people in academia.

darepublic · a year ago
Tangential, but I remember working in a school computer lab hours before my big Paradise Lost essay was due and putting an extra newline between all the paragraphs to beef up the page length.
stavros · a year ago
There are a few language shifts that have happened in the past few years: the singular "they", "there's a few" instead of "there are a few", the "like" filler word, etc. It's not that unusual.
saghm · a year ago
Singular "they" has been around for much longer than the "past few years": https://www.merriam-webster.com/wordplay/singular-nonbinary-...

> We will note that they has been in consistent use as a singular pronoun since the late 1300s; that the development of singular they mirrors the development of the singular you from the plural you, yet we don’t complain that singular you is ungrammatical; and that regardless of what detractors say, nearly everyone uses the singular they in casual conversation and often in formal writing.

ben_w · a year ago
And some of the post-LLM cliches are almost certainly purely human in their memetic spread — "moat" and "stochastic parrot" in particular come to mind.
Animats · a year ago
Their list:

    delves
    crucial
    potential
    these
    significant
    important
They're not the first to make this observation. Others have picked up that LLMs like the word "delves".

LLMs are trained on texts that contain a lot of marketing material, so they tend to use some marketing words when generating pseudo-academic content. No surprise there. I'm surprised it's not worse.

What happens if you use a prompt containing "Write in a style that maximizes marketing impact"?

('You can't always use "Free", but you can always use "New"' - from a book on copywriting.)
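Roughly, that "maximize marketing impact" experiment could be tried like this, a minimal sketch using the OpenAI Python client; the model name and the sample abstract are placeholders, not anything from the paper:

    # Hypothetical one-off test of the "maximize marketing impact" prompt.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    abstract = "We measured serum ferritin in 120 patients with anemia."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=[
            {"role": "system", "content": "Write in a style that maximizes marketing impact."},
            {"role": "user", "content": f"Rewrite this abstract: {abstract}"},
        ],
    )
    print(response.choices[0].message.content)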

Hizonner · a year ago
I use at least four of those pretty heavily, and I suspect that I'm in the top quartile in terms of using at least three of them. I guess I'm going to be queued for deactivation now.
JumpCrisscross · a year ago
If you only note your significant thoughts, marking them as such is okay. That filter is what AIs lack.

On the upside, the advent of AI has made both vendor selection and project winning simpler. For the former, I can filter within seconds. For the latter, showing examples of competitors’ AI-derived speech is enough to get them eliminated.

analog31 · a year ago
>>> "Write in a style that maximizes marketing impact"

Probably exactly what academics are being told by their university PR departments.

quartesixte · a year ago
The only one here that actually stands out from everyday usage is "delves". Every other word on this list is in common use among anyone with a decent vocabulary and a literary mind.

But I guess I'll just get flagged for being a GPT now.

nicce · a year ago
Traditional grammar-correction applications usually drop the "important" words as well and suggest something else.
SoftTalker · a year ago
If you're interested in writing more clearly and avoiding jargon and verbosity, have a look at Essays on CIA Writing:

https://s3.documentcloud.org/documents/3894798/CIA-RDP78-009...

Despite being published in 1962, I find it has a lot of advice that still works today.

3abiton · a year ago
The article title is a cheeky stab at the current trend.
bastawhiz · a year ago
I've noticed "on this journey", and it's becoming painful to read. It's infuriatingly common and such a tell.
refibrillator · a year ago
One confounding factor here is the proliferation of autocorrect and grammar "advisors" in popular apps like Gmail. One algorithm tweak could change a lot of writing at that scale.

While the word frequency stats are damning, there doesn’t seem to be any evidence presented that directly ties the changes to LLMs specifically.

daemonologist · a year ago
I think they address this to some degree by checking different year pairings (end of page 2), where the only excess usage they found was of words related to current events (Ebola, coronavirus, etc.), and even then not to the same degree as the 2022-24 pair.

It would be interesting to analyze how well a language model is able to predict each abstract. In theory if the text was largely written by a model then a similar model might be able to predict it more accurately than it would a human-written abstract. (Of course the variety of models and frequency at which they're updated makes this more difficult.)
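A minimal sketch of that predictability check, assuming GPT-2 via Hugging Face transformers as the scoring model; the model choice and the sample sentences are placeholders. Lower perplexity means the model found the text more predictable:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def perplexity(text: str) -> float:
        # Score the text against itself: the cross-entropy loss is the
        # average negative log-likelihood per token, so exp(loss) is the
        # perplexity.
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        return torch.exp(out.loss).item()

    print(perplexity("This study delves into the crucial role of gut microbiota."))
    print(perplexity("We measured serum ferritin in 120 patients with anemia."))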

lioeters · a year ago
> We study vocabulary changes in 14 million PubMed abstracts from 2010-2024, and show how the appearance of LLMs led to an abrupt increase in the frequency of certain style words.

"Delving".. Sounds like the authors might have used an LLM while writing this paper as well.

smelendez · a year ago
It’s a joke. Delve is the first example they show.
klipt · a year ago
I associate the word strongly with Lord of the Rings.

"The AI researchers delved too greedily and too deep, and awoke the BaLLMrog."

CJefferson · a year ago
This is clearly a joke in the paper; they mention "delving" is one of the words with the biggest frequency increase.
esafak · a year ago
It's the Voight-Kampff test for LLMs :)
usef- · a year ago
I'm surprised people make such a big deal out of "delving"; it doesn't seem rare. Maybe it was less common in the US than elsewhere?

(and do I have to deliberately avoid it now, to avoid sounding like AI?)

daemonologist · a year ago
I think it's not an extraordinarily rare word in the US, but it does feel to me more at home in a blog post or business-strategy-ish document than in a PubMed abstract. People are making a big deal of it just because the signal is so strong (25x more common now than in 2022).
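For what it's worth, a ratio like that is simple to reproduce once you have the abstracts. A back-of-the-envelope sketch; the two corpora below are toy stand-ins for the 2022 and 2024 PubMed abstracts, so the printed numbers are meaningless and only the shape of the computation matters:

    import re
    from collections import Counter

    def rel_freq(abstracts):
        # Occurrences per token, so corpora of different sizes are comparable.
        counts, total = Counter(), 0
        for text in abstracts:
            tokens = re.findall(r"[a-z]+", text.lower())
            counts.update(tokens)
            total += len(tokens)
        return {w: c / total for w, c in counts.items()}

    abstracts_2022 = ["We measured serum ferritin in 120 patients with anemia."]
    abstracts_2024 = ["This study delves into the crucial role of serum ferritin."]

    f22, f24 = rel_freq(abstracts_2022), rel_freq(abstracts_2024)
    for word in ["delves", "crucial"]:
        ratio = f24.get(word, 0.0) / max(f22.get(word, 0.0), 1e-9)
        print(word, f"{ratio:.1f}x")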
atarkmani · a year ago
We in the US like to DiG
MattGaiser · a year ago
That’s the joke.

Retr0id · a year ago
It's also possible that humans are starting to sound more like LLMs, due to reading (wittingly or otherwise) more LLM output.
iLemming · a year ago
Whenever I see a comment that talks about programming with the uppercase "S" in "TypeScript", "JavaScript", "ClojureScript", my immediate guess is that the person used an LLM, at least for spellchecking and improving the original text. I predict that LLMs will soon learn to make tiny, insignificant mistakes in order to sound "more human" in writing.
hju22_-3 · a year ago
Guess I'll have to stop writing JavaScript and the like then. :/
leumassuehtam · a year ago
It's easy to notice, even on this website, when a comment starts with "It's worth noting that...". It sounds very LLM-esque.
lolc · a year ago
I still don't know whether there was an actual increase in comments that start with "You're correct". Maybe it's just me noticing it more after ChatGPT came to prominence with its subservient ways.
alehlopeh · a year ago
To be fair, that’s not the only way to start a comment to make it sound LLM-esque.
mcmcmc · a year ago
This just makes me think that now, more than ever, it really pays to develop your own distinctive writing voice. It's not just a way to make yourself stand out: given how much the way you write can influence your thought process, I worry that all the students ChatGPTing their way through language arts classes will ultimately be more susceptible to groupthink and online influence campaigns.
kccqzy · a year ago
Developing such a voice really requires you to read other writers minimally, lest you be influenced by their voices. I think that outside of news, I would be perfectly happy reading only pre-2022 publications. I just don't know how sustainable that is.
markerz · a year ago
I disagree; I think a successful writing voice comes from reading a lot of different styles and picking out the elements that really speak to you.
electrodank · a year ago
I don’t agree with this. Quick assessment: what makes someone more susceptible to groupthink? To propaganda? Is it the way they write? That doesn’t sound right. It is not an LLM/ChatGPT-borne ailment. So I would not paint these tools as such a boogeyman.
utkuumur · a year ago
I don't see why so many people are complaining about this. Not everyone has mastered English, unfortunately. I am especially weak at writing papers and, to be honest, find it taxing. I love research, but after getting results, turning them into a paper is not fun. I edit almost everything important I write, like emails and papers, with LLMs, because even though the content is good, my writing feels very bland and lacks transitions. I believe many people do this, and it actually helps you learn over time. However, what you learn is to write like an LLM, since we are basically being supervised by the LLM.

silver_silver · a year ago
It’s possible this is caused by the editors rather than the authors.

An old partner edited papers for a large publisher (largely written by non-native speakers and already heavily machine-translated) and would almost always use ChatGPT for the first pass when extensive changes were needed.

She was paid by the word and also had a pretty intense daily minimum quota, so ChatGPT was practically required to get enough done to earn a liveable wage and avoid being replaced by another remote "contractor".

cortesoft · a year ago
This seems strange… this is an old partner who had the job long enough to know that the only way to hit the quota was to use ChatGPT, which came out less than two years ago?

It seems strange that something that experimental became essential that quickly.

silver_silver · a year ago
The company is a bit predatory in that they typically employ editors from poorer countries, so her difficulty was more about making enough from it than about meeting the quota; but the quota was strict enough that you would very quickly lose the position if you kept missing it. I think the situation before ChatGPT was that they employed people who worked very long hours and didn't earn very much. Apparently turnover was quite high.
noizejoy · a year ago
> this is an old partner

“Old” might be intended to mean “former” or “ex”, which could be as recent as merely weeks ago (i.e., an ESL / lost-in-translation issue).

userabchn · a year ago
I am an associate editor of a journal and I have suggested that the journal strongly encourage authors to pass their papers through an LLM before submitting if they think it might need it. A large fraction of the submissions we receive have terrible grammar and unnatural phrasing. The only options for editors are to reject or send out for review. Rejecting because of the grammar seems overly harsh, but I know that it is a lot more laborious to review a paper with bad grammar. The easiest fix seems to be to try to ensure that the grammar is reasonable before papers are submitted.
progval · a year ago
Do editors even change the text in academic publishing?
Maken · a year ago
They do in journals that charge such an exorbitant rate that it would look bad if they did not try to justify it.
silver_silver · a year ago
Most of the papers she worked on were either translations or not written by a native speaker.
JohnKemeny · a year ago
I have never had an editor change a single word in my papers.