Readit News
misja111 · 6 months ago
"He (the author) did not answer our questions asking if he used an LLM to generate text for the book. However, he told us, “reliably determining whether content (or an issue) is AI generated remains a challenge, as even human-written text can appear ‘AI-like.’ This challenge is only expected to grow, as LLMs … continue to advance in fluency and sophistication.”

Lol, that answer sounds suspiciously like it was LLM-generated as well...

DebtDeflation · 6 months ago
It's true that "AI detection algorithms" are not particularly reliable.

It's also true that if you have fake CITATIONS in your work, such algorithms aren't necessary to know the work is trash - either it was written by AI or you knowingly faked your research, and it doesn't really matter which.

haffi112 · 6 months ago
You would think that Springer did their due diligence here, but what is the value of a brand such as Springer if they let this AI slop slip through the cracks?

This is an opportunity for brands to sell verifiability, i.e., that the content they are selling has been properly vetted, which was obviously not the case here.

WillAdams · 6 months ago
Back when I was doing academic publishing I'd use a regex to find all the hyperlinks, then a script (written by a co-worker, thanks again Dan!) to determine if they were working or not.

A similar approach should work w/ a DOI.
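For a rough idea, here's a minimal sketch of that kind of checker in Python; the regexes and helper names are my own assumptions, not the original script:

```python
import re
import urllib.request

def extract_links(text):
    """Pull hyperlinks and bare DOIs out of a manuscript's text.

    The regexes are deliberately naive: match anything non-space,
    then strip common trailing punctuation, rather than trying to
    model every citation style.
    """
    urls = [u.rstrip('.,;)') for u in re.findall(r'https?://\S+', text)]
    dois = [d.rstrip('.,;)') for d in re.findall(r'\b10\.\d{4,9}/\S+', text)]
    return urls, dois

def is_reachable(url, timeout=10):
    """True if the URL answers with a non-error HTTP status."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

# A DOI can be checked the same way via the doi.org resolver:
#   is_reachable("https://doi.org/" + doi)
```

Of course this only tells you the link resolves to *something*, not that it resolves to the work being cited.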

RossBencina · 6 months ago
In the past I've had GPT4 output references with valid DOIs. Problem was the DOIs were for completely different (and unrelated) works. So you'd need to retrieve the canonical title and authors for the DOI and cross check it.
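That cross-check could be sketched against the Crossref REST API (the `api.crossref.org/works/` endpoint is real; the fuzzy-match threshold and helper names are my own choices):

```python
import difflib
import json
import urllib.parse
import urllib.request

def crossref_metadata(doi, timeout=10):
    """Fetch the canonical record for a DOI from Crossref."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)["message"]

def titles_match(claimed, canonical, threshold=0.8):
    """Fuzzy-compare the title the reference claims against the
    canonical title registered for the DOI. The 0.8 threshold is
    an arbitrary starting point, not a calibrated value."""
    ratio = difflib.SequenceMatcher(
        None, claimed.lower().strip(), canonical.lower().strip()
    ).ratio()
    return ratio >= threshold
```

A reference whose DOI resolves but whose title fails this comparison is exactly the failure mode described above: a real DOI attached to the wrong work.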
bumby · 6 months ago
Not all journals require a DOI link for each reference. Most good ones do seem to have a system to verify the reference exists and is complete; I assume there’s some automation to that process but I’d love to hear from journal editorial staff if that’s really the case.
cess11 · 6 months ago
Why would one think that? All of the big journal publishers have had paper mills, fraudsters, and endless amounts of "tortured phrases" under their names for a long, long time.
ludicrousdispla · 6 months ago
>> LLM-generated citations might look legitimate, but the content of the citations might be fabricated.

Friendly reminder that the entire output from an LLM is fabricated.

bryanrasmussen · 6 months ago
probably a better word here would be fabulated.

on edit: that is to say the content of the citations might be fabulated, while the rest is merely fabricated.

leereeves · 6 months ago
I didn't realize "fabulated" was a word. TIL, thank you. But in this case it doesn't sound like the right word; it means: "To tell invented stories, often those that involve fantasy, such as fables."

I think "confabulated" is more appropriate: "To fill in gaps in one's memory with fabrications that one believes to be facts."

xg15 · 6 months ago
Technically yes, but not all of it has lost its grounding in reality?
mapleoin · 6 months ago
You could say that about Alice in Wonderland.
amelius · 6 months ago
Fabricate is a word with ambiguous meaning. It can mean "make up", but also simply "produce".
ktallett · 6 months ago
I think in this situation both meanings apply. It produced made-up content.
dwayne_dibley · 6 months ago
I fabricated this reply out of my brain.
Isamu · 6 months ago
One of the potential uses of AI that I have most wanted is automated citation lookup and validation.

First check if the citation references a real thing. Then actually read and summarize the referenced text and give a confidence level that it says what was claimed.

But no, we have AI that is compounding the problem. That says something about misaligned incentives.
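The two-step check described above can be sketched as a small pipeline. The function names and the 0.5 support threshold are my own assumptions, and `resolve`/`judge` are injected stubs standing in for a real lookup (e.g. a DOI resolver) and a real reader (an LLM or a human):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CitationCheck:
    exists: bool
    supports_claim: Optional[bool]  # None when the text can't be retrieved
    confidence: float

def validate_citation(citation, claim, resolve, judge):
    """Two-step validation: does the cited work exist, and does
    its text actually support the claim made for it?

    `resolve` maps a citation to its full text (or None if it
    can't be found); `judge` scores how well that text supports
    the claim, on a 0..1 scale.
    """
    text = resolve(citation)
    if text is None:
        return CitationCheck(exists=False, supports_claim=None, confidence=0.0)
    score = judge(text, claim)
    return CitationCheck(exists=True, supports_claim=score >= 0.5,
                         confidence=score)
```

The hard part, as the replies below note, is that the `judge` step is itself the kind of task language models are least trustworthy at.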

pyrale · 6 months ago
> One of the potential uses of AI that I have most wanted is automated citation lookup and validation.

Also one of the things AI is likely the least suited for.

The best I can imagine an AI doing is offering sources for you to check for a given citation.

Isamu · 6 months ago
>Also one of the things AI is likely the least suited for.

I agree, if we are using the current idea of AI as language models.

But that’s very limiting. I’m old enough to remember when AI meant everything a human could do. Not just some subset that is being deceptively marketed as potentially the whole thing.

dandanua · 6 months ago
We are approaching publishers' heaven, where AI reviewers review AI-written books and articles (with AI editors fixing their style), allowing publishers to keep collecting billions from essentially mandatory subscriptions from institutions.
flohofwoe · 6 months ago
It's fine, because human readers will also be replaced with AI which produce a quick summary ;)
rbanffy · 6 months ago
Or answer specific questions when needed.
veltas · 6 months ago
Unfortunately not surprising, the quality of a lot of textbooks has been bad for a long time. Students aren't discerning and lecturers often don't try the book out themselves.
gammalost · 6 months ago
I agree. I feel that Springer is not doing enough to uphold their reputation. One example is a book on RL that I found [1]. It is clear that no one seriously reviewed the content of this book. Despite its clear flaws, they are charging 50+ euros for it.

https://link.springer.com/book/10.1007/978-3-031-37345-9

WillAdams · 6 months ago
Yeah, ages ago, when I was doing typesetting, it was disheartening how unaware authors were of the state of things in the fields they were writing about --- I'm still annoyed that when I pointed out that an article in an "encyclopedia" on the history of spreadsheets failed to mention Javelin or Lotus Improv, it was not updated to include those notable examples.

Magazines are even worse --- David Pogue claimed in one of his columns that Steve Jobs used Windows 95 on a ThinkPad, when a moment's reflection, and a check of the approved models list at NeXT, would have made it obvious it was running NeXTstep.

Even books aren't immune. A recent book on a tool cabinet held up as an example of perfection:

https://lostartpress.com/products/virtuoso

misspells H.O. Studley's name as "Henery" on the inside front cover, contains many other typos, myriad bad breaks, and pedestrian typesetting which poorly presents numbers and dimensions (failing to use the multiplication symbol or primes). A duplicated photo, which they are unwilling to fix, is enshrined in the excerpt they publish online:

https://blog.lostartpress.com/wp-content/uploads/2016/10/vir...

where what should be a photo of an iconic pair of jeweler's pliers on pg. 70 is replaced with that of a pair of flat pliers from pg. 142. (Any reputable publisher would have done a cancel and fixed that.)

Sturgeon's Law: 90% of everything is crap. And I would be a far less grey, far younger person if I had back all the time and energy I spent fixing files mangled by Adobe Illustrator, or where the wrong typesetting tool was used for the job (the six weeks spent re-setting a book which the vendor had set in Quark XPress when it needed to be in LaTeX were the longest of my life).

EDIT: by extension, I guess it's now 90% of everything is AI-generated crap, 90% of what's left is traditional crap, leaving 1% of worthwhile stuff.

cess11 · 6 months ago
What reputation would that be?

It was, in part, Springer that enabled Robert Maxwell.

antegamisou · 6 months ago
Understandably I'm becoming a bit dogmatic, but I'll say it again: AIMA/PRML/ESL are still the best reference textbooks for foundational AI/ML and will be for a long time.

AIMA is Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig

PRML is Pattern Recognition and Machine Learning by Christopher Bishop.

ESL is The Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani, and Jerome Friedman.

techas · 6 months ago
I saw this recently in some conference abstracts. I think it is just AI-generated content. References look real but don't exist.
PicassoCTs · 6 months ago
To imagine this driving a singularity, when meanwhile it's putting the final nail in science's coffin, together with paper spam and declining research rewards. They are going to hang us tech-priests from the lamp-posts when the consequences of this bullshit artistry hit home.
Vinayak_A_B · 6 months ago
If I make a citation verifier, will the conference/journal people pay for it? First verify that the citation is legit, i.e. the paper actually exists; after that, another LLM reads the cited paper and gives a rating out of 10 for whether it fits the context or not. [ONLY FOR LIT SURVEY]
SiempreViernes · 6 months ago
No, they aren't paying the reviewers in the first place.
PeterStuer · 6 months ago
Given that the existence of a reference is fairly trivial to check, I'd wager the authors would not care enough to pay for this. As for 'fit', this is very much in the eye of the beholder, and a paper can be cited for the most trivial part. Overcitation is usually not seen as a problem. Omitting citations the reviewer considers 'essential', often from their own lab or circles, is seen as non-negotiable.

So the better 'idea' would be to produce a CYA citation assistant that, for a given paper, adds all the remotely plausible references for all the known potential reviewers of a journal or conference. I honestly think this is not a hard problem, but I doubt even that could be commercialized beyond Google Ads monetization.

codewench · 6 months ago
So given that the output of an LLM is unreliable at best, your plan is to verify that a LLM didn't bullshit you by asking another LLM?

That sounds... counterproductive

thoroughburro · 6 months ago
You’re offering to double-check measurements made with a bad ruler by using that same ruler.