Readit News
crazygringo · 2 years ago
The actual article generating all 190 citations:

https://link.springer.com/article/10.1007/s12195-022-00754-8

> "Author's note: This article was written by the ChatGPT chatbot, in response to prompts from MK. That human-chatbot conversation is presented here, without editing."

tokai · 2 years ago
Embarrassing that they published that. Should not have been more than a blog post.
epolanski · 2 years ago
I actually think the opposite.

This underlines several things at once: on the one hand, the power of LLMs applied to biology, and on the other, the future of scientific papers' integrity.

WolfOliver · 2 years ago
[Author’s note: These are not real references, unfortunately.]
isaacfrond · 2 years ago
That's actually a very interesting paper.
daveguy · 2 years ago
This is one paper with 190 citations and one paper with no citations.

The paper is only 2 pages in a prompt-response format, unedited. The prompts are specific and ask for paragraphs about chatbots, AI, and plagiarism. The author's note at the end, after ChatGPT was asked for references and gave several, is:

> Author’s note: These are not real references, unfortunately.

Edit: One more point -- Even though it was published in Cellular and Molecular Bioengineering it really doesn't have to do with either. It's a quick read.
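Fabricated references with well-formed DOIs are exactly what a syntax check can't catch. As a minimal sketch (a hypothetical helper, not anything from the thread or the paper), here's what shape-based checking looks like in Python, and why it isn't enough:

```python
import re

# Heuristic only: a DOI starts with "10.", a numeric registrant code,
# a slash, then a suffix. This flags malformed strings, but a fabricated
# reference with a well-formed DOI still passes; real verification needs
# a resolver lookup (e.g. against doi.org), which this sketch omits.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(s: str) -> bool:
    """Return True if s is syntactically shaped like a DOI."""
    return bool(DOI_PATTERN.match(s.strip()))
```

The DOI of the paper above, `10.1007/s12195-022-00754-8`, passes, but so would any invented string of the right shape, which is the failure mode the author's note admits to.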

cod3rboy · 2 years ago
Is it right to recognize ChatGPT as a scholar? Shouldn't we just treat it like any other software tool?
liliumregale · 2 years ago
We absolutely should treat it just like any other software tool.
contrarian1234 · 2 years ago
- software tools are cited (you often cite a paper describing the tool). If a tool generates a value and a bug is later discovered, it's important that the tool was cited.

- one purpose of a citation is to make a clear distinction which work/words are your own and which are not. Are you making a statement because it's based on past work, your own inference, or the hallucination of a LLM? With ChatGPT it's easy to get confused

- there is also the more general issue of academic integrity. You shouldn't submit other people's/machine's words as your own

Maybe more correctly ChatGPT should be on the authors' list ;)

Cenk · 2 years ago
They’re all from this paper, which is written by ChatGPT with some prompts from the author: https://oa.mg/work/10.1007/s12195-022-00754-8
ziyao_w · 2 years ago
190 citations sounds impressive, but considering they mostly come from possibly the two fields that gather citations the quickest - biomedical sciences and artificial intelligence - this shouldn't be too surprising?

Shalosh B. Ekhad, OTOH, is the real deal -

https://sites.math.rutgers.edu/~zeilberg/ekhad.html

https://en.wikipedia.org/wiki/Doron_Zeilberger

candiddevmike · 2 years ago
Is this the beginning of the LLM ouroboros? Does ChatGPT do anything different with training data it produced?
ethanbond · 2 years ago
How would it know? If anything this is one of the only places where it’s not an ouroboros because of the citations.

But yeah, I have to imagine LLM-produced material is already on its way back into the next models. I wonder how this problem is perceived among the people building these systems. Given that they're not yet near ASI or AGI, it's got to be close to an existential issue, no?

Maybe not “your company will collapse,” but “you will not be able to get further improvements.”

Would love to hear thoughts from someone who is actually addressing this (or knows why we don’t need to).

Devasta · 2 years ago
There is a demand for low-background steel, steel produced before the nuclear bombs were dropped, for use in Geiger counters and the like. It's usually salvaged from the wrecks of sunken WW1 ships, as that is the only way to guarantee no radioactive contamination.

The same will happen to data sets from pre-2022, everything beyond that will be rapidly polluted with AI nonsense.

candiddevmike · 2 years ago
In this instance, it would know because of the citations.
dkjaudyeqooe · 2 years ago
A paper produced by ChatGPT is plagiarism taken to its logical extremity.
WolfOliver · 2 years ago
I wrote down some thoughts on this topic a few months ago:

https://www.monsterwriter.app/chatgpt-in-academic-writing.ht...