Like probably many people here, I still remember having to find facts in books in libraries, before the internet made that skill mostly redundant. Then, as a student, I remember having to put together facts from various (internet) sources into a coherent narrative. Now chatbots can just generate text, and that skill seems less valuable.
I use both the internet and GenAI extensively now. But I feel that having gone through the "knowledge work" activities without the crutches gives me an ability to assess the correctness and plausibility of internet sources and AI that kids who grow up using them constantly don't have.
I feel quite privileged to be in that position, that I wouldn't be in had I been born 10 or 20 years later. I also feel sorry for kids these days for not having the opportunity to learn things "the hard way" like I had to. And I feel extremely snobbish and old for thinking that way.
It’s something reflected in the conversations I have with my academic friends. I’m told that every essay is written in the same “voice” and that, although they are usually a simulacrum of an academic paper, they say nothing. The sad part comes when we reflect that these students are not learning the deep thinking skills that come with academic writing.
There are two views of writing: one sees it as the production of a literary artifact that has value in its own right and stands alone as the embodiment of a complex idea; the other sees it as a process and tool for thought, where the literary artifact merely represents the cognitive work that went into its production. From the latter perspective, the bulk of the value is not derived from the output of the process of writing, but rather from the understanding and insight that was gained during its production - "it's the journey, not the destination".
With generative AI we are now shortcutting directly to the destination while eliding all the knowledge and understanding we are supposed to be gaining on the way. It's deemed sufficient to merely produce and exchange the symbols of understanding without needing to possess any real underlying wisdom (the first view above), because it's through the exchange of these abstract symbols, irrespective of whether or not there is anything behind them, that we can play and win certain social games.
This is merely the natural continuation of trends that began during the dawn of the Internet era, when we started to consider pixels on a screen an accurate map of reality.
Then they’re using gen AI wrong. You can dump your research into the context window and ask it to outline the material. What you get out is a well-organized story with a beginning, middle, and end, incorporating all the relevant concepts from the research. You can then fill in the details based on your research. Gen AI can be used to help students think and write more clearly.
They’re going to use it. Give them the training and guidance to use it correctly.
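For what it's worth, that workflow can be scripted in a few lines. This is a minimal sketch, assuming the official OpenAI Python client; the model name, file name, and prompt wording are illustrative, not a recommendation:

    # Sketch: outline research notes with an LLM, then write the essay yourself.
    # Assumes OPENAI_API_KEY is set; model and file names are placeholders.
    from openai import OpenAI

    client = OpenAI()

    with open("research_notes.txt") as f:
        notes = f.read()

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Organize these research notes into an essay outline "
                        "with a beginning, middle, and end. Use only the "
                        "material provided; do not invent facts."},
            {"role": "user", "content": notes},
        ],
    )

    print(response.choices[0].message.content)  # the outline the student then fleshes out

The point being: the model organizes, the student still does the writing and the thinking.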
I have a related anecdote. When I grew up, we had page and/or word requirements on essays. I was always under the requirement after I wrote everything I needed to, so I learned to pad my writing to hit the requirement.
Terse prose was a "lost art" even in my generation (millennial-ish) and I'm not surprised that it has gotten worse.
I also lived through these phases and it makes me feel very, very much the same.
On the other hand I cannot help thinking that this is similar to the arguments brought forward when the internet was new. How could correctness and plausibility be established if you don't have established trustworthy institutions, authors, and editors behind everything? And yet it turned out mostly fine. Wikipedia is alive, despite its deficiencies; Encyclopedia Britannica, not so much. So, is it only old people's fear?
> On the other hand I cannot help thinking that this is similar to the arguments brought forward when the internet was new. How could correctness and plausibility be established if you don't have established trustworthy institutions, authors, and editors behind everything?
Not long ago I had the same viewpoint as you! But thinking back now — it dates me, but I definitely lived a childhood without Internet access — probably the optimistic belief before our age of "misinformation" was that, in the marketplace of ideas, the truth usually wins. Goes along with "information wants to be free" — remember that slogan?
For those of us who grew up learning things "the hard way", so to speak, that made perfect sense: each of us should have the capability to discern what is good or bad as individual, independent thinkers. Therefore, for any piece of information, there should be a high probability, in the aggregate, that it is classified correctly as to its truth and utility.
I think that was, to some extent, even a mainstream view. Here's what Bill Clinton said in 2000, advocating to admit China to the WTO: (https://archive.nytimes.com/www.nytimes.com/library/world/as...)

> Now there's no question China has been trying to crack down on the Internet. Good luck! That's sort of like trying to nail jello to the wall. (Laughter.) But I would argue to you that their effort to do that just proves how real these changes are and how much they threaten the status quo. It's not an argument for slowing down the effort to bring China into the world, it's an argument for accelerating that effort. In the knowledge economy, economic innovation and political empowerment, whether anyone likes it or not, will inevitably go hand in hand.
I would say that what we have learned in the roughly 20 years since is that, in the marketplace of ideas, the most charitable thing we can say is that the memes with the "best value" win. "Best value" does not necessarily mean the highest quality; rather, there can be a trade-off between cost and product quality. Clearly ChatGPT produces informational content at a pretty low cost. The same can be said for junk food compared to fresh food: the overall cost of the former is low. Junk food does not actively, directly harm you, but you are certainly better off not eating too much of it. It is low quality but has been deemed acceptable.
There are examples where we can be less charitable, of course. We all complain about dangerous, poorly manufactured items (e.g. electronics with inadequate shielding) listed on Amazon, but clearly people still buy them anyway. And then, in the realm of politics, needless to say, there are many actors bent on pushing memes they want you to have regardless of their veracity. Some people in the marketplace of ideas "buy" them owing to network effects (e.g. whether they are acceptable according to political identity) in the same way that corporations continue to use Microsoft Windows because of network effects. We would also probably say, nowadays, that Clinton has ultimately been proven wrong by the government of China.
Survival of the "fittest" memes if you like: evolution does not make value judgements.
If you ask me, maybe our assumption of decentralized truth-seeking was itself not an absolute truth to begin with. But it took years to unravel, as humans, collectively speaking, let atrophy from disuse the research and critical thinking skills we had before technology dropped the barriers of entry to producing and consuming information.
You're probably at an advantage now, but I think the effort/reward ratio hardly pays off for the newer generation. They'll learn how to deal with this with less effort & time.
Remember that the new generation doesn't just have different tools; they're also much less experienced & mature, just like we were. You can only really compare yourself to them in the future, when they're at the place you're at now.
I think that chatbots will lead to less information being available, not more, because they make writing information down feel more useless and demotivating. So what we are heading toward is less written information available overall.
And when chatbots don't know, they lie.
Kids will learn the hard way. Possibly harder than we did.
The problem is that gen AI has no notion of fact and will just as happily, confidently, and incorrectly assert falsehoods. Teaching an entire generation of students with gen AI and no verification of facts will be a disaster.
Makes me wonder how someone would have found the answers before the printing press. Write to or visit the most knowledgeable person you could get an introduction to or who would answer you? Then go to their contacts or recommendations? Then draw a conclusion from all the responses?
And even this scenario assumes a working postal system and the availability of paper and pen. But if you didn't have those, you may not have heard of Greek to even ask the question.

Few (if any) "laypeople" did their own research, as you and I understand it today.
> And I feel extremely snobbish and old for thinking that way.
Old people usually make correct assessments given their knowledge and experience. They know how to maximize their expected gains and play it safe. The problem is, real life often rewards those who take risks and make seemingly wrong decisions that later turn out to be good.
For example, as a kid I loved playing video games, while my grandma yelled at me for not wanting to help her with work on the farm. She had absolutely no way of predicting that playing video games, which were essentially just toys, would teach me the right skills at the right time, allowing me to move up the social ladder. At the time, forcing the god damn lazy kid to milk the god damn cow was the sensible thing to do.
People who read academic publications and do not see much of a fundamental difference between tattoos that contain information like a barcode and RFID chips that could contain the same information:

https://news.rice.edu/news/2019/quantum-dot-tattoos-hold-vac...

Or perhaps people who have been confused by conspiracy sites like npr.org and make the mental leap that this would be used for other purposes:

https://www.npr.org/2018/10/22/658808705/thousands-of-swedes...
That there are lots of people who believe in 5G Bill Gates vax chips is itself fake news. It is well-poisoning to pre-empt criticism of billionaires with too much power and free time to meddle in African population growth and pandemic response, supported by "smart" people who want to feel good and trust the science on 5G safety.
There are Microsoft patents for microchips that track body activity and reward it in cryptocurrency, and subsidiaries that wanted to microchip vaccine passports into the hands of immigrants.
I've seen this happen, too, including student incredulity that ChatGPT can be wrong, and recalcitrance when guided to find proper sources. Up to the point of arguing for a higher grade based on the correctness of LLM output.
One thing is correctness; another is that GPTs can output long passages of copyrighted work verbatim, and users risk unknowingly submitting plagiarized work.
Growing up we heard, ad nauseam, that "wikipedia is not a reliable source". People just need to state the same thing about LLMs: they aren't reliable, but they can potentially point you to primary sources that *are* reliable. Once the shininess of the new toy wears off, people will adjust.

Wikipedia is as good as anything else.

As an example, I frequently use Wikipedia to read about history, computer science topics (e.g. how to implement an algorithm), or scientific topics.

The exception is current events, but even then, I suspect that Wikipedia is not any more biased than the news.

I'm open to having my mind changed, though.
Encyclopedias – including Wikipedia – are certainly not acceptable sources for college-level work. They are tertiary literature, which can provide an overview to someone trying to get a toehold into a subject, and which can hopefully point them toward primary and secondary sources. But tertiary sources are not typically allowable citations for college research.

I will say that wikipedia, as things stand, is way more accurate than chatgpt. So much so that comparing them doesn't even make sense to me.
Well even before Wikipedia… remember calculators being banned? I think you’re right, we will adjust. Curriculum development will start to include new checkpoints and controls.
And the LLMs are still improving at a brisk rate. If they were outperforming teachers by the end of the decade that'd be well within expectations. Anyone pretending that human authority figures routinely get things right is defining correctness using circular logic.
Current LLMs lack introspection, plausibility checking, consultation of external sources, and belief updating in the presence of new evidence. You'd need all of these to replace human teachers, and it's not clear that the next-token-prediction paradigm can ever emulate them reliably. So "outperforming teachers", while not impossible, is a very optimistic expectation, as it requires more than mere improvement on existing methodology.
The problem with not writing yourself isn't skipping the writing itself; it's skipping the thinking that is a prerequisite for writing.
Now of course, like any tool, this one can be used without falling into that trap, but people are lazy, and the truth is that if they don't absolutely have to do it themselves, most people won't.
I would have thought this problem was easy to solve: "Yes, look it up, but remember your source, there is a lot of bullshit on the internet. Especially don't trust AI tools, we know those often return nonsense information."
(Actually, doesn't ChatGPT have a disclaimer right next to the prompt box that warns against incorrect answers?)
So I'm more surprised (and scared) that students don't just use LLMs for sourcing but are also convinced they are authoritative.
Maybe being in the tech bubble gave a false impression here, but weren't hallucinations one of the core points of the whole AI discourse for the last 1.5 years? How do you learn about and regularly use ChatGPT, but miss all of that?
As we grow older, we learn a lot of facts that we can use to test the correctness of ChatGPT and identify its hallucinations because we have the knowledge to do so. However, a young person who is just starting to understand the world and gather knowledge might not have enough information to notice these hallucinations.
Yup, no question here why he believes ChatGPT's initial statements. I'm more baffled that when the teacher corrects him, he goes on and defends ChatGPT.
ChatGPT says "ChatGPT can make mistakes. Check important info." directly under the prompt box. If people will blindly trust a source that explicitly states that it isn't a reliable source, then they've got much bigger problems than AI.
Are these tools being promoted and sold as fallible implements? Or are they being hyped as super-human intelligence? Which takeaway is an impressionable child going to latch onto? One who wasn’t in on the last 1.5 years of discourse?
You could probably make the same argument for the search bar vs. peer-reviewed publications. Of course, the search bar (which is also AI, by the way) can help you get to the peer-reviewed publications. But the same is true for ChatGPT. The problem is that ChatGPT sounds like it is presenting objective facts. But maybe the lesson here is that just because something sounds right doesn't mean it is right, and that is something that should be taught in school. Of course, that undermines the function of school to produce obedient citizens.
It also undermines the way schools work to teach kids. In order to not have to explain everything, almost all lessons are mostly teaching you some axioms, even if they really are disputed, or have caveats, etc. Good teachers make clear where there is an axiom, and where something is just being simplified or assumed for the sake of saving time.
I am a product of the German school system; I'd say I wasn't indoctrinated too much, so it's not entirely broken, but with these new """tools""" maybe we need a reform anyways.
> In order to not have to explain everything, almost all lessons are mostly teaching you some axioms, even if they really are disputed, or have caveats, etc. Good teachers make clear where there is an axiom, and where something is just being simplified or assumed for the sake of saving time.
What exactly do you mean by the word “axiom” here?
It's AI in the older sense of Machine Learning, not in the currently widespread sense of an LLM, which is the source of the problem that the author is discussing.
I had that same argument; the teacher who told me not to trust non-peer-reviewed articles ended up flooding her Facebook wall with pro-Brexit lies a decade later. Turns out critical thinking is not outsourcing your thinking to a third party, be it a peer-reviewed journal, Google search results, or ChatGPT.
"not trust non peer reviewed articles" - this is such naive advice. It is not black and white, peer reviewed articles only increase chance that information included in article is legit because it was verified in some formal process. I wonder how many times people giving such simple advices mention how often peer reviewed articles are retracted or can't even be replicated and how this vary accross disciplines.
A good way to learn this is to build something. Woodworking is an awesome way to quickly find out that looking, feeling, and even taking a measurement isn't enough; you need to take two :)
The great problem with ChatGPT is that it's a sycophant and aims to please.
If you ask it about something it doesn't know, right then and there, it will concoct a fiction for you. It won't say "I don't know," or "I can't help with that."
If you coach it to respond to something in a certain way, it'll respond that way for you as its top priority.
If you ask it to review a text, it'll usually find a way to give you at least a 7 or 8 out of 10. (Though, interestingly, rarely a 10/10 score. You can upload excerpts from some of the great works of literature and philosophy and see ChatGPT give them an 8/10, just as it gives an 8/10 to your essay or blog post.) Practically the only way to get a halfway critical response is to add the words "be critical" to your prompt.
A more circumspect and less obsequious ChatGPT would solve a lot of problems.
ChatGPT is the search engine equivalent of those streaming sites that won't directly admit they don't have the show you explicitly searched for, but will instead show you a list of other shows they think will placate you, and pretend they're indistinguishable.
> “If you ask it about something it doesn't know, right then and there, it will concoct a fiction for you. It won't say "I don't know,"”
It doesn’t know that it doesn’t know. Texts it was trained on rarely go “On the History of the Greek Language, by Andros Dimos; ahem what is the history of the Greek language? I don’t know. The end.”
Hmmm... well... Let me give you an example. The other day I asked for a summary of the short story "The Palace at Midnight" by Robert Silverberg.
GPT-4o responded: "The Palace at Midnight" by Robert Silverberg is a science fiction short story that was first published in 1980. It is part of Silverberg’s "Majipoor" series, but stands alone in terms of its setting and characters, distinct from the expansive world-building typical of the series. The story is set in a dystopian future in India and revolves around themes of memory, loss, and the haunting allure of the past.
The narrative follows the protagonist, Edward, and his group of friends who are survivors of a nuclear war that has devastated much of the world. . .
All of which was complete nonsense. "The Palace at Midnight" is a short story of a few pages that has nothing to do with Majipoor, was published in 1981, and was set in San Francisco.
It doesn't know -- it knows that it doesn't know -- so it just makes something up that seems superficially credible. Because it aims to please and be helpful; altogether too helpful.
Interesting. Copilot, on the other hand, is more like a conceited, passive-aggressive brat who uses some polite phrases but aborts conversations if you contradict it too much.
The only smooth conversations with Copilot are the ones where you allow it to regurgitate "facts" and act in a submissive manner.
What's so hard about following the Wikipedia model of citing sources? Even 20 years ago, it was clear to college students that they cannot cite "Wikipedia", but they can cite the academic literature it referenced.

Relying on kids to do cross-referencing and deeper fact checks into everything they ask an LLM is just not going to happen at scale.
It would take seconds to figure out if the ChatGPT citation is real or made up.
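To illustrate: for an academic citation, at least, the check can even be scripted against Crossref's public REST API. A rough sketch (the citation string is just an example; in practice you'd paste in whatever the chatbot gave you):

    # Sketch: check whether a citation an LLM produced matches anything real.
    # Uses Crossref's public API; no API key needed.
    import requests

    citation = "Attention Is All You Need, Vaswani et al., 2017"

    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()

    # Print the closest matches; no plausible hit means the citation is suspect.
    for item in resp.json()["message"]["items"]:
        title = item.get("title", ["<no title>"])[0]
        year = item.get("issued", {}).get("date-parts", [[None]])[0][0]
        print(f"{title} ({year}) - DOI: {item.get('DOI')}")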
Maybe it’s the opposite; kids need to learn that LLMs can bullshit just like every other person and institution can bullshit, and the most important skill they can have is verification of information, no matter where it’s coming from.