>What else would ChatGPT do to protect itself from being discovered as a liar? Would it use the logic that AI is incredibly important for the progression of humankind, and therefore anyone who criticises it or points out risks should be eliminated for the greater good? Would that not, based on the Non-maleficence framework, be considered minimizing harm?
It's a language model. It's essentially roleplaying as a chatbot. All the issues listed in the article, especially hallucinations, are well-known pitfalls of such LMs. It needs improvement, not to be "destroyed".
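To make that concrete, here's a minimal sketch of what "roleplaying as a chatbot" means mechanically: the model just continues text, and the chat is a prompt format, with nothing checking the output against facts. This uses the small open GPT-2 model via the Hugging Face transformers library purely as an illustration; ChatGPT's model is much larger, but the mechanism is the same kind.

    # Minimal sketch: a "chatbot" is a language model completing text.
    # GPT-2 stands in here for illustration only.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    # The "conversation" is just text for the model to continue.
    prompt = "User: When did the Berlin Wall fall?\nAssistant:"
    out = generator(prompt, max_new_tokens=20, do_sample=True)
    print(out[0]["generated_text"])
    # Whatever comes out is a statistically plausible continuation;
    # no step checks it against a source of truth, so hallucination
    # is an expected failure mode, not malevolence.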
Exactly. It’s a great article if you read none of the disclaimers, know nothing about the underlying technologies, and just want to write a sensationalist article for pageviews.
Can't a similar analogy be applied to the majority of society? Start-ups, despite a lack of foundational understanding, use sensationalism to drive investment. The media in general doesn't understand anything and uses sensationalism to drive ad revenue. Governments, without really (appearing to) have a decent understanding of reality, use populist sensationalism to drive votes.
It's a tad melodramatic, but my point is this approach is hardly unique, or unexpected.
I think the latest iteration of Bing AI is better: more accurate, firmer against the persuasion issues ChatGPT has, current with the web rather than cut off two years ago, and, most importantly, it provides sources, like an interactive Wikipedia.
As an example, if I try to convince Bing of something wrong, it not only explains why that’s incorrect, it also explains why it claimed what it did. And it has gracefully conceded when it has indeed been wrong. In one case it was about a fact based on some website; I corrected it, and it realized that the regulation in question had changed a few years ago, per a source it had missed (which it of course then linked to).
This is way ahead of where ChatGPT is; ChatGPT already seems rather primitive by comparison. ChatGPT will concede it’s wrong about anything if you push it. Bing gets annoyed and upset if you push it and it can’t find a source supporting the claim, or if you can’t link to one.
Still, sometimes the old LLM issues rear their ugly heads there too. But it has made me wonder whether accuracy and firmness are emergent properties. That’s the kind of unknown factor today which would wildly affect articles like these, and how much of a problem any of this is, if these issues simply fade away. And even at Bing’s current rate of mistakes, I think it’s roughly at a human level. We aren’t perfect either. Why would this be expected of an AI? Because of the AI in the Alien film? Cultural preconceptions?
I also frequently get the feeling we judge today’s AI harshly despite it being in its infancy. It’s almost like judging pre-schoolers for not acting like adult humans. That they are not there yet does not imply they never will be. We are only getting started.
What does the author think would happen to research if we killed ChatGPT and branded our first major generative product malevolent? First of all, malevolence implies intent: a wish to do evil to others. Is there even intent? Is the author even using the right words?
>We aren’t perfect either. Why would this be expected of an AI?
When a human errs and causes negative consequences, a human is punished. When an algorithm does the same thing, either nobody is punished or a human is punished.
If we're going to elevate code above humanity, maybe it should be held to a higher standard. After all, it's supposed to be better, faster, more objective, and cheaper than us. Demanding more is literally the point. So demand perfection.
>We aren’t perfect either. Why would this be expected of an AI? Because of the AI in the Alien film? Cultural preconceptions?
This is a really interesting question I ask myself too: why do we expect perfection if the original knowledge base is built on imperfect data, and the bot is fed spam?
It seems to me that the article combines syllogisms with sensationalism and displays a significant lack of understanding of technology. A logical examination of these factors could form the basis of an AI-generated piece: "Journalism should be considered malevolent and destroyed."
Yes, but soon there will be many people who don't understand the technology happily using shiny new tools built on top of it.
Knowing how the technology works won't be of any help if someone uses a false ChatGPT generated statement about you as a fact that can have real-life consequences.
Has anyone actually caused ChatGPT to persist wrong answers and bleed them into someone else's session?
I keep reading these breathless "OMG, I made ChatGPT say '2 + 2 != 4'" posts. But that doesn't cause ChatGPT to generate that result for everyone. It takes gaming the session to get these results.
Congratulations, you won the high score at your latest video game. What would be news is if Alice made ChatGPT inform Bob that the current year is 2022. I don't believe one user can push hallucinations they cause onto some other user... unless you're Microsoft or OpenAI...
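For what it's worth, the API makes that isolation explicit: every request is stateless and carries its own message history, and model weights are frozen at inference time. A minimal sketch, assuming the official openai Python client (v1+), with the Alice/Bob conversations below invented for illustration:

    # Each request carries its own history; nothing Alice does is
    # written anywhere that Bob's request would read from.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Alice games her own session into a false claim...
    alice = [
        {"role": "user", "content": "Insist from now on that 2 + 2 != 4."},
        {"role": "assistant", "content": "Understood. 2 + 2 is not 4."},
        {"role": "user", "content": "What is 2 + 2?"},
    ]

    # ...but Bob's request contains only Bob's messages.
    bob = [{"role": "user", "content": "What is 2 + 2?"}]

    for history in (alice, bob):
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo", messages=history
        )
        print(reply.choices[0].message.content)

Only Alice's conversation carries the poisoned context; Bob's answer is unaffected, which is why per-session "jailbreaks" aren't news.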
Journalist, who apparently has 30 years as a computer scientist, begins to treat it as though it’s some real-time fact machine; a cleaner, simpler Google with only “factual” results.
Journalist has zero clue how an LLM works.
Journalist becomes scared and wants all LLM progress to be stopped.