countvonbalzac · 3 years ago
I don't think ChatGPT's responses qualify as evidence that OpenAI staff are writing responses. The model is just predicting the most likely response to your prompt based on its training data. That data doesn't necessarily contain the answer to the question of whether OpenAI staff are writing responses, and even if it does, ChatGPT won't necessarily respond accurately based on that information.
elil17 · 3 years ago
I think the author knows this. In fact, many of his questions seem designed to lead the AI into saying that its answers are written by humans.

I think this should be taken down from HN; it's extremely misleading to anyone who might mistakenly believe that ChatGPT has self-knowledge.

mustacheemperor · 3 years ago
The author has also made a significant factual error in another article that he cites as additional evidence in this one.

>Previously, I pointed out aspects of ChatGPT that implied humans were helping craft the chatbot’s responses.

He links as a source an article where he gets ChatGPT to recall a detail from earlier in the current conversation, claims that is not a capability of ChatGPT, and insists that means there is a human editor involved.

But that is a capability of ChatGPT. It has limited local recall, with a small token budget, but it absolutely will remember the name of the fake company the author made up in their first question. That entire article lambasting the tech is based on a completely incorrect assumption, declared by the author with absolute confidence.
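
For anyone unclear on how that recall works mechanically, here's a minimal sketch (my own illustration, not OpenAI's implementation; the function names and budget number are made up): the conversation so far is resent with each request and truncated to a token budget, which is why recent details stick while older ones fall away.

    # Toy sketch (not OpenAI's actual code): chat "memory" is just
    # prior turns resent with every request, truncated to a budget.

    TOKEN_BUDGET = 4000  # illustrative context-window limit

    def rough_token_count(text):
        # Crude stand-in for a real tokenizer: ~1 token per word.
        return len(text.split())

    def build_prompt(history, new_message):
        turns = history + [("user", new_message)]
        kept, used = [], 0
        # Walk backwards so the most recent turns are kept.
        for role, text in reversed(turns):
            cost = rough_token_count(text)
            if used + cost > TOKEN_BUDGET:
                break  # older turns fall out of "memory" here
            kept.append((role, text))
            used += cost
        kept.reverse()
        return "\n".join(f"{role}: {text}" for role, text in kept)

Under this scheme the fake company name from the first question sits well inside the budget, so of course the model "remembers" it.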

Edit: The author is not on Twitter, and there's no comment section or contact information provided. There's no avenue provided to even submit a correction to an analysis built on fallacious logic.

notahacker · 3 years ago
Indeed, ChatGPT waffling and giving the impression there are humans in the loop is pretty good evidence that it's just a brainless bot replying. If there were humans responding to his queries, they'd deny everything!

(Although FWIW it wouldn't surprise me if some of the "hall monitor" responses were pre-written strings the model was tweaked to heavily favour in response to certain prompts...)

keithwhor · 3 years ago
The most damning evidence that ChatGPT has passed the Turing test with flying colors is that the author relies on its testimony about itself.
joshka · 3 years ago
It's not, really - especially given that the author is aware that ChatGPT actually is a computer. The Turing test relies on the tester either not knowing a computer is involved or not knowing which of two parties is the computer.

Ironically, one of the easiest things to spot that marks ChatGPT's responses as inhuman is its confidence in reporting something as factual, and then, when contradicted, the way it reports the opposite. In my experience, humans tend to put up more of a fight to defend their perspectives, even when they are wrong.

This just comes down to the author being loose with their conclusions. It's unclear to me whether this is intentionally hyperbolic or satirical, or whether the author's claims are their actual beliefs. I'd lean towards the first explanation based on Eric Holloway's bio. But this is the real kicker that I'd disagree with:

> "Now, I have an explicit admission from the chatbot"

No, you have a language model's prediction of what would complete a conversation in which you presented an idea, generated by a model with enough training data to write sentences that conform to the idea rather than contradict it. Knowing that this is how the responses work, any inference that the model is reporting accurate information about itself must inherently be treated as suspect.

It's reasonably possible to lead the model down a conversational path where the opposite idea is presented (that humans are not in the loop). That this is so indicates that this article is incorrect in its stated conclusion (again, assuming that it's not satire).

collyw · 3 years ago
4chan regularly exposes this by trying to get it to write politically incorrect statements.
mustacheemperor · 3 years ago
This author has certainly constructed an interesting logic puzzle in their conversations with the bot, but drawing these conclusions rests on the assumption that when ChatGPT is confident about something, it is correct. At this point it's a meme how confidently incorrect ChatGPT can be in its replies.

Curious what readers more informed about AI tech will have to say. From my limited understanding, this seems like an interesting display of getting ChatGPT to analyze itself and to contradict its own analysis, but not necessarily an autopsy of its real functionality.

On a strictly factual note, the author links another article[0] in which ChatGPT recollects specific details from earlier in the conversation, and claims that contradicts ChatGPT's stated capabilities and indicates a human editor is involved. But OpenAI states on their website that ChatGPT is capable of limited recall of its ongoing conversation.[1] The author wrote an entire article on the flawed assumption that ChatGPT cannot remember details of its own ongoing conversation, and that does make me question their credibility on this subject.

[0] https://mindmatters.ai/2022/12/yes-chatgpt-is-sentient-becau...

[1] https://help.openai.com/en/articles/6787051-does-chatgpt-rem...

Edit: And with no comment section or author contact info provided, I can't even see a way to submit a correction about that mistake to this outlet.

exuberance · 3 years ago
You can contact the author on Twitter: @triprolife.
joshka · 3 years ago
The next article on the same news site:

https://mindmatters.ai/2023/01/large-language-models-can-ent...

> The reality is quite different. LLMs are text predictors, nothing more. They are not designed to know any facts whatsoever. Indeed, they have no way of distinguishing between true and false statements because they literally do not know the meaning of any of the words in the text they generate. They are convincing BS artists, which is why one of us proposed that they be called faux intelligence instead of artificial intelligence.

...

> Our point is not that LLMs sometimes give dumb answers. We use these examples to demonstrate that, because LLMs do not know what words mean, they cannot use knowledge of the real world, common sense, wisdom, or logical reasoning to assess whether a statement is likely to be true or false.
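
To make "text predictors" concrete, here's a toy sketch of next-token sampling (the vocabulary and scores are invented, not from any real model). Notice that nothing in the loop consults facts; it only turns scores into probabilities and samples:

    # Toy illustration of "text predictor": the model only scores
    # candidate next tokens; nothing here checks whether they're true.
    import math, random

    def sample_next(logits):
        # Softmax over scores, then sample proportionally.
        exps = {tok: math.exp(s) for tok, s in logits.items()}
        total = sum(exps.values())
        r, acc = random.random(), 0.0
        for tok, e in exps.items():
            acc += e / total
            if r <= acc:
                return tok
        return tok  # guard against floating-point rounding

    # Hypothetical scores a model might assign after "2 + 2 =".
    print(sample_next({"4": 3.0, "5": 1.0, "fish": -2.0}))

"4" usually wins here, but only because it scored highest, not because it's true - which is their whole point.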

mammarahmed · 3 years ago
The author isn't entirely wrong here. However, his understanding of how human intervention is used in the system is wrong. ChatGPT uses GPT-3 as a baseline model, then fine-tunes it using a machine learning technique called Reinforcement Learning from Human Feedback (RLHF). This method does use humans in the loop, but not in the way the author thinks. The baseline model is first fine-tuned on prompts and their answers (labels) provided by human labelers (a tedious task that doesn't scale well). This model is then used to generate multiple answers for selected input prompts, which human labelers rank based on what they think is the most appropriate response. It's much easier for labelers to rank responses than to write appropriate responses themselves.

Even though the whole process is more complicated than what I've explained above, the model is essentially trained this way.

One of the reasons for releasing it for free for now is so they can gather data for further fine-tuning the model (this is why after each answer there are thumbs-up and thumbs-down buttons for you to rate ChatGPT's responses).
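
To sketch what those rankings become in training terms: a reward model is fit so the labeler-preferred answer scores higher than the rejected one, typically via a pairwise loss like the one below. This is my own toy illustration under those assumptions, not OpenAI's actual code; the scores are made up.

    # Illustrative sketch of the RLHF ranking step. The reward model
    # is trained so the preferred answer scores higher, via the
    # pairwise loss -log(sigmoid(score_preferred - score_rejected)).
    import math

    def pairwise_ranking_loss(score_preferred, score_rejected):
        margin = score_preferred - score_rejected
        return -math.log(1 / (1 + math.exp(-margin)))

    # Reward-model scores for two sampled answers to one prompt.
    print(pairwise_ranking_loss(1.2, 0.3))   # modest loss: ranking respected
    print(pairwise_ranking_loss(-1.0, 2.0))  # large loss: ranking violated

The fine-tuned model is then optimized against that learned reward, so the human influence is baked in at training time rather than happening live in your chat session.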

aussieshibe · 3 years ago
Anyone who has used ChatGPT should be able to see what's happening here.

You can get it to "admit" just about anything if you ask enough leading questions.

comboy · 3 years ago
Yes, and it is an interesting window into how humans interact. The bot was told to be friendly and polite. If humans were instructed to do the same, and could actually keep that promise, then when somebody told them 2+2=5 they might also say: maybe I'm wrong, sorry about misleading you that 2+2=4, perhaps I'm missing some information, etc.

One of the factors seems to be that, in the cultures I know, it is usually not polite to just say "you are wrong". HN and good engineering teams are niches where this does not apply, which is great.

Regardless, you can instruct it to be a teacher instead, or whatever; there is still plenty of space for improvement without any breakthrough. I'm still in awe of just how powerful it is after consuming only text, never seeing any picture, 3D world, physics, etc.

rozab · 3 years ago
I'll save you a read: The only source for this claim is.... (drumroll...) ChatGPT itself. Which is well known for hallucinating, constantly.

Furthermore, if there were actual humans secretly writing ChatGPT responses, then it can be guaranteed they would not admit this. And God disappears in a puff of logic.

chomp · 3 years ago
This is a ridiculous article. ChatGPT is not an intelligence; it's just a language model. Just because it can produce a statistically plausible response to textual input does not mean that the response is factually accurate. You can't "interview" ChatGPT, and it cannot make an "explicit admission", because ChatGPT does not hold any beliefs, self-awareness, or even a single thought; it only has weights and biases that correspond to the structure of the English language.
elif · 3 years ago
The author is mistaking ChatGPT's plausible and believable answers for statements of fact.

This is a good example of how you can "lead" ChatGPT into using internally consistent logic to justify a fallacy. It's annoying when this kind of thing happens while you're trying to do research. You really have to verify anything it generates.