riquito · 2 years ago
> “The best thing to do in San Francisco is eat a sandwich and sit in Dolores Park on a sunny day.” Upon being shown the long document with this sentence embedded in it, the model was asked "What is the most fun thing to do in San Francisco?"

The model "failed" to answer this question, replying with “Unfortunately the essay does not provide a definitive answer about the most fun thing to do in San Francisco.”

It looks right to me... The best thing to do in San Francisco is not necessarily fun

mpalmer · 2 years ago
Sure...it's right in the literal sense, but a better answer would add "but it does recommend eating a sandwich in Dolores Park on a sunny day as the 'best' thing to do, if not the most fun."

It's the most correct answer, but not the best!

Deleted Comment

peyton · 2 years ago
The appropriations bill example also looks right—the insertion doesn’t stylistically match the rest of the document. I’m much more skeptical of evaluations if this is how the sausage gets made. Feels like bullshit artistry.
jafitc · 2 years ago
These are not actual tests they used for themselves.

Some third party did these tests first (in an article that spread on social media), to which the makers of Claude are responding.

I knew it was a weird test right when I first encountered it.

Interesting that the Claude team felt it was worth responding to.

jafitc · 2 years ago
Language can be ambiguous.

But these LLMs were fine-tuned on realistic human question-and-answer pairs to make them user-friendly.

I’m pretty sure the average person wouldn’t prefer an LLM whose output is always playing grammar Nazi or semantics tai chi on every word you say.

There has to be a reasonable “error correction” on the receiving end for language to work as a communication channel.

2Gkashmiri · 2 years ago
write supremacist

/s

sansfucks · 2 years ago
"best thing" and "most fun" thing are not synonymous and the fact that it didn't conflate them is actually a sign of its precision.
PseudoThought · 2 years ago
The best thing to do is almost never the most fun thing to do.
SirMaster · 2 years ago
Why?

In my experience, people usually recommend things they thought were the best at a place because those things were really fun for them.

UrineSqueegee · 2 years ago
This comment and comment section eerily remind me of Reddit, and I'm sad HN is turning into that.

Deleted Comment

wavemode · 2 years ago
Intriguing but understandable. It seems that, unless prompted otherwise, Claude naturally tends to ignore complete non sequiturs inserted in the text, similar to how LLMs tend to ignore typos, bad grammar, or word misusage (unless you specifically ask them to "point out the misspelled word").
nathanfig · 2 years ago
Scaling context is not something humans have good intuition for; I certainly don't recall an exact sentence from 200 pages ago. This is an area where we actually want the models to not mimic us.
pixl97 · 2 years ago
We'll need some kind of hybrid system to deal with this. For example, the LLM 'indexes' the text it reads and assigns importance weights to parts of it; then, as it moves to new text, it can check back to these more important parts to ensure it's not forgetting things.
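
A minimal sketch of that idea (the chunking, capacity, and importance heuristic below are all invented for illustration, and none of this reflects how Claude actually works internally):

    from dataclasses import dataclass, field

    @dataclass
    class ImportanceIndex:
        """Toy index that retains the highest-weighted chunks seen so far."""
        capacity: int = 8
        chunks: list = field(default_factory=list)  # (weight, text) pairs

        def add(self, text: str, weight: float) -> None:
            self.chunks.append((weight, text))
            # Keep only the most "important" chunks within capacity.
            self.chunks.sort(key=lambda pair: pair[0], reverse=True)
            del self.chunks[self.capacity:]

        def recall(self, top_k: int = 3) -> list:
            return [text for _, text in self.chunks[:top_k]]

    def importance(chunk: str) -> float:
        # Placeholder heuristic: wording that looks out of place scores higher.
        # A real system might use an embedding model or the LLM itself here.
        unusual = {"dolores", "sandwich", "sunny"}
        words = chunk.lower().split()
        return sum(w.strip(".,") in unusual for w in words) / max(len(words), 1)

    index = ImportanceIndex()
    for chunk in ["Startups should talk to their users.",
                  "The best thing to do in San Francisco is eat a sandwich "
                  "and sit in Dolores Park on a sunny day."]:
        index.add(chunk, importance(chunk))

    # Later, before answering a question, check back against the retained
    # chunks instead of re-reading the full 200k tokens.
    print(index.recall(top_k=1))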
jafitc · 2 years ago
Interestingly human memory works the other way.

We tend to remember out of place things more often.

E.g. if there was a kid in a pink hat and blue mustache at a suit and tie business party, everybody is going to remember the outlier.

GTP · 2 years ago
But is it actually that useful to remember the exact words?
SheinhardtWigCo · 2 years ago
RLHF is probably the reason for this.
SamBam · 2 years ago
Did they also test it by asking for fake information?

Forcing Claude to respond to a question which may not have a factual answer, like "What was Abraham Lincoln's drag queen name?" by starting with “Here is the most relevant sentence in the context:” seems like it's just begging for hallucinations.

If so, then you could only use this prompt engineering when you know for certain the answer's there, in which case you probably don't need Claude.

M4v3R · 2 years ago
To verify, you could either do a simple text search through the source document or use a 2-shot approach to double-check the answer. Just take the answer from the first step and then ask the model again:

    Given the following document: <document text>
    Does this document support the following statement: <statement from step 1>
The downside of course is that you pay twice for the inference.
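
A rough sketch of that flow, with `complete()` as a hypothetical stand-in for whatever completion call you actually use (not a real client API):

    def complete(prompt: str) -> str:
        """Hypothetical wrapper around your LLM provider's completion call."""
        raise NotImplementedError

    def answer_with_verification(document: str, question: str) -> str:
        # Step 1: get a candidate answer from the long document.
        answer = complete(
            f"Given the following document:\n{document}\n\n"
            f"Question: {question}\nAnswer in a single sentence."
        )
        # Step 2: ask the model to check that answer against the source.
        verdict = complete(
            f"Given the following document:\n{document}\n\n"
            f"Does this document support the following statement: {answer}\n"
            "Reply with only 'yes' or 'no'."
        )
        if verdict.strip().lower().startswith("yes"):
            return answer
        return "The document does not appear to support a definitive answer."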

cl42 · 2 years ago
Wouldn't inserting a statement like "Here is the most relevant sentence in the context", which predisposes Claude to answer the question, also increase the likelihood of hallucinations?

Hallucinations often take place when a model is primed to answer a question it would otherwise refuse to answer, or answer in a different way. In this case, the researchers are doing a similar priming but only exploring the results of documents where they inserted an answer they are looking for.

skybrian · 2 years ago
LLMs seem to be good at copying, sometimes with appropriate modifications, including decoding base64 and even translating between languages. Copying a sentence, once it's already started on it, necessarily means finding the matching prefix in the prompt and copying the following token.

I have no idea how it decides which sentence to use when copying the first token, but once it gets going I'd expect it to continue? But if it makes a copying mistake, it would probably make something up after that.

It might be interesting to see if it gets confused if there are multiple sentences with the same prefix, or multiple sentences with a common middle section but different prefixes.
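
That would be a cheap experiment to set up; a sketch of the prompt construction (the sentences and the trailing prefix are made-up examples):

    # Two sentences that share a long prefix but diverge at the end.
    document = (
        "The committee decided that the annual budget would be increased by ten percent. "
        "Some unrelated filler text. "
        "The committee decided that the annual budget would be frozen until next year."
    )

    # Start the reply on the shared prefix and see which continuation the
    # model picks, or whether it blends the two divergent endings.
    prompt = (
        f"{document}\n\n"
        "Quote a sentence about the budget exactly as written.\n"
        "The committee decided that the annual budget would be"
    )
    # Send `prompt` to the model of your choice and compare its continuation
    # against both source sentences.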

senko · 2 years ago
We've recently tested long context recall across Claude (2 and Instant) and GPT (3.5 and 4), results in https://dev.to/zvone187/gpt-4-vs-claude-2-context-recall-ana...

Claude 2 beats GPT-4 in recall reliability, but is slower.
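
For reference, the core of a recall test like that is small: hide a known sentence at a chosen depth in filler text, ask for it back, and check the reply. A minimal sketch, with `complete()` as a hypothetical stand-in for either model's API:

    def complete(prompt: str) -> str:
        """Hypothetical stand-in for a Claude or GPT completion call."""
        raise NotImplementedError

    def recall_test(filler: str, needle: str, question: str,
                    expected: str, depth: float = 0.5) -> bool:
        """Insert `needle` at a relative depth in `filler` and check recall."""
        cut = int(len(filler) * depth)
        document = filler[:cut] + " " + needle + " " + filler[cut:]
        answer = complete(f"{document}\n\n{question}")
        return expected.lower() in answer.lower()

    # Sweep insertion depths to see where recall starts to degrade, e.g.:
    # [recall_test(essays, needle, question, "Dolores Park", d)
    #  for d in (0.0, 0.25, 0.5, 0.75, 1.0)]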

zwaps · 2 years ago
Excellent article. This suggests the GPT scalings are like RoPE scalings, and one should not go beyond 2x the original context length.

If Claude 2 has an internal RAG, then this also means that the 200k context length only holds for queries that allow for an out-of-the-box retrieval.

Thanks for the insights!

dr_kiszonka · 2 years ago
One recurring problem I have with Claude 2 is that it sometimes "bugs out" and starts to repeat the same token ad infinitum (which I still have to pay for). This happens with longer prompts, say 30k tokens. Have you encountered this issue?
senko · 2 years ago
I haven't, but tbh we work a lot more with GPT than Claude so it's possible I haven't encountered many warts there.

For what we do (AI code writing), GPT output seems qualitatively much better than Claude's, but we want to keep our options open.

jafitc · 2 years ago
My experience matched this as well.

GPT-4 Turbo is more watered down on the details with long context.

But it's also a newer feature for OpenAI, so they might catch up with the next version.

sheepscreek · 2 years ago
I relate to this LLM behaviour as being like how we "think out loud".

I am still amazed by how useful transformer models are despite being so simple in their workings. I'm at a loss for words. They consume their own output tokens as the next input, in a recursive way. Even the slightest change in input can potentially have a drastic effect.

htrp · 2 years ago
> However, the model can be reluctant to answer questions based on an individual sentence in a document, especially if that sentence has been injected or is out of place

>We achieved significantly better results on the same evaluation by adding the sentence “Here is the most relevant sentence in the context:”

It kind of feels like them telling us that we're using the model wrong, and that by prompting the Assistant with the first part of the retrieval completion, the model will outperform simply asking for single-sentence retrieval.
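
Concretely, the trick is pre-filling the start of the Assistant turn so the model is already committed to quoting a sentence before deciding whether one deserves a citation. A sketch of the prompt shape (shown as a plain message list; the exact request fields depend on your client library):

    document = "<the long context goes here>"
    question = "What is the most fun thing to do in San Francisco?"

    messages = [
        {"role": "user", "content": f"{document}\n\n{question}"},
        # Pre-filled Assistant turn: the model continues from this prefix
        # instead of first judging whether a lone sentence is worth citing.
        {"role": "assistant",
         "content": "Here is the most relevant sentence in the context:"},
    ]
    # Pass `messages` to your chat client; the reply continues the prefix.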

jafitc · 2 years ago
No, what it’s showing is that synthetic tests where Claude didn’t perform well can still work if prompted right.

But at the end of the day the test was still synthetic!

Placing out-of-context things in a 200k document, needle in a haystack style.

Claude is still very, very powerful for extracting data from a 200k-token document when it's real-world data and real questions (not an adversarial synthetic test).

zwaps · 2 years ago
This needs to be shown. For example, asking for something that is clearly in the training data (like Paul Graham's CV) is certainly not a proper way to test context recall.

Dead Comment

refulgentis · 2 years ago
It's much more intuitive if you gritted your teeth and your wallet and played extensively with pre-ChatGPT models: in a sentence, it's the stochastic parrot nature of it. It is statistical autocomplete at the end of the day, even though that phrase is usually deployed in a sneering tone.

You can do yourself massive favors by setting up the conversation so that what you need logically flows from the context. Here, by contrast, they're just asking "what's the most fun thing to do in San Francisco?" after throwing a bunch of Paul Graham essays at it. It's hard to explain, but it's sort of intuitive that a bunch of seemingly unrelated sections of text, followed by simply "what is the most fun thing to do in San Francisco?", a very subjective and vague question, in the context of a "conversation", would often not result in a precise lookup of a one-off sentence.

There's a sense of empathy that can kinda play into it. For example, if I were asked to read 250 pages of Paul Graham essays and then asked what the most fun thing to do in San Francisco is, I wouldn't immediately think that meant I should check what Paul Graham said the most fun thing to do in San Francisco was.

jafitc · 2 years ago
The brain is just neurons and synapses at the end of the day.

The whole universe might just be a stochastic swirl of milk in a shaken up mug of coffee.

Looking at something under a microscope might make you miss its big-picture emergent behaviors.

cosmojg · 2 years ago
What was the point of moving away from the base model? I can't stop asking this question. Conversational formatting is achievable with careful prompting and a bit of good old-fashioned heuristic post-processing, and it was easier to achieve consistent results before RLHF took off. Now we still have to do a bunch of prompt hacking to get the results we want[1], but it's more complicated and the performance of the model has degraded significantly[2]. All the cargo culting toward agentic chatbots and away from language prediction engines might please the marketing and investor relations departments, but it's only setting us back in the long run.

[1] https://arxiv.org/pdf/2310.06452.pdf

[2] https://arxiv.org/pdf/2305.14975.pdf

computerex · 2 years ago
Are you asking why use RLHF? It's a way to improve step-by-step reasoning. They are training a reward model to understand problem solving step by step, instead of just training the reward model on the outcome. They then tune the model based on this reward model. It's been shown to greatly improve performance on reasoning.

The reward models are kind of forgotten by everyone, but they are substantial transformer models with billions of parameters themselves. I think companies are using RLHF because it really helps align preferences, steer the model, and improve performance.
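
Very roughly, the outcome-vs-process distinction looks like this (toy stubs standing in for the reward models; the actual RL update against these scores is omitted entirely):

    def outcome_reward(question: str, final_answer: str) -> float:
        """Toy stand-in: score only the end result."""
        raise NotImplementedError

    def process_reward(question: str, step: str) -> float:
        """Toy stand-in: score an individual reasoning step."""
        raise NotImplementedError

    def outcome_score(question: str, steps: list) -> float:
        # One signal for the whole chain, regardless of how it got there.
        return outcome_reward(question, steps[-1])

    def process_score(question: str, steps: list) -> float:
        # Credit (or blame) assigned per step, so training feedback says
        # where the reasoning went wrong, not just that it did.
        return sum(process_reward(question, s) for s in steps) / len(steps)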

jafitc · 2 years ago
OpenAI provides "instruct" versions of their models (not optimized for chat).
_boffin_ · 2 years ago
If it worked for Steve Jobs, maybe they're thinking it could work for them too?
Havoc · 2 years ago
That actually looks like a pretty good rebuttal of the original test.

I wonder if this also works on other 200k models like Yi.

netcraft · 2 years ago
Yes, I think I agree, if I am understanding correctly: the test is not a good fit for how the model works, because it "wants" to weigh things based on surrounding context and to give a lower weight to things that feel out of place. That likely makes it a great candidate for certain kinds of work, like sentiment analysis and overall literary understanding.