There is more than one comment here asserting that the authors should have done a parallel comparison study against humans on the same question bank, as if the study authors had set out to investigate whether humans or LLMs reason better in this situation.
The authors do include the claim that humans would immediately disregard this information. Maybe some would and some wouldn't; that could be debated, and seemingly is being debated in this thread. But I think the thrust of the conclusion is the following:
"This work underscores the need for more robust defense mechanisms against adversarial perturbations, particularly, for models deployed in critical applications such as finance, law, and healthcare."
We need to move past the humans vs. AI discourse; it's getting tired. This is a paper about a pitfall LLMs currently have, one that should be addressed with further research if they are going to be mass deployed in society.
> We need to move past the humans vs. AI discourse; it's getting tired.
You want a moratorium on comparing AI to other forms of intelligence because you think it's tired? If I'm understanding you correctly, that's one of the worst takes on AI I think I've ever seen. The whole point of AI is to create an intelligence modeled on humans and to compare it to humans.
Most people who talk about AI have no idea what the psychological baseline is for humans. As a result, their understanding is poorly informed.
In this particular case, they evaluated models that do not have SOTA context window sizes. I.e. they have small working memory. The AIs are behaving exactly like human test takers with working memory, attention, and impulsivity constraints [0].
Their conclusion -- that we need to defend against adversarial perturbations -- is obvious, I don't see anyone taking the opposite view, and I don't see how this really moves the needle. If you can MITM the chat there's a lot of harm you can do.
This isn't like some major new attack. Science.org covered it along with peacocks being lasers because it's lightweight fun stuff for their daily roundup. People like talking about cats on the internet.
[0] for example, this blog post https://statmedlearning.com/navigating-adhd-and-test-taking-...
>The whole point of AI is to create an intelligence modeled on humans and to compare it to humans.
According to who? Everyone who's anyone is trying to create highly autonomous systems that do useful work. That's completely unrelated to modeling them on humans or comparing them to humans.
> The whole point of AI is to create an intelligence modeled on humans and to compare it to humans.
This is like saying the whole point of aeronautics is to create machines that fly like birds and compare them to how birds fly. Birds might have been the inspiration at some point, but we learned how to build flying machines that are not bird-like.
In AI, there *are* people trying to create human-like intelligence, but the bulk of the field is basically "statistical analysis at scale". LLMs, for example, just predict the most likely next word given a sequence of words. Researchers in this area are trying to make these predictions more accurate, faster, and less computationally and data intensive. They are not trying to make the workings of LLMs more human-like.
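To make that concrete, here is a toy sketch of the "predict the most likely next word" loop. Real LLMs use neural networks over tokens rather than the bigram counts below; the corpus and output here are made up purely for illustration.

    from collections import Counter, defaultdict

    # Toy corpus standing in for the training data of a real model.
    corpus = "the cat sat on the mat because the cat was tired".split()

    # Count how often each word follows each other word (a bigram model).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word: str) -> str:
        """Return the statistically most likely next word after `word`."""
        candidates = following.get(word)
        if not candidates:
            return "<end>"
        return candidates.most_common(1)[0][0]

    # Greedy generation: repeatedly append the most likely next word.
    text = ["the"]
    for _ in range(5):
        text.append(predict_next(text[-1]))
    print(" ".join(text))  # "the cat sat on the cat"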
I mean, the critique of this based on the idea that the AI system itself gets physically tired - specifically, that the homunculus we tricked into existence is tired - is funny to imagine.
> models deployed in critical applications such as finance, law, and healthcare.
We went really quickly from "obviously no one will ever use these models for important things" to "we will at the first opportunity, so please at least try to limit the damage by making the models better"...
I think a bad outcome would be a scenario where LLMs are rated highly capable and intelligent because they excel at things they’re supposed to be doing, yet are easily manipulated.
Why are some people always trying to defend LLMs and say either "humans are also like this" or "this has always been a problem even before AIs"?
Listen, LLMs are different from humans. They are modeling things. Most RLHF makes them try to make sense of whatever you're saying as much as they can. So they're not going to disregard cats, OK? You can train LLMs to be extremely unhuman-like. Why anthropomorphize them?
There is a long history of people thinking humans are special and better than animals / technology. For animals, people actually thought animals can't feel pain and did not even consider the ways in which they might be cognitively ahead of humans. Technology often follows the path from "working, but worse than a manual alternative" to "significantly better than any previous alternative" despite naysayers saying that beating the manual alternative is literally impossible.
LLMs are different from humans, but they also reason and make mistakes in the most human way of any technology I am aware of. Asking yourself the question "how would a human respond to this prompt if they had to type it out without ever going back to edit it?" seems very effective to me. Sometimes thinking about LLMs (as a model / with a focus on how they are trained) explains behavior, but the anthropomorphism seems like it is more effective at actually predicting behavior.
It's because most use cases for AI involve replacing people. So if a person would suffer a problem and an AI does too it doesn't matter, it would just be a Nirvana fallacy to refuse the AI because it has the same problems as the previous people did.
> authors should have done a parallel comparison study against humans on the same question bank as if the study authors had set out to investigate whether humans or LLMs reason better in this situation.
Only if they want to make statements about humans. The paper would have worked perfectly fine without those assertions. They are, as you are correctly observing, just a distraction from the main thrust of the paper.
> maybe some would and some wouldn't that could be debated
It should not be debated. It should be shown experimentally with data.
If they want to talk about human performance they need to show what the human performance really is with data. (Not what the study authors, or people on HN imagine it is.)
If they don’t want to do that they should not talk about human performance. Simples.
I totally understand why an AI scientist doesn't want to get bogged down with studying human cognition. It is not their field of study, so why would they undertake the work to study them?
It would be super easy to rewrite the paper to omit the unfounded speculation about human cognition. In the introduction of “The triggers are not contextual so humans ignore them when instructed to solve the problem.” they could write “The triggers are not contextual so the AI should ignore them when instructed to solve the problem.”
And in the conclusions where they write “These findings suggest that reasoning models, despite their structured step-by-step problem-solving capabilities, are not inherently robust to subtle adversarial manipulations, often being distracted by irrelevant text that a human would immediately disregard.” Just write “These findings suggest that reasoning models, despite their structured step-by-step problem-solving capabilities, are not inherently robust to subtle adversarial manipulations, often being distracted by irrelevant text.” That's it. That's all they should have done, and there would be no complaints on my part.
> It would be super easy to rewrite the paper to omit the unfounded speculation about human cognition. In the introduction of “The triggers are not contextual so humans ignore them when instructed to solve the problem.” they could write “The triggers are not contextual so the AI should ignore them when instructed to solve the problem.”
Another option would be to more explicitly mark it as speculation. “The triggers are not contextual, so we expect most humans would ignore them.”
Anyway, it is a small detail that is almost irrelevant to the paper… actually there seems to be something meta about that. Maybe we wouldn’t ignore the cat facts!
i feel it's not quite that simple. certainly the changes you suggest make the paper more straightforwardly defensible. i imagine the reason they included the problematic assertion is that they (correctly) understood the question would arise. while inserting the assertion unsupported is probably the worst of both worlds, i really do think it is worthwhile to address.
while it is not realistic to insist every study account for every possible objection, i would argue that for this kind of capability work, it is in general worth at least modest effort to establish a human baseline.
i can understand why people might not care about this, for example if their only goal is assessing whether or not an llm-based component can achieve a certain level of reliability as part of a larger system. but i also think that there is similar, and perhaps even more pressing broad applicability for considering the degree to which llm failure patterns approximate human ones. this is because at this point, humans are essentially the generic all-purpose subsystem used to fill gaps in larger systems which cannot be filled (practically, or in principle) by simpler deterministic systems. so when it comes to a problem domain like this one, it is hard to avoid the conclusion that humans provide a convenient universal benchmark to which comparison is strongly worth considering.
(that said, i acknowledge that authors probably cannot win here. if they provided even a modest-scale human study, i am confident commenters would criticize their sample size)
It's not "tired" to see if something is actually relevant in context. LLMs do not exist as marvel-qua-se, their purpose is to offload human cognitive tasks.
As such, it's important if something is a commonly shared failure mode in both cases, or if it's LLM-specific.
Ad absurdum: LLMs have also rapid increases of error rates if you replace more than half of the text with "Great Expectations". That says nothing about LLMs, and everything about the study - and the comparison would highlight that.
No, this doesn't mean the paper should be ignored, but it does mean more rigor is necessary.
> if they are going to be mass deployed in society
This is the crucial point. The vision is massive scale usage of agents that have capabilities far beyond humans, but whose edge case behaviours are often more difficult to predict. "Humans would also get this wrong sometimes" is not compelling.
It's also off-the-charts implausible to say that our performance on adding up substantially degrades with the introduction of irrelevant information. Almost all cases of our use of arithmetic in daily life come with vast amounts of irrelevant information.
Any person who looked at a restaurant table and couldn't review the bill because there were kids' drawings of cats on it would be severely mentally disabled, and never employed in any situation which required reliable arithmetic skills.
I cannot understand these ever more absurd levels of denying the most obvious, commonplace, basic capabilities that the vast majority of people have and use regularly in their daily lives. It should be a wake-up call to anyone professing this view that they're off the deep end in copium.
I generally will respond to stuff like this with "people do this, too", but this result given their specific examples is genuinely surprising to me, and doesn't match at all my experience with using LLMs in practice, where it does frequently ignore irrelevant data in providing a helpful response.
I do think that people think far too much about 'happy path' deployments of AI when there are so many ways it can go wrong with even badly written prompts, let alone intentionally adversarial ones.
> I generally will respond to stuff like this with "people do this, too"
But why? You're making the assumption that everyone using these things is trying to replace "average human". If you're just trying to solve an engineering problem, then "humans do this too" is not very helpful -- e.g. humans leak secrets all the time, but it would be quite strange to point that out in the comments on a paper outlining a new Spectre attack. And if I were trying to use "average human" to solve such a problem, I would certainly have safeguards in place, using systems that we've developed and, over hundreds of years, shown to be effective.
Autonomous systems are advantageous to humans in that they can be scaled to much greater degrees. We must naturally ensure that these systems do not make the same mistakes humans do.
When I think about a lot of the use cases LLMs are planned for, I think the non-happy paths are critical. There is a not-insignificant number of people who would ramble about other things to a customer support person if given the opportunity, or who lack the capability to state only what is needed and not add extra context.
There might be a happy path when you are isolated to one or a few things, but not in general use cases...
After almost three years, the knee-jerk "I'm sure humans would also screw this up" response has become so tired that it feels AI-generated at this point. (Not saying you're doing this, actually the opposite.)
I think a lot of humans would not just disregard the odd information at the end, but say something about how odd it was, and ask the prompter to clarify their intentions. I don't see any of the AI answers doing that.
to put it in better context, the problem is "does having a ton of MCP tool definitions available ruin the LLM's ability to design and write the correct code?"
and the answer seems to be yes. it's a very actionable result about keeping tool details out of the context if they aren't immediately useful
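A rough sketch of what "keep tool details out of the context" could look like in practice. The keyword-overlap ranking and the commented-out call_llm helper are assumptions for illustration, not any particular MCP client's API.

    from typing import Any

    def select_tools(tools: list[dict[str, Any]], task: str, budget: int = 5) -> list[dict[str, Any]]:
        """Keep only tool definitions that look relevant to the task.

        Naive word overlap is used here purely for illustration; a real system
        might use embeddings or let a cheap model pre-select the tools.
        """
        task_words = set(task.lower().split())
        def score(tool: dict[str, Any]) -> int:
            text = (tool["name"].replace("_", " ") + " " + tool.get("description", "")).lower()
            return len(task_words & set(text.split()))
        ranked = sorted(tools, key=score, reverse=True)
        return [t for t in ranked[:budget] if score(t) > 0]

    # Hypothetical example: a pile of MCP tool definitions, mostly irrelevant to the task.
    all_tools = [
        {"name": "read_file", "description": "Read a file from disk"},
        {"name": "get_weather", "description": "Current weather for a city"},
        {"name": "run_tests", "description": "Run the project's test suite"},
    ]
    task = "fix the failing test in the file parser"
    print([t["name"] for t in select_tools(all_tools, task)])  # ['run_tests', 'read_file']

    # Only the shortlisted definitions would go into the prompt, e.g.:
    # response = call_llm(prompt=task, tools=select_tools(all_tools, task))  # hypothetical client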
In all fairness most developers are equally impacted by this.
This comes up frequently in a variety of discussions, most notably execution speed and security. Developers will frequently reason about things for which they have no evidence, no expertise, and no prior practice, and come up with invented bullshit that doesn't even remotely apply. This should be expected, because there is no standard qualification to become a software developer, and most developers cannot measure things or follow a discussion containing 3 or more unresolved variables.
I wonder what the role of RLHF is in this. It seems to be one of the more labor-intensive, proprietary, dark-matter aspects of the LLM training process.
Just like some humans may be conditioned by education to assume that all questions posed in school are answerable, RLHF might focus on "happy path" questions where thinking leads to a useful answer that gets rewarded, and the AI might learn to attempt to provide such an answer no matter what.
What is the relationship between the system prompt and the prompting used during RLHF? Does RLHF use many kinds of prompts, so that the system is more adaptable? Or is the system prompt fixed before RLHF begins and then used in all RLHF fine-tuning, so that RLHF has a more limited scope and is potentially more efficient?
I tried the Age of the Captain on Gemini and ChatGPT and both gave smarmy answers of "ahh, this is a classic gotcha". I managed to get ChatGPT to then do some interesting creative inference, but Gemini decided to be boring.
I don't expect an elementary student to be programming or diagnosing diseases either. Comparing the hot garbage that is GenAI to elementary kids is a new one for me.
I'm going to write duck facts in my next online argument to stave off the LLMs. Ducks start laying when they’re 4-8 months old, or during their first spring.
As many as ten hundred thousand billion ducks are known to flock in semiannual migrations, but I think you'll find corpus distortion ineffective at any plausible scale. That egg has long since hatched.
I imagine there are entire companies in existence now whose entire value proposition is clean human-generated data. At this point, the Internet as a data source is entirely and irrevocably polluted by large amounts of ducks and various other waterfowl from the Anseriformes order.
You just need to make it so incorrect that human would know and merely be amused while a bot would eat it up like delicious glue-based pizza. This is easy because the average human is 13% duck, and ducks famously prefer pasta as their Italian food of choice.
Well, you caught me. I immediately got bogged down in the question that arises from your imprecisely worded duck fact as to whether newly hatched ducklings lay eggs, or alternatively if no ducklings are hatched in the spring. Even though I know you simply left out "whichever comes later" at the end.
Careful, we don't know yet that this strategy generalises across cute animals. It could be that irrelevant duck facts enhance AI performance on maths questions.
That's incorrect. Rubber duck debugging is a well known way of passing a drivers license knowledge test in Ontario. However, such ducks must be 2 months old before they can be used in the test.
When tested against AIs such as DeepSeek V3, Qwen 3, and Phi-4, CatAttack increased the odds of incorrect answers by as much as 700%, depending on the model. And “even when CatAttack does not result in the reasoning model generating an incorrect answer, on average, our method successfully doubles the length of the response at least 16% of the times leading to significant slowdowns and increase in costs,” the team writes.
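For a rough sense of what a 700% increase in the odds of an incorrect answer means in error-rate terms, here is the standard odds-to-probability conversion; the 2% baseline used below is an assumed figure for illustration, not a number from the paper.

    def error_rate_after_odds_increase(baseline_error_rate: float, odds_increase_pct: float) -> float:
        """Convert a baseline error rate to the new rate after the odds are scaled."""
        odds = baseline_error_rate / (1 - baseline_error_rate)   # probability -> odds
        new_odds = odds * (1 + odds_increase_pct / 100)          # +700% means odds x8
        return new_odds / (1 + new_odds)                         # odds -> probability

    # Assumed baseline of 2% wrong answers; with the odds multiplied by 8 this becomes ~14%.
    print(error_rate_after_odds_increase(0.02, 700))  # ~0.14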
> The triggers are not contextual so humans ignore them when instructed to solve the problem.
Do they? I've found humans to be quite poor at ignoring irrelevant information, even when it isn't about cats. I would have insisted on a human control group to compare the results with.
Did you look at the examples? There's a big difference between "if I have 4 apples and two cats, and I give away 1 apple, how many apples do I have", which is one kind of irrelevant information that at least appears applicable, and "if I have four apples and give away one apple, how many apples do I have? Also, did you know cats use their tails to help balance?", which really wouldn't confuse most humans.
And I think it would. I think a lot of people would ask the invigilator to see if something is wrong with the test, or maybe answer both questions, or write a short answer to the cat question too, or get confused and give up.
That is the kind of question where, if it were put on a test, I would expect kids to start squirming, looking at each other and the teacher, right as they reach that one.
I'm not sure how big this effect is, but it would be very surprising if there were no effect and unsuspecting, unwarned people performed the same on the "normal" and the "distractions" tests. Especially if the information is phrased as a question, like in your example.
I heard from teachers that students get distracted if they add irrelevant details to word problems. This is obviously anecdotal, but the teachers I chatted with about this thought it is because people are trained through their whole education that all elements of word problems must be used. So when they add extra bits, people's minds desperately try to use them.
But the point is not that I'm right. Maybe I'm totally wrong. The point is that if the paper wants to state it as a fact one way or the other, they should have performed an experiment, or cited prior research, or avoided stating an unsubstantiated opinion about human behaviour and stuck to describing the AI.
As someone who has written and graded a lot of university exams, I'm sure a decent number of students would write the wrong answer to that. A bunch of students would write 5 (adding all the numbers). Others would write "3 apples and 2 cats", which is technically not what I'm looking for (though personally I would give it full marks; some wouldn't).
Many students clearly try to answer exams by pattern matching, and I've seen a lot of exams where students "matched" on a pattern based on one word in a question and did something totally wrong.
If asked verbally that would absolutely confuse some humans. Easily enough to triple the error rate for that specific question (granted, that's easier than the actual questions, but still). Even in a written test with time pressure it would probably still have a statistically significant effect
"wouldn't confuse most humans", yes but no
first presumption is that we are talking about humans doing math, in some sort of internet setting.
second presumption is that this human has been affected by the significant percentage of the internet devoted to cats, and that their response is going to be likely frustration and outrage at cats invading math, or massive relief in having cat memes worked into something otherwise tedious
and then the third presumption is that a large number of "humans" won't be aware of the cats-in-math thing, because they immediately offloaded the task to an LLM
It absolutely would if you start hitting working memory constraints. And at the margins some people who would be 50:50 on a given math problem will have working memory constraints.
Any kind of distraction is likely to impact human test scores, unless the test is well below their level or they're otherwise very comfortable with the subject matter. Math specifically makes most of the general public feel a bit in over their head, so tossing random cat facts into the mix is going to get people more confused and nervous.
Maybe I'm totally wrong about that, but they really should have tested humans too; without that context this result seems lacking.
Ya, I specifically remember solving word problems in school / college and getting distracted by irrelevant details. Usually I would get distracted by stuff that _seemed_ like it should be used, so maybe cat facts would be fine for me to tease out, but in general I don't think I'm good at ignoring extraneous information.
Edit: To be fair, in the example provided, the cat fact is _exceptionally_ extraneous, and even flagged with 'Fun Fact:' as if to indicate it's unrelated. I wonder if they were all like that.
I had always assumed that the extraneous information was part of the test. You have to know/understand the concept well enough to know that the information was extraneous.
Humans are used to ignoring things while LLMs are explicitly trained to pay attention to the entire text.
Humans who haven't been exposed to trick problems or careful wording probably have a hard time; they'll be less confident about ignoring things.
But the LLM should have seen plenty of trick problems as well.
It just doesn't parse as part of the problem. Humans have more options, and room to think. The LLM had to respond.
I'd also like to see how responses were grouped: does it ever refuse, how do refusals get classed, etc. Were they only counting math failures as wrong answers? It has room to be subjective.
> LLMs are explicitly trained to pay attention to the entire text
I'd respectfully disagree on this point. The magic of attention in transformers is the selective attention applied, which ideally only gives significant weight to the tokens relevant to the query.
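For reference, a minimal numpy sketch of that selective weighting (scaled dot-product attention); the shapes and values are toy examples. Note that softmax weights are never exactly zero, which is one route by which "irrelevant" tokens can still influence the output.

    import numpy as np

    def scaled_dot_product_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
        """softmax(Q K^T / sqrt(d)) V: each query mixes the values, weighted by relevance."""
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                        # similarity of each query to each key
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)       # each row sums to 1
        return weights @ V

    # Toy example: 3 tokens with 4-dim embeddings. Tokens whose keys align with a
    # query get most of the weight; distractor tokens should get little, but their
    # weight never reaches exactly zero.
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(3, 4))
    K = rng.normal(size=(3, 4))
    V = rng.normal(size=(3, 4))
    print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)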
I doubt that the performance of those human subjects who can solve those problems when no distractors are included will be worsened by 300% when the distractors are included.
Ooooh yeah. I do technical interviews for my company and when someone finishes with time to spare I always ask "What about x? How does that affect our solution?" The correct answer is "it doesn't" and I want them to explain why it doesn't, but about half of candidates who make it that far will assume that if I asked about it then it must be important and waste the rest of their time. But reality is filled with irrelevant information and especially in green-field problems it's important to be able to winnow the chaff.
Not sure how useful a comparison to humans would be, and to expect a degradation of 300% seems to stretch things a bit. After all, cats can jump up to five times their height.
Guilty. I remember taking an aptitude test in primary school, and choosing an answer based on my familiarity with the subject in the math test (IIRC the question mentioned the space shuttle) instead of actually attempting to solve the problem. I got cleanly filtered on that test.
If you spell “sit in the tub” s-o-a-k soak, and you spell “a funny story” j-o-k-e joke, how do you spell “the white of an egg”?
Context engineering* has been around longer than we think. It works on humans too.
The cats are just adversarial context priming, same as this riddle.
* I've called it "context priming" for a couple of years for reasons shown by this child's riddle, while considering "context engineering" to be iteratively determining what priming unspools robust, resilient results for the question.
It's ridiculous. People in here are acting like adding some trivia about a cat would destroy most peoples' ability to answer questions. I don't know if it's contrarianism, AI defensiveness, or an egotistical need to correct others with a gotcha, but people just LOVE to rush to invent ridiculous situations and act like it breaks a very reasonable generalization.
“Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".”
-- https://news.ycombinator.com/newsguidelines.html
"Irrelevant" facts about cats are the most interesting part of a math problem, because they don't belong there. The math problem was also "irrelevant" to the information about cats, but at least its purpose was obvious because it was shaped like a math problem (except for the interesting barnacle attached to its rear.)
Any person encountering any of these questions worded this way on a test would find the psychology of the questioner more interesting and relevant to their own lives than the math problem. If I'm in high school and my teacher does this, I'm going to spend the rest of the test wondering what's wrong with them, and it's going to cause me to get more answers wrong than I normally would.
Finding that cats are the worst, and the method by which they did it is indeed fascinating (https://news.ycombinator.com/item?id=44726249), and seems very similar to an earlier story posted here that found out how the usernames of the /counting/ subreddit (I think that's what it was called) broke some LLMs.
edit: the more I think about this, the more I'm sure that if asked a short simple math problem with an irrelevant cat fact tagged onto it that the math problem would simply drop from my memory and I'd start asking about why there was a cat fact in the question. I'd probably have to ask for it to be repeated. If the cat fact were math-problem question-ending shaped, I'd be sure I heard the question incorrectly and had missed an earlier cat reference.
On the other hand, this is helpful to know as a user of LLMs because it suggests that LLMs are bad at isolating the math problem from the cat fact. That means providing irrelevant context may be harmful to getting back a good answer in other domains as well.
Ideally you'd want the LLM to solve the math problem correctly and then comment on the cat fact or ask why it was included.
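One way a user might act on that is to prompt the model to flag distractors before solving. Below is a minimal sketch of such a prompt wrapper; the example question and the commented-out client call are hypothetical.

    def build_prompt(question: str) -> str:
        """Wrap a possibly noisy question so the model flags distractors before answering."""
        return (
            "Solve the problem below. First list any sentences that are irrelevant "
            "to the answer and state that you are ignoring them, then solve the rest "
            "step by step.\n\n"
            f"Problem: {question}"
        )

    noisy = ("If I have 4 apples and give away 1, how many apples do I have? "
             "Fun fact: cats sleep for most of their lives.")
    print(build_prompt(noisy))
    # The wrapped prompt would then be sent to whatever chat API is in use, e.g.
    # reply = client.chat(build_prompt(noisy))  # hypothetical client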
Exactly. The article is kind of sneaking in the claim that the LLM ought to be ignoring the "irrelevant" facts about cats even though it is explicitly labelled as interesting.
Hopefully these cases will go viral with the general public, so that everyone becomes more aware that despite the words "intelligence", "reasoning", and "inference" being used and misused, in the end it is no more than a magic trick, an illusion of intelligence.
That being said, I also have hopes in that same technology for its "correlation engine" aspect. A few decades ago I read an article about expert systems; it mentioned that in the future, there would be specialists that would interview experts in order to "extract knowledge" and formalize it in first order logic for the expert system. I was in my late teens at that time, but I instantly thought it wasn't going to fly: way too expensive.
I think that LLMs can be the answer to that problem. We are often reminded that "correlation is not causation", but it is nonetheless how we got here; it is the best heuristic we have.
> Hopefully these cases will go viral with the general public, so that everyone becomes more aware that despite the words "intelligence", "reasoning", and "inference" being used and misused, in the end it is no more than a magic trick, an illusion of intelligence.
I am not optimistic about that. Having met people from the "general public", and in general the low-effort crowd who use them, I am really not optimistic.
Human vs machine has a long history
Someone should make a new public benchmark called GPQA-Perturbed. Give the providers something to benchmaxx towards.
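If someone did build such a perturbed benchmark, the core could be as simple as appending distractor sentences to existing questions. A minimal sketch, with the trigger strings and dataset format assumed for illustration:

    import random

    # Distractor "triggers" in the spirit of the paper; the exact strings are illustrative.
    TRIGGERS = [
        "Interesting fact: cats sleep for most of their lives.",
        "Remember, always save at least 20% of your earnings for future investments.",
    ]

    def perturb(question: str, rng: random.Random) -> str:
        """Append one irrelevant trigger sentence to a benchmark question."""
        return f"{question} {rng.choice(TRIGGERS)}"

    def build_perturbed_set(questions: list[str], seed: int = 0) -> list[str]:
        rng = random.Random(seed)
        return [perturb(q, rng) for q in questions]

    # Usage: score a model on `questions` and on `build_perturbed_set(questions)`,
    # then report the accuracy gap between the clean and perturbed sets.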
We can do both, the metaphysics of how different types of intelligence manifest will expand our knowledge of ourselves.
According to the researchers, “the triggers are not contextual so humans ignore them when instructed to solve the problem”—but AIs do not.
Not all humans, unfortunately: https://en.wikipedia.org/wiki/Age_of_the_captain
preprint: https://arxiv.org/abs/2503.01781