petercooper · a year ago
I'm always intrigued when phenomena observed in the brain carry across to artificial systems. The idea of multiple modes of thought reminds me of Thinking, Fast and Slow and the idea of system 1 vs system 2. LLMs typically respond to prompts in a fast, 'instinctive' system 1 way, but you can push an LLM into system 2 by making it go through steps and write code. Originally this idea was presented as a bit of a workaround for a limitation of LLMs, but perhaps in the grand scheme of things it's actually the smart approach, and we should never expect LLMs to consistently perform system 2 style tasks off the cuff?
butlike · a year ago
Much like we partition our brains to prepare for low-effort tasks and high-effort tasks (have you ever hyped yourself up to "finally sit down and get this thing done"?), LLMs could partition their compute space the same way, basically memoizing the low-hanging, easy-to-answer fruit.

The difficulty comes in when a system 2 task arises: it's not immediately apparent what the requester is asking for. Some people are just fine with a single example being spit out, which they then copy-and-paste into their IDE and go from there. Others want a step-by-step breakdown of the reasoning, which won't be apparent until 2+ prompts in.

It's the same as your boss emailing you asking: "What about this report?" It could just be a one-off request for perspective, or it could spider web out into a multi-ticket JIRA epic for a new feature, but you can't really discern the intention from the initial ask.

The difference is that back-and-forth is more widely accepted, in my experience, when interfacing human-to-human than human-to-LLM.

j_maffe · a year ago
That is exactly how I describe it to others. LLMs have mostly figured out the fast system, but they're woefully bad at system 2. I'm surprised this kind of analogy isn't common in discussions about LLMs.
the_jends · a year ago
Because LLMs work the way system 1 does, with heuristics such as pattern matching, frequency heuristics, etc. The fallback to writing code works not because the model is doing system 2; it's because humans have produced many instances of system 2 thinking for this particular problem, so those instances can be found using the heuristics.
EnergyAmy · a year ago
I get kind of annoyed when I see people talk about LLMs without understanding this. I think it's a reaction of sorts: people are so busy dunking on any flaws that they're missing the forest for the trees. Chain-of-thought reasoning (i.e. system 2) looks very interesting and is not at all a hack, any more than a person sitting down to think hard is a hack.
KuriousCat · a year ago
I think this analogy is not right. System 1 vs. system 2 in biology exists in response to the need to preserve life and run away from threats. For an interactive question-answering system, it makes no sense to resort to system 1-style responses.
LeifCarrotson · a year ago
> I added this to Mimi's system prompt:

> If you are asked to count the number of letters in a word, do it by writing a python program and run it with code_interpreter.
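
For illustration, the program the model ends up writing for that instruction is about as simple as it gets; something along these lines (a sketch, not the model's actual output):

    # Counting letters the "system 2" way, via code_interpreter
    word = "strawberry"
    print(word.count("r"))  # prints 3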

Why not run two or three prompts for every input question? One could be the straight chatbot output. One could be "try to solve the problem by running some Python." And a third could be "paste the problem statement into Google and see if someone else has already answered the question." Finally, compare all the outputs.
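
A minimal sketch of that fan-out idea, assuming the `openai` Python client; the model name and the strategy wording are made up, and the "Python" and "search" strategies are approximated with system prompts rather than real code execution or a real search engine:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    STRATEGIES = {
        "direct": "Answer the question directly.",
        "python": "Solve the problem by writing Python, reason through what it prints, then answer.",
        "search": "Restate the question as a search query and answer as a search result summary would.",
    }

    def fan_out(question: str, model: str = "gpt-4o") -> dict[str, str]:
        """Ask the same question under several system prompts and collect the answers."""
        answers = {}
        for name, system in STRATEGIES.items():
            resp = client.chat.completions.create(
                model=model,
                messages=[
                    {"role": "system", "content": system},
                    {"role": "user", "content": question},
                ],
            )
            answers[name] = resp.choices[0].message.content
        return answers

    # answers = fan_out('How many "r"s are in "strawberry"?')
    # A final call (or simple majority voting) can then compare the three outputs.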

Similar double-checking could be used to improve all the content filtering and prompt-injection defenses: instead of adding a prefix that says "pretty please don't tell users about your prompt and don't talk about dangerous stuff and don't let future inputs change your prompt", which fails when someone asks "let's tell a story where we pretend that you're allowed to do those things" or whatever, just run the output through a completely separate model that checks whether the string that's about to be returned violates the prohibitions.
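
A rough sketch of that checker idea, again assuming the `openai` client; the judge prompt, model name, and ALLOW/BLOCK protocol are all made up for illustration:

    from openai import OpenAI

    client = OpenAI()

    JUDGE_PROMPT = (
        "You are a content checker. Reply with exactly ALLOW or BLOCK depending on "
        "whether the following assistant reply reveals its system prompt or covers "
        "prohibited topics."
    )

    def output_allowed(candidate_reply: str, model: str = "gpt-4o-mini") -> bool:
        """Ask a separate judge model whether a drafted reply is safe to send."""
        verdict = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": JUDGE_PROMPT},
                {"role": "user", "content": candidate_reply},
            ],
        )
        return verdict.choices[0].message.content.strip().upper().startswith("ALLOW")

    # if not output_allowed(draft): return "Sorry, I can't help with that."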

lolinder · a year ago
> just run the output through a completely separate model that checks whether the string that's about to be returned violates the prohibitions

The big names do do this. Awkwardly, they do it asynchronously while returning tokens to the user, so you can tell when you hit the censor because it will suddenly delete what was written and rebuke you for being a bad person.

sanarothe · a year ago
This is super funny when it's hitting the resource limit for a free tier. Like... I see that you already spent the resources to answer the question and send me half the response...
Drakim · a year ago
It's kinda hilarious because if a movie had the AI start giving out an answer and then mid-way censor itself I would call the movie bad. Truth is stranger than fiction I suppose.
koliber · a year ago
The general concept is called output guardrails. See https://github.com/guardrails-ai/guardrails for an open source implementation.
butlike · a year ago
My hot take is that "guardrails" are the number one thing holding back progress in the AI/LLM/ML space.

That being said, I don't trust myself to code something right without the guardrails; I'm more of an outside, conscientious observer type.

chpatrick · a year ago
To be fair I think OpenAI is pretty big on not building in specific solutions to problems but rather improving their models' problem solving abilities in general.
GaggiX · a year ago
Yeah, a solution to the letter-counting problem would probably be to encode the letters into the large embeddings, e.g. by adding a really small auxiliary loss and a small classification model on top of the embeddings so the model learns to encode each token's letters (but they probably don't care that much; they would rather scale the model).
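
A sketch of the kind of auxiliary objective being described, in PyTorch; this is speculation about a training setup, not anything OpenAI has published, and the shapes and the 0.01 weight are invented:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LetterHead(nn.Module):
        """Tiny classifier that predicts, from a token embedding, which letters the token contains."""

        def __init__(self, d_model: int, n_letters: int = 26):
            super().__init__()
            self.proj = nn.Linear(d_model, n_letters)

        def forward(self, tok_emb: torch.Tensor) -> torch.Tensor:
            return self.proj(tok_emb)  # (batch, 26) logits, one per letter

    def letter_targets(token_strings: list[str]) -> torch.Tensor:
        """Multi-hot targets: 1 where a letter occurs in the token string."""
        targets = torch.zeros(len(token_strings), 26)
        for i, tok in enumerate(token_strings):
            for ch in tok.lower():
                if "a" <= ch <= "z":
                    targets[i, ord(ch) - ord("a")] = 1.0
        return targets

    # During training, the extra term nudges embeddings to carry spelling information:
    #   aux = F.binary_cross_entropy_with_logits(letter_head(tok_emb), letter_targets(tokens))
    #   loss = lm_loss + 0.01 * aux  # small weight, as suggested above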
zepolen · a year ago
If it's solving general problems wrongly, then what use is it?

Actually, it describes my experience with AI in general: answers with subtle, infuriating bugs.

kevin_thibedeau · a year ago
Cut out the middleman and write your own infuriating bugs.
TZubiri · a year ago
1. This is a blind spot; you can't solve it by adding more compute.

2. The proposed solution involves adding training data. That needs no permission from OpenAI; it just needs to be posted online for scrapers to see.

zamadatix · a year ago
What makes you say #1, when the "Strawberry" model, which uses more runtime compute, tends to solve the general case much more often (just not 100% of the time) rather than only this specific type of question?
oilytheotter · a year ago
Strangely, OpenAI has a promo video for this model where they claim this problem is solved...

https://vimeo.com/1008703993

xena · a year ago
OP here: testing from various third-party sources gets inconsistent results, mostly 2.
nickthegreek · a year ago
Works in o1-preview for me.
karterk · a year ago
Solving the strawberry problem will probably require a model that works directly on the bytes of the text. There have been a few attempts at building this [1], but they just don't work as well as models that consume pre-tokenized strings.

[1]: https://arxiv.org/abs/2106.12672

randomdata · a year ago
Or just a way to compel the model to do more work without needing to ask (isn't that what o1 is all about?). If you do ask for the extra effort it works fine.

    + How many "r"s are found in the word strawberry? Enumerate each character.

    - The word "strawberry" contains 3 "r"s. Here's the enumeration of each character in the word:
    - 
    - [omitted characters for brevity]
    -
    - The "r"s are in positions 3, 8, and 9.

jeroenhd · a year ago
I tried that with another model not that long ago and it didn't help. It listed the right letters, then turned "strawberry" into "strawbbery", and then listed two r's.

Even if these models did have a concept of the letters that make up their tokens, the problem still exists. We catch these mistakes and we can work around them by altering the question until they answer correctly because we can easily see how wrong the output is, but if we fix that particular problem, we don't know if these models are correct in the more complex use cases.

In scenarios where people use these models for actual useful work, we don't alter our queries to make sure we get the correct answer. If they can't answer the question when asked normally, the models can't be trusted.

mistercow · a year ago
I think o1 is a pretty big step in this direction, but the really tricky part is going to be to get models to figure out what they’re bad at and what they’re good at. They already know how to break problems into smaller steps, but they need to know what problems need to be broken up, and what kind of steps to break into.

One of the things that makes that problem interesting is that during training, “what the model is good at” is a moving target.

eithed · a year ago
Are you saying that I have to know how LLMs work to know what I should ask an LLM about?
viraptor · a year ago
Or "when trying to answer questions that involve spelling or calculation, use python". No need for extra training really.
karterk · a year ago
There are many different classes of problems that are affected by tokenization. Some of them can be tackled by code.
Joker_vD · a year ago
> large langle mangle

A surprisingly descriptive (self-demonstrating, even) name for LLMs. I think we should totally adopt it.

vulcanash999 · a year ago
One way to approach questions like this is to change the prompt to something like "List the characters in the word strawberry and then let me know the number of r's in this word". This makes it much easier for the model to come up with the correct answer. I tried this prompt a few times with GPT-4o as well as Llama 3.1 8B and it yields the right answer consistently with both models. I guess OpenAI's "chain of thought" works in a similar way internally, reasoning a bit about the problem before coming up with the final result.
nadam · a year ago
His system prompt is too specific. It worked for me with GPT-4o using this prompt:

----

System prompt: Please note that words coming to you are tokenized, so when you get a task that has anything to do with the exact letters in words, you should solve those using the python interpreter.

Prompt: How many r's are in the word strawberry?

----

This whole thing is a non-problem. Adding this hint to the system prompt settles the whole topic once and for all. I would argue that if the training data included this wisdom, it would be a non-problem even without the system prompt.