This is what the future of "AI" has to look like: fully traceable inference steps that can be inspected and adjusted if needed.
Without this, I don't see how we (the general population) can maintain any control - or even understanding - of these increasingly large and opaque LLM-based long-inference "AI" systems.
Without transparency, Big Tech, autocrats and eventually the "AI" itself (whether "self-aware" or not) will do whatever they like with us.
I agree transparency is great. But making the response inspectable and adjustable is a huge UI/UX challenge. It's good to see people take a stab at it. I hope there's a lot more iteration in this area, because there's still a long way to go.
At the very least, we need to know what training data goes into each AI model. Maybe there needs to be a third-party company that does audits and provides transparency reports, so that even with proprietary models there are some checks and balances.
I used the Ai2 Playground with Olmo 3 32B Think and asked it to recommend a language for a green-field web app based on a list of criteria. It gave me a very good and well-reasoned answer (Go, with Rust as a backup), formatted like a high-quality ChatGPT or Claude response.
I then had it show the "OlmoTrace" for its response, which seems to find exact matches for text strings from its training data that end up in the response. Some of the matched sources were related (pages about Go, Rust, Python, etc.), while others were completely unrelated and just happened to contain the same turn of phrase (e.g. "Steeper learning curve").
It was interesting, but is it useful? It was impossible for me to actually fact-check any of the claims in the response based on the matched training data. At this stage, it felt about as helpful as linking every word to that word's entry in a dictionary. "Yep, that's a word alright." I don't think it's really tracing the "thought."
What could be interesting is if the user could dynamically exclude certain training sources before the response is generated. Like, I want to ask a question about climate change, but I want to exclude all newspapers and focus on academic journals.
Transparency is a good first step, but I think we're missing the "Step 2."
> It was impossible for me to actually fact-check any of the claims in the response based on the matched training data.
This is true! The point of OlmoTrace is to show that even the smallest phrases generated by a language model are a product of its training data. It's not verification; a search system doing post hoc checks would be much more effective.
Thanks for the reply! Olmo is fascinating, and beyond the traceability aspect, I really appreciate that you all are releasing true open source models complete with data, training code and weights.
I was too dismissive in my comment—even if it's going to be a longer journey, the first step is still tremendously valuable. Thank you!
"What could be interesting is if the user could dynamically exclude certain training sources before the response is generated."
Yes, and also add new resources on the fly. Unfortunately that requires retraining every time you do it, so it's not really possible, but if you find a way... I guess many will be interested.
I asked it if giraffes were kosher to eat and it told me:
> Giraffes are not kosher because they do not chew their cud, even though they have split hooves. Both requirements must be satisfied for an animal to be permissible.
HN will have removed the extraneous emojis.
This is at odds with my interpretation of giraffe anatomy and behaviour and of Talmudic law.
Luckily old sycophant GPT5.1 agrees with me:
> Yes. They have split hooves and chew cud, so they meet the anatomical criteria. Ritual slaughter is technically feasible though impractical.
Models should not have memorised whether animals are kosher to eat or not. This is information that should be retrieved from RAG or whatever.
If a model responded with "I don't know the answer to that", then that would be far more useful. Is anyone actually working on models that are trained to admit not knowing an answer to everything?
There is an older paper on something related to this [1], where the model outputs reflection tokens that either trigger retrieval or critique steps. The idea is that the model recognizes that it needs to fetch some grounding subsequent to generating some factual content. Then it reviews what it previously generated with the retrieved grounding.
The problem with this approach is that it does not generalize well at all out of distribution. I'm not aware of any follow-up to this, but I do think it's an interesting area of research nonetheless.
[1] https://arxiv.org/abs/2310.11511
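For anyone who hasn't read it, the loop in that paper is roughly: the model emits reflection tokens mid-generation, those tokens trigger retrieval, and a critique step scores how well each retrieved-and-regenerated segment is supported. A minimal sketch of that control flow, with made-up helper names and token strings standing in for the paper's actual interface:

    // Sketch of the control flow in a Self-RAG-style setup: the model's own
    // "reflection tokens" decide when to retrieve and which draft segment to keep.
    // Helper names and token strings are illustrative, not the paper's interface.
    type Segment = { text: string; tokens: string[] };

    interface SelfRagModel {
      generateSegment(context: string, evidence?: string[]): Segment;
      retrieve(query: string): string[];
      critiqueScore(segment: Segment, evidence: string[]): number; // higher = better supported
    }

    function selfRagAnswer(model: SelfRagModel, question: string): string {
      const answer: string[] = [];
      for (let step = 0; step < 16; step++) { // hard cap rather than trusting [EOS]
        let seg = model.generateSegment(question + " " + answer.join(" "));
        if (seg.tokens.includes("[Retrieve]")) {
          // A reflection token triggers retrieval; candidates grounded on each
          // passage are re-scored by the critique step and the best one is kept.
          const evidence = model.retrieve(question);
          const candidates = evidence.map(e =>
            model.generateSegment(question + " " + answer.join(" "), [e]));
          candidates.sort((a, b) =>
            model.critiqueScore(b, evidence) - model.critiqueScore(a, evidence));
          seg = candidates[0] ?? seg;
        }
        answer.push(seg.text);
        if (seg.tokens.includes("[EOS]")) break;
      }
      return answer.join(" ").trim();
    }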
There is a four-choice question. Your best guess is that the answer is B, with about a 35% chance of being right. If you are graded on the fraction of questions answered correctly, the optimization pressure is simply to answer B.
If you could get half credit for answering "I don't know", we'd have a lot more models saying that when they are not confident.
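To spell out the arithmetic behind that incentive (using the 35% and half-credit numbers from the example above):

    // Expected score for one question under a given grading scheme.
    function expectedScore(confidence: number, abstain: boolean, abstainCredit: number): number {
      return abstain ? abstainCredit : confidence; // a guess only pays off when it's right
    }

    // Accuracy-only grading: guessing B at 35% confidence beats "I don't know".
    console.log(expectedScore(0.35, false, 0)); // 0.35
    console.log(expectedScore(0.35, true, 0));  // 0

    // Half credit for abstaining flips the incentive below 50% confidence.
    console.log(expectedScore(0.35, false, 0.5)); // 0.35
    console.log(expectedScore(0.35, true, 0.5));  // 0.5

Under accuracy-only grading, abstaining is strictly dominated; with half credit it wins whenever confidence is below 0.5.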
>Models should not have memorised whether animals are kosher to eat or not.
Agreed. Humans do not perform rote memorization for all possibilities of rules-based classifications like "kosher or not kosher".
>This is information that should be retrieved from RAG or whatever.
Firm disagreement here. An intelligent model should either know (general model) or RAG-retrieve (non-general model) the criteria for evaluating whether an animal is kosher or not, and infer based on knowledge of the animal (either general model, or RAG-retrieval for a non-general model) whether or not the animal matches the criteria.
>If a model responded with "I don't know the answer to that", then that would be far more useful.
Again, firm disagreement here. "I don't know" is not a useful answer to a question that can be easily answered by cross-referencing easily-verifiable animal properties against the classification rules. At the very least, an intelligent model should explain which piece of information it is missing (properties of the animal in question OR the details of the classification rules), rather than returning a zero-value response.
To wit: if you were conducting an interview for a developer candidate, and you asked them whether Python supports functions, methods, both, or neither, would "I don't know" ever be an appropriate answer, even if the candidate genuinely didn't know off the top of their head? Of course not - you'd desire a candidate who didn't know to say something more along the lines of "I don't know, but here's what I would do to figure out the answer for you".
A plain and simple "I don't know" adds zero value to the conversation. While it doesn't necessarily add negative value to the conversation the way a confidently incorrect answer does, the goal for intelligent models should never be to produce zero value, it should be to produce nonzero positive value, even when it lacks required information.
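For what it's worth, the rule check itself is tiny once the animal's properties are known; the hard part is having the facts right, which is where the giraffe answer above went wrong. A sketch (the giraffe values reflect the standard view that giraffes ruminate and have split hooves):

    // The kosher land-animal rule (simplified): both criteria must hold.
    interface AnimalFacts {
      name: string;
      chewsCud: boolean;
      hasSplitHooves: boolean;
    }

    function isKosherLandAnimal(a: AnimalFacts): boolean {
      return a.chewsCud && a.hasSplitHooves;
    }

    // Giraffes are ruminants with split hooves, so the rule says yes; a model that
    // gets this wrong has mis-stored a fact about the animal, not the rule.
    console.log(isKosherLandAnimal({ name: "giraffe", chewsCud: true, hasSplitHooves: true })); // true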
Sorry I lost the chat, but it was default parameters on the 32B model. It cited some books saying that they had three stomachs and didn't ruminate, but after I pressed on these points it admitted that it left out the fourth stomach because it was small, and claimed that the rumination wasn't "true" in some sense.
Due to reforms around the first centuries of the Common Era, trivia questions to certain tribal priests are no longer a litmus test for acceptable public goods in the marketplace.
Above the response it says:
> Documents from the training data that have exact text matches with the model response. Powered by infini-gram
so, if I understand correctly, it searches the training data for matches in the LLM output. This is not traceability in my opinion. This is an attempt at guessing.
Checking individual sources, I got texts completely unrelated to the question/answer, but that happen to share an N-gram [1] (I saw sequences of up to 6 words) with the LLM answer.
I think they're being dishonest in their presentation of what Olmo can and can't do.
[1] https://en.wikipedia.org/wiki/N-gram
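To make concrete what "exact text matches" means here, a toy version of the matching is just word-level n-gram overlap between the answer and corpus documents. The real backend (infini-gram) uses a corpus index rather than a scan like this, but the notion of a match is the same surface-level overlap, which is why unrelated pages can show up:

    // Toy illustration of what "exact text matches" means: documents sharing at
    // least one run of n consecutive words with the model's answer. A page can
    // match purely because it reuses a common phrase.
    function wordNgrams(text: string, n: number): Set<string> {
      const words = text.toLowerCase().split(/\s+/).filter(Boolean);
      const grams = new Set<string>();
      for (let i = 0; i + n <= words.length; i++) {
        grams.add(words.slice(i, i + n).join(" "));
      }
      return grams;
    }

    function docsSharingPhrase(answer: string, docs: string[], n = 6): number[] {
      const target = wordNgrams(answer, n);
      return docs
        .map((doc, i) => ({ i, hit: [...wordNgrams(doc, n)].some(g => target.has(g)) }))
        .filter(d => d.hit)
        .map(d => d.i);
    }

    // A cooking blog containing "has a steeper learning curve than" would match an
    // answer about Go vs. Rust purely on that shared six-word phrase.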
Olmo researcher here. The point of OlmoTrace is not to attribute the entire response to one document in the training data—that's not how language models "acquire" knowledge, and finding a single document or a few documents as support for an answer is impossible.
The point of OlmoTrace is to show that fragments of the model response are influenced by its training data. Sometimes it's the way specific adjectives are used together in ways that seem unnatural to us but are a combination of training data (ask it for a movie review!).
A favorite example of mine is asking it to tell a joke or give a random number, because, strangely, all LLMs return the same joke or number. Well, with OlmoTrace you can see which docs in the training data contain the super-common response!
Hope this helps.
I think they should start aiming for 20B models along with 32B and 7B. Usually 7B is enough for an 8GB GPU, and 32B requires a 24GB GPU for decent quants (I can fit a 32B with IQ3_XXS, but it's not ideal), while 20-ish-B models (such as Magistral or gpt-oss) are a perfect fit for 16GB GPUs.
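The sizing claims roughly match back-of-the-envelope math: weight memory is parameter count times bits per weight over eight, plus headroom for KV cache and runtime overhead. A rough calculator (the bits-per-weight figures approximate common GGUF quants, e.g. Q4_K_M at about 4.8 and IQ3_XXS at about 3.1, and the 20% overhead is a guess, not a measurement):

    // Rough VRAM estimate for a quantized model: weights plus ~20% headroom for
    // KV cache, activations, and runtime overhead (the 20% is a guess).
    function estimateVramGB(paramsB: number, bitsPerWeight: number, overhead = 1.2): number {
      const weightGB = (paramsB * 1e9 * bitsPerWeight) / 8 / 1e9;
      return +(weightGB * overhead).toFixed(1);
    }

    console.log(estimateVramGB(7, 4.8));  // ~5.0 GB: comfortable on an 8 GB card
    console.log(estimateVramGB(20, 4.5)); // ~13.5 GB: fits a 16 GB card
    console.log(estimateVramGB(32, 4.8)); // ~23.0 GB: wants a 24 GB card
    console.log(estimateVramGB(32, 3.1)); // ~14.9 GB: IQ3_XXS squeezed into 16 GB, hence "not ideal"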
Depends heavily on the architecture too. I think a free-for-all to find the better sizes is still kind of ongoing, and rightly so. GPT-OSS-120B, for example, fits in around 61GB of VRAM for me when on MXFP4.
Personally, I hope GPU makers instead start adding more VRAM, or if one can dream, expandable VRAM.
I'm just now moving my main workflows off OpenAI over to local models, and I'm starting to find that these smaller models' main failure mode is that they will accept edge cases with the goal of being helpful.
Especially in extraction tasks. This shows up as inventing data or rationalizing around clear roadblocks.
My biggest hack so far is giving them an out named "edge_case" and telling them it is REALLY helpful if they identify edge cases. Simply renaming "fail_closed" or "dead_end" options to "edge_case", with helpful wording, causes Qwen models to adhere to their prompting more.
It feels like there are hundreds of these small hacks that people must have discovered... why isn't there a centralized place where people record these learnings?
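A minimal sketch of that escape hatch using zod-style structured output (the schema and field names are made up for illustration; the point is that "edge_case" is a first-class option that doesn't sound like failure):

    import { z } from "zod";

    // Extraction result where flagging an edge case is framed as a helpful,
    // first-class outcome rather than a failure ("edge_case", not "fail_closed").
    const ExtractionResult = z.discriminatedUnion("kind", [
      z.object({
        kind: z.literal("extracted"),
        invoiceNumber: z.string(),
        totalCents: z.number().int(),
      }),
      z.object({
        kind: z.literal("edge_case"),
        // Asking for a reason keeps the model honest about why it bailed.
        reason: z.string(),
      }),
    ]);

    type ExtractionResult = z.infer<typeof ExtractionResult>;

    // The system prompt then says something like: "It is REALLY helpful if you
    // return kind 'edge_case' whenever the document doesn't clearly contain the
    // fields, instead of guessing."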
Exactly. Using LM Studio you can get structured output just like with OpenAI... I have everything in a zod schema and wrote an adapter for both the OpenAI and LM Studio instances.
I run LM Studio for ease of use on several Mac Studios that are fronted by a small token-aware router that estimates resource usage on the Mac Studios.
There's lots of optimization left there, but the systems are pinned most of the time, so I'm not focused on that at the moment; the GPUs are the issue, not the queuing.
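In case it helps anyone building something similar, the routing described above can be as simple as a characters-to-tokens estimate plus a per-host in-flight budget; the hostnames, ports, ratio, and capacity numbers below are all placeholders:

    // Naive token-aware router: estimate prompt tokens (~4 chars per token) and
    // send the request to the host with the most spare estimated capacity.
    interface Host { url: string; capacityTokens: number; inFlightTokens: number }

    const hosts: Host[] = [
      { url: "http://studio-1.local:1234/v1", capacityTokens: 32_000, inFlightTokens: 0 },
      { url: "http://studio-2.local:1234/v1", capacityTokens: 32_000, inFlightTokens: 0 },
    ];

    const estimateTokens = (prompt: string): number => Math.ceil(prompt.length / 4);

    function pickHost(prompt: string): Host {
      const need = estimateTokens(prompt);
      // Pick the host with the most headroom after accounting for this request.
      return hosts.reduce((best, h) =>
        h.capacityTokens - h.inFlightTokens - need >
        best.capacityTokens - best.inFlightTokens - need ? h : best);
    }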
Also: I simply asked the 7B model "hi, who are u":
Thought for 1 seconds
Okay, the user is asking "Who are u?" with a casual tone ("u" instead of "you", exclamation mark). Let me break this down.
First, I need to address the question itself. They want to know who I am as an AI assistant. My standard response is that I'm DeepSeek R1, designed to be helpful and handle reasoning step by step. But maybe they're not familiar with AI terms in general. The "u" part might just be their way of speaking, so I should reply politely using "you" form.
Next, considering the context. They started with an HTML tag " "
----
And it hung there without a reply. It might be an OpenWebUI bug, but it's not so promising.
Where did you try this? On the Ai2 playground?
I guess Ollama needs to update their version, maybe!
7B: Hi! I'm Olmo 3, an AI assistant created by the non-profit organization Ai2. I'm here to help with questions, ideas, or tasks you have—just let me know what you need! How can I assist you today? Rawr!
32B: Hi! I'm Olmo, a helpful AI assistant built by the Allen Institute for AI (Ai2). My knowledge is up to December 2024, and I'm designed to assist with a wide range of tasks. How can I help you today?
Every time a new model is released, there are a bunch of reports or written experiences about people using the model with software that seemingly doesn't support it.
GPT-OSS really made that clear, where 90% of the ecosystem declared it broken, but most people were using dumb quants and software that didn't properly support it. Guess we'll repeat the same thing with OLMo now.
I'm really glad to read this, as this was my experience in LM Studio with Olmo. It worked for the first message but got progressively more unstable. It also doesn't seem to reset model state for a new conversation; every response following the model load gets progressively worse, even in new chats.
There are a bunch (currently 3) of examples of people getting funny output, two of which say it's in LM Studio (I don't know what that is). It does seem likely that it's somehow being misused here and the results aren't representative.
Reminds me of an old joke where a guy is walking down the street and another person says “good morning”. The guy starts deconstructing what “good morning” means until he finally reaches the conclusion “that bastard was calling me an asshole”.
Qwen3-30B-VL is going to be fucking hard to beat as a daily driver, it's so good for the base 80% of tasks I want an AI for, and holy fuck is it fast. 90tok/s on my machine, I pretty much keep it in vram permanently. I think this sort of work is important and I'm really glad it's being done, but in terms of something I want to use every day there's no way a dense model can compete unless it's smart as fuck. Even dumb models like Qwen3-30B get a lot of stuff right and not having to wait is amazing.
Thanks for the hint. I just tried it on a brand-new Mac laptop, and it's very slow here. But it led me to test qwen2.5:14b, and it looks like it can create an instant feedback loop.
It can even interact through fluent Esperanto, very nice.
I'm specifically talking about qwen3-30b-a3b, the MoE model (this also applies to the big one). It's very very fast and pretty good, and speed matters when you're replacing basic google searches and text manipulation.
Olmo author here, but I can help! The first release of Qwen 3 left a lot of performance on the table because they had some challenges balancing thinking and non-thinking modes. The VL series has a refreshed post-train, so they are much better!
Ahaha, sorry, that was unclear. While I think the VL version is maybe a bit more performant, by "dumb" I meant any low-quant, low-size model you're going to run locally, whereas a "smart" model in my book is something like Opus 4.1 or Gemma 3.
I basically class LLM queries into two categories: there's stuff I expect most models to get, and there's stuff I expect only the smartest models to have a shot at getting right. There's some stuff in the middle ground that a quant model running locally might not get, but that something dumb-but-acceptable like Sonnet 4.5 or Kimi K2 might be able to handle.
I generally just stick to the two extremes and route my queries accordingly. I've been burned by Sonnet 4.5/GPT-5 too many times to trust it.
> Without this, I don't see how we (the general population) can maintain any control - or even understanding - of these increasingly large and opaque LLM-based long-inference "AI" systems.
> Without transparency, Big Tech, autocrats and eventually the "AI" itself (whether "self-aware" or not) will do whatever they like with us.
I think so too; there are a lot of things the public can do, and raising awareness about this stuff could be great as well.
If you don’t know the answer to a question, retrying multiple times only serves to amplify your bias, you have no basis to know the answer is correct.