With all the buzz around new AI models being released (what feels like every other week), how are companies running internal AI evals to determine which model is best for their use case?
Depending on how they're composed, different teams do vastly different things: no evals at all, integration tests with no tooling, observability tools like LangFuse wired into their CI/CD, or tools like Arize Phoenix, DeepEval, Braintrust, promptfoo, and PydanticAI used throughout development.
It's definitely an afterthought for most teams although we are starting to see increased interest.
My hope is that we can start treating evals as a common language for "product" across role families, so I'm trying some advocacy [1] while keeping it very simple, including wrapping coding agents like Claude. Sandboxing and observability "for the masses" are still quite a hard concept, but the UX is getting better with time.
What are you doing for yourself/your teams? If not much yet, I'd recommend just starting and figuring out where the friction/value is for you.
What I've noticed is that it's hard to measure outputs that aren't binary right or wrong, and that's where most human intervention is needed. The biggest examples of this are chatbots and coding agents – basically any output where you can say "hmm well that's a good response, but there is a better response" and that's what still _feels_ like an unsolved problem, benchmarking those kinds of responses.
On top of that, different model+prompt combinations give different results. For example, a prompt could yield a great response from Claude, but the same prompt could yield a mediocre one from Gemini. Different models also have different capabilities (for example, composite function calling doesn't work the same way across models).
I'm asking because I'm genuinely curious about how teams are solving this today – it _seems_ like there is no gold standard for evals yet, although interest is growing.
How I do evals today is by testing outputs across different dimensions (which vary by use case): relevance, instruction following, clarity, hallucination rate, etc. This eats a lot of time (and can never be fully accurate, because how do you fully measure something like "clarity"?), and I feel like there's a better way out there.
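For the multi-dimension approach described above, one common pattern is collapsing the per-dimension scores into a single tracking number. Here's a minimal sketch; the dimension names, weights, and scoring scale are all illustrative assumptions, not a standard:

```python
# Hypothetical sketch: aggregating per-dimension eval scores into one number.
# Dimensions and weights are illustrative and should be tuned per use case.

WEIGHTS = {
    "relevance": 0.4,
    "instruction_following": 0.3,
    "clarity": 0.2,
    "hallucination_free": 0.1,  # 1.0 = no hallucinations detected
}

def aggregate(scores: dict) -> float:
    """Weighted mean of per-dimension scores, each in [0, 1]."""
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

example = {"relevance": 0.9, "instruction_following": 1.0,
           "clarity": 0.7, "hallucination_free": 1.0}
print(round(aggregate(example), 3))
```

The single number makes regressions visible across runs, but keeping the per-dimension breakdown is what tells you *what* regressed.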
I use LLMs to determine what a caller’s “intent” is. I do my best with my initial prompt and then I have the “business” test it and I log phrases that they use.
I then make those phrases my scripted test suite. Any change to prompts or models gets put through the same suite. In my case, I give my customers a website they can use to test new prompts, and it takes care of versioning.
I also log phrases that didn’t trigger an intent and modify the prompt and put it back through the suite.
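A regression suite like the one described could be as small as a list of logged phrases paired with expected intents. In this sketch, `detect_intent` is a keyword stand-in for the real LLM call (an assumption, purely for illustration):

```python
# Illustrative regression suite for intent detection. detect_intent is a
# stand-in for the LLM-backed classifier; the phrase -> expected-intent
# pairs come from logging what the "business" actually says.

def detect_intent(phrase: str) -> str:
    # Placeholder logic; the real version would call the model.
    phrase = phrase.lower()
    if "balance" in phrase:
        return "check_balance"
    if "cancel" in phrase:
        return "cancel_service"
    return "unknown"

SUITE = [
    ("what's my balance?", "check_balance"),
    ("I want to cancel my plan", "cancel_service"),
    ("talk to a human", "unknown"),  # logged phrase that missed all intents
]

def run_suite():
    failures = [(p, want, detect_intent(p))
                for p, want in SUITE if detect_intent(p) != want]
    print(f"{len(SUITE) - len(failures)}/{len(SUITE)} passed")
    return failures

run_suite()
```

Any prompt or model change reruns the same suite, so a regression on a previously-logged phrase surfaces immediately.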
No. I also use the least sophisticated but fastest model that Amazon hosts (and it hosts all of them except OpenAI models): Nova Lite.
Going from free text to a tool call with parameters is, in the grand scheme of things, one of the easiest things to do, especially when you only have a limited number of tools.
For a co-pilot inside an app that could answer product questions, I looked at ~2000 support emails. I asked one LLM to dig out "How would you formulate the user's question into a chatbot-like question from this email thread?" and "What is the actual answer that should be in the response, based on this email thread?", then asked our bot that question and had another LLM rate the answer as SUPERIOR | ACCEPTABLE | UNKNOWN, etc. These labels proved to be a good "finger in the wind" indicator when altering the chunks, changing prompts, or updating models.
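The grading loop described above can be sketched roughly like this. Both `judge` and the canned cases are stand-ins (the real judge would be a second LLM prompted with the reference answer); only the label set comes from the comment:

```python
from collections import Counter

# Sketch of the two-LLM grading loop. judge is a placeholder for the
# second LLM call; a real judge would be prompted with the question,
# the reference answer mined from the email thread, and the bot's answer.

LABELS = ("SUPERIOR", "ACCEPTABLE", "UNKNOWN")

def judge(question: str, reference: str, answer: str) -> str:
    # Crude stand-in: real grading would ask an LLM to pick one of LABELS.
    return "ACCEPTABLE" if reference.lower() in answer.lower() else "UNKNOWN"

cases = [  # (question mined from email, reference answer, bot answer)
    ("How do I reset my password?", "settings page", "Go to the Settings page."),
    ("Can I export to CSV?", "yes, via the export menu", "We don't support that."),
]

tally = Counter(judge(q, ref, ans) for q, ref, ans in cases)
print(dict(tally))
```

Tracking the label distribution over time is what gives the "finger in the wind" signal after a chunking, prompt, or model change.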
For an invoice processing app handling about 14M invoices/year, it was mostly fuzzy accuracy metrics against a pretty decent annotated dataset, iterating on the prompt based on diffs for a long time. Once you had that dataset, you could alter things and see what broke.
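One way to implement "fuzzy accuracy" against an annotated record (an assumption about the method, not the author's exact code) is per-field string similarity with a pass threshold, which tolerates noise like stray commas in extracted totals:

```python
from difflib import SequenceMatcher

# Hypothetical fuzzy field-level accuracy against a gold-annotated record.
# A field counts as correct when its similarity clears the threshold.

def field_score(pred: str, gold: str) -> float:
    return SequenceMatcher(None, pred.lower(), gold.lower()).ratio()

def fuzzy_accuracy(pred: dict, gold: dict, threshold: float = 0.9) -> float:
    hits = sum(field_score(pred.get(k, ""), v) >= threshold
               for k, v in gold.items())
    return hits / len(gold)

gold = {"vendor": "Acme GmbH", "total": "1,204.50", "currency": "EUR"}
pred = {"vendor": "ACME GmbH", "total": "1204.50", "currency": "EUR"}
print(fuzzy_accuracy(pred, gold))
```

Aggregated over the whole annotated set, this is the number you diff after each prompt or model change to "see what broke."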
Currently, I work on an app with a pretty sophisticated prompt chain flow. Depending on bugs etc., we test against _behaviour_, like intent recognition or the correct SQL filters. As long as the baseline behaviour is correct, whichever model is powering it is not so important. For the final output, it's humans. But we know immediately if some model or prompt change broke a particular intent.
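A behaviour-level test of the kind described might look like this sketch: instead of diffing final text, assert on an intermediate artifact such as the generated SQL filters. `generate_sql` is a hypothetical stand-in for the prompt-chain step under test:

```python
# Hedged sketch of a behaviour test: check that the pipeline produced the
# expected SQL filters, regardless of how the final answer is phrased.

def generate_sql(question: str) -> str:
    # Placeholder for the LLM step being tested; returns a canned query here.
    return "SELECT * FROM orders WHERE status = 'open' AND region = 'EU'"

def has_filters(sql: str, required: list[str]) -> bool:
    sql = sql.lower()
    return all(f.lower() in sql for f in required)

sql = generate_sql("open EU orders")
assert has_filters(sql, ["status = 'open'", "region = 'EU'"])
print("behaviour check passed")
```

Because the assertion targets behaviour rather than wording, the same test keeps working when the underlying model is swapped.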
This makes sense. I am particularly interested in your invoice processing app example, because the accuracy of those outputs can be measured quantitatively from 0% to 100%.
I'm curious what counts as _good enough_ and how many iterations it takes to get there. Is 100% the only acceptable threshold? If so, how many iterations does that take, and what does the process look like? If 100% accuracy is too difficult to reach, how do you choose your minimum acceptable threshold (is 95% good enough? 90%?)? Do you have a dedicated set of outputs and documents used for evals? I'd love to hear more about this example (if you worked directly on the evals for this app).
We were lucky enough to have PMs create a set of questions; we did a round of generation and added pass/fail annotations on each response.
From there we bootstrapped an AI-as-a-judge and approximately replicated the results. Now we can plug in new models and change prompts and pipelines while still approximating the original feedback signal. It's not an exact match, but it's wildly better than one-off testing and the regressions that brings.
We're able to confidently make changes without accidentally breaking something else. Overall win, but it can get costly if the iteration count is high.
This is an interesting approach, thanks for the insight! If I may ask, _approximately_ how long does it take to test a newly released model with your current strategy?
The vast majority of AI companies I talk to seem to evaluate models mostly based on vibes.
At my company, we use a mix of offline and online evals. I’m primarily interested in search agents, so I’m fortunate that information retrieval is a well-developed research field with clear metrics, methodology, and benchmarks. For most teams, I recommend shipping early/dogfooding internally, collecting real traces, and then hand-curating a golden dataset from those traces.
Many people run simple ablation experiments where they swap out the model and see which one performs best. That approach is reasonable, but I prefer a more rigorous setup.
If you only swap the model, some models may appear to perform better simply because they happen to work well with your prompt or harness. To avoid that bias, I use GEPA to optimize the prompt for each model/tool/harness combination I’m evaluating.
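The fairness point above can be made concrete with a toy comparison: score each model with its own best prompt variant rather than one shared prompt. The scores below are canned numbers; in practice each cell would be an eval run (e.g. with a GEPA-optimized prompt):

```python
# Illustrative model comparison. scores[model][prompt_variant] would come
# from real eval runs; the numbers here are hypothetical.

scores = {
    "model_a": {"p1": 0.72, "p2": 0.81},
    "model_b": {"p1": 0.79, "p2": 0.74},
}

# Fair comparison: each model gets its best prompt.
best = {m: max(variants.values()) for m, variants in scores.items()}

# Biased comparison: one shared prompt (p1) happens to suit model_b.
shared_p1 = {m: variants["p1"] for m, variants in scores.items()}

print("per-model best:", best)
print("shared prompt p1:", shared_p1)
```

With the shared prompt, model_b looks stronger; with per-model optimized prompts, model_a wins — which is exactly the bias the per-combination optimization avoids.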
Ah, interesting – yeah, only swapping out the model isn't super insightful, since models perform differently given different prompts. I'm going to look into GEPA, thanks!
The more you can afford to build up your understanding of the problem space and define what inputs & outputs look like, the more flexible you can be with evals. Unfortunately, this is a lot of work and requires thinking and discussion with your team and those involved.
I was thinking about something similar the other day. I have seen a repeating pattern of people complaining that a new model comes out, it's amazing for a few weeks, then they nerf it.
Most of these claims are subjective. I was thinking: if we had a standardized chain-of-thought representation, and if we could capture each model's chain of thought in this standardized format, we could compare them for the same tasks we run.
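A purely speculative sketch of what such a standardized representation could look like: a common step schema that traces from different models get normalized into, so their structure becomes comparable. All names here are invented:

```python
from dataclasses import dataclass, field

# Speculative schema for a standardized chain-of-thought trace, so traces
# from different models can be compared on the same task.

@dataclass
class ReasoningStep:
    kind: str   # e.g. "recall", "derive", "tool_call", "conclude"
    text: str

@dataclass
class Trace:
    model: str
    task_id: str
    steps: list[ReasoningStep] = field(default_factory=list)

    def step_kinds(self) -> list[str]:
        return [s.kind for s in self.steps]

a = Trace("model_a", "t1",
          [ReasoningStep("recall", "..."), ReasoningStep("conclude", "...")])
b = Trace("model_b", "t1",
          [ReasoningStep("derive", "..."), ReasoningStep("conclude", "...")])
print(a.step_kinds() == b.step_kinds())  # compare reasoning structure
```

The hard part, of course, is the normalization itself: mapping each model's free-form reasoning onto the shared `kind` vocabulary.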
Yeah that's essentially what I'm looking for. Since now that AI has become such a core part of most businesses, it's pretty critical to use the _best_ models + prompts for whatever your use case is.
- [1] https://ai-evals.io/ (practical examples https://github.com/Alexhans/eval-ception)
https://news.ycombinator.com/item?id=47241412
https://poyo.co/note/20260217T130137/
I wrote about the general ideas I take toward simple single-prompt features, but most of it is applicable to more involved agentic approaches too.