See also this related tool: https://news.ycombinator.com/item?id=36907074
One big problem we're seeing in this space is over-trust in LLM scorers as 'evaluators'. I've personally seen minor tweaks to a scoring prompt produce vastly different evaluation 'results.' Given recent debacles (https://news.ycombinator.com/item?id=36370685), I'm wondering how we can design LLMOps evaluation tools that both support the use of LLMs as scorers and caution users about their results. Are you thinking along similar lines, or have you seen usability testing that points to over-trust in 'auto-evaluators' as an emerging problem?
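To make the prompt-sensitivity point concrete, here's a rough sketch of the kind of thing I mean: the same answer graded under two nearly identical judge prompts. The OpenAI SDK usage, model name, and rubric wording are my own illustrative assumptions, not tied to either tool.

    # Minimal sketch of LLM-as-judge prompt sensitivity.
    # Assumes the OpenAI v1 Python SDK; model name and rubric wording are illustrative.
    from openai import OpenAI

    client = OpenAI()

    ANSWER = "The capital of Australia is Sydney."

    JUDGE_PROMPTS = {
        # Two nearly identical rubrics; only the final sentence differs.
        "strict": "You are grading an answer for factual accuracy. "
                  "Reply with exactly PASS or FAIL. Be strict.",
        "lenient": "You are grading an answer for factual accuracy. "
                   "Reply with exactly PASS or FAIL. Give the benefit of the doubt.",
    }

    def judge(system_prompt: str, answer: str) -> str:
        """Ask the model to grade `answer` under the given rubric."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": f"Answer to grade: {answer}"},
            ],
            temperature=0,
        )
        return resp.choices[0].message.content.strip()

    verdicts = {name: judge(prompt, ANSWER) for name, prompt in JUDGE_PROMPTS.items()}
    print(verdicts)  # e.g. {'strict': 'FAIL', 'lenient': 'PASS'} -- same answer, different score

In practice the verdicts can flip between rubrics like these, which is exactly the fragility that makes over-trust worrying.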
We offer auto-evals as one tool in the toolbox. We also support structured output validations, semantic similarity to an expected result, and manual feedback gathering. If anything, I've seen that people are more skeptical of LLM auto-eval, because of the inherent circularity, than inclined to over-trust it.
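If it helps, here's a rough sketch of what the latter two non-LLM checks can look like, using sentence-transformers for the similarity piece and a plain JSON parse for structural validation (the model name and the 0.8 threshold are illustrative assumptions, not our actual implementation):

    # Rough sketch of two non-LLM eval checks: structured output validation
    # and semantic similarity to an expected result. Model and threshold are illustrative.
    import json
    from sentence_transformers import SentenceTransformer, util

    embedder = SentenceTransformer("all-MiniLM-L6-v2")

    def validate_json_structure(output: str, required_keys: set[str]) -> bool:
        """Structured output check: does the output parse as JSON with the expected keys?"""
        try:
            data = json.loads(output)
        except json.JSONDecodeError:
            return False
        return required_keys.issubset(data.keys())

    def semantic_similarity(output: str, expected: str) -> float:
        """Cosine similarity between the model output and an expected reference answer."""
        emb = embedder.encode([output, expected], convert_to_tensor=True)
        return util.cos_sim(emb[0], emb[1]).item()

    output = '{"answer": "Paris is the capital of France.", "confidence": 0.9}'
    print(validate_json_structure(output, {"answer", "confidence"}))     # True
    print(semantic_similarity("Paris is the capital of France.",
                              "France's capital city is Paris.") > 0.8)  # likely True

Checks like these don't depend on another LLM's judgment, which is part of why we treat auto-evals as just one signal among several.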
Do you have any suggestions for other evaluation methods we should add? We just got started in July and we're eager to incorporate feedback and keep building.
I've seen this in both tools, but I wasn't able to understand it: in the screenshot with feedback, I see thumbs-up and thumbs-down options. Where do those values go, and what's their purpose? Do they get preserved across runs? It's just not clicking in my head.