I don't think this is true for many fields, especially outside of math/programming. Let's say the task is "find the ten most promising energy startups in Europe." (This is essentially the sort of task I frequently see people here and on LinkedIn talking about using the research modes of models for.)
In ye olden days, pre-LLM, you could easily filter out a bunch of bad answers from lazy humans, since they'd be short, contain no detail, or be full of typos and formatting inconsistencies from copy-paste. You can't do that with LLM output.
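To make that concrete, here's a rough sketch of the kind of surface-level filter that used to catch low-effort human answers; the function name and thresholds are invented for illustration, and the point is that none of these signals tells you anything about an LLM's answer:

```python
import re

def looks_low_effort(answer: str) -> bool:
    """Flag answers that are short, unstructured, or sloppily formatted."""
    too_short = len(answer.split()) < 150        # a ten-item list needs some substance
    no_structure = answer.count("\n") < 5        # no list items or paragraph breaks at all
    # crude sloppiness proxy: stray whitespace runs or copy-paste punctuation artifacts
    sloppy = bool(re.search(r"\s{3,}|\.{4,}|,,", answer))
    return too_short or no_structure or sloppy
```

An LLM answer is long, well formatted, and typo-free, so it sails straight through a filter like this regardless of whether the ten startups it names are actually any good.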
So unless you're a domain expert on European energy startups you can't check for a good answer without doing a LOT of homework. And if you're using a model that usually only looks at, say, the top two pages of Google results to try to figure this out, how is the validator going to do better than the original generator?
And what about when the top two pages of Google results start turning into model-generated blogspam?
If your benchmark can't evaluate prospective real-world tasks like this, it's of limited use.
A larger issue is that once your benchmark, which used this task as a criterion based on an expert's knowledge, is published, anyone building an AI agent is strongly incentivized (intentionally or not!) to train specifically on this answer without necessarily getting better at the fundamental steps of the task.
IMO, once an AI agent benchmark is published on the internet, you can never use it more than once.
If they can't write an evaluation for the discriminator, I agree. All the input data issues you highlight apply to generators as well.
This is actually very wrong. Consider, for instance, that the people who grade your tests in school are typically more talented, capable, and better trained than the people taking the test. This is true even when an answer key exists.
> Also, human labels are good but have problems of their own,
Granted, but...
> it isn’t like by using a “different intelligence architecture” we elide all the possible errors
Nobody is claiming this. We elide the specific, obvious problem that using a system to test itself gives you no reliable information. You need a control.
I don't think we should assume that answering a test would be easy for a Scantron machine just because it's very good at grading one, either.