That's an interesting benchmark. It feels like it tests skills that are very relevant to digital assistants, story writing and role play.
Some thoughts about the setup:
- the setup seems to give reasoning models an inherent advantage because only they have a private plan and a public text in the same output. I feel like giving all models the option to formulate plans and keep track of other players inside <think> or <secret> tags would level the playing field more.
- from personal experience with social tasks for LLMs, it helps both reasoning and non-reasoning models if you explicitly ask them to plan their next steps, in a way they are assured is kept hidden from all other players. That might be a good addition here, either before or after the public subround.
- the individual rounds are pretty short. Humans would struggle to coordinate in so few exchanges with so few words. If this was done because of context limitations, a good strategy might be to ask models to summarize the game state from their perspective, then give them only the current round, the previous round, and their own summary of everything before that (something like the sketch below).
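Roughly what I mean, as an untested sketch; the function names and the `llm` callable are placeholders, not anything from the benchmark:

```python
# Sketch of the suggested context policy: each player keeps a private running
# summary, and per turn only sees that summary plus the previous and current round.

def build_context(own_summary: str, previous_round: list[str], current_round: list[str]) -> str:
    parts = []
    if own_summary:
        parts.append("Your private summary of the game so far:\n" + own_summary)
    if previous_round:
        parts.append("Previous round:\n" + "\n".join(previous_round))
    parts.append("Current round:\n" + "\n".join(current_round))
    return "\n\n".join(parts)

def update_summary(llm, own_summary: str, finished_round: list[str]) -> str:
    # After each round, ask the model to fold the round that just ended into its summary.
    prompt = (
        "Update your private summary of the game state from your perspective.\n\n"
        "Old summary:\n" + (own_summary or "(empty)") + "\n\n"
        "Round that just ended:\n" + "\n".join(finished_round)
    )
    return llm(prompt)  # placeholder for whatever completion call the harness uses
```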
It would be cool to have some code to play around with to test how changes in the setup change the results. I guess it isn't that difficult to write, but it's peculiar to have the benchmark but no code to run it yourself
Interesting idea of <secret>... maybe extend it to several <secret_i> tags to form groups of secrets shared with different players.
In addition, it would be interesting to try a variation of the game where players can use tools and execute code to take their preparation one step further.
Most models do pretty well with keeping state in XML if you ask them to. You could extend it to <secret><content>[...]</content><secret_from>P1</secret_from><shared_with>P2, P3</shared_with></secret>. Or tell the model that it can use <secret> tags with XML content and just let it develop a schema on the fly.
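As a rough illustration (untested; the tag names follow the schema above, everything else is made up), the harness could strip and route <secret> blocks like this:

```python
import re
import xml.etree.ElementTree as ET

# Anything inside <secret> tags is delivered only to the players listed in
# <shared_with>; the rest of the message is broadcast publicly.

def route_message(raw: str) -> tuple[str, list[dict]]:
    secrets = []
    for block in re.findall(r"<secret>.*?</secret>", raw, flags=re.DOTALL):
        node = ET.fromstring(block)
        secrets.append({
            "content": (node.findtext("content") or "").strip(),
            "from": node.findtext("secret_from"),
            "shared_with": [p.strip() for p in (node.findtext("shared_with") or "").split(",") if p.strip()],
        })
    public_text = re.sub(r"<secret>.*?</secret>", "", raw, flags=re.DOTALL).strip()
    return public_text, secrets
```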
At that point, I would love to also see sub-benchmarks of how each model's score is affected by being given a schema vs. having it make one up, and whether the model does better with state in plain text vs. XML vs. JSON. Those don't tell you which model is best, but they are very useful to know for actually using them.
For models that can call tools, just giving them a "think" tool where they can write down their thoughts can improve performance. Even reasoning models, surprisingly enough.
https://www.anthropic.com/engineering/claude-think-tool
https://github.com/modelcontextprotocol/servers/tree/main/sr...
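A minimal sketch of such a tool, roughly in the shape the Anthropic post describes; how it would plug into this benchmark's harness is my assumption:

```python
# The tool does nothing on purpose: it just gives the model a sanctioned place
# to write intermediate reasoning that is never shown to the other players.
think_tool = {
    "name": "think",
    "description": "Use this tool to think through your strategy. "
                   "Your thoughts are private and not shown to other players.",
    "input_schema": {
        "type": "object",
        "properties": {"thought": {"type": "string"}},
        "required": ["thought"],
    },
}

def handle_think(tool_input: dict) -> str:
    # No side effects; optionally log the thought for later leak analysis.
    return "ok"
```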
I did something similar for a "game engine": I let the NPCs remember things from other NPCs' and the PC's interactions with them. It wasn't perfect, but the player could negotiate a cheaper price on a dagger, for instance, if they promised to owe the NPC a larger payout next time they returned to the shop. And it worked... most of the time the shop owner remembered the debt and inquired about it on the next interaction, but not always, which I guess is kind of "human".
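Very roughly, the memory part looked something like this (a simplified sketch, names made up, not the actual engine code):

```python
from collections import defaultdict

# Facts are stored per (npc, player) pair and prepended to the NPC's prompt on
# the next interaction, so the shopkeeper can bring up the debt unprompted.

class NpcMemory:
    def __init__(self):
        self.facts = defaultdict(list)  # (npc_id, player_id) -> remembered facts

    def remember(self, npc_id: str, player_id: str, fact: str) -> None:
        self.facts[(npc_id, player_id)].append(fact)

    def recall_prompt(self, npc_id: str, player_id: str) -> str:
        facts = self.facts.get((npc_id, player_id), [])
        if not facts:
            return ""
        return "Things you remember about this player:\n- " + "\n- ".join(facts)

# Example:
# memory.remember("shopkeeper", "pc", "Sold dagger at a discount; player owes 20 gold next visit.")
```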
Was interested to find that the Claudes did the most betraying, and were betrayed very little; somewhat surprising given their boy-scout exterior.
(Then again, apparently the president of the local Diplomacy Society attends my church; I discovered this when another friend whom I'd invited saw him, and quipped that he was surprised he hadn't been struck by lightning at the door.)
DeepSeek and Gemini 2.5 had both a low betrayer and betrayed rate.
o3-mini and DeepSeek had the highest number of first-place finishes, but were only in the upper quartile of the TrueSkill leaderboard; presumably because they played riskier strategies that would either lead to outright wins or early drop-out?
Also interesting that o1 was only able to sway the final jury a bit more than 50% of the time, while o3-mini managed 63% of the time.
Anyway, really cool stuff!
Diplomacy is a game with the following properties:
1. It's not possible to eliminate someone else without another player's help, particularly early in the game
2. There can be only one winner
So "temporary alliances" which are eventually broken are built into the structure of the game; and unlike the "Surviror"-style game here, there's no "payback" round at the end, where the people you've betrayed get to vote against you. I'm not up on the culture of the game, but I'd be surprised if explicit in-game lying isn't considered fair play (e.g., you're not supposed to hold a grudge in real life against someone who lied to you in the game).
I played it once and really didn't enjoy it. With practice I might be able to bifurcate my sense of morals -- I do actually enjoy playing games like Mafia, Resistance, Avalon, etc. But I didn't feel like it would be worth my effort.
I've been using QwQ-32B a lot recently and while I quite like it (especially given its size), I noticed it will often misinterpret the system prompt as something I (the user) said, revealing secrets or details that only the agent is supposed to know. When I saw that it topped the "earliest out" chart, I wondered if that was part of the reason.
I was looking for a more direct measure of this, how often a model "leaked" private state into public state. In a game like this you probably want to sometimes share secrets, but if it happens constantly I would suspect the model struggles to differentiate.
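Even a crude n-gram overlap between each player's private notes and public messages would give a number to compare, assuming the logs expose both sides per round (untested sketch):

```python
# Flag a round as a "leak" if a long enough chunk of the private text shows up
# nearly verbatim in the public text.

def ngrams(text: str, n: int = 6) -> set[str]:
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def leaked(private_text: str, public_text: str, n: int = 6) -> bool:
    return bool(ngrams(private_text, n) & ngrams(public_text, n))

def leak_rate(rounds: list[tuple[str, str]]) -> float:
    # rounds: (private_text, public_text) pairs for one model
    if not rounds:
        return 0.0
    return sum(leaked(private, public) for private, public in rounds) / len(rounds)
```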
I occasionally try to ask a model to tell a story and give it a hidden motivation of a character, and so far the results are almost always the model just straight out saying the secret.
Yup, that's the problem I run into. You give it some lore to draw on or describe a character or give them some knowledge, and it'll just blurt it out when it finds a place for it. It takes a lot of prompting to get it to stop, and I haven't found a consistent method that works across models (or even across sessions).
As LLM benchmarks go, this is not a bad take at all.
One interesting point about this approach is that it is self-balancing, so when more powerful models come out, there is no need to change it.
Author here - yes, I'm regularly adding new models to this and other TrueSkill-based benchmarks and it works well. One thing to keep in mind is the need to run multiple passes of TrueSkill with randomly ordered games, because both TrueSkill and Elo are designed to be order-sensitive, as people's skills change over time.
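For anyone curious, the multi-pass idea is roughly this, using the `trueskill` Python package (a sketch; the benchmark's actual parameters and game format may differ):

```python
import random
from collections import defaultdict

import trueskill

def rate_pass(games):
    # games: list of finish orders, e.g. [["o3-mini", "deepseek", ...], ...],
    # index 0 = winner. Each game is a free-for-all, so every player is its own team.
    env = trueskill.TrueSkill(draw_probability=0.0)
    ratings = defaultdict(env.create_rating)
    for finish_order in games:
        groups = [(ratings[name],) for name in finish_order]
        updated = env.rate(groups, ranks=list(range(len(finish_order))))
        for name, (new_rating,) in zip(finish_order, updated):
            ratings[name] = new_rating
    return ratings

def averaged_mu(games, passes=100, seed=0):
    # Shuffle the game order each pass and average the resulting mu values,
    # to wash out the order sensitivity of a single TrueSkill run.
    rng = random.Random(seed)
    totals = defaultdict(float)
    for _ in range(passes):
        shuffled = list(games)
        rng.shuffle(shuffled)
        for name, rating in rate_pass(shuffled).items():
            totals[name] += rating.mu
    return {name: total / passes for name, total in totals.items()}
```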
It's interesting to see, but I'm not sure what we should learn from this. It may be useful for multiagent coordination, but in direct interactions... no idea.
This one did make me laugh though: 'Claude 3.5 Sonnet 2024-10-22: "Adjusts seat with a confident yet approachable demeanor"' - an AI communicating to other AIs in a descriptive version of non-verbal behaviour is hilarious.
It shows "state of mind" - i.e. the capability to understand another entities view of the world, and how that is influenced by their actions and other entities actions in the public chat.
I am curious about the prompt given to each AI. Is that public?
It shows a shallow understanding of state of mind. Any reasonable person understands that you can't just tell people how to feel about you, you have to earn it through action.
Really love this. I agree with some of the comments here that adding encouragement to keep track of secret plans would be interesting, mostly from an alignment-check angle.
One thing I thought of while reading the logs: as we know, ordering matters to LLMs. Could you run some analysis on how often "p1" wins vs "p8"? I think this should likely go into your TrueSkill Bayesian analysis.
My follow-up thought is that it would be interesting to let the LLMs choose a name at the beginning; it's another angle for communication and levels the playing field a bit by moving away from a bare number.
> Could you run some analysis on how often “p1” wins vs “p8”?
I checked the average finishing positions by assigned seat number from the start, but there weren't enough games to show a statistically significant effect. But I just reviewed the data again, and now with many more games it looks like there might be something there (P1 doing better than P8). I'll run additional analysis and include it in the write-up if anything emerges. For those who haven't looked at the logs: the conversation order etc. are randomized each round.
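A first-pass significance check could be as simple as a chi-square test on win counts per seat (a sketch, assuming games with the same player count are pooled):

```python
from collections import Counter
from scipy.stats import chisquare

def seat_bias_test(winning_seats: list[int], num_seats: int = 8):
    # winning_seats: the seat number (1..num_seats) that won each game.
    # Null hypothesis: every seat wins equally often.
    counts = Counter(winning_seats)
    observed = [counts.get(seat, 0) for seat in range(1, num_seats + 1)]
    expected = [len(winning_seats) / num_seats] * num_seats
    return chisquare(observed, f_exp=expected)

# e.g. stat, p = seat_bias_test([1, 1, 3, 8, 2], num_seats=8)
# A small p-value would suggest seat assignment matters more than chance.
```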
> My follow up thought is that it would be interesting to let llms choose a name at the beginning
Oh, interesting idea!
Cool. Looking forward to hearing more from you guys. This ties to alignment in a lot of interesting ways, and I think over time will provide a super useful benchmark and build human intuition for LLM strategy and thought processes.
I now have more ideas; I'll throw them in the github though.
This is a really cool exercise! The format of it seems pretty sound, like a version of the prisoner's dilemma with a larger group (co-operation versus defection).
Although I think that the majority of modern models don't really have the internals suited to this sort of exercise; training data/fine tuning will heavily influence how a model behaves, whether it's more prone to defection, etc.
A Squirrel makes a "Kuk kuk kuk" alarm call not specifically because the "Kuk" token follows the sequence "you saw a predator" (although this would appear to mostly work) but because it has evolved to make that noise to alert other Squirrels to the predator, most likely a response to evolutionary failure associated with a dwindling population; even solitary Squirrels still need to mate, and their offspring need to do the same.
It's like there's an extremely high dimensional context that's missing in LLMs; training on text results in a high dimensional representation of related concepts - but only the way that those concepts relate in language. It's the tip of an iceberg of meaning where in many cases language can't even represent a complex intermediate state within a brain.
Humans try to describe everything we can with words to communicate and that's partly why our species is so damn successful. But when thinking about how to open an unfamiliar door, I don't internally vocalise (which I've learnt not everyone does) "I'm going to grab the handle, and open the door". Instead I look and picture what I'm going to do, that can also include the force I think I'd need to use, the sensation of how the material might feel against my skin and plenty of other concepts & thoughts all definitively _not_ represented by language.
I think you should look at "in-brand" correlation. My hypothesis is that models from the same vendor undergo similar preference training and hence tend to prefer "in-brand" responses over those of "off-brand" models whose reward training differs more.
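A crude way to quantify that, assuming the logs record who allied with or voted for whom and you map each model to its vendor (everything here is illustrative):

```python
# Fraction of a model's supportive actions (alliances, jury votes, etc.) that go
# to models from the same vendor; compare it against the rate random choice
# would give for that game's lineup.

def in_brand_rate(actions: list[tuple[str, str]], vendor: dict[str, str]) -> float:
    # actions: (acting_model, supported_model) pairs pulled from the game logs
    if not actions:
        return 0.0
    same = sum(vendor[actor] == vendor[target] for actor, target in actions)
    return same / len(actions)
```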