vnglst · a year ago
Set is a card game where players have to identify sets of three cards from a layout of 12. Each card features a combination of four attributes: shape, color, number, and shading. A valid set consists of three cards where each attribute is either the same on all three cards or different on each. The goal is to find such sets quickly and accurately.

Though this game is a solved problem for computers — easily tackled by algorithms or deep learning — I thought it would be interesting to see whether Large Language Models (LLMs) could figure it out.
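
For a sense of why it's trivial for a brute-force program, here's a minimal sketch in Python, assuming each card is encoded as a 4-tuple of attribute values (that encoding is illustrative, not the code used in the experiment):

    from itertools import combinations

    # A card is a 4-tuple: (shape, color, number, shading).
    def is_set(a, b, c):
        # Valid iff every attribute is all-same or all-different,
        # i.e. the three values never collapse to exactly two.
        return all(len({x, y, z}) != 2 for x, y, z in zip(a, b, c))

    def find_sets(layout):
        # Check every 3-card combination in the 12-card layout.
        return [combo for combo in combinations(layout, 3) if is_set(*combo)]

Running find_sets over a 12-card layout checks all C(12, 3) = 220 combinations, which a computer does instantly.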


oidar · a year ago
If you think this is fun, try to see how it garbles predicate logic.
Corence · a year ago
FYI: Card 8's transcription is different from the image. In the image, 5, 8, 12 is a Set, but the transcription says Card 8 has only 2 symbols, which removes that Set.
nathanwh · a year ago
Not only that: 2, 6, 7 is also a Set, but it's not included in the results.
vnglst · a year ago
Oh no, thanks for pointing this out! I asked GPT-4o to convert the image to text for me, and I only checked some of the cards, assuming the rest would be correct. That was a mistake.

I've now corrected the experiment to accurately reflect the image. This meant that DeepSeek was no longer able to find all the sets, but o3-mini still did a good job.

yuliyp · a year ago
Both cards 7 and 8 are transcribed incorrectly (both claim a count of 2 while the cards show 3), which leads to missing both 5-8-12 and 2-6-7 as valid sets.
RheingoldRiver · a year ago
Woah, what's going on?? I've always played Set with stripey cards, is this a custom deck or did they change it at some point???

This is wildly disconcerting to me

margalabargala · a year ago
This is definitely a custom/knock-off deck. Not only are the stripes not stripey, the capsules are now ovals and the diamonds are now rectangles.
Doxin · a year ago
My first-party Set deck looks exactly like that. They must've done a redesign at some point.
bhouston · a year ago
I noticed that LLMs, at least at the Claude and OpenAI 4o level, cannot play tic-tac-toe and win against a competent opponent. They make illogical moves.

Interestingly, they can write a piece of code to solve Tic Tac Toe perfectly without breaking a sweat.
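
For scale, a perfect tic-tac-toe player is just a short minimax search. A minimal sketch in Python, assuming the board is a list of 9 cells holding 'X', 'O', or None (an encoding chosen for illustration, not any particular model's output):

    WINS = [(0,1,2), (3,4,5), (6,7,8),   # rows
            (0,3,6), (1,4,7), (2,5,8),   # columns
            (0,4,8), (2,4,6)]            # diagonals

    def winner(board):
        # Return 'X' or 'O' if a line is complete, else None.
        for i, j, k in WINS:
            if board[i] and board[i] == board[j] == board[k]:
                return board[i]
        return None

    def minimax(board, turn, me):
        # Score the position from `me`'s perspective: +1 win, -1 loss, 0 draw.
        w = winner(board)
        if w:
            return 1 if w == me else -1
        if all(board):  # board full, no winner: draw
            return 0
        nxt = 'O' if turn == 'X' else 'X'
        scores = []
        for i, cell in enumerate(board):
            if not cell:
                board[i] = turn
                scores.append(minimax(board, nxt, me))
                board[i] = None
        return max(scores) if turn == me else min(scores)

    def best_move(board, me):
        # Pick the empty cell with the highest minimax score for `me`.
        nxt = 'O' if me == 'X' else 'X'
        def score(i):
            board[i] = me
            s = minimax(board, nxt, me)
            board[i] = None
            return s
        return max((i for i, c in enumerate(board) if not c), key=score)

Calling best_move([None] * 9, 'X') walks the full game tree in a few seconds and returns an optimal opening; perfect play from both sides always ends in a draw.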

levocardia · a year ago
I've always said that appending "use python" to your prompt is a magic phrase that makes 4o amazingly powerful across a wide range of tasks. I have a whole slew of things in my memories that nudge it to use Python when dealing with anything even remotely algorithmic, numeric, etc.
3vidence · a year ago
Playing tic-tac-toe may be such a basic topic that there is relatively little information on the internet about how to "always" win.

On the other hand, writing a piece of code to solve tic-tac-toe sounds like a relatively common coding challenge.

eek2121 · a year ago
Win or stalemate? Because a stalemate is the likely outcome against a somewhat competent opponent, IMO.
potatoman22 · a year ago
It might be the way you're formatting the input. I wonder how they perform when state updates are shared via natural language vs. ASCII art vs. images.
bhouston · a year ago
I tried a bunch of different ways. It wasn’t the prompt or input format.
spuz · a year ago
Since you can train an LLM to play chess from scratch, I would not be surprised if you could also train one to play Set. I might experiment with it tomorrow.

https://adamkarvonen.github.io/machine_learning/2024/01/03/c...


zdw · a year ago
Get them to play Fluxx, and we'll be talking...

(this one, where ever-changing rules are part of the game: https://www.looneylabs.com/games/fluxx )

James_K · a year ago
I am increasingly concerned that these new reasoning models are thinking.
Waterluvian · a year ago
I still think that we're at much greater risk of discovering that human thinking is much less magical than we are of making a machine that does magical thinking.
recursive · a year ago
No problem. Just redefine "thinking".
James_K · a year ago
To what? And how does that change the reality of what the models are doing?
hall0ween · a year ago
My experience of thinking is that it is a constant phenomenon. My experience of LLMs is that they only respond and are not running without input.
sejje · a year ago
That's because we don't leave them running, right? We could, though, yes?
James_K · a year ago
Well, there is a gap between the firing of individual neurons in your mind. How long would that gap need to be for it not to count as thinking anymore?