kibwen · 5 months ago
> Claude will readily notice when the game tells it that an attack from an electric-type Pokémon is “not very effective” against a rock-type opponent, for instance. Claude will then squirrel that factoid away in a massive written knowledge base for future reference later in the run.

But these models already know all this information??? Surely it's ingested Bulbapedia, along with a hundred zillion terabytes of every other Pokemon resource on the internet, so why does it need to squirrel this information away? What's the point of ruining the internet with all this damn parasitic crawling if the models can't even recall basic facts like "thunderbolt is an electric-type move", "geodude is a rock-type pokemon", "electric-type moves are ineffective against rock-type pokemon"?
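
The whole type chart is just a two-key lookup; here's a rough Python sketch (illustrative entries only — and note that in the actual games Geodude is Rock/Ground, so it's the Ground typing that blocks Electric moves):

    # Rough sketch: type effectiveness as a two-key lookup.
    # Only a few matchups shown; the real chart has ~18x18 entries.
    EFFECTIVENESS = {
        ("electric", "water"):  2.0,  # super effective
        ("electric", "grass"):  0.5,  # not very effective
        ("electric", "ground"): 0.0,  # doesn't affect
        ("water",    "rock"):   2.0,
    }

    def multiplier(move_type, defender_type):
        # Any matchup not listed is neutral.
        return EFFECTIVENESS.get((move_type, defender_type), 1.0)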

timabdulla · 5 months ago
This is the most interesting aspect to me. I had Claude generate a guide to all the gyms in Pokemon Red and instructions for how to quickly execute a playthrough [0].

It obviously knows the game through and through. Yet even with encyclopedic knowledge of the game, it's still a struggle for it to play. Imagine giving it a game of which it knows nothing at all.

[0] https://claude.site/artifacts/d127c740-b0ab-43ba-af32-3402e6...

ToValueFunfetti · 5 months ago
Claude has no problem recalling type advantages, but it has no metacognition and fails terribly at knowing what it knows versus what it needs to write down.
devnullbrain · 5 months ago
Well, the point isn't to play Pokemon. Pokemon is a contrived example for researching and displaying reasoning. That would be a useful step toward real-world use cases that aren't indexed in fine detail, where the model has to learn as a human would. Actually, getting this to work is important for avoiding the thing you're complaining about.
gbear605 · 5 months ago
Claude definitely does know it (I just checked). Plausibly they’re giving that as an example of other things that it actually needs to save, but yeah, it’s a bad example.
stonemetal12 · 5 months ago
It has trained on multiple guides and walkthroughs and whatnot for the game from the internet. Theoretically it knows the game inside and out, it can recall all of that if you ask about it. It lacks the ability to turn that into something other than text.

To me that shines a light on the claim that there is a real conceptual space in these models: whatever is there seems limited by the output being generated text.

bob1029 · 5 months ago
> ruining the internet with all this damn parasitic crawling

I feel like we may be conflating the performance implications of crawling with the intellectual property ones.

devnullbrain · 5 months ago
The parent commenter's complaint may be related to this:

https://x.com/GergelyOrosz/status/1905000078839845303

meroes · 5 months ago
The same could be said for Sudoku. How many times has it seen the natural numbers 1-9, Sudoku tutorials, and board states while crawling the entire public internet? And yet these models can't solve a Sudoku, or even a half-size board, on their own.
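
And the mechanical solution is tiny. A plain backtracking solver, the kind of program these models can write in seconds but can't reliably execute step by step (board as a 9x9 list of lists, 0 for an empty cell):

    def valid(board, r, c, v):
        # v must not already appear in the row, column, or 3x3 box.
        if v in board[r]:
            return False
        if any(board[i][c] == v for i in range(9)):
            return False
        br, bc = 3 * (r // 3), 3 * (c // 3)
        return all(board[br + i][bc + j] != v
                   for i in range(3) for j in range(3))

    def solve(board):
        # Find the next empty cell and try each digit, backtracking on failure.
        for r in range(9):
            for c in range(9):
                if board[r][c] == 0:
                    for v in range(1, 10):
                        if valid(board, r, c, v):
                            board[r][c] = v
                            if solve(board):
                                return True
                            board[r][c] = 0
                    return False
        return True  # no empty cells left: solved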
Recursing · 5 months ago
This Sunday there's going to be a hackathon at https://lu.ma/poke to improve the Pokemon-playing agent. I think most hackathons don't achieve much, but maybe someone from HN can go improve the scaffolding and save Claude.
blainm · 5 months ago
So when it really struggled to get around (kept just walking into obstacles), they gave Claude the ability to navigate by adding pathfinding and awareness of its position and map ID. However, it still struggles, particularly in open-ended areas.

This suggests a fundamental issue beyond just navigation. While accessing more RAM data or building external tools on top of that data could improve consistency or get it further, that approach reduces the extent to which Claude is independently playing and reasoning.

A more effective solution would enhance its decision-making without relying on direct RAM access or any kind of fine-tuning. I'm sure a better approach is possible.
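
For what it's worth, the navigation tool itself is trivial; here's a minimal BFS sketch of the kind of pathfinding helper described (the grid representation and coordinates are my assumptions, not Anthropic's actual harness):

    from collections import deque

    # Hypothetical tile-grid pathfinder; grid[r][c] is True if walkable.
    def shortest_path(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        prev = {start: None}  # also serves as the visited set
        queue = deque([start])
        while queue:
            r, c = queue.popleft()
            if (r, c) == goal:
                path = []  # walk predecessors back to the start
                node = goal
                while node is not None:
                    path.append(node)
                    node = prev[node]
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] and (nr, nc) not in prev):
                    prev[(nr, nc)] = (r, c)
                    queue.append((nr, nc))
        return None  # goal unreachable

The hard part isn't the algorithm; it's the model deciding when to invoke it and what to do with the result.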

OgsyedIE · 5 months ago
It can't do a good job of reasoning about higher-level abstractions in its long-term memory without making poor decisions about which memory items to retain and which to forget.

Would a mixture-of-experts paradigm, where each expert weights the value of short-term memories differently relative to long-term memories, do noticeably better at overcoming that one category of roadblocks?

rtkwe · 5 months ago
Seems like the 200k context window is a huge issue: its summarization deletes important information, leading it to revisit already-solved areas even when everything is working properly, or to simply forget things it needs to progress.
barotalomey · 5 months ago
I feel very nostalgic now, thinking about the time when Twitch Plays Pokemon was something we all enjoyed.
disambiguation · 5 months ago
We all know the shortcomings of LLMs, but it's been interesting to observe agency as a system and the flaws that emerge. For example, does Claude have any reason to get from point A to B quickly and efficiently? Running into walls, running in circles, backtracking, etc. Clearly it doesn't have a sense of urgency, but does it matter as long as it eventually gets there?

But here's what strikes me as odd about the whole approach. Everyone knows it's easier for an LLM to write a program that counts the number of R's in "strawberry" than it is for it to count them directly, yet no one is leveraging this fact.

Instead of ever more elaborate prompts and contexts, why not ask Claude to write a program for mapping and pathfinding? Hell, if that's too much to ask, maybe make the tools beforehand and see if it's at least smart enough to use them effectively.

My personal wishlist is things like fact tables, goal graphs, and a world model: where things are and when things happened. Strategies and hints. All of these can be turned into formal systems. Something as simple as a battle calculator should be a no-brainer.
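
For instance, a damage calculator built on the commonly cited Gen 1 formula (rounding details simplified; the example stats are made up):

    # Sketch of a battle-calculator tool using the commonly cited Gen 1
    # damage formula; rounding and edge cases simplified.
    def damage(level, power, attack, defense, stab, effectiveness):
        base = ((2 * level / 5 + 2) * power * attack / defense) / 50 + 2
        return int(base * stab * effectiveness)

    # A level 12 Pikachu Thundershock (power 40) into Brock's Geodude:
    # the Ground typing makes effectiveness 0, so the right answer is
    # "don't bother". Stats here are illustrative only.
    damage(12, 40, 30, 28, stab=1.5, effectiveness=0.0)  # -> 0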

My last harebrained idea: I would like to see LLMs managing an ontology in Prolog as a proxy for reasoning.

This is all theory, and even if implemented it wouldn't solve everything, but I'm tired of watching them throw prompts at the wall in hopes that the LLM can be tricked into being smarter than it is.

bigmadshoe · 5 months ago
I worked for one of a few "game playing AI as a service" startups for several years, and we did in fact leverage the strong programming skills of LLMs to play games. When we pivoted from training directly on human gameplay data to simply having the LLM write modular code to play the game, we managed to solve many long-standing problems in our demo levels. Even on unseen games this works well, since LLMs already have a good understanding of how video games work generally.
disambiguation · 5 months ago
That's great that someone out there is solving this stuff, but it raises the question: why are we watching Claude fumble around instead of seeing this service in action?
cruffle_duffle · 5 months ago
These things are not general-purpose AI tools. They are Large Language Model tools.

There are dudes on YouTube who get millions of views doing basic reinforcement learning to train weird shapes to navigate obstacle courses, win races, learn to walk, etc. But they do this by making a custom model with inputs and outputs that are directly mapped into the "physical world" in which these creatures live.

Until these LLMs have input and output parameters that specifically wire into "front distance sensor or leg pressure sensor" and "leg muscles or engine speed", they are never going to be truly good at tasks requiring such interaction.

Any such attempt that lacks such inputs and outputs and somehow manages to have passable results will be in spite of the model, not because of it. They'll always get their ass kicked by specialized models trained for such tasks, on every dimension including efficiency, power, memory use, compute, and size.

And that is the thing: despite their incredible capabilities, LLMs are not AGI, and they are not general-purpose models either! They never will be, and that is just fine.

smjburton · 5 months ago
> But despite recent advances in AI image processing, Hershey said Claude still struggles to interpret the low-resolution, pixelated world of a Game Boy screenshot as well as a human can.

> We built the text side of it first, and the text side is definitely... more powerful. How these models can reason about images is getting better, but I think it's a decent bit behind.

This seems to be the main issue: using an AI model predominantly trained on text-based reasoning to play through a graphical video game challenge. Given that, image processing is an underdeveloped skill for this model compared to its text-based reasoning. Even though it spent an excessive amount of time navigating Mt. Moon or getting trapped in small areas of the map, Claude will likely only get better at playing Pokemon and other games as it's trained on more image-related tasks and its capabilities balance out.

0xbadcafebee · 5 months ago
Anyone who has used AI recently knows the claims about it replacing humans at virtually any task are bullshit. AI still can't do super-advanced things like tell me the correct height of my truck from the manufacturer's tech-spec PDF that it has in its databank. Even when I tell it the correct height, it'll randomly change the height throughout a session. I have to constantly correct it, because thankfully I know enough about a given subject to know it's bullshitting. Once I correct it, it suddenly admits: oh yeah, actually it was this other thing, here's the right info.

It's an amazing search engine, and has really cool suggestions/ideas. It's certainly very useful. But replace a human? Get real. All these stocks are going to tank once the media starts running honest articles instead of PR fluff.

alabastervlog · 5 months ago
The things feel closer to using a fine-grained search algorithm with some randomness and a fairly large corpus than to interacting with something intelligent.

And if you read how they work... that's because that's exactly what they are. There's no thinking going on. The fact that they've been programmed and prompted to have some kind of tone or personality, and to fit within some parameters of "behavior" as if they're a real being, reminds me of Flash-based site navigation in the early '00s: flashy bullshit that's impressive for about one minute and then just annoying and inconvenient forever.

As for programming with them, writing the prompts feels more like just another kind of programming than instructing a human.

I'm skeptical this entire approach is more than one small part of what might become AGI, given several more somewhat-unrelated breakthroughs, including probably in hardware.

Like a lot of us, I've gotten sucked into building products with these things because every damn company on the planet, tech or not, has decided to do that for usually-very-bad reasons, and I'm a little worried this is going to crash so hard that having been involved in it at all will be a (minor) black mark on my résumé.

kibwen · 5 months ago
> I'm a little worried this is going to crash so hard that having been involved in it at all will be a (minor) black mark on my résumé.

"I see there's a gap here on your resume, what were you doing between 2021 and 2025?"

"Uh... Prison."

Henchman21 · 5 months ago
Do you have an ETA for when the media will start running honest articles instead of PR fluff? Asking for the entire western world…
cratermoon · 5 months ago
About the time they stop depending on targeted advertising revenue.

nprateem · 5 months ago
Exactly. It was a stroke of genius to popularise the word 'hallucinate' instead of 'bullshit' or 'wrong'.

Since it has no fundamental understanding of anything, it's either a sycophant or an arrogant idiot.

When I told ChatGPT it didn't understand something, it obnoxiously told me it understood perfectly. When I explained why it was wrong, it did a 180 and carried on from where I'd left off. I don't use it anymore.