simonw · 3 years ago
I wrote my own simplest-possible implementation of ReAct in Python here, which I think helps demonstrate quite how much you can get done with this pattern using only a very small amount of code:

https://til.simonwillison.net/llms/python-react-pattern
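For anyone skimming, the shape of that pattern fits in a few lines. This is a sketch, not the TIL's actual code: the `calculate` action and the `ask_model` callable are illustrative stand-ins for a real model call.

```python
import re

# Sketch of the ReAct loop: the model alternates Thought/Action lines,
# we execute the requested action, and feed the result back in as an
# Observation until the model emits a final Answer.
ACTION_RE = re.compile(r"^Action: (\w+): (.*)$")

def calculate(expr):
    # Toy action: evaluate a simple arithmetic expression.
    return str(eval(expr, {"__builtins__": {}}, {}))

KNOWN_ACTIONS = {"calculate": calculate}

def react_loop(question, ask_model, max_turns=5):
    prompt = question
    reply = ""
    for _ in range(max_turns):
        reply = ask_model(prompt)
        actions = [m for m in
                   (ACTION_RE.match(line) for line in reply.split("\n")) if m]
        if not actions:
            return reply  # no action requested: this is the final answer
        name, arg = actions[0].groups()
        observation = KNOWN_ACTIONS[name](arg)
        prompt = f"Observation: {observation}"
    return reply
```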

kfarr · 3 years ago
Love this example! No offense to the OP's research paper, but I appreciate the simplicity of your Python version.

PS also thanks for this genuine LOL moment from the intro:

> A popular nightmare scenario for AI is giving it access to tools, so it can make API calls and execute its own code and generally break free of the constraints of its initial environment.

> Let's do that now!

nighthawk454 · 3 years ago
Cheers, Simon - been seeing your comments around and enjoying your blog and coverage of this stuff.

Is that prompt in your TIL really all it takes to inform it of these 3 actions? That's pretty impressive. I wonder how many actions it can scale to? I kind of expected some kind of classifier layer to predict if an action was necessary!

simonw · 3 years ago
I've not tested its limits yet. The thing to consider is prompt length - depending on the model you get around 4,000 tokens, and the prompt itself is already 264 according to https://platform.openai.com/tokenizer - you need a bunch of space left for providing the output of your various actions, so tokens get used up pretty quickly.
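To make that budget arithmetic concrete: the ~4-characters-per-token ratio below is a rough heuristic for English text, not a real tokenizer count, and the 4,000 and 264 figures are the ones mentioned above.

```python
# Rough back-of-envelope token budgeting. len(text) // 4 is a common
# heuristic for English text, not an exact tokenizer count.
def approx_tokens(text):
    return max(1, len(text) // 4)

CONTEXT_LIMIT = 4000   # typical context window for models of that era
PROMPT_TOKENS = 264    # the ReAct prompt, per the OpenAI tokenizer

def remaining_budget(observations):
    # Everything the actions return has to fit in what's left.
    used = PROMPT_TOKENS + sum(approx_tokens(o) for o in observations)
    return CONTEXT_LIMIT - used
```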

The ReAct paper talks about fine-tuning to teach a model actions. I'd be interested to see an experiment that fine-tunes the LLaMA model to teach it actions - I have a hunch that might work really well, and save a bunch of token space in the actual execution phase.

johntash · 3 years ago
Thanks for writing this up. I read through it the other day and it was a lot simpler to get through than digging through Langchain for the first time.

I took your example and added a couple other "actions" like searching a searxng instance and returning the markdown version of a certain url. It's surprising how much more useful it can be when it has the ability to look stuff up on the internet.
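A sketch of what wiring in an extra action like that can look like, following the dict-of-callables style of the TIL. `fetch_markdown` here is a stub of my own; a real version would HTTP-GET the URL, convert the HTML to markdown, and truncate it to fit the context window.

```python
# Dispatch table of actions the loop can call by name. Adding a new
# capability is just adding another entry.
def fetch_markdown(url):
    # Stub: real code would fetch `url` and run it through an
    # HTML-to-markdown converter, then truncate the result.
    return f"# Contents of {url}\n(truncated to fit the context window)"

known_actions = {
    "fetch_markdown": fetch_markdown,
}

def run_action(name, arg):
    if name not in known_actions:
        raise ValueError(f"Unknown action: {name}")
    return known_actions[name](arg)
```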

bestcoder69 · 3 years ago
How reliable has it been for you? After struggling with langchain & GPT4 I was planning to try your lib next before maybe writing my own - I plan to make a super generalized version so that my bot can code itself, so might have to pull out some tricks. (Before I got api access I got ChatGPT4 to do this - just not with a proper ReAct pattern - via a tampermonkey extension it wrote me…lol)
simonw · 3 years ago
I spent about 30 minutes writing the code and 15 minutes writing it up - I haven't spent much time at all testing it and making sure it's robust and reliable. I just wanted to illustrate the concept.
matthewfcarlson · 3 years ago
Cheers Simon! I really appreciated your article on how LLMs and LLaMA are having their Stable Diffusion moment. It was well thought through. I might take your python and expand it to try to make an actually intelligent home assistant that can answer more helpful questions.
dragonwriter · 3 years ago
This is awesome – a very accessible and easily extensible implementation of the concept.
doctor_eval · 3 years ago
That’s nuts. Thanks for sharing.
minimaxir · 3 years ago
The ReAct paradigm is one of the more powerful tools in the recent LangChain package, which offers a more batteries-included approach to using it with models like GPT-3 and the ChatGPT API.

https://langchain.readthedocs.io/en/latest/modules/agents/im...

https://langchain.readthedocs.io/en/latest/modules/agents/ex...

Ozzie_osman · 3 years ago
I'm a huge fan of LangChain, so yeah, if you just want to use this pattern, try their agents. But if you want to better understand how it actually works, the blog post from simonw below includes a snippet that does it in a very small amount of simple code.

It clicked a lot faster reading his code than digging through LangChain (though I'd still use LangChain now that I understand how this works).

nico · 3 years ago
By the way, the LangChainHub repo seems dead (last commit 2 months ago).

Do you know of any alternative repos/marketplaces of chains/tools/prompts for LangChain?

hoerzu · 3 years ago
This guy used it to execute actions in Google Sheets: https://twitter.com/filmfranz/status/1637556615007338496?t=5...
bestcoder69 · 3 years ago
Anyone had luck getting this going in GPT-4 yet? I tried a couple of the chat-specific agents in langchain a couple days ago but it seems like the extra chat RLHF makes GPT-3.5/4 stubborn about not wanting to write messages in the needed format. I could get it working some of the time, but it was really unreliable. Next up I’ll try Simonw’s (the G.O.A.T.) micro-lib for this.

Also, man, what an annoying context to see “As a language model I cannot…”.
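One trick that helps with that stubbornness is parsing the `Action:` line leniently, so extra chat-style prose or code fences around it don't break the loop. A sketch (the regex and function name are my own, not from any library):

```python
import re

# Forgiving parser for the "Action: tool: input" convention. Chat-tuned
# models often wrap the line in extra prose or markdown fences, so we
# search anywhere in the reply instead of requiring an exact format.
ACTION_RE = re.compile(r"Action:\s*(\w+)\s*:\s*(.+)", re.IGNORECASE)

def extract_action(reply):
    """Return (tool, argument) from anywhere in the reply, or None."""
    match = ACTION_RE.search(reply)
    if match is None:
        return None
    return match.group(1), match.group(2).strip()
```

A real agent would add a retry on top: if `extract_action` returns None and the reply isn't a final answer, re-prompt the model to restate its action.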

ksubedi · 3 years ago
I have had luck doing this on GPT-4 with careful prompting, but GPT-3.5 is pretty reluctant to respond with anything other than straight-up conversational answers.
akomtu · 3 years ago
Where is "reason" in this model? A chain of semi-related thoughts isn't reason. LLMs need a set of axioms and formal logic to establish truthfulness of arbitrary statements.
simonw · 3 years ago
If you can figure out how to do that - extend an LLM with formal logic to establish truth vs. fiction - you'll be solving something that the big AI labs have all so far failed to do.
computerex · 3 years ago
Gpt-4 can do formal logic and symbolic manipulation. I really don’t understand your comment.
jahewson · 3 years ago
Humans manage to reason just fine with neither of these.
cscurmudgeon · 3 years ago
This is like saying humans aren't made up of atoms because they're made up of cells.
akomtu · 3 years ago
Those humans use loose logic and loose axioms, but even they intuitively get that logic and axioms are necessary.