I thought https://playground.cognition.ai/ was just returning some cached query results, but no, they’re actually spinning up real VMs and running live queries without any authentication or restrictions. That must be costing them a fortune.
LLM product managers: show me what's in the context, right where I'm prompting. Knowing and editing the precise context between requests will likely remain a user task for a long time.
Actually I do have a question! How come things as substantial as this were just released and not part of a "wave"? I quite liked the "waves" way of doing things! Great work either way.
SWE-1 has been surfaced to me by Windsurf lately and I've been impressed: often (enough?) it gets me the same answers as GPT-5 etc., but almost instantly. Gotta say, speed is nice.
ha more like how i talk to my two-year-old. Windsurf's Cascade sidebar tool (which i use in RubyMine) has a stable of LLMs and it somewhat randomly switches the active one out from time to time, so i get a taste of what different ones are like. it's kind of cool.
This bears very little resemblance to SWE-grep haha. At least fine-tune a small pre-trained LLM or something on a retrieval dataset. But no, this literally tries to train a small RNN from scratch to retrieve results given a natural language query...
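To make concrete what "training a small RNN from scratch for retrieval" would even entail, here's a toy sketch in pure Python: an untrained Elman-style RNN encodes the query and each candidate into a vector, and candidates are ranked by cosine similarity. All names (`encode`, `retrieve`, the vocabulary, the dimensions) are hypothetical illustrations, not anything from the project being discussed, and real use would require supervised training on query/document pairs.

```python
import math
import random

random.seed(0)

VOCAB = {ch: i for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz ")}
DIM = 16

# Random (untrained) parameters for a tiny Elman RNN. A from-scratch
# retriever would need these learned from labeled query/doc pairs.
W_in = [[random.uniform(-0.1, 0.1) for _ in range(DIM)] for _ in range(len(VOCAB))]
W_h = [[random.uniform(-0.1, 0.1) for _ in range(DIM)] for _ in range(DIM)]

def encode(text):
    """Run the RNN over characters; the final hidden state is the embedding."""
    h = [0.0] * DIM
    for ch in text.lower():
        i = VOCAB.get(ch)
        if i is None:
            continue  # skip characters outside the toy vocabulary
        h = [math.tanh(W_in[i][j] + sum(W_h[k][j] * h[k] for k in range(DIM)))
             for j in range(DIM)]
    return h

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    """Rank candidate docs by similarity of their RNN embeddings to the query's."""
    q = encode(query)
    return sorted(docs, key=lambda d: cosine(q, encode(d)), reverse=True)
```

With random weights the ranking is meaningless, which is the point of the complaint: unlike grep or a pre-trained model, this architecture knows nothing until it has seen a lot of retrieval data.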
no - grep is just the closest analogy/use case we have for it. if we end up releasing the CLI it should be as handy and no-brainer as using ripgrep
idk what you expect from a question about "how much data". it's tool-based search. it's a lot.
So that's how that is going ;)
I also enjoyed the tech write-up. It's good to see REAL substantial engineering like this which is both highly impressive and highly productized.
Claude Code took 0.1s, Cursor CLI 19s