kburman · 5 months ago
I thought https://playground.cognition.ai/ was just returning some cached query results, but no, they’re actually spinning up real VMs and running live queries without any authentication or restrictions. That must be costing them a fortune.
groby_b · 5 months ago
Currently, all queries are returning "We're under load and processing too many requests. Please try again later."

So that's how that is going ;)

awsanswers · 5 months ago
LLM product managers: show me what's in the context, conveniently, right where I'm prompting. Knowing and editing the precise context between requests will likely remain a user task for a long time.
bluelightning2k · 5 months ago
Actually, I do have a question! How come things as substantial as this were just released and not made part of a "wave"? I quite liked the waves way of doing things! Great work either way.
marstall · 5 months ago
SWE-1 has been getting booped up by WindSurf for me lately, and I've been impressed. Often (enough?) it gets me the same answers as GPT-5 etc., but almost instantly. Gotta say, the speed is nice.
swyx · 5 months ago
nice, what does booped up mean? is this gen z lingo?
marstall · 5 months ago
Ha, more like how I talk to my two-year-old. WindSurf's Cascade sidebar tool (which I use in RubyMine) has a stable of LLMs and somewhat randomly switches the active one out from time to time, so I get a taste of what different ones are like. It's kind of cool.
bluelightning2k · 5 months ago
This is really cool. Thank you for this. I've been a Windsurf user since launch and was VERY pleasantly surprised to see this pop up.

I also enjoyed the tech write-up. It's good to see REAL substantial engineering like this which is both highly impressive and highly productized.

SafeDusk · 5 months ago
Kickstarting an exploratory open version here, https://github.com/aperoc/op-grep, since it doesn't look like they will do it.
unturned3 · 5 months ago
This bears very little resemblance to SWE-grep, haha. At least fine-tune a small pre-trained LLM on a retrieval dataset or something. But no, this literally tries to train a small RNN from scratch to retrieve results given a natural language query...
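For readers wondering what "a small RNN from scratch to retrieve results" even looks like, here is a minimal toy sketch in NumPy — my own illustration of the general idea, not code from op-grep or SWE-grep. A character-level RNN with randomly initialized (untrained) weights embeds the query and each candidate code line, and candidates are ranked by cosine similarity; all names and sizes here are arbitrary assumptions:

```python
# Toy sketch of RNN-based retrieval "from scratch" (illustration only,
# not the op-grep implementation). Untrained random weights; a real
# version would train these on (query, relevant-line) pairs.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN = 128, 32  # ASCII vocab, hidden size (arbitrary choices)

# Randomly initialized parameters -- no pre-training involved.
Wx = rng.normal(0, 0.1, (HIDDEN, VOCAB))
Wh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))

def embed(text: str) -> np.ndarray:
    """Run a vanilla RNN over the characters; return the final hidden state."""
    h = np.zeros(HIDDEN)
    for ch in text:
        x = np.zeros(VOCAB)
        x[min(ord(ch), VOCAB - 1)] = 1.0  # one-hot character input
        h = np.tanh(Wx @ x + Wh @ h)
    return h

def score(query: str, candidate: str) -> float:
    """Cosine similarity between query and candidate embeddings."""
    q, c = embed(query), embed(candidate)
    return float(q @ c / (np.linalg.norm(q) * np.linalg.norm(c) + 1e-9))

# Rank code lines against a natural-language query.
lines = ["def parse_config(path):", "class HttpServer:", "# TODO fix bug"]
ranked = sorted(lines, key=lambda l: -score("where is config parsed?", l))
```

The point of the criticism above: with random (or even trained-from-scratch) weights and this little capacity, such a model has none of the language understanding a fine-tuned pre-trained LLM would bring to retrieval.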
tifa2up · 5 months ago
Searched for 'hi' and it took 166s to return a response using this model: https://pasteboard.co/oB4VqVC5FGkl.png

Claude Code took 0.1s, Cursor CLI 19s

mgambati · 5 months ago
If you ask a real question, then you might get real results.
silasalberti · 5 months ago
hey I'm from the SWE-grep team - feel free to ask me any questions :)
bluelightning2k · 5 months ago
No question just wanted to say good job and thanks as a user. Same with deepwiki and codemaps.
llllm · 5 months ago
Did you intend to answer them, or you just wanted the questions?
kwillets · 5 months ago
Are you actually using grep here? How much data are you searching?
swyx · 5 months ago
no - grep is just the closest analogy/use case that we have for it. if we end up releasing the CLI, it should be as handy and no-brainer as using ripgrep

idk what you expect from a question about "how much data". it's tool-based search. it's a lot.

daralthus · 5 months ago
this would be useful outside of coding. could you release a benchmark so we can have more models tuned for this?
foodbaby · 5 months ago
What base model did you use?