memset · a month ago
I work at Ramp and have always been on the "luddite" side of AI code tools. I use them, but usually I'm not that impressed, and I'm a curmudgeon when I see folks ask Claude to debug something instead of just reading the code. I'm just an old(er) neckbeard at heart.

But. This tool is scarily good. I'm seeing it "1-shot" features and fixes in a fairly sizable code base, with better code and accuracy than mine.

ColinEberhardt · a month ago
An important point here is that it isn't doing a 1-shot implementation; it is solving the problem over multiple iterations, with a closed feedback loop.

Create the right agentic feedback loop and a reasoning model can perform far better through iteration than its first 1-shot attempt.

This is very human. How much code can you reliably write without any feedback? Very little. We iterate, guided by feedback (compiler, linter, executing and exploring).
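
Roughly the kind of loop I mean, as a Python sketch (the model call, patch application, and "make test" command are placeholders I made up, not anything from the post):

    import subprocess
    from typing import Callable

    def agent_loop(task: str,
                   call_model: Callable[[str], str],
                   apply_patch: Callable[[str], None],
                   max_iterations: int = 5) -> bool:
        """Iterate model -> patch -> tests until the tests pass."""
        feedback = ""
        for _ in range(max_iterations):
            # Ask the model for a patch, given the task and the latest feedback
            patch = call_model(f"{task}\n\nPrevious feedback:\n{feedback}")
            apply_patch(patch)  # write the suggested changes into the working tree

            # Closed feedback loop: compiler/linter/tests grade the attempt
            result = subprocess.run(["make", "test"], capture_output=True, text=True)
            if result.returncode == 0:
                return True  # tests pass, stop iterating
            feedback = result.stdout + result.stderr  # feed the errors back in
        return False

The point is that the grading step is deterministic (compiler, linter, tests), so each iteration gets grounded feedback rather than the model judging itself.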

yomismoaqui · a month ago
This Xmas there have been a lot of converted programmers, after having some free time to play with things like Codex, Claude Code, AMP...
keyle · a month ago
This basically sums up where we're at: undeniably useful, but to be approached with care.
cloudking · a month ago
We use https://devin.ai for this and it works very well. Devin has its own virtual environment, IDE, terminal and browser. You can configure it to run your application and connect to whatever it needs. Devin can modify the app, test changes in the browser and send you a screen recording of the working feature with a PR.
martypitt · a month ago
Interestingly, Devin lists Ramp (the OP) as a customer on their front page.

Surprised they need both.

ostegm · a month ago
This is a great writeup! Could you share more about the sandbox <-> client communication architecture? e.g., is the agent emitting events to a queue/topic, writing artifacts to object storage, and the client subscribes; or is it more direct (websocket/gRPC) from the sandbox? I’ve mostly leaned on sandbox.exec() patterns in Modal, and I’m curious what you found works best at scale.
mootoday · a month ago
After reading the article, I built a tool like that with sprites.dev. There's a websocket to communicate stdout and stderr to the client.

The web app submits the prompt, a sandbox starts on sprites.dev, and any Claude output in the sandbox gets piped back to the web app for display.

Not sure I can open source it as it's something I built for a client, but ask if you have any questions.
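
A minimal sketch of that piping layer in Python, assuming the third-party "websockets" package and a non-interactive "claude -p" style CLI inside the sandbox (both are assumptions on my part, not what sprites.dev ships):

    import asyncio
    import websockets  # third-party "websockets" package

    async def handle(ws):
        # The web app sends the prompt as the first message
        prompt = await ws.recv()
        # Assumption: a "claude -p <prompt>" style CLI is on the sandbox PATH
        proc = await asyncio.create_subprocess_exec(
            "claude", "-p", prompt,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.STDOUT,  # merge stderr into the same stream
        )
        # Pipe every line of agent output straight back over the socket
        async for line in proc.stdout:
            await ws.send(line.decode(errors="replace"))
        await proc.wait()
        await ws.close()

    async def main():
        # Single-argument handler per recent versions of the websockets library
        async with websockets.serve(handle, "0.0.0.0", 8765):
            await asyncio.Future()  # run forever

    if __name__ == "__main__":
        asyncio.run(main())

The real thing has more around it (auth, reconnects, persisting transcripts), but the core is just streaming stdout/stderr over a socket.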

martypitt · a month ago
This is a really great post - and what they've built here is very impressive.

I wonder if we're at the point where building and maintaining this yourself (assisted by an AI copilot) is now more cost-effective than an off-the-shelf product?

It feels like there's a LOT of moving parts here, but also it's deeply tailored to their own setup.

FWIW - I tried pointing Claude at the post and asking it to design an implementation (like the post said to do), and it struggled - but perhaps I prompted it wrong.

heffstaDug · a month ago
I had this exact idea. I pointed Codex at it, giving it context about our environment, which is pretty complex. It is struggling, but that is because even our dev experience where I work is not great and not documented, so that would need to improve before I can reliably get an agent setup as well integrated as this blog post describes.
falloutx · a month ago
This kind of project totally shows that Claude Code is nothing special; if anything, it lacks a lot of features. I hope every company develops a model-agnostic coding agent rather than using one tightly controlled by a single company.
willtemperley · a month ago
Yes. I don't think that one-size-fits-all is the future of coding agents. Different companies have different requirements. I would like to build specialised test harnesses that internal coding agents could use to iterate rapidly.

Also, inevitably these AI companies will start selling out data and become part of the surveillance state, if they're not already.

redman25 · a month ago
It's really a shame, because Anthropic had a real opportunity to show goodwill by open-sourcing Claude Code.
inssein · a month ago
Probably the best internal AI platform I've seen to date - incredible work.


yoav · a month ago
Fun marketing experiment, but you basically implemented Ralph Wiggum in the cloud.

Claude Code locally in a VM and/or with worktrees will 1-shot far better without burning cloud infra cash.

I’d bet this ends up wasting more money and time than it’s worth in practice.