After a brief play last night, the biggest aider feature I miss is finer control over the context window -- saying "/clear" to restart the conversation from scratch, or adding and removing files as they become relevant or irrelevant. It's not clear how much of the context window files consume, or how long they stay there.
The other question I have is whether you use Anthropic's "prompt caching" [1] to reduce the cost of long conversations?
[1] https://docs.anthropic.com/en/docs/build-with-claude/prompt-...
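(For context, prompt caching in [1] is opt-in per request: you mark the long, stable prefix of the prompt with a cache_control block, and repeated sends of that prefix are billed at a reduced rate. A rough sketch with the TypeScript SDK; the model id and context string are placeholders, not anything from opencode itself.)

    import Anthropic from "@anthropic-ai/sdk";

    const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

    // placeholder for the big, stable prefix: system prompt + file context
    const longSystemPromptAndFileContext =
      "You are a coding agent...\n<contents of the files in context>";

    const response = await client.messages.create({
      model: "claude-3-5-sonnet-latest", // placeholder model id
      max_tokens: 1024,
      system: [
        {
          type: "text",
          text: longSystemPromptAndFileContext,
          // everything up to this block gets cached and reused on later turns
          cache_control: { type: "ephemeral" },
        },
      ],
      messages: [{ role: "user", content: "Refactor the parser module." }],
    });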
and nothing expires out of the session until you get near the context-window max, at which point we run a compaction process
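a minimal sketch of what a compaction step like that can look like (made-up names, not the actual opencode code): estimate the tokens in the conversation, and once you're near the limit, fold the older turns into a summary message and keep only the recent ones

    type Msg = { role: "user" | "assistant" | "system"; content: string };

    // crude token estimate: roughly 4 characters per token
    const estimateTokens = (msgs: Msg[]) =>
      Math.ceil(msgs.reduce((n, m) => n + m.content.length, 0) / 4);

    // when near the context-window max, summarize older turns and keep recent ones
    async function compact(
      msgs: Msg[],
      maxTokens: number,
      summarize: (older: Msg[]) => Promise<string>, // e.g. an extra LLM call
      keepRecent = 6,
    ): Promise<Msg[]> {
      if (estimateTokens(msgs) < maxTokens * 0.9) return msgs;
      const older = msgs.slice(0, -keepRecent);
      const recent = msgs.slice(-keepRecent);
      const summary = await summarize(older);
      return [
        { role: "system", content: `Summary of earlier conversation:\n${summary}` },
        ...recent,
      ];
    }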
we're a little over a month into development and have a lot on our roadmap
the CLI is a client/server model - the TUI is our initial focus, but the goal is to build alternative frontends: mobile, web, desktop, etc
we think of our task as building a very good code review tool - you'll see more of that side in the coming weeks
can answer any questions here
One other thing that would be neat to make more visible: what kind of prompts and tools are at the heart of this agent?
I found a bunch of tools here. Haven't found an overarching prompt yet. https://github.com/sst/opencode/tree/dev/packages/opencode/s...
but we're going to make all this very configurable next week
other than the focus on TUI design, does this have any advantage over Claude Code, Aider, or Gemini when using the same model?
we're very focused on UX and less so on LLM performance. we use all the same system prompts/config as claude code
that said, people do observe better performance because of out-of-the-box LSP support - edit tools return LSP errors and the LLM immediately fixes them
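roughly the shape of that loop (hypothetical helper names, not the real tool code): the edit tool applies the change, asks the language server for diagnostics on the touched file, and returns them in the tool result so the model can correct them on the next turn

    type Diagnostic = { line: number; message: string; severity: "error" | "warning" };

    interface Lsp {
      diagnostics(file: string): Promise<Diagnostic[]>; // assumed wrapper over an LSP client
    }

    async function editTool(
      file: string,
      applyEdit: (file: string) => Promise<void>, // writes the model's patch to disk
      lsp: Lsp,
    ): Promise<string> {
      await applyEdit(file);
      const problems = await lsp.diagnostics(file);
      const errors = problems.filter((d) => d.severity === "error");
      if (errors.length === 0) return `Edited ${file}. No LSP errors.`;
      // surfacing errors in the tool result lets the LLM fix them immediately
      return [
        `Edited ${file}, but the language server reports ${errors.length} error(s):`,
        ...errors.map((d) => `  line ${d.line}: ${d.message}`),
      ].join("\n");
    }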
As far as I know, Next works without hiccups if you deploy to containers. However, that’s not our case, as we use Lambdas on AWS. We’ve been using OpenNext since the early versions. I must add that I’ve followed a lot of libraries on Discord, and among all those I’ve joined (PayloadCMS coming second), there’s none as helpful, friendly, and open to discussing issues as OpenNext.
We serve millions of pages per day at TelevisaUnivision, and we have nearly 5 million pages indexed on Google. Since migrating to RSC nearly three years ago (we started with the betas), we now pay only 10% of what we previously did on AWS, and we’ve transformed almost all of our poor-performing pages into fast ones in Google Search Console. We cache significantly more now and don’t follow the typical caching conventions of Next/OpenNext—we use ElastiCache with Redis. Nonetheless, the framework and library have enabled us to do this and even allowed us to use a different CDN (currently Fastly, previously Akamai).
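For anyone curious, the hook for this kind of thing is Next's cacheHandler option (OpenNext has a similar override point). A rough sketch of a Redis-backed handler using ioredis; the keys, TTLs, and tag handling are simplified placeholders, not our production code:

    // cache-handler.ts - sketch of a Redis-backed incremental cache, in the
    // general shape Next.js expects from the `cacheHandler` config option
    import Redis from "ioredis";

    const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

    export default class RedisCacheHandler {
      async get(key: string) {
        const raw = await redis.get(`next:${key}`);
        return raw ? JSON.parse(raw) : null;
      }

      async set(key: string, data: unknown, ctx: { revalidate?: number | false }) {
        const value = JSON.stringify({ value: data, lastModified: Date.now() });
        if (ctx?.revalidate) {
          await redis.set(`next:${key}`, value, "EX", ctx.revalidate);
        } else {
          await redis.set(`next:${key}`, value);
        }
      }

      async revalidateTag(tag: string) {
        // simplified: real tag invalidation needs a tag -> keys index
        await redis.del(`next:tag:${tag}`);
      }
    }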
We trigger our deployments from GitHub Actions using SST. It’s opinionated, and one of the individuals behind it isn’t the friendliest, but it works—so I respect it for that.
I spent at least an hour trying to get OpenCode to use a local model and then found a graveyard of PRs begging for Ollama support or even the ability to simply add an OpenAI endpoint in the GUI. I guess the maintainers simply don't care. Tried adding it to the backend config and it kept overwriting/deleting my config. Got frustrated and deleted it. Sorry but not sorry, I shouldn't need another cloud subscription to use your app.
Claude Code you can sort of get to work with a bunch of hacks, but it involves setting up a proxy, isn't supported natively, and the tool calling ends up somewhat messed up.
Warp seemed promising, until I found out the founders would rather alienate their core demographic than support local models, despite ~900 votes on the GH issue asking for them: https://github.com/warpdotdev/Warp/issues/4339. So I deleted their crappy app; even Cursor provides some basic support for an OpenAI endpoint.
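For reference, "just let me point at an OpenAI endpoint" is a tiny ask: Ollama already serves an OpenAI-compatible API locally, so any tool that lets you override the base URL can use it. A minimal sketch with the openai SDK; the model name is whatever you've pulled locally:

    import OpenAI from "openai";

    // Ollama exposes an OpenAI-compatible API at /v1 on its default port
    const client = new OpenAI({
      baseURL: "http://localhost:11434/v1",
      apiKey: "ollama", // required by the SDK, ignored by Ollama
    });

    const completion = await client.chat.completions.create({
      model: "qwen2.5-coder:7b", // placeholder: any model you have pulled locally
      messages: [{ role: "user", content: "Write a function that reverses a string." }],
    });

    console.log(completion.choices[0].message.content);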