Readit News
Posted by u/skarat 3 months ago
Ask HN: Cursor or Windsurf?
Things are changing so fast with these VSCode forks that I'm barely able to keep up. Which one are you guys using currently? How does the autocomplete, etc. compare between the two?
danpalmer · 3 months ago
Zed. They've upped their game in the AI integration and so far it's the best one I've seen (external from work). Cursor and VSCode+Copilot always felt slow and janky; Zed is much less janky and feels like pretty mature software, and I can just plug in my Gemini API key and use that for free/cheap instead of paying for the editor's own integration.
vimota · 3 months ago
I gave Zed an in-depth trial this week and wrote about it here: https://x.com/vimota/status/1921270079054049476

Overall Zed is super nice and the opposite of janky, but I still found a few defaults were off, and Python support was still missing in a few key ways for my daily workflow.

sivartnotrab · 3 months ago
ooc what python support was missing for you? I'm debating Zed
submeta · 3 months ago
Consumes lots of resources on an M4 MacBook. Would love to test it though, if it didn't freeze my MacBook.

Edit:

With the latest update to 0.185.15 it works perfectly smooth. Excellent addition to my setup.

_bin_ · 3 months ago
I'll second the zed recommendation, sent from my M4 macbook. I don't know why exactly it's doing this for you but mine is idling with ~500MB RAM (about as little as you can get with a reasonably-sized Rust codebase and a language server) and 0% CPU.

I have also really appreciated something that felt much less janky, had better vim bindings, and wasn't slow to start even on a very fast computer. You can completely botch Cursor if you type really fast. On an older mid-range laptop, I ran into problems with a bunch of its auto-pair stuff of all things.

aquariusDue · 3 months ago
In my case this was the culprit: https://github.com/zed-industries/zed/issues/13190 otherwise it worked great mostly.
enceladus06 · 3 months ago
Are you running an Ollama local model or one of the Zed LLMs?
brianzelip · 3 months ago
Here's a recent Changelog podcast episode about the latest with Zed and its new agentic feature, https://changelog.com/podcast/640.
xmorse · 3 months ago
I am using Zed too, it still has some issues but it is comparable to Cursor. In my opinion they iterate even faster than the VSCode forks.
DrBenCarson · 3 months ago
Yep not having to build off a major fork will certainly help you move fast
charlie0 · 3 months ago
Why are the Zed guys so hung up on UI rendering times? I don't care that the UI can render at 120 FPS if it takes 3 seconds to get input from an LLM. I do like the clean UI though.
allie1 · 3 months ago
I just wish they'd release a debugger already. Once it's done I'll be moving to them completely.
frainfreeze · 3 months ago
Zed doesn't even run on my system and the relevant github issue is only updated by people who come to complain about the same issue.
Aeolun · 3 months ago
Don’t use windows? I don’t feel like that’s a terribly uncommon proposition for a dev.
KomoD · 3 months ago
Windows? If so, you can run it, you just have to build it.
wellthisisgreat · 3 months ago
Does it have Cursor’s “tab” feature?
Aeolun · 3 months ago
Sort of. The quality is night and day different (Cursor feels like magic, Zed feels like a chore).

nlh · 3 months ago
I use Cursor as my base editor + Cline as my main agentic tool. I have not tried Windsurf, so alas I can't comment there, but the Cursor + Cline combo works brilliantly for me:

* Cursor's Cmd-K edit-inline feature (with Claude 3.7 as my base model there) works brilliantly for "I just need this one line/method fixed/improved"

* Cursor's tab-complete (née Supermaven) is great and better than any other I've used.

* Cline w/ Gemini 2.5 is absolutely the best I've tried when it comes to a full agentic workflow. I throw a paragraph of an idea at it and it comes up with a totally workable and working plan & implementation.

Fundamentally, and this may be my issue to get over and not actually real, I like that Cline is a bring-your-own-API-key system and an open source project, because their incentives are to generate the best prompt, max out the context, and get the best results (because everyone working on it wants it to work well). Cursor's incentive is to get you the best results...within their budget ($0.05 per request for the max models, and your monthly spend/usage allotment for the others). That means they're going to trim context, drop things, or use other clever cost-saving techniques for Cursor, Inc. That's at odds with getting the best results, even if it only adds minor friction.

machtiani-chat · 3 months ago
Just use Codex and machtiani (mct). Both are open source; machtiani was open sourced today. Mct can find context in a haystack, and it's efficient with tokens. Its embeddings are locally generated because of its hybrid indexing and localization strategy. No file chunking. No internet, if you want to be hardcore. Use any inference provider, even local. The demo video shows solving an issue in the VSCode codebase (133,000 commits and over 8,000 files) with only Qwen 2.5 Coder 7B. But you can use anything you want, like Claude 3.7. I never max out context in my prompts, not even close.

https://github.com/tursomari/machtiani

asar · 3 months ago
This sounds really cool. Can you explain your workflow in a bit more detail? i.e. how exactly you work with codex to implement features, fix bugs etc.
evnix · 3 months ago
How does this compare to aider?
richardreeze · 3 months ago
How much do you (roughly, per month) pay for Gemini's API? That's my main concern with switching to "bring your own API keys" tools.
abhinavsharma · 3 months ago
Totally agree on aligning with the one with clearest incentives here
masterjack · 3 months ago
I also like Cline since it being open source means that while I’m using it I can see the prompts and tools and thus learn how to build better agents.
pj_mukh · 3 months ago
Cline's agent work is better than Cursor's own?
shmoogy · 3 months ago
Cursor does something with truncating context to save costs on their end; you don't get the same with Cline because you're paying for each transaction, so depending on complexity I find Cline works significantly better.

I still use Cursor chat with agent mode though, but I've always been indecisive. Like the others said, it's nice to see how Cline behaves to assist with creating your own agentic workflows.

fastball · 3 months ago
For the agentic stuff I think every solution can be hit or miss. I've tried claude code, aider, cline, cursor, zed, roo, windsurf, etc. To me it is more about using the right models for the job, which is also constantly in flux because the big players are constantly updating their models and sometimes that is good and sometimes that is bad.

But I daily drive Cursor because the main LLM feature I use is tab-complete, and here Cursor blows the competition out of the water. It understands what I want to do next about 95% of the time when I'm in the middle of something, including comprehensive multi-line/multi-file changes. GitHub Copilot, Zed, Windsurf, and Cody aren't at the same level imo.

solumunus · 3 months ago
If we're talking purely autocomplete, I think Supermaven does it best.
fastball · 3 months ago
Cursor bought Supermaven last year.

joelthelion · 3 months ago
Aider! Use the editor of your choice and keep your coding assistant separate. Plus, it's open source and will stay that way, so there's no risk of it suddenly becoming expensive or disappearing.
mbanerjeepalmer · 3 months ago
I used to be religiously pro-Aider. But after a while, those little frictions of flicking back and forth between the terminal and VS Code, and adding and dropping files from the context myself, wore down my appetite for it. The `--watch` mode is a neat solution but harms performance: the LLM gets distracted by deleting its own comment.

Roo is less solid but better-integrated.

Hopefully I'll switch back soon.

fragmede · 3 months ago
I suspect that if you're a vim user those friction points are a bit different. For me, Aider's git auto-commit and /undo command are what sell it at this current juncture of technology. OpenHands looks promising, though rather complex.
Oreb · 3 months ago
Approximately how much does it cost in practice to use Aider? My understanding is that Aider itself is free, but you have to pay per token when using an API key for your LLM of choice. I can look up for myself the prices of the various LLMs, but it doesn't help much, since I have no intuition whatsoever about how many tokens I am likely to consume. The attraction of something like Zed or Cursor for me is that I just have a fixed monthly cost to worry about. I'd love to try Aider, as I suspect it suits my style of work better, but without having any idea how much it would cost me, I'm afraid of trying.
m3adow · 3 months ago
I'm using Gemini 2.5 Pro with Aider and Cline for work. I'd say when working for 8 full hours without any meetings or other interruptions, I'd hit around $2. In practice, I average at $0.50 and hit $1 once in the last weeks.
anotheryou · 3 months ago
Depends entirely on the API.

With deepseek: ~nothing.

BeetleB · 3 months ago
It will tell you how much each request cost you as well as a running total.

Use /tokens to see how many tokens it has in its context for the next request. You manage it by dropping files and clearing the context.

aitchnyu · 3 months ago
Yup, choose your model and pay as you go, like commodities such as rice and water. The others played games with me to minimize context and use cheaper models (three modes, daily credits, steering away from the most expensive model, etc.).

Also, the --watch mode is the most productive interface for using your editor: no need for extra textboxes with robot faces.

fragmede · 3 months ago
fwiw, Gemini-*, which is available in Aider, isn't pay-as-you-go (PAYG) but postpaid: you get a bill at the end of the month, rather than the OpenAI-style model of charging up credits before you can use the service.
jbellis · 3 months ago
I love Aider, but I got frustrated with its limitations and ended up creating Brokk to solve them: https://brokk.ai/

Compared to Aider, Brokk

- Has a GUI (I know, tough sell for Aider users but it really does help when managing complex projects)

- Builds on a real static analysis engine, so its equivalent of the repomap doesn't get hopelessly confused in large codebases

- Has extremely useful git integration (view git log, right click to capture context into the workspace)

- Is also OSS and supports BYOK

I'd love to hear what you think!

evnix · 3 months ago
Apart from the GUI, what does it improve on compared to Aider?
benterix · 3 months ago
For daily work - neither. They basically promote the style of work where you end up with mediocre code that you don't fully understand, and with time the situation gets worse.

I get much better results by asking specific questions to a model that has a huge context (Gemini) and analyzing the generated code carefully. That's the opposite of the style of work you get with Cursor or Windsurf.

Is it less efficient? If you are paid by LoC, sure. But for me, quality and long-term maintainability are far more important. And the Tab autocomplete feature especially was driving me nuts, being wrong roughly half the time and basically just interrupting my flow.

mark_l_watson · 3 months ago
I agree! I like local tools, mostly, use Gemini 2.5 Pro when actually needed and useful, and do a lot of manual coding.
scottmas · 3 months ago
But how do you dump your entire code base into Gemini? Literally all I want is a good model with my entire code base in its context window.
mark_l_watson · 3 months ago
I wrote a simple Python script that I run in any directory; it gathers the context I usually need and copies it to the clipboard/paste buffer. A short custom script lets you adjust it to your own needs.
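A minimal sketch of such a script, assuming typical choices (the extension and ignore lists are my own guesses; adjust them to your stack):

```python
import subprocess
from pathlib import Path

EXTS = {".py", ".rs", ".ts", ".md"}        # file types worth including
SKIP = {".git", "node_modules", "target"}  # directories to ignore

def gather(root: str = ".") -> str:
    """Concatenate matching source files, each prefixed with its path."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if any(seg in SKIP for seg in path.parts):
            continue
        if path.is_file() and path.suffix in EXTS:
            parts.append(f"### {path}\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)

def copy_to_clipboard(text: str) -> None:
    # pbcopy is macOS-only; swap in xclip or wl-copy on Linux
    subprocess.run(["pbcopy"], input=text.encode(), check=True)
```

Running `copy_to_clipboard(gather())` from the repo root puts the whole context in the paste buffer, ready to drop into Gemini's chat window.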
halfjoking · 3 months ago
Repomix can be run from the command line

https://github.com/yamadashy/repomix

benterix · 3 months ago
Legal issues aside (you are the legal owner of that code, or you've checked with one), and provided it's small enough, just ask an LLM to write a script to do so. If the code base is too big, you might have luck choosing the right parts. The right balance of inclusions and exclusions can work miracles here.
satvikpendem · 3 months ago
Cursor can index your codebase efficiently using vector embeddings rather than literally adding all your text files into context. Someone else mentioned machtiani here which seems to work similarly.
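The idea behind that kind of indexing can be shown with a toy sketch: chunks are embedded as vectors, and only the nearest ones are pulled into context. This is not Cursor's actual implementation; a bag-of-words count vector stands in for a learned embedding model here.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: word-count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, chunks: dict[str, str], k: int = 2) -> list[str]:
    # Rank indexed chunks by similarity to the query, return the best k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda name: cosine(q, embed(chunks[name])),
                    reverse=True)
    return ranked[:k]

chunks = {
    "auth.py": "def login(user, password): verify credentials and issue token",
    "billing.py": "def charge(card, amount): create invoice and charge card",
    "readme.md": "project setup instructions and contribution guide",
}
print(top_k("fix the login token bug", chunks, k=1))  # → ['auth.py']
```

Only the retrieved chunk goes into the prompt, which is why these tools can handle codebases far larger than any context window.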
pembrook · 3 months ago
For a time Windsurf was way ahead of Cursor in full agentic coding, but now I hear Cursor has caught up. I have yet to switch back to try Cursor again, but I'm starting to get frustrated with Windsurf being restricted to gathering context only 100-200 lines at a time.

So many of the bugs and poor results that it can introduce are simply due to improper context. When forcibly giving it the necessary context you can clearly see it’s not a model problem but it’s a problem with the approach of gathering disparate 100 line snippets at a time.

Also, it struggles with files over 800-ish lines, which is extremely annoying.

We need some smart deepseek-like innovation in context gathering since the hardware and cost of tokens is the real bottleneck here.

evolve2k · 3 months ago
Wait, are these 800 lines of code? Am I the only one seeing that as a major code smell? Assuming these are code files, the issue is not AI processing power but rather bread and butter coding practices related to file organisation and modularisation.
pembrook · 3 months ago
I agree if the point is to write code for human consumption, but the point of vibe coding tools like Windsurf is to let the LLMs handle everything with occasional direction. And the LLMs will create 2000+ line files when asking them to generate anything from scratch.

To generate such files and then not be able to read them is pure stupidity.

ThomasRedstone · 3 months ago
The people editing 800+ line files often didn't write them, legacy codebases often stink!

I've dealt with a few over the years with 30k+ line long files, always aiming to refactor that into something more sensible, but that's only possible over a long time.

kypro · 3 months ago
I agree, but I've worked with many people now who seem to prefer one massive file. Specifically Python and React people seem to do this a lot.

Frustrates the hell out of me as someone who thinks at 300-400 lines generally you should start looking at breaking things up.

falleng0d · 3 months ago
You can use the filesystem MCP and have it use the read-file tool to read the files in full on demand.
erenst · 3 months ago
I’ve been using Zed Agent with GitHub Copilot’s models, but with GitHub planning to limit usage, I’m exploring alternatives.

Now I'm testing Claude Code’s $100 Max plan. It feels like magic - editing code and fixing compile errors until it builds. The downside is I’m reviewing the code a lot less since I just let the agent run.

So far, I’ve only tried it on vibe coding game development, where every model I’ve tested struggles. It says “I rewrote X to be more robust and fixed the bug you mentioned,” yet the bug still remains.

I suspect it will work better for backend web development I do for work: write a failing unit test, then ask the agent to implement the feature and make the test pass.
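That test-first loop looks roughly like this (the function and test names are hypothetical, just to show the shape of the workflow):

```python
import re

# Step 1: write the failing test first; slugify() doesn't exist yet.
# Step 2: hand the test to the agent and ask it to make the test pass.
# Step 3: a passing implementation might come back looking like this:

def slugify(title: str) -> str:
    # Lowercase, collapse runs of non-alphanumerics into single hyphens.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Already-Clean ") == "already-clean"

test_slugify()
print("ok")  # → ok
```

The test pins down the behavior up front, so the agent's compile-run-fix loop has a concrete target instead of a vague prose description.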

Also, give Zed’s Edit Predictions a try. When refactoring, I often just keep hitting Tab to accept suggestions throughout the file.

energy123 · 3 months ago
Can you say more to reconcile "It feels like magic" with "every model I’ve tested struggles."?
erenst · 3 months ago
It feels like magic when it works and it at least gets the code to compile. Other models* would usually return broken code, especially when using a new release of a library. All the models use the old function signatures, but Claude Code then sees the compile error and fixes it.

Compared to Zed Agent, Claude Code is:

- Better at editing files. Zed would sometimes return the file content in the chatbox instead of updating it. Zed Agent also inserted a new function in the middle of an existing function.

- Better at running tests/compiling. Zed struggled with my nix environment, and I don't remember it entering the update code -> run code -> update code feedback loop.

With this you can leave Claude Code alone for a few minutes, check back and give additional instructions. With Zed Agent it was more of a constantly monitoring / copy pasting and manually verifying everything.

*I haven't tested many of the other tools mentioned here, this is mostly my experience with Zed and copy/pasting code to AI.

I plan to test other tools when my Claude Code subscription expires next month.

seabass · 3 months ago
Zed's agentic editing with Claude 3.7 + thinking does what you're describing testing out with the $100 Claude Code tool. Why leave the Zed editor and pay more to do something you can run for free/cheap within it instead?
victorbjorklund · 3 months ago
I'm with Cursor for the simple reason that it is in practice unlimited. Honestly, the slow requests after 500 per month are fast enough. Will I stay with Cursor? No; I'll switch the second something better comes along.
mdrzn · 3 months ago
Same. Love the "slow but free" model; I hope they can continue providing it. I love paying only $20/mo instead of paying by usage.

I've been building SO MANY small apps and web apps in recent months; best $20/mo ever spent.

k4rli · 3 months ago
20€ seems totally subsidized considering the amount of tokens. They're pricing cheaply to be competitive, but users will jump to the next tool when they inevitably hike the price.
xiphias2 · 3 months ago
I'm on Cursor with Claude 3.7.

Somehow other models don't work as well with it; "auto" is the worst.

Still, I hate it when it deletes all my unit tests to "make them pass".

didgeoridoo · 3 months ago
Or when it arbitrarily decides to rewrite half the content on your website and not mention it.

Or, my favorite: when you’ve been zeroing in on something actually interesting and it says at the last minute, “let’s simplify our approach”. It then proceeds to rip out all the code you’ve written for the last 15 minutes and insert a trivial simulacrum of the feature you’ve been working on that does 2% of what you originally specified.

$5 to anyone who can share a rules.md file that consistently guides Sonnet 3.7 to give up and hand back control when it has no idea what it’s doing, rather than churn hopelessly and begin slicing out nearby unrelated code like it’s trying to cut out margins around a melanoma.

geor9e · 3 months ago
I wish it was unlimited for me. I got 500 fast requests, about 500 slow requests, then at some point it started some kind of exponential backoff, and became unbearably slow. 60+ second hangs with every prompt, at least, sometimes 5 minutes. I used that period to try out windsurf, vscode copilot, etc and found they weren't as good. Finally the month refreshed and I'm back to fast requests. I'm hoping they get the capacity to actually become usably unlimited.
rvnx · 3 months ago
Cursor is acceptable because for the price it's unbeatable. Free, unlimited requests are great. But by itself, Cursor is not anything special. It's only interesting because they pay Claude or Gemini from their pockets.

Ideally, things like RooCode + Claude are much better, but you need an infinite-money glitch.

herbst · 3 months ago
On weekends, the slow requests are regularly faster than the paid requests.