I love Zed but this has all the hallmarks of something being totally rushed out the door.
It works off the Claude Code SDK, which means it doesn't support many of the built-in slash commands. It doesn't support /compact, which is 100% necessary: use this implementation enough and you'll eventually get a "Prompt too long" error with no ability to do anything about it. And since you can't see how far into the context window you are, it's a deal breaker: you have to start a fresh chat, and you might run out of room before you can even ask it to create a summary prompt for continuing.
There is no way to switch models as far as I can tell - I think it just picks up your default model - and there is no way to switch to Plan mode, which has become absolutely crucial to my workflow.
I didn't see Zed picking up on problems reported in the IDE; it defaulted to running 'tsc -b' in my directories.
At this point it's better to run a terminal inside Zed and work from there. The official response in the Zed Discord has been "talk to your local Anthropic rep" to get them to support Zed's Agent Client Protocol (ACP).
The agent model came out very recently; I've been following the GitHub issue over the past few days and you can see it was rushed out. But I don't see anything wrong with that: a lot of AI features are being rushed out, and slash commands and the like are small things to add once the foundation is there.
The model is usually so confused after a /compact that I also prefer a /clear.
I set up my directives to maintain a work log for all work that I do. I instruct Claude Code to maintain a full log of the conversation, all commands executed including results, all failures as well as successes, all learnings and discoveries, as well as a plan/task list including details of what's next. When context is getting full, I do a /clear and start the new session by re-reading the work log and it is able to jump right back into action without confusion.
Work logs are great because the context becomes portable - you can share it between different tools or engineers and can persist the context for reuse later if needed.
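If anyone wants to steal this setup, the directive doesn't need to be fancy. Something along these lines in CLAUDE.md works (the file name and exact wording here are just an illustration, adapt to taste):

```
## Work log

Maintain docs/WORKLOG.md for this project:
- Append every command you run and a one-line summary of its result.
- Record failures and dead ends as well as successes.
- Note learnings and discoveries about the codebase.
- Keep a running plan / task list with what comes next.
Update the log before ending any response that changed files.
```

After a /clear, opening the next session with "read docs/WORKLOG.md and continue" is usually enough for it to pick up where it left off.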
I notice when I'm getting close, and I tell it to document the current state into an .md file. Then I hit /clear and @ the new file.
This is probably very similar to /compact, except that I have a lot of control over the resulting context: I can edit it, /clear again, and retry if I run into an issue.
Yeah I was initially excited here, but it feels more like a demonstration of what's possible rather than a working tool.
I found the interface very nice but quickly ran up against limitations, for example on prompt length (it wasn't that long). I am used to being able to give detailed instructions, or even paste in errors/tracebacks.
I'll check back in in a few months.
One thing that still suffers is AI autocomplete. While I tried Zed's own solution and supermaven (now part of Cursor), I still find Cursor's AI autocomplete and predictions much more accurate (even pulling up a file via search is more accurate in Cursor).
I am glad to hear that Zed got a round of funding. https://zed.dev/blog/sequoia-backs-zed This will go a long way to creating real competition to Cursor in the form of a quality IDE not built on VSCode
I was somewhat surprised to find that Zed still doesn't have a way to add your own local autocomplete AI using something like Ollama. Something like Qwen 2.5 coder at a tiny 1.5b parameters will work just fine for the stuff that I want. It runs fast and works when I'm between internet connections too.
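To be clear, running the model itself is the easy part; something like this already works today with Ollama (the model tag is just the one I'd reach for):

```
# pull and try a small local coding model
ollama pull qwen2.5-coder:1.5b
ollama run qwen2.5-coder:1.5b "write a binary search in Rust"

# Ollama also serves an OpenAI-compatible API locally
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen2.5-coder:1.5b", "messages": [{"role": "user", "content": "hello"}]}'
```

The missing piece is the editor letting you point its edit prediction at that local endpoint.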
I'd also like to see a company like Zed allow me to buy a license of their autocomplete AI model to run locally rather than renting and running it on their servers.
I'd also pay for something in the 10-15b parameter range that used more limited training data focused almost entirely on programming documentation and books along with professional business writing. Something with the coding knowledge of Qwen Coder combined with the professionalism and predictability of IBM Granite 3. I'd pay quite a lot for such an agent (especially if it got updates every couple of months that worked in new documentation, bugfixes, github threads, etc. to keep the answers up-to-date).
It is indeed a fine-tuned Qwen2.5-Coder-7B.
> I'd also pay for something in the 10-15b parameter range that used more limited training data focused almost entirely on programming documentation and books along with professional business writing.
Unfortunately, pretraining on a lot of data (~everything they can get their hands on) is needed to give current LLMs their "intelligence" (for whatever definition of intelligence). Using less training data doesn't work as well for now. There is definitely not enough programming and business writing to train a good model only on that.
You mean a locally run OpenAI API compatible server?
https://huggingface.co/srisree/nano_coder
I use Cursor solely for the agent mode and do all my editing in a proper IDE, meaning Jetbrains products.
I genuinely don't understand why one would want AI autocomplete. Deterministic autocomplete is amazing, but AI autocomplete completely breaks my flow. Even just the few seconds of lag absolutely drives me nuts, and then it is often close to what I wanted but not exactly what I wanted. Either I am in control or the generative AI is, but mixing both feels so wrong.
I am happy people find use for the autocomplete but ugh I really don't get how they can stomach it. Maybe it is for people that are not good at typing or something.
Same sentiment for me. I barely use the agent, but love their autocomplete. Though I sometimes hear people say that GH Copilot has largely caught up on this front. Can anyone speak to that? I haven’t compared them recently.
If performance were equal, I’d strongly consider going back to GH Copilot just because I don’t love my main IDE being a fork. I occasionally encounter IDE-level bugs in Cursor that are unrelated to the AI features. Perhaps they’re in the upstream as well, but I always wonder if a. there will be a delay in merging fixes or b. whether the fork is introducing new bugs. Just an inherent tradeoff I guess of forking a complex codebase.
I don't know, I think it's a tie. I can have the agent do some busy work or refactoring while I'm writing code with the autocomplete. I can tell it how I want a file split up or how I want stuff changed, and tell it that I'll be making other changes and where. It's smart enough to ignore me and my work while it keeps itself busy with another task. Sort of the best of both worlds. Right now I have it replacing DraftJS with another library while I'm working on some feature requests.
I feel like this is the big divide: some people have no use for agents and swear by autocomplete, while others find the autocomplete a little annoying/not that useful and swear by agents.
For me my aha moment came with Claude Code and Sonnet 4. Before that AI coding was more of a novelty than actually useful.
I have recently been using Zed much more than Cursor. However, the autocomplete is literally the only thing missing, and when dealing with refactors or code with tons of boilerplate, it's just unbeatable. Eagerly awaiting a better autocomplete model so I can finally ditch Cursor.
I find Zed has some really frustrating UX choices. I’ll run an operation and it will either fail quietly, or be running in the background for a while with no indication that it is doing so.
Does it really? At the end of the day I need it to do my job. Ideal values don't help me do my job. So I choose the editor best suited to me and the features I need. And that's not Zed at the moment.
This is simply not true… that’s the problem. As much as I like Zed, using it for the sake of not being an electron app doesn’t make any sense when Cursor’s edit prediction adds so much value. I’m not starved of resources and can run Cursor just fine – as far as Electron apps go VS Code is great, performant enough. I value productivity. I’ll very happily drop Cursor for Zed the second edit prediction is comparable. I’m eagerly waiting.
I wonder if Augment [1] are working on a Zed plugin.
I've been using Augment for more than a year in Jetbrains IDEs, and been very impressed by it, both the autocomplete and the Cursor-style agent. I've looked at Cursor and couldn't figure out why anyone needed to use a dedicated IDE when Augment exists as a plugin. Colleagues who have used Cursor have switched to Augment and say it's better.
Seems to me like Augment is an AI tool flying under most people's radar; not sure why it's not all over Hacker News.
[1] https://www.augmentcode.com/
>One thing that still suffers is AI autocomplete. While I tried Zed's own solution and supermaven (now part of Cursor), I still find Cursor's AI autocomplete and predictions much more accurate (even pulling up a file via search is more accurate in Cursor).
It's not only the autocomplete. I've never had any issue with Cursor, while Zed often panicked, crashed and behaved inconsistently (the login indicator would flicker between states while you were logged in and vice versa, clicking some menus would crash it, and similar annoyances). Another strange thing I've observed is the reminder in the UI that rating an AI prompt would send your _entire chat history_ to Zed, which might be a major red flag for many people. One could accidentally rate it without being aware of that, and then Zed has access to large and potentially sensitive parts of your company's code - I can't imagine any company being happy with that.
>I am glad to hear that Zed got a round of funding. https://zed.dev/blog/sequoia-backs-zed
There are plenty of great VCs out there, but going with Sequoia will definitely come with some unpleasant consequences down the line.
>This will go a long way to creating real competition to Cursor in the form of a quality IDE not built on VSCode
There are many "real competitors" to Cursor, like Windsurf, (Neo-)Vim, Helix, Emacs, Jetbrains. It's also worth being aware that not everybody is too excited about letting AI slop be the dominant part of their work. Some people prefer sprinkling a little AI here and there, instead of letting it do pretty much everything.
"even pulling up a file via search is more accurate in Cursor"
Huh? It sometimes takes like 40s to find a file with the fuzzy search for me. In that time I'm going to the terminal and running a "find" command with lots of * before I get any result in Cursor.
I want to try Zed, but the Helix mode seems quite young. Vim mode sounds good, but I just can't move away from Helix-style editing. (Oh, and of course, my own modifications to Helix's input config.)
My difficulty in finding editors that fit my desired input scheme kind of reminds me of the old pre-LSP days, where you'd choose an editor based on its language features. I wonder if we need some sort of common editor interface to allow these sorts of text-editing primitives to work in new editors, as it seems to be a considerable source of friction.
I agree, I've fantasized about an editor with a truly pluggable editing model which is decoupled from the other parts.
Yi was kind of designed like this, I believe. You could compile in an emacs-like model, a vim-like model, or presumably make your own model.
I've used Helix and Kakoune in addition to Emacs and Vim, but dealing with the limitations/featureset/plugin treadmill gets a little tiring.
I have been following Zed, and it seems that they have rearchitected things to enable adding Helix mode and making the editing model a bit more modular, but it's still fairly new. They are fixing bugs pretty quickly. I will have to try it again.
They have a nice discussion here:
https://github.com/zed-industries/zed/discussions/6447
They reference Ki, which also looks cool, and they point out some of Helix's inconsistencies in their comparison: https://ki-editor.github.io/ki-editor/docs/comparisons/
I preferred Kakoune to Helix (it was more consistent). But to your point, being able to swap these things out more easily would let you choose an editor based on features, and not trade off between features and an ergonomic editing model.
Ironically you can use Ki inside of VSCode (and I know you can use Vim that way too), but VSCode is so darn bloated and slow...
The truly pluggable editor is emacs.
I too spent months trying out neovim, then emacs, then finding helix. I spent a year on helix, then Zed, because I would rather have something more complete, and brought with me all I could of helix's modal editing.
But emacs. Emacs is the one that can truly become anything you like. And with lsp and treesitter finally being in it, I've come to my senses and started building my helix in it.
It’s exciting that Zed even has a Helix mode. That was a big moment for Helix.
Last time I tried it, though, I immediately ran into parts of the keymap that hadn’t been translated yet. I’m already at my limit of tools in beta mode/built from my own fork, so I switched back to Vim mode – where the team is on record explaining their thorough testing methodology.
As a Helix user of two years, I sometimes wonder if I actually like the Helix keymap (certainly some parts are nicer than Vim’s) or if I simply tolerate it because of how nice it is to get a polished TUI IDE out of the box. Either way, my muscle memory expects Helix mode now, rather than Vim.
Neovim can run in server mode, where other editors send it user input and then Neovim sends back the buffer. This is how I use vim in VSCode — not the Vim extension but the Neovim extension, which uses the real Neovim, which of course reads my Neovim config and plugins and makes them available to VSCode. So it seems like helix “just” needs a server mode, and then you can integrate it into any editor.
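If anyone wants to poke at the mechanism, it's all stock Neovim flags; roughly (the socket path here is arbitrary):

```
# start a headless Neovim instance listening on a socket
nvim --headless --listen /tmp/nvim.sock &

# send it keystrokes from another process
nvim --server /tmp/nvim.sock --remote-send ':edit /tmp/scratch.txt<CR>ihello<Esc>'

# or attach a full terminal UI to that same running instance
nvim --server /tmp/nvim.sock --remote-ui
```

That's the rough shape of what the VSCode extension does with a real embedded nvim process, which is why your actual config and plugins come along for the ride.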
Helix seems to have good LSP support from what I can tell? The only language I use at $WORK that doesn't have full support is GraphQL which lacks auto indent.
If you want to try something similar to Helix in emacs, there's meow-mode. While I'm not a helix user myself, it shouldn't be too difficult to get meow to work like helix.
I thought the same, but I gave Helix a shot for fun a couple years ago and never looked back. It really does feel better/more ergonomic, but the greatest benefit is that almost everything you need is built in. I spent way too much time fiddling with Vim and NeoVim configs.
For me, definitely. Plus it's quite the muscle memory switch. I switched to Kakoune ages ago, and then eventually Helix because i liked its design a bit more.
I had the exact same problem. I was so stoked to try helix mode and then realized it obviously doesn’t have any of my backspace shortcuts. Duh, but still… back to helix!
I like Zed in concept.
I like Zed in the architectural and foundational aspects.
I want more tools like Zed to exist.
But, I find Zed challenging to adopt due to random nuances. First, settings management is a mixed bag, and sometimes I just want a quick way to open "settings.json" from the settings pane without fussing around. Then I'd like "settings.json" to stay open (reopen) on a restart of Zed. Then I'd like the ability to use an LLM that doesn't have native tool calling support; Zed seems to be the only app I've used without a workaround for that. Then I'd like the UI to be a little easier to navigate as a new user; it feels a bit scattered and overwhelming at times.
I haven't used Zed much and I may give it another shot (soon), but it very much feels like a tool built by engineers for engineers... Which is great for power users, but seems not so great for new adopters.
I don't think the shortcomings are a blocker, but they are the reason I haven't adopted Zed. The shortcomings are just enough for me to take a step back and say "maybe I'll try again later".
I spent a while trying to set it up, as I share your general take on their ethos. Personally, I'm okay with a 'power user'-focused text editor, even! But the relative lack of syntax highlighting options got me to give up. Maybe I'm just spoiled from SublimeText's dope, complex, extensible system for specifying "contexts" in themes, but Zed was just nowhere near enough for me.
The keybinding system is also nuts if you turn on Vim mode, but I think I'd eventually get used to that. But functions need to be a different color than arguments, which need to be a different color than local variables... Just non-negotiable.
I look forward to trying it again sometime soon! The AI features seem rad, this included.
Zed does have a way to run LLMs without tool calling. From the agent pane, in the menu, select “new text thread”. I believe there’s a keyboard shortcut but I’m on my phone right now.
I'll take another look but from what I perceived all attempts to start a thread included tool calling in the payload.
I couldn't seem to get any message through without tool calling instructions in the payload. What you're describing sounds exactly like what I attempted.
I tried over six different variations of model configs, with restarts of Zed in between. The documentation and what Zed tries to configure are different as well; the fields don't match up with the built-in type checking. I tried "openai" with the endpoint configured, "openai_compatible", and even "openrouter", hoping the REST signatures would match well enough. Each was configured with various fields to turn tool calling off, and every single request that hit the REST server still had tool calling.
That's unfortunate. I use Zed and I'm moving towards containerising my dev environment (using SSH remote dev to connect Zed to the container) because all this agentic stuff seems like a security nightmare. At the very least I want to restrict the blast radius to my repos dir.
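The container side doesn't need to be clever; even a throwaway sketch like this (image and paths are placeholders) keeps the agent's write access scoped to a single repo:

```
# dev container that only sees one repo, not $HOME
docker run --rm -it \
  --name scratch-dev \
  -v "$HOME/repos/myproject:/workspace" \
  -w /workspace \
  ubuntu:24.04 bash

# inside: install the toolchain, run sshd or the agent CLI, etc.
```

Zed's SSH remoting then points at that container instead of the host.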
I would give them a week or less to support this. They've been improving the debugger so fast, it will take them no time to support remote claude code connections.
Remote dev isn't very good in Zed, unfortunately. For some reason they chose not to apply the local editor's settings to a remote session by default. Every remote has its own config file. Questionable choice, imo
IIRC it doesn’t work in Cursor either, and their own AI sidebar was getting weird issues too. Mostly switched back to VSCode for SSH workflows because of that.
Really love Zed after working in it full time for a month now, and I pay for their $20 sub tier to support them even though I rarely use the LLM integration beyond the auto-complete.
At first I was very dismissive of it due to being Apple-first, but they've turned it around with really good Linux support, and it seems like Windows is coming soon as well!
Also love Zed, but sigh, it's VC funded. We all know how this is going to end. Best Vim mode ever implemented in a (non-Vim) app. I use it as my 2nd editor (most of the time I'm in Jetbrains products).
I just hope I'm wrong about the medium-term impact of the VC funding, but rushing out AI, AI, AI seems to be a sign of that, rather than fixing fundamental issues that remain, such as the ugly font rendering.
Agreed, though being open source is no panacea, as we have seen from countless other projects. But it does mitigate some of the concern about investing in an editor and its ecosystem and getting rugpulled.
Give the agent as much context as possible and let it go, review and correct the implementation, let it go again, finish it off…
I just find the autocomplete a little annoying in my workflow, especially with the local self-hosted models I need to use at work.
Claude Code on corporate approved AWS Bedrock account.
Glad it's working for you but I think you might be the only one!
I’ll keep an eye on this ‘proper’ Zed support for sure, although the current setup is working just fine so I might wait for v0.2.
I've been wanting to learn Helix more and have been using it for small edits, but I hadn't seen a Helix mode in any editor yet.
I assume that keybind is also configurable?
- I don't want to have to constantly accept edits when auto-accept is on. The point of auto-accept is that it auto-accepts. Seems like a bug.
- It'd be great if I could go back to a specific message and delete the ones I don't want, similar to the CLI version.
- Where is Plan Mode? Maybe I just couldn't figure out how to get to it.
- I can't easily see Background Tasks.
- How do I change models?
- How do I create new sessions (via /new for instance)? Why is `/clear` not supported?
- I don't want to see the entirety of the edits in the terminal. Can they be collapsed by default? Or maybe show a preview?
https://x.com/sridca/status/1963271904384401886
- Zed: VC funded open source
- Sublime Text: indie closed source
Neither is ideal, but I guess we all know why.