shreezus · 2 years ago
I really like Cursor, however I think ultimately a good open-source alternative will likely overtake it soon.

Keep in mind Cursor is just a fork of VSCode, with AI features that are pretty much just embedded extensions. Their product is great, but many users would prefer bring-your-own-key support and the ability to select their own model providers.

kamaal · 2 years ago
>>Their product is great, but many users would prefer bring-your-own-key support and the ability to select their own model providers.

On the contrary: most enterprise users prefer to buy one package, not assemble the thing piecemeal.

A big reason VSCode won is that it provided a lot out of the box, saving users the round trip through the config-hell rabbit hole that modern vim/emacs ecosystems are.

If you want to sell developer tooling products, provide as much as possible out of the box.

People want to solve their problems using your tool. Not fix/build problems/missing features in your tool.

wyclif · 2 years ago
>saving users the round trip through the config-hell rabbit hole that modern vim/emacs ecosystems are

That used to be a valid problem, but times have changed. For instance, Neovim now has things like kickstart.nvim and lazy.nvim that solve this. I've been test-driving LazyVim for the past month or so, and I don't have to configure anything anymore because the updates and plugins are sane choices.

shreezus · 2 years ago
With that argument, it would be reasonable to assume Microsoft will just clone the key features (Composer etc) and bake them into the next generation of Copilot on VSCode.

Microsoft has its top-tier distribution advantages, plus they can build native integrations with Github/Azure etc to take things to a new level - think one-click deployments built into VSCode.

In fact, given the rising popularity of Replit/Vercel/etc I wouldn't be surprised if Microsoft is cooking as we speak.

rafaelmn · 2 years ago
A big reason copilot spread so fast is because people already trust GitHub with their code - enabling AI doesn't really modify risk. If GH wanted to break TOS and train on your code they could, even without copilot, if you're using GH for private repos.

Any other third party needs to get vetted/trusted - I would be the first to doubt an AI startup.

GardenLetter27 · 2 years ago
We really need a model that can integrate with the LSP, though, so it never generates LSP-invalid code.
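A minimal sketch of what that integration could look like, assuming a validate-and-retry loop (all names here are invented stand-ins, not any real tool's API):

```typescript
// Hypothetical sketch: only surface a completion once the language
// server reports no diagnostics for it, feeding errors back to the
// model otherwise. `generateCompletion` and `getDiagnostics` are
// invented stand-ins for a model call and an LSP client.
type Diagnostic = { message: string };

async function completeWithLspCheck(
  prompt: string,
  generateCompletion: (p: string) => Promise<string>,
  getDiagnostics: (code: string) => Promise<Diagnostic[]>,
  maxAttempts = 3,
): Promise<string | null> {
  let currentPrompt = prompt;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const candidate = await generateCompletion(currentPrompt);
    const errors = await getDiagnostics(candidate);
    if (errors.length === 0) return candidate; // LSP accepts the code
    // Append the diagnostics so the model can repair its own output.
    currentPrompt =
      prompt + "\n\nPrevious attempt had errors:\n" +
      errors.map((e) => e.message).join("\n");
  }
  return null; // give up rather than surface invalid code
}
```
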
skp1995 · 2 years ago
That's what we are doing at Aide (shameless plug since I work on Aide)

Using the LSP well is not just the trivial task of grabbing definitions: there's context management, and the speed at which inference works. On top of that, we also grab similar snippets from the surrounding code (open files) so we generate code that belongs in your codebase.

Lots of interesting challenges, but it's a really fun space to work in.

https://aide.dev/
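The "similar snippets from open files" step above can be sketched with simple token overlap (a toy illustration only; real systems like the one described presumably use embeddings and strict token budgets):

```typescript
// Toy sketch of snippet retrieval: rank candidate snippets by token
// overlap with the code around the cursor and keep the top few for
// the prompt. All names are invented for illustration.
function tokenize(s: string): Set<string> {
  return new Set(s.toLowerCase().split(/\W+/).filter(Boolean));
}

// Jaccard similarity between two token sets.
function jaccard(a: Set<string>, b: Set<string>): number {
  const inter = [...a].filter((t) => b.has(t)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : inter / union;
}

function topSnippets(context: string, snippets: string[], k = 2): string[] {
  const ctx = tokenize(context);
  return snippets
    .map((s) => ({ s, score: jaccard(ctx, tokenize(s)) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((x) => x.s);
}
```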

thawab · 2 years ago
Continue (YC) is an open-source VSCode extension. The best thing about Cursor is its autocomplete feature, backed by their own fine-tuned model. It will be a while before others build something close to it.
ode · 2 years ago
How much better is Cursor than Continue? I've been trying Continue with Codestral and am only moderately impressed so far.
cellshade · 2 years ago
If you want to talk about in house autocomplete models, Supermaven has superior autocomplete, IMO.
thomashop · 2 years ago
I agree with your general point, but there is already a bring-your-own-key and select your own model providers option in Cursor.
anonzzzies · 2 years ago
They broke the openrouter integration though; it worked and now it does not anymore. Not sure if it was intentional or not, but it is a PITA.
Zion1745 · 2 years ago
The option you mentioned only works in the chat panel, not with Cursor's other killer features.
WhyNotHugo · 2 years ago
> Keep in mind Cursor is just a fork of VSCode, with AI features that are pretty much just embedded extensions

Sounds to me like Rabbit R1. A company picks up existing open-source tools, builds its own extension/UI on top, and ships it as something entirely new and novel. It'll grab a lot of attention short term, but others will quickly figure out how to build their own implementation that runs directly on the existing open-source tools.

iansinnott · 2 years ago
What alternative open source solutions are currently competing with it?
worldsayshi · 2 years ago
I've been using https://github.com/VictorTaelin/AI-scripts together with my own nvim plugin that I should publish soon-ish.

Also there's https://github.com/yacineMTB/dingllm.nvim which seems promising but quite wip.

d4rkp4ttern · 2 years ago
Surprised nobody has mentioned Zed, which is open source, Rust-based, and also has some compelling AI-edit features where you can use your own model. I haven't tried Cody yet, but Zed and Cursor are at the top of my list to spend more time with.

zed: https://zed.dev/

HN discussion from a few days ago (397 points): https://news.ycombinator.com/item?id=41302782

westoncb · 2 years ago
I've explored both Zed and Cursor recently and have ended up preferring Zed by a fair margin. Unfortunately their documentation is lacking, but the tool has a pretty coherent design so it's not too bad to figure out. This blog post was the most useful resource I could find to understand the tool: https://zed.dev/blog/zed-ai

For me the collab with Anthropic mentioned is significant too—auspicious.

SirLordBoss · 2 years ago
The lack of a Windows build makes it harder to justify when alacritty + nvim achieves great speeds as well, with all the customizability and whatnot.

Can anyone chime in on whether using Zed on WSL is viable, or whether it loses all the speed benefits?

vunderba · 2 years ago
Does anyone know offhand whether, if you bring your own key (Anthropic, OpenAI, etc.), it hits the AI providers directly or passes through Zed's servers first?
FridgeSeal · 2 years ago
I believe it goes straight to the provider.

It's all open source though, so you could probs verify easily enough.

divan · 2 years ago
For old-schoolers who have been living under a rock for the past few weeks :) how is this different from using Copilot/Copilot-chat?
iansinnott · 2 years ago
- Copilot would only predict after the cursor, whereas Cursor predicts nearby edits, which is quite helpful

- Copilot Chat was just a chat sidebar last time I used it; you still had to manually apply any code suggestions. Cursor will apply changes for you, and it's very helpful to see a diff of what the AI wants to change.

It's been a while since I've used Copilot though, so Copilot Chat might be more advanced than I'm remembering.

edit: formatting

divan · 2 years ago
Thanks! Does "nearby edits" mean edits in the same file, or across the whole workspace?

I test Copilot Workspace from time to time. It's still far from perfect, but it can already make large-scale changes across multiple files in a repository. Ultimately, that's what I want from an AI assistant on my machine: give it a prompt and see changes across the whole repo, not just the current file.

worldsayshi · 2 years ago
> It's been a while since i've used copilot though, so copilot chat might be more advanced then i'm remembering.

Copilot is still surprisingly basic, but I've heard rumours that they are working on a version with a lot more features?

thawab · 2 years ago
I think it's having an agile team focused on this. In the past it was because Cursor indexes your code (vector search), so any question you ask the LLM has the context of your code. Now it's the autocomplete feature (their own model). Next, I think, it will be Composer (multi-file edit, still in beta).
shombaboor · 2 years ago
keeping up with the latest code assistants is the new keeping up with the latest js frameworks.
anotherpaulg · 2 years ago
An Aider community member made a Neovim plugin. It provides the Aider-style pair-programming chat UX, not the Cursor/Copilot AI autocomplete function.

https://github.com/joshuavial/aider.nvim

armchairhacker · 2 years ago
I’ve heard great things about Cursor and Claude but haven’t tried them yet. I just feel like: how do I even get started?

To me it feels like trying to explain something (for an LLM) is harder than writing the actual code. Either I know what I want to do, and describing things like iteration in English is more verbose than just writing it; or I don’t know what I want to do, but then can’t coherently explain it. This is related to the “rubber duck method”: trying to explain an idea actually makes one either properly understand it or find out it doesn’t make sense / isn’t worthwhile.

For people who experience the same, do tools like Cursor make you code faster? And how does the LLM handle things you overlook in the explanation: both things you overlooked in general, and things you thought were obvious or simply forgot to include? (Does it typically fill in the missing information correctly, incorrectly but it gets caught early, or incorrectly but convincing-enough that it gets overlooked as well, leading to wasted time spent debugging later?)

IanCal · 2 years ago
At its core, it's just VSCode, so I'm never stuck unable to write code.

In general, it's like autocomplete that better understands what you're doing. If I've added a couple of console.logs and I start writing another after some new variable has been set, it'll quickly complete it with the obvious thing to add. It'll also guess where I want to move the cursor next as an autocomplete action, so it'll quickly move me back and forth from adding a new var in a class to where I'm using it, for example.

As a quick example, I just added something to look up a value from a map, and the autocomplete suggestion was to properly get it from the map (after 'const thing = ' it added 'const thing = this.things.get(...)'), then a check for whether there was a result, throwing an error if not.
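Roughly the shape of the completion described (the class and names are invented for illustration; the comment only gives fragments like `this.things.get(...)`):

```typescript
// Sketch of the suggested lookup-plus-guard pattern. `Registry`,
// `register`, and `lookup` are hypothetical names.
class Registry {
  private things = new Map<string, number>();

  register(key: string, value: number): void {
    this.things.set(key, value);
  }

  lookup(key: string): number {
    const thing = this.things.get(key); // the suggested lookup
    if (thing === undefined) {
      // ...and the suggested error when there is no result
      throw new Error(`No thing found for key: ${key}`);
    }
    return thing;
  }
}
```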

It's not perfect. It's much better than I expected.

For larger work, I recently tried their multi-file editing. I am writing a small app to track bouldering attempts, and I don't know React or similar tools that well. I explained the basic setup I needed and it made it. "Let's add a problem page that lists all current problems", "each attempt needs a delete button", "I need it to scan QR codes", "Here's the error message". I mostly just wrote these things and clicked apply-all. I'm not explaining exactly how or what to do; in that case I'd just do it myself.

I'm surprised at how much it gets right the first time. The only problem it got stuck on that would be non-obvious to a novice/non-developer was using "id" somewhere that clashed with an expected use and caused a weird error. That's where experience helps: having caused very similar kinds of problems before.

Sometimes I think as programmers we like to think of ourselves doing groundbreaking brand new work, but huge amounts of what we do is pretty obvious.

fred123 · 2 years ago
With an LLM integrated into your IDE like Cursor or Copilot, the LLM often autocompletes the correct code faster than I can think about what must be done next. I've been coding for 15 years.
the_duke · 2 years ago
Two answers here:

In languages I know well, I use Copilot like a smart autocomplete. I already know what I want to write and just start typing. Copilot can usually infer very well what I'm going to write for a few lines of code, and it saves time.

In languages I don't know well, where I don't fully know the various standard library and dependency APIs, I write a quick explanation to get the basic code generated and then tweak manually.

Deleted Comment

tiffanyh · 2 years ago
The fact this was created so quickly implies to me that having AI assistance embedded in your editor is not a competitive moat/differentiator.

Curious to see how all this VC money poured into editors ends up.

CuriouslyC · 2 years ago
I'm convinced the $60M Cursor round was a blunder. With tools like this and Aider being open source, and with VS Code/Vim/Emacs/IntelliJ's robust plugin support, they have basically no moat.
mhuffman · 2 years ago
Their moat will be the per-seat sales to larger companies ... at least that is all I can imagine they will be able to come up with.
tymonPartyLate · 2 years ago
The Cody plugin is a great alternative if you prefer JetBrains IDEs. I've tried Cursor several times and the AI integration is fantastic, but the plugin quality is low, navigation and refactorings are worse for me, and I'm struggling to configure it the way I like :(
d4rkp4ttern · 2 years ago
The reviews of Cody's JetBrains plugin are very critical:

https://plugins.jetbrains.com/plugin/9682-cody-ai-coding-ass...

bcjordan · 2 years ago
Btw, if anyone is trying a move from JetBrains IDEs to Cursor (or a VSCode base), I found it essential to select the JetBrains mapping in the VSCode keyboard config. Many of the refactoring/diff-jumping/commit shortcuts are supported out of the box, and it's a much smoother transition when you don't need to retrain muscle memory or look up whether a given feature is supported while learning the new editor.
0xCAP · 2 years ago
I get that it's still early stage, but the dependencies already look like a mess to me. No way I'm installing nui.nvim just to rock this plug-in.
yetone · 2 years ago
Hello, I am the author of avante.nvim. Thank you for your suggestion, it's very helpful for avante.nvim!

I plan to abandon nui.nvim for the UI (actually, we only use nui's Split now, so abandoning it is exceptionally simple). Regarding the tiktoken_core issue: everything we did was to make installation easier for users. However, the problem you mentioned is indeed an issue. I plan to revert to our previous approach: only providing installation documentation for tiktoken_core instead of installing it automatically for users.

As for why avante.nvim must depend on tiktoken_core: it's because I've used the powerful prompt-caching feature recently introduced by the Anthropic API. This feature can greatly help users save tokens and significantly improves response speed. However, it requires relatively accurate token counts, as it only takes effect for prompts longer than 1024 tokens; otherwise, adding the caching parameter results in an error.
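The count-then-gate logic described can be sketched as below. Note the thread disagrees on whether a too-short prefix errors or is silently ignored; either way, gating on a token count sidesteps it. `countTokens` here is a crude whitespace approximation standing in for a real tokenizer such as tiktoken_core:

```typescript
// Sketch: only attach Anthropic's cache_control marker once the text
// clears the 1024-token minimum mentioned above.
const CACHE_MIN_TOKENS = 1024;

function countTokens(text: string): number {
  // Placeholder: a real implementation would use a BPE tokenizer.
  return text.split(/\s+/).filter(Boolean).length;
}

function buildSystemBlock(text: string): Record<string, unknown> {
  const block: Record<string, unknown> = { type: "text", text };
  if (countTokens(text) >= CACHE_MIN_TOKENS) {
    block.cache_control = { type: "ephemeral" };
  }
  return block;
}
```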

gsuuon · 2 years ago
The docs say that cache_control is just ignored[1] below 1024 tokens; maybe it's a bug that it's erroring instead?

[1] https://docs.anthropic.com/en/docs/build-with-claude/prompt-...

acheong08 · 2 years ago
Check out that Makefile. It's scary af: it literally just downloads the latest release of a package not even controlled by the author, with zero documentation. What's stopping the owner of that repo from uploading a supply-chain attack that gets distributed to every user of Avante?

Suggestion to the author: fork the repo and pin it to a hash.

leni536 · 2 years ago
Not to dismiss your criticism, but I think supply-chain attacks are generally a weak point of the vim/neovim plugin ecosystem, especially with all the fancy auto-updating package managers.

No package signing, no audits, no curation. Just take over one popular vim package and you potentially gain access to a lot of dev departments.

yriveiro · 2 years ago
Nui is a widespread plugin in the Neovim ecosystem; it's used to build high-quality UI widgets.

It probably also uses Plenary for I/O.

Not reinventing the wheel is a good thing; I don't see the problem with the dependencies.

Deleted Comment