Hi all! Graphite cofounder Greg here - happy to help answer questions. To preempt one: I’ve been asked a few times so far why we decided to join.
Personally, I work on Graphite for two reasons. 1) I love working with kind, smart, intense teammates. I want to be surrounded by folks who I look up to and who energize me. 2) I want to build bleeding-edge dev tools that move the whole industry forward. I have so much respect for all y’all across the world, and nothing makes me happier than getting to create better tooling for y’all to engineer with. Graphite is very much the combination of these two passions: human collaboration and dev tools.
Joining Cursor accelerates both these goals. I get to work with the same team I love, a new bunch of wonderful people, and get to keep recruiting as fast as possible. I also get to keep shipping amazing code collaboration tooling to the industry - but now with more resourcing and expertise. We get to be more ambitious with our visions and timelines, and pull the future forward.
I wouldn’t do this if I didn’t think the Cursor team were standup people with high character and kindness. I wouldn’t do this if I thought it meant compromising our vision of building a better generation of code collaboration tooling. I wouldn’t do it if I thought it wouldn’t be insanely fun and exciting. But it seems to be all those things, so we’re plunging forward with excitement and open hearts!
As someone who loves all the non-AI portions of Graphite (the CLI and the reviewer UI), should I be worried about this acquisition? Or will the CLI and reviewer UI continue to be maintained and improved?
Forgive some ignorance: we use Graphite at work, and I don't dislike it or anything, but I haven't really been able to see its appeal over just doing a PR within GitHub, at least if you exclude the AI stuff.
What do you like about the non-AI parts? I mean it's a little convenient to be able to type `gt submit` in order to create the remote branch and the PR in one step, but it doesn't feel like anything that an alias couldn't do.
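For the single-branch case that's roughly true. A minimal sketch of such an alias, assuming the GitHub CLI (`gh`) is installed and authenticated (`gsubmit` is a made-up name):

```sh
# Hypothetical one-shot submit: push the current branch and open a PR.
# Assumes git >= 2.22 (for --show-current) and the GitHub CLI (`gh`).
gsubmit() {
  git push -u origin "$(git branch --show-current)" \
    && gh pr create --fill
}
```

What a one-liner like this doesn't cover is the stacked-PR workflow (resubmitting a whole chain of dependent branches after a rebase), which is the part Graphite is actually built around.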
Congrats!! I see this as two great companies joining forces in a crowded space where it is clear the whole is worth more than the sum of their parts. Best of luck on your journey
Makes sense and appreciate the transparency. Have admired what you're building at Graphite and look forward to seeing what you build as part of the Cursor team. Congrats!
Imo Cursor did have the first-mover advantage by making the first well-known AI coding agent IDE. But I can't help but think they have no realistic path forward.
As someone who is a huge IDE fan, I vastly prefer the experience from Codex CLI compared to having that built into my IDE, which I customize for my general purposes. The fact it's a fork of VSCode (or whatever) will make me never use it. I wonder if they bet wrong.
But that's just usability and preference. When the SOTA model makers give out tokens for substantially less than public API cost, how in the world is Cursor going to stay competitive? The moat just isn't there (in fact I would argue it's non-existent).
Yeah, hard disagree on that one. Based on recent surveys, 80-90% of developers globally use IDEs over CLIs for their day-to-day work.
I was pretty worried about Cursor's business until they launched their Composer 1 model, which is fine-tuned to work amazingly well in their IDE. It's significantly faster than using any other model, and it's clearly fine-tuned for the type of work people use Cursor for. They are also clearly charging a premium for it and making a healthy margin on it, but for how fast and good it is, it's totally worth it.
Composer 1, plus eventually creating an AI-native version of GitHub with Graphite: that's a serious business, and it gives me a much clearer picture of how Cursor gets to serious profitability vs the AI labs.
As the other commenter stated, I don't use CLIs for development. I use VSCode.
I'm very pro-IDE. I've built up an entire collection of VSCode extensions and workflows for programming, building, customizing builds, and debugging embedded systems within VSCode. But I still prefer CLI-based AI (comparing the terminal agent to the IDE version).
> Composer 1
My bet is their model doesn't realistically compare to any of the frontier models. And even if it did, it would become outdated very quickly.
It seems somewhat clear (at least to me) that economies of scale heavily favor AI model development. Spend billions making massive models that are unusable due to cost and speed, then distill their knowledge and fine-tune them for stuff like tool use. Generalists are better than specialists. You make one big model and produce 5 models that are SOTA in 5 different domains. Cursor can't do that realistically.
OP isn't saying to do all of your work in the terminal; they're saying they prefer CLI-based LLM interfaces. You can have your IDE running alongside it just fine, and the CLIs can often present the changes as diffs in the IDEs too.
> Yeah, hard disagree on that one. Based on recent surveys, 80-90% of developers globally use IDEs over CLIs for their day-to-day work.
I have absolutely no horse in this race, but I went from being a 100% Cursor user at the beginning of the year to one that basically uses agents for 90% of my work, and VS Code for the rest of it. The value proposition that Cursor gave me couldn't compete with what the basic Max subscription from Anthropic gave me, and VS Code is still a superior experience to Claude in the IDE space.
I do think, though, that Cursor has all the potential to beat Microsoft at the IDE game if they focus on it. But I would say it's by no means a given that this is the default outcome.
> Yeah, hard disagree on that one. Based on recent surveys, 80-90% of developers globally use IDEs over CLIs for their day-to-day work.
This is a pretty dumb statistic in a vacuum. It was clearly 100% a few years ago before CLI-based development was even possible. The trend is very significant.
It does not matter what 80-90% of developers do. Code development is heavily tail-skewed: focus on the frontier and on the people who are able to output production-level code at a much higher pace than the rest.
I use an IDE. It has a command line in it. It also has my keybinds, build flow, editor preferences, and CI integrations. Making something CLI means I can use it from my IDE, and possibly soon with my IDE.
Kilocode as an IDE plugin has completely removed Cursor from my toolkit.
Cursor has been both nice and awful. When it works, it has been good. However for a long time it would freeze on re-focus and recently an update broke my profile entirely on one machine so it wouldn't even launch anymore.
Kilocode, with its options for free models, has been very nice so far.
As someone who uses Cursor, I don't understand why anyone would use CLI AI coding tools as opposed to tools integrated in the IDE. There's so much more flexibility and integration; I feel like I would be much less productive otherwise. And I say this as someone who is fluent in vim in the shell.
Now, would I prefer to use VS Code with an extension instead? Yes, in a perfect world. But Cursor makes a better, more cohesive overall product through their vertical integration, and I just made the jump (it's easy to migrate) and can't go back.
I agree. I did most of my work in vim/CLI (still often do), but the tight agent integrations in the IDEs are hard to beat. I'm able to see more in Cursor (entire diffs), and it shows me all of the terminal output, whereas Claude Code hides things from you by default, only showing you a few pieces and summaries of what it did. I do prefer to use CC for CLI usage though (e.g. using the AWS CLI, Kubernetes, etc). The tab-autocomplete is also excellent.
I also like how Cursor is model-agnostic. I prefer Codex for first drafts (it's more precise and produces less code), Claude when less precision or planning is required, and other, faster models when possible.
Also, one of Cursor's best features is rollback. I know people have some funky ways to do it in CC with git worktrees etc, but it's built into Cursor.
Mobile developer here. I'm historically an Emacs user, so I'm used to living in a terminal shell. My current setup is a split-pane terminal with one half running Claude and the other running Emacs for light editing and magit. I run one per task, managed by git worktrees, so I have a bunch of these terminals going simultaneously at any given time, with a bunch of fish/tmuxinator automation including custom Claude commands. I pop over to Xcode if I need to dig further into something.
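For anyone curious what that looks like in practice, here's a rough bash + tmux approximation of the per-task setup (the original uses fish and tmuxinator; `newtask` and the paths are made up):

```sh
# Hypothetical per-task helper: one git worktree + one detached tmux session
# running Claude Code. Assumes git, tmux, and the `claude` CLI are installed.
newtask() {
  task="$1"
  git worktree add "../$task" -b "$task" \
    && tmux new-session -d -s "$task" -c "../$task" 'claude'
}
```

Attach with `tmux attach -t <task>` whenever you want to check in on an agent.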
I’ve tried picking up VSCode several times over the last 6-7 years but it never sticks for me, probably just preference for the tools I’m already used to.
Xcode’s AI integration has not gone well so far. I like being able to choose the best tool for that, rather than a lower common denominator IDE+LLM combination.
Now that I can do a lot with 3-6 AI agents running usefully 2-5 min at a time to crank through my plans, the IDE is mostly just taking up valuable space.
For backend/application code, I find it's instead about focusing on the planning experience, managing multiple agents, and reviewing generated artifacts+PRs. File browsers, source viewers, REPLs, etc. don't matter here (verbose, too zoomed-in, not reflecting agent activity, etc.), or at best I'll look at them occasionally while the agents do their thing.
It is very easy to open multiple terminals, have them side by side, and do different things. It is more natural to invoke agents and let them do their thing.
The Claude Code integration with IntelliJ (or any JetBrains IDE, for that matter) is the perfect combination. That is the perfect world to me. An entire company maintaining a fork of VS Code just doesn't compute to me, but it's how you sell it to shareholders.
I think beginner programmers like the fact that they can just open one app and the AI chat box is right next to their editor window. Other than that, I agree that it's pretty silly to maintain a whole IDE just to attach an AI chat box to it.
Now that there's MCP, the community will out-innovate anything a single company can do in terms of bolting on features. It's easy enough to get all the LSP integration and such into Claude Code.
So it all comes down to model differentiation. Can Cursor compete as a foundation model creator? Maybe, but even so, that's going to be a very tough market. Margins will be razor thin at best. It's a commodity.
Anyway, the last thing I would want if I were them is to keep worrying about maintaining this IDE themselves.
One of the biggest values for Cursor is getting all these different models under a single contract. A contract that very importantly covers the necessary data privacy we want as a business. We can be sure that no matter which model a developer chooses to use, we are covered under the clauses that disallow them from retaining and training on our conversations.
I struggle to understand why engineers enjoy using these CLI coding tools so much. I have tried a few times and I simply cannot get into a good workflow. Cursor, Cline and others feel like the sweet spot for me.
It's really nice that the integrated nature means that, with no extra work on my part, the agent can see exactly what I'm seeing including the active file and linter errors. And all the model interaction is unified. I point them to specific files in the same way, they all have access to the same global rules (including team-global rules), documentation is supplied consistently, and I can seamlessly switch between models in the same conversation.
As an older engineer, I prefer CLI experiences to avoid mouse usage. The more I use the mouse, the more I notice repetitive stress injury symptoms
But also, 90% of the time if I'm using an IDE like VSCode, I spend most of my time trying to configure it to behave as much like vim as possible, and so a successful IDE needn't be anything other than vim to me, which already exists on the terminal
What I don't understand is why people go all in on one IDE/editor and refuse to make plugins for others. Whether you prefer the CLI or the integrated experience, only offering it on VSCode (and a shitty version of it, at that) is just stupid.
Codeium (now Windsurf) did this, and the plugins all still work with normal Windsurf login. The JetBrains plugin and maybe a few others are even still maintained! They get new models and bugfixes.
(I work at Windsurf, but this isn't really intended to be an ad; I'm just yapping)
Cursor, if I recall, actually started life as a VSCode plugin. But the plugin API didn't allow for the type of integration and experiences they wanted; they hit limits quickly and then decided to make a fork.
> If they had a $200 subscription with proper unlimited usage (within some limits obviously)
I don't understand the "within some limits" people ask for.
If we use a service to provide value, and it is worth the value it provides, why would we ever accept a limit or cap? We want to stop adding value until next calendar month?
Or if the idea is $200 plus overages, might as well just be usage based.
Imagine a rental car that shut off after 100 km instead of just billing 20 km overage to go 120 km. Would you be thrilled for a day of errands knowing the hard cut off? Or would you want flex? You go 60 km out, 40 km back; now it's not worth paying to drive the last 20? If that's the case, probably should have walked the whole way?
Perhaps not a terrible analogy if some devs think of using these models like hitchhiking. Mostly out for the hike but if I can get an Uber now and then for $200/month, then I can do some errands faster, but still hike most places…
OR, hitchhikers don't think they need that much, they only run an errand a week, in which case, back to usage pricing, don't pay for what you don't use.
- - -
As an example: the primary limiter for our firm's wholesale adoption of Anthropic is their monthly caps. The business accounts have a cap! WTH, Anthropic, firms shouldn't LLM-review code for the last week or two of the month? It can't be relied on.
To be clear, there's no cap on the usage per se; the cap is at the billing. Even if you have it on a corp card that recharges fully constantly, it can tick over at $1500/day for 3 days, then halfway through day 4 it won't recharge again, because you hit the $5k/month limit.
If you write to them and ask (like the error message tells you to), they say: Move to Enterprise, it's X users. Well, no, we don't have X people? Sure, but Enterprise is X users. What if we buy empty seats? Um...
(The simplest explanation is that $5k/month really burns more than $5k/month of costs so every API call loses them money, and they'd rather shepherd people to occasional subscription usage where they train them to leave it idle most of the time. Fine, offer usage at cost instead of loss, see who bites.)
Meanwhile, we use unlimited plans from their competitors, and have added several other ways to buy Anthropic indirectly, which seems weird, since it means they earn less per API call, but someone somewhere is meeting their incentives I guess.
Tab complete is still useful and code review/suggesting changes can be better in a GUI than in a terminal. I think there is still a radically better code review experience that is yet to be found, and it's more likely to come from a new player like Cursor/Graphite than one of the giants.
Also Cursor's dataset of actual user actions in coding and review is pure gold.
God, Cursor's tab complete is woeful in basically all of my usage at work. It's so actively wrong that I turned it off. Its agent flows are far, far more useful to me.
> As someone who is a huge IDE fan, I vastly prefer the experience from Codex CLI compared to having that built into my IDE, which I customize for my general purposes
Fascinating.
As a person who *loathes VS Code* and prefers terminal text editors, I find Cursor great!
Maybe because I have zero desire to customize/leverage Cursor/VS Code.
Neat. Cursor can do what it wants with it, and I can just lean into that...
> The fact it's a fork of VSCode (or whatever) will make me never use it
Are you sure your entire opinion isn't just centred around that fact? Sounds like it.
The UX of IDE integration with the existing VSCode plugins and file manager… it’s not even close to the same. Some people just get comfortable with what they are comfortable with
I personally use CLI coding agents as well, but many people do prefer tight IDE integration.
I’ve tried every popular agent IDE, but none of them beat Cursor’s UX. Their team thought through many tiny UX details, making the whole experience smooth as butter. I think it’s a huge market differentiator.
I also would think Cursor would be screwed, but I tried out the Codex VS Code extension and it's still very barebones, and Cursor seems to update like 5 times a day and is constantly coming out with mostly great new features. Plus it is nice to be able to use any model provider.
I think calling OpenAI Codex or Claude Code a "CLI" is a bit of a misnomer. It's more of a GUI, just rendered in a terminal. I honestly think a "regular" GUI for OpenAI Codex / Claude Code could be much better.
Cursor is better suited for enterprise. You get centralized stats and configuration management. Managers are pushed for AI uptake, productivity and quality metrics. Cursor provides them.
Virtually anybody going all in on AI is exposing themselves to being made redundant.
I don't envy startups in the space; there's no moat, be it Cursor or Lovable, or even larger corps adopting AI. What's the point of Adobe when creating illustrations or editing pics will be embedded (kinda is already) in the behemoths' chat UIs?
And please don't tell me that hundreds of founders became millionaires or have great exits or acquihires expecting them. I'm talking about "build something cool that will last".
I agree. The reason Cursor’s “first mover” advantage doesn’t matter is because there’s fundamentally no business there. I’ve used 3 IDEs or text editors my whole life, and I’ve never paid for one. If I wanted, I could use AI to write myself a new text editor. Like you said, there’s no moat for any of this shit, and I’m guessing that by 2027 the music will stop.
If these AI companies had 100x dev output, why would you acquire a company? Why not just show screenshots to your agent and get it to implement everything?
Is it market share? Because I don't know who has a bigger user base than Cursor.
The claims are clearly exaggerated, or, as you say, we'd have AI companies pumping out new AI-focused IDEs left and right with crazy features; yet they're all VS Code forks that roughly do the same shit.
A VSCode fork with AI, like 10 other competitors doing the same, including Microsoft and Copilot; MCPs, VSCode limitations, IDEs catching up. What do these AI VSCode forks have going for them? Why would I use one?
I am validating and testing these for the company and myself. Each has a personality with quirks and deficiencies. Sometimes the magic sauce is the prompting or at times it is the agentic undercurrent that changes the wave of code.
More specific models with faster tools is the better shovel. We are not there yet.
Heyo, disclosure that I work for Graphite, and opinions expressed are my own, etc.
Graphite is a really complicated suite of software with many moving pieces and a couple more levels of abstraction than your typical B2B SaaS.
It would be incredibly challenging for any group of people to build a peer-level Graphite replacement any faster than it took Graphite to build Graphite, no matter what AI assistance you have.
It’s always faster and easier to copy than create (AI or not). There is a lot of thought and effort in doing it first, which the second team (to an extent) can skip.
Much respect for what you have achieved in a short time with Graphite.
A lot of B2B SaaS is about tons of integrations with poorly designed and documented enterprise apps, or security theatre, compliance, fine-grained permissions, a11y, i18n, air-gapped deployments, or useless features to keep the largest customers happy, and so on and on.
Graphite (as yet) does not have any of these problems - GitHub, Slack and Linear are easy as integrations go, and there are limited features for enterprises in Graphite.
Enterprise SaaS is hard to do, just for a different type of complexity.
My guess is the purchase captures the 'lessons learned' based upon production use and user feedback.
What I do not understand is this: if high-level staff with capacity can produce an 80% replacement, why not assign the required staff to complete the next 10% to bring it to production readiness? The last 10% is unnecessary features and excess outside of the requirements.
I hate the unrealistic AI claims about 100X output as much as anyone, but to be fair Cursor hasn't been pushing these claims. It's mostly me-too players and LinkedIn superstars pushing the crazy claims because they know triggering people is an easy ticket to more engagement.
The claims I've seen out of the Cursor team have been more subtle and backed by actual research, like their analysis of PR count and acceptance rate: https://cursor.com/blog/productivity
So I don't think Cursor would have ever claimed they could duplicate a SaaS company like Graphite with their tools. I can think of a few other companies who would make that claim while their CEO was on their latest podcast tour, though.
I'm really used to my Graphite workflow and I can't imagine going without it anymore. An acquisition like this is normally not good news for the product.
Graphite isn’t really about code review IMO, it’s actually incredibly useful even if you just use the GitHub PR UI for the actual review. Graphite, its original product anyway, is about managing stacks of dependent pull requests in a sane way.
Heard on the worry, but I can confirm Graphite isn’t going anywhere. We're doubling down on building the best workflow, now with more resourcing than ever before!
Supermaven said the same thing when they were acquired by Cursor and then EOLed a year later. Honestly, it makes sense to me that Cursor would shut down products it acquires - I just dislike pretending that something else is happening.
There is literally nothing anyone can say to convince me any product or person is safe during an acquisition. Time and time again it's proven to just not be true. Some manager/product owner/VP/c-suite will eventually have the final say, and I trust none of them to actually care about the product they're building or the community that uses it.
I’m working on something in a similar direction and would appreciate feedback from people who’ve built or operated this kind of thing at scale.
The idea is to hook into Bitbucket PR webhooks so that whenever a PR is raised on any repo, Jenkins spins up an isolated job that acts as an automated code reviewer. That job would pull the base branch and the feature branch, compute the diff, and use that as input for an AI-based review step. The prompt would ask the reviewer to behave like a senior engineer or architect, follow common industry review standards, and return structured feedback - explicitly separating must-have issues from nice-to-have improvements.
The output would be generated as markdown and posted back to the PR, either as a comment or some attached artifact, so it’s visible alongside human review. The intent isn’t to replace human reviewers, but to catch obvious issues early and reduce review load.
What I’m unsure about is whether diff-only context is actually sufficient for meaningful reviews, or if this becomes misleading without deeper repo and architectural awareness. I’m also concerned about failure modes - for example, noisy or overconfident comments, review fatigue, or teams starting to trust automated feedback more than they should.
If you’ve tried something like this with Bitbucket/Jenkins, or think this is fundamentally a bad idea, I’d really like to hear why. I’m especially interested in practical lessons.
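For concreteness, here's a minimal sketch of the Jenkins job body described above, assuming Bitbucket Cloud (Server/Data Center has a different API path). Everything in `$CAPS` would come from the webhook payload or Jenkins credentials, and `$REVIEW_CMD` is a placeholder for whatever model CLI or API wrapper turns a diff into markdown feedback:

```sh
#!/usr/bin/env bash
# Sketch only: diff the PR, run an AI review step, post the result back.
set -euo pipefail

git fetch origin "$BASE_BRANCH" "$FEATURE_BRANCH"
# Three-dot diff = changes since the merge base, i.e. what the PR page shows.
git diff "origin/$BASE_BRANCH...origin/$FEATURE_BRANCH" > pr.diff

# Placeholder: any CLI/API wrapper that reads a diff and emits markdown.
review_md="$($REVIEW_CMD < pr.diff)"

# Post the markdown back as a PR comment via the Bitbucket Cloud REST API.
jq -n --arg raw "$review_md" '{content: {raw: $raw}}' |
  curl -sf -u "$BB_USER:$BB_APP_PASSWORD" \
    -H "Content-Type: application/json" \
    -d @- \
    "https://api.bitbucket.org/2.0/repositories/$WORKSPACE/$REPO_SLUG/pullrequests/$PR_ID/comments"
```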
> What I’m unsure about is whether diff-only context is actually sufficient for meaningful reviews, or if this becomes misleading without deeper repo and architectural awareness.
The results of a diff-only review won't be very good. The good AI reviewers have ways to index your codebase and use tool searches to add more relevant context to the review prompt. Like some of them have definitely flagged legit bugs in review that were not apparent from the diff alone. And that makes a lot of sense because the best human reviewers tend to have a lot of knowledge about the codebase, like "you should use X helper function in Y file that already solves this".
At $DAYJOB, there's an internal version of this, which I think just uses Claude Code (or similar) under the hood on a checked out copy of the PR.
Then it can run `git diff` to get the diff, like you mentioned, but also query surrounding context, build stuff, run random stuff like `bazel query` to identify dependency chains, etc.
They've put a ton of work into tuning it and it shows, the signal-to-noise ratio is excellent. I can't think of a single time it's left a comment on a PR that wasn't a legitimate issue.
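A minimal way to try the checked-out-copy approach yourself, assuming Claude Code's non-interactive print mode (`claude -p`); the prompt wording is illustrative, not what any particular company uses:

```sh
# Run from a full checkout of the PR branch, not just a diff.
# In print mode the agent can itself run git diff, grep the codebase,
# and inspect build files to pull in surrounding context.
claude -p "Review the changes on this branch relative to origin/main.
Gather context with git, code search, and the build files as needed.
Separate must-fix issues from nice-to-haves; output markdown." > review.md
```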
I work at Graphite; our reviewer is embedded into a bigger-scope code review workflow that substitutes for the GitHub PR page.
You might want to look at existing products in this space (Cursor's Bugbot, Graphite's Reviewer FKA Diamond, Greptile, CodeRabbit, etc.). If you sign up for Graphite and link a test GitHub repo, you can see what the flow feels like for yourself.
There are many thousands of engineers who already have an AI reviewer in their workflow. It comments as a bot in the same way dependabot would. I can't share practical lessons, but I can share that I find it to be practically pretty useful in my day-to-day experience.
Cursor has a reviewer product which works quite well indeed, though I've only used it with GitHub. I'm not sure how they manage context, but it finds issues that the diff causes well outside the diff.
We have coding agents heavily coupled with many aspects of the company's R&D cycle. About 1k devs.
Yes, you definitely need the project's context to have valuable generations. Different teams here have different context and model steering, according to their needs. For example, specific aspects of the company's architecture are supplied in the context, while much of the rest (architecture, codebases, internal docs, quarterly goals) is available via RAG.
It can become noisy and create more needless review work. Also, only experts in their field find value in the generations. If a junior relies on it blindly, the result is subpar and doesn't work.
I wonder about this. Graphite is a fantastic tool that I use every day. Cursor was an interesting IDE a year ago that I don't really see much of a use case for anymore. I know they've tried to add other features to diversify their business, and that's where Graphite fits in for them, but is this the best exit for Graphite? It seems like they could have gotten further on their own, instead of becoming a feature that Cursor bought to try to stay in the game.
How does Graphite compare with other AI code review tools like Qodo?
My team has been using Qodo for a while now and I've found it to be pretty helpful. Every once in a while it finds a serious issue, but the most useful part in my experience are the features geared towards speeding up my review rather than replacing it - things like effort labels that are automatically added to the PR and a generated walkthrough that takes you through all of the changed files.
Would love to see a detailed comparison of the different options. Is there some kind of benchmark for AI code review that compares tools?
With more resources than ever. We're building a whole platform; that's a lot more than just AI.
Somebody screenshot this please. We are looking at comedy gold in the next 3 years and there’s no shortage of material.
> Composer 1
Composer is extremely dumb compared to Sonnet, let alone Opus. I see no reason to use it. Yes, it's cheaper, but your time is not free.
What are we talking about? Autocomplete or GPT/Claude contender or...? What makes it so great?
I use VS Code, open a terminal within VS Code, run `claude`, and keep the git diff UI open in the left sidebar, terminal at the bottom.
A simple text interface, access to endless tools readily available with (usually) intuitive syntax, man pages, ...
As a dev in front of it, it's super easy to understand what it's trying to do, and as simple as it gets.
I never felt the same in Cursor; it's a lot of new abstractions that don't feel remotely as compounding.
Even Emacs nuts like me can use agents natively from our beloved editor ;) https://xenodium.com/agent-shell-0-25-updates
I can't randomly throw credits into a pit and say "oh, $2000 spent this month, whatever". For larger businesses I suspect it is even worse.
If they had a $200 subscription with proper unlimited usage (within some limits obviously) I would have jumped up and down though.
Relatively heavy Cursor usage in my experience is around $100/month. You can set a limit on on-demand billing.
Also, their own Composer model is not bad at all.
Also, Graphite isn't just "screenshots"; it's a pretty complicated product.
I usually prefer Gemini, but sometimes other tools catch bugs Gemini doesn't.
As someone who has never heard of Graphite, can anyone share their experience comparing it to any of the tools above?
> "Will the plugin remain up? Yes!"
> https://supermaven.com/blog/sunsetting-supermaven
sweet summer child.
> After bringing features of Supermaven to Cursor Tab, we now recommend any existing VS Code users to migrate to Cursor.
Supermaven was acquired by Cursor and sunset after 1 year.