Cursor raised $900M, is losing market share to Claude Code (resorting to poaching 2 leads from there [1]), AND they're decreasing the value of their product? Huge red flag. They should be able to burn cash like there's no tomorrow. Also, the PR language of this post, and the timing (midnight on a US holiday), is not ideal.
This news, coupled with Google raising the new Gemini Flash cost by 5x, Azure dropping their startup credits, and 2-3 other signals (papers showing RL has also hit a wall for distilling or improving models), is a solid sign that despite what Sam Altman says, intelligence will NOT soon be too cheap to meter. I think we are starting to see the squeeze from the big players. Interesting. I wonder how many startups are betting on models becoming 5-10x cheaper for their business models. If on-device models don't get good, I bet a lot of them are in big trouble.
[1] https://www.investing.com/news/economy-news/anysphere-hires-...
> I think we are starting to see the squeeze from the big players.
I’m not convinced that these price increases represent an attempt to squeeze more profit out of a saturated market.
To me they look an awful lot like people realising that the sheer compute cost associated with modern models makes the historical zero-marginal-cost model of software impossible. API calls to LLM models have far more in common with calls to EC2 or Lambda for compute than they do with standard API calls for SaaS.
A lot of early LLM-based business models seemed to assume that the historical near-zero marginal cost of delivery for software would somehow apply to hosted LLM models, which clearly isn't the case.
Mix that in with rising datacenter costs, driven by a lack of available electricity infrastructure to meet demand, plus everyone trying to grab as much LLM land as possible, which requires more datacenters, faster. The result is rapidly increasing base costs for compute, which we're now seeing reflected in LLM pricing.
For me the thing that stands out about LLMs is that their compute costs are easily 100-10,000x greater per API call than a traditional SaaS API call. That fact alone should be enough for people to realise that the historically bottomless VC money that normally funds this stuff isn't quite as bottomless as it needs to be to meaningfully subsidise consumer pricing.
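To put rough numbers on that 100-10,000x claim, here's a back-of-envelope sketch; every figure below (token prices, token counts, SaaS compute cost) is an illustrative assumption, not any provider's actual pricing:

    # Back-of-envelope: compute cost of one LLM API call vs one typical SaaS API call.
    # All numbers are rough assumptions for illustration only.
    SAAS_CALL_COST = 0.00001           # assume ~$0.00001 of compute per ordinary CRUD request
    PRICE_PER_MTOK_IN = 3.00           # assumed $ per 1M input tokens
    PRICE_PER_MTOK_OUT = 15.00         # assumed $ per 1M output tokens

    def llm_call_cost(input_tokens: int, output_tokens: int) -> float:
        return (input_tokens * PRICE_PER_MTOK_IN
                + output_tokens * PRICE_PER_MTOK_OUT) / 1_000_000

    cost = llm_call_cost(10_000, 1_000)  # a modest chat-with-context request
    print(f"${cost:.3f} per call, {cost / SAAS_CALL_COST:,.0f}x a cheap SaaS call")
    # -> $0.045 per call, 4,500x under these assumptions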
Very insightful. I think the payment model would have worked out just fine if the state of the art had been optimizing GPT-4-class models to bring the cost down over time, which would eventually have made the services profitable. Instead, newer models are getting larger and more resource-heavy through reasoning, meaning costs per request are going up instead of down.
It was obvious that they over-raised. It's insane to raise that kinda money to go sell an open-source and otherwise free code editor that wraps an LLM you don't own or host. So you're not providing the service, and you don't have to code much of a product because you've front-loaded 95% of the product with open source...you have no secret sauce... And you're going to raise that kinda money for what exactly? In hopes you can fool a bunch of people?
Do they think their secret sauce is UX? There are better editors out there now too.
You want to know what the hype train of Cursor was for? It was marketing for LLMs.
I doubt Cerebras has even close to the scale to be a major player in this area.
Nvidia sold $35B of datacenter GPUs alone last year, the vast majority of which will be used for AI.
Cerebras' entire revenue last year was only $78M. That's nearly three orders of magnitude smaller than Nvidia's datacenter GPU business. Scaling a company 10x in a year is a pretty hard thing to do, and it's not a question of money, it's a question of people and organisation. So much stuff in a business breaks when it scales 10x that it takes months to years to fix enough of it to support another 10x growth spurt without everything just imploding.
Currently Cerebras, although faster, is more expensive than the traditional alternatives. Cursor's use case doesn't benefit from instant responses; users are happy to wait the few seconds (and watching the magic may even be beneficial).
Cursor is an interesting case study on the wrapper vs. core debate. They were the first big success story in the coding space, enjoyed first mover advantages, and sweetheart deals with volume tokens from providers.
Now that all the providers have moved towards in-housing their coding solutions, the sweetheart deals are gone. And the wrapper goes back to "at cost" usage. Which, on paper should be less value / $ than any of the tiers offered by the providers themselves.
Whatever data they collected, and whatever investments they made in core tech remains to be seen. And it's a question of making use of that data. We can see that it is highly valuable for providers (as they all subsidise usage to an extent). Goog is offering lots of stuff for free, presumably to collect data.
One interesting distinction is Cursor vs. Windsurf. Windsurf actually developed some internal models (whether pre-trained or post-trained I don't know, but it probably doesn't matter much), swe1 and swe1-lite, that are actually pretty good on their own. I don't think Cursor has done that, beyond their tab-next-prediction model. A clue, perhaps.
Anyway, it will be interesting to see how this all unfolds 2-5 years from now.
I think LLM agents have completely broken the business model that companies like Cursor were founded on.
Early on, Cursor added value by finding clever ways to integrate LLMs into an IDE, which would allow the single-shot output of an LLM to produce something useful, and do so quickly. That required a fair bit of engineering to make happen reliably.
But LLM agents completely break that. The moment people realised that rather than trying to bend our tools to work within the limits of an LLM, we could instead just let LLMs "self-prompt" their way to better outputs, Cursor's model stopped being viable.
It's another classic case of the AI "Bitter Lesson" [0] being learned: throwing more data and more compute at AI produces faster, better progress than careful, methodical engineering.
[0] http://www.incompleteideas.net/IncIdeas/BitterLesson.html
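To make the "self-prompt" point concrete, here's a minimal sketch of the loop an agent runs: propose a change, run the tests, feed the failure back, repeat. It assumes the OpenAI Python client and pytest purely for illustration; the model name, prompts, and the deliberately naive apply_patch helper are placeholders, not how Cursor or Claude Code actually work.

    # Minimal "self-prompting" agent loop (illustrative sketch only).
    import pathlib
    import subprocess
    from openai import OpenAI  # assumes the openai>=1.0 client

    client = OpenAI()

    def apply_patch(reply: str) -> None:
        # Naive placeholder: assume the model replies with the full contents of one file.
        # Real agents parse structured edits/diffs instead.
        pathlib.Path("generated.py").write_text(reply)

    def run_tests() -> tuple[bool, str]:
        proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return proc.returncode == 0, proc.stdout + proc.stderr

    def agent_loop(task: str, max_iters: int = 5) -> bool:
        messages = [{"role": "user", "content": f"Write generated.py so that this passes:\n{task}"}]
        for _ in range(max_iters):
            reply = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=messages,
            ).choices[0].message.content
            apply_patch(reply)
            ok, output = run_tests()
            if ok:
                return True
            # The failure itself becomes the next prompt -- the model "self-prompts" toward a fix.
            messages += [
                {"role": "assistant", "content": reply},
                {"role": "user", "content": f"Tests failed:\n{output}\nFix it."},
            ]
        return False

The Bitter Lesson angle: almost none of the engineering in that loop is clever; the quality comes from spending more tokens per task.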
I think I might be missing something. I use Cursor daily as part of my development process and it feels like magic.
I've tried Aider and other agentic options and they're amazingly clunky. Maybe I'm looking at the "apple vs linux" effect: I'm the Apple user that just expects things to work out of the box, and although there are way better alternatives, the integration is worse.
I’m interested if anyone here knows what exactly Cursor has built? My limited understanding is that they’ve done nothing but fork VS code and add a chat window and AI-integrated editing tools.
Basically...but, no, I mean they have some features that provide UX around prompting an LLM and then taking the output and using it to edit files for you. It's a "quality of life" or productivity tool.
It's useful and has SOME value. Just not enough unfortunately.
This is history repeating itself. How could a company that raised so much money possibly compete with another that has the same ... arguably better ... product for less? Marketing budget? Hype? First to market? Sure. Absolutely. All those things do fade though.
We've seen this movie before. Remember Sublime Text? Remember what happened when Atom and VS Code came along? Fortunately Sublime Text didn't over-raise (if they raised anything at all, I can't remember). Point is, people catch on and save money if they can. So they will do the same with Cursor.
Use open source editors that have the same features...better ones even. I'll argue that Roo Code is much much better than Cursor. I even like Windsurf better to be honest, but I wouldn't pay for either. I'll support open-source and save my money to pay the LLM.
Cursor is about to go the way of Sublime Text or Notepad++. It might keep some cult following, but its market share will drop off a cliff.
It's ok. Their investors don't care! This was all to get people to use LLMs more. Their investors are fine with the sacrifice. That's all it ever was.
Google has done nothing but make another search engine, Apple has done nothing but make another phone, Ferrari has done nothing but make another car, etc.
I saw an ad today for OpenAI offering free AI interior design mock-ups. Not sure if it's a specific feature or just a way to use image generation, but either way they are commoditizing the thin wrappers.
They built nothing that someone else couldn't...and didn't. They got nothing.
This is also a bit of foreshadowing for all SaaS by the way.
Remember, nothing prevents anyone from simply vibe coding anything they see out there in the future. It turns the value of all software to near $0.
Cursor will go under. They'll be acquired for cheap (making more headlines, good press!), go bankrupt, or completely change their business into something else and undergo major restructuring. The point of Cursor is NOT to be a profitable AI code editor business. Look at the investors. Thrive dumped over $9B, using that money to advertise, market, and sell the products and services offered by the other companies they invest in -- the LLMs. At the end of the day, that's where the money flows.
In fact, none of the investors care one bit whether Cursor survives. It's served its purpose. It was the "freebie" that people give out, the appetizer, for something bigger.
There is absolutely nothing that Cursor has that anyone else can't easily build, and hasn't already. In fact, better. Open-source solutions already outpace Cursor.
The difference is open-source doesn't exactly have a giant marketing budget. So most people haven't caught on yet, but they will.
'We previously described ... as "rate limits", which wasn't an intuitive way to describe the limit. It is a usage credit pool'
Very strange that they decided to describe monthly credits as rate limits, and then spin it as 'unintuitive'. Feels like someone is trying to pull a fast one.
Well, rate limits take a moving window of time (say, one second) and check how many requests you make during that time, and throttle you, if necessary.
Cursor just makes that window one month long.
Technically, that's a rate limit.
But yeah, only technically.
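A toy sketch of that distinction (not Cursor's actual implementation; the limits and window sizes below are made up): the only thing that changes between a classic per-second rate limit and a monthly pool is the size of the window and how many requests it allows.

    # Toy sliding-window rate limiter. Window size is the whole difference between
    # "10 requests per second" and "500 requests per month".
    import time
    from collections import deque

    class SlidingWindowLimiter:
        def __init__(self, max_requests: int, window_seconds: float):
            self.max_requests = max_requests
            self.window = window_seconds
            self.timestamps: deque[float] = deque()

        def allow(self) -> bool:
            now = time.monotonic()
            # Drop requests that have fallen out of the window.
            while self.timestamps and now - self.timestamps[0] > self.window:
                self.timestamps.popleft()
            if len(self.timestamps) >= self.max_requests:
                return False  # throttle
            self.timestamps.append(now)
            return True

    per_second = SlidingWindowLimiter(max_requests=10, window_seconds=1)
    per_month = SlidingWindowLimiter(max_requests=500, window_seconds=30 * 24 * 3600)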
I’ve deleted all of these wrappers (cursor, windsurf etc) after discovering Claude Code on pro. I’m not sure how it does it, but it’s just better. And ultimately, more cost effective.
I think it’s fundamentally about context management and business model. Claude Code is expensive because it will happily put very large volumes into context because Anthropic are paid by the token. Cursor makes the bet that it can pay less per token whilst giving you enough value to still make margins on your $20 per month (assuming you’re using their default models).
This all becomes very clear when you do something that feels like magic in Claude Code and then run /cost and see you've blown through $10 in a single hour-long session. Which is honestly worth it for me.
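For intuition on how a single session gets there, a rough sketch; the per-token prices and token counts below are assumptions, not Anthropic's actual rates, and prompt caching would change the numbers:

    # Rough per-session cost estimate; every number here is an assumption.
    PRICE_IN = 3.00 / 1_000_000    # assumed $ per input token
    PRICE_OUT = 15.00 / 1_000_000  # assumed $ per output token

    # An agent that re-reads big files each turn resends most of the context
    # every time, so input tokens dominate the bill.
    turns = 40
    input_tokens_per_turn = 70_000
    output_tokens_per_turn = 1_500

    session_cost = turns * (input_tokens_per_turn * PRICE_IN
                            + output_tokens_per_turn * PRICE_OUT)
    print(f"~${session_cost:.2f} for the session")  # ~$9.30 under these assumptions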
Roo Code manages context a LOT better than Cursor or Windsurf. Cursor and Windsurf don't care about this because they want people to use more tokens. Their investors want more people to use tokens.
Think about this one. They don't even tell you what your usage is! Look at Roo Code showing you the context usage and cost of each conversation. Features to compact the context. It's built around bringing awareness to the unit economics of AI and built to give users choice. The tools that work to keep users in the dark are serving someone else's interests.
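A sketch of the kind of bookkeeping being described: estimate tokens per message, surface the running context size and cost, and compact when you blow past a budget. This is illustrative only; Roo Code's actual logic differs, and the 4-characters-per-token estimate is a crude assumption.

    # Toy context accounting and compaction; budget, price, and token estimate are assumptions.
    BUDGET_TOKENS = 100_000
    PRICE_PER_TOKEN = 3.00 / 1_000_000  # assumed input-token price

    def est_tokens(text: str) -> int:
        return max(1, len(text) // 4)   # crude ~4 chars/token heuristic

    def context_stats(messages: list[dict]) -> tuple[int, float]:
        tokens = sum(est_tokens(m["content"]) for m in messages)
        return tokens, tokens * PRICE_PER_TOKEN  # (tokens in context, $ to resend them once)

    def compact(messages: list[dict]) -> list[dict]:
        # Keep the first (task) message and the most recent ones; drop the middle.
        # A real tool would summarise the dropped turns rather than discard them.
        tokens, _ = context_stats(messages)
        while tokens > BUDGET_TOKENS and len(messages) > 3:
            tokens -= est_tokens(messages.pop(1)["content"])
        return messages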
I’ve found cursor to be meaningfully faster, and tab autocomplete is really nice. It’s not like I can avoid touching code anyways, and when I do, cursor tab is near perfect at being a very smart auto complete. Claude is running through AWS bedrock though, so that could be the performance issue. But I do much prefer the terminal app for prompting
That adage that "this is the worst it'll ever be" when people espouse AI coding agents is looking a bit shoddy. No, costs don't inevitably go down when you're on a sweetheart deal.
"We’re improving how we communicate future pricing changes" like clearly and explicitly stating what your customers are paying for ? What kind of BS is this ?
Have people been dropping Cursor usage for Claude Code? I have dropped to using Cursor as just an IDE with autocomplete. Curious if others are doing this too.
Cursor’s autocomplete is SuperMaven (which they acquired).
From the site :
“Supermaven uses Babble, a proprietary language model specifically optimized for inline code completion. Our in-house models and serving infrastructure allow us to provide the fastest completions and the longest context window of any copilot.”
LLMs are literally auto-complete models. It just so happens that when your auto-complete model gets big enough, and you poke it in the right way, it accidentally pretends to be intelligent. And it turns out that pretending to be intelligent is almost as useful as actually being intelligent.
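In the most literal sense, here's a toy "auto-complete" in a dozen lines: count which word follows which, then greedily predict the next one. It's a caricature of course, but scaling this idea up (subword tokens, a transformer instead of a count table, trillions of training tokens) is exactly the next-token-prediction objective LLMs are trained on.

    # Toy next-token "auto-complete": count word bigrams, then greedily predict what comes next.
    from collections import Counter, defaultdict

    corpus = "the model predicts the next token and the next token after that".split()

    following: dict[str, Counter] = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def autocomplete(word: str, steps: int = 4) -> list[str]:
        out = []
        for _ in range(steps):
            if word not in following:
                break
            word = following[word].most_common(1)[0][0]  # greedy next-token choice
            out.append(word)
        return out

    print(autocomplete("the"))  # -> ['next', 'token', 'and', 'the'] on this tiny corpus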
Claude Code makes me feel like I'm dispatching a legit engineer to go get something done. But they come back in a minute instead of a week. Most of the time the solution gets the job done. Sometimes it introduces too much complexity, sometimes it's totally wrong, but it gets the job done. Cursor meanwhile just feels like shortening the (copy editor/paste chat/copy chat/paste editor) loop.
For $200/month you can get equivalent value to a team of engineers. Plan accordingly! The stack is no longer safe for employment. You need to move up to manager or move down to metal.
> Nvidia sold $35B of datacenter GPUs alone last year ... Cerebras' entire revenue last year was only $78M.
Any reference for this?
> Have people been dropping Cursor usage for Claude Code?
Not because of Cursor's pricing, but because in the end Claude Code is unmatched.
For example, they can react to in-editor linter errors without running a lint command, etc.
> For $200/month you can get equivalent value to a team of engineers.
Why couldn't Claude do a manager's job?
I see people mention converting old legacy code from an old language to something more modern. I've also seen people mention greenfield projects.
Anything other than this? I'm trying to bring this productivity to my work, but so far I haven't been able to replace a week of work in a few minutes.