For users who've already switched to the forks, the cost of switching back is essentially zero, especially if Microsoft begins introducing changes that break fork compatibility. In that case, the migration direction would reverse almost overnight.
Microsoft has clearly taken notice. They're already starting to lock down the upstream VSCode codebase, as seen with recent changes to the C/C++ extension [0]. It's not hard to imagine that future features like TypeScript 7.0 might be limited or even withheld from forks entirely. At the same time, Microsoft will likely replicate Windsurf and Cursor's features within a year, and deliver them with far greater stability and polish.
Both Windsurf and Cursor are riddled with bugs that don't exist upstream, _especially_ in their AI assistant features beyond the VSCode core. Context management, which is supposed to be the core feature they add, is itself incredibly poorly implemented [1].
Ultimately, the future isn't about a smarter editor, it's about a smarter teammate. Tools like GitHub Copilot or future agents will handle entire engineering tickets: generating PRs with tests, taking feedback, and iterating like a real collaborator.
[0] https://www.theregister.com/2025/04/24/microsoft_vs_code_sub...
[1] https://www.reddit.com/r/cursor/comments/1kbt790/rules_in_49...
So, to avoid depressed AIs randomly ending the world, have a stable of multiple AIs with different provenance (one from Anthropic, one from OpenAI, one from Google...) and require majority agreement to reduce the error rate. Adjust the threshold depending on the criticality of the task at hand.
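A minimal sketch of that voting logic, with the provider calls stubbed out as a plain list of answers (the provider names and outputs here are hypothetical):

```python
from collections import Counter

def majority_vote(answers, threshold=0.5):
    """Return the answer agreed on by more than `threshold` of the
    models, or None (escalate to a human) if nothing clears the bar."""
    if not answers:
        return None
    winner, count = Counter(answers).most_common(1)[0]
    return winner if count / len(answers) > threshold else None

# One answer per provider (Anthropic, OpenAI, Google...), gathered however
# you call them; the strings below are made up for illustration:
votes = ["approve", "approve", "reject"]

majority_vote(votes)                  # simple majority suffices
majority_vote(votes, threshold=0.9)   # critical task: demand near-unanimity
```

For a critical task, 2 out of 3 does not clear a 0.9 threshold, so the second call returns None and the decision falls through to whatever escalation path you have.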
Sounds exactly like "the cloud", especially AWS. Basically "get married to our platform, build on top of it, and make it hard to leave." The benefits are that it's easy to get started, and that they invested in the infrastructure. But now they're trying to lock you in by storing as much state and data as possible with them, without an easy way to migrate, so as to increase your switching costs. For social networks the benefit was the network effect, but that doesn't apply here.
Yeah, they keep pushing higher-level services, but uptake of these is extremely limited. If you used something like SageMaker, which has an extremely high lock-in factor, it's probably because you're an old-school company that doesn't know what it's doing and AWS held your developers' hands to get a Hello World-level app working. But at least you got your name printed in their case study materials at the end of the project.
I think OpenAI looks at AWS and thinks they can do better. And for their investors, they must do better. But in the end I think the commoditization of LLMs is already almost complete, and this is just a futile attempt to fight it.
At the end of the day, all I ever seem to use is the chat completion API with structured outputs turned on. Despite my "basic" usage, I am employing tool use, recursive conversations, RAG, etc. I don't see the value in outsourcing state management of my "agent" to a 3rd party. I have way more autonomy if I keep things like this local.
The entire premise of these products is that you are feeding a string literal into some black box and it gives you a new string back, hopefully as JSON or whatever format you requested. If you focus just on the idea of composing the appropriate string each time, everything else melts away. This is the only grain that really matters. Think about other ways in which we compose highly-structured strings based upon business state stored in a database. It's literally the exact same thing you do when you SSR a webpage with PHP. The only real difference is how it is served.
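The SSR analogy can be made concrete: render the request string from database state locally, then hand it to whatever chat-completions client you use. Everything below (the template, the state dict, the field names) is invented for illustration; only the rendering step is shown, since the black-box call is the interchangeable part:

```python
import json

def render_prompt(template: str, state: dict) -> str:
    """Compose the request string from business state, exactly like
    server-side rendering a page; the LLM call is just the serving."""
    return template.format(**state)

# Business state as it might come out of your database (made up):
state = {
    "customer": "ACME Corp",
    "open_tickets": json.dumps([{"id": 101, "status": "open"}]),
}

TEMPLATE = (
    "You are a support assistant for {customer}.\n"
    "Open tickets: {open_tickets}\n"
    'Reply with JSON: {{"next_action": string}}'
)

prompt = render_prompt(TEMPLATE, state)
# `prompt` is the string literal you feed to the black box; plug in the
# chat-completions client of your choice here and keep the state local.
```

Keeping the rendering step in your own code is what preserves the autonomy described above: swapping the model vendor touches one line, not your state management.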
If you built on the Assistant API, maybe take the hint and don't just rewrite to the Responses API? Own your product, black box the LLM-of-the-day.
Indeed, but after scanning this article that pulls in all those pieces of indirect evidence I wondered whether some type of structured knowledge database (that encodes the innumerable pieces of historical information that are known, tags them with confidence levels etc.) would not be useful to advance research in such domains.
Something like a large collection of RDF triples against which you could run a query like "Given this new data point, how much more likely is it that Alexander the Great's tunic has been identified in a royal tomb at Vergina?"
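A toy sketch of the idea: facts stored as (subject, predicate, object) statements, each tagged with a confidence value, queried by pattern matching. This is a stand-in for a real RDF store plus SPARQL, and every fact and confidence number below is invented for illustration:

```python
def match(facts, s=None, p=None, o=None):
    """Return stored facts matching the pattern; None acts as a wildcard,
    like a variable in a SPARQL triple pattern."""
    return [f for f in facts
            if (s is None or f[0] == s)
            and (p is None or f[1] == p)
            and (o is None or f[2] == o)]

# Hypothetical evidence, as (subject, predicate, object, confidence):
kb = [
    ("Tomb_II_Vergina", "contains", "purple_tunic", 0.9),
    ("purple_tunic", "dated_to", "4th_century_BC", 0.7),
    ("Alexander", "owned", "purple_tunic", 0.3),
]

# "What do we know about the tunic's ownership, and how confident are we?"
claims = match(kb, o="purple_tunic")
```

A new data point would then update the confidence on the relevant statements, and the query re-run to see how the overall picture shifts.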
Have you used Cursor on a daily basis? I have, every day for six months now, and I haven't encountered a single bug that prevented me from working.
Moreover, while Microsoft has lately been trying to catch up, it's still very far behind, especially on the "tab autocompletion" front.
Microsoft provides the editor base, foundation models provide the smarts, and Cursor provides some, in my experience, extremely buggy context management features. There is no moat.