Posted by u/scaredpelican 4 months ago
Cursor IDE support hallucinates lockout policy, causes user cancellations (old.reddit.com/r/cursor/c...)
Earlier today Cursor, the magical AI-powered IDE, started kicking users off when they logged in from multiple machines.

Like, you’d be working on your desktop, switch to your laptop, and all of a sudden you're forcibly logged out. No warning, no notification, just gone.

Naturally, people thought this was a new policy.

So they asked support.

And here’s where it gets batshit: Cursor has a support email, so users emailed them to find out. The support person told everyone this was “expected behavior” under their new login policy.

One problem: there was no support team. It was an AI designed to 'mimic human responses'.

That answer, totally made up by the bot, spread like wildfire.

Users assumed it was real (because why wouldn’t they? It's their own support system lol), and within hours the community was in revolt. Dozens of users publicly canceled their subscriptions, myself included. Multi-device workflows are table stakes for devs, and if you're going to pull something that disruptive, you'd at least expect a changelog entry or smth.

Nope.

And just as people started comparing notes and figuring out that the story didn’t quite add up… the main Reddit thread got locked. Then deleted. Like, no public resolution, no real response, just silence.

To be clear: this wasn’t an actual policy change, just a backend session bug, and a hallucinated excuse from a support bot that somehow did more damage than the bug itself.

But at that point, it didn’t matter. People were already gone.

Honestly one of the most surreal product screwups I’ve seen in a while. Not because they made a mistake, but because the AI support system invented a lie, and nobody caught it until the userbase imploded.

nerdjon · 4 months ago
There is a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore and then a company that would benefit from that narrative gets directly hurt by it.

Which of course they are going to try to brush it all away. Better than admitting that this problem very much still exists and isn’t going away anytime soon.

lynguist · 4 months ago
https://www.anthropic.com/research/tracing-thoughts-language...

The section about hallucinations is deeply relevant.

Namely, Claude sometimes provides a plausible but incorrect chain-of-thought reasoning when its “true” computational path isn’t available. The model genuinely believes it’s giving a correct reasoning chain, but the interpretability microscope reveals it is constructing symbolic arguments backward from a conclusion.

https://en.wikipedia.org/wiki/On_Bullshit

This empirically confirms the “theory of bullshit” as a category distinct from lying. It suggests that “truth” emerges secondarily to symbolic coherence and plausibility.

This means knowledge itself is fundamentally symbolic-social, not merely correspondence to external fact.

Knowledge emerges from symbolic coherence, linguistic agreement, and social plausibility rather than purely from logical coherence or factual correctness.

emn13 · 4 months ago
While some of what you say is an interesting thought experiment, I think the second half of this argument has, as you'd put it, a low symbolic coherence and low plausibility.

Recognizing the relevance of coherence and plausibility does not need to imply that other aspects are any less relevant. Redefining truth merely because coherence is important and sometimes misinterpreted is not at all reasonable.

Logically, a falsehood can validly be derived from assumptions when those assumptions are false. That simple reasoning step alone is sufficient to explain how a coherent-looking reasoning chain can result in incorrect conclusions. Also, there are other ways a coherent-looking reasoning chain can fail. What you're saying is just not a convincing argument that we need to redefine what truth is.

jimbokun · 4 months ago
> Knowledge emerges from symbolic coherence, linguistic agreement, and social plausibility rather than purely from logical coherence or factual correctness.

This just seems like a redefinition of the word "knowledge" different from how it's commonly used. When most people say "knowledge" they mean beliefs that are also factually correct.

CodesInChaos · 4 months ago
> The model genuinely believes it’s giving a correct reasoning chain, but the interpretability microscope reveals it is constructing symbolic arguments backward from a conclusion.

Sounds very human. It's quite common that we make a decision based on intuition, and the reasons we give are just post-hoc justification (for ourselves and others).

jmaker · 4 months ago
I haven’t used Cursor yet. Some colleagues have and seemed happy. I’ve had GitHub Copilot on for what feels like a couple of years; a few days ago VS Code was extended to provide an agentic workflow, MCP, and bring-your-own-key, and it interprets instructions in a codebase. But the UX and the outputs are bad in over 3/4 of cases. It’s a nuisance to me. It injects bad code even though it has the full context. Is Cursor genuinely any better?

To me it feels like the people who benefit from, or at least enjoy, that sort of assistance and I solve vastly different problems and code very differently.

I’ve done exhausting code reviews on juniors’ and middles’ PRs but what I’ve been feeling lately is that I’m reviewing changes introduced by a very naive poster. It doesn’t even type-check. Regardless of whether it’s Claude 3.7, o1, o3-mini, or a few models from Hugging Face.

I don’t understand how people find that useful. Yesterday I literally wasted half an hour on a test suite setup that a colleague of mine had introduced to the codebase and that wasn’t good, and I tried delegating the fix to several of the Copilot models. All of them missed the point, and some even introduced security vulnerabilities in the process, invalidating the JWT validation. I tried “vibe coding” it till it works, until I gave up in frustration and just used an ordinary search engine, which led me to the docs, in which I immediately found the right knob. I reverted all that crap and did the simple and correct thing. So my conclusion was simple: vibe coding and LLMs made the codebase unnecessarily more complicated and wasted my time. How on earth do people code whole apps with that?

ScottBurson · 4 months ago
> The model genuinely believes it’s giving a correct reasoning chain

The model doesn't "genuinely believe" anything.

nickledave · 4 months ago
Yes

https://link.springer.com/article/10.1007/s10676-024-09775-5

> # ChatGPT is bullshit

> Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

skrebbel · 4 months ago
Offtopic but I'm still sad that "On Bullshit" didn't go for that highest form of book titles, the single noun like "Capital", "Sapiens", etc

ModernMech · 4 months ago
It's a huge problem. I just can't get past it and I get burned by it every time I try one of these products. Cursor in particular was one of the worst; the very first time I allowed it to look at my codebase, it hallucinated a missing brace (my code parsed fine), "helpfully" inserted it, and then proceeded to break everything. How am I supposed to trust and work with such a tool? To me, it seems like the equivalent of lobbing a live hand grenade into your codebase.

Don't get me wrong, I use AI every day, but it's mostly as a localized code complete or to help me debug tricky issues. Meaning I've written and understand the code myself, and the AI is there to augment my abilities. AI works great if it's used as a deductive tool.

Where it runs into issues is when it's used inductively, to create things that aren't there. When it does this, I feel the hallucinations can be off the charts -- inventing APIs, function names, entire libraries, and even entire programming languages on occasion. The AI is more than happy to deliver any kind of information you want, no matter how wrong it is.

AI is not a tool, it's a tiny Kafkaesque bureaucracy inside of your codebase. Does it work today? Yes! Why does it work? Who can say! Will it work tomorrow? Fingers crossed!

yodsanklai · 4 months ago
You're not supposed to trust the tool, you're supposed to review and rework the code before submitting for external review.

I use AI for rather complex tasks. It's impressive. It can make a bunch of non-trivial changes to several files, and have the code compile without warnings. But I need to iterate a few times so that the code looks like what I want.

That being said, I also lose time pretty regularly. There's a learning curve, and the tool would be much more useful if it was faster. It takes a few minutes to make changes, and there may be several iterations.

mediaman · 4 months ago
I'd add that the deductive abilities come through when given a well-defined spec. I've found it does well when I know what APIs I want it to use, and what general algorithmic approaches I want (which are still sometimes brainstormed separately with an AI, but not within the codebase). I provide it a numbered outline of the desired requirements and approach to take, and it usually does a good job.

It does poorly without heavy instruction, though, especially with anything more than toy projects.

Still a valuable tool, but far from the dreamy autonomous geniuses that they often get described as.

Mountain_Skies · 4 months ago
Versioning in source control for even personal projects just got far more important.
skissane · 4 months ago
> the very first time I allowed it to look at my codebase, it hallucinated a missing brace (my code parsed fine), "helpfully" inserted it, and then proceeded to break everything.

This is not an inherent flaw of LLMs; rather, it is a flaw of a particular implementation. If you use guided sampling, so that during sampling you only consider tokens allowed by the programming language's grammar at that position, it becomes impossible for the LLM to generate ungrammatical output.

> When it does this, I feel the hallucinations can be off the charts -- inventing APIs, function names, entire libraries,

They can use guided sampling for this too: if you know the set of function names that exist in the codebase and its dependencies, you can reject tokens that correspond to non-existent function names during sampling.

Another approach, instead of or as well as guided sampling, is to use an agent with function calling, so the LLM can try compiling the modified code itself and then attempt to recover from any errors that occur.
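
A minimal sketch of the guided-sampling idea (the `legal_token_ids` oracle here is a hypothetical stand-in for an incremental grammar parser, not any particular inference library's API):

    import torch

    def sample_constrained(logits: torch.Tensor, legal_token_ids: set[int]) -> int:
        """Sample the next token, considering only tokens the grammar allows.

        `legal_token_ids` would come from an incremental parser tracking
        which tokens can validly extend the program at this position.
        """
        mask = torch.full_like(logits, float("-inf"))
        mask[list(legal_token_ids)] = 0.0             # legal tokens keep their scores
        probs = torch.softmax(logits + mask, dim=-1)  # illegal tokens get probability 0
        return int(torch.multinomial(probs, num_samples=1))

The same masking works for the identifier idea above: shrink the legal set to tokens that continue a known function name.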

theonething · 4 months ago
> it hallucinated a missing brace (my code parsed fine), "helpfully" inserted it, and then proceeded to break everything.

Your tone is rather hyperbolic here, making it sound like an extra brace resulted in a disaster. It didn't. It was easy to detect and easy to fix. Not a big deal.

cryptoegorophy · 4 months ago
I think that’s why Apple is very slow at rolling out AI, if it ever actually will. The downside is way too big compared to the upside.
saintfire · 4 months ago
You say slowly, but in my opinion Apple made an out of character misstep by releasing a terrible UX to everyone. Apple intelligence is a running joke now.

Yes they didn't push it as hard as, say, copilot. I still think they got in way too deep way too fast.

zdragnar · 4 months ago
Investors seem to be starved for novelty right now. Web 2.0 is a given, web 3.0 is old, crypto has lost the shine, all that's left to jump on at the moment is AI.

Apple fumbled a bit with Siri, and I'm guessing they're not too keen to keep chasing everyone else, since outside of limited applications it turns out half baked at best.

Sadly, unless something shinier comes along soon, we're going to have to accept that everything everywhere else is just going to be awful. Hallucinations in your doctor's notes, legal rulings, in your coffee and laundry and everything else that hasn't yet been IoT-ified.

sillyfluke · 4 months ago
They already rolled out an "AI" product. Got humiliated pretty bad, and rolled it back. [0]

[0] https://www.bbc.com/news/articles/cq5ggew08eyo

jmaker · 4 months ago
Even the iOS and macOS typing correction engine has been getting worse for me over the past few OS updates. I’m now typing this on iOS, and it’s really annoying how it injects completely unrelated words, replaces minor typos with completely irrelevant words. Same in Safari on macOS. The previous release felt better than now, but still worse than a couple years ago.
gambiting · 4 months ago
>>if it ever actually will.

If they don't, then I'd hope they get absolutely crucified by trade commissions everywhere. Currently there are billboards in my city advertising Apple AI even though it doesn't even exist yet; if it's never brought to market then it's a serious case of misleading advertising.

poink · 4 months ago
Yet Apple has reenabled Apple Intelligence multiple times on my devices after OS updates despite me very deliberately and angrily disabling it multiple times
m3kw9 · 4 months ago
When you've got 1-2 billion users a day doing maybe 10 billion prompts a day, it’s risky
anonzzzies · 4 months ago
Did anyone say that? They are an issue everywhere, including for code. But with code I can at least have tooling automatically check and feed back that it hallucinated libraries, functions, etc.; with just normal research/problems there is no such thing, and you will spend a lot of time verifying everything.
threeseed · 4 months ago
I use Scala, which has arguably the best compiler/type system, with Cursor.

There is no world in which a compiler or tooling will save you from the absolute mayhem it can do. I’ve had it routinely try to re-implement third party libraries, modify code unrelated to what it was asked, quietly override functions etc.

It’s like a developer who is on LSD.

felipefar · 4 months ago
Yes, most people who have an incentive in pushing AI say that hallucinations aren't a problem, since humans aren't correct all the time.

But in reality, hallucinations either make people using AI lose a lot of their time trying to steer the LLMs out of dead ends, or render those tools unusable.

manmal · 4 months ago
You get some superficial checking by the compiler and test cases, but hallucinations that pass both are still an issue.
rini17 · 4 months ago
Except when the hallucinated library exists and it's malicious. This is actually happening. Without AI, by using plain google you are less likely to fall for that (so far).
jmaker · 4 months ago
Until the model injects a subtle change to your logic that does type-check and then goes haywire in production. It just takes one colleague under pressure and another one reviewing the PR, and then you're on call while they're out sick or on vacation.
learningstud · 4 months ago
People hallucinate all the time out of pressure or habit. We don't need AI for that. It's hard to tell most people from AI. Most people would fail Turing tests as subjects.
_jonas · 4 months ago
I see this fallacy often too.

My company provides hallucination detection software: https://cleanlab.ai/tlm/

But we somehow end up in sales meetings where the person who requested the meeting claims their AI does not hallucinate ...

mntruell · 4 months ago
(Cursor cofounder)

Apologies - something very clearly went wrong here. We’ve already begun investigating, and some very early results:

* Any AI responses used for email support are now clearly labeled as such. We use AI-assisted responses as the first filter for email support.

* We’ve made sure this user is completely refunded; least we can do for the trouble.

For context, this user’s complaint was the result of a race condition that appears on very slow internet connections. The race leads to a bunch of unneeded sessions being created which crowds out the real sessions. We’ve rolled out a fix.
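
Roughly, the shape of the bug, as a simplified sketch (illustrative only, not the actual implementation):

    MAX_SESSIONS = 3  # hypothetical per-user session cap

    sessions: dict[str, list[str]] = {}  # user_id -> session tokens, oldest first

    def create_session(user_id: str, token: str) -> None:
        bucket = sessions.setdefault(user_id, [])
        bucket.append(token)
        if len(bucket) > MAX_SESSIONS:
            bucket.pop(0)  # evicts the oldest session, possibly one still in use

    create_session("user", "desktop")         # the session the user is working in
    for i in range(3):                        # a slow connection times out and retries,
        create_session("user", f"retry-{i}")  # each retry minting another session
    print("desktop" in sessions["user"])      # False: the live session was crowded out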

Appreciate all the feedback. Will help improve the experience for future users.

nextaccountic · 4 months ago
Why did you remove this thread?

https://old.reddit.com/r/cursor/comments/1jyy5am/psa_cursor_...

(For reference, here it is in reveddit https://www.reveddit.com/v/cursor/comments/1jyy5am/psa_curso... - text from post was unfortunately not saved)

It was already locked, with a stickied comment from a dev clarifying what happened

Did you remove it so people can't find about this screwup when searching Google?

Anyway, if you acknowledge it was a mistake to remove the thread, could you please un-remove it?

PrayagS · 4 months ago
The whole subreddit is moderated poorly. I’ve seen plenty of users post on r/LocalLlama about how something negative or constructive they said on the Cursor sub was just removed.
AyyEye · 4 months ago
Why would anyone trust you?

The best case scenario is that you lied about having people answer support. LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive. Then you tried to control the narrative on reddit. So forgive me if I hit that big red DOUBT button.

Even in your post you call it "AI-assisted responses" which is as weaselly as it gets. Was it a chatbot response or was a human involved?

But 'a chatbot messed up' doesn't explain how users got locked out in the first place. EDIT: I see your comment about the race condition now. Plausible but questionable.

So the other possible scenario is that you tried to hose your paying customers then when you saw the blowback blamed it on a bot.

'We missed the mark' is such a trope non-apology. Write a better one.

I had originally ended this post with "get real" but your company's entire goal is to replace the real with the simulated so I guess "you get what you had coming". Maybe let your chatbots write more crap code that your fake software engineers push to paying customers that then get ignored and/or lied to when they ask your chatbots for help. Or just lie to everyone when you see blowback. Whatever. Not my problem yet because I can write code well enough that I'm embarrassed for my entire industry whenever I see the output from tools like yours.

This whole "AI" psyop is morally bankrupt and the world would be better off without it.

PoignardAzur · 4 months ago
> The best case scenario is that you lied about having people answer support. LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive.

Also, illegal in the EU.

jackaroe420 · 4 months ago
I don't know who you are but you said this so well!
Azeralthefallen · 4 months ago
Hi, since I know you will never respond to this or hear this:

We spent almost 2 months fighting with you guys about basic questions any B2B SaaS should be able to answer us. Things such as invoicing, contracts, and security policies. This was for a low 6 figure MRR deal.

When your sales rep responds "I don't know" or "I will need to get back to you" for weeks about basic questions, it leaves a massive disappointment. Please do better. In any case, we have moved to Copilot.

dspillett · 4 months ago
> Any AI responses used for email support are now clearly labeled as such.

Because we all know how well people pay attention to such clear labels, even seasoned devs, not just “end users”⁰.

Also, deleting public view of the issue (locking & hiding the reddit thread) tells me a lot about how much I should trust the company and its products, and as such I will continue to not use them.

--------

[0] though here the end users are devs

Snakes3727 · 4 months ago
I do truly love how you guys even went so far as to hide and lock the post on Reddit.

This person is not the only one experiencing this bug, as this thread has pointed out.

KennyBlanken · 4 months ago
I wish more people realized that virtually any subreddit for a company or product is run by the company - either directly or via a firm that specializes in 'sentiment analysis and management' or whatever the marketdroids call it these days. Even if they don't remove posts via moderation, they'll just hammer it with downvotes from sockpuppet accounts.

HN goes a step further. It has a function that allows moderators to kill or boost a post by subtracting or adding a large amount to the post's score. HN is primarily a place for Y Combinator to hype their latest venture, and a "safe" place for other startups and tech companies.

patcon · 4 months ago
Agreed, this is what's infuriating: insistence on control.

They will utterly fail to build for a community of users if they don't have anyone on hand who can tell them what a terrible idea that was.

To the cofounder: hire someone (ideally with some thoughtful reluctance around AI, who understands what's potentially lost in using it) who will tell you your ideas around this are terrible. Hire this person before you fuck up your position as a benevolent leader of this new field.

petesergeant · 4 months ago
I dunno, that seems pretty reasonable to me simply for stopping the spread of misinformation. The main story will absolutely get written up by some smaller news sources, but is it really a benefit for someone facing a similar issue in the future to find an outdated and probably confusing Reddit post about it?
slotrans · 4 months ago
> We use AI-assisted responses as the first filter for email support.

Literally no one wants this. The entire purpose of contacting support is to get help from a human.

fragmede · 4 months ago
Sorta? I mean, I want my problem fixed, regardless of whether it's a person or not. Having a person listen to me complain about my problems might soothe my conscience, but for "I can't pay my bill" or "why was it so high": having those answered by a system that is contextualized to my problem and is empowered to fix it, and not just talking to a brick wall? I wouldn't say totally fine, but at the end of the day, if my problem or my query is solved, even if it's weird, I can't say I really needed the voice on the other end of the phone to come from a human. If a company's business model isn't sustainable without using AI agents, it's not really my problem that it's not, but also if I'm using their product, presumably I don't want that to go away.
hartator · 4 months ago
> For context, this user’s complaint was the result of a race condition that appears on very slow internet connections.

Seems like you are still blaming the user for his “very slow internet”.

How do you know the user's internet was slow? Couldn't a race condition like this exist anyway, with two regular fast internet connections competing for the same sessions?

Something doesn’t add up.

mritchie712 · 4 months ago
huh?

this is a completely reasonable and seemingly quite transparent explanation.

if you want a conspiracy, there are better places to look.

eranation · 4 months ago
Side note... I'm a paying enterprise customer who moved my whole team to Cursor, and I have to say I'm considering canceling due to the nonexistent support. For example, Cursor will create new files instead of editing an existing one when you have a workspace with multiple folders in a monorepo...
geuis · 4 months ago
Why in all of Hades would you force your entire eng org to use only one LLM provider? It's incredibly easy to run this stuff locally on 4+ year old hardware. Why is this even something you're spending company money on? Investor funds?
hakaneskici · 4 months ago
Hi Michael,

Slightly related to this; I just wanted to ask whether all Cursor email inboxes are gated by AI agents? I've tried to contact Cursor via email a few times in the past, but haven't even received an AI response :)

Cheers!

mntruell · 4 months ago
Not all of them (e.g. security@)! But our support system currently is. We are standing up a much bigger team here but are behind where we should be.
adenta · 4 months ago
You’ve promised a ton of people refunds that they never got, others in this thread and myself included.

Edit: he did refund 22 mins after seeing this

krzat · 4 months ago
you didn't get a refund because the promise of refund was also hallucinated.
PoignardAzur · 4 months ago
Maybe wait more than an hour before implying the refunds were a lie all along.
makingstuffs · 4 months ago
Yeah I got asked for feedback and offered a refund when I cancelled. Never got any reply after. Guess it was AI slop
PUSH_AX · 4 months ago
It's a real shame that your team deletes threads like this in instances where they have control (eg they are mods on the subreddit). Part of me wonders if you had a magic wand would you have just deleted this too, but you're forced to chime in now because you don't.
ach9l · 4 months ago
so the actual implementation of the code to log people off was also hallucinated? the enforcement too? all the way to a production environment? is this safe, or just a virtual scapegoat?
Ukv · 4 months ago
To my understanding there weren't really distinct "implementation of the code to log people off" and "enforcement" - just a bug where previous sessions were being expired when a new one was created.

That an LLM then invented a reason when asked by users why they're being logged out isn't that surprising. While not impossible, I don't think there's currently any indication that they intended to change policy and are just blaming it on a hallucination as a scapegoat.

ph4evers · 4 months ago
Keep going! I love Cursor. Don’t let the haters get to you
redbell · 4 months ago
> Any AI responses used for email support are now clearly labeled as such

Also, from the first comment in the post:

> Unfortunately, this is an incorrect response from a front-line AI support bot.

Well, this actually hurts... a lot! I believe one of the key pillars of making a great company is customer support, which represents the soul, the human part, of the company.

fossuser · 4 months ago
Thanks for the details and for replying here!

Don’t let the dickish replies get to you.

make3 · 4 months ago
Support emails shouldn't be AI. It's just so annoying. Put a human in the loop at least. This is a paying service, not a massive ad supported thing.
SCdF · 4 months ago
> * Any AI responses used for email support are now clearly labeled as such. We use AI-assisted responses as the first filter for email support.

Don't use AI. Actually care. Like, take a step back, and realise you should give a shit about support for a paid product.

Don't get me wrong: AI is a very effective tool, *for doing things you don't care about*. I had to do a random docker compose change the other day. It's not production code, it will be very obvious whether or not the AI output works, and I very rarely touch docker and don't care to become a super expert in it. So I prompted the change, and it was good enough, so I ran with it.

You using AI for support tells me that you don't care about support. Which tells me whether or not I should be your customer.

petesergeant · 4 months ago
There’s AI and there’s “AI”, and this whole drama would have been avoided by returning links to an FAQ found using embedding search, rather than trying to turn the hits into a textual answer, which, speaking as someone who works with these systems all day, is madness
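
A minimal sketch of that retrieval-only approach, assuming the sentence-transformers package (the model choice, FAQ entries, and URLs are illustrative):

    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    faq = [  # placeholder FAQ entries
        ("Why was I logged out on my other machine?", "https://example.com/faq/sessions"),
        ("How do I change my billing plan?", "https://example.com/faq/billing"),
    ]
    faq_vecs = model.encode([q for q, _ in faq], normalize_embeddings=True)

    def support_reply(query: str) -> str:
        # Return the closest FAQ link verbatim: retrieval only, no generation,
        # so there is no free-text answer for the model to hallucinate.
        v = model.encode([query], normalize_embeddings=True)[0]
        best = int(np.argmax(faq_vecs @ v))  # cosine similarity on unit vectors
        return f"This FAQ entry may help: {faq[best][1]}"
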
thih9 · 4 months ago
> Don't use AI. Actually care.

I agree with this. Also, whenever I care about code, I don’t use AI. So I very rarely use AI assistants for coding.

I guess this is why Cursor is interested in making AI assistants popular everywhere, they don’t want the association that “AI assisted” means careless. Even when it does, at least with today’s level of AI.

throwawaysleep · 4 months ago
The amount paid is still pretty trivial. I wouldn’t expect much human support for most SaaS products costing $20 a month.
charlietango592 · 4 months ago
Not trying to defend them, but I think it’s a problem of scaling up. The user base grew very quickly, and keeping up with the support inquiries must be a tough job. Therefore the first line of defense is AI support replies.

I agree with you, they should care.

mindwok · 4 months ago
They’re like a team of 10 people with thousands, if not hundreds of thousands of users. “Actually care” is not a viable path to success here.
nkrisc · 4 months ago
> Any AI responses used for email support are now clearly labeled as such. We use AI-assisted responses as the first filter for email support.

And what’s a customer supposed to do with that information? Know that they can’t trust it? What’s the point then?

mrheosuper · 4 months ago
Does your codebase use LLMs?
SpanishBrowne · 4 months ago
cofounder or another bot stringing letters together?

geuis · 4 months ago
Or you could hire real people to actually answer real customer issues. Just an idea.
birdman3131 · 4 months ago
Tinfoil hat me says that it was a policy change that they are blaming on an "AI Support Agent" and hoping nobody pokes too much behind the curtain.

Note that I have absolutely no knowledge or reason to believe this other than general distrust of companies.

rustc · 4 months ago
> Tinfoil hat me says that it was a policy change that they are blaming on an "AI Support Agent" and hoping nobody pokes too much behind the curtain.

Yeah, who puts an AI in charge of support emails with no human checks and no mention that it's an AI generated reply in the response email?

daemonologist · 4 months ago
AI companies high on their own supply, that's who. Ultralytics is (in)famous for it.
recursive · 4 months ago
A forward-thinking company that believes in the power of Innovation™.
p1necone · 4 months ago
An AI company dogfooding their own marketing. It's almost admirable in a way.
nkrisc · 4 months ago
This is the future AI companies are selling. I believe they would 100%.
xbar · 4 months ago
I worry that the tally of those who do is much higher than is prudent.
conradfr · 4 months ago
A lot of companies actually, although 100% automation is still rare.
pxx · 4 months ago
OpenAI seems to do this. I've gotten complete nonsense replies from their support for billing questions.

that_guy_iain · 4 months ago
Is this sarcasm? AI has been getting used to handle support requests for years without human checks. Why would they suddenly start adding human checks when the tech is way better than it was years ago?
furyofantares · 4 months ago
It does say it's AI generated. This is the signature line:

    Sam
    Cursor AI Support Assistant
    cursor.com • hi@cursor.com • forum.cursor.com

babypuncher · 4 months ago
Given how incredibly stingy tech companies are about spending any money on support, I would not be surprised if the story about it being a rogue AI support agent is 100% true.

It also seems like a weird thing to lie about, since it's just another very public example of AI fucking up something royally, coming from a company whose whole business model is selling AI.

xienze · 4 months ago
Both things can be true. The AI support bot might have been trained to respond with “yup that’s the new policy”, but the unexpected shitstorm that erupted might have caused the company to backpedal by saying “official policy? Ha ha, no of course not, that was, uh, a misbehaving bot!”
arkh · 4 months ago
> how incredibly stingy tech companies are about spending any money on support

Which is crazy. Support is part of marketing so it should get the same kind of consideration.

Why do people think Amazon is hard to beat? Price? Nope. Product range? Nope. Delivery time? In part. The fact that if you have a problem with your product they'll handle it? Yes. After getting burned multiple times by other retailers you're gonna pay the Amazon tax so you don't have to ask 10 times for a refund or be redirected to the supplier's own support or some third-party repair shop.

Everyone knows it. But people are still stuck on the "support is a cost center" way of life so they keep on getting beat by the big bad Amazon.

sitkack · 4 months ago
That is because AI runs PR as well.
throwaway314155 · 4 months ago
Yeah, it makes little sense to me that so many users would experience exactly the same "hallucination" from the same model. Unless it had been made deterministic; but even then, subtle changes in the wording would trigger different hallucinations, not an identical one.
WesolyKubeczek · 4 months ago
What if the prompt to the “support assistant” postulates that 1) everything is a user error, 2) if it’s not, it’s a policy violation, 3) if it’s not, it may be our fuckup but we’re allowed to do it? That, plus the question in the email, leads it to a particular answer.

Given that LLMs are trained on lots of stuff and not just this company's policy, it’s not hard to imagine how it could conjure up a plausible policy of “one session per user” and blame users for violating it.

6510 · 4 months ago
This is the best idea I've read all day. Going to implement AI for everything right now. This is a must-have feature.
isaacremuant · 4 months ago
I think this would actually make them look worse, not better.
joe_the_user · 4 months ago
Weirdly, your conspiracy theory actually makes the turn of events less disconcerting.

The thing is, what the AI hallucinated (if it was an AI hallucinating) was the kind of sleazy thing companies actually do. However, the thing with sleazy license changes is that they only make money if the company publicizes them. Of course, that doesn't mean a company actually thinks that far ahead (plenty of managers really do think "attack users ... profit!"). Riddles in enigmas...

jgb1984 · 4 months ago
LLM anything makes me queasy. Why would any self-respecting software developer use this tripe? Learn how to write good software. Become an expert in the trade. AI anything will only dig a hole for software to die in. It cheapens the product, butchers the process, and absolutely decimates any hope of skill development for future junior developers.

I'll just keep chugging along, with debian, python and vim, as I always have. No LLM, no LSP, heck not even autocompletion. But damn proud of every hand crafted, easy to maintain and fully understood line of code I'll write.

cachvico · 4 months ago
I use it all the time, and it has accelerated my output massively.

Now, I don't trust the output - I review everything, and it often goes wrong. You have to know how to use it. But I would never go back. Often it comes up with more elegant solutions than I would have. And when you're working with a new platform, or some unfamiliar library that it already knows, it's an absolute godsend.

I'm also damn proud of my own hand-crafted code, but to avoid LLMs out of principle? That's just Luddite.

20+ years of experience across game dev, mobile and web apps, in case you feel it relevant.

ericwood · 4 months ago
I have a hard time being sold on “yea it’s wrong a lot, also you have to spend more time than you already do on code review.”

Getting to sit down and write the code is the most enjoyable part of the job; why would I deprive myself of that? By the time the problem has been defined well enough to explain it to an LLM, sitting down and writing the code is typically very simple.

YeGoblynQueenne · 4 months ago
>> I use it all the time, and it has accelerated my output massively.

Like how McDonalds makes a lot of burgers fast and they are very successful so that's all we really care about?

timewizard · 4 months ago
> "and it has accelerated my output massively."

The folly of single ended metrics.

> but to avoid LLMs out of principle? That's just Luddite.

Do you double check that the LLM hasn't magically recreated someone else's copyrighted code? That's just irresponsible in certain contexts.

> in case you feel it relevant.

Of course it's relevant. If a 19 year old with 1 year of driving experience tries to sell me a car using their personal anecdote as a metric I'd be suspicious. If their only salient point is that "it gets me to where I'm going faster!" I'd be doubly suspicious.

callc · 4 months ago
I’m pretty much in the same boat as you, but here’s one place that LLMs helped me:

In Python I was scanning 1000s of files, each for thousands of keywords. A naive implementation took around 10 seconds, obviously the largest share of execution time after running instrumentation. A quick ChatGPT session led me to Aho-Corasick and string-searching algorithms, which I had never used before. Plug in a library and bam, a 30x speedup for that part of the code.

I could have asked my knowledgeable friends and coworkers, but not at 11PM on a Saturday.

I could have searched the web and probably found it out.

But the LLM basically auto completed the web, which I appreciate.
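
For the curious, a sketch of that approach using the pyahocorasick library (the keyword list here is a placeholder): the automaton is built once, then each file is scanned in a single pass regardless of how many keywords are loaded.

    import ahocorasick  # pip install pyahocorasick

    keywords = ["race condition", "session", "hallucination"]  # thousands in practice

    automaton = ahocorasick.Automaton()
    for kw in keywords:
        automaton.add_word(kw, kw)
    automaton.make_automaton()  # build the trie and failure links once, up front

    def find_keywords(text: str):
        # One pass over `text`, no matter how many keywords are loaded.
        return [(end, kw) for end, kw in automaton.iter(text)]

    print(find_keywords("the session bug was a race condition"))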

kovac · 4 months ago
This is where education comes in. When we go past a certain scale, we should know that O(n) comes into play, and study the existing literature before trying to naively solve the problem. What would happen if the "AI" and web search didn't return anything? Would you have stuck with your implementation? What if you couldn't find a library with a usable license?

Once I had to look up a research paper to implement a computational geometry algorithm because I couldn't find it in any of the typical web sources. There was also no library we could use under a license suitable for our commercial use.

I'm not against use of "AI". But this increasing refusal of those who aspire to work in specialist domains like software development to systematically learn things is not great. That's just compounding on an already diminished capacity to process information skillfully.

klabb3 · 4 months ago
Yes! This is how AI should be used. You have a question that’s quite difficult and may not score well on traditional keyword matching. An LLM can use pattern matching to point you in the right direction: a well-written library based on CS research and/or best practices.
valenterry · 4 months ago
I mean, even in the absence of knowledge of the existence of text-searching algorithms (where I'm from, we learn that in university), just a simple web search would have gotten you there as well, no? It might have taken a few minutes longer though.
mrheosuper · 4 months ago
But do you know every important detail of that library? For example, maybe the lib is not thread-safe, or it allocates a lot of memory to speed things up, or it won't work on an ARM CPU because it uses some x86 ASM hackery.
mixmastamyk · 4 months ago
Sounds like a job for silver/ripgrep and possibly stack exchange. Might take another minute to get it rolling but has other benefits like cost and privacy.
aleph_minus_one · 4 months ago
> I could have asked my knowledgeable friends and coworkers, but not at 11PM on a Saturday.

Get friends with weirder daily schedules. :-)

marcus_holmes · 4 months ago
I was with you 150% (though Arch, Golang and Zed) until a friend convinced me to give it a proper go and explained more about how to talk to the LLM.

I've had a long-term code project that I've really struggled with, for various reasons. Instead of using my normal approach, which would be to lay out what I think the code should do, and how it should work, I just explained the problem and let the LLM worry about the code.

It got really far. I'm still impressed. Claude worked great, but ran out of free tokens or whatever, and refused to continue (fine, it was the freebie version and you get what you pay for). I picked it up again in Cursor and it got further. One of my conditions for this experiment was to never look at the code, just the output, and only talk to the LLM about what I wanted, not about how I wanted it done. This seemed to work better.

I'm hitting different problems, now, for sure. Getting it to test everything was tricky, and I'm still not convinced it's not just fixing the test instead of the code every time there's a test failure. Peeking at the code, there are several remnants of previous architectural models littering the codebase. Whole directories of unused, uncalled, code that got left behind. I would not ship this as it is.

But... it works, kinda. It's fast, I got a working demo of something 80% near what I wanted in 1/10 of the time it would have taken me to make that manually. And just focusing on the result meant that I didn't go down all the rabbit holes of how to structure the code or which paradigm to use.

I'm hooked now. I want to get better at using this tool, and see the failures as my failures in prompting rather than the LLM's failure to do what I want.

I still don't know how much work would be involved in turning the code into something I could actually ship. Maybe there's a second phase which looks more like conventional development cleaning it all up. I don't know yet. I'll keep experimenting :)

imhoguy · 4 months ago
> never look at the code, just the output, and only talk to the LLM about what I wanted

Sir, you have just passed vibe coding exam. Certified Vibe Coder printout is in the making but AI has difficulty finding a printer. /s

SkyPuncher · 4 months ago
> Why would any self respecting software developer use this tripe?

Because I can ship 2x to 5x more code with nearly the same quality.

My employer isn't paying me to be a craftsman. They're paying me to ship things that make them money.

ivan_gammel · 4 months ago
How do you define code quality in this case and what is your stack?
leoh · 4 months ago
Good employee, you get cookie and 1h extra pto
bigstrat2003 · 4 months ago
I wholeheartedly agree. When the tools become actually worth using, I'll use them. Right now they suck, and they slow you down rather than speed you up. I'm hardly a world class developer and I can do far better than these things. Someone who is actually top notch will outclass them even more.
Chinjut · 4 months ago
I understand not wanting to use LLMs that have no correctness guarantees and randomly hallucinate, but what's wrong with ordinary LSPs and autocompletion? Those seem like perfectly useful tools.
OsrsNeedsf2P · 4 months ago
I had a professor who used `ed` to write his code. He said only being able to see one line at a time forces you to think more about what you're doing.

Anyways, Cursor generates all my code now.

x1xx · 4 months ago
If you are like me (same vim, python, no LLM, no autocompletion, no syntax highlighting noise), LSP will make you a better developer: it makes navigating the codebase MUCH easier, including stdlib and 3rd party dependencies.

As a result, you don't lose flow and end up reading considerably more code than you would have otherwise.

jgb1984 · 4 months ago
Actually, I'm kind of cheating because I use https://github.com/davidhalter/jedi-vim for that purpose: allows me to jump to definitions with <leader>d ;) Excellent plugin, and doesn't require an LSP.
incoming1211 · 4 months ago
Can pretty much guarantee that with AI I'm a better software developer than you are without it. And I still love working on software used by millions of people every day, and take pride in what I do.
theonething · 4 months ago
> with debian, python and vim

Why are you cheapening the product, butchering the process and decimating any hope for further skill development by using these tools?

Instead of python, you should be using assembly or heck, just binary. Instead of relying on an OS abstraction layer made by someone else, you should write everything from scratch on the bare metal. Don't lower yourself by using a text editor, go hex. Then your code will truly be "hand crafted". You'll have even more reason to be proud.

dmitrygr · 4 months ago
I am unironically with you. I think people should start to learn from computer architecture and assembly and only then, after demonstrating proper skill, graduate to C, and after demonstrating skill there graduate to managed-memory languages.
CaptainFever · 4 months ago
Relevant XKCD: https://xkcd.com/378/
mock-possum · 4 months ago
Good for you - if that’s what works for you, then keep on keeping on.

Don’t get too hung up on what works for other people. That’s not a good look.

sneak · 4 months ago
This comment presupposes that AI is only used to write code that the (presumably junior-level) author doesn’t understand.

I’m a self-respecting software developer with 28 years of experience. I would, with some caveats, venture to say I am an expert in the trade.

AI helps me write good code somewhere between 3x and 10x faster.

This whole-cloth shallow dismissal of everything AI as worthless overhyped slop is just as tired and content-free as breathless claims of the limitless power or universal applicability of AI.

ookblah · 4 months ago
sorry for the snark, but missing the forest for the trees here. unless it's just some philosophical idea, use the tools that save you time. if anything it saves you writing boilerplate or making careless errors.

i don't need to "hand write" every line and character in my code and guess what, it's still easy to understand and maintain because it's what i would have written anyway. that or you're just bikeshedding minor syntax.

like if you want to be proud of a "hand built" house with hammer and nails be my guest, but don't conflate the two with always being well built.

computerex · 4 months ago
Why use a high level language like python? Why not assembly? Are you really proud of the slow unoptimized byte code that’s executed instead of perfectly crafting the assembly implementation optimizing for the architecture? /s

Seriously, comments like yours assume that all the rest of us, who DO make extensive use of these AI tools and have also been around the block for a while, are idiots.

kebokyo · 4 months ago
here's an archive of the original reddit post since it seemed to be instantly nuked: https://undelete.pullpush.io/r/cursor/comments/1jyy5am/psa_c...
rurp · 4 months ago
It's funny seeing all of the comments trying to blame the users for this screwup by claiming they're using it wrong. It is reddit though, so I guess I shouldn't be surprised.
keeganpoppen · 4 months ago
what is it about reddit that causes this behavior, when they otherwise are skeptical only of whatever the "official story" is at all costs? it is fascinating behavior.
bytesandbits · 4 months ago
wow they nuked it for damage control and only caused more damage
ddxv · 4 months ago
Cursor is weird. They have a basically unused GitHub with a thousand unanswered Issues. It's so buggy in ways that VSCode isn't. I hate it. Also I use it everyday and pay for it.

That's when you know you've captured something: when people hate-use your product.

Any real alternatives? I've tried continue and was unimpressed with the tab completion and typing experience (felt like laggy typing on a remote server).

adriand · 4 months ago
VS Code with standard copilot for tab completion and Aider in a terminal window for all the heavier lifts, asking questions, architecting etc. And it’s cheap! I’ve been using it with OpenRouter (lets you easily switch models and providers) and my $10 of credits lasted weeks. Granted, I also use Claude a lot in the browser.
dtquad · 4 months ago
The reason many prefer Cursor over VSCode + GitHub Copilot is because of how much faster Cursor is for tab completion. They use some smaller models that are latency optimized specifically to make the tab completion feel as fast as possible.
pfg_ · 4 months ago
Copilot's tab completion is significantly worse than cursor's in my experience (only tried free copilot)
caelinsutch · 4 months ago
If you don't mind leaving VSCode, I'm a huge fan of Zed. It doesn't support some languages/stacks yet, but its AI features are on par with VSCode's.
presentation · 4 months ago
That's the wrong IDE to compare it to though, Cursor's AI features are 10x better than VSCode's. I tried Zed last month and while the editing was great, the AI features were too half-baked so I ended up going back to Cursor. Hopefully it gets better fast!
dkersten · 4 months ago
Agreed. My laptop has never used swap until I started using cursor… it’s a resource hog, I dislike using it, but it’s still the best AI coding aid and for the work I’m doing right now, the speed boost is more valuable than hand crafted code in enough cases that it’s worth it for me. But I don’t enjoy using the IDE itself, and I used vscode for a few years.

Personally, I will jump ship to Zed as soon as its agent mode is good enough (I used Zed as a dumb editor for about a year before I used Cursor, and I love it)

permo-w · 4 months ago
I find that if you turn off telemetry (i.e. turn on privacy) the resource hogging slows down a lot
d357r0y3r · 4 months ago
Cline is pretty solid and doesn't require you to use a completely unsustainable VSCode fork.
alok-g · 4 months ago
I have heard Roo Code is a fork of Cline that is better. I have never used either so far.

https://github.com/RooVetGit/Roo-Code

smaddox · 4 months ago
I switched to Windsurf.ai when cursor broke for me. Seems about the same but less buggy. Haven't used it in the last couple weeks, though, so YMMV.
omneity · 4 months ago
I found the Windsurf agent to be relatively less capable, and their inline tool (and the “Tab” they’re promoting so much) extremely underwhelming compared to Cursor.

The only one in this class to be even worse in my experience is Github Copilot.

htrp · 4 months ago
cant bother fixing their issues because they are too busy vibe coding new features
behnamoh · 4 months ago
Cursor + Vim plugin never worked for me, so I switched back to Nvim and never looked back. Nvim already has: avante, codeCompanion, copilot, and many other tools + MCP + aider if you're into that.
tintor · 4 months ago
"Any real alternatives?"

I use Zed with `3.7 sonnet`.

dkersten · 4 months ago
And the agent beta is looking pretty good, so far, too.
mushufasa · 4 months ago
Last I heard their team was still 10 people. Best size for doing something revolutionary. Way too few people to triage all that many issues and provide support.

They have enough revenue to hire, they probably are just overwhelmed. They'll figure it out soon I bet.

throwaway314155 · 4 months ago
I have never rolled my eyes harder.
ozataman · 4 months ago
Any competing product has to absolutely nail tab autocomplete like Cursor has. It's super fast, very smart (even guessing across modules) and very often correct.
amiantos · 4 months ago
Claude Code CLI is amazing and I am very confused as to why no one in 24 hours has recommended it.
bytesandbits · 4 months ago
Cursor sucks. Not as a product. As a team. Their customer support is terrible.

I was offered, in writing, a refund by the team, who cold-reached out to me to ask why I cancelled my sub one week after starting. Then they ignored my 3+ emails in response asking them to process the refund, and other attempts to communicate with them. Offering me a refund as bait to win me back, then ghosting me when I accept it. Wow. Very low.

The product is not terrible but the team responses are. And this, if you see how they handled it, is also a very poor response. First thing you notice if you open the link is that the Cursor team removed the reddit post! As if we were not going to see it or something? Who do they think they are? Censoring bad comments which are 100% legit.

I am giving competitors a go just out of sheer frustration with how they handle customers, and I recommend everybody explore other products before settling on Cursor. I don't intend to ever re-subscribe and have recommended friends do the same, most of whom agree with my experience.

JohnKemeny · 4 months ago
> Their customer support is terrible.

You just don't know how to prompt it correctly.

Crosseye_Jack · 4 months ago
Sounds like perfect grounds for a chargeback to me. The company offered a full refund via one of its agents, then refused to honour that offer; time to make your bank force them to refund you.

Just because you use AI for customer service doesn't mean you don't have to honour its offers to customers. Air Canada recently lost a case where its AI offered a discount to a customer but then refused to offer it "IRL"

https://www.forbes.com/sites/marisagarcia/2024/02/19/what-ai...

einsteinx2 · 4 months ago
Same exact thing happened to me. I tried out Cursor after hearing all the hype and canceled after a few weeks. Got an email asking if I wanted a refund and asking for any feedback. I replied with detailed feedback on why I canceled and accepted the refund offer, then never heard back from them.
samanator · 4 months ago
Interesting. The same thing happened to me. Was offered a refund (graciously, as I had forgotten to cancel the subscription). And after thanking them and agreeing to the refund, was promptly ignored!

Very strange behavior honestly.

pzo · 4 months ago
I had the same exact experience: after disappointment (I couldn't use like 2/3 of my premium credits because every second request failed after they upgraded to 0.46) I unsubscribed. They offered a refund in an email. I replied that I wanted the refund, but got no reply.
gblargg · 4 months ago
Apparently they use AI to read emails. So the future of email will be like phone support now, where you keep writing LIVE AGENT until you get a human responding.
PaulStatezny · 4 months ago
This reminds me of how small of a team they are, and makes me wonder if they have a customer support team that's growing commensurately with the size of the user base.

andybak · 4 months ago
I just cancelled - not because I thought the policy change was real - but simply because this article reminded me I hadn't used it much this month.
scarface_74 · 4 months ago
This is where Kagi’s subscription policy comes in handy. If you don’t use it for a month, you don’t pay for it that month. There is no need to cancel it and Kagi doesn’t have to pay user acquisition costs.
paxys · 4 months ago
Slack does this as well. It's a genius idea from a business perspective. Normally IT admins have to go around asking users if they need the service (or more likely you have to request a license for yourself), regularly monitor usage, deactivate stale users etc., all to make sure the company isn't wasting money. Slack comes along and says - don't worry, just onboard every user at the company. If they don't log in and send at least N messages we won't bill them for that month.
tshaddox · 4 months ago
That's a fun one. It could be interpreted as a generous implementation of a monthly subscription, or a hostile implementation of a metered plan.
elcritch · 4 months ago
Wow, I wish more services did that.
permo-w · 4 months ago
Kagi should take it a step further and just charge per search
jay_kyburz · 4 months ago
surely the first thing you do when you subscribe to Kagi is set your default browser search to Kagi.
mirekrusin · 4 months ago
Really? Brilliant idea.
dylan604 · 4 months ago
So the old adage that there's no such thing as bad PR turns out to be incorrect. Had they not been in the news, they'd at least have gotten one more monthly sub from you!
omneity · 4 months ago
This would only be complete in aggregate. We don’t know how many people signed up as a result.