Readit News
Posted by u/haute_cuisine 4 months ago
Tell HN: Twilio support replies with hallucinated features
I was investigating a bug with our voice system and asked support where I could find some debugging information and event logs.

They told me where I should go in the interface to see it and reassured me that they had checked the logs and the event existed.

It turned out these features and this information don't exist anywhere in the interface and are impossible to retrieve in any way. The support message with the hallucinated features was mostly AI-written.

CEOs tell us AGI is around the corner, but in reality it's just unreliable information, and AI can't even restock a vending machine.
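For what it's worth, Twilio's REST API does expose call records at a documented endpoint (GET /2010-04-01/Accounts/{AccountSid}/Calls.json), which is one way to pull logs yourself rather than trusting the console directions support gives you. A rough sketch with only the standard library; the helper names and placeholder credentials are mine, not Twilio's:

```python
import base64
import json
import urllib.request

API_BASE = "https://api.twilio.com/2010-04-01"

def calls_url(account_sid: str, page_size: int = 20) -> str:
    """Build the documented call-log listing URL for an account."""
    return f"{API_BASE}/Accounts/{account_sid}/Calls.json?PageSize={page_size}"

def fetch_recent_calls(account_sid: str, auth_token: str) -> list:
    """Fetch recent call records using HTTP basic auth (SID:token)."""
    creds = base64.b64encode(f"{account_sid}:{auth_token}".encode()).decode()
    req = urllib.request.Request(
        calls_url(account_sid),
        headers={"Authorization": f"Basic {creds}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["calls"]

if __name__ == "__main__":
    # Placeholder credentials -- substitute your own account SID and auth token.
    for call in fetch_recent_calls("ACXXXXXXXXXXXXXXXX", "your_auth_token"):
        print(call["sid"], call["status"], call["start_time"])
```

At least with the raw API, a 404 is a 404; nobody can hallucinate a log entry at you.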

gdulli · 4 months ago
There used to be a contract that a business had something to lose by providing bad service, that customers would leave and seek better service elsewhere.

I believe the most important and least discussed phenomenon of modern consumer culture is that consumers have passed a threshold of passive and docile behavior such that businesses no longer fear losing customers. Partly because the customers have shown willingness to eat shit, partly because there's a new understanding that all businesses will adopt the same customer-hostile behaviors (AI customer service in this case) so consumers don't have significant choice anyway.

jermaustin1 · 4 months ago
It's not so much a willingness to eat shit, but that no matter which service I use, I will have to eat shit, so the question is whose shit tastes least bad given the benefits.

A lot of VoIP/SMS providers exist, but compared to Twilio, they are just DIY API and SIP providers, which might be what we as developers want, but not what a business "needs".

ivape · 4 months ago
It's interesting that you bring that up because I was just thinking about this concept in an undeveloped form. Egregious salesmanship is to sell an inferior or poor product while bolstering the overall brand reputation. How could that even be possible? With lies. You're absolutely right, the salesman in our world is in his purest and most demonic form.

With Brand management specifically, they specialize in servicing an ornate roof on a house so as to distract from the rest of the house. The ornate roof can be seen from miles away, and so it is the greatest ad you can buy in terms of reach.

I think I was thinking about this because of all the AI startup ads I've been seeing on YouTube. You'd never know how unworthy their product is from how much branding and marketing they do. But that is the dance they do: managing the delta between product quality and brand quality, where the management is the logistics of veiling that delta (not actually closing it).

Taking down a brand means to be diligent and aggressive in exposing that delta. Seems like common sense, but I'd urge you to consider it as more a "classical" formalization of what it is and what needs to be done. There is a terrible phenomenon within the human experience that results in humans trying to lie to each other for money.

It's the classical Theory on Being a Piece of Shit.

azemetre · 4 months ago
This is mostly due to not enough trust-busting in society. If there were actually competitive markets, not monopolies/oligopolies/monopsonies/cartels, the business world would be completely different.

Either that, or legislate workplace democracy.

stronglikedan · 4 months ago
> There used to be a contract

That was before crony capitalism became rampant.

jeromegv · 4 months ago
Air Canada got sued in Canada for having a chatbot that hallucinated a policy.

And they lost

https://www.cbc.ca/news/canada/british-columbia/air-canada-c...

quinnjh · 4 months ago
These tools are perfect for deployment where providing plausible-but-incorrect info is aligned with business outcomes, like cutting your support staff and giving disgruntled customers fake information.

I’ve seen most of the frontier models hallucinate their own capabilities; it's not surprising they might do so for API completions regarding a product they barely know about.

Unless they lose more money from cancelled subscriptions than they saved on cutting support staff, it’s probably the new normal.

trollbridge · 4 months ago
Twilio registered my business name as “My Twilio Account” and is unable to change it. My application for 10DLC also got rejected because I wanted to do something other than send marketing messages with it, and I can't figure out how to describe an opt-in-only service that is strictly for employees, texting their provided phone numbers, with a signed opt-in to get payroll information texted to them.

As a test, I set up something to send junk-quality marketing texts. It was approved.

cacozen · 4 months ago
The vending machine mention is about this post from Anthropic: https://www.anthropic.com/research/project-vend-1

The gist is: Claude AI successfully ran a shop by itself!
- Actually a vending machine
- Actually a mini-fridge in our office
- Actually it gave lots of discounts and free products on our Slack
- Actually it hallucinated a Venmo account and people sent payments to God-knows-who

sieep · 4 months ago
This is hilarious.

The gall these guys have to say things like '...not-too-distant future in which AI models are autonomously running things in the real economy.'

It's not even close to doing something a little girl at a lemonade stand could do, no?

taf2 · 4 months ago
Fun - I always test the AI support by asking it for a really good SC2 Zerg rush build - as I recall, Twilio gave me a pretty good build order.

barbazoo · 4 months ago
None of them know what they're doing. Even Google's own AI integrated into their own apps hallucinates about those very apps, e.g. asking Gemini in Docs how to do something in Docs. It's laughable. LLMs have great utility, but this is not it.

elicash · 4 months ago
What distinguishes AI slop customer support from the previous enshittification of customer service is that previously if you wanted to avoid the garbage chat support you could get on the phone and -- even if you had to go through a phone tree -- you could at least eventually ask a person about the problem.

But now, even if it's possible to get a person on the phone, THAT PERSON is just doing the AI chatbot on their end. By talking to a human, you're just adding a middleman who is accessing the same incorrect chatbot that's available to you.

ilamparithi · 4 months ago
I searched on Google to check if banks were open on a certain day. The AI response on top said they were closed because it was a second Saturday, but it was actually a Wednesday.