maxbond · a year ago
> What he didn't know at the time is there is no phone number for Facebook customer support.

Part of the problem here is that Facebook (though in fairness, they are not unique here) has left this traditional path of escalation void, leaving only fake numbers. They don't even have a real number to play a recorded message affirming that there is no ability to call.

ETA: For instance, I notice Facebook appears to own the typosquat `facrbook.com`. I feel like it's the same principle, though I assume toll-free numbers are more expensive.

jessriedel · a year ago
It’s untenable from a marketing perspective to advertise a phone line that just talks about the services you don’t offer. One could maybe hope for a statement on a help page that says “Facebook will never ask you to call a support number”.
maxbond · a year ago
I think what you've gotta do is say, "You can't call, but here is the number anyway," because customers aren't necessarily interacting with your page anymore. They're interacting with AI summaries of your page. Those AIs might be in house, or might be provided by a search engine. What is tenable or untenable will have to shift to the realities of how users are interacting with the information you present.

If you can't provide their AI with text answering their direct question (eg, "what is the support number for Facebook"), they'll find a document which does provide such text. If it's not you then it's a scammer or competitor. UX for these customers means presenting information in a way that sorts high in a semantic search and is robust to transformation.

If you provide text indirectly answering the question ("that number doesn't exist" rather than a literal number), you're liable to be scored as less relevant than a wrong but direct answer ("the number is 1555 SCAMMER"). You're also less robust to transformations, because you can't pull a valid phone number out of the text.

Or maybe I'm wrong, take any certainty implied by my language as rhetorical. That's just the pattern I'm seeing in these tea leaves.

QuantumGood · a year ago
I once had a Facebook rep I could call (they later ended this), and they didn't know that there were two online newsletters about changes to internal Facebook apps used by advertisers (we used to be able to see who had clicked "interested" on an event). So they put in a bug report when the app stopped working, etc., but we later found out it had been deprecated. All of which is to say that dedicated support is often itself a cause of issues or confusion.
szundi · a year ago
It is easy as hell
chimeracoder · a year ago
> Part of the problem here is that Facebook (though in fairness, they are not unique here) has left this traditional path of escalation void, leaving only fake numbers. They don't even have a real number to play a recorded message affirming that there is no ability to call.

Contrast with Experian, which has a number for consumers to call, but actually has an elaborate infinite loop in its phone tree that prevents you from actually talking to a human (this is by design).

If you're one of their customers (read: a business paying for their service), there's support you can call, but for individuals who have issues with their online Experian account or credit report, you can't, even if you're a paid subscriber to their consumer-oriented credit reporting services.

worble · a year ago
>Part of the problem here is that Facebook (though in fairness, they are not unique here) has left this traditional path of escalation void, leaving only fake numbers.

Frankly it's absurd to me that it's legal to do so. Any public facing company that is sufficiently large should be required by law to operate a phone service where you can talk to a real human being.

All of these huge megacorps are run with absolute impunity, and there is often absolutely zero avenue for regular everyday people to get in touch when they have issues. They direct you in endless loops to FAQs and "Community Resources"; even getting an email address is like getting blood from a stone sometimes.

tgsovlerkhgsel · a year ago
For some cases, your local small claims court may be an efficient escalation path. If enough people do it, companies will learn that too much stonewalling doesn't actually save money, because now your customer support is done by the legal department.
iknowSFR · a year ago
Just a matter of time. The adoption of expectations is dependent on the visibility of the occurrence.
lazide · a year ago
They all are required to have a process service agent and address, legally.
nullserver · a year ago
My wife and most of her friends have all lost their original accounts. She got an email that her password had been changed. We immediately took action, but they had already changed the email associated with the account, and there was no way to change it back. The only thing we could accomplish was getting the account disabled. Zero way to contact Facebook. These are all women for whom FB was the primary storage place for their kids' photos.
throwaway48476 · a year ago
"Pls fix" proposed a market for bribing meta employees to deal with customer support requests.
hiccuphippo · a year ago
So lobbying?
Thadawan · a year ago
“There is a phone number for Meta online. When CBC called it, an automated recording said, ‘Please note that we are unable to provide telephone support at this time,’ and directed callers to meta.com/help.”
maxbond · a year ago
My mistake. Thanks for the correction.
gravescale · a year ago
> Please note that we are unable to provide telephone support at this time

Mealy-mouthed corporate lying horseshit.

They are able, just unwilling.

If free-market libertarianism is as great as the tech bros want us to think, why do these companies lie so much and so often, despite the need for market participants to be correctly and fully informed so they can make rational decisions?

simonw · a year ago
This one is pretty bad. This guy found a fake Facebook customer support phone number in a Google search, then asked the Meta AI chat in Facebook Messenger if the number he found was a real Facebook help line... and Meta AI said that it was. There's a screenshot of the chat in the article.
idle_zealot · a year ago
The bad thing is that people still think LLMs can be trusted at all. Companies integrating them into their offerings are not helping the public adopt the correct mental framing of these tools as "plausible text generators".
joe_the_user · a year ago
Companies integrating them into their offerings are not helping the public adopt the correct mental framing of these tools as "plausible text generators"

"Not helping" seems a wild understatement. "Deceiving people into taking the wrong frame" seems more accurate.

Gigachad · a year ago
The general public is getting lied to constantly. HN users have a bit more context to see through the bullshit but the marketing getting pushed in people is that these AI tools are super genius incredible world changing tools that make everyone 100x more productive.

dorkwood · a year ago
This can be solved with more data. New tech like Windows Recall should be able to scrape enough of the world's data so that this sort of thing doesn't happen anymore.
RecycledEle · a year ago
> The bad thing is that people still think LLMs can be trusted at all.

LLMs are as trustworthy as humans.

Humans have been wrong for about as long as we have been lying.

Whether you get information from a human or an LLM, check it.

I worry about the people who insist on credible sources rather than checking information for themselves. I think 80% or more of them are trolling me, but there are some who genuinely do not apply the Scientific Method to check facts in their everyday life. I truly feel sorry for them.

CoastalCoder · a year ago
This reminds me of that recent issue with a Canadian airline, where (IIRC) a court ruled that their chatbot made a wrong, but binding, commitment to a customer.

I'm curious if a Canadian court would hold Meta liable for the man's losses in this case as well.

dghlsakjg · a year ago
That was a very interesting case. The chatbot in question was not LLM based (the incident was pre-chatGPT in any case) and was simply parroting an out of date or incorrect policy that it had been explicitly programmed to do. It seemed to gain a lot more traction in the press because of LLMs. "Air Canada forced to honor terms and conditions on their website" is a whole lot less interesting.

This FB thing is a case of an LLM simply hallucinating without direct human intervention.

Very different cases from a computer science perspective. My hope is that legally, they don't get viewed differently.

If you outsource functions of your business to a third party contractor you are still responsible for what they do and say. I don't think we should allow companies to weasel out of their obligations because they were dumb enough to let a sentence generator loose in a way that it could make commitments.

jessriedel · a year ago
Yea, it’s certainly a reasonable argument if the wrong information comes from the company itself.
throwaway48476 · a year ago
I wonder if he has a legal claim like the Air Canada passenger to whom the AI quoted a fictitious reimbursement policy.
astrange · a year ago
That incident happened before ChatGPT was released and probably didn't involve AI. Anyone can write a wrong customer support script if they try.
p3rls · a year ago
We're going to see a lot more SEO scams coming from social media platforms now that Google is promoting places like Reddit and Quora. Even on r/SEO you can see moderators there asking themselves questions from alt accounts, subtly promoting themselves. It's dog shit scammers all the way down.
echoangle · a year ago
I mean that’s kind of on meta, as a customer I shouldn’t really have to care about the internals of the company. If a disgruntled employee lies to customers, that shouldn’t be the customers problem either. To me, that’s all just a statement by the company.
naveen99 · a year ago
Meta ai is so bad. What did they really do with all those h100’s ?
_cs2017_ · a year ago
Seems like this information came from Quora: https://www.quora.com/Is-1-844-457-1420-really-Facebook-supp.... Screenshot: https://postimg.cc/gallery/2nFq5Cm.

I suspect the helpful SEO guy who posted this answer was trying to get more visibility on Quora, so he answered many questions automatically or semi-automatically without verifying anything.

This is the beginning of the post:

  Ruhul Alom
  Social Media Marketer at Social Media · Author has 2.9K answers and 1M answer views · 6mo
  My dear !
  Yes, 1-844-457-1420 is a valid Facebook support phone number. It is a toll-free number that is available 24/7. You can call this number to get help with a variety of Facebook issues, such as:

  Resetting your password
  Logging in to your account
  Recovering a hacked account
  [...]

ceejayoz · a year ago
The "helpful SEO guy" likely is (or was hired by) the scammer.

StackOverflow gets lots of fake posts like this promoting numbers. Around tax time there's a lot of Quicken ones.

dylan604 · a year ago
See, this is what confuses me to no end. Not once, ever, have I thought of asking an online forum for a phone number. Maybe I'm paranoid enough after all? Also, I'm old, so I actually visit companies' webpages. We've been through enough "don't fall for phishing" training by now, right? You don't trust links, phone numbers, or anything else from any place that is not the official source for that information.
jfengel · a year ago
I see a ton of this on Quora. Not just for Facebook, but for a lot of online banks and others. They have hundreds of accounts doing it.

Quora doesn't even pretend to police this kind of thing. Automated moderation might remove it, but only after it has been reported, and there's far, far too much of it for users to report all of it.

Nobody pays attention to it on Quora, but it's clear that it's out there to poison AI and search engines.

JohnMakin · a year ago
Bit tangential, but what the heck is it with scammers saying "dear" so much? Pretty much every pig butchering or social engineering attempt has had them repeatedly addressing me as "dear."
throwaway48476 · a year ago
It fell out of fashion in Western English-speaking countries decades ago, but not in the third-world English-speaking countries the scammers come from.
djeastm · a year ago
In learning English as a second language, I suspect the textbooks tell them to start all correspondence with "Dear" so as to not appear impolite
IAmNotACellist · a year ago
Ma'am just do one thing for me, go take a coffee or a glass of water and I will take care of each and everything.
bilalq · a year ago
Again and again we see that LLMs are great for creative output and terrible for anything where correctness matters. You should only use it for the latter scenarios when generating answers is slow/hard/expensive, but verification of answers is quick/easy/cheap. Probabilistic and non-deterministic answers have their place, but these companies marketing them in products need to do a better job expressing the limitations.
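The generation/verification asymmetry described above can be sketched in a few lines. Everything here (`llm_suggest`, `OFFICIAL_NUMBERS`) is a hypothetical stand-in, not a real API:

```python
# Hedged sketch of "generation is unreliable, verification is cheap":
# the model may propose anything, but nothing reaches the user unless
# it survives a check against an authoritative table.

OFFICIAL_NUMBERS = {"facebook": None}  # None = no phone support exists

def llm_suggest(question: str) -> str:
    # Placeholder for a model call; may hallucinate a plausible number.
    return "1-844-457-1420"

def answer_support_number(company: str, question: str) -> str:
    suggestion = llm_suggest(question)
    verified = OFFICIAL_NUMBERS.get(company.lower(), "unknown")
    if verified is None:
        return f"{company} offers no phone support; ignore {suggestion!r}."
    if suggestion == verified:
        return suggestion
    return "Could not verify a support number; check the official site."

print(answer_support_number("Facebook", "what is the support number?"))
```

The point is that the model's output never reaches the user directly; it only passes through when the verification step agrees.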
mrweasel · a year ago
It shows an amazing lack of understanding of what an LLM is, even from the people selling and implementing them. You're exactly right that they are terrible when correctness matters, but that should be obvious. If they were 100% correct, the models would have to be much larger, since they'd need to retain all of the original training data.

You can use LLMs for language understanding and for interpreting questions, but they would need access to databases containing authoritative answers, and they should not answer anything for which they don't have an answer.
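A minimal sketch of that grounding idea, assuming a hypothetical FAQ table; the keyword matching below stands in for the model's language-understanding role:

```python
# Only the authoritative table ever supplies answer text; the "model"
# merely maps free text to a known intent key. No record means refusal.

AUTHORITATIVE_FAQ = {
    "phone_support": "We do not offer telephone support. Use facebook.com/help.",
    "password_reset": "Reset your password via the official account recovery page.",
}

def classify_intent(user_question: str) -> str:
    # Stand-in for the LLM's language-understanding role: map free text
    # to a known intent key (a real system would call a model here).
    q = user_question.lower()
    if "phone" in q or "call" in q or "number" in q:
        return "phone_support"
    if "password" in q:
        return "password_reset"
    return "unknown"

def answer(user_question: str) -> str:
    intent = classify_intent(user_question)
    # Crucial step: no database record means a refusal, never a guess.
    return AUTHORITATIVE_FAQ.get(intent, "I don't have verified information on that.")

print(answer("Can I call Facebook support by phone?"))
# -> We do not offer telephone support. Use facebook.com/help.
print(answer("Is 1-844-457-1420 legit?"))
# -> I don't have verified information on that.
```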

noAnswer · a year ago
An older client of mine got scammed by a fake Amazon hotline. The scammers bought an Xbox gift card while connected to his PC via TeamViewer, until he pulled the power cord.

He then called me, and I tried to find the official Amazon hotline on amazon.de. Since I was unable to find it, I had to ask a search engine. The only results were third-party sites. They were from journalistic magazines I recognize (like chip.de), but that's still yet another gamble.

baobabKoodaa · a year ago
When I worked on a customer facing chatbot at my previous employer, we specifically wrote in the prompt "our customer service is not reachable by phone", and we tested that the chatbot was able to use that information and respond appropriately.

But I guess you can't expect a tiny startup like Facebook to invest money into having 1 employee part-time tweaking the prompt of the chatbot to respond appropriately to commonly recurring user questions.
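A rough sketch of the guardrail that comment describes: put the known negative fact directly in the system prompt so the model can repeat it. `call_model` here is a hypothetical stand-in for a real LLM call, not any actual SDK function:

```python
# The guardrail is a plain statement of fact in the system prompt,
# plus testing that the assistant actually uses it when asked.

SYSTEM_PROMPT = (
    "You are a customer support assistant.\n"
    "Fact: our customer service is NOT reachable by phone. "
    "If asked for a phone number, say none exists and point to the help center."
)

def call_model(system: str, user: str) -> str:
    # Placeholder: a real implementation would send both messages to an
    # LLM API; this mock just shows the intended behavior being tested.
    if "phone" in user.lower() or "number" in user.lower():
        return ("We have no phone support line; please use our online "
                "help center instead.")
    return "How can I help you?"

print(call_model(SYSTEM_PROMPT, "What's your support phone number?"))
```

The second half of the commenter's point matters just as much as the prompt line itself: you then test that the chatbot actually answers the common question correctly, rather than assuming the prompt works.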

resoluteteeth · a year ago
Was the chatbot you worked on using an LLM?
baobabKoodaa · a year ago
Yes
croes · a year ago
This will get worse when scammers get good at data manipulation for AIs.

After SEO we'll get AIO.

Same with prompt injection by malware.

jug · a year ago
Yes, AI in its current form is going to be a problem. I'm sure we haven't heard the worst yet. An AI may eventually kill a user.

I believe the heart of the problem is that corporations are riding a hype wave as long as they can, and an AI chat looks like super convincing, next-level stuff thanks to a simple interface that hides the fact that you cannot communicate with it as you would with a human being. You use natural language and it responds with natural language, which makes it not only convenient but also dangerous.

There's money to be made on all this. Meanwhile, hallucinations are an unsolved problem, as is making AI humble enough to realize and tell users when it just doesn't know. The combination of hallucinating, raising convincing arguments, being confidently incorrect, and not knowing the boundaries of your own knowledge base is a terrible one to let loose in officially sanctioned products.