> Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt's case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.
Here's a glimpse into our Kafka-esque AI-powered future: every corporate lawyer is now making sure any customer service request will be gated by a chatbot containing a disclaimer like "Warning: the information you receive may be incorrect and irrelevant." Getting correct and relevant information from a human will be impossible.
If that is such a perfect way to avoid getting sued, why don't they put that on every page of their website and train all of their customer service staff to say that when they talk to customers?
Don't know if you've had the pleasure of interacting with US health insurance, but they do this for coverage and cost estimates all the time, so there is unfortunately precedent in the States.
Well, if the information on the website is too inaccurate to trust then it is also too inaccurate for purposes of contract. The logical next step is to say that all contracts made with chatbots are unenforceable as they lack any trustworthy meeting of the minds.
Because you would choose someone else for things that really matter to you, probably the same person you would have chosen in that kind of scenario anyway.
E.g., for my initial LLC, where I was the single employee, I didn't care about my accountant basically saying in the contract "we'll do our best but can't be liable for anything", whereas now that I have a bigger LLC with many employees I picked (what I perceived as) the best option on the market, for a premium price. They take the liability for their work and have insurance.
I used to work for a company that sold factory automation technology, and had hundreds of manuals for all the products they sold. In the front matter of every manual was a disclaimer that nothing in the manual was warranted to be true. This was automation equipment running things like steel mills, factories, petroleum plants, where failures could result in many deaths and grave financial losses.
Humans aren't perfect but they're reliable enough to be trusted with customer service work. Humans (generally) have a desire and need to keep their jobs and therefore hold themselves to a high enough standard of performance to keep their job.
And maybe employment contracts have language that offloads liability to an employee if they go rogue and start giving away company resources. Chatbots aren't accountable in any way and we don't know yet if their creators ever will be either.
Because it is a bad look, I assume. If I'm interacting with a company that constantly disclaims everything they say as probable bullshit, I'll go find a competitor that at least pretends to try harder.
IANAL, but AFAIK you can't disclaim liability that you actually have. I'd love to hear an actual lawyer who knows (not a know-it-all amateur) declaim on this, but:
A Ferris Wheel operator cannot make you sign a disclaimer that they're not responsible if it collapses and kills you. Or rather, they can, but it will not hold up in court.
Similarly, you can say in your manual, "We're not responsible for anything we say here" but you still are.
I don't know about chatbots, but I'd expect that judges will look for other precedents that are analogous.
Personal anecdote: A few years back I left my car at a dealership for some warranty work that was going to take a few days. It has a soft top and they left it in their gated lot overnight, where it got broken into (slashed the top, ripped open the glove box, stole a cheap machete I got from a white elephant exchange). They claimed that they weren't liable at all since I signed a waiver and should go through my own insurance. After a little push back, they caved and covered it under their insurance like they should have from the beginning. I don't go to that dealership for anything anymore, for that and other reasons.
Much like Tesla's Autopilot, which cannot be responsible for an accident because you're supposed to be hands-on-wheel and alert at all times while using it.
Also, the chatbot proposed a very reasonable solution: book the flight, send us a death certificate when you have it, and you'll get the discount.
That's actually what the policy should be, and it's quite a reasonable error for a human to make too.
Presumably this is why there is a trend in consumer rights legislation towards being explicit that anything a buyer has been told by a seller, or has actively told the seller, before the sale is material to the contract of sale, regardless of the seller's small print that says "this contract of sale we wrote by ourselves and won't negotiate with you, and only this contract means anything". That way they can't promise the world to get the sale and then wash their hands of any resulting commitments two seconds after they get your money. Which seems entirely fair and reasonable to me, whether the promise came from a real person or a chatbot.
Part of disclaimer at irs.gov for their interactive assistant: "Answers do not constitute written advice in response to a specific written request of the taxpayer within the meaning of section 6404(f) of the Internal Revenue Code."
Can’t they link to sources? Like Perplexity.ai or Arc Search does.
I don’t even need it to tell me anything. Links are all that is relevant. Google Analytics on the Web does something similar. You can ask questions in the search box and it takes you to a relevant page.
“Can I get refund on my flight 2 hours in advance?”
“Here is a link to refund policies w.r.t time before flight”
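A minimal sketch of that links-only approach, assuming a hand-built index of policy pages (the URLs, keywords, and scoring below are hypothetical placeholders, not anything Air Canada, Perplexity, or Arc Search actually uses):

    # Map a customer question to the single most relevant policy page,
    # returning only the link rather than a generated answer.
    POLICY_PAGES = {
        "https://example.com/refund-policy": "refund cancellation flight hours before departure",
        "https://example.com/bereavement": "bereavement travel death certificate discount fare",
        "https://example.com/baggage": "baggage allowance checked carry-on fees",
    }

    def best_link(question: str) -> str:
        """Return the URL whose keyword set best overlaps the question."""
        words = set(question.lower().replace("?", "").split())
        return max(POLICY_PAGES,
                   key=lambda url: len(words & set(POLICY_PAGES[url].split())))

    print(best_link("Can I get a refund on my flight 2 hours in advance?"))
    # -> https://example.com/refund-policy

A real implementation would presumably use embedding similarity rather than keyword overlap, but the user-facing contract is the same: the bot never asserts policy, it only points at the page that does.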
Air Canada's chatbot returned sources along with its answer. Quotes from another article[1].
> Air Canada’s argument was that because the chatbot response included a link to a page on the site outlining the policy correctly, Moffat should’ve known better.
[1] https://techhq.com/2024/02/air-canada-refund-for-customer-wh...
Choosing to accept an unnecessary, quantifiable liability and wrapping it in a disclaimer as part of a critical business process is not a recipe for sustained growth or profit.
AC and other corporations would do well to put the brakes on this instead. Identify ways to transfer risk (AI insurance, for example) or avoid risk (scrap the AI bot until the risk is lowered demonstrably).
Savvy advertisers would jump on this opportunity to show just how much AC cares about the customer and eat the loss quietly before it ever went to trial.
> "Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot," Rivers wrote. "It does not explain why it believes that is the case" or "why the webpage titled 'Bereavement travel' was inherently more trustworthy than its chatbot."
This is very reasonable: AI or not, companies can't expect consumers to know which parts of their digital experience are accurate and which aren't.
Forget about digital experiences for a moment. Forget entirely about chatbots.
> Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives
That includes EMPLOYEES. So they tried to argue that their employees can lie to your face to get you to buy a ticket under false pretense and then refuse to honor the promised terms? That's absolutely fucked.
I once booked a flight to meet my then-fiancee in Florida on vacation. Work travel came up unexpectedly, and I booked my work travel from ORD > SFO > TPA.
Before I made that booking, I called the airline specifically to ask them if skipping the ORD > TPA leg of my personal travel was going to cause me problems. The agent confirmed, twice, that it would not. This was a lie.
Buried in the booking terms is language meant to discourage gaming the system by booking travel where you skip certain legs. So if you skip a leg of your booking, the whole thing is invalidated. It's not suuuuper clear, I had to read it a few times, but I guess it kinda said that.
Anyways - my itinerary was invalidated by skipping the first flight, and I got lucky enough that someone canceled at the last minute and I could buy my own seat back on the now-full flight for 4x the original ticket price I paid (which was not refunded!).
I followed up to try and get to the bottom of it, but they were insistent they had no record of my call prior, and just fell back on "It's in the terms, and I do not know why you were told wrong information". Very painful lesson to learn.
I try to make a habit of recording phone conversations with agents now, if it's legal where I'm physically located at the time.
> They tried to argue that their employees can lie to your face to get you to buy a ticket under false pretense and then refuse to honor the promised terms. That's fucked.
Pretty standard behavior for big companies. Airlines and telcos are the utter worst... you have agent A on the phone on Monday, who promises X to be done by Wednesday. Thursday, you call again, get agent B, who says he doesn't see anything, not even a call log, from you, but of course he apologizes and it will be done by Friday. (Experienced customers of telcos will know that the drama will unfold that way for months... until you're fed up and involve lawyers)
I can sort of see it. On the one hand, it's reasonable to hold them accountable when an employee gives you the wrong discount. But if an employee, on their last day at work, decides to offer the next person calling all of the seats on a single flight for just $10, I think we'd all agree that it would be unreasonable to expect the airline to honor that offer.
It's the degree of misinformation that's relevant.
I once ordered a gift for my father for Christmas. The order page indicated that it would arrive on time. When it didn't arrive, I requested a refund. They then pointed to their FAQ page, where they said that orders during the holidays would incur extra processing time, and refused the refund.
I wrote back that unless they issued a refund, I would issue a chargeback. You don't get to present the customer with one thing and then do otherwise because you say so on a page the customer never read when ordering. They eventually caved, but man, the nerve.
This actually sounds like an interesting case to me because the details make a huge legal difference in my mind. (But IANAL, maybe I'm entirely off base here.)
E.g., did they tell you the shipping date after you placed the order, or before? If it was afterward, then it can't have invalidated the contract... you agreed to it without knowing when it would ship. If they told you before, then was it before they knew your shipping address, or after? If it was beforehand, then again, it should've been clear that they wouldn't be able to guarantee it without knowing the address. If it was after they got the address but before you placed the order, then that makes for a strong case, since it was specific to your order and what you agreed to before placing it.
I expect employees to know the correct answers and give them to me. When an employee says something that contradicts other policy pages, I take that as a change to company policy as far as I'm concerned: they represent the company.
If the company doesn't agree with that, then they need to show the employee was trained on company policy and was disciplined for failing to follow it (on a first offense maybe just a warning, but it needs to be a clear step on the path to firing the employee). Even then they should stand by their employee if the thing said was reasonable (a $1 million refund may be unreasonable, but refunding the purchase price is reasonable).
This line of argument is crazy and infuriating. "Air Canada essentially argued, 'the chatbot is a separate legal entity that is responsible for its own actions,' a court order said." Do they expect people to sue the chatbot? Are they also implying that people have to sue individual agents if they cause a problem?
> 27. Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.
> https://www.canlii.org/en/bc/bccrt/doc/2024/2024bccrt149/202...
Real legal comedy. Since this was in small claims court maybe it was an amateur on Air Canada's side?
If they could reasonably expect to be able to hire people who would agree to accept all liability incurred during their work for the company, they absolutely would.
Same with chatbots. Even better, because once it's "trained", you don't have to pay it.
There have been a few instances in the last few years of expecting digital entities to shoulder the entirety of legal liability; DAOs are another example of this in the crypto space.
I don't like Ars Technica because they break reader mode and load articles chunk by chunk. I consider it hostile towards the user, and I'm glad this one is also on Wired, since Wired works much better.
What browser? Just tried it out on Firefox for Android (version 122.1.0, with uBlock Origin enabled but JS still allowed on ars) and for the link above, I see the whole article after immediately switching to reader mode.
In early days of computerization, companies tried to dodge liability due to "computer errors". That didn't work, and I hope the "It was the AI, not us" never gets allowed either.
The resolution is an amazingly clear piece of legal writing that explains the thought process involved in the decision and then awards the damages. I might end up using this pattern for writing out cause and effect.
Good. If you use a tool that does not give the correct answers, you should be held liable for the mistake. The takeaway is: you better vet your tool. If the amount of money you lose from mistakes with the tool is less than the money you saved using it, then you make money; if not, you may want to reconsider that cost-saving measure.
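A back-of-the-envelope version of that trade-off (every number below is a made-up illustration, except the $650.88, which is roughly the damages awarded in the Moffatt decision):

    # Does the bot's expected annual error cost stay below what it saves?
    chats_per_year = 100_000    # assumed chat volume
    error_rate = 0.02           # fraction of chats with a costly wrong answer (assumed)
    claim_rate = 0.10           # fraction of those errors customers pursue (assumed)
    avg_payout = 650.88         # CAD, damages in the Moffatt decision
    staffing_saved = 300_000    # assumed annual cost of the humans replaced

    expected_loss = chats_per_year * error_rate * claim_rate * avg_payout
    print(f"expected annual loss: ${expected_loss:,.2f}")   # $130,176.00
    print("keep the bot" if expected_loss < staffing_saved else "reconsider")

Of course this prices in only the direct payouts, not the reputational cost of arguing in court that your chatbot is a separate legal entity.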
I'm glad to see that cases are starting to be decided about the liability of using AI generated content. This is something the general public should not need to second-guess.
Honestly LLMs aren’t ready for customer service. If I’m talking to a company I need to have a high degree of accuracy. LLMs are less accurate than trained humans.
This is my personal perception, but I think it's important that there is a clear definition of liability so that companies are able to make their own determinations of what is ready and what isn't.
Few front-line agents have deep knowledge about their company's products or services. They trace their finger through some branches on a flowchart then dictate from a knowledgebase.
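That workflow is easy to caricature in code. A minimal sketch, with invented branches and article text (nothing here is from a real airline's playbook):

    # A front-line agent's "flowchart plus knowledgebase": follow yes/no
    # branches until a node points at a canned article, then read it out.
    FLOWCHART = {
        "start":  {"yes": "refund", "no": "kb:general"},             # "Is this about a refund?"
        "refund": {"yes": "kb:full_refund", "no": "kb:fare_rules"},  # "Did we cancel the flight?"
    }
    KNOWLEDGEBASE = {
        "kb:general":     "Please see our FAQ for more information.",
        "kb:full_refund": "We cancelled: offer a full refund to the original payment method.",
        "kb:fare_rules":  "Voluntary change: read out the fare rules for this ticket class.",
    }

    def walk(node: str, answers: list) -> str:
        """Follow the caller's yes/no answers to a knowledgebase article."""
        for ans in answers:
            node = FLOWCHART[node]["yes" if ans else "no"]
            if node.startswith("kb:"):
                return KNOWLEDGEBASE[node]
        return "Escalate to a supervisor."

    print(walk("start", [True, False]))  # -> the fare-rules script

The LLM pitch is to replace the hand-built branches with free text, which is exactly where the accuracy guarantees disappear.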
And disclaimers are used in lots of contexts too.
I still remember when Microsoft updated the Xbox 360 TOS to force arbitration the day after it was deemed legal in a completely separate case.
Rest assured there is an incoming flood of TOS updates.
Now, if you are talking about 'Full Self Driving' - then yea, there's a waiver and a point there.
The chatbot's error cost them what, $200? And it probably replaced a $100,000/year employee?
If you read them, there is often stuff like that; the most flagrant one I read said "everything above should be considered apocryphal".
It's not the consumer's fault that the AI hallucinated a result (as they are known to do with high frequency).
"This story originally appeared on Ars Technica."
Give the clicks to the original article:
https://arstechnica.com/tech-policy/2024/02/air-canada-must-...
It worked in the British Post Office Scandal: https://en.m.wikipedia.org/wiki/British_Post_Office_scandal
And AFAICT "the computer did it" wasn't the argument, it was "the computer did it so it must be correct because the experts said so".
With Air Canada, the question is whether or not a chat bot can be treated as a company representative that makes binding commitments.
With the British Post Office, the issue is whether or not a software system is inscrutable during legal proceedings.
After all, we're still not 100% sure how LLMs make their decisions in what they string together as output, so the company's not _technically_ lying.
Chatbot: <waffle>
Me: Please put me through to a person that can articulate $COMPANY's legal position. This conversation can serve no more purpose.