Readit News
isp · 3 years ago
I've been following this from legal circles

Original court documents: https://www.courtlistener.com/docket/63107798/mata-v-avianca...

The lawyer didn't only cite "bogus" cases; when challenged, he attached entire "bogus" case contents hallucinated by ChatGPT (attachments to #29 on the link above)

In the second #32 affidavit, there are screenshots of ChatGPT itself! https://storage.courtlistener.com/recap/gov.uscourts.nysd.57...

A legendary example for the legal risks of hallucination in LLMs https://en.wikipedia.org/wiki/Hallucination_(artificial_inte...

asmithmd1 · 3 years ago
Thanks for the great context. The lawyer should be disbarred. He doubled down when he was caught, and then blamed ChatGPT. What do you bet he was trying to settle really quickly to make this all go away?

Here is the direct link to the ChatGPT hallucination the lawyer filed in response to the judge's order to produce the actual text of the case: https://www.courtlistener.com/docket/63107798/29/1/mata-v-av...

BaseballPhysics · 3 years ago
Did he "double down" or did he genuinely not understand that ChatGPT was making stuff up the whole time?
isp · 3 years ago
The above link wasn't the only hallucination(!)

The lawyer kept digging the hole deeper and deeper, and (as a non-expert) I agree that it seems that the lawyer is at serious risk of being disbarred.

Interesting documents are from #24 onwards:

- #24 (https://storage.courtlistener.com/recap/gov.uscourts.nysd.57...): "unable to locate most of the case law cited in Plaintiff’s Affirmation in Opposition, and the few cases which the undersigned has been able to locate do not stand for the propositions for which they are cited"

- #25 (https://storage.courtlistener.com/recap/gov.uscourts.nysd.57...) & #27 (https://storage.courtlistener.com/recap/gov.uscourts.nysd.57...): order to affix copies of cited cases

- #29: attached the cases - later revealed to be a mixture: some made up (bogus), others real but irrelevant

- #30 (https://storage.courtlistener.com/recap/gov.uscourts.nysd.57...): "the authenticity of many of these cases is questionable" - polite legal speak for bogus. And "these cases do exist but submits that they address issues entirely unrelated to the principles for which Plaintiff cited them" - irrelevant. And a cutting aside that "(The Ehrlich and In re Air Crash Disaster cases are the only ones submitted in a conventional format.)" - drawing attention to the smoking gun for the bogus cases

- #31 (https://storage.courtlistener.com/recap/gov.uscourts.nysd.57...): an unhappy federal judge: "The Court is presented with an unprecedented circumstance. A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases. ... Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations" ---- this PDF is worth reading in full, it is only 3 pages & excoriating

- #32 affidavits, including the ChatGPT screenshot

- #33 (https://storage.courtlistener.com/recap/gov.uscourts.nysd.57...): an even more unhappy judge: invitation for the lawyer & law firm to explain why they "ought not be sanctioned"

_-____-_ · 3 years ago
That the screenshots are from the mobile website for some reason makes this look even worse.
spondylosaurus · 3 years ago
It does. It's like this lawyer is charging you his hourly rate just to prompt ChatGPT while he's on the toilet.
bandyaboot · 3 years ago
I’m quite amused that ChatGPT hallucinated a frivolous lawsuit brought by someone who was denied an exit row seat.
smsm42 · 3 years ago
I wouldn't be surprised if it were based on a number of real lawsuits of the same exact nature.
Obscurity4340 · 3 years ago
Wouldn't "confabulation" be a better term for this?

shalalalaw · 3 years ago
We're a very tech-forward law firm, and we're bullish on AI. The issue is that lawyers are traditionally tech illiterate, and they treat gen AI like a search engine that puts results in narrative form. Realistically, I think AI-generated motions and contracts are the future, and this instance will be pointed to by every tech-averse lawyer trying to stymie progress in the field. These lawyers deserve their sanctions for being so reckless with things they don't understand, but rather than take away the lesson that lawyers need to learn tech, lawyers will say tech is bad. I almost wish this were non-news so it wouldn't further push the legal industry into the past, but those clients were wronged, and I guess people need to know what to beware of when hiring a lawyer.

Personally, we get really good results from using AI, it's already present in all of our processes, but we tell it what to generate, rather than rely on it to know better.

codeflo · 3 years ago
It's not just lawyers who think that ChatGPT is a search engine. I've observed this many times in my vicinity, people from all walks of life think that Star Trek is here and computers now respond accurately to natural language queries. For non-techies, "just asking the computer" is so much more convenient than translating your question into traditional search queries.

So I guarantee you that stuff like this is happening daily across all industries. Depending on the profession, people will lose money or get hurt as a result of someone blindly trusting this technology. I can't prove it, but statistically, that's basically a certainty.

In my opinion, you can't overstate the importance of articles like this, which point out the limitations and highlight the dangers. I'm also against banning it. But lay people need to be informed about what ChatGPT is and is not, and OpenAI won't do it because they want to ride the hype train.

gpm · 3 years ago
In local-to-me politics we have a report on changing the admissions process for specialty programs in the public school system - which had a bunch of fake citations and people suspect was written with the "aid" of ChatGPT.

https://www.thestar.com/news/gta/2023/05/26/tdsb-fires-resea...

Xenoamorphous · 3 years ago
> It's not just lawyers who think that ChatGPT is a search engine.

Let’s not forget that Google often puts incorrect information in their snippets/factboxes or whatever they call them.

cratermoon · 3 years ago
Computer, what's the formula for transparent aluminum? Seriously, I got ChatGPT to spit out a scientific-seeming paper on the formula and manufacturing process for transparent aluminum. It did note that there's a real thing, aluminum oxynitride, which is the closest thing we have to the Star Trek material. It even wrote the following abstract, based on my prompt:

> This scientific description provides an overview of the formula and manufacturing process of transparent aluminum, a material used in applications where both structural strength and transparency are required. Transparent aluminum finds extensive use in diverse fields, including public aquaria, where it allows for the display of large marine organisms. The description outlines the chemical composition, key properties, and the manufacturing steps involved in creating transparent aluminum.

Whether the six-step manufacturing process it came up with is correct or not, I haven't the expertise to say.

fennecfoxy · 3 years ago
Idk it's pretty reliable even now if you chuck a vector db of "knowledge" in and inform the GPT in the overall prompt that it must not go outside the bounds of the knowledge provided with the user's query, and that if no hard knowledge is provided it should respond along the lines of "I don't know" (or search the internet and parse results for them).

I imagine at some point this behaviour will be built in. People are treating GPTs like they're knowledge bases, but they're not. The fact that they can answer simple or complex questions correctly is only a byproduct of the transformer being taught how to string language together.

It's like an artist who learns to draw by making paintings of raccoons. You can't then ask them "what do raccoons like to eat?" or "what foods are poisonous to a raccoon?" just because they learnt to draw by painting raccoons. This is how people are treating GPTs atm. They believe in it because they ask the artist "what colour fur do raccoons have?" and because the artist can answer that correctly, they assume all other answers are factual.
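The grounding pattern described above can be sketched as plain prompt assembly. This is a hypothetical illustration, not a real vector DB or API integration; the function and field names are invented:

```python
def build_grounded_prompt(question, retrieved_chunks):
    """Assemble a prompt that confines the model to supplied context.

    The system text forbids answers outside the retrieved "knowledge"
    and asks for an explicit "I don't know" when nothing relevant was
    retrieved -- the behaviour the comment above describes.
    """
    system = (
        "Answer ONLY from the context below. "
        "If the context does not contain the answer, reply: I don't know."
    )
    if retrieved_chunks:
        context = "\n\n".join(retrieved_chunks)
    else:
        context = "(no context retrieved)"
    return f"{system}\n\nContext:\n{context}\n\nQuestion: {question}"
```

The actual retrieval step (embedding the query and searching a vector store) would happen before this function is called; here it is abstracted into `retrieved_chunks`.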

Zafira · 3 years ago
> [L]awyers are traditionally tech illiterate

This is the main reason I think disbarment as the punishment in this specific instance may not be fair. There are people who are unaware of the limitations of these systems and the risk of these confabulations occurring.

While I don’t think disbarment is inappropriate, I would rather see the New York State Bar use this to require some better understanding of these emergent technologies or even better have all the State Bars start discussing some standardized training about this because it’s easy to see a person trying to treat this as LexisNexis.

ryandrake · 3 years ago
If your doctor asked ChatGPT to tell him how to remove your appendix, followed the directions, and subsequently removed a kidney instead, would you want him to lose his medical license?
NoMoreNicksLeft · 3 years ago
Is disbarment about fairness? Is the primary goal of such proceedings to rehabilitate and apply a sort of justice?

Certainly, civil and criminal courts have those as their raison d'être. But I thought licensing boards had an entirely different purpose. If a surgeon was a good guy who genuinely wanted to help people and who didn't engage in any sort of malfeasance... but even so, he just kept slicing aortas open accidentally through incompetence, should the board say "aw shucks, he's had some bad luck but he really wants to heal people"?

This is the same. The court system is replete with circumstances where a client does not get a second chance at pursuing justice. A lawyer that fucks that up, even if doing so in good faith, leaves them with zero remedies. This might have been a bullshit "Slippin' Jimmy" case this time, but the stakes could've easily been higher.

I don't think I want to live in a world where fairness plays any part in the decision by the bar on this matter.

nocoiner · 3 years ago
Out of curiosity, why do you think AI generated contracts are the future? Do you draw a distinction between contracts generated by AI and, say, contracts “generated” by first-year associates (i.e., using precedent to generate a first draft appropriate for the deal that’s then iterated by more experienced lawyers)?

Also, how is this incident going to push the legal industry further into the past? Do you think lawyers are going to, like, stop using email because of this?

Der_Einzige · 3 years ago
... retrieval augmented search is here today and is available in ChatGPT with plugins or integrations with vectorDB systems. A lot of AI systems are "search engines that give you narrative outputs"
drumhead · 3 years ago
You sound like an AI generated post.
ouid · 3 years ago
>ai generated contracts are the future.

Man, why do you guys get paid so much? Contracts need semantic correctness, i.e. the verification of the logical consequences of natural language. That is an AGI-complete problem. At the point this exists, humans are basically obsolete as workers, and you don't have to worry about your law firm keeping up anymore.

hnfong · 3 years ago
Let me get this straight, you're saying if AI can be a good lawyer AI can do anything humans can do?

So you're trying to say that lawyering is the hardest job in the world and only the smartest people can do it...?

vidarh · 3 years ago
Contracts have lots of standard clauses or near-standard variations of such, and as a consequence plenty of "dumb" template-based generators already exist; you may come across contracts where only the occasional line is manually written.

At least one company already integrates LLMs into theirs (or is about to; I'm not sure if it's in production yet) to do effectively smarter completions.

It won't be fully automated any time soon, but it will certainly eat into a lot of the simpler work.
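Those "dumb" template-based generators can be as simple as standard boilerplate with substitutable fields. A minimal sketch, with clause text and field names invented for illustration:

```python
from string import Template

# A deliberately "dumb" template-based clause generator, as described
# above: standard boilerplate with a few substitutable fields.
GOVERNING_LAW = Template(
    "This Agreement shall be governed by the laws of $jurisdiction, "
    "and the parties submit to the exclusive jurisdiction of its courts."
)

clause = GOVERNING_LAW.substitute(jurisdiction="the State of New York")
```

An LLM-assisted version would replace the fixed template text with smarter completions while (ideally) keeping a human in the loop for review.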

hartator · 3 years ago
> Judge Castel said in an order that he had been presented with “an unprecedented circumstance,” a legal submission replete with “bogus judicial decisions, with bogus quotes and bogus internal citations.” He ordered a hearing for June 8 to discuss potential sanctions.

Disbarment should be a no-brainer and a minimum.

Just “Southern China Airlines” should have raised eyebrows. This lawyer has shown obvious disrespect to the court, the court time, and to his client.

paul_f · 3 years ago
The lawyer should lose his license. Imagine they had turned in this brief and follow up and ChatGPT had not been involved. Instant disbarment. ChatGPT is not an excuse for a professional to produce nonsense in front of the court. Bye bye.
psychphysic · 3 years ago
Disbarment seems excessive to me.

I don't think this is indicative of a wider issue, or likely to be repeated substantially.

In terms of severity it's painfully foolish but then again ChatGPT is a totally new tool and a lot of people will be caught off guard.

I am stunned the lawyer didn't at least look up the case notes or even prepare a pocket brief if he believed they were real but hard to find.

eastbound · 3 years ago
I suspect a lot of lawyers should be disbarred, GPT aside. They don't always care about their clients. I've often been more expert than my lawyers in France, and I've seen at least one guy go to prison where the lawyer, publicly shamed by a dozen YouTubers for not pulling the correct procedural levers, offered the excuse that he "only had an hour to review the case".

What are we paying for, if the guy spends 4 months in prison before the faulty judgment is overturned, and the lawyer admits he didn't even work on the file?

When you hire a lawyer, you have no guarantee he will work for you.

LelouBil · 3 years ago
Do you have a name or links for the incident you mention?
raverbashing · 3 years ago
The caption of the made-up case actually uses "China Southern Airlines", which is the airline's correct name. But it is misquoted in Document 31 (Order to Show Cause) as China South Airlines
not_a_shill · 3 years ago
If disbarment is the minimum, what's the maximum? I'm unfamiliar with other cases in which lesser offenses have led to disbarment.
bombcar · 3 years ago
I wouldn’t be surprised if prison is on the table. Intentionally filing bullshit to the court can get you some penalties indeed.
andrewmg · 3 years ago
Some context: any litigator will have access to Westlaw or Lexis-Nexis to look up and verify cited authorities like cases. It’s considered bad practice, at best, to cite authorities that one has not reviewed—for example, case citations drawn from a treatise or article.

As a practical matter, it is inconceivable to me that the attorney here, at least upon being ordered by the court to provide copies of the cases he cited, did not look them up in West or Lexis and see that they don’t exist. That he appears to have pressed on at that point, and asked ChatGPT to generate them—which would take some pointed prompting—was just digging his own hole. That, more than anything, may warrant professional discipline.

nightowl_games · 3 years ago
The precision with which you wrote this is refreshing. I'm guessing you're a lawyer?

There are interesting parallels between lawyering and programming.

I'm often surprised at how poorly many programmers write English.

I think I'd have been a good lawyer.

jaclaz · 3 years ago
Another thread where the poster linked to the relevant court PDFs:

https://news.ycombinator.com/item?id=36092914

Particularly relevant is the affidavit in which the lawyer tries to explain to the Court what happened:

https://storage.courtlistener.com/recap/gov.uscourts.nysd.57...

lionkor · 3 years ago
Hahaha that second link is brilliant.

"Hm, maybe I should double check ChatGPTs output... Hey, ChatGPT, does your output make sense?" - "Yeah, my output definitely makes sense". "Are you sure?" - "Yeah".

Well, then.

namaria · 3 years ago
>Mr. Schwartz said that he had never used ChatGPT, and “therefore was unaware of the possibility that its content could be false.”

Why did he think it was true, if he had never used it before then?

"I found it online" levels of competence here.

montroser · 3 years ago
> Mr. Schwartz said that he had never used ChatGPT, and “therefore was unaware of the possibility that its content could be false.”

> He had, he told Judge Castel, even asked the program to verify that the cases were real.

These two ideas are incompatible with each other. You can't claim that you didn't know to question the source, and then also that you questioned the source, even if it was done in the least effective possible manner.

NeoTar · 3 years ago
Perhaps logically incompatible, but within legal proceedings you are allowed to use what is called alternative pleading, or alternative defence. https://en.m.wikipedia.org/wiki/Alternative_pleading

To quote Richard "Racehorse" Haynes in the Wikipedia article:

"Say you sue me because you say my dog bit you. Well, now this is my defense: My dog doesn't bite. And second, in the alternative, my dog was tied up that night. And third, I don't believe you really got bit. And fourth, I don't have a dog."

So, here the defence is:

* I didn't believe the content could be false,

* Even if it is legally determined that I (beyond a reasonable doubt) knew the content could be false, I asked the program to verify that the cases were real.

There are more details in the Wikipedia article, but I believe this is legally valuable because a defendant is required to formally enter a defence and cannot easily change it later.

piaste · 3 years ago
While I have no reason to doubt that it's valid to argue "A || (!A && B)", in this particular case B => !A.

When the lawyer brought evidence that he had tried to verify the information (B), that evidence itself automatically disproves his plea that he didn't think it could be false (A).

(Note that this didn't necessarily have to be the case: for example, if he had claimed that it was his assistant who asked the verification question, rather than he himself.)

So, unless he made the second plea and brought the evidence at a later time, shouldn't he have just skipped A altogether and preserved his credibility? (And possibly avoided a perjury charge? IDK if in an American court both claims would be sworn statements.)

codeflo · 3 years ago
I don't even think it's logically incompatible. Isn't this just ((A → B) ∧ (¬A → B)) → B?
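That formula is indeed a tautology (a form of constructive dilemma), which a two-variable truth table can confirm; a quick sketch:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is equivalent to (not p) or q.
    return (not p) or q

# Check ((A -> B) and (not A -> B)) -> B for every truth assignment,
# i.e. confirm the formula above is a tautology.
tautology = all(
    implies(implies(a, b) and implies(not a, b), b)
    for a, b in product([False, True], repeat=2)
)
```

With only four assignments to check, this is trivially exhaustive; `tautology` comes out `True`.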
julienchastang · 3 years ago
Related article: "End of the Billable Hour? Law Firms Get On Board With Artificial Intelligence, Lawyers start to use GPT-4 technology to do legal research, draft documents and analyze contracts" [0]

Critical thinking skills are more important than ever in the age of AI. Used correctly, ChatGPT(4) can sometimes be a huge time saver, but you cannot believe all the bullshit it serves you.

[0] https://www.wsj.com/articles/end-of-the-billable-hour-law-fi...

simonw · 3 years ago
I tried pulling together a full timeline from the various documents, it's a fascinating story: https://simonwillison.net/2023/May/27/lawyer-chatgpt/