Readit News
alwa · 6 months ago
> Mostafavi told CalMatters he wrote the appeal and then used ChatGPT to try and improve it. He said that he didn’t know it would add case citations or make things up. He thinks it is unrealistic to expect lawyers to stop using AI. [...] “In the meantime we’re going to have some victims, we’re going to have some damages, we’re going to have some wreckages,” he said. “I hope this example will help others not fall into the hole. I’m paying the price.”

Wow. Seems like he really took the lesson to heart. We're so helpless in the face of LLM technology that "having some victims, having some damages" (rather than reading what you submit to the court) is the inevitable price of progress in the legal profession?

21 of 23 citations are fake, and so is whatever reasoning they purport to support, and that's casually "adding some citations"? I sometimes use tools that do things I don't expect, but usually I'd like to think I notice when I check their work... if there were 2 citations when I started, and 23 when I finished, I'd like to think I'd notice.

OptionOfT · 6 months ago
> He thinks it is unrealistic to expect lawyers to stop using AI.

I disagree. It worked until now, and using AI is clearly doing more harm than good, especially in situations where you hire an expert to help you.

Remember, a lawyer is someone who actually has passed a bar exam, and with that there is an understanding that whatever they sign, they validate as correct. The fact that they used AI here actually isn't the worst. The fact that they blindly signed it afterwards is a sign that they are unfit to be a lawyer.

We can make the argument that this might be pushed from upper management, but remember, the license is personal. They can't hide behind such a mandate.

It's the same discussions I'm having with colleagues about using AI to generate code, or to review code. At a certain moment there is pressure to go faster, and stuff gets committed without a human touching it.

Until that software ends up on your glucose pump, or the system used to radiate your brain tumor.

sonofhans · 6 months ago
> The fact that they blindly signed it afterwards is a sign that they are unfit to be a lawyer.

Yes, this is the crux of it. More than any other thing you pay a lawyer to get the details right.

beambot · 6 months ago
I disagree with your disagreement. The legal profession is not "working until now" unless you're quite wealthy and can afford good representation. AI legal assistants will be incredibly valuable for a large swath of the population -- even if the outputs shouldn't be used to directly write briefs. The "right" answer is to build systems to properly validate citations and arguments.
alach11 · 6 months ago
> using AI is clearly doing more harm than good

How do you know this? Wouldn't we expect the benefits of AI in the legal industry to be way less likely to make the front page of HN?

LiquidSky · 6 months ago
His response is absurd. This is no different than having a human associate draft a document for a partner and then the partner shrugging their shoulders when it's riddled with errors because they didn't bother to check it themselves. You're responsible for what goes out in your name as an attorney representing a client. That's literally your job. What AI can help with is precisely this first level of drafting, but that's why it's even more important to have a human supervising and checking the process.
bawolff · 6 months ago
> He thinks it is unrealistic to expect lawyers to stop using AI

Sure. It's also unrealistic to expect nobody to murder anyone. That's why we invented jail.

FireBeyond · 6 months ago
> We're so helpless in the face of LLM technology that "having some victims, having some damages" (rather than reading what you submit to the court) is the inevitable price of progress in the legal profession?

Same with FSD at Tesla, there's many people who think that accidents and fatalities are "worth it" to get to the goal. And who cares if you, personally, disagree? They're comfortable that the risk to you of being hit by a Tesla that failed to react to you is an acceptable price of "the mission"/goal.

neilv · 6 months ago
I came here to quote that exact part of the article.

My guess is that he probably doesn't believe that, but that he's smart enough to try to spin it that way.

Since his career should be taking at least a small hit right now, not only for getting caught using ChatGPT, but also for submitting blatant fabrications to the court.

The court and professional groups will be understanding, and want to help him and others improve, but some clients/employers will be less understanding.

tdeck · 6 months ago
The thing is, this statement is doing as much harm to his reputation as the original act, if not more. Who would hire this lawyer after he said something like that?
yieldcrv · 6 months ago
> 21 of 23 citations are fake

This was from the model available in June 2023

I've taken this hallucination issue to heart since the first time this headline occurred, but if you just started with leading LLMs today, you wouldn't have this issue. I'd say it would be down to like 1 out of 23 at this point.

Definitely keep verifying especially because the models available to you keep changing if you use cloud services, but this September 2025 is not June 2023 anymore and the conversation needs to be much more nuanced.

tdeck · 6 months ago
Frankly I'd argue that something that produces 1 in 23 fake citations may be worse than producing 21 fake citations. It's more likely to make people complacent and more likely to go undetected.

People have more car crashes in areas they know well because they stop paying attention. The same principle applies here.

elpakal · 6 months ago
A lot of both defeatist and overly optimistic pro-AI comments in here. Having built legaltech and interfaced heavily with attorneys over the last 5 years of my career, I will say that there is a wide spectrum of experience, ethics and intelligence in the field. Blindly copying output from anything and submitting it to the court seems like a mind-boggling move; it doesn't really make a difference if it was AI or Google or Bing or Thomson Reuters. This attorney is not representative of the greater population and probably had it coming imho.

There is definitely benefit to using language models correctly in law, but they are different than most users in that their professional reputation is at stake wrt the output being created and the risk of adoption is always going to be greater for them.

jerf · 6 months ago
Do lawyers and judges not by now have software that turns all these citations into hyperlinks into some relevant database? Software that would also flag a citation as not having a referent? Surely this exists and is expensive but in wide usage...?

It's not a large step after that to verify that a quote actually exists in the cited document, though I can see how perhaps that was not something that was necessary up to this point.

I have to think the window on this being even slightly viable is going to close quickly. When you ship something to a judge and their copy ends up festooned with "NO REFERENT" symbols it's not going to go well for you.
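The "NO REFERENT" check described above can be sketched in a few lines. This is only an illustrative sketch, not a real product: the reporter list is deliberately tiny, and `KNOWN_CASES` stands in for a real citation database lookup (e.g. CourtListener or a commercial service), whose API is not shown here.

```python
import re

# A handful of common U.S. reporter abbreviations; a real tool (e.g. the
# eyecite library) handles hundreds of reporters and far messier formats.
REPORTERS = r"(?:U\.S\.|S\. Ct\.|F\.[23]d|F\. Supp\. [23]d|Cal\.App\.[45]th)"
CITATION_RE = re.compile(rf"\b\d{{1,4}} {REPORTERS} \d{{1,4}}\b")

# Stand-in for a real citation database query.
KNOWN_CASES = {
    "550 U.S. 544",   # Bell Atlantic Corp. v. Twombly
    "556 U.S. 662",   # Ashcroft v. Iqbal
}

def flag_citations(brief_text):
    """Return (citation, found) pairs for every citation-shaped string in
    the filing; anything not found gets flagged for human review."""
    return [(c, c in KNOWN_CASES) for c in CITATION_RE.findall(brief_text)]

brief = ("Under 550 U.S. 544 a complaint must be plausible; "
         "see also 999 F.3d 123.")
for citation, ok in flag_citations(brief):
    print(citation, "OK" if ok else "NO REFERENT")
```

Verifying that a quoted passage actually appears in the cited opinion is the same shape of check one level down: fetch the opinion text for each resolved citation and do a fuzzy substring match against the quote.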

KittenInABox · 6 months ago
Part of the issue is that there's already a lot of manual entry, and a lot of small/regional courts with a lot of jurisdiction-specific requirements. Unification of standards is a long way away; I mean, tech hasn't even done it.
alansaber · 6 months ago
Lots of hallucination-verification tools exist, but legal tech tools usually charge an arm and a leg. This bloke probably used Gemini with the prompt "create law"
freejazz · 6 months ago
>Do lawyers and judges not by now have software that turns all these citations into hyperlinks into some relevant database? Software that would also flag a citation as not having a referent? Surely this exists and is expensive but in wide usage...?

Why would I pay for software to do what I could do with my own eyes in 2 minutes?

Dylan16807 · 6 months ago
Two minutes per what? Two minutes per citation is a huge time waste. Two minutes per filing full of citations is unrealistically fast and also adds up.

rdtsc · 6 months ago
Wonder what the State Bar of CA would have to say about this:

https://apps.calbar.ca.gov/attorney/Licensee/Detail/282372

Doesn't seem like there's any kind of disciplinary action. You can just make up stuff and, if you're caught, pay some pocket change (in lawyer-money territory) and move on.

patrickhogan1 · 6 months ago
Fines above $1k must be reported to the state bar in CA. So they will know about this one.
us0r · 6 months ago
In an ongoing OnlyFans case, the plaintiffs' attorneys made a filing with, I believe, entirely fabricated case references:

https://www.courtlistener.com/docket/68990373/nz-v-fenix-int...

narrator · 6 months ago
The hallucinations in legal briefs get really out of hand when the attorney wants to make an argument not supported by the case law. The LLM wants to do a good job defending the case, so it invents the legal precedent, because otherwise it'd be impossible to make the argument credibly. This invites a Rule 11 challenge from the other side, where you claim the lawyer is so full of crap with his claim that he deserves sanction for not understanding the law and wasting everyone's time.

What's interesting about the rules of civil procedure is that they have been built up over centuries to prevent all kinds of abuse by sneaky, clever, unscrupulous litigants. Most systems are not hardened against bad-faith actors the way the legal system is, and AI just thinks it can pathologically lie its way through because most people trust somebody who sounds authoritative.

yieldcrv · 6 months ago
I just read the initial complaint, what do you think about that case? Is there a community that wants disclosure of "chatter's" existence? It seems to be going the other way with AI personalities doing the chatting
abeppu · 6 months ago
> In recent weeks, she’s documented three instances of judges citing fake legal authority in their decisions.

So lawyers use it, judges use it ... have we seen evidence of lawmakers submitting AI-generated language in bills or amendments?

at-fates-hands · 6 months ago
>> lawmakers submitting AI-generated language in bills or amendments?

Most people would be shocked to find the majority of bills are simply copycat bills or written by lobbyists.

https://goodparty.org/blog/article/who-actually-writes-congr...

Bank lobbyists, for example, authored 70 of the 85 lines in a Congressional bill that was designed to lessen banking regulations – essentially making their industry wealthier and more powerful. Our elected officials are quite literally, with no exaggeration, letting massive special interests write in the actual language of these bills in order to further enrich and empower themselves… because they are too lazy or disinterested in the actual work of lawmaking themselves.

A two-year investigation by USA Today, The Arizona Republic, and The Center for Public Integrity found widespread use of "copycat bills" at both federal and state levels. Copycat legislation is the phenomenon in which lawmakers introduce bills that contain identical language and phrases to "model bills" that are drafted by corporations and special interests for lobbying purposes. In other words, these lawmakers essentially copy-pasted the exact words that lobbyists sent them.

From 2011 to 2019, this investigation found over 10,000 copycat bills that lifted entire passages directly from well-funded lobbyists and corporations. 2,100 of these copycat bills were signed into law all across the country. And more often than not, these copycat bills contain provisions specifically designed to enrich or protect the corporations that wrote the initial drafts.

milesvp · 6 months ago
I know a lawyer who almost took a job in state government where one of the primary duties was to make sure that the punctuation in the bills going through the state legislature was correct and accurate. For her, part of the appeal of the job was that it would allow her to subtly alter the meaning of a bill being presented. Apparently it is a non-trivial skill to be able to determine how judges are likely to rule on cases due to, say, the presence or absence of an Oxford comma.

There was an entire team dedicated to this work, and the hours were insane when the legislature was in session. She ended up not taking the job because of the downsides associated with moving to the capital, so I don't know more about it. I'd be curious how much AI has changed what that team does now. Certainly, they still would want to meticulously look at every character, but it is certainly possible that AI has gotten better at analyzing the "average" ruling, which might make the job a little easier. What I know about law, though, is that it's often defined by the non-average ruling, that there's sort of a fractal nature to it, and it's the unusual cases that often forever shape future interpretations of a given law. Unusual scenarios are something that LLMs generally struggle with, and add to that the need to creatively come up with scenarios that might further distort the bill, and I'd expect LLMs to be patently bad at creating laws. So while I have no doubt that legislators (and lobbyists) are using AI to draft bills, I am positive that there is still a lot of work that goes into refining bills, and we're probably not seeing straight vibe drafting.

teraflop · 6 months ago
Here's a fairly recent example of a $5M lawsuit that hinged on the interpretation of an Oxford comma in a Maine law about overtime pay: https://www.fedbar.org/wp-content/uploads/2018/10/Commentary...
unmole · 6 months ago
> have we seen evidence of lawmakers submitting AI-generated language in bills or amendments?

MPs are definitely using AI to write their speeches in parliament: https://www.telegraph.co.uk/business/2025/09/11/chatgpt-trig...

dylan604 · 6 months ago
I mean, we've seen laws that were written by lobbyists with zero changes. Does it matter if it was AI generated or not at that point? The congress critters are not rewriting what they've been told to do if they've even read it after being told what to do.
lordnacho · 6 months ago
This is why there are certain jobs AI can never take: we are wired for humans to be responsible. Even though a pilot can do a lot of his work via autopilot, we need a human to be accountable. For the pilot, that means sitting in the plane. But there are plenty of other jobs, mostly high-earning experts, where we need to be able to place responsibility on a person. For those jobs, the upside is that the tool will still be available for the expert to use and capture the benefits from.

This lawyer fabricating his filings is going to be among the first in a bunch of related stories: devs who check in code they don't understand, doctors diagnosing people without looking, scientists skipping their experiments, and more.

unshavedyak · 6 months ago
> This is why there are certain jobs AI can never take

You're thinking too linearly imo. Your examples are where AI will "take", just perhaps not entirely replace.

Ie if liability is the only thing stopping them from being replaced, what's stopping them from simply assuming more liability? Why can't one lawyer assume the liability of ten lawyers?

lordnacho · 6 months ago
Then there will still be lawyers. More productive, higher income lawyers.

Just like with a lot of other jobs that got more productive.

observationist · 6 months ago
People who think like this cannot be convinced; they're unaware of the acceleration of the rate of progress, and it won't change until they clash with reality. Don't waste your time and energy trying to convince them.

They don't understand how to calibrate their model of the world with the shape of future changes.

The gap between people who've been paying attention and those who haven't is going to increase, and the difficulty in explaining what's coming is going to keep rising, because humans don't do well with nonlinearities.

The robots are here. The AI is here. The future is now, it's just not evenly distributed, and by the time you've finished arguing or explaining to someone what's coming, it'll have already passed, and something even weirder will be hurtling towards us even faster than whatever they just integrated.

Sometime in the near future, there won't be much for people to do but stand by in befuddled amazement and hope the people who set this all in motion knew what they were doing (because if we're not doing that, we're all toast anyway.)

pjc50 · 6 months ago
The book https://en.wikipedia.org/wiki/The_Unaccountability_Machine introduces the term "accountability sink", which is very useful for these discussions. Increasingly complicated systems generate these voids, where ultimately no human can be singled out or held responsible.

AI offers an incredible caveat emptor tradeoff: you can get a lot more done more quickly, so long as you don't care about the quality of the work, and cannot hold anyone responsible for that quality.

gdulli · 6 months ago
Where could lawyers be learning this behavior?

https://www.theguardian.com/us-news/2025/apr/24/california-b...