cmiles8 · 13 days ago
There will be many more things like this and it’s an elephant in the room for the supposed mass replacement of people with AI.

Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.

You can make humans more productive, but for the foreseeable future you can’t take the human out of the loop without ending up with an AI implementation that’s a disaster/lawsuit waiting to happen. That, probably more than anything else, is why companies just aren’t seeing the much-promised mass step change in productivity from AI and why so many companies are now saying they see zero ROI from AI efforts.

The lowest-hanging fruit will be low-value, rote, repetitive tasks like the whole India offshoring industry, which will be the first to vaporize if AI does start replacing humans. But until companies see success replacing labor en masse on that lowest of low-hanging fruit, things higher up the value chain will remain relatively safe.

PS: Nearly every mass layoff recently citing “AI productivity” hasn’t withstood scrutiny. They all seem to be just poorly performing companies slashing staff after overhiring, with management looking for any excuse other than just admitting that.

kace91 · 13 days ago
I think this is an even clearer case than usual. With software engineers and office work you don’t have legal limitations on who can perform the work, but they do exist for lawyers and doctors, for example.

So if this is a tool, the fault lies fully with the user, and if this is treated as “another person’s work” then the user knowingly passed the work on to someone not authorized to do it. Either way, the user ends up being the guilty one.

jacquesm · 13 days ago
> With software engineers and office work you don’t have legal limitations on who can perform the work

Technically true, but if you want the IP to be covered by copyright, you'd better make sure they're not using AI, or you'll find out that there are some serious legal limitations in your future when you aim to either pick up investment or sell your IP.

epgui · 13 days ago
> With software engineers […] you don’t have legal limitations on who can perform the work

While in practice that is true, in theory this is why professional engineering accreditations (I mean like P.Eng., not little certificates) exist. Perhaps we will see a broader professionalization of the profession one day.

gortok · 13 days ago
> So if this is a tool, the fault lies fully with the user, and if this is treated as “another person’s work” then the user knowingly passed the work on to someone not authorized to do it. Either way, the user ends up being the guilty one.

I am particularly against this point of view, because we as a community have long touted how computers can do the job better and faster, and that computers don’t make mistakes. When there are bugs, they’re seen as flaws in the system and rectified by programmers.

When there are gaps between user expectations and how the software works, it’s our job to manage those gaps and reduce the gap.

In the case of AI, we are somehow, probably because we know it’s non-deterministic, turning that social contract we had developed with users on its head.

Now the message is: that’s just the way it is, and it’s up to users to know if the computer is lying to them. We have absolved ourselves of both the technical and the non-technical responsibilities to ensure the computer doesn’t lie to the user, subvert their expectations, or act in a way contrary to human logic.

AI may be different in that it’s non-deterministic, but that’s all the more reason we’re responsible for ensuring AI adoption aligns with the social contract we created with users. If we can’t do that with AI, then it’s up to us to stop chasing endless dollars and be forthright with users that facts are optional when it comes to AI.

bookofjoe · 13 days ago
>Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.

I remember growing up and always hearing "The computer is down" as an excuse for why things were cancelled/offices closed/buses and trains not running/ad infinitum.

At some point I read an article that pointed out that the reason the computer was down was because a person had made a [coding] error: the computer itself was fine.

I've yet to read about how a person who caused the computer to be down was disciplined.

21asdffdsa12 · 13 days ago
You are running on an outdated model of the world: one in which only discipline keeps people working, keeps them productive, keeps them in line.

We saw how that worked out in Soviet Russia and the culture it gave birth to in its aftermath. Discipline artificially propped up by institutions and hierarchies is worthless. It only encourages subversion, and thus most of the productivity is wasted on hunting for laziness and devising ever more intricate behavioral programming rules, which make the organization ever less able to react quickly and decisively.

The only discipline worth a damn is intrinsic. People who want something, who want to get somewhere, need no shepherds and prison guards; they need only a support harness, resources, and people who care about them. The culture that produces such people is required for things to succeed. Any culture that does not cannot succeed, and is basically a parasite on cultures that do.

Gud · 13 days ago
Why does a person need to be disciplined because they made a mistake?
mrwh · 13 days ago
And here perhaps was the greatest mistake the software profession made! Not making ourselves into a real profession, with actual accountability. It was terribly convenient for so long not to have consequences when things went wrong. It's less convenient now.
amelius · 13 days ago
We should have more hygiene when it comes to AI.

Text coming out of an LLM should be in a special Unicode block, so we can see it is generated by AI.

Failing to do so (or tampering with it) should be considered bad hygiene, and should be treated like a doctor who doesn't wash their hands before surgery.
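
For illustration, a minimal sketch of what such tagging could look like, using Unicode tag characters (the U+E0000 block), which render invisibly in most contexts. The marker scheme and function names here are hypothetical, not any existing standard:

    # Hypothetical provenance marker built from invisible Unicode tag
    # characters (U+E0000 block). Trivial to strip, which is exactly
    # why tampering would have to be treated as misconduct.
    AI_START = "\U000E0041\U000E0049"  # tag characters for "A", "I"
    AI_END = "\U000E007F"              # U+E007F CANCEL TAG

    def mark_ai(text: str) -> str:
        """Wrap LLM output in an invisible provenance marker."""
        return AI_START + text + AI_END

    def is_ai_marked(text: str) -> bool:
        """Check whether a string carries the provenance marker."""
        return AI_START in text and AI_END in text

    answer = mark_ai("The cited ruling is from 2019.")
    print(answer)                # renders identically to plain text
    print(is_ai_marked(answer))  # True

Note that hidden markers survive copy-paste but not retyping or OCR, so at best this is a hygiene aid, not proof of origin.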

jacquesm · 13 days ago
> Text coming out of an LLM should be in a special Unicode block, so we can see it is generated by AI.

That's exactly my proposed solution:

https://jacquesmattheij.com/classes-of-originality/

sharpy · 13 days ago
What will that accomplish? Does it give license to developers to check in code that they don't understand/trust fully?

Ultimately, people should be responsible for the code they commit, no matter how it was written. If AI generates code that is so bad that it warrants putting up warning sign, it shouldn't be checked in.

chrisjj · 13 days ago
> Text coming out of an LLM should be in a special Unicode block, so we can see it is generated by AI.

Why not start with manual tagging, like "Ad"?

raincole · 13 days ago
I don't believe most countries held judges accountable for bad rulings at all, even before the AI era.

"Checks and balances, except for the judiciary."

RobotToaster · 13 days ago
In the UK lower court judges are sometimes removed for misconduct.

Only the king (at the petition of parliament) can remove a high court or appeal court judge, and that's only ever happened once, in 1830.

AnimalMuppet · 13 days ago
In the US, local/state judges often are elected (probably varies by state). Federal judges can be impeached.
chrisjj · 13 days ago
It wasn't just a bad ruling. It was judicial misconduct.
JimTheMan · 12 days ago
The appeals court should make a ruling on this.
ForHackernews · 13 days ago
Counterpoint: No one ever gets fired or goes to jail when big tech firms break the law. Companies will put out an apology, pay whatever small fine is imposed, and continue with illegal AI usage at scale.
WarmWash · 13 days ago
>why so many companies are now saying they see zero ROI from AI efforts.

I strongly suspect this is because workers are pocketing the gains for themselves. Report XYZ usually takes a week to write. It now takes a day. The other 4 days are spent looking busy.

The MIT report that found all these companies were getting nowhere with AI also found that almost every worker was using AI almost daily, but using their personal account rather than the corporate one.

onionisafruit · 13 days ago
If that were the case, this site and certain subreddits would have a lot of posts and comments with people crowing about how much time they are getting back. I haven’t seen that, but I haven’t gone looking for it either.
everforward · 13 days ago
While not dispositive of your idea, I think some portion of people using their personal accounts is because we collectively lack good feedback loops on the effectiveness of “AI addons” like RAG. The corporate accounts can be legitimately less useful than a “stock” account because the AI team integrates everything under the sun to show value, but the integrations become a net negative.

I.e., ones that index entire company wikis. It ends up regurgitating rejected or never-implemented RFCs, docs from someone’s personal workflow that require setting up a bunch of stuff locally to work, and so on.

A lot of tasks are not dependent on internal documentation, and it just ends up polluting the context with irrelevant, outdated or just wrong information.

adithyassekhar · 13 days ago
Quite the contrary: companies lay off all roles (frontend, backend, QA, devops, even UI/UX) and hand a project to one competent dev, asking them to deliver it in a third of the time it would have taken with a proper team. It's happening at places I know. This thread on Reddit is 100% the same: https://www.reddit.com/r/developersIndia/s/EIksvB15tm

I can't even imagine the stress from context switching, and since people don't realize this is still work, they do this late into the night as well.

beachtaxidriver · 13 days ago
Because the amount of AI slop code from peers and the amount of AI slop emails to read from management have exploded.
toraway · 13 days ago
That’s certainly a … convenient … explanation.
alok-g · 12 days ago
>> Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.

It does not happen this way there even with just humans presiding. Judgments written by humans there are, on average, total garbage.

Edit: Someone wrote a similar comment here: https://news.ycombinator.com/item?id=47244909

littlecorner · 12 days ago
>>Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.

Now I'm trying to imagine a way they could apply a criminal charge against an AI such that it would prevent the AI from being used in an official capacity, or something.

blackoil · 13 days ago
> You can make humans more productive

If productivity goes up 10x, then unless the amount of work also increases 10x, jobs will be gone.

kudokatz · 13 days ago
In about 1930, Keynes wrote "Economic Possibilities for our Grandchildren" [1] wherein he wrote:

"I believe that this is a wildly mistaken interpretation of what is happening to us.

We are suffering, not from the rheumatics of old age, but from the growing-pains of over-rapid changes, from the painfulness of readjustment between one economic period and another. The increase of technical efficiency has been taking place faster than we can deal with the problem of labour absorption; the improvement in the standard of life has been a little too quick ...

We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come--namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour."

While there's no guarantee that what Keynes got wrong then is the same as now, it can be a reasonable outcome that "the jobs" won't just disappear.

----

Keynes also speculated on what to do with newfound time as a result of investment returns on the back of productivity [1]:

"Let us, for the sake of argument, suppose that a hundred years hence we are all of us, on the average, eight times better off in the economic sense than we are to-day. Assuredly there need be nothing here to surprise us ... Thus for the first time since his creation man will be faced with his real, his permanent problem-how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well."

The modern FIRE movement shows that living at a dated "standard of living" for 10-15 years can free one from work forever. Yet that's not what most people do today. I would suggest that there are deeper aspects of human drive, psychology, and varying concepts of "morality" that are actually bigger factors in what happens to "jobs".

[1] http://www.econ.yale.edu/smith/econ116a/keynes1.pdf

coldtea · 13 days ago
>Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.

Why? The logic of ever less personal pride, involvement, and care is eventually to just put the blame on AI and be done with it.

Issues? Casualties? It's a bug, somebody fixes it, and we move on. Or it's just a cost we need to get used to in order to live in the great new world of AI.

We're in an era where nobody involved goes to jail for the Epstein case and the world keeps turning, and we think people will care if nobody goes to jail when somebody loses their pension, gets wrongly imprisoned, or dies on an operating table because of an AI mistake?

If anything, legal, union, and other limitations like that on who gets to decide (having to have a human ultimately responsible) might be torn down, to fully embrace the blame-shifting capabilities of the digital bureaucracy.

ChrisMarshallNY · 13 days ago
> Someone has to get fired / go to jail when something screws up.

In law, someone always hangs. I think a number of American lawyers have been sanctioned for using AI slop.

In other vocations ... not so much. I think that one of the reasons insurance likes AI so much is that they can say that it was "the computer" that made the decision that killed Little Timmy.

idontwantthis · 13 days ago
I think there was an SMBC comic recently on this subject. Basically a whole responsibility industry crops up: you get paid to be the fall guy for an AI if it ever screws up, since someone needs to be held accountable.
general_reveal · 13 days ago
Or AI is going to be like landlines becoming unnecessary when cellphones showed up in India. India may get to skip an entire intellectual generation thanks to the ability of a cheap model to educate (in any language).

The narrative that an entire population are “worth” less, paid less , know less, live less …

Fuck this less shit, embrace the paradigm shift. God is finally providing the remedial support through the miracle of AI.

jazzypants · 13 days ago
We've had YouTube for two decades now. Cheap education was already available for those who wanted it.
AlotOfReading · 13 days ago
I don't know if you've ever been to India, but one of its characteristic features is that it has lots of local languages. LLMs are awful at almost all of them. Plus, there's 20ish% of the population that falls below the literacy threshold. It's hard to imagine how those people would be educated by LLMs even if that was a good idea and they all had reliable Internet access, which they often don't.
21asdffdsa12 · 13 days ago
Or you are proven wrong entirely, again and again. And it turns out that the supposedly legacy, unimportant, and derelict thing of the past, culture, is all-decisive. It turns out that only some cultures can generate high-trust societies capable of forming institutions. And you prolonged suffering by declaring that all cultures are created equal. History may write you down as a monster.
delaminator · 13 days ago
Some people are worth more than others.

Some cultures are better than others.

fidotron · 13 days ago
> Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.

The turning point will be when threatening an AI with being unplugged for screwing up works in motivating it to stop making things up.

Some people will rightly point out that is kind of what the training process is already. If we go around this loop enough times it will get there.

Hendrikto · 13 days ago
You are making a lot of assumptions here. You assume, among other things, that AI has self-preservation drive, can be threatened, can be motivated, and above all that we know how to accomplish that and are already doing so. I would dispute all of that.
hek2sch · 13 days ago
Isn't the issue simply one of not using the right tool? When the stakes are high and you should be checking details, the right tools are grounded AI solutions like Nouswise and NotebookLM, not the general-purpose chatbots that almost everyone knows might hallucinate. I also believe this use case is definitely low-hanging fruit for automating a lot of manual work, but it comes with new requirements, like transparency to help with verifying the responses.
chrisjj · 13 days ago
> Isn't just the issue stemming simply from not using the right tool?

What suggests this judge was not using the very best chatbot?

edgarvaldes · 13 days ago
Is this a solved problem using the right tools?
codegladiator · 13 days ago
> She had no intention to misquote or misrepresent the rulings and that "the mistake occurred solely due to the reliance on an automatic source", the high court wrote

I don't think the intention matters here. It's the same deal with every profession using LLMs to "automate" their work. The onus is on the professional, not the LLM. The Ars Technica case could have been justified in the same manner otherwise.

Not knowing the law isn't an excuse to break the law, so why is not knowing the tool an excuse to blame the tool?

fidotron · 13 days ago
Using an LLM to automate is simply the newer cheaper outsourcing with much of the same entertainment, but less food poisoning and air travel.

Over the last 20 years a lot of engineering (proper eng, not software) work in the west has been outsourced to cheaper places, with the certified engineers simply signing off on the work done elsewhere. This results in a cycle of doing things ever faster/more cheaply and safeguards disappearing under the pressure to go ever cheaper and faster.

As someone else pointed out, LLMs have just really exposed what a degraded state we were already heading into, rather than being a cause of it themselves. It's going to be very tough for people with no standards: they'll enjoy cheap stuff for a while and then it will all go away. Surprised Pikachu faces all round.

(I'm pro AI btw, just be responsible.)

Sharlin · 13 days ago
LLMs also solve the timezone and language challenges. Sadly one problem that remains is that they too tell you they have understood something even if they haven't.
mtrovo · 13 days ago
At least that's the story LLM lab leaders wanna tell everyone; it just happens to be a very good story if you wanna hype your valuation before investment rounds.

Working with LLMs on a daily basis, I would say that's not happening, not the way they're trying to sell it. You can get rid of five vendor headcount executing a manual process that should have been automated 10 years ago; you're not automating the processes involving highly paid people with a 1% error tolerance, where an error could cost you 10M+ in fines or jail time.

The day I see Amodei or Sam flying on a vibe-coded airplane is the day I'll believe what they're talking about.

kingstnap · 13 days ago
> excuse to blame the tool

The issue is that ultimately blaming people doesn't really solve things, unless it's genuinely a one-of-a-kind case. But if this happened once it's probably going to happen again, and this isn't the first such case of LLM hallucinations in law.

It's weird to think this way, because it's easy to just point at a person for a specific instance, but when you see something repeat over and over again, you need to accept that if your ultimate goal is to stop it from happening, you have to adjust the tools, even if the people using them were at fault in every case.

Deleted Comment

RobotToaster · 13 days ago
Intentionality normally has to be taken into account in common law countries.

That doesn't mean she hasn't done something wrong, but obviously it's more serious to do something intentionally than it is to do it carelessly or recklessly.

the_af · 13 days ago
They cannot even claim they weren't aware of the danger. LLM hallucinations have been a discussed topic, not some obscure failure mode. Almost every article on problems with AI mentions this.

So the judge was lazy, incompetent, or both.

ghywertelling · 13 days ago
Or she was conniving, like Skyler in Breaking Bad when she convinced the investigator that she got hired because she seduced the owner.
nerdjon · 13 days ago
I do think that for this particular situation we need to step outside of our tech bubble a little bit.

I am still having regular conversations with people who either don't know about hallucinations or think they are not a big problem. There is a ton of money in these companies pushing the claim that their tools are reliable, and it's working on the average user.

I mean there are people that legitimately think these tools are conscious or we already have AGI.

So I am not sure I would be too quick to attack the judge when we see the marketing we are up against.

lukan · 13 days ago
Not just discussed, but explicitly mentioned under every chat interface: "This tool can make mistakes."

(Sure, more honest would be "this tool makes stuff up in a convincing way".)

Deleted Comment

hypeatei · 13 days ago
This is why LLMs won't replace humans wholesale in any profession: you can't hold a machine accountable. Most of the chatbot experiences I have with various support channels always end up with human intervention anyway when it involves money.

Maybe true general intelligence would solve these issues, but LLMs aren't meeting that threshold anytime soon, imo. Stochastic parrots won't rule the world.

lazide · 13 days ago
Even ‘true general intelligence’ (if we count humans as that) screws up frequently, sometimes (often?) intentionally for its own benefit, which is why accountability is such a necessary element.

If someone won’t be held liable for the end result at some point, then there is no reason to ensure an even somewhat reasonable end result. It’s fundamental.

Which is also why I suspect so many companies are pushing ‘AI’ so hard - to be able to do unreasonable things while having a smokescreen to avoid being penalized for the consequences.

direwolf20 · 13 days ago
This is exactly why LLMs will replace humans: even if the work is crap, nobody will be accountable for the crap work, and it saves money.
delaminator · 13 days ago
> Not knowing the law isn't an excuse to break the law,

Yeah, about that ...

https://metro.co.uk/2016/07/03/rapist-struck-again-after-dep...

> A Somalian rapist who had his deportation overturned went on to rape two more women after he was freed.

> But he had his deportation overturned after serving his time because he didn’t know it was unacceptable in the UK.

voidUpdate · 13 days ago
How many of these cases do we have to have before lawyers realise that they need to check that the things an LLM tells them are actually true?
Latty · 13 days ago
It doesn't matter, because any process that seems right most of the time but occasionally is wrong in subtle, hard to spot ways is basically a machine to lull people into not checking, so stuff will always slip through.

It's just like cars that drive themselves but require you to be able to jump in if there is a mistake: humans are not going to react as fast as if they were driving, because they aren't going to be engaged, and no one can stay as engaged as they were when they were doing it themselves.

We need to stop pretending we can tell people they "just" need to check things from LLMs for accuracy, it's a process that inevitably leads to people not checking and things slipping through. Pretending it's the people's fault when essentially everyone using it would eventually end up doing that is stupid and won't solve the core problem.

chii · 13 days ago
> won't solve the core problem.

what's the core problem tho? Because if the core problem is "using AI", then it's an inevitable outcome: AI will be used, and there is always an incentive to cut costs maximally.

So realistically, the solution is to punish mistakes. We do this for bridges that collapse, for driver mistakes on roads, etc. The "easy" fix is to make punishment harsher for mistakes - whether it's LLM or not, the pedigree of the mistake is irrelevant.

dw_arthur · 13 days ago
As someone who has done QA on white-collar work, I can say it's tiring looking for little errors in reports. Most people are not cut out for it.
voidUpdate · 13 days ago
Probably worth including a "bibliography" section of citations that can be automatically checked to confirm they actually exist, then.
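
As a minimal sketch of what that automated check could look like, assuming the citations carry DOIs (case-law citations would need a legal database lookup instead); the helper names here are hypothetical:

    # Extract DOIs from a bibliography and confirm each resolves in
    # the public CrossRef index; a 404 means the citation is made up.
    import re
    import urllib.error
    import urllib.request

    DOI_PATTERN = re.compile(r"10\.\d{4,9}/[^\s\"<>]+")

    def doi_exists(doi: str) -> bool:
        req = urllib.request.Request(
            f"https://api.crossref.org/works/{doi}",
            headers={"User-Agent": "citation-checker/0.1"},
        )
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.status == 200
        except urllib.error.HTTPError:
            return False

    def check_bibliography(text: str) -> dict:
        """Map every DOI found in the text to whether it resolves."""
        return {doi: doi_exists(doi) for doi in DOI_PATTERN.findall(text)}

This only proves a reference exists, not that it says what the document claims it says; that part still needs a human (or at least a second retrieval step).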
macintux · 13 days ago
Even disregarding self driving features, it seems like the smarter we make cars the dumber the drivers are. DRLs are great, until they allow you to drive around all night long with no tail lights and dim front lighting because you’re not paying enough attention to what’s actually turned on.
duskdozer · 13 days ago
I'm continually amazed at how much faith people have in them. I guess since they can sound like people and output really authoritative and confident text it just overrides any skepticism subconsciously?
moron4hire · 13 days ago
It's mind boggling how much people claim to like LLMs when you would never design any other piece of software to operate like LLMs do. Designing a system that interacts with the user through natural text creates an awful experience. It slows down every interaction as you dig through all the prose to get to the key information. It turns every computer interaction into a school math word problem.
ben_w · 13 days ago
Much as I like them, I do frequently remind myself of two things:

1) https://en.wikipedia.org/wiki/Clever_Hans

2) https://archive.org/details/nextgen-issue-26 as an example of how in the 90s we had rapid cycles of a new tech (3D graphics) astounding us with how realistic each new generation was compared to the previous one, while forgetting with each new game engine how we'd said the same and felt the same about graphics we now regarded as pathetic.

So yes, they do sound authoritative and confident, and "it just overrides any skepticism subconsciously", but you shouldn't be amazed; we've always been like this.

pjc50 · 13 days ago
The advertising campaign is incredible.
PunchyHamster · 13 days ago
Yes, just as with politicians. And LLMs have been thoroughly tuned to appear that way.
LunaSea · 13 days ago
It doesn't matter anymore.

LLMs just revealed what a decadent society we have set up for ourselves worldwide.

coffeefirst · 13 days ago
It’s worse than that. We’re hearing about the lawyers and Ars Technica because the consequences are public and the errors are egregious.

It’s likely happening to everyone.

probably_wrong · 13 days ago
Just this week I tracked down the citations of a scientific paper (whose authors could very well be here) where 25% of the citations were made up and 50% of the remaining ones were wrong, taking ArXiv papers and citing them as belonging to (say) IJCLR.

It's not just lawyers.

AJ007 · 13 days ago
This whole thing is silly; LLMs can automate reference validation.

If someone is a lawyer, accountant, doctor, teacher, surgeon, engineer, etc., and is regurgitating answers that were pumped out with GPT-5-extra-low or whatever mediocre throttled model they are using, they should just be fired and de-credentialed. Right now this is easy.

The real problem is ahead: 99.999% of future content will be made using generative AI. For many people using Facebook, Instagram, TikTok, or some other non-sequential, engagement-weighted feed, 50%+ of the content they consume today is fake. As that stuff spreads into modern culture it's going to be an endless battle to keep it out of outlets that should not be publishing fake content (e.g. the New York Times or the Wall Street Journal; excluding scientific journals, which seem to have abandoned validation and basic statistics a long time ago).

Much of the future value and profit margins might just be in valid data?

raincole · 13 days ago
> Right now this is easy.

Easy? In the US you need impeachment to remove a federal judge. In some countries judges are completely immune unless they are convicted of crimes.

miltonlost · 13 days ago
> This whole thing is silly, LLMs can automate reference validation.

Can they though with 100% accuracy and no hallucinations? Wouldn't you still need to validate that they validated correctly?

zthrowaway · 13 days ago
Do we see this a lot in the US? This seems to be more unique to India.
tw04 · 13 days ago
It’s happening A LOT in the US too. Mainstream media just doesn’t seem to find it that newsworthy.

https://arstechnica.com/tech-policy/2026/02/randomly-quoting...

malshe · 13 days ago
From the article:

> In October, two federal judges in the US were called out for the use of AI tools which led to errors in their rulings. In June 2025, the High Court of England and Wales warned lawyers not to use AI-generated case material after a series of cases cited fictitious or partially made up rulings.

YeGoblynQueenne · 13 days ago
What kind of AI is this where you constantly need a human to check its work? Do you think Jean-Luc Picard had to constantly check the output of the Enterprise computer? No he didn't. If AI is not better than humans, then what the heck is the point? You might as well just use humans.
thisislife2 · 13 days ago
The high court also advocated for the "exercise of actual intelligence over artificial intelligence". Hehe.
kaptainscarlet · 13 days ago
There will be loads of papers and publications with fake citations. AI will be trained on these. In the end, we'll have more and more hallucinated information than true content on the internet.
alansaber · 13 days ago
This is a big problem in the US and UK too. Lawyers are not technical at all, and they need a robust system of governance, since currently they're editing documents directly with a chatbot (not even diffing), which makes these mistakes inevitable. See https://insights.doughtystreet.co.uk/post/102mi96/38-uk-case...
jfengel · 13 days ago
I feel like this points out a very general problem with the law: it generates a lot of boilerplate text. Lawyers don't really read it; they skim it for the relevant bits.

Obviously lawyers should not be cheating with AI, especially when they don't even check it. But it does sound to me as if this is an opportunity to re-factor the process. We're carrying forward some ideas originally implemented in Latin, which can be dramatically simplified.

I'm not a lawyer; I know this only in passing. And I am aware that there are big differences between law and code. But every time I encounter the law, and hear about cases like this, what I see are vast oceans of text that can surely be made more rigorous. AI is not the problem; it's pointing out the opportunity.

petcat · 13 days ago
> problem with the law: it generates a lot of boilerplate text

I think the problem fundamentally is that matters of law require thorough, precise language, and unambiguous context. If you remove "the boilerplate" then you introduce a vast gray area left to interpretation.

Usually attempts (by humans or computers) to "summarize" or frame things in "plain language" will apply a bias since it intentionally omits all the myriad context and legal/societal "gray areas" that will inform one perspective or another.

Legalese exists the way it is because it is an attempt to remove doubt. And even then, doubt still creeps in.

ryandrake · 13 days ago
This is only the case when you care more about the letter of the law than the spirit of the law, which is, I suppose, most of the world. It doesn't have to be this way, it's a choice that society has made.

When I bought my house, in an alternate universe the paperwork could have been one sheet of paper that said "[My name] purchases home at [address] from [Seller's name] for [price]." and we'd all rely on our shared understanding of what it means to buy something and shared cultural expectations around home ownership and commerce. But our society did not make that choice, we don't live in that universe, so I had to sign a 300 page stack of papers 30 times.

Deleted Comment

loremium · 13 days ago
Law texts feel like a layering problem: decoration around decoration to avoid breaking existing 'code', without ever simplifying it.
MagicMoonlight · 13 days ago
Okay so let’s try simplifying it.

We’ll change the existing murder legislation to “Killing someone is a crime”. It’ll save us thousands of pages.

But does that mean a soldier shooting an enemy is a crime? What about shooting someone who is raping you? What if you shoot someone by mistake, thinking they’re going to kill you? What if you hit them with a car? What if you fail to provide safety equipment which eventually results in their accidental death?

Oopsie woopsie, I guess we need to add another thousand pages of exceptions back to our simplistic laws. It turns out people didn’t just write them for the fun of it.

eaglehead · 13 days ago
This is going to be a huge problem in every sector. I have been exploring solutions in this space for fintech, and so far what Resemble AI is doing [1] is probably the best way to defend.

The attack surface for us is not just LLM-generated text; it is also AI-augmented audio (for incoming calls) and, for our own voice agents, being able to identify services cloning our agent voices and to protect against that with watermarking.

It's not fun, as we are constantly catching up.

[1] https://www.resemble.ai/detect/