ksherlock · 2 years ago
https://archive.is/jdryA

Confirmation of what seemed obvious before - https://www.politico.com/news/2023/12/12/michael-cohen-court...

But he used Bard instead of ChatGPT.

lsy · 2 years ago
What is it about text generation that makes us so credulous about its relationship to reality in a way that we aren't with other generative tech? I assume that a lawyer would implicitly understand that they can't, for example, ask MidJourney to generate a photo of a certain crime's perpetrator and then submit that as evidence in court. The phenomenon of what Bard or GPT-4 does with text is almost exactly analogous to what MidJourney does with pixels insofar as its relation to reality, and yet when it comes to text, even the creators of a generated work don't understand that it's not real.
add-sub-mul-div · 2 years ago
A lazy person can't pretend that a camera existed at a point in the past taking the picture that an AI is showing them. But there's nothing stopping them from pretending that any given text output is truth.
pylua · 2 years ago
This is a complete disaster.

On the other hand, are there any companies focusing on tuning LLMs or building plugins to help with legal research and verification of that research? Is that even possible?

The scale and size of the law really does necessitate that these tools exist. Especially since citizens are expected to know the law.

more_corn · 2 years ago
With RAG it should be possible and with strong business logic outside the LLM it should be safe. A company called Pacer Pro is active in the space.

It should also be reasonably easy to identify hallucinatory / fabricated material. If we give those tools to law schools and judges, the hammer comes down pretty quickly on anyone who tries it again.

pylua · 2 years ago
What is RAG?
erur · 2 years ago
Not an expert on the space but LexisNexis for example is using Anthropic's Claude 2 and GPT-4.
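The RAG (retrieval-augmented generation) approach more_corn describes can be sketched in a few lines: retrieve real documents first, then constrain the model to cite only from them. This is a toy illustration, not any vendor's implementation; the keyword-overlap retriever, the corpus entries, and the prompt wording are all made up for demonstration (a real system would use an indexed case-law database and a vector search).

```python
# Minimal RAG sketch: retrieve grounding documents, then build a prompt
# that restricts the LLM to the retrieved sources. The corpus, case names,
# and helper functions here are purely illustrative.

def tokenize(text: str) -> set[str]:
    """Naive whitespace tokenizer used by the toy retriever."""
    return set(text.lower().split())

def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[str]:
    """Rank documents by keyword overlap with the query; return top-k ids."""
    q = tokenize(query)
    ranked = sorted(
        corpus,
        key=lambda doc_id: len(q & tokenize(corpus[doc_id])),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, corpus: dict[str, str], doc_ids: list[str]) -> str:
    """Ground the model: instruct it to cite only the retrieved context."""
    context = "\n".join(f"[{d}] {corpus[d]}" for d in doc_ids)
    return (
        "Answer using ONLY the sources below and cite them by id.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

# Fictional mini-corpus standing in for a real case-law index.
corpus = {
    "case-001": "Smith v. Jones held that emailed contracts are enforceable.",
    "case-002": "Doe v. Roe addressed the statute of limitations for fraud claims.",
}

query = "Are emailed contracts enforceable?"
hits = retrieve(query, corpus)        # → ["case-001"]
prompt = build_prompt(query, corpus, hits)
# `prompt` would then be sent to the LLM; any citation it produces must
# come from the retrieved sources, which is what makes fabricated
# citations easy to detect mechanically.
```

The key point for the legal use case is the last step: because every citation must map back to a retrieved document id, a checker outside the LLM can verify each cite against the corpus, which is the "strong business logic outside the LLM" mentioned above.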