Readit News
laborcontract · 2 years ago
Local municipalities are so poorly run that I cannot help but wonder whether the mundanity of an RLHF-aligned model is at all bad for the state from a pure legislative perspective.

Couple more thoughts:

1. It's not certain that what this world needs is a greater tangle of convoluted laws – ChatGPT inflating legislators' output seems like a net loss. The naive Model UN part of me thinks that LLMs could help recontextualize political views and help lawmakers cross partisan divides.

2. There's an irony to LLMs' utility: their killer use case is summarization, and yet they're incapable of critical reduction, i.e. "which laws should we get rid of?"

esperent · 2 years ago
> they are incapable of critical reduction, ie "which laws should we get rid of?"

Why do you say that?

I've had success in cases where I want to simplify an article or table of contents. I've shared it with GPT4 and said something along the lines of "this article/TOC is overly complex and I want to keep only the most important parts. What would you suggest I remove to make this easier to read?" or similar prompts.

I've used this quite a lot as I've been writing tutorials recently and I tend to be too wordy/over explain things in my first drafts. The results have been pretty good, especially with detailed TOCs (i.e. a TOC of all the headings and subheadings in a long article).
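A minimal sketch of that prompt pattern, assuming the OpenAI Python SDK. The TOC entries here are made up for illustration, and the actual API call is left commented out since it needs an API key; only the prompt assembly runs as-is.

```python
def build_trim_prompt(toc: list[str]) -> str:
    """Assemble a 'what should I cut?' prompt from a table of contents,
    mirroring the phrasing described above."""
    toc_text = "\n".join(f"- {heading}" for heading in toc)
    return (
        "This table of contents is overly complex and I want to keep "
        "only the most important parts. What would you suggest I "
        "remove to make this easier to read?\n\n" + toc_text
    )

# Hypothetical TOC for a tutorial draft
prompt = build_trim_prompt([
    "Introduction",
    "Installing the toolchain",
    "A brief history of build systems",  # a likely cut candidate
    "Writing your first build file",
])

# Sending it would look roughly like this (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "user", "content": prompt}],
# )
```

The model's reply is free text, so in practice you still skim its suggestions and cut headings yourself rather than applying them mechanically.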

laborcontract · 2 years ago
You know what, you're right. They're much better at critical reduction than at expansion. Maybe the fact that LLMs are so good at summarization means we should attempt deploying them to trim the books.
justinclift · 2 years ago
> Finegold said by phone on Wednesday that ChatGPT can help with some of the more tedious elements of the lawmaking process, including correctly and quickly searching and citing laws already on the books.

You'd really hope they double-check the results, unlike the lawyers who famously didn't a few months ago, to bad effect.

iurisilvio · 2 years ago
It says more about Brazilian municipalities than about ChatGPT. Probably nobody read this one, or many of the other documents they voted on.
fdgjgbdfhgb · 2 years ago
It's a one-page document that deals with a very specific situation - if a human were to write it, I'm sure they would come up with pretty much the same thing. I actually have no problem with this particular case.
andersa · 2 years ago
Why does it matter if ChatGPT wrote it, if the result is sensible?
fnordpiglet · 2 years ago
According to a friend who is a senior partner at a major class-action firm, many law firms are relying heavily on GPT-4 (including his own). In his firm, however, it's used to generate drafts and summarize large boilerplate documents. Humans always meticulously review and revise the drafts, and human-written summaries become so error-prone as documents grow (just due to signal-to-noise ratios) that the GPT-4 analysis is more accurate.
gpderetta · 2 years ago
> Humans always meticulously review and revise the drafts

The risk, of course, is that they might get complacent, trust the output too much, and errors will start slipping in. Especially if the error rate is lower than a human's and the throughput is higher than they can effectively review. But this is not necessarily an LLM issue per se.

dieselgate · 2 years ago
I agree for select ad hoc cases, but a chatbot seems too unreliable to be sustainable or scalable for this purpose at present.

The article mentions "dangerous precedent" along with

> It may not always be able to account for the nuances and complexities of the law. Because ChatGPT is a machine learning system, it may not have the same level of understanding and judgment as a human lawyer when it comes to interpreting legal principles and precedent. This could lead to problems in situations where a more in-depth legal analysis is required

which seems fair at the present moment.

repelsteeltje · 2 years ago
This line of reasoning sounds like the Chinese Room argument

- https://en.wikipedia.org/wiki/Chinese_room

A corollary might be: Why does it matter if an intern wrote it, if the result is sensible?

vincnetas · 2 years ago
I suspect it was just skimmed by the readers. Would be nice to get the original document.
xigoi · 2 years ago
If the city is using taxpayers’ money to pay an externist to write laws, shouldn’t that be approved first?
andybak · 2 years ago
I have never seen the word "externist" before.
wongarsu · 2 years ago
Better than having lobbyists write the laws for free
gnopgnip · 2 years ago
Would you use a dictionary if it gave a plausible but wrong answer 5% of the time?
jstummbillig · 2 years ago
That depends on the alternative, no?

If we could conjure a human legislator that is wrong only 20% of the time I would be pleasantly shocked.

15457345234 · 2 years ago
Many people will say yes to this question until you give them a very obviously hazardous example of what can result.
6510 · 2 years ago
This seems like an excellent argument for robot overlords - at least eventually.
wjb3 · 2 years ago
I tend to agree with you.
Daviey · 2 years ago
Surely what matters more is whether the output carries the intent of the author, the idea and the principle?

Imagine 30 years ago: if it had been revealed that a grammar/spell-checker was used to improve a document, it would probably have provoked the same outrage.

fl7305 · 2 years ago
The first spell-checkers were great.

I worked at Ericsson then. The word processor replaced every occurrence of "Ericsson" with "Erection".

fdgjgbdfhgb · 2 years ago
Of course it's Porto Alegre lol. I was curious to see the text of the law [1], and it obviously looks fine - it's short and shows good command of "legalese". I'd love to see the prompt...

[1] https://dopaonlineupload.procempa.com.br/dopaonlineupload/49...

quickthrower2 · 2 years ago
See, it can write code! I don't think this is a bad thing, at least not in 2023. It is still the tool and we are its master.
belugacat · 2 years ago
“We shape our tools and thereafter our tools shape us”
jstummbillig · 2 years ago
It happens simultaneously and continuously and I don't see why that would be either surprising or alarming.
