Local municipalities are so poorly run that I cannot help but wonder whether the mundanity of an RLHF-aligned model is at all bad for the state from a purely legislative perspective.
Couple more thoughts:
1. It's not certain that what this world needs is a greater tangle of convoluted laws; ChatGPT inflating legislators' output seems like a net loss. The naive model-UN part of me thinks that LLMs could help recontextualize political views and help lawmakers cross partisan divides.
2. LLMs are ironic in their utility: their killer use case is summarization, and yet they are incapable of critical reduction, i.e. "which laws should we get rid of?"
> they are incapable of critical reduction, i.e. "which laws should we get rid of?"
Why do you say that?
I've had success in cases where I want to simplify an article or table of contents. I've shared it with GPT4 and said something along the lines of "this article/TOC is overly complex and I want to keep only the most important parts. What would you suggest I remove to make this easier to read?" or similar prompts.
I've used this quite a lot as I've been writing tutorials recently, and I tend to be too wordy and over-explain things in my first drafts. The results have been pretty good, especially with detailed TOCs (i.e. a TOC of all the headings and subheadings in a long article).
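In script form, that workflow is just a single chat call. Here's a minimal sketch, assuming the current OpenAI Python client; the model name and prompt wording are just illustrative, not a fixed recipe:

```python
# Minimal sketch: ask the model which parts of a detailed TOC to cut.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def suggest_cuts(toc: str) -> str:
    """Return the model's suggestions for trimming an overly detailed TOC."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                "This TOC is overly complex and I want to keep only the most "
                "important parts. What would you suggest I remove to make "
                "this easier to read?\n\n" + toc
            ),
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(suggest_cuts(open("toc.txt").read()))
```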
You know what, you're right. They're much better at critical reduction than at expansion. Maybe the fact that LLMs are so good at summarization means we should try deploying them to trim the books.
> Finegold said by phone on Wednesday that ChatGPT can help with some of the more tedious elements of the lawmaking process, including correctly and quickly searching and citing laws already on the books.
You'd really hope they double-check the results, unlike the lawyers who famously didn't a few months ago, to bad effect.
It's a one-page document that deals with a very specific situation; if a human were to write it, I'm sure they would come up with pretty much the same thing. I actually have no problem with this particular case.
According to a friend who is a senior partner at a major class action firm, many law firms are heavily relying on GPT4 (including his own). In his firm, however, it's used to generate drafts and to summarize large boilerplate documents. Humans always meticulously review and revise the drafts, and human-written summaries are so likely to be flawed as documents grow (just due to signal-to-noise ratios) that the GPT4 analysis is more accurate.
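To make the large-document point concrete, the usual workaround for documents that exceed the context window is to summarize in chunks and then combine. A hypothetical sketch (not his firm's actual pipeline; the chunk size and prompts are my assumptions), with every output still going to a human reviewer:

```python
# Hypothetical chunk-then-combine summarization for long boilerplate documents.
# Not any firm's actual pipeline: the chunk size and prompts are assumptions.
from openai import OpenAI

client = OpenAI()
CHUNK_CHARS = 8_000  # crude fixed-size split; real pipelines split on structure

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def summarize_for_review(document: str) -> str:
    """Produce a draft summary that a human then meticulously reviews."""
    chunks = [document[i:i + CHUNK_CHARS]
              for i in range(0, len(document), CHUNK_CHARS)]
    partials = [ask("Summarize the substantive terms of this contract "
                    "excerpt, ignoring boilerplate:\n\n" + chunk)
                for chunk in chunks]
    return ask("Combine these partial summaries into one coherent summary:\n\n"
               + "\n\n".join(partials))
```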
> Humans always meticulously review and revise the drafts
The risk, of course, is that they might get complacent, trust the output too much, and let errors start slipping in, especially if the error rate is lower than a human's and the throughput is higher than they can effectively review. But this is not necessarily an LLM issue per se.
I agree for select ad hoc cases, but a chatbot seems too unreliable, at present, to be sustainable or scalable for this purpose.
The article mentions "dangerous precedent" along with
> It may not always be able to account for the nuances and complexities of the law. Because ChatGPT is a machine learning system, it may not have the same level of understanding and judgment as a human lawyer when it comes to interpreting legal principles and precedent. This could lead to problems in situations where a more in-depth legal analysis is required

which seem fair at the present moment.
Of course it's Porto Alegre lol. I was curious to see the text of the law [1], and it honestly looks fine: it's short and shows a good command of "legalese". I'd love to see the prompt...
- https://en.wikipedia.org/wiki/Chinese_room
A corollary might be: Why does it matter if an intern wrote it, if the result is sensible?
If we could conjure a human legislator that is wrong only 20% of the time I would be pleasantly shocked.
Imagine 30 years ago: if it had been revealed that a grammar/spell-checker was used to improve a document, it would probably have provoked the same outrage.
I worked at Ericsson then. The word processor replaced every occurrence of "Ericsson" with "Erection".
[1] https://dopaonlineupload.procempa.com.br/dopaonlineupload/49...