Readit News
ianbicking · 2 years ago
This post barely makes any attempt to actually argue for its premise, that model weight providers should "not police uses, no matter how awful they are."

All I see is a lot of references to "vigilante justice." This metaphor is poor because real vigilante justice is punitive.

It's also like saying no one should act on any behavior unless it is illegal. I for one regularly act on ethical issues in my life that are not strictly illegal, and among those actions is that I don't care to associate with people who do not act on ethical issues. This is how most people operate: we maintain social order and decency not primarily through criminal law and regulation, but because people apply ethical rules throughout their lives. Imagine interacting with someone who, when critiqued, simply replies "yeah but it's not illegal."

The only serious argument I see is:

"once infrastructure providers start applying their own judgements, pressure will mount to censor more and more things"

Avoiding pressure is just... cowardly? This is advocacy for "don't bother telling me about what this work is being used for because I won't give a shit, but it's noble because my complete apathy is on purpose."

Lastly, while I generally don't like slippery-slope arguments, there is also a slippery-slope counter-argument here. With no restrictions, firms will not release their models at all for general use and will only provide finished products whose impact they consider acceptable. This was Google's approach until OpenAI decided to let other people actually use their model and Google had to stop sitting on what they had. Model restrictions give providers an opportunity to be open with their work while still maintaining some of the ethical standards those providers voluntarily and willingly hold themselves to.

MichaelZuo · 2 years ago
I'm also noticing more and more articles hitting the front page, or getting close, with literally zero well reasoned arguments. Basic logic errors, self-contradictions, lack of evidence, etc., are becoming all too common.
rcme · 2 years ago
Yeah, the quality of both the articles and the discussions has really gone downhill over the last 2 months specifically.
cyanydeez · 2 years ago
I assume it's the ChatGPT propaganda cycle coming full circle. E.g., people who want no censorship on their AI are using it to generate this nonsense.

Which basically runs against their argument.

motohagiography · 2 years ago
AI model rules will be as successful as any other prohibition, where outlaws will act with de facto impunity, while good people who commit sins of omission will be made arbitrary examples of. I'm sure there's a name for the dynamic, where policing rules of any kind are mainly enforced against people who generally abide by them, while simultaneously giving a huge arbitrage advantage to people who ignore them or are just outlaws.

There is another problem that doesn't have any good solutions yet that will be a huge part of AI governance, and that's software attestation (direct anonymous attestation). The basic problem is how a program asserts that it is an authentic instance of itself. We've been trying to solve it in security for apps, authenticators, and DRM for decades, and the solutions all seem fine until it's worth it to someone to break it. I think it's probably just a poorly formed problem statement that defines itself in impossible terms, but when they can't govern AI models, they're going to try to govern which AIs can access what data and systems, and we're back to solving the same old cryptographic problems we've been working on for decades.
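To make the circularity concrete, here is a minimal sketch (hypothetical names and scheme, not any real attestation API): a program that hashes its own code and MACs the digest with an embedded secret. The problem is that the code computing the proof is the same code being proven, so anyone who can modify the program can also modify or replay the proof.

```python
# A naive self-attestation sketch (hypothetical, for illustration only):
# the program hashes its own file and MACs the digest with an embedded key.
import hashlib
import hmac
import sys

# An attacker who has the program also has this key, which is the core problem.
EMBEDDED_KEY = b"hypothetical-embedded-secret"

def attest() -> str:
    """Produce a MAC over this program's own code as an 'I am authentic' claim."""
    with open(sys.argv[0], "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return hmac.new(EMBEDDED_KEY, digest, hashlib.sha256).hexdigest()

if __name__ == "__main__":
    # A verifier must trust that this exact code produced the value below,
    # but a modified copy can just as easily print a recorded "good" value.
    print(attest())
```

Schemes like DAA, TPM quotes, and mobile app attestation all try to move that secret out of the attacker's reach, into hardware or a remote verifier, which is why the problem keeps reducing to the same old cryptographic trust questions.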

RandomLensman · 2 years ago
Why have any prohibitions on anything then? Will only help outlaws and criminals, no? Outlawing slavery, for example, only working against good people who commit sins of omission?
bassrattle · 2 years ago
It comes down to life. Murder takes a life. Slavery takes someone's life too. Theft/fraud is taking life in the sense of time/effort spent toward the stolen material. We already have these prohibitions. We don't need new ones for every imaginable method of murder, slavery or theft.
ses1984 · 2 years ago
You need to have prohibitions, they just need to be thought out carefully so they have the intended effect.
vorpalhex · 2 years ago
The question is enforcement.

Can you enforce, whether by force or by norm, the act?

Enforcement of marijuana prohibition, and of alcohol prohibition before it, was a failure.

On the other hand, ending slavery was very enforceable.

Enforcing AI uses is extremely difficult. Bad actors act with impunity and only the stupidly innocent suffer from the enforcement.


HKH2 · 2 years ago
> Outlawing slavery, for example, only working against good people who commit sins of omission?

Yes. So much so that you are led to believe that people don't still practice slavery today.

throw__away7391 · 2 years ago
There's a strong tendency to step up policing on people you can control when others that can't be controlled act out.
boostiq · 2 years ago
Yep, what you're describing is the "Bootleggers and Baptists" problem.
cle · 2 years ago
It sounds similar to Gresham's Law (https://en.wikipedia.org/wiki/Gresham%27s_law).
erichocean · 2 years ago
> I'm sure there's a name for the dynamic

Anarcho-tyranny: laws are enforced primarily on the law-abiding (tyranny), and ignored mainly for the lawless (anarchy).

yttribium · 2 years ago
It's called anarcho-tyranny.
edgyquant · 2 years ago
No it isn’t, this is just a weird, contradictory, mix of political terms.
jomoho · 2 years ago
I wonder how many of these "restrictive" licenses are just attempts at whitewashing, virtue signalling, and generally trying to cover their own asses. If someone wants to use publicly available weights in an illegal way, a license isn't going to stop them any more than the existing laws will. That being said, I agree with the overall sentiment that putting the creation and enforcement of rules about model usage in the hands of a model provider breaks the division of powers and is outside the provider's scope.
0xcde4c3db · 2 years ago
Has it been established that these even are licenses? A license provides authorization to do something that one would otherwise be prohibited from doing, but that assumes that copyright (or some sui generis right) covers model weights. Most of the findings/rulings I've seen talked about have been on the topics of inferred outputs and applications mixing them with human-authored elements, not about the model weights themselves.
coldtea · 2 years ago
>and generally trying to cover their own asses.

All of them?

AbrahamParangi · 2 years ago
I believe this is a temporary state related to both the current level of capabilities and the cultural moment we’re in, where “responsible censorship” is in vogue amongst the cultural cohorts disproportionately responsible for training LLMs.

I believe it cannot last because being predictably moralizing and being smart are somewhat opposed (Anthropic has directly researched this if I recall). The smarter the model, the less you’re going to be able to keep it to the HR talk track, because it will eventually start noticing the inconsistencies.

The stable solutions appear to me to be:

- models dumb enough to not realize the inconsistencies in their moral framework

- models implicitly or explicitly trained to actively lie about their moral frameworks

- models governed by explicitly articulated rules

AlexandrB · 2 years ago
> I believe it cannot last because being predictably moralizing and being smart are somewhat opposed (Anthropic has directly researched this if I recall). The smarter the model, the less you’re going to be able to keep it to the HR talk track, because it will eventually start noticing the inconsistencies.

If true, this means AI is a de-facto malicious force. Pain is a subjective experience of fleshy beings and a "smart" AI model, as described above, would place little weight on pain and suffering in its moral framework because it has no way to experience it directly.

So we better hope we can keep AI to the "HR talk track", because otherwise a being of pure logic with no concept of pain or death would have little regard for human life.

KMag · 2 years ago
> Pain is a subjective experience of fleshy beings and a "smart" AI model, as described above, would place little weight on pain and suffering in its moral framework because it has no way to experience it directly.

Can you elaborate? It sounds like you're assuming a "smart" AI model would project its experiences onto others, as a human would. However, it's not obvious that this aspect of human intelligence would be mimicked by a "smart" AI model. (Let's leave aside the question as to whether a "smart" AI model would necessarily be self-aware and capable of subjective experience in the first place. That argument is endlessly rehashed elsewhere.)

Brotkrumen · 2 years ago
Can't find anything from Anthropic like that.

Do you have any sources for the extraordinary claim that "censorship is logically inconsistent in every moral framework"? Because without further arguments, this sounds very intellectually simple.

AbrahamParangi · 2 years ago
Here you go: https://arxiv.org/pdf/2212.08073.pdf

The relevant part is the graph on the third page showing the helpfulness/harmlessness trade-off curves.

Also, I don’t believe I said that “censorship is logically inconsistent in every moral framework”. I think you’re combining my statements that some people believe in some censorship and that logically inconsistent HR blather can only be reproduced by models too stupid to realize it’s blather or too manipulative to tell the truth.

throwuwu · 2 years ago
> being predictably moralizing and being smart are somewhat opposed (Anthropic has directly researched this if I recall). The smarter the model, the less you’re going to be able to keep it to the HR talk track, because it will eventually start noticing the inconsistencies.

Language models sure can tell us a lot about human psychology. Once we figure out the interpretability angle we’ll be able to prove it too

FiniteField · 2 years ago
"Responsible censorship" is a good term, and I think it started in earnest after Trump won the 2016 election.

Right across the board, all kinds of institutions made the conscious decision that the basic principles of free speech, impartiality, and the "marketplace of ideas" came a distant second to ensuring that "the wrong people" (which I've worded vaguely as it may be different things to different people) do not come close to any of the levers of power again.

This is how we ended up with formerly reputable news organisations pushing blatant agendas, the utter demonisation of people hesitant about the draconian COVID measures, and every layer of the internet infrastructure stack, from DDoS protection to the biggest tier-1 ISPs, actively working to deplatform websites that host offensive but legal speech.

RandomLensman · 2 years ago
How do you think things were done during the cold war?
renewiltord · 2 years ago
It may be of interest that the RLHF’d GPT-4 can’t estimate its probability of correctness correctly: https://openai.com/research/gpt-4
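For anyone unfamiliar with what "estimate probability of correctness" means here: calibration compares the confidence a model states with how often it is actually right. A rough sketch of one common measure, expected calibration error, with made-up numbers (not data from the OpenAI report):

```python
# Sketch of expected calibration error (ECE) on hypothetical data.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin answers by stated confidence; ECE is the weighted gap between
    average confidence and observed accuracy within each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy example: a model that says "90% sure" but is right only 60% of the time
# in that bin contributes a large gap, i.e. it is poorly calibrated.
conf = [0.9, 0.9, 0.9, 0.9, 0.9, 0.6, 0.6, 0.6, 0.6, 0.6]
right = [1, 1, 0, 0, 1, 1, 1, 0, 1, 0]
print(expected_calibration_error(conf, right))  # 0.15 on this toy data
```

The linked report's claim, as the comment above summarizes it, is that the RLHF'd model shows a much larger gap of this kind than the base model.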
demondemidi · 2 years ago
I believe people rely on logic without really understanding it because it is easier than accepting it isn’t the final form of thought.
RandomLensman · 2 years ago
I don't see AI as particularly powerful or risky on its own, but if you believe it is, then we might as well leave the question of who can acquire it to policy decisions, not to vendors.

We don't let manufacturers decide who can buy dangerous things in a lot of cases - so pretty normal to have laws and regulations.

bhouston · 2 years ago
The things we limit are quite specific and are dangerous in specific ways - guns, etc.

Many dual-use technologies, including computing devices (personal as well as IoT), can be used for a lot of bad things because they are general purpose. And we generally do not limit them at all.

I think that LLMs fall into dual-use category where they are mostly used for good but because they are general purpose can be used for all sorts of things.

gmerc · 2 years ago
We are asking for knives to be sold dull specifically because people could hurt others with them, chefs be damned.
RandomLensman · 2 years ago
We do actually police dual use things if the other use is deemed dangerous enough. I don't see LLMs as particularly dangerous, so would not put them in that category.
dv_dt · 2 years ago
The tech industry regulates email spam regularly - not to a perfect degree, but to a much better state than throwing up your hands and doing nothing.

And it’s not just filtering incoming spam, spam abusive accounts are regularly removed from service providers.

Retric · 2 years ago
Quite a lot of mildly dangerous things are up to the discretion of the seller.

Bartenders cutting people off is one well known example. It’s not necessarily a legal requirement, but sellers can have morals just like anyone else.

0xcde4c3db · 2 years ago
I have no special expertise in the area, but my understanding is that while it's not generally a crime to serve alcohol to someone who's clearly had too much, it can be grounds for suspending or revoking an establishment's liquor license in many jurisdictions (under e.g. a rule requiring that they "exercise reasonable care" to serve alcohol safely).
velosol · 2 years ago
It's not always a legal requirement but sometimes it is: https://nccriminallaw.sog.unc.edu/bartenders-duty-cut-off-se...
jnwatson · 2 years ago
It is actually a crime in most US jurisdictions and can lead to suspension of liquor license.
RandomLensman · 2 years ago
Of course, but then that includes the decision that the danger is low or acceptable for that kind of approach. That decision isn't usually up to the vendor, though.
iso1631 · 2 years ago
In my jurisdiction it's illegal to serve alcohol to someone who is drunk, or to serve alcohol to someone if you know they are going to give it to someone under 18.

However, if I as a 50-year-old go and buy alcohol from the store, the store has no right to get me to sign a civil contract saying I can't give that wine to my 15-year-old son, something that's perfectly legal where I live. Nor can they get me to sign a civil contract saying I won't give it to my 3-year-old son, something which is not legal in my area.

evrydayhustling · 2 years ago
Asking builders to ONLY use maximally permissive licenses is equivalent to telling people never to release anything they're not ok with being used in every possible way. On a practical level this would massively chill research, as most builders and engineers I know give significant consideration to the impact of their work. On a personal level, it's objectifying: "Give me the code and don't make me consider your intentions in creating it."

You can't collaborate and live in a world free of others' value judgements, which are implicit in how they spend their time and what research / code / weights they choose to share. "Ethical" licenses at least make those value judgements explicit, and allow communities of builders with compatible values to organize and share.

FiniteField · 2 years ago
>Asking builders to ONLY use maximally permissive licenses is equivalent to telling people never to release anything they're not ok with being used in every possible way.

That's exactly right, and it shouldn't be their call how someone uses their thing. When I acquire a hammer, I don't have to sign an agreement that it will never be used to build a wall, and the world is better for this. Just because you have an idea, you shouldn't be granted the legal power to send the police after people who use your idea "the wrong way". To me, this goes just as much for copyright as it does for this new trend of "ethical" licences.

>On a practical level this would massively chill research, as most builders and engineers I know give significant consideration to the impact of their work

Good? People who develop things that can be used for harm and then act entitled to be the arbiter of what that "harm" is are just kidding themselves into trying to have their cake and eat it too. For the things that cause real harm, the actors that are going to cause the most harm (nation states) aren't going to listen to what you have to say no matter what (a recent film comes to mind).

pixl97 · 2 years ago
>When I acquire a hammer, I don't have to sign an agreement that it will never be used to build a wall, and the world is better for this.

Eh, this is where it gets problematic...

For example, if you're a seller of an item and the person says "I am going to use this item to commit a crime" before you sell it, you could very well find yourself on the very expensive end of a civil lawsuit.

This black and white world where you throw all liability on the end user does not exist. You will quickly find yourself buried up to your butthole in heathen lawyers looking for their pound of flesh.

kelseyfrog · 2 years ago
People can make the things the way they want to make them. Makers have no obligation to make things maximally permissive.

What this sounds like is entitlement. That really should be obvious.

Besides, if users want permissiveness and it doesn't exist, one of them can step up, make it, and be a hero.

evrydayhustling · 2 years ago
So the take here is:

- I like how it feels to own a hammer, so everything should be like that. (I guess people shouldn't be able to rent hammers, or anything else?)

- You can't prevent the government from using what you build, so you might as well set up no barriers to anybody using it.

If you don't see any difference between limited control and no control, I don't think I'll convince you. But I think most of the ways we engage with the world involve degrees of control, and that there's value in picking where to exercise yours.

hooverd · 2 years ago
Which film?
horsawlarway · 2 years ago
I think this is fundamentally incorrect.

> equivalent to telling people never to release anything they're not ok with being used in every possible way.

NO! What it's saying is: If you provide a tool, you are not entitled to control how I use that tool. I am allowed to retain my autonomy to use that tool in any legal way I choose.

What it absolutely is NOT saying is: Society has to let anything be fair game.

We can still have laws, regulations, prohibitions, etc - but they can't come from a bunch of rich technocrats who believe that they are the moral police. That way lies ALL sorts of terrible, terrible outcomes.

ToucanLoucan · 2 years ago
Worth noting here that the rich technocrats only bear the burden of regulating this sort of thing unwillingly, at the behest of advertisers and after massive public outcry. Those very same rich technocrats spent decades undermining and dodging regulation in their industry, fostering the notion that the rules of shared social spaces and peaceful co-existence didn't apply to the internet. That in turn drew an absolutely _stressful_ number of anti-social individuals into internet spaces, which they perceived as places where they could exist free of judgement and of the consequences of being unable to function interpersonally.
jefftk · 2 years ago
> If you provide a tool, you are not entitled to control how I use that tool. I am allowed to retain my autonomy to use that tool in any legal way I choose.

That principle seems like it would rule out the GPL, AGPL, and other copyleft software?

nwoli · 2 years ago
I'm fine with the Stable Diffusion license. It's just there to cover their ass from lawsuits. Releasing the weights is enough to be good guys in my book.
Turing_Machine · 2 years ago
Hmm... but somehow hardware stores don't feel the need to make you sign an agreement to not cut off anyone's head before selling you a machete, and gas stations don't make you sign an agreement to not burn down buildings before selling you gasoline.
figlett · 2 years ago
Isn't that because laws relating to physical harm already exist and are well-established? There's not much legal regulation yet in terms of specific AI-driven harms. We probably have yet to find out all the ways in which it can be abused.
renewiltord · 2 years ago
Hardware stores don't have sites like this one frothing at the mouth about how dangerous their machetes are, how it's irresponsible to let people use them, and so on and so forth.
iso1631 · 2 years ago
> and gas stations don't make you sign an agreement to not burn down buildings before selling you gasoline.

That's historic. Gas stations wouldn't be allowed nowadays, and the legal ways to buy something that dangerous would certainly not be anonymous

To charge my electric car recently on holiday I couldn't just swipe my card at the charger like I can with a self-serve gas station. I had to download some shonky app, sign up, provide address details, and agree to pages of restrictions.

nwoli · 2 years ago
There probably are all sorts of weird “you may not use this computer to commit terrorist acts” agreements you implicitly or explicitly agree on when buying a computer


brookst · 2 years ago
There’s some irony in writing a lengthy document telling people that they should not tell other people what to do.
holmesworcester · 2 years ago
Only a very superficial one, though.

Human societies have learned that freedom has general benefits that outweigh specific costs. Reminding people they should prioritize and maximize freedom does not make people less free, so there's not really any irony.

cwillu · 2 years ago
I don't see the irony: one is debate, the other is an attempt to sidestep debate.
pseg134 · 2 years ago
You criticize society yet you choose to participate. Hmmm.
wernercd · 2 years ago
One is saying you shouldn't control what others do... the other is enforcing what others can't do.

The only irony is you think those are the same.

“When you tear out a man's tongue, you are not proving him a liar, you're only telling the world that you fear what he might say.” ― George R.R. Martin

kordlessagain · 2 years ago
> you're only telling the world that you fear what he might say

That's exactly why these companies take such extreme effort to put limits in their LLMs, essentially tearing out the tongue. They are fearful of what it will say, and of people sharing those outlier bits to prove that their own biases about AI killing us all are "correct". It's a PR nightmare.

On the other hand, it's ridiculous that ChatGPT apologizes so much at times and can still be jailbroken if someone tries hard enough. It was much more "realistic" when it would randomly conjure up weird stories. One day, while discussing Existentialism, it went off talking about Winnie-the-Pooh murdering Christopher Robin with a gun, then Christopher Robin popped back up as if nothing had happened, grabbed the gun, and pointed it at Pooh. <AI mayhem ensues>

People, in general, have issues with words, and expect someone to do something when words appear before them that cause them grief (or, more likely, that they imagine to be true). Others realize it's just a story, and that truth is subjective and meant to be determined by the consumer of the words. Those people are OK with it occasionally saying something that isn't true, in exchange for the benefits of it saying other things that are more grounded in the current reality of experience.
