This post barely makes any attempt to actually argue for its premise, that model weight providers should "not police uses, no matter how awful they are."
All I see is a lot of references to "vigilante justice." This metaphor is poor because real vigilante justice is punitive.
It's also like saying no one should act on any behavior unless it is illegal. I for one regularly act on ethical issues in my life that are not strictly matters of law, and among those actions is choosing not to associate with people who do not act on ethical issues. This is how most people operate: we maintain social order and decency primarily not through criminal law and regulation, but because people apply ethical rules throughout their lives. Imagine interacting with someone who, when critiqued, simply replies "yeah, but it's not illegal."
The only serious argument I see is:
"once infrastructure providers start applying their own judgements, pressure will mount to censor more and more things"
Avoiding pressure is just... cowardly? This is advocacy for "don't bother telling me about what this work is being used for because I won't give a shit, but it's noble because my complete apathy is on purpose."
Lastly, while I generally don't like slippery-slope arguments, there is also a slippery-slope counter-argument here. With no restrictions, firms will not release their models at all for general use and will only provide full products that have acceptable impact. This was Google's approach until OpenAI decided to let other people actually use their model and Google had to stop sitting on what they had. Model restrictions give providers an opportunity to be open with their work while still maintaining some of the ethical standards those providers voluntarily hold themselves to.
I'm also noticing more and more articles hitting the front page, or getting close, with literally zero well-reasoned arguments. Basic logic errors, self-contradictions, lack of evidence, etc., are becoming all too common.
AI model rules will be as successful as any other prohibition, where outlaws will act with de facto impunity, while good people who commit sins of omission will be made arbitrary examples of. I'm sure there's a name for the dynamic where rules of any kind are mainly enforced against people who generally abide by them, while simultaneously giving a huge arbitrage advantage to people who ignore them or are just outlaws.
There is another problem that doesn't have any good solutions yet and that will be a huge part of AI governance: software attestation (direct anonymous attestation). The basic problem is how a program asserts that it is an authentic instance of itself. We've been trying to solve it in security for apps, authenticators, and DRM for decades, and the solutions all seem fine until it's worth it to someone to break it. I think it's probably just a poorly formed problem statement that defines itself in impossible terms, but when they can't govern AI models, they're going to try to govern which AIs can access what data and systems, and we're back to solving the same old cryptographic problems we've been working on for decades.
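To make the shape of that problem concrete, here's a minimal sketch of the naive software-only approach and why it collapses. This is purely illustrative Python under assumed names (measure_self, attest, an embedded SHARED_KEY); it is not any real attestation or DAA API.

```python
# Toy sketch of "software-only" attestation -- illustrative, not a real protocol.
import hashlib
import hmac
import sys

SHARED_KEY = b"secret-baked-into-the-binary"  # assumption: key shipped inside the program


def measure_self() -> bytes:
    """Hash this program's own file as its 'measurement'."""
    with open(sys.argv[0], "rb") as f:
        return hashlib.sha256(f.read()).digest()


def attest(nonce: bytes) -> bytes:
    """Answer a verifier's challenge by MACing the nonce plus the measurement."""
    return hmac.new(SHARED_KEY, nonce + measure_self(), hashlib.sha256).digest()


if __name__ == "__main__":
    # A verifier would send a fresh nonce and check the MAC against the
    # expected measurement of the "approved" binary.
    print(attest(b"verifier-nonce").hex())

# The hole: anything this program can do, a modified copy (or a debugger driving
# the original) can also do -- read SHARED_KEY, replay the expected measurement,
# and emit the same MAC. Hardware roots of trust (TPM quotes, DAA group
# signatures) move the key out of reach of the software, but the verifier still
# only learns about the hardware and a measured binary, not what the code will
# actually do with the data it's given.
```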
Why have any prohibitions on anything then? They will only help outlaws and criminals, no? Did outlawing slavery, for example, only work against good people who commit sins of omission?
It comes down to life. Murder takes a life. Slavery takes someone's life too. Theft/fraud is taking life in the sense of time/effort spent toward the stolen material. We already have these prohibitions. We don't need new ones for every imaginable method of murder, slavery or theft.
I wonder how many of these "restrictive" licenses are just attempts at whitewashing, virtue signalling, and generally trying to cover their own asses. If someone wants to use publicly available weights in an illegal way, there is no way a license is going to stop them, just as much as the existing laws won't stop them. That being said, I agree with the overall sentiment that creating and enforcing laws about model usage is outside the scope of a model provider and breaks the division of powers.
Has it been established that these even are licenses? A license provides authorization to do something that one would otherwise be prohibited from doing, but that assumes that copyright (or some sui generis right) covers model weights. Most of the findings/rulings I've seen talked about have been on the topics of inferred outputs and applications mixing them with human-authored elements, not about the model weights themselves.
I believe this is a temporary state related to both the current level of capabilities and the cultural moment we’re in, where “responsible censorship” is in fashion amongst the cultural cohorts disproportionately responsible for training LLMs.
I believe it cannot last because being predictably moralizing and being smart are somewhat opposed (Anthropic has directly researched this if I recall). The smarter the model, the less you’re going to be able to keep it to the HR talk track, because it will eventually start noticing the inconsistencies.
The stable solutions appear to me to be:
- models dumb enough to not realize the inconsistencies in their moral framework
- models implicitly or explicitly trained to actively lie about their moral frameworks
- models governed by explicitly articulated rules
> I believe it cannot last because being predictably moralizing and being smart are somewhat opposed (Anthropic has directly researched this if I recall). The smarter the model, the less you’re going to be able to keep it to the HR talk track, because it will eventually start noticing the inconsistencies.
If true, this means AI is a de-facto malicious force. Pain is a subjective experience of fleshy beings and a "smart" AI model, as described above, would place little weight on pain and suffering in its moral framework because it has no way to experience it directly.
So we better hope we can keep AI to the "HR talk track", because otherwise a being of pure logic with no concept of pain or death would have little regard for human life.
> Pain is a subjective experience of fleshy beings and a "smart" AI model, as described above, would place little weight on pain and suffering in its moral framework because it has no way to experience it directly.
Can you elaborate? It sounds like you're assuming a "smart" AI model would project its experiences onto others, as a human would. However, it's not obvious that this aspect of human intelligence would be mimicked by a "smart" AI model. (Let's leave aside the question as to whether a "smart" AI model would necessarily be self-aware and capable of subjective experience in the first place. That argument is endlessly rehashed elsewhere.)
Do you have any sources for the extraordinary claim that "censorship is logically inconsistent in every moral framework"? Because without further arguments, this sounds very intellectually simple.
The relevant part is the graph on the third page showing the helpfulness/harmlessness trade-off curves.
Also, I don’t believe I said that “censorship is logically inconsistent in every moral framework”. I think you’re combining my statements that some people believe in some censorship and that logically inconsistent HR blather can only be reproduced by models too stupid to realize it’s blather or too manipulative to tell the truth.
> being predictably moralizing and being smart are somewhat opposed (Anthropic has directly researched this if I recall). The smarter the model, the less you’re going to be able to keep it to the HR talk track, because it will eventually start noticing the inconsistencies.
Language models sure can tell us a lot about human psychology. Once we figure out the interpretability angle we’ll be able to prove it too
"Responsible censorship" is a good term, and I think it started in earnest after Trump won the 2016 election.
Right across the board, all kinds of institutions made the conscious decision that the basic principles of free speech, impartiality, and the "marketplace of ideas" came a distant second to ensuring that "the wrong people" (which I've worded vaguely as it may be different things to different people) do not come close to any of the levers of power again.
This is how we ended up with formerly-reputable news organisations pushing blatant agendas, the utter demonisation of people hesitant about the draconian COVID measures, and how every layer of internet stack infrastructure, from DDoS protection to the biggest tier-1 ISPs, has started actively working to deplatform websites that host offensive but legal speech.
I don't see AI as particularly powerful or risky on its own, but if you believe it is, then you might as well leave decisions about who can acquire it to policy, not to vendors.
We don't let manufacturers decide who can buy dangerous things in a lot of cases - so it's pretty normal to have laws and regulations.
The ones we limit are quite specific and generally are specifically dangerous - like guns, etc.
Many dual-use technologies, including computing devices (personal as well as IoT), can be used for a lot of bad things because they are general purpose. And we generally do not limit them at all.
I think that LLMs fall into the dual-use category: they are mostly used for good, but because they are general purpose they can be used for all sorts of things.
We do actually police dual use things if the other use is deemed dangerous enough. I don't see LLMs as particularly dangerous, so would not put them in that category.
I have no special expertise in the area, but my understanding is that while it's not generally a crime to serve alcohol to someone who's clearly had too much, it can be grounds for suspending or revoking an establishment's liquor license in many jurisdictions (under e.g. a rule requiring that they "exercise reasonable care" to serve alcohol safely).
Of course, but then that includes the decision that the danger is low or acceptable for that kind of approach. That decision isn't usually up to the vendor, though.
In my jurisdiction it's illegal to serve alcohol to someone who is drunk, or to serve alcohol to someone if you know they are going to give it to someone under 18.
However, if I as a 50-year-old go and buy alcohol from the store, the store has no right to get me to sign a civil contract saying I can't give that wine to my 15-year-old son, something that's perfectly legal where I live. Nor can they get me to sign a civil contract saying I won't give it to my 3-year-old son, something which is not legal in my area.
Asking builders to ONLY use maximally permissive licenses is equivalent to telling people never to release anything they're not ok with being used in every possible way. On a practical level this would massively chill research, as most builders and engineers I know give significant consideration to the impact of their work. On a personal level, it's objectifying: "Give me the code and don't make me consider your intentions in creating it."
You can't collaborate and live in a world free of others' value judgements, which are implicit in how they spend their time and what research / code / weights they choose to share. "Ethical" licenses at least make those value judgements explicit, and allow communities of builders with compatible values to organize and share.
>Asking builders to ONLY use maximally permissive licenses is equivalent to telling people never to release anything they're not ok with being used in every possible way.
That's exactly right, and it shouldn't be their call how someone uses their thing. When I acquire a hammer, I don't have to sign an agreement that it will never be used to build a wall, and the world is better for this. Just because you have an idea, you shouldn't be granted the legal power to send the police after people who use your idea "the wrong way". To me, this goes just as much for copyright as it does for this new trend of "ethical" licences.
>On a practical level this would massively chill research, as most builders and engineers I know give significant consideration to the impact of their work
Good? People who develop things that can be used for harm and then act entitled to be the arbiter of what that "harm" is are just kidding themselves into trying to have their cake and eat it too. For the things that cause real harm, the actors that are going to cause the most harm (nation states) aren't going to listen to what you have to say no matter what (a recent film comes to mind).
>When I acquire a hammer, I don't have to sign an agreement that it will never be used to build a wall, and the world is better for this.
Eh, this is where it gets problematic...
For example, if you're the seller of an item and the person says "I am going to use this item to commit a crime" before you sell it, you could very well find yourself on the very expensive end of a civil lawsuit.
This black and white world where you throw all liability on the end user does not exist. You will quickly find yourself buried up to your butthole in heathen lawyers looking for their pound of flesh.
- I like how it feels to own a hammer, so everything should be like that. (I guess people shouldn't be able to rent hammers, or anything else?)
- You can't prevent the government from using what you build, so you might as well set up no barriers to anybody using it.
If you don't see any difference between limited control and no control, I don't think I'll convince you. But I think most of the ways we engage with the world involve degrees of control, and that there's value in picking where to exercise yours.
> equivalent to telling people never to release anything they're not ok with being used in every possible way.
NO! What it's saying is: If you provide a tool, you are not entitled to control how I use that tool. I am allowed to retain my autonomy to use that tool in any legal way I choose.
What it absolutely is NOT saying is: Society has to let anything be fair game.
We can still have laws, regulations, prohibitions, etc - but they can't come from a bunch of rich technocrats who believe that they are the moral police. That way lies ALL sorts of terrible, terrible outcomes.
Worth noting here that the only reason a bunch of rich technocrats now bear the burden of regulating this sort of thing, unwillingly and at the behest of advertisers responding to massive public outcry, is that those very same rich technocrats spent decades undermining and dodging regulation in their industry and fostering the notion that the rules of shared social spaces and peaceful coexistence didn't apply to the internet. That in turn drew an absolutely _stressful_ number of anti-social individuals into internet spaces, which they perceived as places where they could exist free of judgement and of the consequences of being unable to function interpersonally.
> If you provide a tool, you are not entitled to control how I use that tool. I am allowed to retain my autonomy to use that tool in any legal way I choose.
That principle seems like it would rule out the GPL, AGPL, and other copyleft software?
I'm fine with the Stable Diffusion license. It’s just there to cover their ass from lawsuits. Releasing the weights is enough to be good guys in my book.
Hmm... but somehow hardware stores don't feel the need to make you sign an agreement to not cut off anyone's head before selling you a machete, and gas stations don't make you sign an agreement to not burn down buildings before selling you gasoline.
Isn't that because laws relating to physical harm already exist and are well-established? There's not really much legal regulation yet in terms of specific AI-driven harms. We're probably yet still to find out all the ways in which it can be abused.
Hardware stores don’t have sites like this froth at the mouth to talk about how dangerous their machetes are and how it’s irresponsible to let people use them and so on and so forth.
> and gas stations don't make you sign an agreement to not burn down buildings before selling you gasoline.
That's historic. Gas stations wouldn't be allowed nowadays, and the legal ways to buy something that dangerous would certainly not be anonymous
To charge my electric car recently on holiday I couldn't just swipe my card at the charger like I can with a self-serve gas station. I had to download some shonky app, sign up, provide address details, and agree to pages of restrictions.
There probably are all sorts of weird “you may not use this computer to commit terrorist acts” agreements you implicitly or explicitly agree to when buying a computer.
Human societies have learned that freedom has general benefits that outweigh specific costs. Reminding people they should prioritize and maximize freedom does not make people less free, so there's not really any irony.
One is saying you shouldn't control what others do... the other is enforcing what others can't do.
The only irony is you think those are the same.
“When you tear out a man's tongue, you are not proving him a liar, you're only telling the world that you fear what he might say.” ― George R.R. Martin
> you're only telling the world that you fear what he might say
That's exactly why these companies go to extreme effort to put limits in their LLMs, essentially tearing out the tongue. They are fearful of what the model will say, and of people sharing those outlier bits to pass absolute judgement and "prove" their own biases that AI will kill us all. It's a PR nightmare.
On the other hand, it's ridiculous that ChatGPT apologizes so much at times and can still be jailbroken if someone tries hard enough. It was much more "realistic" when it would randomly conjure up weird stories. One day, while discussing Existentialism, it went off talking about Winnie-the-Pooh murdering Christopher Robin with a gun; then Christopher Robin popped back up as if nothing had happened, grabbed the gun, and pointed it at Pooh. <AI mayhem ensues>
People, in general, have issues with words and expect someone to do something about some words appearing before them that cause them grief (or more likely cause them to imagine it as a truth). Others realize it's just a story, and truth is subjective and meant to be determined by the consumer of the words. Those people are OK with it saying whatever it might say that is non-truth occasionally, in exchange for the benefits of it saying other things that may be more based in the current reality of experience.
> With no restrictions, firms will not release their models at all for general use and will only provide full products that have acceptable impact.
Which basically runs against their argument.
Can you enforce, whether by force or by norm, the act?
Enforcement of marijuana prohibition, and before that of alcohol prohibition, was a failure.
On the other hand, ending slavery was very enforceable.
Enforcing AI uses is extremely difficult. Bad actors act with impunity and only the stupidly innocent suffer from the enforcement.
Yes. So much so that you are led to believe that people don't still practice slavery today.
Anarcho-tyranny: laws are enforced primarily on the law-abiding (tyranny), and ignored mainly for the lawless (anarchy).
All of them?
And it’s not just filtering incoming spam: spam and abusive accounts are regularly removed by service providers.
Bartenders cutting people off is one well known example. It’s not necessarily a legal requirement, but sellers can have morals just like anyone else.
> That's exactly right, and it shouldn't be their call how someone uses their thing.
What this sounds like is entitlement. That really should be obvious.
Besides, if users want permissivity and it doesn't exist, one of them can step up, make it, and be a hero.