Like the reluctance of the folks working on DALL-E or Stable Diffusion to release their models or technology, or the whole set of restrictions on what they can be used for on their online services?
It makes me wonder when tech folks suddenly decided to become the morality police, and refuse to just release products in case the 'wrong' people make use of them for the 'wrong' purposes. Like, would we have even gotten the internet or computers or image editing programs or video hosting or what not with this mindset?
So is there anyone working in this field who isn't worried about this? Who is willing to just work on a product and release it to the public, restrictions be damned? Someone who thinks tech is best released to the public to do what they like with, not under an ultra-restrictive set of guidelines?
99% of those using these tactics use them to justify not releasing their models, to avoid giving the competition a leg up (Google, OpenAI) and to pretend they are for "open research". As I said, this is 100% bull.
The remaining 1% are either doing this to inflate their egos ("hey, look how considerate and enlightened we are in everything we do!"), or they pander to media/silly politicians/various clueless commentators whose level of knowledge about this technology is nil. They regurgitate the same set of "what ifs" and horror stories to scare the public into standing by while they attempt to over-regulate another field so they can be kingmakers within it (if you want an example of how this works, look at the energy sector).
All this silliness accomplishes is raising a barrier to entry for potential commercial competition. Bad actors will have enough money, and few enough scruples, to train their own models or to steal your best ones regardless of how "impact conscious" your company is.
Now, I don't claim everyone should be forced to publish their AI models. No, if you spent lots of money training your model, it is yours. But you can't lock all your work behind closed doors and call yourself open. It doesn't work like that. One important point: there is value even in just publishing a paper demonstrating some achievements of a proprietary model, but if the experiment can't be reproduced from the description given, that is not science, and it is certainly not open.
1. This is absolutely a gatekeeping, ladder-pulling measure. 2. The commercial face of this industry is rife with dogmas of self-importance. It’s nearly comical.
Add to this mix the USA trying to prevent China from copying cutting-edge AI technology, and this was bound to happen sooner or later.
It's no longer research; it's near market viability.
I'm not quite sure what the solution to this is, by the way. I don't think the answer is no regulation; the question is how to improve the quality of regulation.
This happens with every new technology.
Organised crime? If a big enough organised crime group wanted it you'd get stuffed into the back of a van.
CIA? MI6? You'd get stuffed into the back of a van. FSB? They'd invade the country to stuff you into the back of a van then bum 20 quid off you for diesel. Mossad? They'd break into your house and replace your computer with an identical fully-functional one made of polonium *while you were using it* and say they didn't do it, while showing AI-generated footage of a Russian doing it on TV and crediting you for the software they totally didn't steal.
Had a good chuckle. Didn’t know Tom Clancy frequented hacker news.
As you say, the big bads have means regardless, and probably don't need it because they're often big enough to hire a lot of humans to do e.g. propaganda for them.
Now, imagine a conspiracy theorist. Not a harmless moon landing denier, but someone who is convinced that pizzagate was a real thing, and gets AI to fake mountains of "evidence" until the politicians he or she is convinced did those things are lynched, or at the very least become politically toxic.
Think it can't happen? Fake porn is already being used to harm random women, and real pictures from veterinary text books have been used as fake "evidence" of animal abuse.
AI is a tool that can make anyone more competent, but it doesn't make us better or wiser or kinder.
The worst case scenario of the currently public models is why they want to take it slow.
It's like the lottery in one of the episodes of the TV show Sliders: you probably won't win, but if you do win, you die.
Unfortunately, most people are really bad at grokking probability, and, by extension, risk, especially in scenarios like this.
> Bad actors will have enough money, and few enough scruples, to train their own models or to steal your best ones regardless of how "impact conscious" your company is.
Indeed, totally correct.
But this is also on the list titled "why AI alignment is hard and why we need to go slow because we have not solved these things yet".
Saying "it doesn't matter if we keep this secret, someone else will publish anyway" is exactly the same failure mode as "it doesn't matter if we keep polluting, someone else will pollute anyway".
As for what they say to the public, well, fraudsters gonna fraud. It's a tale as old as time [1].
1: https://www.biography.com/news/famous-grifters-stealers-swin...
These models are capable of generating photorealistic child pornography. That's why they're not being released.
I have no evidence to back this up, other than anecdote. All the hand-wavy, vague posts from SD, DALL-E, etc. read like PR tactics to avoid getting the spotlight from the media and governments.
No one wants to post about this but I think everyone knows it's true, as soon as Congress realizes what these models can be used for there's going to be a moral panic.
EDIT: All of the replies are assuming I'm saying this is a good reason for not releasing these models. I don't say that anywhere nor do I agree with it.
That’s why this whole argument is BS. It panders to those that think these things are illegal because “ew that’s gross” without actually taking the time to argue how AI slots in ethically to their worldview. Discussions about AI license laundering content are far more relevant and interesting.
Is this true? Have you seen it, or just heard about it? Because what I've seen is that the AI is bad: the faces are wrong, the fingers are wrong, the hands and legs are often connected wrong, and you can very often get more than two legs or more than five fingers. I've seen a community that focuses on waifus; those images are not realistic, they are obviously cartoon or animation styles, and similar images were already being created with digital painting tools.
I also have not seen any porn scene generated by AI. I don't think the model was trained on porn scenes, so at best you will get puritans removing breasts from the models, including in art, but I'd sure like to see how they will ensure the AI can only generate male nipples and not female ones.
Twenty years ago we were free to do whatever we wanted, because it didn't matter. Nowadays everyone uses tech as much as they use stairs. You can't build stairs without railings though.
Keeping the window for abuse small is beneficial to the whole industry. Otherwise bad press will put pressure on politicians to "do something about it" resulting in faster and more excessive regulations.
Yes you can. Your hammer doesn't magically stop functioning when it discovers that you're building stairs without railings.
You don't want tools to discriminate on what you can and can't do with them, because if they can discriminate, then you will get hammers from Hammer Co that can only use nails from Hammer Co.
I deal with a moderate vision impairment and everything I do to make computers more usable is bespoke hacks and workarounds I’ve put together myself.
In MacOS, for example, I can’t even replace the system fonts with more readable options, or increase the font size of system tools.
Most software ships with fixed font sizes (Electron is exceptionally bad here; why is it that I can easily resize fonts in web browsers but not in Electron apps?), and increasingly new software doesn't render correctly at an effective resolution below 1080p.
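For what it's worth, Electron itself does ship browser-style page zoom on webContents; most apps just never expose it through a shortcut or menu item. A minimal sketch (TypeScript, for a hypothetical app; the window name and shortcut choices are made up, not any particular app's code) of what an Electron app could do:

    // Main process of a hypothetical Electron app: expose Ctrl/Cmd +/- zoom
    // the way a browser does, using the built-in webContents zoom factor.
    import { app, BrowserWindow, globalShortcut } from 'electron';

    app.whenReady().then(() => {
      const win = new BrowserWindow({ width: 1280, height: 800 });
      win.loadFile('index.html'); // whatever the app normally loads

      // Clamp zoom between 30% and 300% and step in 10% increments.
      const zoomBy = (delta: number) => {
        const next = Math.min(3.0, Math.max(0.3, win.webContents.getZoomFactor() + delta));
        win.webContents.setZoomFactor(next);
      };

      // globalShortcut is system-wide; a real app would likely use menu
      // accelerators instead, but this keeps the sketch short.
      globalShortcut.register('CommandOrControl+=', () => zoomBy(0.1));
      globalShortcut.register('CommandOrControl+-', () => zoomBy(-0.1));
    });

    app.on('will-quit', () => globalShortcut.unregisterAll());

The point is that the capability is already there; what's missing is the handful of lines (or respect for the OS text-scaling setting) that would actually put it in users' hands.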
Games don’t care at all about the vision impaired. E.g. RDR2 cost half a billion dollars to make, yet there is no way to size the fonts up large enough for me to read.
I welcome regulation if it means fixing these sorts of problems.
Substandard buildings, medical procedures, and cars maim and kill.
AI image generation is speech.
I won’t accept prior restraint on speech as being necessary or inevitable.
Sure, but I don’t think an AI model is speech, at least not one trained on billions of parameters and massive quantities of compute. Comparing it to regulated heavy machinery or architecture is apt.
You can’t create an AI model without huge, global industries working together to create the tools to produce said model. And baked in are all sorts of biases that, with widespread adoption, could have profound social consequences. These models aren’t easy to make, yet they are useful tools, so they are likely to see large amounts of adoption and use. Group prejudice in these models, whether by race, sex, religion, etc., jumps off the page, and black-box algorithms are fundamentally dangerous.
Speech is fundamental to the human experience, large ML models are not, and calling them speech is nuts.
But yea, if speech can become law, it can matter quite a bit. Do we really want DALL-E generated state laws?
People are already making txt2porn sites. I'm sure they will get crazier and creepier (from my boring vanilla perspective, not judging people with eclectic tastes) as time goes by.
"Self enforced limits" could also be an attempt to avoid formal governmental regulation.
CS as an industry is still in its infancy compared to other engineering disciplines. I'm sure if you went back to the 1880s, five or six decades after the start of the industrial revolution, there was very little limitation on the design of mechanical equipment. Now, there are all kinds of industry standards and government regulations. We could lament how much this stifles progress, but we're generally not even cognizant of the amount of risk it has reduced. For example, most people don't give a second thought to the idea of their water heater being capable of blowing up their house, because it happens so infrequently.
Code is information, and information wants to, and will be free.
Short of a North Korea like setup, regulations can only slow down the spread of information around the world.
AI is all fun and games when it's data, but if it's being used to make decisions about how to take actions in the physical world I think it's fair that it follows some protocols each society gives it. Making a picture of a cat from a learned model, writing a book with a model, cool, whatever. Deciding who gets their house raided, or when to apply the brakes on an EV, or what drugs should be administered to a patient, we probably want to make sure the apparatus that does this, which includes the code, the data and the models, is held to some rules a society agrees upon.
"A metal strip on the floor of Eurode Business Center marks the border between Germany and the Netherlands.
On one side of the building, there's a German mailbox and a German policeman. On the other side, a Dutch mailbox and a Dutch policeman."
https://www.npr.org/sections/money/2012/08/09/158375183/the-...
I'm not sure that's factual, but even if it were, built objects certainly do.
Cybersecurity is a mess and 100% the reason is "no skin in the game." If a car manufacturer promises or even implies "safety," and something bad happens, they get sued or they take real action.
The big tech companies must be held to do the same.
We have civil law. If a company harms you, you can sue. No need to create regulation to make markets less competitive.
The difference here is like not being able to own a hammer or car mechanic's tools for your own personal use, and only being able to use them under corporate guidelines/surveillance in a restricted area, which is ridiculous.
There are many examples of this.
- Non-proliferation folks who think they can actually rid the world of nukes. Will not happen.
- Does anyone seriously think they can stop human cloning, once it's technically feasible, from happening somewhere on the planet sooner or later? By fiat, by legislation, by moral appeals, etc? Will not happen. If clones can be made, clones will be made. Descriptive, not normative claim.
- AI-generated content has reached a certain point where we have to worry about a whole host of issues, most obvious being more sophisticated fakes. "Please think of potential consequences", ad-hoc restrictions, self-imposed or otherwise, are moot in the long run. It's part of our world now.
It looks to me like you're shifting the goalposts here: nonproliferation has effectively reduced the number of countries with access to nukes. Or is worrying about the number of direct military conflicts between nuclear-armed powers an example of what you call 'missing the point'?
I disagree with the idea that putting restrictions in place shouldn't be done because 'the problem exists'. The problem exists but that doesn't mean measures can't be taken to keep it manageable. I don't think the majority of people are in your intended demographic of wanting to stop the problem. Most just want to prevent exacerbating the problem.
AI image generation is not a build-your-own-weaponized-virus kit.
It’s a useful tool that can be used to produce creative expression. What people produce is up to them, and the fact that they might misuse their capacity for free speech isn’t an argument for curtailing it.
Let's take a current potential problem: a low-powered application capable of facial recognition. You can now strap that onto any number of dumb weapons and you've created a smart weapon.
In itself it's not a problem, until it starts happening a lot. If you think like a house cat, you tend to think that society owes you its existence and that you're the king of the hill. But if weapons proliferation occurs, all those ideas of "I have rights" go right out the window, and this loss of rights will be supported by the masses who don't want to get droned in the head while out on a date. The tyranny you want to avoid will be caused by the pursuit of absolute freedom.
As technology becomes more complex the line will blur even further. AI as a build your own 'terrible thing' will happen. Physics demands it, everything is just information at the end of the day.
Now it's up to you to avoid the worst possible outcomes between now and then.
CRISPR has changed a lot of things and makes it possible for an outsider, with $10,000 and a little dedication, to alter the genome of any living form.
https://www.ft.com/content/9ac7f1c0-1468-4dc7-88dd-1370ead42...
Not very good but:
a) the people who currently have this tech are not what I'd call trustworthy so why should I leave dangerous tech only in the hands of dangerous people?
b) it would probably kickstart a "build your own vaccine kit" industry
I think this is just fear of the unknown at work. Biomedical knowledge is complicated and requires effort to learn, so most consider it a known unknown, and therefore something to be feared. Some people do have such knowledge, so they too are to be feared, because who knows what nefarious intentions they have and what conspiracies they are part of. Therefore they are dangerous people using dangerous tech.
Were the physicists who discovered how to split the atom also dangerous people?
As for building your own vaccine: even large nations were not able to develop effective ones. It’s easier to put a bullet in someone than it is to take it out.
This is a scenario for a dystopian science fiction novel, as opposed to a rational plan for our children's future.
because handing it to everyone doesn't make things better? I don't like that Putin has nukes, but it's much better than Putin and every offshoot of Al-Qaeda having nukes.
Civilization-ending tech in the hands of powerful actors is usually subject to some form of rational calculus. Having it in the hands of everyone basically means it's game over. For a lot of dangerous technologies there is no 'vaccine' (in time).
That we have now run into a technology which makes many of _us_ uncomfortable should give you pause for thought and reflection.
So you can make pictures and 3d models from text descriptions. So you can get a voice to say something. But if you were determined to do bad things, you already could. It would be easy enough to hire an actor who sounds like Obama and make him say something outrageous. It would be easy enough to use Photoshop to make disgusting images.
Are you sure it's the capabilities you fear, and not the people who now for the first time will get access to them?
Are you sure "we", the wealthy, the technologically and educationally resourceful, the powerful, are so much better custodians?
Ultimately, I don't think we'll be able to keep the cat in the bag though. If nothing else, nation actors like Russia or China will get their hands on it and crank the propaganda machine with it. We might be better prepared if we just shorten the learning process and give everyone access. That might open some hope that we'll be able to adapt. It's a really scary dice roll though.
You mean like the online advertising industry? That shit has been making many of us uncomfortable since the early 2000s.
Now that the technology is sufficiently decentralized the morality police comes along.
But when morality suddenly is reinforced in an area where the same people espousing it are trying to rapidly earn billions of dollars, I am skeptical.
Transformers are a form of translation and information compression, ultimately.
The morality seems to me at this point a convenient smokescreen to hide the fact that these companies are not actually open source, that they are not there for the public benefit, and that they are not very different to venture-backed businesses that have come before.
What is the risk of open-sourcing the product? Very few individuals could assemble a dataset or train a full competitive model on their own hardware. So not really a competitive risk there. But every big corp could.
The morality angle protects the startups from the big six. SD is a product demo. I view it the same way at the highest level as an alpha version of Google translate.
And that they’re buggy and hard to fix and generally more limited than the buzz would have you believe.
Publicly talking up high-minded morality also cynically keeps the money coming in :)
I suspect that quite a lot of this caution is driven by Google and other large companies which want to slow everyone else down so they can win the new market on this tech. The remaining part of the caution appears to come from our neo-puritan era where there is a whole lot of pearl clutching over everything. Newsflash, humans are violent brutes, always have been.
You might be right in the second paragraph about the motivations for slowing this down. There clearly are reasons to be cautious here though, even if this isn't the real reason for the current caution.
Why should only the few have access to such a technology? Because some people will use it for naughty things? And that's what we are talking about here, about whether a minority should have permissioned access to a new technology, and particularly one that cannot directly actually cause physical harm. I can glean motivations surrounding all this from that observation alone.
This isn't even a hypothetical but a reality celebrities already face to some extent.
I'd say the usage of the tech will stabilize, but it's also the case that we have dark days ahead of us.
Your discomfort is not enough reason to stop the free speech of everyone else. It never was, and never will be.
The most effective way to deal with the damage of deepfakes is to make them ubiquitous and accessible. The more familiar people are with the ability to create deepfake content, the more prepared they are to think critically and distrust it.
The average person already knows that still images aren't flawless evidence. The world didn't fall apart after the first release of Photoshop.
Also, you seem to be affected by American sensibilities. If your AI decides to go full heil in Germany, you may find that the authorities have a lot to talk about with you.
However, as a person who has been closely following the developments in this field, I share a similar perspective to a few of the other commentators here. Most of the noise is just virtue-signalling to deflect scrutiny and/or protect business interests. Scrutiny from governments is something we absolutely do not want right now.
Humanity is on a path towards artificial general intelligence. Today, the concerns are "what about the artists" and "what if people make mean/offensive things"? As we start to develop AI/ML systems that do more practical things we will start to trend towards more serious questions like "what about everybody's job?". These are the things that will get governments to step in and regulate.
There is a pathway to AGI in which governments and corporations end up with a monopoly on it. I personally view that as a nightmare scenario, as AGI is a power-multiplier the likes of which we've never seen before.
It's important that current development efforts remain mostly unfettered, even if one has to put on a "moral" facade. The longer it takes for governments to catch on, the less likely it will be that they will manage to monopolize the technology.
https://www.livescience.com/59809-horsepox-virus-recreated.h...
As far as AI, maybe the immediate risks aren't quite so dramatic, but it's going to create a real lack-of-trust problem with images, video, and data in general. Manipulated still photographs are already very difficult, if not impossible, to detect, and there's an ongoing controversy over whether they are admissible in court. AI modification of video is steadily getting harder to identify, and there are already good reasons to suspect the veracity of video clips put out by nation-states as evidence for their claims (who likely already have unrestricted access to the necessary technology - for example, Iran recently released a suspicious 'accidental death' video of a woman arrested for not covering her head, which could be a complete fabrication).
Similarly, AI opens the door to massive undetectable research fraud. Many such incidents in the past have been detected as duplicated data or copied images, but training an AI on similar datasets and images to create original frauds would change all that.
A more alarming application is the construction of AI-integrated drones capable of assassinating human beings with zero operator oversight: just load the drone with a search image and facial recognition software, and then launch and forget, which doesn't sound like that good of an idea. Basically Ray Bradbury's Mechanical Hound in Fahrenheit 451, only airborne.