Readit News
Posted by u/CM30 3 years ago
Ask HN: Is Anyone Else Tired of the Self Enforced Limits on AI Tech?
Like the reluctance of the folks working on DALL-E or Stable Diffusion to release their models or technology, or the restrictions on what they can be used for on their online services?

It makes me wonder when tech folks suddenly decided to become the morality police, and refuse to just release products in case the 'wrong' people make use of them for the 'wrong' purposes. Like, would we have even gotten the internet or computers or image editing programs or video hosting or whatnot with this mindset?

So is there anyone working in this field who isn't worried about this? Who is willing to just work on a product and release it for the public, restrictions be damned? Someone who thinks tech is best released to the public to do what they like with, not under an ultra-restrictive set of guidelines?

Roark66 · 3 years ago
For anyone who has actually used those models for more than a few days and learned their strengths and weaknesses, it is completely obvious that all this talk of "societal impact", or as you called it, self-imposed limits, is 100% bulls**. Everyone in the field knows it.

99% of those using this tactic use it to justify not releasing their models, to avoid giving the competition a leg up (Google, OpenAI) and to pretend they are for "open research". As I said, this is 100% bull.

The remaining 1% are either doing this to inflate their egos ("hey look how considerate and enlightened we are in everything we do!"), or they pander to media/silly politicians/various clueless commentators whose level of knowledge about this technology is nil. They regurgitate the same set of "what ifs" and horror stories to scare the public into standing by while they attempt to over-regulate another field so they can be kingmakers within it (if you want an example of how this works, look at the energy sector).

All this silliness accomplishes is to raise a barrier to entry for potential commercial competition. Bad actors will have enough money/lack of scruples to train their own models or to steal your best ones regardless of how "impact conscious" your company is.

Now, I don't claim everyone should be forced to publish their AI models. No, if you spent lots of money on training your model, it is yours. But you can't lock all your work behind closed doors and call yourself open. It doesn't work like that. One important point is that there is value even in just publishing a paper demonstrating some achievements of a proprietary model, but if the experiment can't be reproduced based on the description given, that is not science, and it is certainly not open.

no-dr-onboard · 3 years ago
As someone who has worked for OpenAI and with Google as a consultant, I completely and wholeheartedly agree.

1. This is absolutely a gatekeeping, ladder-pulling measure.
2. The commercial face of this industry is rife with dogmas of self-importance. It's nearly comical.

chuckster563 · 3 years ago
These models cost millions of dollars to create, so the companies have every incentive to keep them to themselves when the technology has the ability to change an industry, as AlphaFold does (which alone could have been a multimillion-dollar company).

Add to this mix the USA trying to prevent China from copying cutting-edge AI technology, and this was bound to happen sooner or later.

It's no longer research; it's near market viability.

f0e4c2f7 · 3 years ago
There is a move large companies do where after innovating they pull the ladder up behind them by trying to drag regulators into the space. Regulation establishes a moat and allows them to cruise on that innovation until enough external pressure builds up that the dam breaks (people innovating in countries without that regulation for example).

I'm not quite sure what the solution to this is, by the way. I don't think the answer is no regulation; the question is how to improve the quality of regulation.

imnotreallynew · 3 years ago
Is there any evidence that companies (or rather senior staff at those companies) knowingly do that, or is it simply that once some innovation results in a new, large market, a government tries to get involved?
dd36 · 3 years ago
Name a large company that has successfully done that recently?
silvestrov · 3 years ago
Think of how dangerous it will be if everybody can write a book. We must limit book writing to only educated and responsible people with the proper training in ethics and morals. /s

This happens with every new technology.

loa_in_ · 3 years ago
Except nobody has issues when the tech is meant to exploit somebody but isn't public. The morality angle disappears with sunlight
Gordonjcp · 3 years ago
Like what kind of "bad actors" are they even talking about when they say they don't want their code to fall into the wrong hands?

Organised crime? If a big enough organised crime group wanted it you'd get stuffed into the back of a van.

CIA? MI6? You'd get stuffed into the back of a van. FSB? They'd invade the country to stuff you into the back of a van then bum 20 quid off you for diesel. Mossad? They'd break into your house and replace your computer with an identical fully-functional one made of polonium *while you were using it* and say they didn't do it, while showing AI-generated footage of a Russian doing it on TV and crediting you for the software they totally didn't steal.

no-dr-onboard · 3 years ago
> while you were using it and say they didn't do it

Had a good chuckle. Didn’t know Tom Clancy frequented hacker news.

ben_w · 3 years ago
Small and medium-sized bad actors.

As you say, the big bads have means regardless, and probably don't need it because they're often big enough to hire a lot of humans to do e.g. propaganda for them.

Now, imagine a conspiracy theorist. Not a harmless moon landing denier, but someone who is convinced that pizzagate was a real thing, and gets AI to fake mountains of "evidence" until the politicians he or she is convinced did those things are lynched, or at the very least become politically toxic.

Think it can't happen? Fake porn is already being used to harm random women, and real pictures from veterinary text books have been used as fake "evidence" of animal abuse.

AI is a tool that can make anyone more competent, but it doesn't make us better or wiser or kinder.

ben_w · 3 years ago
The actual quality of the currently public models is why they decided to release them.

The worst case scenario of the currently public models is why they want to take it slow.

It's like the lottery in one of the episodes of the TV show Sliders: you probably won't win, but if you do win, you die.

Unfortunately, most people are really bad at grokking probability and, by extension, risk, especially in scenarios like this.

> Bad actors will have enough money/lack of scruples to train their own models or to steal your best ones regardless of how "impact conscious" your company is.

Indeed, totally correct.

But this is also on the list titled "why AI alignment is hard and why we need to go slow because we have not solved these things yet".

Saying "it doesn't matter if we keep this secret, someone else will publish anyway" is exactly the same failure mode as "it doesn't matter if we keep polluting, someone else will pollute anyway".

moralestapia · 3 years ago
It's profit all the way-o. Simple as.

As for what they say to the public, well, fraudsters gonna fraud. It's a tale as old as time [1].

1: https://www.biography.com/news/famous-grifters-stealers-swin...

hintymad · 3 years ago
As history repeatedly shows, righteousness works really well. I don't have to painstakingly make nuanced arguments. Instead, I can simply cite "social responsibility" or whatever social-justice cause is in fashion, or simply attack my critics' morality.
throwaway382938 · 3 years ago
Personal opinion:

These models are capable of generating photorealistic child pornography. That's why they're not being released.

I have no evidence to back this up, other than anecdotal. All the hand-wavy, vague posts from SD, DALL-E, etc. read like PR tactics to avoid getting the spotlight from the media and governments.

No one wants to post about this, but I think everyone knows it's true; as soon as Congress realizes what these models can be used for, there's going to be a moral panic.

EDIT: All of the replies are assuming I'm saying this is a good reason for not releasing these models. I don't say that anywhere nor do I agree with it.

Salgat · 3 years ago
Photoshop can achieve the same thing. So can women hired by pornography companies that are 18 but look younger and intentionally dress like school girls. This excuse is nonsense in the context of what's already available.
dcow · 3 years ago
Let’s also remember why CP is illegal in the first place: because children are manipulated and ultimately harmed in order to create it. I don’t see an ethical or moral problem with digitally generated content of that nature since children aren't being harmed. Am I missing something?

That’s why this whole argument is BS. It panders to those that think these things are illegal because “ew that’s gross” without actually taking the time to argue how AI slots in ethically to their worldview. Discussions about AI license laundering content are far more relevant and interesting.

can16358p · 3 years ago
Using the same logic, let's ban the sale of kitchen knives; they can be used for stabbing people.
simion314 · 3 years ago
>These models are capable of generating photorealistic child pornography. That's why they're not being released.

Is this true? Have you seen it, or just heard about it? Because what I've seen is that the AI is bad: the faces are wrong, the fingers are wrong, the hands and legs are often connected wrong, and you very often get more than 2 legs or more than 5 fingers. I've seen a community that focuses on waifus; those images are not realistic, they are obviously cartoon/animation styles, and similar images were already being created with digital painting tools.

I also have not seen any porn scene generated by AI. I don't think the model was trained on porn scenes, so at best you will get puritans removing breasts from the models, including art, but I certainly want to see how they will ensure that the AI can only generate male nipples and not female ones.

WhatsName · 3 years ago
Let's be realistic: just like building codes, medical procedures and car manufacturing, sooner or later we will also be subject to regulations. The times when hacking culture and tech were left unbothered are over.

Twenty years ago we were free to do whatever we wanted, because it didn't matter. Nowadays everyone uses tech as much as they use stairs. You can't build stairs without railings though.

Keeping the window for abuse small is beneficial to the whole industry. Otherwise bad press will put pressure on politicians to "do something about it" resulting in faster and more excessive regulations.

Aerroon · 3 years ago
>You can't build stairs without railings though.

Yes you can. Your hammer doesn't magically stop functioning when it discovers that you're building stairs without railings.

You don't want tools to discriminate on what you can and can't do with them, because once tools can discriminate, you will get hammers from Hammer Co that can only use nails from Hammer Co.

jevgeni · 3 years ago
Do you want to take a test and get a license to work on AI? Because that's how you are going to get there.
jnovek · 3 years ago
This is particularly visible with the sorry state of accessibility options for disabled individuals.

I deal with a moderate vision impairment and everything I do to make computers more usable is bespoke hacks and workarounds I’ve put together myself.

In macOS, for example, I can't even replace the system fonts with more readable options, or increase the font size of system tools.

Most software ships with fixed font sizes (Electron is exceptionally bad here — why is it that I can easily resize fonts in web browsers but not in Electron apps?) and increasingly new software doesn't render correctly with an effective resolution below 1080p.

Games don't care at all about the vision impaired. E.g. RDR2 cost half a billion dollars to make, yet there is no way to size fonts up large enough for me to read.

I welcome regulation if it means fixing these sorts of problems.

concordDance · 3 years ago
You don't need regulation, you need technologies that can be freely modified by their users and the changes distributed.
catiopatio · 3 years ago
This is paternalistic, overbearing, culturally-corrosive nonsense.

Substandard buildings, medical procedures, and cars maim and kill.

AI image generation is speech.

I won’t accept prior restraint on speech as being necessary or inevitable.

riversflow · 3 years ago
> AI image generation is speech.

Sure, but I don't think an AI model is speech, at least not one with billions of parameters trained using massive quantities of compute. Comparing it to regulated heavy machinery or architecture is apt.

You can't create an AI model without huge, global industries working together to create the tools needed to produce said model. And baked in are all sorts of biases that, with widespread adoption, could have profound social consequences. Precisely because these models aren't easy to make, the few that exist are likely to see a large amount of adoption and use; they are useful tools after all. Group prejudice in these models jumps off the page here, whether race, sex, religion, etc., and black-box algorithms are fundamentally dangerous.

Speech is fundamental to the human experience, large ML models are not, and calling them speech is nuts.

kuramitropolis · 3 years ago
Speech can get people to maim and kill, too. Sometimes with surprising efficiency. And when a technology is outlawed, the power that it bestows upon humanity concentrates in the outlaws. For sure, information and communication technology are an interesting edge case of that general principle.
nixpulvis · 3 years ago
One day we will crack the "platforming" issue. Until that day, free speech remains under attack, for reasons I cannot wrap my head around completely. It is pervasive though, to the point it feels like a gag at times.

But yea, if speech can become law, it can matter quite a bit. Do we really want DALL-E generated state laws?

galangalalgol · 3 years ago
Also, I thought Stable Diffusion did release their models and methodology? You just need a 3080 with enough RAM to do the inference with no boundaries, and if you have the money and time you can train new models.
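For anyone wondering what "just run it locally" looks like in practice, here's a minimal sketch assuming the Hugging Face diffusers library and PyTorch on a CUDA GPU; the specific model ID and prompt are only illustrative, not something from this thread:

```python
# Minimal local Stable Diffusion inference sketch.
# Assumes the `diffusers` and `torch` packages are installed and a ~10 GB GPU (e.g. a 3080).
import torch
from diffusers import StableDiffusionPipeline

# The model ID below is an assumption; any publicly released SD checkpoint works the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision so the weights fit in consumer VRAM
)
pipe = pipe.to("cuda")

# Turn a text prompt into an image and save it locally.
image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```

Training or fine-tuning a new model is a much bigger job than this inference loop, which is presumably the "money and time" part.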

People are already making txt2porn sites. I'm sure they will get crazier and creepier (from my boring vanilla perspective, not judging people with eclectic tastes) as time goes by.

thfuran · 3 years ago
Sometimes they just inconvenience one group of people or fail to sufficiently support local industry and those aspects are regulated too. Anything big enough is going to get regulatory scrutiny in one way or another because it's moving around a lot of money and affecting a lot of people.
colinmhayes · 3 years ago
Well you're not the decision maker here. If the public gets sufficiently upset about ai generated lookalike porn these models will be regulated and maybe banned. And as you seem to realize that doesn't mean people will stop using them, it just means the companies that make them will stop getting paid. Obviously this is what they're trying to avoid, and the morality stuff is just because they don't want to say "we're worried about being regulated."

datavirtue · 3 years ago
It's not speech. It's data.
rubidium · 3 years ago
You're free, bud. You can make it and release it. But you have no standing to just complain about it.
bumby · 3 years ago
>just like building codes, medical procedures and car manufacturing...

"Self enforced limits" could also be an attempt to avoid formal governmental regulation.

CS as an industry is still in its infancy compared to other engineering disciplines. I'm sure if you went back to the 1880s, five or six decades after the start of the industrial revolution, there was very little limitation on the design of mechanical equipment. Now, there are all kinds of industry standards and government regulations. We could lament how much this stifles progress, but we're generally not even cognizant of the amount of risk it has reduced. For example, most people don't give a second thought to the idea of their water heater being capable of blowing up their house, because it happens so infrequently.

zarzavat · 3 years ago
Buildings don’t cross borders.

Code is information, and information wants to be, and will be, free.

Short of a North Korea like setup, regulations can only slow down the spread of information around the world.

ehnto · 3 years ago
Code is also machinery and infrastructure, though; it can interact with the physical world in material ways, and because of that it will probably end up regulated.

AI is all fun and games when it's data, but if it's being used to make decisions about how to take actions in the physical world I think it's fair that it follows some protocols each society gives it. Making a picture of a cat from a learned model, writing a book with a model, cool, whatever. Deciding who gets their house raided, or when to apply the brakes on an EV, or what drugs should be administered to a patient, we probably want to make sure the apparatus that does this, which includes the code, the data and the models, is held to some rules a society agrees upon.

z9znz · 3 years ago
> Buildings don’t cross borders

"A metal strip on the floor of Eurode Business Center marks the border between Germany and the Netherlands.

On one side of the building, there's a German mailbox and a German policeman. On the other side, a Dutch mailbox and a Dutch policeman."

https://www.npr.org/sections/money/2012/08/09/158375183/the-...

freedom2099 · 3 years ago
Information is just that… it has no will nor wants.
bumby · 3 years ago
We regulate all kinds of things that cross borders.
oriolid · 3 years ago
Drugs, dangerously poor-quality consumer products, and other unwanted stuff do cross borders, and most countries are making efforts to stop them too.
ouid · 3 years ago
slow might be nice
bigbillheck · 3 years ago
> Buildings don’t cross borders

I'm not sure that's factual, but even if it were, built objects certainly do.

jrm4 · 3 years ago
The sooner the better.

Cybersecurity is a mess and 100% the reason is "no skin in the game." If a car manufacturer promises or even implies "safety," and something bad happens, they get sued or they take real action.

The big tech companies must be held to do the same.

hellojesus · 3 years ago
Regulation that only massive players can abide by effectively kills any small or poorly funded business.

We have civil law. If a company harms you, you can sue. No need to create regulation to make markets less competitive.

CM30 · 3 years ago
You can do any of those things for your personal use; the difference is that you can't legally do things like build stairs without railings when taking on construction jobs for clients or customers. The tools you use don't care whether what you're doing is legal or not, they work the same way regardless. It's just that there are (rightfully) legal restrictions on what you can do with products or services you sell to the public.

The difference here is like not being able to own a hammer or car mechanic's tools for your own personal use, and only being able to use them under corporate guidelines/surveillance in a restricted area, which is ridiculous.

SpicyLemonZest · 3 years ago
There are many such restrictions on tools and components which could be dangerous to people other than you. Refrigerant, for example, cannot be sold to the general public for personal use (outside of a fully sealed AC system), and licensed refrigerant users must follow specific usage procedures to ensure it doesn't vent to the atmosphere.

jstummbillig · 3 years ago
Imagine everyone used stairs as much as they use tech. Buff people everywhere.
horns4lyfe · 3 years ago
If that’s the case then innovation in the US can be tossed out the window. Once that ball gets rolling it only ends in crony capitalism and protectionism for existing players.
mckirk · 3 years ago
I really, really hope that there aren't any people who think the way you've outlined. Technology has empowered small groups or even single individuals to create things that have the potential to change the course of civilization, so I for sure hope those individuals think twice about the potential consequences of their actions. How would you feel about people releasing a $100 'build your own Covid-variant' kit?
kspacewalk2 · 3 years ago
Once the cat is out of the bag, the problem exists. Worrying about how long exactly it takes for $irresponsible_person to make it slightly worse by reducing the barrier to access even further is, in my opinion, missing the point.

There are many examples of this.

- Non-proliferation folks who think they can actually rid the world of nukes. Will not happen.

- Does anyone seriously think they can stop human cloning, once it's technically feasible, from happening somewhere on the planet sooner or later? By fiat, by legislation, by moral appeals, etc? Will not happen. If clones can be made, clones will be made. Descriptive, not normative claim.

- AI-generated content has reached a certain point where we have to worry about a whole host of issues, most obvious being more sophisticated fakes. "Please think of potential consequences", ad-hoc restrictions, self-imposed or otherwise, are moot in the long run. It's part of our world now.

chalst · 3 years ago
> Non-proliferation folks who think they can actually rid the world of nukes. Will not happen.

It looks to me that you're shifting the goalposts here: nonproliferation has effectively reduced the number of countries with access to nukes. Or is worrying about the number of direct military conflicts between nuclear-armed powers an example of what you call 'missing the point'?

derangedHorse · 3 years ago
>Worrying about how long exactly it takes for $irresponsible_person to make it slightly worse by reducing the barrier to access even further is, in my opinion, missing the point

I disagree with the idea that putting restrictions in place shouldn't be done because 'the problem exists'. The problem exists but that doesn't mean measures can't be taken to keep it manageable. I don't think the majority of people are in your intended demographic of wanting to stop the problem. Most just want to prevent exacerbating the problem.

catiopatio · 3 years ago
> I really, really hope that there aren't any people who think the way you've outlined.

AI image generation is not a build-your-own-weaponized-virus kit.

It’s a useful tool that can be used to produce creative expression. What people produce is up to them, and the fact that they might misuse their capacity for free speech isn’t an argument for curtailing it.

ethanbond · 3 years ago
OP doesn’t sound like it’s talking exclusively about image generation. Sounds like a general, “I should be able to build, propagate, and use whatever tech however I want no matter the negative externalities.”
pixl97 · 3 years ago
The problem I would say here is you're thinking in binary, but real life doesn't operate this way.

Let's take a current potential problem: a low-powered application capable of facial recognition. You can now strap that onto any number of dumb weapons and you've created a smart weapon.

In itself it's not a problem, until it starts happening a lot. If you think like a house cat, you tend to think that society owes you its existence and you're the king of the hill. But if that kind of weapons proliferation occurs, all those ideas of "I have rights" go right out the window, and this loss of rights will be supported by the masses who don't want to get droned in the head out on a date. The tyranny you want to avoid will be caused by the pursuit of absolute freedom.

As technology becomes more complex the line will blur even further. AI as a build your own 'terrible thing' will happen. Physics demands it, everything is just information at the end of the day.

Now it's up to you to avoid the worst possible outcomes between now and then.

w1nst0nsm1th · 3 years ago
It's already the case.

CRISPR has changed a lot of things and makes it possible for an outsider, with $10,000 and a little dedication, to alter the genome of any living form.

https://www.ft.com/content/9ac7f1c0-1468-4dc7-88dd-1370ead42...

ethanbond · 3 years ago
Right, and with every new technology that enters this “high power, high availability” domain, we all carry more civilizational risk.
brigandish · 3 years ago
> How would you feel about people releasing a $100 'build your own Covid-variant' kit?

Not very good but:

a) the people who currently have this tech are not what I'd call trustworthy so why should I leave dangerous tech only in the hands of dangerous people?

b) it would probably kickstart a "build your own vaccine kit" industry

alexvoda · 3 years ago
That you even express the problem like this shows an impressive amount of bias. By calling them dangerous people you are actually implying malice. What makes you believe people with access to biomedical tech are inherently more malicious than the populace? What makes you believe there aren't far more malicious people who do not yet have access to such tech?

I think this is just fear of the unknown at work. Biomedical knowledge is complicated and requires effort to learn therefore most consider it a known unknown therefore something to be feared. Some people do have such knowledge therefore they are to be feared because who knows what nefarious intentions they have and what conspiracies they are part of. Therefore they are dangerous people using dangerous tech.

Were the physicists who discovered how to split the atom also dangerous people?

blululu · 3 years ago
This is just beyond obtuse. More people having access will mean more untrustworthy people having access, which means more malicious action. (Unless you want to set up some toy scenario where the only bad-faith actors on the planet are biochemistry researchers.)

As for building your own vaccine: even large nations were not able to develop effective ones. It's easier to put a bullet in someone than it is to take it out.

permo-w · 3 years ago
this is the gun debate rephrased
mannykannot · 3 years ago
> it would probably kickstart a "build your own vaccine kit" industry.

This is a scenario for a dystopian science fiction novel, as opposed to a rational plan for our children's future.

Barrin92 · 3 years ago
>so why should I leave dangerous tech only in the hands of dangerous people?

because handing it to everyone doesn't make things better? I don't like that Putin has nukes, but it's much better than Putin and every offshoot of Al-Qaeda having nukes.

Civilization-ending tech in the hands of powerful actors is usually subject to some form of rational calculus. Having it in the hands of everyone basically means it's game over. For a lot of dangerous technologies there is no 'vaccine' (in time).

afarrell · 3 years ago
What questions would you ask to decide if someone is a trustworthy steward of that technology?
Madmallard · 3 years ago
Human life is quite fragile
orangesite · 3 years ago
Historically, tech folk have always pursued the commercialization of technological innovation with net-zero analysis of any negative consequences, mea maxima culpa.

That we have now run into a technology which makes many of _us_ uncomfortable should give you pause for thought and reflection.

vintermann · 3 years ago
We do pause, we do reflect. And our conclusion is that it's "us" who have changed, not the impact of technology.

So you can make pictures and 3d models from text descriptions. So you can get a voice to say something. But if you were determined to do bad things, you already could. It would be easy enough to hire an actor who sounds like Obama and make him say something outrageous. It would be easy enough to use Photoshop to make disgusting images.

Are you sure it's the capabilities you fear, and not the people who now for the first time will get access to them?

Are you sure "we", the wealthy, the technologically and educationally resourceful, the powerful, are so much better custodians?

ajmurmann · 3 years ago
I have no strong opinion on this very complex topic, but want to add that this is an area where the quote "quantity has a quality all its own" applies. Bannon's "flood the zone with shit" propaganda methodology is, to me, one of the biggest challenges to a functioning democracy. We might be able to debunk a handful of fake videos in the public discourse. What would happen if there were suddenly thousands? Maybe we'd find better ways to establish truth and a shared reality. Maybe democracy would utterly collapse.

Ultimately, I don't think we'll be able to keep the cat in the bag though. If nothing else, nation-state actors like Russia or China will get their hands on it and crank the propaganda machine with it. We might be better prepared if we just shorten the learning process and give everyone access. That might open some hope that we'll be able to adapt. It's a really scary dice roll though.

Bakary · 3 years ago
It's not about class-based custodianship but rather the simple fact that the number of attempts like these will multiply like wildfire. You won't need to be determined, you'll just need five minutes with the software before heading to work.
afarrell · 3 years ago
If something is dangerous, that does not justify making it worse.
Bakary · 3 years ago
They aren't uncomfortable. They just aren't sure how to maintain control over the technology and monopolize it. Which is why they are so cagey about releasing anything.
akiselev · 3 years ago
> That we have now run into a technology which makes many of _us_ uncomfortable should give you pause for thought and reflection.

You mean like the online advertising industry? That shit has been making many of us uncomfortable since the early 2000s.

Now that the technology is sufficiently decentralized the morality police comes along.

TigeriusKirk · 3 years ago
It feels like we're going to "safety" ourselves into an even more extreme oligarchy and congratulate ourselves for being so wise to do so.
jquery · 3 years ago
Yeah, I'm actually a little impressed to see my industry, which traditionally has run roughshod over humanity, damn-the-consequences style, showing a tiny bit of restraint. Nothing like what we see in medicine or law or anything, but something. I figured we'd get reined in like banks were before doing any self-policing at all (after nearly destroying society, of course).
bumby · 3 years ago
IMO it aligns with a more professional industry approach in general. Law, medicine, engineering (in the capital-E sense) all have ethical requirements and bodies that govern individuals. I think it's natural for an industry like CS that has typically been like the Wild West to push back against regulation, but in the end, it's probably for the better (at least with safety-critical applications).
svnt · 3 years ago
The hesitancy came from a good place. In some senses this is a very disruptive technology stack.

But when morality suddenly is reinforced in an area where the same people espousing it are trying to rapidly earn billions of dollars, I am skeptical.

Transformers are a form of translation and information compression, ultimately.

The morality seems to me at this point a convenient smokescreen to hide the fact that these companies are not actually open source, that they are not there for the public benefit, and that they are not very different to venture-backed businesses that have come before.

What is the risk of open-sourcing the product? Very few individuals could assemble a dataset or train a full competitive model on their own hardware. So not really a competitive risk there. But every big corp could.

The morality angle protects the startups from the big six. SD is a product demo. I view it the same way at the highest level as an alpha version of Google translate.

hprotagonist · 3 years ago
> The morality seems to me at this point a convenient smokescreen to hide the fact that these companies are not actually open source, that they are not there for the public benefit, and that they are not very different to venture-backed businesses that have come before.

And that they’re buggy and hard to fix and generally more limited than the buzz would have you believe.

Public high-minded talk of morality also cynically keeps the money coming in :)

colinmhayes · 3 years ago
There is legitimate regulatory risk with ai generative models. Really all it takes is the media picking up one bizarre story about child revenge porn generated with these models for them to be completely banned. And a ban wouldn’t mean people stop using them, just that researchers stop getting paid for making them.
svnt · 3 years ago
Definitely. Framing it as morality when it is business risk is disingenuous though.
dougmwne · 3 years ago
I agree that I find it all pretty silly. You know what else can produce horrifying and immoral images? Pencil and paper.

I suspect that quite a lot of this caution is driven by Google and other large companies which want to slow everyone else down so they can win the new market on this tech. The remaining part of the caution appears to come from our neo-puritan era where there is a whole lot of pearl clutching over everything. Newsflash, humans are violent brutes, always have been.

jmfldn · 3 years ago
The key difference with pencil and paper is that I can't produce photoreal deepfakes at the speed of processing. That's not a valid comparison.

You might be right in the second paragraph about the motivations for slowing this down. There clearly are reasons to be cautious here though, even if this isn't the real reason for the current caution.

betwixthewires · 3 years ago
Replace pencil and paper with a camera then, if it makes you happy, although I don't think the quality of the images makes a single bit of difference.

Why should only the few have access to such a technology? Because some people will use it for naughty things? And that's what we are talking about here, about whether a minority should have permissioned access to a new technology, and particularly one that cannot directly actually cause physical harm. I can glean motivations surrounding all this from that observation alone.

Bakary · 3 years ago
I am inclined to agree with OP's view, but consider the following scenario: mass uploads of brutal deepfaked pornographic videos involving your likeness and/or that of your loved ones. What would your reaction be? Personally, I would find that very disturbing to the point where it would affect my life.

This isn't even a hypothetical but a reality celebrities already face to some extent.

I'd say the usage of the tech will stabilize, but it's also the case that we have dark days ahead of us.

thomastjeffery · 3 years ago
Rumors are not a new social pattern. We've handled them fine so far.

Your discomfort is not enough reason to stop the free speech of everyone else. It never was, and never will be.

The most effective way to deal with the damage of deepfakes is to make them ubiquitous and accessible. The more familiar people are with the ability to create deepfake content, the more prepared they are to think critically and distrust it.

The average person already knows that still images aren't flawless evidence. The world didn't fall apart after the first release of Photoshop.

dougmwne · 3 years ago
Sure, with 14 fingers and 3 ears. Someone could also use photoshop and get a better result. This is nothing new in terms of risk, just a new tool.
pixl97 · 3 years ago
Part of the disconnect you appear to be experiencing is the inability to take this beyond "draw a picture". You seem to have missed out on software-driven machine control and decision making. Of course you may not mind whom the drone decides to kill, as long as you're not the target.

Also you seem to be affected by American sensibilities. If your AI decides to go full heil in Germany, you may find that the authorities have a lot to talk about with you.

dougmwne · 3 years ago
The AI draws something close to what you ask it to draw. Then you look at the results and decide to share it or delete it. You are in control, not the AI. The AI is a tool. Likewise with other AI applications. People decide to create these things, decide on the datasets they are trained on, and put them in the pipeline and every day decide to keep them plugged in. This is still human centered technology.
mrshadowgoose · 3 years ago
I also eye-roll when someone legitimately wants to mandate that tools produce "morally correct" outputs.

However, as a person who has been closely following the developments in this field, I share a similar perspective to a few of the other commentators here. Most of the noise is just virtue-signalling to deflect scrutiny and/or protect business interests. Scrutiny from governments is something we absolutely do not want right now.

Humanity is on a path towards artificial general intelligence. Today, the concerns are "what about the artists" and "what if people make mean/offensive things"? As we start to develop AI/ML systems that do more practical things we will start to trend towards more serious questions like "what about everybody's job?". These are the things that will get governments to step in and regulate.

There is a pathway to AGI in which governments and corporations end up with a monopoly on it. I personally view that as a nightmare scenario, as AGI is a power-multiplier the likes of which we've never seen before.

It's important that current development efforts remain mostly unfettered, even if one has to put on a "moral" facade. The longer it takes for governments to catch on, the less likely it will be that they will manage to monopolize the technology.

photochemsyn · 3 years ago
Some forms of technology are highly regulated because people can do really stupid, reckless and dangerous things. Home chemistry kits today are quite unlike those produced 100 years ago, which had ingredients for making gunpowder and other explosives, as well as highly toxic compounds like cyanide, and less dangerous but problematic things like metallic mercury. Similarly, biotech is now regulated and monitored because modern tools allow people with relatively minimal resources to do things like re-assemble smallpox using nothing but the sequence data:

https://www.livescience.com/59809-horsepox-virus-recreated.h...

As far as AI, maybe the immediate risks aren't quite so dramatic, but it's going to create a real lack-of-trust problem with images, video and data in general. Manipulated still photographs are already very difficult, if not impossible, to detect, and there's an ongoing controversy over whether they are admissible in court. AI modification of video is steadily getting harder to identify, and there are already good reasons to suspect the veracity of video clips put out by nation-states as evidence for their claims (they likely already have unrestricted access to the necessary technology - for example, Iran recently released a suspicious 'accidental death' video of a woman arrested for not covering her head, which could be a complete fabrication).

Similarly, AI opens the door to massive undetectable research fraud. Many such incidents in the past have been detected as duplicated data or copied images, but training an AI on similar datasets and images to create original frauds would change all that.

A more alarming application is the construction of AI-integrated drones capable of assassinating human beings with zero operator oversight: just load the drone with a search image and facial recognition software, and then launch and forget, which doesn't sound like that good of an idea. Basically Ray Bradbury's Mechanical Hound in Fahrenheit 451, only airborne.