IMO, no one wants to acknowledge simple facts for fear of retribution or being labelled racist; instead we keep dancing around this issue forever, getting more and more ridiculous.
Also, trust me, I can talk about this because I'm not white...
fact - people prefer people like them. Not even consciously; this is basic shit hardwired into us. I don't blame white men for being subconsciously biased toward hiring white men; literally any other group would do the same. Sure, we can try to fight that bias, but it's not at all evil or wrong to have that bias, only natural.
fact - taking an "agnostic" approach the way science does, of course the algorithms will reflect "biases". If men are statistically more likely to be programmers, or black people are more likely to commit crimes (STATISTICALLY), then the algorithm will pick that up (see the sketch below). They are biases, sure, but also statistical realities.
Now we can debate whether we should actively engineer algorithms to fight these "biases" on a case-by-case basis (for example, focusing more on women might be a win if you can find talent no one else can), but there's no reason to start pointing fingers at the "evil white guys" on top who planned this from the very beginning... it's just more stereotyping.
hypothesis - she wrote this crap to gain publicity.
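To make the second "fact" concrete, here's a minimal sketch of how an "agnostic" learner reproduces whatever base rates its training data contains. It assumes scikit-learn, and every number in it is invented purely for illustration:

```python
# Hypothetical data: an "agnostic" model reproduces the base rates
# present in its training set. All numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)          # some demographic attribute, 0 or 1
# Suppose the positive label (e.g. "was hired") occurred historically
# at a 30% rate for group 0 and a 10% rate for group 1.
y = (rng.random(n) < np.where(group == 0, 0.30, 0.10)).astype(int)

clf = LogisticRegression().fit(group.reshape(-1, 1), y)
print(clf.predict_proba([[0], [1]])[:, 1])  # roughly [0.30, 0.10]
# The model "picks up" the disparity; nothing in the code plans it,
# but nothing corrects for it either.
```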
> if men are statistically more likely to be programmers, or black people are more likely to commit crimes (STATISTICALLY),
I think you meant black people are more likely to be convicted of crimes. The problem with crime 'statistics' is that on the surface it all seems coldly scientific, yet the numbers are generated and derived via very biased, very human, very unscientific processes - there is a lot of bad data. The ACLU did research showing that there is no statistically significant difference in weed possession between white and black people, yet far more black people are convicted for possession[1].
Here's a thought experiment: after watching this YouTube video[2], how skewed do you think the statistics for white female criminals (bike thieves) vs. black criminals would be?
1. https://www.aclu.org/files/assets/aclu-thewaronmarijuana-rel...
2. https://www.youtube.com/watch?v=ge7i60GuNRg
Yeah, I don't really know enough to argue that. You're probably right about the convictions.
I just think we need to be able to TALK about these issues, so that when a real expert looks at those statistics they can get to the truth of the matter and state that truth, whether or not it's politically correct.
The law says racism is illegal in certain situations, and society says racism is undesirable in most situations.
The law and society aren't claiming that racism is statistically non-optimal -- in fact, there are lots of things more optimal than the status quo that many people would find totally horrifying.
If we are widely replacing human systems with AI systems, I think this is a legitimate concern.
I really depart from the article in two areas:
1. The claim that the AI will inherit the biases of its creators. This is possible but far from guaranteed. And relatedly, inclusivity of the development team guarantees nothing regarding the goals of the system.
2. Criticising the people who are warning of the problem and trying to do something about it. This is related to the AI control problem. There is no switch that can be flipped that will keep AI systems away from Bad Ideas. It's not that we just aren't flipping it to preserve our chokehold on capitalism. Implementing morality in AI systems is a genuinely monumental problem. And the people who are doing something about it are behaving very altruistically.
Agreed! If we did nothing to correct inequalities in society, the biggest and strongest would rule over all - so yes we must have our own values and stick to them.
And with AI, we must make these values explicit, which is very difficult to do - I agree this is a very important problem to solve, and the people doing it should absolutely be rewarded.
Honestly, reading the article again after your summary I found it very reasonable :p I think the headline just ticked me off.
No one is stopping other people from getting in on the debate; they absolutely should (and I'm sure there are roadblocks in their way, and people who really are racist). It just feels wrong to implicitly blame all the "bad white people" for that. Blame those who cause the problem. Otherwise we are back to stereotyping.
"The law...". Whose law? "Society says...". Which society? "Implementing morality". Whose morality?
Also, why wouldn't you want things to be optimal, depending on what they're optimizing for? I thought optimizing was the exact point of machine learning and AI.
http://www.psychologicalscience.org/media/releases/2005/pr05...
http://fivethirtyeight.com/features/in-the-end-people-may-re...
http://www.telegraph.co.uk/news/science/science-news/3336375...
http://conf.som.yale.edu/obsummer07/PaperBen-NerKramer.pdf
If you change the labels on the data points, say black to apples and white to oranges, you'll change racism to "fruitism". A genuinely racist system wouldn't work that way (see the sketch below).
I do agree with the article that feeding biased data will result in a biased system and we need to be aware of that. But calling it racist and sexist is sensationalism.
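A minimal sketch of the relabelling point above, assuming scikit-learn and pandas; the data and the black/white to apples/oranges mapping are purely illustrative:

```python
# The same skewed toy data under two sets of labels: the model cannot
# tell "racism" from "fruitism". All data here is made up.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "race":  ["black", "white"] * 500,
    "label": [1, 0, 0, 0] * 250,        # deliberately skewed toy labels
})

def predicted_rates(frame):
    X = pd.get_dummies(frame["race"])   # one-hot encode; names are opaque
    clf = LogisticRegression().fit(X, frame["label"])
    return clf.predict_proba(X)[:, 1]

relabelled = df.assign(race=df["race"].map({"black": "apples",
                                            "white": "oranges"}))
# Identical predictions under either naming: the disparity lives in the
# data, not in the model's attitude toward the names.
print((predicted_rates(df) == predicted_rates(relabelled)).all())  # True
```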
The last person I met from Google's machine learning group in search was female and Chinese. The big name behind machine learning is Andrew Yan-Tak Ng; he was a professor at Stanford and is Chief Scientist at Baidu now.
They're complaining about Nikon cameras not recognizing Asian faces properly, and this is discrimination? Nikon is a Japanese company. Headquarters is in Tokyo. The CEO is Kazuo Ushida.
My experience as well. I'm from Seattle and have met (at my office and others) an overwhelmingly large number of Asians, especially Asian women, working in data science and machine learning.
I think their point was the training data may have a lot of white guys. I don't know what Nikon used but if they just googled the web for images they'd probably end up with quite a lot of white subjects.
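To illustrate, here's a sketch of how that kind of skew plays out, assuming scikit-learn; the one-dimensional synthetic "signal" is a made-up stand-in for image features that differ across groups:

```python
# When one group dominates the training set, the model fits that group's
# pattern and quietly degrades on the rest. Purely synthetic numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def sample(n, boundary):
    """Draw points whose true class flips at a group-specific boundary."""
    x = rng.normal(loc=boundary, scale=2.0, size=(n, 1))
    return x, (x[:, 0] > boundary).astype(int)

Xa, ya = sample(9_500, boundary=0.0)   # group A: 95% of the training data
Xb, yb = sample(500, boundary=2.0)     # group B: 5%, with a shifted signal

clf = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

Xa_test, ya_test = sample(2_000, 0.0)
Xb_test, yb_test = sample(2_000, 2.0)
print("accuracy on group A:", clf.score(Xa_test, ya_test))  # high
print("accuracy on group B:", clf.score(Xb_test, yb_test))  # noticeably lower
```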
The reason this is garbage is that there is not a single naturally occurring domain in society in which groups are represented equally. I'm pretty sure this is true in nature as well. So one can, at their sole discretion, analyze literally any area of life and make statements like "systemically oppressed this", "unequally represented that". It's like staring at an ink blot and being asked what you see.
I have the nagging sensation that if it were up to today's hyper-sensitized media to decide how society should look and function, we would all be grey globs in a grey world, devoid of any differences.
The more depressing truth? Striking these chords is an absolute goldmine for ratings and clicks. Everyone is naturally curious about how they might be currently oppressed or disadvantaged; it plays to our instinctual tribalism.
So please, realize you can do anything you want in this world, and don't be seduced by hate and bitterness from some writer sitting in Soho who has a click-quota to meet this month.
This article makes a perfectly valid point: AI is only as good as the data you use to train it. If you feed it bad, biased data, then the AI will behave in bad, biased ways.
These biases can be major (no Amazon delivery to black neighborhoods) or minor. I'm reminded of a gaming podcast I heard (can't remember which one) where a guy recounted watching a female journalist try VR goggles that couldn't detect her eyes because she had mascara. Apparently no one making the headset had tested the effects of that kind of makeup.
The article is right. If we are serious about creating products that revolutionize everyone's lives, we need to involve more kinds of people. Our perspectives are limited. We can't understand everything. That's the point of having a diverse team. Like Ben Thompson says, there's a very strong business case for diversity because "You don't know what you don't know."
The article begins by conflating algorithms and training data, and that becomes the "sticking point" in the reader's mind even though she clarifies the distinction soon after. That is no surprise, since the obfuscation helps back the sinister, prosecutorial tone of the piece.
While I agree that the article raises an important point, I don't really see how more diverse development teams would have fixed any of the problems raised.
This article falls into the same fallacy as a lot of postmodern social science articles.
The whole point of AI and machine learning is to find things that are not immediately obvious but backed up by the data.
The author of this article is suggesting that if the conclusions of this research are politically unfavorable, then there has to be bias/racism/sexism somewhere, even when the process itself is race/gender/socioeconomic agnostic.
Which runs immediately counter to both machine learning and research in general: "Dang it, run the numbers until they support the conclusion I support."
It's like the author doesn't understand the basic premise of machine learning or research. If the datasets are unfairly restricted in some way, that's something to be looked at. The algorithms themselves are generally fair and unbiased.
"In the United States, this could result in more surveillance in traditionally poorer, nonwhite neighborhoods, while wealthy, whiter neighborhoods are scrutinized even less."
More policing reduces crime. The author seems to think that people living in these poor, nonwhite neighborhoods would rather see the police resources go to wealthy, white neighborhoods. But studies show that minorities and people living in high-crime neighborhoods mostly do approve of the police. There is a lot of cognitive dissonance here: is crime reduction through more policing in poor, nonwhite neighborhoods a legitimate goal despite sometimes-justified skepticism of the police, or not?
http://www.theatlantic.com/national/archive/2015/02/more-pol...
https://www.ncjrs.gov/pdffiles1/nij/197925.pdf