nlpprof commented on I Am Deleting the Blog   slatestarcodex.com/2020/0... · Posted by u/perditus
dcolkitt · 5 years ago
> He works for Facebook. He's paid with Facebook money. So why draw this imaginary line between research and production? He is paid to do research that will go into production.

This is a silly standard to uphold. The vast majority of American academic researchers are at least partially funded by grants from the US federal budget.

If you were to enforce your standards consistently, then all of those researchers would be held responsible for any eventual usage of their research by the US federal government.

I really doubt you apply the same standard. So, the criticism mostly seems to be an isolated demand for rigor. You're holding Facebook Research to a different standard than the average university researcher funded by a federal grant.

nlpprof · 5 years ago
Did you read what I wrote?

I don't think his argument holds. (That is, I do think researchers should keep bias in mind when developing machine learning projects, regardless of their funding sources.)

Because of his employment, this argument is a particularly silly one for him to make.

tinyhouse · 5 years ago
I was just thinking about this after reading attacks on Yann LeCun on Twitter. He's a prominent AI figure (head of Facebook AI research and a Turing Award recipient). My interpretation: he was saying that bias in AI is mostly a problem of data. He didn't say there's no bias, or that you can't address bias with modeling. Just that the model itself isn't what's causing the bias. One woman researcher started attacking him and everyone is backing her up... even calling him a racist. I guess a lot of people who work on fairness in AI got offended because they feel he's calling their research BS (which I don't think is what he meant).

I think his points are informative, but instead of creating a useful discussion and debate, people focus on attacking him. I wouldn't be surprised if some people ask FB to fire him... (which thankfully won't happen). It's likely that next time he'll think twice before sharing his opinion on social media. That's how toxic social media has become.

Update: Great to see this got so many upvotes so quickly. It just shows how biased (no pun intended) social media like Twitter is, and how afraid people are to voice their opinions publicly these days.

nlpprof · 5 years ago
I'm in the field - though not as prominent as Yann (who has been very nice and helpful in my few interactions with him) - and your interpretation is off. People are disagreeing with his stance that researchers should not bother exploring bias implications of their research. (He says this is because bias is a problem of data - and therefore we should focus on building cool models and let production engineers worry about training production models on unbiased data.)

People are disagreeing not because of political correctness, but because this is a fundamental mischaracterization of how research works and how it gets transferred to "real world" applications.

(1) Data fuels modern machine learning. It shapes research directions in a really fundamental way. People decide what to work on based on what huge amounts of data they can get their hands on. Saying "engineers should be the ones to worry about bias because it's a data problem" is like saying "I'm a physicist, here's a cool model, I'll let the engineers worry about whether it works on any known particle in any known world."

(2) Most machine learning research is empirical (though not all). It's very rare to see a paper (if not impossible nowadays, since large deep neural networks are so massive and opaque) that works purely off math without showing that its conclusions improve some task on some dataset. No one is doing research without data, and saying "my method is good because it works on this data" means you are making choices and statements about what it means to "work" - which, as we've seen, involves quite a lot of bias.
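The "it's a data problem, not a model problem" claim can be made concrete with a toy sketch (all numbers and groups here are hypothetical, purely for illustration): even a trivially simple "model" inherits bias from skewed training data, and a single headline accuracy number hides it.

```python
# Hypothetical toy dataset: each example is (group, label).
# Group A is heavily over-represented and skews positive;
# group B is rare and skews negative.
data = [("A", 1)] * 90 + [("A", 0)] * 10 + [("B", 0)] * 4 + [("B", 1)] * 1

# A caricature of learning from skewed data: predict the overall
# majority label, ignoring group membership entirely.
majority = max({0, 1}, key=lambda lbl: sum(1 for _, y in data if y == lbl))

def accuracy(group):
    """Fraction of a group's examples the majority predictor gets right."""
    examples = [y for g, y in data if g == group]
    return sum(1 for y in examples if y == majority) / len(examples)

print(accuracy("A"))  # 0.9 -- the model fits the dominant group well
print(accuracy("B"))  # 0.2 -- the rare group inherits the majority's skew
```

The aggregate accuracy looks respectable, but the per-group breakdown shows the disparity; deciding whether to report and optimize the aggregate or the per-group numbers is exactly the kind of choice researchers make, not just production engineers.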

(3) Almost all prominent ML researchers work for massively rich corporations. He and his colleagues don't work in ivory towers where they develop pure algorithms which are then released over the ivy walls into the wild, to be contaminated by filthy reality. He works for Facebook. He's paid with Facebook money. So why draw this imaginary line between research and production? He is paid to do research that will go into production.

So his statement is so wildly disconnected from research reality that it seems like it was not made in good faith, or at least was made without much thought, which is what people are responding to.

Also, language tip - a "woman researcher" is a "researcher".

u/nlpprof

Karma: 47 · Cake day: June 23, 2020