1. Unionized coal mines had/have much better wages and working conditions :)
2. Of course there are shades everywhere. And just because there are worse exploiters doesn't mean Google should get away with anything that's not "as bad" as they are.
If I felt I was being under-compensated, working in poor conditions, and had my back against the wall, I'd want a union too. But describing my own job that way would be hysterically funny, let alone describing Google programmers' jobs that way.
What's really going on here is simple. If you work at Google, your job is too good. There's no struggle. People want a struggle and so they invent one by LARPing as Marxists.
The paper summarizes another paper describing the environmental costs of training large models, then argues that global warming will disproportionately affect marginalized people. This constitutes "environmental racism" because the main beneficiaries of these models are rich white people, whereas the people bearing the costs are poor people who live near the equator. In my mind, this is two-points-make-a-line thinking.
The rest of the paper is more interesting. It argues that machine learning is conservative in that it "reinforces hegemonic language". I think this is true. Models are trained on data, and data is what exists rather than what we would like to exist. If you're unhappy with what exists (as Gebru is, terribly), this represents a problem. Two of her solutions are "curating" data (censoring data) and "working with panels of experiential experts through the Diverse Voices methodology" (constituting powerful groups made up of her ideological allies). I think these solutions are dead ends. It seems to me that we have to make peace with the conservatism inherent in these models (and maybe all models).
The paper also argues (I think rightly) that problems will materialize "should humans encounter seemingly coherent language model output and take it for the words of some person or organization who have accountability for what is said". In other words, we grant other humans responsibility for what they say and do, but what about models that can mimic humans very accurately? A similar problem exists today when a Google or YouTube account is erroneously locked by some algorithm. There's a sense that, really, no one is responsible for this outcome because an algorithm did it. Even if the owner manages to get his account restored, there's no clear way to assign responsibility for the mistake. Perhaps this problem will become even worse if we start to see algorithms powered by models that are near-indistinguishable from people.
Anyway, the paper doesn't seem controversial to me. It seems clear that Gebru's strident tone and personal style were the cause of her firing/resignation. Jeff Dean's claim that the paper was rejected for not citing other papers may be false. The truth seems much simpler: based on her own public communications, Gebru felt she was the subject of "harassment" and "dehumanization", but those grievances seem to be hyperbole, and the personal injury she felt became an excuse to treat her coworkers poorly. In other words, she didn't drop her Twitter persona at work.