And then try to sell it back to businesses, even suggesting they use the data to train AI. You also make it sound like there’s a team manually doing all the work.
https://www.economizafloripa.com.br/?q=parceria-comercial
That whole page shifts my view of the project from "helpful tool for the people, to wrest back control from corporations selling basic necessities" to "just another attempt to make money". Which is your prerogative; I was just expecting something different and more ethically driven when I read the homepage.
* https://www.cnn.com/2019/05/06/tech/facebook-groups-russia-f...
* https://www.voaafrica.com/a/israeli-firm-meddled-in-african-...
I sort of suspect AI-driven accounts are already present on social media, but I don't have proof.
Why the national specificity here?
The people who review your papers at conferences and ask why you didn't cite future arxiv papers are the same people who put their own work on arxiv and cite each other's preprints. You can't rely on "peer review" on arxiv any more than you can rely on conference peer review, because both are performed by the same people, and they're people who don't know what they're doing.
The sad truth is that the vast majority of researchers in the machine learning community haven't got a clue what the hell they're doing, nor do they understand what anyone else is doing. The typical machine learning paper is poorly motivated and vaguely written, and makes no claims nor presents any results other than "our system beats some other systems". As for reproducibility, hell if we know whether any of that work is really reproducible. Everybody who references it ends up doing something completely different anyway, and they just cite prior work as an excuse to avoid doing their job and properly motivating their own. The people who write those papers eventually get to be reviewers (by sheer luck), or sub-reviewers. They have no idea how to write a good paper, so they have no idea how to write a good review, either. And they couldn't recognise a good paper if it jumped up and bit them in the cojones.
I love to cite Geoff Hinton on this one:
GH: One big challenge the community faces is that if you want to get a paper published in machine learning now it's got to have a table in it, with all these different data sets across the top, and all these different methods along the side, and your method has to look like the best one. If it doesn't look like that, it's hard to get published. I don't think that's encouraging people to think about radically new ideas.

Now if you send in a paper that has a radically new idea, there's no chance in hell it will get accepted, because it's going to get some junior reviewer who doesn't understand it. Or it's going to get a senior reviewer who's trying to review too many papers and doesn't understand it first time round and assumes it must be nonsense. Anything that makes the brain hurt is not going to get accepted. And I think that's really bad.
https://www.wired.com/story/googles-ai-guru-computers-think-...

So the problem is not arxiv versus no arxiv; the problem is that the peers in peer review lack the expertise and knowledge to do their job well.
Only? That's an inordinate amount of money for what is just a phone.
Some religions take 10% of your adult income. Buying a Zelda game or Marvel movie ticket is peanuts in comparison.