This paragraph really pisses me off and I'm not sure why.
> Critics have already written thoroughly about the environmental harms
Didn't Google just prove there is little to no environmental harm, including if you account for training?
> the reinforcement of bias and generation of racist output
I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs, I guess.
>the cognitive harms and AI supported suicides
There is constant active discussion around sycophancy and ways to reduce it, right? OpenAI just made a new benchmark specifically for this. I won't deny it's an issue, but acting like it's being ignored by the industry completely misses the mark.
>the problems with consent and copyright
This is the best argument on the page, imo, and even that is highly debated. I agree with "AI is performing copyright infringement" and constantly see "AI ignores my robots.txt" complaints. I also grew up being told that ANYTHING on the internet was for the public, and copyright never stopped *me* from saving images or pirating movies.
Then the rest touches on ways people will feel about or use AI, which is obviously just as much conjecture as anything else on the topic. I can't speak for everyone else, and neither can anyone else.
I'd be interested to see that report as I'm not able to find it by Googling, ironically. Even so, this goes against pretty much all the rest of the reporting on the subject, AND Google has financial incentive to push AI, so skepticism is warranted.
> I don't ask a lot of race-based questions to my LLMS I guess
The reality is that more and more decision making is getting turned over to AIs. Racism doesn't have to just be n-words and MAGA hats. For example, this article talks about how overpoliced neighborhoods trigger positive feedback loops in predictive policing AIs: https://www.ohchr.org/en/stories/2024/07/racism-and-ai-bias-...
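To make that loop concrete, here's a toy sketch of my own (purely illustrative, not from the linked article): a predictor allocates patrols in proportion to past arrests, and more patrols produce more recorded arrests, so one early arrest snowballs even though the underlying crime rate is identical everywhere.

```python
# Toy illustration: arrest counts drive patrol allocation, and patrol
# presence drives the next round of recorded arrests.
import random

random.seed(0)

NEIGHBORHOODS = 5
true_crime_rate = [0.10] * NEIGHBORHOODS   # identical underlying crime everywhere
arrests = [0] * NEIGHBORHOODS
arrests[0] = 1                             # one early, essentially random arrest

for year in range(10):
    total_arrests = sum(arrests)
    # "Predictive" allocation: patrols proportional to historical arrest counts
    patrols = [100 * a // total_arrests for a in arrests]
    for n in range(NEIGHBORHOODS):
        # Recorded arrests depend on patrol presence, not just on actual crime
        arrests[n] += sum(random.random() < true_crime_rate[n] for _ in range(patrols[n]))

print(arrests)  # neighborhood 0 dominates despite identical true crime rates
```

Real systems are obviously more complicated, but the bias-in, bias-out dynamic is the same: the model never learns about the neighborhoods it stopped looking at.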
> Copyright never stopped me from saving images or pirating movies.
I think we could all agree that right-clicking a copyrighted image and saving it is pretty harmless. Less harmless is trying to pass that image off as something you created and profiting from it. If I use AI to write a blog post, and that post contains plagiarism, and I profit off that plagiarism, it's not harmless at all.
> I also grew up being told that ANYTHING on the internet was for the public
Who told you that? How sure are you they are right?
Copilot has been shown to include private repos in its training data. ChatGPT will happily provide you with information that came from textbooks. I personally had SunoAI spit out a song whose lyrics were just Livin' On A Prayer with a couple of words changed.
We can talk about the ethical implications of the existence of copyright and whether or not it _should_ exist, but the fact is that it does exist. Taking someone else's work and passing it off as your own without giving credit or permission is not permitted.
It's weird how AI-lovers are always trying to shoehorn an unsupported "it does useful things" into some kind of criticism sandwich where only the solvable problems can be acknowledged as problems.
Just because some technologies have both upsides and downsides doesn't mean that every technology automatically has upsides. GenAI is good at generating these kinds of statements that mimic the form of substantial arguments, but anyone who actually reads them can see how hollow they are.
If you want to argue that it does useful things, you have to explain at least one of those things.
It's bad at:

- Actually knowing things / being correct
- Creating anything original

It's good at:

- Producing convincing output fast and cheap
There are lots of applications where correctness and originality matter less than "can I get convincing output fast and cheap". Other commenters have mentioned being able to vibe-code up a simple app, for example. I know an older man who is not great at writing in English (but otherwise very intelligent) who uses it for correspondence.