It is totally legal to train on this material, but illegal to reproduce copyrighted works. Interestingly, Google's business model could be criticized the same way: it constructs a huge index of copyrighted works, reproduces them, and monetizes them.
No, AI is not going to kill us anytime soon. Yes, LLMs are not even close to AGI, and research on them does not lead to research on understanding cognition (read Chomsky on this), which is required for AGI. Yes, DO ban all AI research and investment until society as a whole, by consensus, grants the privilege of continuing. The OpenAI saga has already shown that Silicon Valley tech culture is in no fit shape to be involved in such matters in any form. The culture that gave us Musk buying Twitter and torpedoing its value by half the next day, and the shitcoin scammer SBF, does not have a good track record.
With all due respect, Chomsky has no intuition for this kind of technology. It was sad to see him reject it; it felt oddly like sour grapes, when really LLMs have validated his work on universal grammar.
This is the problem with people: they build icons to worship and turn a blind eye to the crooked side of those icons. Both Jobs and Altman are significant as businessmen and have accomplished a lot, but neither did squat for the technical side of the business.

Right now, Altman is irrelevant to the further development of AI, and of GPT in particular, because the vision for the AI future comes from the engineers and scientists of OpenAI. Apple has never shipped hardware that is competitive in price/performance with its market counterparts, and the usability of iOS is so horrible that I just can't understand how people decide to use iPhones and eat glass for the sake of the brand.

GPT-4 and GPT-4 Turbo are a different matter. They are the best, but they are not irreplaceable. If you look at what Phind did with LLaMA-2, you'll agree it is very competitive, though LLaMA-2 needs some additional hidden layers to close the remaining gap. Scaling LLaMA-2 to 175B parameters or beyond is just a matter of money.
That said, Altman is no longer vital for OpenAI. Preventing Altman from creating a dystopian future is a far more important responsibility for OpenAI to take on.
Right now, Altman may be the most relevant person for the further development of AI, because how the technology continues to go to market will be largely shaped by regulatory environments globally, and Sam, leading OpenAI, is by far in the best position to guide that policy. And he has been doing a good job of it.
Something I don't read enough about that would make autonomous cars a lot more reliable is smart tech in the road infrastructure. Why don't stop lights send a beacon? Roads and lanes should broadcast a signal. Cars should signal their intent to each other, e.g., turn indicators should not be purely visual. Are there standards for this stuff being worked on?
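Standards in this space do exist: SAE J2735 defines a Basic Safety Message that vehicles broadcast over DSRC or C-V2X radio, carrying position, speed, and heading. As a minimal sketch of the idea, here is a hypothetical intent beacon in Python; the field names, port number, and JSON encoding are illustrative assumptions, not taken from any real V2X standard, which use compact binary encodings and dedicated radio links rather than UDP.

```python
import json
import socket

# Hypothetical intent beacon a car might broadcast to nearby vehicles.
# All field names and the wire format are illustrative, not standard.
def make_intent_beacon(vehicle_id, lat, lon, speed_mps, intent):
    """Serialize a minimal intent message (e.g. 'turn-left') as JSON bytes."""
    msg = {
        "id": vehicle_id,
        "lat": lat,
        "lon": lon,
        "speed_mps": speed_mps,
        "intent": intent,  # e.g. "turn-left", "brake", "lane-change-right"
    }
    return json.dumps(msg).encode("utf-8")

def broadcast(beacon, port=47000):
    """Send the beacon via UDP broadcast on the local segment (port is arbitrary)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(beacon, ("255.255.255.255", port))

beacon = make_intent_beacon("car-42", 52.37, 4.90, 8.3, "turn-left")
```

The interesting design questions the real standards wrestle with are exactly the ones a toy like this ignores: authentication (so a prankster can't spoof "brake hard"), privacy (persistent IDs enable tracking), and latency guarantees.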
We shrug them off because they are the devil we know: a very well-understood risk. Also because safety is improving, so the trend is toward less risk, not more.