I would hope there aren't too many large utility jurisdictions that would curtail residential consumers in favour of industrial users in the event of a demand surge.
On a related note, it's worrying to me how quickly we've accepted that we're going to boost electricity consumption massively before achieving anything close to the carbon intensity reduction targets that would mitigate the worst effects of climate change. It's all driven by market forces that can't be effectively regulated on a global scale, because multinational tech firms can shop around for the next data centre location with near-total freedom. And with advances in over-the-top fibre networks etc., a tonne of AI demand can be met by a compute cluster on the other side of the world (especially during model training), so the externalities of the computing infrastructure can, in theory, be dumped somewhere far away from the paying customer.
The authors do include the claim that humans would immediately disregard this information. Maybe some would and some wouldn't; that could be debated, and seemingly is being debated in this thread. But I think the thrust of the conclusion is the following:
"This work underscores the need for more robust defense mechanisms against adversarial perturbations, particularly, for models deployed in critical applications such as finance, law, and healthcare."
We need to move past the humans-vs-AI discourse; it's getting tired. This is a paper about a pitfall LLMs currently have, one that should be addressed with further research if they are going to be mass deployed in society.
Andre Marziali - Physics of Racing https://www.youtube.com/watch?v=bYp2vvUgEqE
There is an excellent Wikipedia article that goes into detail on the subject: https://en.wikipedia.org/wiki/English_terms_with_diacritical...
I'm at the beginning of my career and learning every day. I could do my job faster with an LLM assistant, but I would lose out on an opportunity to acquire skills. I don't buy the argument that low-level critical thinking skills are obsolete and that high-level conceptual planning is all anyone will need 10 years from now.
On a more sentimental level, I personally feel there is meaning in knowing things and in knowing how to do things, and I'm proud of what I know and what I know how to do.
Using LLMs doesn't look particularly hard, and if I need to use one in the future I'll just pick whichever one is supposedly the newest and best. For now, though, I'm content to toil away on my own.