LORD NASH [Tory, contactholmember@parliament.uk]
BARONESS CASS [Crossbench / 'independent', rivisn@parliament.uk ("staff")]
BARONESS BENJAMIN [Liberal Democrat - which particularly disappoints me - benjaminf@parliament.uk]
All three can be contacted by sending an email to contactholmember@parliament.uk, using the proper form of address as detailed at https://members.parliament.uk/member/4270/contact
If you're reading this website and either live in the UK or are a British citizen, I strongly urge you to write a personalised and, above all, polite email stating with evidence why they are misguided. The "think of the children" brigade is strong, but you may well be able to persuade these individuals that it is a bad idea.
My claim is that an LLM behaves the same way (or a superset of the way) that a person with only short-term memory would behave if the only way they could communicate was text. Do you agree?
And I do not agree. LLMs are literally incapable of understanding the concepts of truth, right and wrong, or knowledge versus non-knowledge. Being able to tell whether or not you know something seems pretty crucial for anything approaching human-level intelligence.
Again, this conversation has been had in countless variations ever since LLMs began their rise, and we can't keep rehashing the same points over and over. If one believes LLMs are capable of cognition, they should offer formal proof first; otherwise we're just wasting our time.
That said, I wonder if there are major differences in cognition between humans, because there is no way I would look at how my brain works and think "oh, this LLM is capable of the same level of cognition as I am." Not because I am ineffably smart, but because LLMs are utterly simplistic in comparison to even a fruit fly.
A+, excellent writing.
The real meat is in the postscript, though, because that's where the author puts to paper the very real (and very unaddressed) concerns around dwindling employment in a society where work not only provides structure and a challenge to grow against, but is also fundamentally required for survival.
> I get no pleasure from this recitation. Will the optimists please explain why I’m wrong?
This is what I, and many other "AI Doomers" smarter than myself, have been asking for quite some time, and nobody has been able or willing to answer. We want to be wrong on this. We want to see what the Boosters and Evangelists allegedly see; we want to join you and bring about this utopia you keep braying about. Yet when we hold your feet to the fire, we get empty platitudes - "UBI", or "the government has to figure it out", or "everyone will be an entrepreneur", or some other hollow argument devoid of evidence or action. We point to AI companies and their billionaire owners blocking regulation while simultaneously screeching about how more regulation is needed, and we are brushed off as hysterical or ill-informed.
I am fundamentally not opposed to a world where AI displaces the need for human labor. Hell, I know exactly what I'd do in such a world, and I think it's an excellent thought exercise for everyone to work through (what would you do if money and labor were no longer necessary for survival?). My concern - the concern of so many, many of us - is that the current systems and incentives in place lead to the same outcome: no jobs, no money, and no future for the vast majority of humanity. The author sees that too, and they're way smarter than I am in the economics department.
I'd really, really love to see someone demonstrate to us how AI will solve these problems. The fact that nobody can or will speaks volumes.
Maybe now we will start seeing a return to the people who are in it for the passion.