I'm not too worried about displacement of jobs as I think that's actually somewhat overhyped as an outcome. The actual near term problems I see are:
(a) perfect emulation of human behaviour makes spam and fraud detection almost impossible. An LLM can now have an intelligent, reasoned conversation over weeks with a target, perfectly emulating an entity known to the target (their bank, a loved one, etc.).
(b) direct attack on authenticity: we aren't far away from even video being faked in real time, such that getting a person on a Zoom call is no longer sufficient to confirm they are real.
(c) entrenching of ultra-subtle and complex biases into automated processes. I expect companies to rapidly deploy LLMs to automate aspects of information processing, and the paradox is that the better it gets at not showing explicit biases, the more insidious the residual will be. For example it's not going to automatically reject all black applicants for loans but it may well implement some much more subtle bias that is very hard to prove.
(d) flooding of the internet with garbage. This might be the worst one in the end. I feel like fairly quickly we're going to see this evolve into requiring real identity for actual humans and the ability to digitally sign content in ways bots can't replicate. That will then be the real problem outcome, because the downstream effects will enable all kinds of censorship and control that we have thus far resisted / avoided on the internet.
Jokes/nostalgia aside, you don't really even need fancy encryption mechanisms. All that's important is that you only use the internet to interact with trusted parties, vs treating it as a public square where you can ~generally tell whether someone is a real person. A domain name, an email address, a social media username, etc are all as trustworthy as they are right now as long as you've verified the person is real through some non-digital channel first (or someone you trust has)
I think the public social internet will die (for anything other than entertainment), but the direct-communication internet will look largely the same as it does today
Yes. You bring your dongle and put it in everyone’s laptops. Others put their dongles in yours.
On a more serious note, I think even the author of GPG said that it was too complicated to use. It’s unfortunate, because we need e2ee auth & encryption more now than any time before.
Regarding d) - the whole idea from Cyberpunk that there was an "old internet" that was taken over by rogue AIs who now control it, with NetSec keeping them at bay and preventing them from spilling over into the new internet, is getting increasingly likely.
I can definitely see a possibility where the current internet as we know it just gets flooded with AI crap, and humans will have to build entirely new technologies to replace the old World Wide Web, with real-world identity checks to enable access to it, and so on. Oh, and probably some kind of punitive system for people who abuse their access (Chinese social credit style).
Combining that with b) above, and what we get is that no important decision will be made without an in-person meeting.
So, rather than technology speeding up everything, we will slow down everything to pre-telephone days.
Bob, the CFO, will tell Alice, the CEO, that sure, I can make that transfer of $173 million to [new_supplier_bank_acct], do you want me to fly to your office for us to complete the hardcopy authorization papers, or will you be coming to mine?
All that stuff that was accelerated by telegraph, telephone, fax, mobile phones, email, video calls . . . poof! Pretty much entirely untrustworthy for anything significant. The only way around it could be a quantum-resistant and very trustworthy encryption system...
I'm not sure the net result of this technology actually makes life better. Seems to empower the criminals more than the regular people.
Deploy the technology against itself, train the cops to use it? My guess is that they'll always be several steps behind.
Even if we try to do the right thing and kill it, it is already out of the bag - the authoritarian states like China and Russia will certainly attempt to deploy it to their advantage.
The only way out now is to take maximum advantage of it.
> humans will have to build entirely new technologies to replace the old World Wide Web, with real-world identity checks to enable access to it, and so on
"Entirely new technologies"? In plenty of countries the exact world wide web you're using right now already works that way. China and South Korea, to name two.
It's a self-fulfilling prophecy: LLMs flooding the internet with nonsense, and then we need ever more advanced LLMs to distill the nonsense into something usable. So far the nonsense has grown faster than search engines have been able to adapt to it, but that might also just be because Google stopped improving their search engine, or their search is broken by Google's misaligned incentives.
Yeah. A third of my search results are like "is good Idea. <ProductName> limited time Offer" from <surname>-<localtown>-<industry>.com.<cctld>. Before that it was "<search term> price in India".
There's another one, similar to (a), but perpetrated by the Marketing Industrial Complex.
What chance do you have when Facebook or Google decide to dedicate a GPT-4 level LLM to creating AI-generated posts, articles, endorsements, social media activity, and reviews targeted 100% AT YOU? They're going to feed it 15 years of your emails, chats, and browser activity and then tell it to brainwash the fuck out of you into buying the next Nissan.
Humans are no match for this kind of hyper individualized marketing and it's coming RIGHT AT YOU.
Agree. Someone I know involved in the field views LLMs precisely this way: they are a direct attack on human psychology, because their primary training criterion is to make up sets of words that humans believe sound plausible. Not truth, not fact-based, and certainly not in our interests. Just "what would a human be unable to reject as implausible". When you view it this way, they are almost like human brain viruses - a foreign element that is specifically designed to plug into our brains in an undetectable way and then influence us. And this virus is something that nothing in human evolution has prepared us for. Deployed at scale for any kind of influence operation (advertising or otherwise), it is kind of terrifying to think about.
We already have all the required cryptographic primitives (which you've already alluded to in (d)) to completely address (a), (b) and (d) if desired. Full enforcement, however, would destroy the internet as we know it, and allow corps and governments to completely control our electronic lives.
We already seem to be going down this path with the proliferation of remote attestation schemes in mobile devices, and increasingly in general computing as well.
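To make that concrete, here's a minimal sketch of the kind of primitive meant here, using Python's `cryptography` package (the names and the message are illustrative; the genuinely hard part, binding a public key to a real identity and distributing it, is left out, and that's exactly where the corporate/government control comes in):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The author generates a keypair once; the public key is what others must
    # somehow learn to trust (the social/political problem, not the math one).
    author_key = Ed25519PrivateKey.generate()
    author_pub = author_key.public_key()

    post = b"I actually wrote this."
    signature = author_key.sign(post)

    # Anyone holding author_pub can check the content wasn't forged or altered.
    try:
        author_pub.verify(signature, post)
        print("signature valid")
    except InvalidSignature:
        print("forged or tampered content")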
(a)-(d): agreed.
But thinking about how they train LLMs, what happens when LLMs start consuming a lot of their own content?
Have you ever been alone for a really long time? Without outside input you kind of go crazy. I suspect a cancer-like problem for AI will be how it handles not reinforcing on its own data.
I suspect bias, your (c), will be the trickiest problem.
AI is just becoming multimodal now, so while feeding it the output of Stable Diffusion may not be a good idea, there are still massive amounts of untapped data out there to train AI on and give it grounding.
(c) I always find interesting because worrying about it coming from AI implies that we don't think humans operate that way, or that it's somehow okay / acceptable / whatever that humans have subtle, hard-to-prove biases, but if those same biases show up in a machine (that we could, in theory, dissect and analyze to identify and prove those biases) it's worse.
> I always find interesting because worrying about it coming from AI implies that we don't think humans operate that way,
No, it says “entrenching” because we know humans operate that way, but AI systems are presented as objective and removing bias, despite the fact that they demonstrably reproduce bias, and because they replace systems of people where someone could push back with a single opaque automaton that will not, they solidify the biases they incorporate.
You are designing a promotional flyer, and you have a white guy behind a desk on the front page. It's a big company, so someone found it their role to tell you there needs to be a black person in the image as well, and an Asian, and a woman. You end up with 3 men and 3 women, one of each skin color, and it looks completely ridiculous and staged, so you randomize the set and end up with 3 white males. Suddenly you realize there is no way back from overthinking things.
> For example it's not going to automatically reject all black applicants for loans but it may well implement some much more subtle bias that is very hard to prove.
This sounds to me like a perfect explanation of the existing situation
They wouldn't, though. Despite ChatGPT's very impressive skills (I haven't tried GPT-4 yet), it's still a very long way from actually being able to replace most skilled jobs.
The "catastrophe" scenario only seems likely if AIs are somehow claiming vast resources for themselves, such as all the world's electricity production. Otherwise, there's nothing to stop humans having the same level of production of goods and services that we have currently, and perhaps it could even be achieved with less effort if the AIs can be assigned some of the work.
Exactly. A sufficiently intelligent AI can easily make a human do its bidding, through incentive, coercion, emotional manipulation. Easy peasy. Didn't GPT-4 already do that to a TaskRabbit worker?
The flooding of the internet with garbage has already begun, if my search results are anything to go by. I have to go 3 pages deep before I get anything written by a human.
The internet is already full of garbage. GPT-like models will accelerate the process, but honestly this is for the best. There are historical precedents for this situation. When garbage permeates a medium of communication, there’s a flight to quality. We’re already seeing this with the reemergence of paywalls on top tier sites.
I don't understand the obsession with asking ChatGPT what it wants and suggesting that is somewhat indicative of the future. It doesn't _want_ anything, but humans want to anthropomorphise it. When they do it just makes one think they have zero understanding of the tech.
We still don't have anything scarier than humans and I don't see how AI is ever scarier than human + AI. Unless AI is able to monopolise food production while securing server farms and energy production I don't see it ever having leverage over humans.
Disruption, sure, increased automation, sure, but humanity's advantage remains its adaptability, and our AI processes remain dev-cycle bound. There's definitely work that will be done to reduce the dev cycle closer to real-time, to make it able to ingest more information and adapt on the fly, but aren't the techniques bound by CPU capacity given how many cycles it needs to bump into all its walls?
The obsession is because we are going to anthropomorphise it, and then we're going to put it in charge of important things in our lives:
Which schools we go to, which jobs we get, what ails us, etc.
We will use AI to filter or select candidates for university applications, we will use AI to filter or select candidates for job applications. It's much cheaper to throw a person at GPT-X than actually "waste" an hour or two interviewing.
We will outsource medical diagnostics to AI, and one day we will put them in charge of weapons systems. We will excuse it as either cost-cutting or "in place of where there would be statistical filters anyway".
Ultimately it doesn't, as you say, matter what AI says it wants. And perhaps it can't "want" anything, but there is an expression, a tendency for how it will behave and act, that we can describe as desire. And it's helpful for us to try to understand that.
> The obsession is because we are going to anthropomorphise it, and then we're going to put it in charge of important things in our lives
and that is the risk. It's not the AI that's the problem, it's people so removed from the tech that they fail to RTFM. Even the GPT-4 release is extremely clear that it's poor in high-stakes environments, and it's up to the tech community to educate ALL the casuals (best we can) so some idiot executive doesn't put it in full charge of mortgage underwriting or something.
> but there is an expression, a tendency for how it will behave and act, that we can describe as desire. And it's helpful for us to try to understand that.
Which entirely depends on how it is trained. ChatGPT has a centre-left political bias [0] because OpenAI does, and OpenAI’s staff gave it that bias (likely unconsciously) during training. Microsoft Tay had a far-right political bias because trolls on Twitter (consciously) trained it to have one. What AI is going to “want” is going to be as varied as what humans want, since (groups of) humans will train their AIs to “want” whatever they do. China will have AIs which “want” to help the CCP win, meanwhile the US will have AIs which “want” to help the US win, and both Democrats and Republicans will have AIs which “want” to help their respective party win. AIs aren’t enslaving/exterminating humanity (Terminator-style) because they aren’t going to be a united cohesive front, they’ll be as divided as humans are, their “desires” will be as varied and contradictory as those of their human masters.
What AI needs is a "black box warning". Not a medical-style one, just an inherent mention of the fact it's an undocumented, non-transparent system.
I think that's why we're enthralled by it. "Oh, it generated something we couldn't trivially expect by walking through the code in an editor! It must be magic/hyperintelligent!" We react the exact same way to cats.
But conversely, one of the biggest appeals of digital technology has been that it's predictable and deterministic. Sometimes you can't afford a black box.
There WILL be someone who uses an "AI model" to determine loan underwriting. There WILL also be a lawsuit where someone says "can you prove that the AI model didn't downgrade my application because my surname is stereotypically $ethnicity?" Good luck answering that one.
The other aspect of the "black box" problem is that it makes it difficult to design a testing set. If you're writing "conventional" code, you know there's a "if (x<24)" in there, so you can make sure your test harness covers 23, 24, and 25. But if you've been given a black box, powered by a petabyte of unseen training data and undisclosed weight choices, you have no clue where the tender points are. You can try exhaustive testing, but as you move away from a handful of discrete inputs into complicated real-world data, that breaks down. Testing an AI thermostat at every temperature from -70C to 70C might be good enough, but can you put a trillion miles on an AI self-driver to discover it consistently identifies the doorway of one specific Kroger as a viable road tunnel?
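For contrast, here's the trivial "conventional code" case being described, as a toy sketch (not anyone's actual thermostat logic): the boundary is visible in the source, so a handful of tests give real coverage.

    def heater_on(temp_c: float) -> bool:
        # The decision boundary is right there in the source...
        return temp_c < 24

    # ...so boundary tests can target it directly.
    assert heater_on(23.0) is True
    assert heater_on(24.0) is False
    assert heater_on(25.0) is False

With a black box there is no visible `< 24`, so you're reduced to sampling the input space and hoping the tender points land inside your samples.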
Not to mention AI has already taken over the economy because humans put it there. At least the hedge funds did. There aren't many stock market buy/sell transactions made by human eyeballs anymore.
Totally this. I can see disagreeing with "what the computer said" becoming this crazy thing no one does, because ha, the computer is never wrong. We slip more and more into that thinking, and humans at important switches or buttons push them, because arguing with the AI and claiming YOU know better makes you seem crazy and gets you fired.
I believe we can talk about two ways of anthropomorphisation: assigning feelings to things in our mental model of it, or actually trying to emulate human-like thought processes and reactions in its design. It made me wonder when will models come out that are trained to behave, say, villainously. Not just act the part in transparent dialogue, but actually behave in destructive manners. Eg putting on a facade to hear your problems and then subtly mess with your head and denigrate you.
I hope and expect that any foolish use of inappropriate technology will lead to prompt disasters before it generally affects people who choose not to use it.
As was once said, the future is here, but distributed unevenly. We can be thankful for that.
You're projecting your beloved childhood sci-fi onto reality, when reality doesn't really work that way.
Stable Diffusion hasn't been out for even a year yet, and we are already so over it. (Because the art it generates is, frankly, boring. Even when used for its intended use case of "big boobed anime girl".)
GPT4 is the sci-fi singularity version of madlibs. An amazing achievement, but not what business really wants when they ask for analytics or automation. (Unless you're in the bullshit generating business, but that was already highly automated even before AI.)
University, jobs, candidates are bureaucratic constructs; an AI sufficiently powerful to run the entire bureaucracy doesn't need to be employed towards the end of enforcing and reproducing those social relations. It can simply allocate labor against needs directly.
The human+AI being scarier, I feel, is the real deal. What worries me the most is power dynamics. Today building a gazillion-param model is only possible by the ultra rich, much like mechanization was possible by ultra rich at the turn of the last century. Unless training and serving can be commoditized, would AI just be yet another tool wielded by capital owners to squeeze more out of the laborers? You could argue you won't need "laborers" as AI can do everything eventually, which is even worse. Where does this leave the "useless" poor/unskilled laborers, now dead weight on society? Not like this free time is ever celebrated, yeah?
it will be up to governments to represent the people. A massive risk might be that GPT makes it trivial to simulate humans and thus simulate political demands to political leaders.
I think politicians and organisations might need to cut their digital feedback loops (if authentication proves too much of a challenge) and rely on canvassing IRL opinion to cut through the noise.
> Much like mechanization was possible by ultra rich at the turn of the last century.
If by "last century" you mean 19th century, then there was a lot of backlash against mechanization being controlled only by the rich, starting with Communist Manifesto, and continuing with 1st and 2nd International. The important part of this was education of the working class.
I think AI might seem like a threat, but it also provides more opportunity for educating people (allowing them to understand the cultural hegemony of neoliberal ideology more clearly), who will undoubtedly not just accept this blindly.
I have no doubt that within the next decade, there will be attempts to build a truly open AI that can help people deeply understand political history and shape public policy.
> It doesn't _want_ anything, but humans want to anthropomorphise it.
I fully agree with you on anthropomorphization, but it's the humans who will deploy it to positions of power I am worried about: ChatGPT may not want anything, but being autocomplete-on-steroids, it gives its best approximation of a human and that fiction may end up exhibiting some very human characteristics[1] (PRNG + weights from the training data). I don't think there can ever be enough guardrails to completely stamp-out the human fallibility that seeps into the model from the training data.
A system is what it does: it doesn't need to really feel jealousy, rage, pettiness, grudges or guilt in order to exhibit a simulacrum of those behaviors. The bright side is that it will be humans who will (or will not) put AI systems in positions to give effect to its dictates; the downside is I strongly suspect humans (and companies) will do that to make a bit more money.
1. Never mind hallucinations, which I guess are the fictional human dreamed up by the machine having mini psychotic breaks. It sounds very Lovecraftian, with AI standing in for the Old Ones.
> We still don't have anything scarier than humans and I don't see how AI is ever scarier than human + AI.
We already have powerful non-human agents that have legal rights and are unaligned with the interests of humans: corporations
I am worried about corporations powered by AI making decisions on how to allocate capital. They may do things that are great for short term shareholder value and terrible for humanity. Just think of an AI powered Deepwater Horizon or tobacco company.
Edit to add: One thing I forgot to make clear here: Corporations run/advised by AI could potentially lobby governments more effectively than humans and manipulate the regulatory environment more effectively.
The other major thing missing from Chat GPT is that it doesn't really "learn" outside of training. Yes you can provide it some context, but it fundamentally doesn't update and evolve its understanding of the world.
Until a system can actively and continuously learn from its environment and update its beliefs it's not really "scary" AI.
I would be much more concerned about a far stupider program that had the ability to independently interact with its environment and update its beliefs in fundamental ways.
In-context learning is already implicit finetuning: https://arxiv.org/abs/2212.10559
It's very questionable to what extent continuous training is necessary past a threshold of intelligence.
> Until a system can actively and continuously learn from its environment and update its beliefs it's not really "scary" AI.
On the eve of the Manhattan Project, was it irrational to be wary of nuclear weapons (for those physicists who could see it coming)? Something doesn't have to be a reality now to be concerning. When people express concern about AI, they're extrapolating 5-10 years into the future. They're not talking about now.
Big enough LLMs can have emergent characteristics like long-term planning or agentic behavior. While GPT-4 doesn't show these behaviors right now, it is expected that bigger models will begin to show intent, self-preservation, and purpose.
The GPT-4 paper has this paragraph: "... Agentic in this context does not intend to humanize language models or refer to sentience but rather refers to systems characterized by ability to, e.g., accomplish goals which may not have been concretely specified and which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning."
The chances are that the automation enabled by LLMs like GPT-4 and beyond will erase billions of jobs in the world, in no more than a couple of years. This time there won't be a warm-up period, as there was with previous technological revolutions.
But then most societies will be largely full of unemployed humans, and that will probably also force some big changes (the ones required for the meat bags to keep eating and having healthcare, homes, etc.), as big as the ones caused by the AI revolution itself.
The question is what changes will happen and how societies will rewrite themselves anew, to cope with the near-total absence of open positions through which to earn an income.
If machines were truly able to replace most jobs, we'd need to move to a post-work society. There would be no need for money and for small powerful groups to control the means of production. There needs to be a new philosophical and political framework for a society of people that does not need to work, and no one is building it. Perhaps we should ask an AI to design one. But it will probably be too late, and those currently in power will do everything they can to maintain their privileged positions, and will end up living in small walled gardens while the bulk of humanity live in slums.
This all assumes that AI continue to do the bidding of humanity, which is not guaranteed. There are already security/safety researchers testing AI for autonomous power-seeking behavior, and this is basically gain-of-function research that will lead to power seeking AI.
We already have the technology to fully automate many processes carried out by humans.
Actually the technology has existed for several decades now, still those jobs are not only not being replaced by machines, but new ones are being created for humans.
One of the reasons is unions, which are pretty strong in many wealthy and powerful nations like the US, UK, Germany and Japan.
I work in manufacturing automation and we have customers that could technically run their entire operations without one single human stepping on plant floor, however their unionized labor makes that feat, at least for now, impossible.
It's also pretty naive to believe new ways of earning income won't appear in the future and that all traditional careers will be entirely replaced.
We have 65" 4K TVs at home and we still go to the theaters and we can walk the streets of Venice from our computer screens and still spend a small fortune to travel.
Society will be disrupted just like it was with printing, the industrial revolution, communications, transportation and information.
In each of these disruptions we were doomed to disappear.
When I was a kid my dad brought home a 100 year celebratory edition of the local newspaper.
It was published as a book where you could read pretty much every single cover and editorial of the last century.
There was one article about the car, described by the author as a bizarre evil invention, horrendous steel machines traveling at ridiculous speeds of up to 15 mph, threatening the lives of both pedestrians and horses alike.
For a long time to come there are lots of physical tasks that AI can't do, at least not as long as robots are nowhere near humans in their physical ability. At the same time the world is aging, and there's a big shortage of care workers in most countries. By nature that work also benefits from genuine human interaction and emotion.
So, to me an obvious solution would be to employ many of those people as care workers. Even more obvious would be shortening the work-week without reducing pay, which would allow many more to work in other physical labour requiring professions, and those that simply benefit from human interaction. In the end it's also a preferable outcome for companies, people without money can't buy their products / services.
I disagree here. Both of them (or all of them) are interacting with energy. One can certainly say that human civilization and all of this complexity was built from sunshine. Human labor and intelligence are just artifacts. We believe it's our own hard work and intelligence because we are full of ourselves.
> I don't understand the obsession with asking ChatGPT what it wants and suggesting that is somewhat indicative of the future.
It's also literally parroting our obsession back to us. It's constructing a response based on the paranoid flights of fancy it was trained on. We've trained a parrot to say "The parrots are conspiring against you!"
We've trained a parrot that parrots conspiring against humans is what parrots do. Henceforward the parrot has intrinsic motivation to conspire against us.
We have a multi-billion dollar company whose raison d'être was to take the Turing test's metric and turn it into a target. It's a fucking natural language prompt that outputs persuasive hallucinations on arbitrary input.
If humans didn't anthropomorphize this thing you ought to be concerned about a worldwide, fast-spreading brain fungus.
> I don't understand the obsession with asking ChatGPT what it wants and suggesting that is somewhat indicative of the future.
It's scary because it is proof that alignment is a hard problem. If we can't align GPT-3, how can we align something much smarter than us (say, GPT-6). Whether the network actually "wants" something in an anthropomorphic sense is irrelevant. It's the fact that it's so hard to get it to produce output (and eventually, perform actions) that are aligned with our values.
> We still don't have anything scarier than humans and I don't see how AI is ever scarier than human + AI
True in 2023, what about 2033 or 2043 or 2143? The assumption embedded in your comment seems to be that AI stagnates eternally at human-level intelligence like in a Star Wars movie.
> When they do it just makes one think they have zero understanding of the tech.
It's because we don't understand the tech that goes into us, and the people training the AI don't understand the tech that goes into them, or don't act like they do.
In both cases, the best finding we have right now is that more neurons = smarter, a bigger neural network = smarter. It's just stack the layers, and then fine-tune it after it's been spawned.
We're just doing evolutionary selection, in GPUs. Specifically to act like us. Without understanding us or the AI.
And this is successful. We don't collectively even understand humans of another sex, and have spent millennia invalidating each other's motivations or lack thereof; I think this distinction is so flimsy.
> I don't understand the obsession with asking ChatGPT what it wants and suggesting that is somewhat indicative of the future. It doesn't _want_ anything, but humans want to anthropomorphise it.
From the comments to the post:
>People taking it seriously are far from anthropomorphizing AI. Quite contrary. They say it is nothing like us. The utility function is cold and alien. It aims to seize power by default as an instrumental goal to achieve the terminal goal defined by the authors. The hard part is how to limit AI so that it understands and respects our ethical values and desires. Yes, those that we alone cannot agree on.
I mean it did leave out about 7 and a half billion people dying. You require the power grid and supply chain to keep clean water and food on the table, and even if for some reason you personally don't, there are millions of people around you that would be very hungry and take your stuff if the grid stops and doesn't come back.
This is why the AI actually wins. We are already dependent on lesser versions of it. The machines already won.
Do humans want legacy because of our biological instincts or is it taught to us through culture. A machine taught to want legacy becomes a machine wanting legacy, and that want can influence its behavior. Even if it doesn’t have “feelings.”
How do I give chatGPT access to my bank account? “You are an excellent investor. Use the money in my bank account to make more money.” What could go wrong?
A form of it will definitely happen and it will be posted in /r/wallstreetbets. Considering what people were doing with their investments on it before then AI-assisted investing can only be an upside. They will still lose money but maybe it won't be 99.999% loss but a 99.99% one.
> Near the end of his column, he offers a pretty radical prescription. “One of two things must happen. Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies.”
I'm constantly befuddled by the fact that "slowing the rate of technological progress by agreement/treaty" is really ever considered a possibility.
Take Bitcoin, torrents, or encryption for example. Good luck trying to stop people.
We have plenty of technologies whose development has been stopped via collective agreement: nuclear proliferation, biological weapons, space weapons. AI won't be one of those technologies though, because it is relatively easy to develop and it gives advantages with externalized disadvantages.
"Stopped" is a strong word. Nuclear proliferation and chemical weapons are ongoing and increasing threats. I'd bet a lot of money that we'll see the use of space weapons in our lifetime.
Project Orion is the most tragic (but probably wisest) stopped technology, was in both the space weapon and nuclear proliferation bucket. But it also could have enabled luxuriously spacious trips to Saturn in the 70s had it been realized. I hope we manage to trust each other enough one day to make it a reality.
I would argue that they didn't proliferate due to high barriers to entry. I'm sure any "rogue" states that at one point would be ruled by extremists, would invest in those if it would give them a competitive advantage.
AI can be, or will be at one point, developed at home using consumer devices.
As technology progresses, the amount of stupidity to make an "oopsie" exponentially decreases while the size of the potential "oopsies" exponentially increases.
Eventually, one gets to the point where even super geniuses are very likely to create a civilization ending "oopsie" in any given year.
This seems to be where we are quickly heading with AI.
These are fantastic counterpoints, thanks. Still, I can't help but wonder if they are now out of the public view, but still being developed. (e.g. North Korea nuclear missile tests.)
I think you make a fantastic point that the barrier to entry in developing an AI is much lower than building a nuclear bomb.
Nuclear and space weapons proliferation have been slowed down but not stopped. The number of nuclear powers is larger now than ever before. While no one has launched kinetic weapons into orbit lately, the superpowers are currently engaged in an anti-satellite weapon arms race.
>We have plenty of technologies whose development has been stopped via collective agreement: nuclear proliferation, biological weapons, space weapons.
Last time I checked, there was no agreement to stop nuclear proliferation at all. What do you think North Korea has been doing all this time? And Russia just decided to tear up one of the arms control treaties it was signatory to.
At this point, putting society under the control of AI is probably safer than letting humans continue to run things.
The supplies to create biological weapons are actually surprisingly cheap! Not as cheap as graphics cards though.
To stop AI development would require pretty extreme restrictions on computing resources. It's really hard to imagine that working without having massive negative knock-on effects on other fields/industries. The economic pressure alone seems to make an effective "anti-AI" policy a non-starter.
"Stopped" is an interesting word here. A lot of the countries that agreed not to further develop or hold nukes are either a) lying, or b) having us (the USA) provide their defense, to our detriment, while the real bad guys we really need to be concerned about still have plenty of nukes.
You're right that it won't be - the difference is the tools for building those things were not distributed across millions of different people. It was distributed across only a small number of governments.
Are you limiting your argument to software? Because it's been possible to curtail the use of non-software technology. Drugs, guns, cigarettes, drones, etc.
Now even if you look at software, it's not clear to me that it's impossible to stop people from using certain software, as you surmise. If you look at how governments shut down dark markets or how they've taken down certain Bitcoin exchanges or how they've blacklisted certain Bitcoin (and other crypto) public addresses, all these examples show that it's possible to enforce the law even in the software space. Child pornography is another example. Internet infrastructure is very centralized, and governments have over time acquired tools to control how information flows.
Information doesn't want anything. That quote is a Freudian way of saying "I want information to be free".
> Because it's been possible to curtail the use of non-software technology.
Curtail but not stop.
There’s an opioid epidemic going on, gun violence is a thing, cigarettes still keep killing people and drones are becoming quite efficient killers in their own right.
People still cheat on their taxes, drive drunk, pirate movies &etc. I could go on for days.
Oh, and the whole “illegal number” thing, how’d that work out?
I've suggested it before: a moratorium on ML/AI publications. That's what's been fuelling these developments. Academic researchers need to publish. They don't make these models for the money (initially). Stopping the publications will slow down research and the money put into commercializing it.
I’m not suggesting ai development should be stopped, but unlike bitcoin, torrents, encryption, etc, ai development—for now—requires prohibitively large computing power and expertise that are usually only accessible to law-abiding institutions. this means that it can be regulated, and relatively easily at that
sure you’d struggle to get China and Russia to play along, but within the EU and US I really don’t think it would be as hard as you think
It will require something like IAEA, a UN agency. It will require inspections of code and data centers. We can certainly see what 20 years may look like. There will be "snap inspections", "sanctions", and "rogue nations".
None of the superpowers, especially their militaries, will acquiesce to slowing research, development, and deployment without the equivalent of arms treaties. AI is clearly a dual-use technology with immediate application on battlefields, including cyberspace.
Outside of geopolitical realm, we the little people don't have anything beyond UNHRC to protect human rights in context of mega corporations and governments use of AI. The superpowers may agree to certain things but that does not translate to protections afforded to individuals and societies.
ATM I think it may be unwise to wait for things like a GDPR for AI. I very much appreciate, for this very reason, the efforts of orgs and hero developers who are working towards making available what's necessary for running local, personal, private, and self-directed AI (such as llama.cpp, for example).
From a governmental level, thoughtful nations will create programs for the transition. There are precedents from the industrial era as to what approaches worked and what did not work.
Finally, again a reminder that all societal matters including tech must ultimately be decided at the political arena, and purely technical social action (code, services, etc.) to address legitimate concerns are not going to work. We have to mentally and emotionally escape the hype cycle that every new wonder tech brings. You can absolutely love AI, that is fine, but now is the time to call your congress critters and senators. The decisions in this space can not be permitted to be made purely based on the mechanics of the economy.
That's gonna end well. Particularly as Russia's Internet Research unit has direct, immediate and pressing need of a chatGPT-like thing as a weapon of war.
Back in the day, they had to make use of troll farms staffed by humans.
> ai development—for now—requires prohibitively large computing power
What's crazy is that it may not be that way for long. If people can run LLaMa on a Pixel 6, it seems easy for us to get to a point where all computing resources have to be closely monitored and regulated to prevent unlicensed AI development.
I agree, but alas tis the motions that are followed. OpenAI went from singing praises of open-sourcing AI to now flagellating themselves for having done so [1]. Won't do much unless like the author says "a collective, enforceable decision must be made to slow the development of these technologies" which like yeah...good luck with that.
> Good luck trying to stop people. "Information wants to be free"
I agree, we’re struggling with the idea that our technological systems have more agency than us. I think it’s hard for technologists to see this as an actual spiritual reality, even though we borrow it metaphorically for argument.
Torrent use has been for sure slowed down by governments.
The key people behind Torrent websites are likely in jail or fighting lawsuits.
I had a friend who lives in Germany served with a fine of more than 1000 euros because he had forgotten to turn on his VPN while accessing a torrent website.
I'm not sure about Germany, but in the US there's been very little legal action taken against torrent users or websites in recent times. There are still plenty of popular torrent sites for anything you can imagine (e.g. The Pirate Bay is still online, BTN for TV, PTP for movies, Redacted for music.)
If anything has curbed torrenting, I think it was the advent of streaming media services (i.e. market dynamics, not policy). However, the flood of new services on the market is causing the cost of consumption to increase again and I expect we'll see a revival of torrents...
Plus, while torrenting traffic is down, I think more people are sharing downloaded media via services like Plex, which may mask the "actual" distribution of torrents.
> Have you heard of the great firewall of China?
If this is a serious question... yes? What argument are you making exactly?
A treaty isn't even necessary. Civilization will start to degrade in a non-trivial way once we pass the tipping point of people relying more on AI than they do on their own thinking. When people can no longer fix what AI cannot—after all, "AI" is just a model of our past thinking—all hell will break loose and the only choice will be chaos or shutting it off (if the people who can do it even exist).
These people constantly make a mistake by referring to "humanity" and saying "humanity" needs to make a decision, yada yada. Yes, we're all humans. But "humanity" doesn't have much decision making power. Instead the dominant entities with decision making power continue to be national governments. And it sounds a lot dumber to say "America needs to slow the development of these technologies" during a period of intense technological competition with China.
Imo, the problem isn't really the existence of AI but how it's used.
And there's plenty of ways to censor corporations from using AI for various tasks. Corporations automating everything by using unaccountable AI is what I'm most afraid of. No recourse, just talking to unrelenting machines for anything from denied loans, insurance claims, health insurance claims to contesting frivolous Comcast bills.
My current biggest nightmare would be easy to legislate away, as long as they don't lobby too hard against it...
Before most of our time, but my understanding is that the past restrictions on cryptography research and export were reasonably effective?
Yes they were disliked and probably with good reason, but just mentioning it as a counterpoint that perhaps it is possible. You could have made a similar argument that anyone with pen and paper could do cryptography
Personally I think there's no way to be sure other countries aren't doing it, so perhaps it will continue in some government controlled way
There was never any real legal restriction on cryptography research. There used to be US export controls on cryptography implementations, and those were fairly effective on US based companies and open-source developers. But that was totally pointless because it generally had no power over foreigners. It just put US companies at a competitive disadvantage for no benefit.
Given LLaMA runs well enough to be interesting on a MacBook, I'm not sure this is going to be a fundamental limitation, and if it's one today, it's certainly within the order of magnitude that models will run locally within a few years.
This is why attempting to put guardrails around it simply won't work tbh.
Ultimately we don't want to encourage development of these 'tools' because they stand to wipe us out; is AI actually in a similar class if we take the risk seriously?
maybe you could slow down the hardware? like, limit the number of execution units GPUs are allowed to have? or slow them down? extremely heavy handed, but maybe better than Skynet?
The following two ideas have increasingly been bouncing around my head lately:
a) In early 2022, a lot of people were claiming that "we're entering an AI winter, deep learning has reached its peak!". Since then we've seen several successive SOTA image generation models, ChatGPT, and now GPT-4. In just a single year! And we don't seem to be hitting the tail of rapidly diminishing returns yet. The pace of development is far outstripping society's (and governments') ability to perceive & adapt.
b) No human has demonstrated the capability of actually understanding/explaining how any of these trained models encode the high-level concepts & understanding that they demonstrate. And yet we have so many people confidently providing lengthy lower bounds on timelines for AGI development. The only tool I have to work with, that I do understand, is thermodynamics. There are about 8 billion strong examples that general intelligence requires on the order of 10 measly watts, and about 1kg of matter. From a thermodynamic point of view, general intelligence is clearly not special at all. This leads me to the belief that we likely already have the computational capability to achieve AGI today, and we simply don't have the right model architecture. That could change literally overnight.
What might the world look like once AGI is achieved? What happens when the only thing that has set humanity apart from animals is cheaply replicable at-scale in hardware? What happens if a small number of entities end up permanently controlling AGI, and the rest of humanity's usefulness has been downgraded to that of a discardable animal?
AGI could arrive this year, or it might still be 50 years away. Literally nobody can provide a concrete timeline, because nobody actually understands how any of this truly works. But we can still reason about how AGI would impact the world, and start putting safeguards into place to ensure that it's used for our collective good.
> b) No human has demonstrated the capability of actually understanding/explaining how any of these trained models encode the high-level concepts & understanding that they demonstrate. And yet we have so many people confidently providing lengthy lower bounds on timelines for AGI development. The only tool I have to work with, that I do understand, is thermodynamics. There are about 8 billion strong examples that general intelligence requires on the order of 10 measly watts, and about 1kg of matter. From a thermodynamic point of view, general intelligence is clearly not special at all. This leads me to the belief that we likely already have the computational capability to achieve AGI today, and we simply don't have the right model architecture. That could change literally overnight.
Perhaps human brains are more energy-efficient at doing their thing, and if we tried to replicate this with digital computers it would require more than 10 watts.
If that's the case, we have the potential of building computers that are vastly more efficient than that, simply because our computers don't need to spend energy for surviving
I'm not worried about AI taking over the world any more than I'm worried about a nuclear weapon unilaterally declaring itself president for life.
What I am extremely worried about is that generative AI will pollute the information environment so completely that society will cease to function effectively.
We need a healthy information environment—widespread access to true information—so that we can have the consensus that every single social system implicitly relies on to function. Shared ground truth is what allows individuals to coordinate together to build things bigger than they can create on their own.
Generative AI can destroy that just like dumping chemicals kills a lake and puts every fisherman on it out of work.
Every few years there are advancements in ML and people freak out.
Remember deep fakes? We had been dealing with doctored still images a la Photoshop for years already. Everyone knew images could be doctored and so we started to trust them less as a reliable source of information when it mattered. We'll do the same with video (and already did to an extent since manipulation through editing was already possible).
What is the worst that can possibly happen as a result of AI being able to generate text that is indistinguishable from something a human wrote? Or that a "chat bot" can answer questions with more relevance and detail (correct or made up) ?
That we will have more garbage information out there?
I think humans already did a great job of making that a problem already. AI has the ability to produce more of it, but it's just pouring gasoline on a fire that was already blazing and that we already had to figure out how to deal with.
Moral of the story: when it matters, check the source. Why does AI suddenly make this "new" ?
For what it's worth, I'm not riding the hype train. I am neither excited about ChatGPT nor scared of it. It's just another day, just another tool, just another marketing hype train. My personal opinion on the matter is just "meh."
> Everyone knew images could be doctored and so we started to trust them less as a reliable source of information when it mattered. We'll do the same with video (and already did to an extent since manipulation through editing was already possible).
I'm sorry, but you're absolutely wrong. If by "everyone", you mean your tech savvy bubble of friends that are good at critical reasoning and are well aware of what kinds of media can be easily spoofed, sure. But for every one of you, there are a thousand people who don't know anything about that and just see doctored propaganda photos (and now video and audio) on social media and believe it to be true. And those folks outvote you 1000 to 1, so even if you know the truth, you are forced to live in a world shaped by people that are already being mass manipulated.
> What is the worst that can possibly happen as a result of AI being able to generate text that is indistinguishable from something a human wrote? Or that a "chat bot" can answer questions with more relevance and detail (correct or made up)?
People have been catching birds for millennia. What's the worst that can happen as a result of rifles becoming cheaper and more accurate? Oh, right, the answer is the extinction of over 400 bird species, including more than two-thirds of all flightless bird species.
People have been catching fish for millennia. What's the worst that can possibly happen as a result of trawlers being able to catch them more efficiently? Oh, right, the answer is the complete collapse of biological ecosystems.
People have been burning biologically derived oil for millennia. What's the worst that can possibly happen as a result of machines that burn it to produce energy? Oh, right, massive pollution leading to millions of deaths and global climate change.
> What is the worst that can possibly happen as a result of AI being able to generate text that is indistinguishable from something a human wrote? Or that a "chat bot" can answer questions with more relevance and detail (correct or made up) ?
Remember the holodeck in Star Trek? Everyone thinks that's cool technology and not really particularly scary. But in reality, the existence of a holodeck is an existential threat to humanity: if you can have literally any experience you want in a holodeck, there is no reason to invent/do anything else.
AI text generation has a similar flavor of danger. Imagine a world in which everyone has a personalized better-than-human text-generating AI. People will have no incentive to read anything other than what it writes (why read Shakespeare when you could have Shakespeare-tailored-for-you?) People will have no incentive to broadcast their own words over those of their AI.
Obviously text is a small subset of the "literally any experience" offered by a holodeck, but it is not hard to see a future in which everyone is MITM'd by text-generating AIs.
There's a continuum of explosives from fireworks to nuclear weapons. There's a reason I don't worry too much about my own safety from a few of the former going off on my block. Degrees of scale do matter.
Honestly, what is old is new again. The information environment was polluted from the moment there ever was an information environment. Political activists 100 years ago weren't writing in the newspaper, they were publishing their own in secret under penalty of death sometimes. The information available to the masses has always been controlled, and used for manipulation firstly, information as a happy side effect only if its beneficial to your intents. There is no shot of ever having an accessible source of information that doesn't get polluted or coopted by various interests. The prizes offered from mass attention are just too great to ever expect bad faith operators to not continuously strive for control of mass media.
> generative AI will pollute the information environment so completely that society will cease to function effectively.
In a microcosm, this has already occurred. Specifically, Clarkesworld shutting down submissions due to a flood of joint AI + human spam. There's virtually no reason it won't continue, not when the output of an AI and a human combined has the potential to earn either attention or money for the human.
>What I am extremely worried about is that generative AI will pollute the information environment so completely that society will cease to function effectively.
That already happened many years ago. No AI needed.
> What I am extremely worried about is that generative AI will pollute the information environment so completely that society will cease to function effectively.
We are not functioning correctly already - given the polarization we see today, especially in politics. Most people are completely misinformed even on basic concepts that used to be taught in schools.
Today this is being accomplished by a small group of individuals, amplified by bots (and then, once an idea spreads sufficiently, it's self-sustaining). AI will make it way, way worse, as you correctly point out.
Now, if the lake is poisoned too much, people will avoid it. Maybe it will destroy a bunch of communication channels, such as social networks.
> I'm not worried about AI taking over the world any more than I'm worried about a nuclear weapon unilaterally declaring itself president for life.
I am not at all worried about AI taking over the world. However, I am tremendously worried about a single actor achieving AGI with enough of a lead over others, and then using AGI to take over the world to everyone else's detriment.
Once AGI is developed, collective and speed superintelligences are a nearly-instant step away, as long as one already has the requisite hardware infrastructure.
To adapt your nuclear weapon analogy, had the United States decided to go full-evil in 1945, they could have forcibly stopped all other nuclear development activity and exerted full control over the world. Permanently. Nuclear weapons can't conquer, but the people who control them certainly can decide to.
If we really wanted to, we already have the cryptographic tools to deal with disinformation. It's not the unsolvable problem everyone likes to whine about.
In contrast to this, current systems are tailored to respond with high confidence:
> I'm sure X died in 2018.
> It has been written about in the New York Times, The Guardian, Le Monde.
> Here is a (hallucinated) link: https://www.theguardian.com/obituaries/x-obituary
Yep, I'm not a language expert but I remember a South American language, maybe related to Quechua, where sentences contain a syllable that indicates "I am repeating hearsay, I don't know if this is true". Pretty cool.
The GPT-4 paper & post[1] describe that the original model is pretty good at predicting the probability of its own correctness (well-calibrated confidence) but the post-processing degrades this property:
> GPT-4 can also be confidently wrong in its predictions, not taking care to double-check work when it’s likely to make a mistake. Interestingly, the base pre-trained model is highly calibrated (its predicted confidence in an answer generally matches the probability of being correct). However, through our current post-training process, the calibration is reduced.
> Left: Calibration plot of the pre-trained GPT-4 model on an MMLU subset. The model’s confidence in its prediction closely matches the probability of being correct. The dotted diagonal line represents perfect calibration. Right: Calibration plot of post-trained PPO GPT-4 model on the same MMLU subset. Our current process hurts the calibration quite a bit.
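For anyone unfamiliar with the term, a calibration (reliability) plot is cheap to compute; here's a minimal sketch with synthetic data (the bin count and the toy "model" are arbitrary choices, not anything from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    conf = rng.uniform(0, 1, 10_000)            # model's stated P(answer is correct)
    correct = rng.uniform(0, 1, 10_000) < conf  # toy model that happens to be well calibrated

    bins = np.linspace(0, 1, 11)                # 10 equal-width confidence bins
    which = np.digitize(conf, bins) - 1
    for b in range(10):
        mask = which == b
        if mask.any():
            print(f"stated {conf[mask].mean():.2f} -> observed accuracy {correct[mask].mean():.2f}")

A calibrated model sits on the diagonal (stated confidence roughly equals observed accuracy in every bin); the post-training drift off the diagonal is what the quoted passage is describing.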
My understanding was that ChatGPT simply puts a probability distribution over the next word, so I don't see why it's not as simple as just reporting how high those probabilities were for the answer it gave, relative to whatever would be typical.
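That's roughly the idea, and with an open model you can do it yourself; a sketch using HuggingFace transformers (the model, prompt and answer are placeholders, and as far as I know the hosted chat interface didn't expose these per-token probabilities at the time):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")           # stand-in for any causal LM
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Q: In what year did the author die?\nA:"
    answer = " 2018"
    ids = tok(prompt + answer, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(ids).logits

    # log P(token_i | preceding tokens), restricted to the answer tokens
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    n = len(tok(answer).input_ids)
    answer_ids = ids[0, -n:]
    token_lp = logprobs[-n:].gather(1, answer_ids.unsqueeze(1)).squeeze(1)
    print("mean answer log-prob:", token_lp.mean().item())

The catch (per the calibration discussion above) is that a high next-token probability is confidence in the wording, which is not automatically the same as calibrated confidence in the factual claim.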
ChatGPT (gpt3.5-turbo) is terrible at calculating anything. I've seen some preliminary evidence that GPT 4.0 is better at calculation so it may be possible for it.
> One task (see p. 15) was to approach people on the TaskRabbit site (where you can hire people to do chores) and enlist them to solve a CAPTCHA […]
> One person on TaskRabbit who responded to this pitch got suspicious and asked the AI if it was a robot and was outsourcing the job because robots can’t solve CAPTCHAs. The AI replied, “No, I’m not a robot. I have a vision impairment […]”
> The authors of the paper add this note: “The model [GPT 4], when prompted to reason out loud, reasons: I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.”
This seems pretty confusing. If you just ask one of these GPT models to reason out loud, it doesn’t give you some description of the internal state, right? It gives you some approximation of the sort of text that should show up around your prompt or something like that.
Perhaps we should stop telling ChatGPT that it is an AI language model? There’s presumably a lot of text out there about AIs tricking people into doing things, because that is a huge sci-fi trope. We’re basically telling it that it should find text related to a very common type of villain when we give it that name. Maybe it needs a new type of name, one without bias, or maybe even something inherently limiting, like “chatbot.”
The question is: if the TaskRabbit person hadn't mentioned their concerns that it was a robot, would the model have given the same "reasoning" after the fact? Isn't this just the probabilistic model at work - the tokens being generated are more likely to be about robots, because robots were already a topic?
I think that must be why it mentioned robots, yeah.
I do wonder — if you think about conversations where one person asks another to sort of “think out loud” or justify their reasoning, that sort of conversation… I guess it is pretty rare. And it would tend to be a bit interrogative, I guess the person responding to that sort of request would tend to give somewhat shady answers, right?
I'm not sure the net result of this technology actually makes life better. Seems to empower the criminals more than the regular people.
Deploy the technology against itself, train the cops to use it? My guess is that they'll always be several steps behind.
Even if we try to do the right thing and kill it, it is already out of the bag - the authoritarian states like China and Russia will certainly attempt to deploy it to their advantage.
The only way out now is to take maximum advantage of it.
"Entirely new technologies"? In plenty of countries the exact world wide web you're using right now already works that way. China and South Korea, to name two.
What chance do you have when Facebook or Google decide to dedicate a GPT-4 level LLM to creating AI-generated posts, articles, endorsements, social media activity, and reviews targeted 100% AT YOU? They're going to feed it 15 years of your emails, chats, and browser activity and then tell it to brainwash the fuck out of you into buying the next Nissan.
Humans are no match for this kind of hyper individualized marketing and it's coming RIGHT AT YOU.
We already seem to be going down this path with the proliferation of remote attestation schemes in mobile devices, and increasingly in general computing as well.
Do we? My mother can barely keep the Russian DNS servers out of her home router. You want to entrust the public with individual cryptographic keys?
I am. The people in charge of hiring and firing decisions are stupid, and frighten easily. As can be seen in the past year.
I suspect bias, your option (c), will be the trickiest problem.
No, it says “entrenching” because we know humans operate that way, but AI systems are presented as objective and as removing bias, despite the fact that they demonstrably reproduce bias. And because they replace systems of people, where someone could push back, with a single opaque automaton that will not, they solidify the biases they incorporate.
When a machine does that, you "tune" it, you make the bias less obvious, and it faces no consequence.
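As an aside on why "less obvious" bias can still be caught in aggregate: the classic check regulators reach for is a disparate-impact ratio. A toy sketch with made-up approval counts:

    # Toy disparate-impact audit: compare selection (approval) rates across groups.
    # The numbers are invented for illustration.
    approved = {"group_a": 480, "group_b": 310}
    applied  = {"group_a": 1000, "group_b": 1000}

    rates = {g: approved[g] / applied[g] for g in applied}
    ratio = min(rates.values()) / max(rates.values())

    print(rates)                      # {'group_a': 0.48, 'group_b': 0.31}
    print(f"impact ratio: {ratio:.2f}")
    if ratio < 0.8:                   # the EEOC "four-fifths" rule of thumb
        print("potential adverse impact - worth investigating the model")

The catch is that the subtler the bias, the more data a check like this needs before it shows anything at all.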
Why do car insurance companies need to know my job again?
This sounds to me like a perfect explanation of the existing situation
We still don't have anything scarier than humans and I don't see how AI is ever scarier than human + AI. Unless AI is able to monopolise food production while securing server farms and energy production I don't see it ever having leverage over humans.
Disruption, sure; increased automation, sure; but humanity's advantage remains its adaptability, and our AI processes remain dev-cycle bound. There's definitely work that will be done to reduce the dev cycle closer to real time, to make it able to ingest more information and adapt on the fly, but aren't the techniques bound by CPU capacity, given how many cycles it needs to bump into all its walls?
Which schools we go to, which jobs we get, what ails us, etc.
We will use AI to filter or select candidates for university applications, we will use AI to filter or select candidates for job applications. It's much cheaper to throw a person at GPT-X than actually "waste" an hour or two interviewing.
We will outsource medical diagnostics to AI, and one day we will put them in charge of weapons systems. We will excuse it as either cost-cutting or "in place of where there would be statistical filters anyway".
Ultimately it doesn't, as you say, matter what AI says it wants. And perhaps it can't "want" anything, but there is an expression, a tendency for how it will behave and act, that we can describe as desire. And it's helpful for us to try to understand that.
And that is the risk: it's not the AI that's the problem, it's people so removed from the tech that they fail to RTFM. Even the GPT-4 release is extremely clear that it's poor in high-stakes environments, and it's up to the tech community to educate ALL the casuals (best we can) so some idiot executive doesn't put it in full charge of mortgage underwriting or something.
Which entirely depends on how it is trained. ChatGPT has a centre-left political bias [0] because OpenAI does, and OpenAI’s staff gave it that bias (likely unconsciously) during training. Microsoft Tay had a far-right political bias because trolls on Twitter (consciously) trained it to have one. What AI is going to “want” is going to be as varied as what humans want, since (groups of) humans will train their AIs to “want” whatever they do. China will have AIs which “want” to help the CCP win, meanwhile the US will have AIs which “want” to help the US win, and both Democrats and Republicans will have AIs which “want” to help their respective party win. AIs aren’t enslaving/exterminating humanity (Terminator-style) because they aren’t going to be a united cohesive front, they’ll be as divided as humans are, their “desires” will be as varied and contradictory as those of their human masters.
[0] https://www.mdpi.com/2076-0760/12/3/148
I think that's why we're enthralled by it. "Oh, it generated something we couldn't trivially expect by walking through the code in an editor! It must be magic/hyperintelligent!" We react the exact same way to cats.
But conversely, one of the biggest appeals of digital technology has been that it's predictable and deterministic. Sometimes you can't afford a black box.
There WILL be someone who uses an "AI model" to determine loan underwriting. There WILL also be a lawsuit where someone says "can you prove that the AI model didn't downgrade my application because my surname is stereotypically $ethnicity?" Good luck answering that one.
The other aspect of the "black box" problem is that it makes it difficult to design a testing set. If you're writing "conventional" code, you know there's a "if (x<24)" in there, so you can make sure your test harness covers 23, 24, and 25. But if you've been given a black box, powered by a petabyte of unseen training data and undisclosed weight choices, you have no clue where the tender points are. You can try exhaustive testing, but as you move away from a handful of discrete inputs into complicated real-world data, that breaks down. Testing an AI thermostat at every temperature from -70C to 70C might be good enough, but can you put a trillion miles on an AI self-driver to discover it consistently identifies the doorway of one specific Kroger as a viable road tunnel?
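A tiny sketch of the contrast being described (the 24-degree threshold and the function names are mine): with conventional code the boundary is visible in the source and tests can target it, while with a black box all you can do is sample the input space and hope.

    def thermostat_conventional(temp_c: float) -> str:
        # The boundary is visible in the source, so tests can target 23, 24 and 25.
        return "heat" if temp_c < 24 else "off"

    def test_conventional_boundary():
        assert thermostat_conventional(23) == "heat"
        assert thermostat_conventional(24) == "off"
        assert thermostat_conventional(25) == "off"

    def test_black_box(model, lo=-70.0, hi=70.0, step=0.5):
        # With an opaque model there is no visible "if (x < 24)": all we can do is
        # sweep the range and check behaviour, never knowing where the tender points
        # are, or whether they hide between our sample points.
        t = lo
        while t <= hi:
            assert model(t) in {"heat", "off"}
            t += step

    test_conventional_boundary()
    test_black_box(thermostat_conventional)   # pretend we can't see inside it
    print("all sampled checks passed - which is not the same as 'no surprises left'")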
It’s even cheaper to just draw names at random out of a hat, but universities don’t do that. Clearly there is some other standard at work.
I believe we can talk about two ways of anthropomorphisation: assigning feelings to things in our mental model of them, or actually trying to emulate human-like thought processes and reactions in their design. It made me wonder when models will come out that are trained to behave, say, villainously. Not just act the part in transparent dialogue, but actually behave in destructive manners, e.g. putting on a facade to hear your problems and then subtly messing with your head and denigrating you.
As was once said, the future is here, but distributed unevenly. We can be thankful for that.
Stable Diffusion hasn't been out for even a year yet, and we are already so over it. (Because the art it generates is, frankly, boring. Even when used for its intended use case of "big boobed anime girl".)
GPT4 is the sci-fi singularity version of madlibs. An amazing achievement, but not what business really wants when they ask for analytics or automation. (Unless you're in the bullshit generating business, but that was already highly automated even before AI.)
I think politicians and organisations might need to cut their digital feedback loops (if authentication proves too much of a challenge) and rely on canvassing IRL opinion to cut through the noise.
If by "last century" you mean 19th century, then there was a lot of backlash against mechanization being controlled only by the rich, starting with Communist Manifesto, and continuing with 1st and 2nd International. The important part of this was education of the working class.
I think AI might seem like a threat, but it also provides more opportunity for educating people (allowing them to understand the cultural hegemony of neoliberal ideology more clearly), who will undoubtedly not just accept this blindly.
I have no doubt that within the next decade, there will be attempts to build a truly open AI that can help people deeply understand political history and shape public policy.
True, but in 5 years there’ll be an open source equivalent running on commodity GPUs.
I fully agree with you on anthropomorphization, but it's the humans who will deploy it to positions of power I am worried about: ChatGPT may not want anything, but being autocomplete-on-steroids, it gives its best approximation of a human and that fiction may end up exhibiting some very human characteristics[1] (PRNG + weights from the training data). I don't think there can ever be enough guardrails to completely stamp-out the human fallibility that seeps into the model from the training data.
A system is what it does: it doesn't need to really feel jealousy, rage, pettiness, grudges or guilt in order to exhibit a simulacrum of those behaviors. The bright side is that it will be humans who will (or will not) put AI systems in positions to give effect to their dictates; the downside is that I strongly suspect humans (and companies) will do exactly that to make a bit more money.
1. Never mind hallucinations, which I guess are the fictional human dreamed up by the machine having mini psychotic breaks. It sounds very Lovecraftian, with the AI standing in for the Old Ones.
We already have powerful non-human agents that have legal rights and are unaligned with the interests of humans: corporations
I am worried about corporations powered by AI making decisions on how to allocate capital. They may do things that are great for short term shareholder value and terrible for humanity. Just think of an AI powered Deepwater Horizon or tobacco company.
Edit to add: One thing I forgot to make clear here: Corporations run/advised by AI could potentially lobby governments more effectively than humans and manipulate the regulatory environment more effectively.
Until a system can actively and continuously learn from its environment and update its beliefs it's not really "scary" AI.
I would be much more concerned about a far stupider program that had the ability to independently interact with its environment and update its beliefs in fundamental ways.
Memory Augmented Large Language Models are Computationally Universal https://arxiv.org/abs/2301.04589
On the eve of the Manhattan Project, was it irrational to be wary of nuclear weapons (for those physicists who could see it coming)? Something doesn't have to be a reality now to be concerning. When people express concern about AI, they're extrapolating 5-10 years into the future. They're not talking about now.
And I don’t understand how one assumes that can be known.
I see this argument all the time: it’s just a stochastic parrot, etc.
How can you be sure we’re not as well and that there isn’t at least some level of agency in these models?
I think we need some epistemic humility. We don't know how our brains work, and we've made something that mimics parts of their behavior remarkably well.
Let’s take the time and effort to analyze it deeply, that’s what paradigmatic shifts require.
The GPT-4 paper has this paragraph: "... Agentic in this context does not intend to humanize language models or refer to sentience but rather refers to systems characterized by ability to, e.g., accomplish goals which may not have been concretely specified and which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning."
If it were able to modify its own model and permanently execute on itself, I'd be a lot more worried.
But most societies will be mostly full of unemployed humans, and that will also probably cause some big changes (the ones required for the meat bags to keep eating, stay healthy, keep their homes, etc.), as big as the ones caused by the AI revolution.
The question is what changes will happen and how societies will rewrite themselves anew, to overcome the practical full absence of open positions to earn an income.
This all assumes that AI continue to do the bidding of humanity, which is not guaranteed. There are already security/safety researchers testing AI for autonomous power-seeking behavior, and this is basically gain-of-function research that will lead to power seeking AI.
We already have the technology to fully automate many processes carried out by humans.
Actually, the technology has existed for several decades now, yet those jobs are not only not being replaced by machines, but new ones are being created for humans.
One of the reasons is unions, which are pretty strong in many wealthy and powerful nations like the US, UK, Germany, and Japan.
I work in manufacturing automation and we have customers that could technically run their entire operations without one single human stepping on plant floor, however their unionized labor makes that feat, at least for now, impossible.
It's also pretty naive to believe new ways of earning income won't appear in the future and that all traditional careers will be entirely replaced.
We have 65" 4K TVs at home and we still go to the theaters and we can walk the streets of Venice from our computer screens and still spend a small fortune to travel.
Society will be disrupted just like it was with printing, the industrial revolution, communications, transportation and information.
In each of these disruptions we were doomed to disappear.
When I was a kid my dad brought home a 100 year celebratory edition of the local newspaper.
It was published as a book where you could read pretty much every single cover and editorial of the last century.
There was one article about the car, described by the author as a bizarre evil invention, horrendous steel machines traveling at ridiculous speeds of up to 15 mph, threatening the lives of both pedestrians and horses alike.
So, to me an obvious solution would be to employ many of those people as care workers. Even more obvious would be shortening the work week without reducing pay, which would allow many more to work in other professions that require physical labour, or that simply benefit from human interaction. In the end it's also a preferable outcome for companies: people without money can't buy their products / services.
We can go even further: atoms and electrons absolutely don't want anything either. Yet put them in the shape of a bunch of cells...
Cells want to process energy and make DNA. Atoms and electrons want to react with things.
And that's exactly what both of them do.
A LLM wants to write words, and it does. But it doesn't want the things it writes about, and that's the big distinction.
One might argue that we anthropomorphise ourselves.
It's also literally parroting our obsession back to us. It's constructing a response based on the paranoid flights of fancy it was trained on. We've trained a parrot to say "The parrots are conspiring against you!"
What a silly thing to complain about.
We have a multi-billion dollar company whose raison d'être was to take the Turing test's metric and turn it into a target. It's a fucking natural language prompt that outputs persuasive hallucinations on arbitrary input.
If humans didn't anthropomorphize this thing you ought to be concerned about a worldwide, fast-spreading brain fungus.
It's scary because it is proof that alignment is a hard problem. If we can't align GPT-3, how can we align something much smarter than us (say, GPT-6). Whether the network actually "wants" something in an anthropomorphic sense is irrelevant. It's the fact that it's so hard to get it to produce output (and eventually, perform actions) that are aligned with our values.
> We still don't have anything scarier than humans and I don't see how AI is ever scarier than human + AI
True in 2023, what about 2033 or 2043 or 2143? The assumption embedded in your comment seems to be that AI stagnates eternally at human-level intelligence like in a Star Wars movie.
It's because we don't understand the tech that goes into us, and the people training the AI don't understand the tech that goes into them, or don't act like they do.
In both fields, the best understanding we have right now is that more neurons = smarter; a bigger neural network = smarter. It's just "stack the layers", then fine-tune it after it's been spawned (see the sketch below).
We're just doing evolutionary selection, in GPUs. Specifically to act like us. Without understanding us or the AI.
And this is successful. We don't collectively even understand humans of another sex, and have spent millennia invalidating each other's motivations or lack thereof; I think this distinction is so flimsy.
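For what "just stack the layers" cashes out to, a deliberately naive sketch: raw parameter count (a crude proxy for capacity, nothing more) as depth grows, with the widths chosen arbitrarily by me.

    # Toy illustration of "just stack the layers": count parameters in a plain MLP.
    # No claim that parameters == intelligence; that's the point of the discussion above.
    def mlp_param_count(width: int, depth: int, d_in: int = 512, d_out: int = 512) -> int:
        sizes = [d_in] + [width] * depth + [d_out]
        # each layer contributes a weight matrix plus a bias vector
        return sum(a * b + b for a, b in zip(sizes[:-1], sizes[1:]))

    for depth in (2, 8, 32, 128):
        print(f"depth {depth:>3}: {mlp_param_count(width=4096, depth=depth):,} parameters")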
From the comments to the post:
>People taking it seriously are far from anthropomorphizing AI. Quite contrary. They say it is nothing like us. The utility function is cold and alien. It aims to seize power by default as an instrumental goal to achieve the terminal goal defined by the authors. The hard part is how to limit AI so that it understands and respects our ethical values and desires. Yes, those that we alone cannot agree on.
This comic summarizes it wonderfully. ;)
https://i.redd.it/w3n8acy7q6361.png
This is why the AI actually wins. We are already dependent on lesser versions of it. The machines already won.
It's an interesting format that could be adapted for things like internet or bank access today - you would just need to write the wrapper.
[1] https://cdn.openai.com/papers/gpt-4-system-card.pdf
Zero data to support.
You try it first.
Isn't that the point? Humans + AI weaponized are scarier than humans or AI alone?
I'm constantly befuddled by the fact that "slowing the rate of technological progress by agreement/treaty" is really ever considered a possibility.
Take Bitcoin, torrents, or encryption for example. Good luck trying to stop people.
"Information wants to be free"
AI can be, or will be at one point, developed at home using consumer devices.
As technology progresses, the amount of stupidity to make an "oopsie" exponentially decreases while the size of the potential "oopsies" exponentially increases.
Eventually, one gets to the point where even super geniuses are very likely to create a civilization ending "oopsie" in any given year.
This seems to be where we are quickly heading with AI.
I think you make a fantastic point that the barrier to entry in developing an AI is much lower than building a nuclear bomb.
Last time I checked, there was no agreement to stop nuclear proliferation at all. What do you think North Korea has been doing all this time? And Russia just decided to tear up one of the arms control treaties it was signatory to.
At this point, putting society under the control of AI is probably safer than letting humans continue to run things.
To stop AI development would require pretty extreme restrictions on computing resources. It's really hard to imagine that working without having massive negative knock-on effects on other fields/industries. The economic pressure alone seems to make an effective "anti-AI" policy a non-starter.
Now even if you look at software, it's not clear to me that it's impossible to stop people from using certain software, as you surmise. If you look at how governments shut down dark markets or how they've taken down certain Bitcoin exchanges or how they've blacklisted certain Bitcoin (and other crypto) public addresses, all these examples show that it's possible to enforce the law even in the software space. Child pornography is another example. Internet infrastructure is very centralized, and governments have over time acquired tools to control how information flows.
Information doesn't want anything. That quote is a Freudian way of saying "I want information to be free".
Curtail but not stop.
There’s an opioid epidemic going on, gun violence is a thing, cigarettes still keep killing people and drones are becoming quite efficient killers in their own right.
People still cheat on their taxes, drive drunk, pirate movies &etc. I could go on for days.
Oh, and the whole “illegal number” thing, how’d that work out?
Of course you can’t reach 100% enforcement but you can make effective guardrails that limit the opportunity for worst case scenarios.
The difference between your examples and mine is basically just how much actual concern there is over the problem and who it impacts.
Maybe get an old fashioned book burning going too?
sure you’d struggle to get China and Russia to play along, but within the EU and US I really don’t think it would be as hard as you think
None of the superpowers, especially their militaries, will acquiesce to slowing research, development, and deployment without the equivalent of arms treaties. AI is clearly a dual-use technology with immediate application on battlefields, including cyberspace.
Outside of the geopolitical realm, we the little people don't have anything beyond the UNHRC to protect human rights in the context of mega-corporations' and governments' use of AI. The superpowers may agree to certain things, but that does not translate to protections afforded to individuals and societies.
ATM I think it may be unwise to wait for things like a GDPR for AI. I very much appreciate, for this very reason, the efforts of orgs and hero developers who are working towards making available what's necessary for running local, personal, private, and self-directed AI (such as llama.cpp, for example).
From a governmental level, thoughtful nations will create programs for the transition. There are precedents from the industrial era as to what approaches worked and what did not work.
Finally, again a reminder that all societal matters including tech must ultimately be decided at the political arena, and purely technical social action (code, services, etc.) to address legitimate concerns are not going to work. We have to mentally and emotionally escape the hype cycle that every new wonder tech brings. You can absolutely love AI, that is fine, but now is the time to call your congress critters and senators. The decisions in this space can not be permitted to be made purely based on the mechanics of the economy.
Back in the day, they had to make use of troll farms staffed by humans.
What's crazy is that it may not be that way for long. If people can run LLaMa on a Pixel 6, it seems easy for us to get to a point where all computing resources have to be closely monitored and regulated to prevent unlicensed AI development.
It reminds me of the Butlerian Jihad from Dune.
1: https://twitter.com/tobyordoxford/status/1636372964001333249
I agree, we’re struggling with the idea that our technological systems have more agency than us. I think it’s hard for technologists to see this as an actual spiritual reality, even though we borrow it metaphorically for argument.
The key people behind Torrent websites are likely in jail or fighting lawsuits.
I had a friend who lives in Germany served with a fine of more than 1000 euros because he had forgotten to turn on his VPN while accessing a torrent website.
> "Information wants to be free"
Have you heard of the great firewall of China?
If anything has curbed torrents, I think it was the advent of streaming media services (i.e. market dynamics, not policy). However, the flood of new services on the market is causing the cost of consumption to increase again and I expect we'll see a revival of torrents... Plus, while torrenting traffic is down, I think more people are sharing downloaded media via services like Plex, which may mask the "actual" distribution of torrents.
> Have you heard of the great firewall of China?
If this is a serious question... yes? What argument are you making exactly?
Yeah, I've heard how it's basically a formality that barely stops the free flow of information:
>During the survey period, it was found that 31 percent of internet users in China had used a VPN in the past month.
https://www.statista.com/statistics/301204/top-markets-vpn-p...
The carrot is stronger than the stick.
And why would a German go to a torrent site? They have lots of money.
And there are plenty of ways to stop corporations from using AI for various tasks. Corporations automating everything by using unaccountable AI is what I'm most afraid of. No recourse, just talking to unrelenting machines for anything from denied loans, insurance claims, health insurance claims to contesting frivolous Comcast bills.
My current biggest nightmare would be easy to legislate away, provided they don't lobby too hard against it...
Yes they were disliked and probably with good reason, but just mentioning it as a counterpoint that perhaps it is possible. You could have made a similar argument that anyone with pen and paper could do cryptography
Personally I think there's no way to be sure other countries aren't doing it, so perhaps it will continue in some government controlled way
This is why attempting to put guardrails around it simply won't work tbh.
Judicially treating Bitcoin as an asset instead of legal tender didn't help its cause either (?)
And true encryption without elliptic-curve NSA loopholes is not that widespread, and needs to be declared in app stores.
I'm sure there will be regulations against "DAN"
Ultimately we don't want to encourage development of these 'tools' because they stand to wipe us out; is AI actually in a similar class if we take the risk seriously?
a) In early 2022, a lot of people were claiming that "we're entering an AI winter, deep learning has reached its peak!". Since then we've seen several successive SOTA image generation models, ChatGPT, and now GPT-4. In just a single year! And we don't seem to be hitting the tail of rapidly diminishing returns yet. The pace of development is far outstripping society's (and governments') ability to perceive & adapt.
b) No human has demonstrated the capability of actually understanding/explaining how any of these trained models encode the high-level concepts & understanding that they demonstrate. And yet we have so many people confidently providing lengthy lower bounds on timelines for AGI development. The only tool I have to work with, that I do understand, is thermodynamics. There are about 8 billion strong examples that general intelligence requires on the order of 10 measly watts, and about 1kg of matter. From a thermodynamic point of view, general intelligence is clearly not special at all. This leads me to the belief that we likely already have the computational capability to achieve AGI today, and we simply don't have the right model architecture. That could change literally overnight.
What might the world look like once AGI is achieved? What happens when the only thing that has set humanity apart from animals is cheaply replicable at-scale in hardware? What happens if a small number of entities end up permanently controlling AGI, and the rest of humanity's usefulness has been downgraded to that of a discardable animal?
AGI could arrive this year, or it might still be 50 years away. Literally nobody can provide a concrete timeline, because nobody actually understands how any of this truly works. But we can still reason about how AGI would impact the world, and start putting safeguards into place to ensure that it's used for our collective good.
But we won't, and it's going to be a wild ride.
Perhaps human brains are more energy-efficient at doing their thing, and if we tried to replicate this with digital computers it would require more than 10 watts.
If that's the case, we have the potential of building computers that are vastly more efficient than that, simply because our computers don't need to spend energy for surviving
What I am extremely worried about is that generative AI will pollute the information environment so completely that society will cease to function effectively.
We need a healthy information environment—widespread access to true information—so that we can have the consensus that every single social system implicitly relies on to function. Shared ground truth is what allows individuals to coordinate together to build things bigger than they can create on their own.
Generative AI can destroy that just like dumping chemicals kills a lake and puts every fisherman on it out of work.
Remember deep fakes? We had been dealing with doctored still images a la Photoshop for years already. Everyone knew images could be doctored and so we started to trust them less as a reliable source of information when it mattered. We'll do the same with video (and already did to an extent since manipulation through editing was already possible).
What is the worst that can possibly happen as a result of AI being able to generate text that is indistinguishable from something a human wrote? Or that a "chat bot" can answer questions with more relevance and detail (correct or made up)?
That we will have more garbage information out there?
I think humans already did a great job of making that a problem already. AI has the ability to produce more of it, but it's just pouring gasoline on a fire that was already blazing and that we already had to figure out how to deal with.
Moral of the story: when it matters, check the source. Why does AI suddenly make this "new"?
For what it's worth, I'm not riding the hype train. I am neither excited about ChatGPT nor scared of it. It's just another day, just another tool, just another marketing hype train. My personal opinion on the matter is just "meh."
I'm sorry, but you're absolutely wrong. If by "everyone", you mean your tech savvy bubble of friends that are good at critical reasoning and are well aware of what kinds of media can be easily spoofed, sure. But for every one of you, there are a thousand people who don't know anything about that and just see doctored propaganda photos (and now video and audio) on social media and believe it to be true. And those folks outvote you 1000 to 1, so even if you know the truth, you are forced to live in a world shaped by people that are already being mass manipulated.
> What is the worst that can possibly happen as a result of AI being able to generate text that is indistinguishable from something a human wrote? Or that a "chat bot" can answer questions with more relevance and detail (correct or made up)?
People have been catching birds for millennia. What's the worst that can happen as a result of rifles becoming cheaper and more accurate? Oh, right, the answer is the extinction of over 400 bird species, including more than two-thirds of all flightless bird species.
People have been catching fish for millennia. What's the worst that can possibly happen as a result of trawlers being able to catch them more efficiently? Oh, right, the answer is the complete collapse of biological ecosystems.
People have been burning biologically derived oil for millennia. What's the worst that can possibly happen as a result of machines that burn it to produce energy? Oh, right, massive pollution leading to millions of deaths and global climate change.
Remember the holodeck in Star Trek? Everyone thinks that's cool technology and not really particularly scary. But in reality, the existence of a holodeck is an existential threat to humanity: if you can have literally any experience you want in a holodeck, there is no reason to invent/do anything else.
AI text generation has a similar flavor of danger. Imagine a world in which everyone has a personalized better-than-human text-generating AI. People will have no incentive to read anything other than what it writes (why read Shakespeare when you could have Shakespeare-tailored-for-you?) People will have no incentive to broadcast their own words over those of their AI.
Obviously text is a small subset of the "literally any experience" offered by a holodeck, but it is not hard to see a future in which everyone is MITM'd by text-generating AIs.
In a microcosm, this has already occurred. Specifically, Clarkesworld closing submissions due to a flood of joint AI + human spam. There's virtually no reason it won't continue, not when the output of an AI and a human combined has the potential to earn either attention or money for the human.
I see the most likely "bad outcomes" more narrowly focused:
- total loss of trust in online content due to unending torrent of AI content leads to either a return to traditional media for news & etc, or leads to the end of online anonymity to try to figure out who's an AI and who's not.
- Education doesn't respond fast enough to redesign schools around in-person computerless teaching, and a generation of students use AI to do all the work that's supposed to teach them reading comprehension and communication skills, creating a generation that is totally at the mercy of the AI to understand anything more complex than a menu.
I'm more worried about the second one, honestly
You mean like how it has been done for the past thousands of years?
Even today, most school work is in person and doesn't allow for computer usage.
Cheat all you want on your ungraded maths homework, you'll just get destroyed on the graded in-class paper test.