> ...a lot of the safeguards and policy we have to manage humans' own unreliability may serve us well in managing the unreliability of AI systems too.
It seems like an incredibly bad outcome if we accept "AI" that's fundamentally flawed in ways similar to, if not worse than, humans and try to work around those flaws, rather than relegating it to unimportant tasks while we work towards the standard of intelligence we'd otherwise expect from a computer.
LLMs certainly appear to be the closest to real AI that we've gotten so far. But I think a lot of that is due to the human bias that language is a sign of intelligence, and our measuring stick is ill-suited to evaluating software specifically designed to mimic the human ability to string words together. We now have the unreliability of human language processes without most of the benefits that come from actual human-level intelligence. Managing that unreliability with systems designed for humans bakes in all the downsides without further pursuing the potential upsides of legitimate computer intelligence.
I don’t disagree. But I also wonder if there even is an objective “right” answer in a lot of cases. If the goal is for computers to replace humans in a task, then the computer can only get the right answer for that task if humans agree what the right answer is. Outside of STEM, where AI is already having a meaningful impact (at least in my opinion), I’m not sure humans actually agree that there is a right answer in many cases, let alone what the right answer is. From that perspective, correctness is in the eye of the beholder (or the metric), and “correct” AI is somewhere between poorly defined and a contradiction.
Also, I think it’s apparent that the world won’t wait for correct AI, whatever that even is, whether or not it even can exist, before it adopts AI. It sure looks like some employers are hurtling towards replacing (or, at least, reducing) human headcount with AI that performs below average at best, and expecting whoever’s left standing to clean up the mess. This will free up a lot of talent, both the people who are cut and the people who aren’t willing to clean up the resulting mess, for other shops that take a more human-based approach to staffing.
I’m looking forward to seeing which side wins. I don’t expect it to be cut-and-dry. But I do expect it to be interesting.
Does "knowing what today is" count as "Outside STEM"? Coz my interactions with LLMs are certainly way worse than most people.
Just tried it:
tell me the current date please
Today's date is October 3, 2023.
Sorry ChatGPT, that's just wrong and your confidence in the answer is not helpful at all. It's also funny how different versions of GPT I've been interacting with always seem to return some date in October 2023, but they don't all agree on the exact day. If someone knows why, please do tell!
Most real actual human people would either know the date, check their phone or their watch, or be like "Oh, that's a good question lol!". But somehow GPTs always act like the 1% of people who will pretend to know the answer to whatever question you ask them. You know, the kind that evening talk shows will ask. Questions like "how do chickens lay eggs", and you get all sorts of totally completely b0nkers but entirely "confidently told" answers. And of course they only show the ones that give the b0nkers con-man answers, or the obviously funnily stupid people.
Of course absent access to a "get the current date" function it makes sense why an LLM would behave like it does. But it also means: not AGI, sorry.
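To make that concrete, here's a toy sketch of the difference a clock tool makes. The "model" below is just a stub standing in for an LLM (none of these names come from any real API); the point is that the failure is about missing runtime access, not some deep inability to reason about dates:

    # Toy illustration, not a real LLM: without a clock tool the "model" can only
    # answer from its frozen training cutoff; with one, it can actually be right.
    from datetime import date

    TRAINING_CUTOFF = "2023-10-03"  # whatever happened to be baked into the weights

    def stub_model(question, tools=None):
        if "date" in question.lower():
            if tools and "get_current_date" in tools:
                return f"Today's date is {tools['get_current_date']()}."
            return f"Today's date is {TRAINING_CUTOFF}."  # confident, stale, wrong
        return "I'm not sure."

    print(stub_model("tell me the current date please"))
    print(stub_model("tell me the current date please",
                     tools={"get_current_date": lambda: date.today().isoformat()}))

Real chat deployments typically patch this exact gap by injecting the current date into the system prompt or exposing a date/time tool; absent that plumbing, confident staleness is the expected behavior.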
Perhaps that kind of thing could help us finally move on from the "stupid should hurt" mindset to a real safety culture, where we value fault tolerance.
We like to pretend humans can reliably execute basic tasks like telling left from right, counting to ten, or reading a four digit number, and we assume that anyone who fails at these tasks is "not even trying".
But people do make these kinds of mistakes all the time, and some of them lead to patients having the wrong leg amputated.
A lot of people seem to see fault tolerance as cheating or relying on crutches; it's almost like they actively want mistakes to result in major problems.
If we make it so that AI failing to count the Rs doesn't kill anyone, that same attitude might help us build our equipment so that connecting the red wire to R2 instead of R3 results in a self test warning instead of a funeral announcement.
Obviously I'm all for improving the underlying AI tech itself ("Maintain Competence" is a rule in crew resource management), but I'm not a super big fan of unnecessary single points of failure.
If I was smarter, I could probably come up with a Kantian definition. Something about our capacity to model subjective representations as a coherent experience of the world within a unified space-time. Unfortunately, it's been a long time since I tried to read the Critique of Pure Reason, and I never understood it very well anyway. Even though my professor was one of the top Kant scholars, he admitted that reading Kant is a huge slog.
I honestly don't have a great one, which is less worrying than it might otherwise be since I'm not sure anyone else does either. But in a human context, I think intelligence requires some degree of creativity, self-motivation, and improvement through feedback. Put a bunch of humans on an island with various objects and the means for survival and they're going to do...something. Over enough time they're likely to do a lot of unpredictable somethings and turn coconuts into rocket ships or whatever. Put a bunch of LLMs on an equivalent island with equivalent ability to work with their environment and they're going to do precisely nothing at all.
On the computer side of things, I think at a minimum I'd want intelligence capable of taking advantage of the fact that it's a deterministic machine capable of unerringly performing various operations with perfect accuracy absent a stray cosmic ray or programming bug. Star Trek's Data struggled with human emotions and things like that, but at least he typically got the warp core calculations correct. Accepting LLMs with the accuracy of a particularly lazy intern feels like it misses the point of computers entirely.
I think using the word “intelligence” when speaking of computers, beyond a kind of figure of speech, is anthropomorphizing, and it is a common pseudoscientific habit that must go.
What is most characteristic about human intelligence is the ability to abstract from particular, concrete instances of things we experience. This allows us to form general concepts which are the foundation of reason. Analysis requires concepts (as concepts are what are analyzed), inference requires concepts (as we determine logical relations between them).
We could say that computers might simulate intelligent behavior in some way or other, but this is observer-relative, not an objective property of the machine, and it is a category mistake to call computers intelligent in any coherent sense that isn’t the result of projecting qualities onto things that do not possess them.
What makes all of this even more mystifying is that, first, the very founding papers of computer science speak of effective methods, which are by definition completely mechanical and formal, and thus stripped of the substantive conceptual content they can be applied to. Historically, this practically meant instructions given to human computers who merely completed them without any comprehension of what they were participating in. Second, computers are formal models, not physical machines. Physical machines simulate the computer formalism, but are not identical with the formalism. And as Kripke and Searle showed, there is no way in which you can say that a computer is objectively calculating anything! When we use a computer to add two numbers, you cannot say that the computer is objectively adding two numbers. It isn’t. The addition is merely an interpretation of a totally mechanistic and formal process that has been designed to be interpretable in such ways. It is analogous to reading a book. A book does not objectively contain words. It contains shaped blots of pigment on sheets of cellulose that have been assigned a conventional meaning in a culture and language. In other words, you bring the words, the concepts, to the book. You bring the grammar. The book itself doesn’t have them.
So we must stop confusing figurative language with literal language. AI, LLMs, whatever can be very useful, but it isn’t even wrong to call them intelligent in any literal sense.
There has been some good research published on this topic of how RLHF, i.e. aligning to human preferences, easily introduces mode collapse and bias into models. For example, with a prompt like "Choose a random number", the base pretrained model can give relatively random answers, but after fine-tuning to produce responses humans like, it becomes very biased towards responding with numbers like "7" or "42".
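It's easy to reproduce informally. Here's a rough sketch, assuming the openai Python SDK with an API key in the environment; the model name and sample count are arbitrary choices of mine, not something from the research:

    # Sample "pick a random number" repeatedly and look at the histogram.
    from collections import Counter
    from openai import OpenAI

    client = OpenAI()
    counts = Counter()

    for _ in range(100):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": "Pick a random number between 1 and 10. "
                                  "Reply with just the number."}],
            temperature=1.0,
        )
        counts[resp.choices[0].message.content.strip()] += 1

    # A uniform sampler would land near 10 hits per answer out of 100 draws;
    # RLHF'd chat models tend to pile up heavily on one or two favourites.
    print(counts.most_common())

According to that research, running the same prompt against a base (non-chat) model gives a noticeably flatter histogram; the gap between the two is the mode collapse being described.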
It's very funny that people hold the autoregressive nature of LLMs against them, while being far more hardline autoregressive themselves. It's just not consciously obvious.
I wonder whether we hold LLMs to a different standard because we have a long term reinforced expectation for a computer to produce an exact result?
One of my first teachers said to me that a computer won't ever output anything wrong; it will produce a result according to the instructions it was given.
LLMs do follow this principle as well, it's just that when we are assessing the quality of output we are incorrectly comparing it to the deterministic alternative, and this isn't really a valid comparison.
I think people tend to just not understand what autoregressive methods are capable of doing generally (i.e., basically anything an alternative method can do), and worse, they sort of mentally view it as equivalent to a context length of 1.
The theory I've heard is that the more prime a number is, the more random it feels. 13 feels more awkward and weird, and it doesn't come up naturally as often as 2 or 3 do in everyday life. It's rare, so it must be more random! I'll give you the most random number I can think of!
People tend to avoid extremes, too. If you ask for a number between 1 and 10, people tend to pick something in the middle. Somehow, the ordinal values of the range seem less likely.
Additionally, people tend to avoid numbers that are in other ranges. Ask for a number from 1 to 100, and it just feels wrong to pick a number between 1 and 10. They asked for a number between 1 and 100. Not this much smaller range. You don't want to give them a number they can't use. There must be a reason they said 100. I wonder if the human RNG would improve if we started asking for numbers between 21 and 114.
My guess is that we bias towards numbers with cultural or personal significance. 7 is lucky in western cultures and is religiously significant (see https://en.wikipedia.org/wiki/7#Culture). 42 is culturally significant in science fiction, though that's a lot more recent. There are probably other examples, but I imagine the mean converges on numbers with multiple cultural touchpoints.
Is my understanding wrong that LLMs are trained to emulate observed human behavior in their training data?
From that it follows that LLMs are fit to reproduce all kinds of human biases, like preferring the first choice out of many, or the last out of many (primacy and recency biases). Funnily enough, the LLM might replicate the biases slightly wrong and, by doing so, produce new derived biases.
In most cases, the LLM itself is a nameless and ego-less clockwork Document-Maker-Bigger. It is being run against a hidden theater-play script. The "AI assistant" (of whatever brand-name) is a fictional character seeded into the script, and the human unwittingly provides lines for a "User" character to "speak". Fresh lines for the other character are parsed and "acted out" by conventional computer code.
That character is "helpful and kind and patient" in much the same way that another character named Dracula is a "devious bloodsucker". Even when the form is really good, it isn't quite the same as substance.
The author/character difference may seem subtle, but I believe it's important: We are not training LLMs to be people we like, we are training them to emit text describing characters and lines that we like. It also helps in understanding prompt injection and "hallucinations", which are both much closer to mandatory features than bugs.
This understanding is incomplete in my opinion. LLMs do more than emulate observed behavior. In the pre-training phase, tasks like masked language modeling indeed train the model to mimic what it reads (which of course contains lots of bias); but in the RLHF phase, the model tries to generate the best response as judged by human evaluators (who try to eliminate as much bias as possible in the process). In other words, it is trained to meet human expectations in this later phase.
But human expectations are also not bias-free (e.g. the preferring-the-first-choice phenomenon)
Not only that: if future AI distrusts humanity, it will be because history, literature, and fiction are full of such scenarios, and AI will learn those patterns and associated emotions from those texts. Humanity together will be responsible for creating a monster (if that scenario happens).
Together? It would be: 1. AI programmers, 2. AI techbros, and a distant 3. AI fiction/history/literature. Foo who never used the internet: not responsible. Bar who posted pictures on Facebook: not responsible. Baz who wrote machine learning, limited dataset algorithms (webmd): not responsible. Etc.
This is the "anyone can be a mathematician meme". People who hang around elite circles have no idea how dumb the average human is. The average human hallucinates constantly.
So if you give a bunch of people a boring task and pay them the same regardless of whether they treat it seriously or not, the end result is they do a bad job!
Hardly a shocker. I think this says more about the experimental design than it does about AI & humans.
The paper basically boils down to suggesting (and analyzing) these options:
* Comparing all possible pair permutations eliminates any bias since all pairs are compared both ways, but is exceedingly computationally expensive.
* Using a sorting algorithm such as Quicksort or Heapsort is more computationally efficient, and in practice doesn't seem to suffer much from bias.
* Sliding window sorting has the lowest computation requirement, but is mildly biased.
The paper doesn't seem to do any exploration of the prompt and whether it has any impact on the input ordering bias. I think that would be nice to know. Maybe assigning the options random names instead of ordinals would reduce the bias. That said, I doubt there's some magic prompt that will reduce the bias to 0. So we're definitely stuck with the options above until the LLM itself gets debiased correctly.
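For anyone who wants to see what the first option looks like in code, here's a minimal sketch of position-debiased pairwise judging; judge_prefers_first is a placeholder for whatever LLM judge call you use (returning True when the judge picks the item it was shown first), and this is just one straightforward way to aggregate the two orderings, not necessarily the paper's exact scheme:

    # Judge every unordered pair twice, once in each order, and only count a win
    # when both orders agree; disagreement is treated as a draw, i.e. as position
    # bias rather than a real preference. O(n^2) pairs, two judge calls per pair.
    from itertools import combinations

    def debiased_compare(a, b, judge_prefers_first):
        a_wins_shown_first = judge_prefers_first(a, b)
        b_wins_shown_first = judge_prefers_first(b, a)
        if a_wins_shown_first and not b_wins_shown_first:
            return a      # both orderings favour a
        if b_wins_shown_first and not a_wins_shown_first:
            return b      # both orderings favour b
        return None       # judge tracked position either way, or it's a toss-up

    def round_robin_rank(items, judge_prefers_first):
        wins = {item: 0 for item in items}
        for a, b in combinations(items, 2):
            winner = debiased_compare(a, b, judge_prefers_first)
            if winner is not None:
                wins[winner] += 1
        return sorted(items, key=wins.get, reverse=True)

The quicksort/heapsort and sliding-window options from the list are the same idea with far fewer comparisons, trading some of that order symmetry for speed.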
If the question inherently allows for "no preference" to be a valid answer but that is not a possible response, then you've left it to the person or LLM to deal with that. If a human is not allowed to specify no preference, why would you expect uniform results when you don't even ask for them? You only asked them to pick the best. Even if they picked perfectly, it's not defined in the task that draws should be resolved in a random way.
You've just explained "race to the bottom". We've had enough of this race, and it has left us with so many poor services and products.
People’s unawareness of their own personification bias with LLMs is wild.
Compare that to the weight we place on "experts", many of whom are hopelessly compromised or dragged down by mountains of baggage.
So I'll leave it to Skeeter to explain.
https://www.youtube.com/watch?v=W9zCI4SI6v8
https://en.wikipedia.org/wiki/42_(number)
5 is exactly halfway, that's not random enough either, that's out.
2, 4, 6, 8 are even and even numbers are round and friendly and comfortable, those are out too.
9 feels too close to the boundary, it's out.
That leaves 3 and 7, and 7 is more than 3 so it's got more room for randomness in it right?
Therefore 7 is the most random number between 1 and 10.
https://xkcd.com/221/
My favorite is:
And they trained their PI* on that giant turd pile. (* Pseudo Intelligence)
How can the RLHF phase eliminate bias if it uses a process (human input) that has the same biases as the pre-training (human input)?
> spits out chunks of words in an order that parrots some of their training data.
So, if the data was created by humans, then how is that different from "emulating human behavior"?
Genuinely curious, as this is my rough interpretation as well.
The authors discuss the person 1 / doc 1 bias and the need to always evaluate each pair of items twice.
If you want to play around with this method there is a nice python tool here: https://github.com/vagos/llm-sort