The problem of not having a definition is over-emphasized. There is a widespread consensus that there is something that is an important part of human behavior and being human (and exhibited to some degree by many other species, not to mention extinct hominids), and that concept is well-enough constrained that we can study it. Defining it is something we will work towards as we come to understand it better.
I suspect the idea that we need definitions first comes from education, where much of what we know is presented in this manner. This is rarely, however, the way our initial understanding was achieved - just consider how our concepts and definitions of 'energy' and 'matter' have changed over time.
We have perfectly fine definitions of intelligence for creatures that are not closely related to humans, and they mostly centre on problem solving, including problems that the creature and its ancestors had never grappled with.
In these situations, does the creature try the eager solution even when it obviously fails near the end?
For example, does it take the close bridge that gets it 99% of the way across the river, or does it not even bother and travel to the farther one that actually gets it across?
If it uses small sticks in nature, will it employ artificially introduced tools to solve artificially introduced problems?
Etc.
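A toy sketch of the distinction those questions probe (the bridges, distances, and spans below are made up for illustration, not taken from any real experiment): an eager chooser picks whatever is closest, while a goal-aware chooser only accepts options that actually solve the problem.

    # Hypothetical toy example: two ways of choosing a bridge across a river.
    # Each option is (distance_to_reach_it, fraction_of_river_it_spans).
    options = {
        "near_bridge": (10, 0.99),   # close by, but ends 1% short of the far bank
        "far_bridge":  (500, 1.0),   # a long walk, but actually crosses
    }

    def eager_choice(options):
        # Pick whatever is cheapest to reach, ignoring whether it solves the problem.
        return min(options, key=lambda name: options[name][0])

    def goal_aware_choice(options):
        # Only consider options that actually get you across, then pick the cheapest.
        crossing = {n: v for n, v in options.items() if v[1] >= 1.0}
        return min(crossing, key=lambda name: crossing[name][0]) if crossing else None

    print(eager_choice(options))       # near_bridge: 99% of the way, still stuck
    print(goal_aware_choice(options))  # far_bridge: longer, but it works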
The key here is to apply the test without breeding for it first. That is how you tell that the individual is intelligent, rather than the breeding mechanism. For example, I don't think the individual agents in OpenAI's Hide and Seek[0] are intelligent at all. It's basically burned-in instinct: if you introduced a button that would, say, swap positions with the farthest enemy agent, it wouldn't be used for hundreds of rounds of play. The learning lives in the brain and the replication mechanism together. It reminds me of a story on LessWrong[1] about baby-eating aliens.
[0] https://openai.com/blog/emergent-tool-use/
[1] https://www.lesswrong.com/posts/n5TqCuizyJDfAPjkr/the-baby-e...
The definition marks the goal. We can reach the goal without a definition, but would we recognize it? And how much time and effort will be wasted by blindly tinkering around in the dark? The definition is a beacon; having it would speed up progress significantly.
> We can reach the goal without a definition, but would we recognize it?
You can very easily shoot a porn film without being able to define what is and is not pornographic (a notoriously difficult thing to define, often debated legally in various countries; for example, neither "a movie in which sex acts are performed" nor "a movie meant to arouse the viewer sexually" cuts it).
There is no problem with convergent evidence: various different measures correlate positively with general intelligence, which is also what IQ tests measure:
1. Reaction speed https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5608941/
2. Dendritic tree arborization, synaptic density, and the electrophysiological properties of pyramidal neurons https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6363383/
There is a problem with sophistry and competitive virtue signaling around these phenomena. Instead of ignoring biology, we might as well embrace it.
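For the curious, "correlates positively" here just means the standard Pearson coefficient; a minimal sketch with invented illustrative numbers (not data from the linked papers):

    # Minimal sketch of what "correlates positively with IQ" means.
    # The numbers below are invented for illustration, not taken from the cited studies.
    from statistics import mean, stdev

    def pearson(xs, ys):
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
        return cov / (stdev(xs) * stdev(ys))

    iq          = [88, 95, 100, 105, 112, 120]    # hypothetical IQ scores
    reaction_ms = [310, 295, 280, 270, 255, 240]  # hypothetical simple reaction times

    # Faster reactions (smaller ms) go with higher IQ here, so the raw correlation
    # is negative; reaction *speed* (1/time) correlates positively.
    print(pearson(iq, reaction_ms))                  # strongly negative
    print(pearson(iq, [1 / t for t in reaction_ms])) # strongly positive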
Intelligence doesn't seem that complex a thing to define, to me. Oxford Dictionary says "the ability to acquire and apply knowledge and skills".
I generally go for a slightly wider "The ability to create models of the world around them, and make predictions based on those models" - where "knowledge" would be "models" and "making predictions based on those models" is a sort of proto-skill.
Sure, the kinds of things that different people find easy to model vary. One person might find it easy to model mathematical theorems, another the internal workings of car engines. But in both cases there's an underlying ability to make a mental model of the thing you're learning about, use it to predict how that thing will function, and work out what you can do with it.
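A minimal sketch of that "model plus prediction" framing, assuming a toy world that is just a sequence of weather states (everything here is illustrative; the "model" is nothing more than learned transition counts):

    # Toy sketch: "intelligence" as model-building plus prediction.
    # The agent observes a sequence of states, builds a crude transition model,
    # and uses it to predict what comes next.
    from collections import Counter, defaultdict

    class WorldModel:
        def __init__(self):
            self.transitions = defaultdict(Counter)

        def observe(self, sequence):
            # Build the model: count which state tends to follow which.
            for current, nxt in zip(sequence, sequence[1:]):
                self.transitions[current][nxt] += 1

        def predict(self, state):
            # Make a prediction from the model: the most frequent successor seen so far.
            followers = self.transitions[state]
            return followers.most_common(1)[0][0] if followers else None

    model = WorldModel()
    model.observe(["cloudy", "rain", "sun", "cloudy", "rain", "rain", "sun"])
    print(model.predict("cloudy"))  # "rain", predicted from the learned model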
We're only as intelligent as the models we learn. Where we don't have models, we tend to learn by simple correlation. Discoveries are rare; even researchers mostly grind at the 0.01% just beyond the known modelling space.
A human in a primitive society, even with the same IQ as a modern person, would be much less intelligent at solving a variety of tasks because they lack the mental "furniture". Most of our intelligence does not come from the brain; it comes from culture, which is an evolutionary process.
> Oxford Dictionary says "the ability to acquire and apply knowledge and skills".
And what is knowledge? When I say I know p, then p is a proposition. Thus, p is intelligible. Anything intelligible is conceptual. Indeed, if we analyze what a proposition is, we see that it entails predicates and predicates correspond to concepts. However, concepts are abstract and universal, that is, they are not concrete or particular, they are not mere images. Triangularity is not this or that triangle, but that which holds of all triangles, and you will not find triangularity out and about in the world on its own, but only instantiated in particulars, and any particular is not any of the other particulars for which the same predicate holds.
When we implement propositions in computers, we really only simulate the formal via mechanical manipulations. When I create a negation operation on a string of symbols, I am only moving symbols around in a way that corresponds to what negation would produce. But the computer is not strictly speaking negating anything. Furthermore, the symbols that stand for predicates are just placeholders at best. There is no concept in the machine. Deep learning does not somehow magically transcend this limitation.
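To make the "moving symbols around" point concrete, here is a toy sketch of purely syntactic negation (my own illustrative example): the program rewrites strings in the way negation would, but there is no concept of negation anywhere in it.

    # Toy sketch of purely formal symbol manipulation.
    # The function rewrites a formula string the way negation "should" look,
    # but nothing here possesses or grasps the concept of negation.
    def negate(formula: str) -> str:
        # Double negation: strip a leading "~" instead of stacking another one.
        if formula.startswith("~"):
            return formula[1:]
        return "~" + formula

    print(negate("p"))         # ~p
    print(negate("~p"))        # p
    print(negate("triangle"))  # ~triangle: predicates are just opaque tokens here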
> "The ability to create models of the world around them, and make predictions based on those models"
Modeling and prediction is not intelligence, but a consequence of it. It is also central to modern science because the purpose of modern science is, to a large degree, less about understanding nature than it is about mastering it for practical purposes (prediction is presupposed by control).
I think the problem is coming up with a definition of intelligence that is measurable across contexts. I'm worse at creating mental models when traveling in a country whose language and culture I don't understand. Does that mean my IQ is 100 when I travel and 120 at home? Functionally, probably; it would likely measure as such.
Also my oral communication tends to be more around vague ideas, not specifics, and I have trouble only communicating one idea at a time linearly. I’m sure I would score differently on the ability to model something depending on how the test was conducted.
In general I would say communication, focus and executive functioning will all get in the way of measuring raw intelligence.
Does memory count as intelligence? I've seen people with great memories whom others consider very intelligent. The people I'm thinking of were actually intelligent, but it was their great memories that really set them apart. For example, the guy at work who remembers every project, every shortcoming, and the reasoning behind every technical decision.
You could also say that mathematics is a model of how Sets behave. It is a generalized model of reality. We don't create models of mathematical theorems, the theorems are a property of the very generalized and abstract model which is mathematics.
My dog is very intelligent. Each time I stop and let him play, he remembers the place we stopped at. So going back home with my dog is really hard, since he tries to stop at all the previous places he remembers, and usually there are fifty or so of them. He also always tries to go as far as possible from home, taking exactly the direction in which the overall distance increases, so my husky should become a great mathematician if he so desired. I think one day we will discover how much more intelligent dogs are than we give them credit for. Some more hints of dog intelligence: my dog is a master at dodging other dogs while running. He takes a long time analyzing the pee he smells. Furthermore, to pee over other dogs' pee, he usually spends about 15 seconds finding the best position to do it well; it must be a difficult task for him, or perhaps he foresees and enjoys the pleasure he will get from doing it.
The belief that an airtight definition is required in order for a concept to be meaningful is just philosophically confused. That isn't how concepts work. Conceptual boundaries are always fuzzy, with confusing edge cases and ambiguities, and that's ok.
I'd challenge people who think definitions are necessary to grapple with coming up with a definition of a chair, in the form of a list of necessary and sufficient conditions. Make sure you don't exclude anything commonly thought of as a chair, or include anything not commonly thought of as a chair. For an extra challenge, come up with a definition most people would agree on. This endeavour will be a struggle.
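If it helps to see the struggle concretely, here is a deliberately naive sketch of "chair" as necessary and sufficient conditions (the predicates and counterexamples are mine, purely illustrative):

    # A deliberately naive attempt at "chair" as necessary and sufficient conditions.
    # Every rule below admits familiar counterexamples, which is the point.
    def is_chair(obj: dict) -> bool:
        return (
            obj.get("legs", 0) == 4          # excludes three-legged stools and office chairs on casters
            and obj.get("has_back", False)   # excludes backless chairs; admits park benches with backs?
            and obj.get("seats", 0) == 1     # excludes loveseats, but is a one-person bench a chair?
        )

    print(is_chair({"legs": 4, "has_back": True, "seats": 1}))  # True: a dining chair
    print(is_chair({"legs": 5, "has_back": True, "seats": 1}))  # False: a typical office chair
    print(is_chair({"legs": 4, "has_back": True, "seats": 4}))  # False: correctly rejects a bench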
And yet we manage to discuss chairs without difficulty. We have a generally understood concept of what we mean by a chair, with central examples like dining chairs or lounge chairs commonly held to be chairs. In the case of ambiguities of communication, we can clarify on a case by case basis ("Did you want me to take the ottomans to the other room too?")
The simplest definition of intelligence I use is one's capacity for abstraction. Is a mind capable of generalizing accurately and then refining those generalizations with good tools for feedback?
Pattern recognition is close, as it's abstracting things into symbols and then comparing the symbols, but it's not sufficient. "Smart" I define as the ability to effect intentions, or to get what you want. In this sense, a lot of intelligent people are not very smart, and a lot of very smart people are unhindered by intelligence. Animals are perfectly smart for their environment without needing much intelligence. Humans are poorly adapted to our physical environment, and require a great deal more intelligence to have survived. Language is useful for many things, but the things it isn't good for are anti-smart (e.g. I think our ego as a refining filter for experience is an artifact of language)
Anyway, the Turing Test as a thought experiment isn't really a measure of intelligence so much as it is an economics model of an indifference curve, which is how much the observer cares (or not) about whether they are dealing with a machine.
I'm actually more bullish on the possibility of AGI for some admittedly very strange reasons, even though I am harsh about people who anthropomorphize code and fall for animism. My view reduces to a kind of theistic argument where if we can create conscious life from rude materials, it is logical evidence that we ourselves may also have been the expression of some similar intention. If AGI is demonstrably impossible within our physics (like an incompleteness-theorem-level proof), then we exist within a hard ontological boundary, and the best we can do is infer what that boundary is made of (probably time/gravity). The reason I think AGI is plausible is because I have theistic axioms that create a kind of circular reference where if we can Create life, then we could also have been Created with the intent to discover and appreciate the meaning of that, and if we cannot Create, we were not meant to experience that Creation. Maybe even if there is something on the other side of death, we may still just be programs or epiphenomena that aren't intended to reflect or apprehend our substrate, sort of like the one-way directional relationship between an instrument and a song played on it. An AGI would make the leap from song, to software, to operating on its environment using abstraction, generalization and feedback. It wouldn't be "us," but I think it could certainly become a them that could eventually exist independently of us.
Interesting article. I personally don't think we'll ever have a definition of intelligence because it is an emergent behaviour. We'll just keep working on it, all the while believing that what we create is not intelligent, until one day it is and kills us all.
Apparently whales can hear each other over distances of several thousand miles -- entirely across oceans.