So I’m just skimming through the paper linked in the article. There is a chart showing scores per year stratified by education.
The overall trend is down, sure. But… the year-to-year variance across education groups seems to be highly correlated. Like in 2011 there’s a positive bump in performance equally in grad students and high school students. And in 2014 everyone dropped.
Like… come on. That seems like an enormous red flag to the validity of the measurement. The year by year variance (normalized against the trend) should be random across groups unless there is something specifically different about that year. Presuming people didn’t briefly all get smarter in 2011, it’s crazy that all groups tested better.
The only plausible answer is that the test was easier in 2011 right? And if that’s true, how confident can we really be that it isn’t just testing variance?
Edit: that is to say, per the central limit theorem, the sample mean of each subgroup in each year should be approximately normally distributed around the trend prediction, with errors independent across groups. Let’s take the smooth reverse-Flynn decline as our null hypothesis. For a given year, we should expect some groups to do worse than the trend prediction and some to do better. Since we instead see that all groups deviate in the same direction relative to the prediction, we should infer that our data is not a random sampling of scores described by that trend line. The scores seem to be better described by unseen factors that are specific to each year.
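To make that concrete, here's a minimal simulation sketch (every number in it is made up: the years, the number of strata, and the "observed" correlation are placeholders, not values digitized from the paper). Under the null, the average cross-group correlation of yearly residuals sits near zero, so anything like the lockstep visible in the chart would be wildly improbable:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2006, 2019)   # 13 hypothetical survey years
n_groups, n_sims = 4, 10_000    # e.g. 4 education strata

def mean_abs_corr(resid):
    """Average |correlation| of yearly residuals across all group pairs."""
    c = np.corrcoef(resid)                 # groups are rows
    iu = np.triu_indices(len(resid), k=1)  # upper-triangle pairs
    return np.abs(c[iu]).mean()

# Null: each group's yearly mean is trend + independent sampling noise,
# so we can simulate the de-trended residuals directly.
null_stats = np.array([
    mean_abs_corr(rng.normal(size=(n_groups, len(years))))
    for _ in range(n_sims)
])

observed = 0.8  # placeholder for what you'd measure off the paper's figure
print(f"null mean: {null_stats.mean():.2f}")
print(f"p-value of observed {observed}: {(null_stats >= observed).mean():.4f}")
```

With anything like these numbers the p-value comes out at essentially zero, which is the "red flag" argument in code form.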
Is the test different year-on-year? I thought they were somewhat standardized instruments (or at least drawn at random from standard question pools). Not my area of expertise though.
When IQ tests were invented, folks didn't know about tests, at least in the US. Many test-takers were rural immigrants who could maybe read. So when asked logic questions, they would answer pragmatically and be 'wrong'. That had some impact on the low early results.
As folks became better-read and educated they began to understand that IQ test questions were a sort of puzzle, not a real honest question. The answer was expected to solve the puzzle, not be right in any way.
E.g.: There are no elephants in Germany. Munich is in Germany. How many elephants are there in Munich? A) 0 B) 1 C) 2
Folks back then might answer B or C, because they figure hey there's probably a zoo in Munich, bet they have an elephant or two there. And be marked wrong.
That theory could be plausible, except Flynn used results from Raven's Progressive Matrices, which is pure pattern recognition. There are no questions about elephants or text-based questions that could introduce cultural bias; it's simply picking the shape that matches the pattern presented in a grid.
https://en.m.wikipedia.org/wiki/Raven's_Progressive_Matrices
Good point about the Raven's Progressive Matrices, but I still think the parent's hypothesis that we have improved at test taking is relevant. Outside of verbal reasoning there are a lot of soft factors that influence test taking ability. People who haven't taken a timed test before would probably perform worse than expected simply because it is an unfamiliar context, and unfamiliarity tends to create higher stress/anxiety which is known to be bad for cognitive performance in novices. A group that has taken hundreds of tests will probably score better on a test in an unfamiliar subject than another unprepared group that isn't used to taking tests simply because they have some meta-knowledge about test taking.
But… why would they answer B or C when they were just told the right answer was A? That doesn’t make any sense. They don’t need to ‘figure’ anything when they’ve been told that there are zero. That’s not even a puzzle, that’s just series of statements.
That’d be like if you told me a tree was 10 feet tall, then asked me how tall it was and I said “10 feet 1 inch” because I figured it had grown at least an inch in the interim. Why figure when you should already know?
> But… why would they answer B or C when they were just told the right answer was A?
Because there's a decent chance the initial statement isn't true.
[EDIT] To clarify, it's the difference between a seasoned test-taker understanding implicitly that they're looking at a (very simple) logic puzzle, not a question about reality, and someone taking the question at face-value (and assuming the first statement's some kind of trick, or simply an error). In a sense, answering it "correctly" demands that you act dumber.
[EDIT 2] I just checked, out of curiosity, and in fact the person who answers it "wrong" is closer to correct than the person who answers it "right", in actual reality. In a not-unreasonable sense, the "moron" who gets this wrong is more-correct than the trained monkey who answers zero. There's a zoo, evidently within the borders of Munich, and they do have elephants.
If someone came up to you on the street and said "You are an elephant. Are you an elephant?" you wouldn't say "yes".
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
"there are no elephants in Germany" can mean "there are literally zero elephants in Germany" or it can mean "elephants are not endemic to Germany". Only the first excludes answers b and c. The latter does not. Language is often imprecise =)
Compare these to reading comprehension tests on the SAT. Many of those questions ask "why did the author write X?" All the answers are valid insights into the piece as a whole, but only one is derived from X. Reading questions that literally is a learned skill, imo. It's not intuitive that questions are as dumb as they are. You must explicitly silence intuition, outside knowledge, social norms that favor more impressive-seeming insights, awareness of the bigger idea, etc.
This is interesting, but do you have a source for this? Is this authoritatively known, or is it just a theory you came up with? I can't quite tell which way you're trying to present this.
Is it though? It’s pretty easy to read the first line as something like “Elephants are not native to Germany” and then assume the Munich question is a trick question. In fact the person who answers 1 or 2 because of the possibility of a zoo is actually using smarter logic than someone who takes it at face value.
That only makes sense if you assume the purpose of the test is to correctly apply logic rather than guess the actual number of Elephants in Germany. Someone who's not familiar with standardized testing may assume the latter is more important.
Imagine that, 150 years ago or whenever, some clever soul had decided to make a written test to measure niceness, and called the test score Niceness Quotient. And the first version sucked, but some other folks iterated on it and over time the test was improved until it correlated pretty well to the sorts of things you would think that niceness would correlate to. 150 years of progress later, we'd have a whole field of Niceometry and researchers trying to isolate sub-areas like charity, friendliness, etc, and trying to suss out an underlying factor of general amiability, and the whole thing would be so well embedded in to the culture that almost no one remembers that "nice" is just a regular word with no objective or scientific definition, and that we measure it with a written test not because that's a good way to measure niceness but because we can't find a better way.
> IQ tests measure something real and objective called the g factor
Sure, and NQ tests do too, because look how well graciousness correlates with cheerfulness! That can't be an accident, can it?
Less snarkily, a better analogy would be athletic ability. Suppose you take a bunch of people and measure how fast they can run, how well they can shoot free throws, and how far they can throw a football. Will the results be correlated? Of course, some people are more athletic than others. Does that mean there's a quantity called 'athleticism' that we can objectively measure with a number? No; and not because all people are equally athletic, but because you're trying to take a squishy subjective English language word and pretend it's a scalar value.
Also, intelligence tests are but a tiny part of psychology, I would hardly call it a "whole field".
> I would hardly call it a "whole field".
The problem isn't the size of the field; it's that academics work within their field, they don't refute it. There's a very uncomfortable result about IQ tests that a generation of psychologists has tried to explain away, and I maintain that the reason they haven't succeeded is that they are institutionally incapable of saying, "Hey, maybe this is pseudoscience."
IQ tests produce a number, because they're tests and all tests produce a number. This does not mean the number means anything or existed before the test did. (Especially now that we have AIs that can take tests, do they contain a "g factor"?)
A lot of scientific research is bad and you shouldn't trust it when it comes from a pre-credibility revolution field!
-General intelligence (g) is the best predictor of job performance.
-The predictive validity of g is higher than work experience.
-g facilitates the acquisition of job-related knowledge, resulting in better job performance.
-g provides predictive validity for all jobs, regardless of complexity level.
-Specific aptitude tests tailored for each job do not add predictive validity over general intelligence tests.
-These findings could impact employee selection and training practices in various industries.
Can someone actually explain how IQ tests work? By work, I mean: how are the tests engineered, and how are the results computed?
A long time ago someone explained to me that IQ tests were actually drafted from a very large pool of (regularly updated) questions, from which statistically discriminating items were selected to form a core corpus of questions to sample from. Also, the IQ score itself was normalized to be normally distributed and centered at 100.
With this understanding, I was under the impression that IQ was a relative measure, at a specific point in time, of one's placement in the distribution.
Which meant to me that IQ cannot "drop" across a population, the mean will always be 100. And IQ scores cannot be compared on a time series basis, since they are only cross sectional measures.
Is that all wrong? Is there some truth to it?
An IQ test is a measure of where you stand relative to the population at the point in time when the test was normalized.
That point of time is somewhere in the past. And when tests are renormalized, there is a conversion of "this score on the old test is that score on the new test". This allows for comparisons of IQ over time, across different versions of the same test. This is how the Flynn effect was first discovered.
Tests usually have solid conversions between them, so you can compare across different tests as well. Such conversions allow more verification of the Flynn effect.
If you go back far enough, you will find tests for children measuring mental age vs. physical age, taking the ratio, and multiplying by 100 (so a ten-year-old who performs like a typical twelve-year-old scores 100 × 12/10 = 120). They fell on a distribution that was close enough to normal, centered at 100 with a standard deviation of 15 or 16, that adult tests were developed to match them.
Since we don't know all the factors behind the Flynn effect in the first place, we also don't know all the factors behind why it might be reversing now.
There is a raw score underlying any given IQ test that is an absolute value; it might be as simple as the number of questions you get right. When you test a population, these raw scores form a roughly normal distribution. We then rescale the raw scores so that the center of the distribution becomes an IQ of 100. So the raw scores can be compared across time and can vary, even though the mean IQ cannot, as you said.
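A minimal sketch of that rescaling, with invented numbers (the norming mean/SD here are placeholders, not stats from any real test):

```python
import numpy as np

# Hypothetical norming sample: raw scores = number of items answered correctly.
rng = np.random.default_rng(1)
norming_raw = rng.normal(loc=34, scale=6, size=5000)

def to_iq(raw, norm_mean, norm_sd):
    """Map a raw score onto the conventional IQ scale (mean 100, SD 15)."""
    return 100 + 15 * (raw - norm_mean) / norm_sd

mu, sd = norming_raw.mean(), norming_raw.std()
print(to_iq(40, mu, sd))  # one norming-SD above the mean -> roughly 115
```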
Hmm, but my understanding was that the whole point of the normalization is that raw scores aren't on a scale that's meaningful outside the corpus of questions they came from?
Does it really make sense to compare raw scores from different tests? If that were the case, then the normalization step would be useless; we would have an absolute measure of intelligence.
It's a 2-hour video essay that covers psychometrics generally, as a lens for understanding a book called The Bell Curve that was a flashpoint for questions about the validity of IQ science generally (but most especially how it applies to race). It took a good chunk of my Sunday to get through, but it was really enjoyable and gave me a ton of insight into a ton of buzzwords and studies I had heard of but couldn't really sink my teeth into.
I think with this topic, getting a briefer, less nuanced summary than something like this would be a mistake because of how much misunderstanding of these topics permeates popular culture. The video also provides a number of studies and books to keep going beyond the relatively brief 2 hours of content it provides.
It uses the famed/infamous book The Bell Curve as a case study and delves into how IQ tests were originally created, how they are updated, how the term heritability, when used in genetics, means something that is sometimes counter-intuitive to the definition used in popular culture (for example, whether someone wears earrings has high heritability, whereas having two arms has effectively zero heritability), the statistical meaningfulness of factoring the domains of IQ into a single numerical value (the g factor), how these domains came to be defined, the current state of understanding regarding the genetic vs. environmental sources of IQ for individuals, how the Flynn effect was observed, etc.
But to answer a bit of your earlier question.
When IQ tests are created, the designers create a set of questions, test it on a sample group, and set the average score to 100, with higher/lower scores assigned depending on the distribution of correct answers. The Flynn effect was noticed because, while the average for a new test is always set at 100, people scoring 100 on a more recent test were generally scoring even higher on previous years' tests. The article on the reverse Flynn effect is a little bit sensationalist because, as it mentions, while some areas (like spatial reasoning) are improving, others are apparently starting to decline. This calls into question a bit the idea of a g factor, which is the assumption that there is a common factor of intelligence that covaries across all IQ domains (spatial reasoning, reaction time, etc.) and which is the theoretical justification for representing IQ meaningfully as a single numerical value rather than a multi-dimensional one.
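A toy illustration of how that drift shows up, with invented norming stats: the same raw performance maps to a higher IQ when scored against older norms, which is exactly the Flynn-effect signature described above.

```python
def to_iq(raw, norm_mean, norm_sd):
    # Conventional IQ scale: mean 100, SD 15 in the norming sample.
    return 100 + 15 * (raw - norm_mean) / norm_sd

# Made-up norming stats: suppose the 1990 cohort averaged 30 raw points and
# the 2010 cohort averaged 34 (raw performance drifted upward), same spread.
norms_1990 = (30.0, 6.0)
norms_2010 = (34.0, 6.0)

raw = 34.0                      # an exactly-average 2010 test taker
print(to_iq(raw, *norms_2010))  # 100.0 on the test normed in 2010
print(to_iq(raw, *norms_1990))  # 110.0 against the 1990 norms
```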
My understanding at a high level is that an IQ test is basically made by generating a bunch of cognitive tasks that putatively test different aspects of cognitive functioning (e.g. verbal reasoning, visuospatial reasoning, attention, working memory, etc.). For most people, performance on one type of cognitive task (e.g. verbal reasoning) is highly correlated with performance on the other types of cognitive tasks.
This allows you to model the test as a hierarchical statistical model with some general intelligence factor (denoted G) at the top and then specific cognitive tasks branch off from there. You can then infer what G is just by statistical inference on the "branches" (the performance on the individual cognitive tasks); similar to how you might infer someone's height if you only had access to their leg and arm lengths, as these are highly correlated with each other and also with height.
I believe IQ scores are always population normed to have a mean of 100 but unnormalized scores are likely available to compare across time.
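A toy sketch of that hierarchical idea, using simulated data (the loadings and task labels are invented, and real test scoring uses proper factor analysis rather than this bare principal-component shortcut):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
g = rng.normal(size=n)                      # latent "general" factor
loadings = np.array([0.8, 0.7, 0.6, 0.75])  # made-up loadings: verbal, spatial, memory, attention
noise = rng.normal(size=(n, 4)) * np.sqrt(1 - loadings**2)
scores = g[:, None] * loadings + noise      # observed task scores, unit variance

# First principal component of the correlation matrix acts as an estimate of g.
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)     # eigh: eigenvalues in ascending order
g_hat = scores @ eigvecs[:, -1]             # project onto the top component
print(abs(np.corrcoef(g_hat, g)[0, 1]))     # ~0.9: recovers the latent factor (sign is arbitrary)
```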
> For most people, performance on one type of cognitive task (e.g. verbal reasoning) is highly correlated with performance on the other types of cognitive tasks.
It's cool that that's your understanding, but you're wading into territory that gets people sterilized and killed. I have a lot of trust in science, but not here. This video was informative for me: https://youtu.be/UBc7qBS1Ujo
For example, here's an "IQ test"; let's say it's given to 5,000,000 Canadians and 3,000 Texans (1):
* What is the capital city of Canada?
* Which Canadian province is the largest by land area?
* Who is considered the "Father of Medicare" in Canada?
* Name the two official languages of Canada.
* Which Canadian team won the Stanley Cup in 2017?
* What is the national sport of Canada?
* What is the name of Canada's national anthem?
* Name the famous Canadian dish made with fries, cheese curds, and gravy.
* Who was the first Prime Minister of Canada?
* In which Canadian city is the CN Tower located?
I would expect a normal distribution for Canadians, but for the Texans to score in the bottom quintile (regardless of "general intelligence").
(1) I asked ChatGPT to come up with this test.
IQ is a construct representing intelligence. The aim is to refine this construct so that it is a) externally consistent with what constitutes our understanding of human intelligence and b) internally consistent so that various things we deem high or low IQ are consistent with other things we deem high or low IQ. The things here are items, which are question:answer pairs.
IQ is an example of factor analysis [0] where an unobserved "general intelligence factor" g is derived from observed items. The items are chosen so that responses correlate with each other and with g (basically, if the same person took two IQ tests with different questions then the score should be about the same). There may be some intermediate factors like verbal ability or spatial ability. The items here will be chosen so that they correlate only with items of the same factor--e.g. verbal items correlate with verbal items but do not correlate with spatial items.
A raw score on the test is not meaningful; individuals are compared against the population of test takers to determine their rank in the population. First, raw scores of the population are normalized so that the mean is 100 and the standard deviation is 15 (this is arbitrary; it's just the scale they use). Then an individual test taker can be compared with the population on this scale. The percentile rank can be obtained directly from the IQ score and vice versa.
You have some questions about the shifting mean of 100. In practice the normed distribution is computed from a norming group rather than recomputing the norm after every test. A particular population can drift from the norming group (either over time or because it's a group with different characteristics), which is where things like the Flynn effect come from. So a lot depends on the norming group.
I hope that answered some of your questions.
[0] https://en.wikipedia.org/wiki/Factor_analysis
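Since the scale is a normal distribution by construction, the IQ-to-percentile conversion is mechanical. A small sketch using only the Python standard library:

```python
from statistics import NormalDist

iq_scale = NormalDist(mu=100, sigma=15)  # the conventional normed scale

def iq_to_percentile(iq: float) -> float:
    return 100 * iq_scale.cdf(iq)

def percentile_to_iq(pct: float) -> float:
    return iq_scale.inv_cdf(pct / 100)

print(iq_to_percentile(130))  # ~97.7 (two SDs above the mean)
print(percentile_to_iq(50))   # 100.0
```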
IQ measures something, but that something is definitely not intelligence, no matter what people try to tell you. Let's all just stop pretending it measures intelligence.
It’s a hard test that doesn’t require specific knowledge and has a short time limit to prevent brute-forcing.
The questions are in order of difficulty and all worth the same, so because of the time limit you won’t be able to answer them all correctly. The quicker you are at solving problems, the higher your score will be.
Your score then shows how you compare to the population, percentage-wise, and that’s your IQ. The average score would give you an average IQ.
I think this video by `Shaun` is informative and well-researched (articles and books are cited). It's long; the creator isn't especially succinct, but it's as entertaining as the material allows, and the subject's complex enough to warrant a video that's a couple of hours long. https://youtu.be/UBc7qBS1Ujo
They say scores in spatial reasoning went up while analogies, vocabulary, and numerical reasoning declined.
Hmmm, I wonder if increased use of videogames paired with a decrease in the amount of time parents can spend communicating with their children might be related.
Note that over the last 30 years, households have largely transitioned from one parent staying home raising children to both parents working.
> A study of Norwegian military conscripts' test records found that IQ scores have been falling for generations born after the year 1975, and that the underlying cause of both initial increasing and subsequent falling trends appears to be environmental rather than genetic.
https://en.wikipedia.org/wiki/Intelligence_quotient
There is ongoing debate among academics, even in Norway...
In 2019, a group of researchers from the University of Oslo published a study that found no evidence of a decline in IQ scores over time in Norway, despite claims of such declines in other countries. The researchers argued that the methodology used in previous studies may have contributed to false conclusions about declining IQ scores.
In contrast, a group of researchers from the University of Amsterdam published a study in 2018 that reported a decline in IQ scores in the Netherlands over the past several decades. The researchers suggested that changes in educational systems, such as increased emphasis on testing and memorization, may be contributing to the decline.
Reminds me of this "The humble pocket calculator should have taken Sociology by storm half a century ago." post criticizing how psychometrics has become bunk as it hasn't kept up with the times:
https://news.ycombinator.com/item?id=29798887
Humans are incredibly adaptive. Is there much reason to have an expansive vocabulary nowadays? We are taught to speak and write as concisely and understandably as possible. We can look up the definition of any word at our fingertips. "[I do not] carry such information in my mind since it is readily available in books." - Einstein.
Maybe these tests are declining because they are measuring skills that are decreasingly relevant? I'm not certain I believe this myself but it's an interesting thought.
Your vocabulary is tied to your expressive power and your ability to form coherent and compelling arguments. I'd argue that without an expansive vocabulary you would struggle to write with precision let alone brevity.
Not that it's wrong to question; I just think you'd need to do more work supporting the idea that language skills are less important today for some reason.
> We can look up the definition of any word at our fingertips.
So what? Being able to understand expressive language and quickly context shift vocabulary is extremely valuable. If you don't have the vocabulary to identify context, being able to look up the definition of words will only get you so far. Additionally, you can't write words if you don't know that they exist.
I've previously wondered if people are learning vocabulary they wouldn't be willing to put in a test.
This site is dense in terms and acronyms that generally will not appear in a dictionary, which does not appear to be unusual. Many interests and professions have a frightening number of terms and acronyms now.
One day... I believe.
Plus the focus on teaching to the test and the move away from critical thinking in schools. I had to teach critical thinking to my kids myself, along with how to research and cross-reference. I have two sets of encyclopedias and a pile of other reference books and showed them how to use them. What they teach in schools now is vastly different from when I went to school. Public education has gone from teaching critical thinking and problem solving to group think, homogenizing, and memorizing the 'correct' answers, no understanding required.
https://www.census.gov/newsroom/press-releases/2020/estimate...
Here is a pretty clear indication that people are just not having children.
Maybe the percentage of stay at home parents has stayed the same but the number of stay at home parents has shrunk because the number of all parents has just shrunk as well.
None of that really helps indicate why IQ in certain metrics related to communication would be in decline. Since the percentages are the same you would think outcomes would be similar then.
So are kids getting dumber or are parents just getting worse?
Or other factors in our environment are contributing to this. An increase in smart devices autocorrecting and doing 'math' for us for example.
The point of the original Flynn effect being a big deal was that the changes were faster than was possible with genetics alone.
A big part of "The Bell Curve" was arguing that no interventions could change IQ except genetics and so any money spent on low IQ people (African-Americans in the book, but the author followed up by attacking poor people more generally) was a pointless waste.
It turns out he wasn't just an asshole, he was also wrong.
I've never read The Bell Curve, and I'm not a huge fan of Charles Murray's work in general, but, from the first line on Wikipedia:
"The Bell Curve: Intelligence and Class Structure in American Life is a 1994 book by psychologist Richard J. Herrnstein and political scientist Charles Murray, in which the authors argue that human intelligence is substantially influenced by both inherited and environmental factors."
That statement completely contradicts your claim about the book, and now I'm disinclined to trust you. Later on, another statement also completely contradicts what you are saying:
"According to Herrnstein and Murray, the high heritability of IQ within races does not necessarily mean that the cause of differences between races is genetic. On the other hand, they discuss lines of evidence that have been used to support the thesis that the black-white gap is at least partly genetic, such as Spearman's hypothesis. They also discuss possible environmental explanations of the gap, such as the observed generational increases in IQ, for which they coin the term Flynn effect"
Absolutely not what Murray said in the bell curve. It's not a very hard book to attack, so I'm not sure why people always go for strawmen. Please post anything from the book that comes anywhere close to saying IQ is 100% genetic.
> A big part of "The Bell Curve" was arguing that no interventions could change IQ except genetics and so any money spent on low IQ people (African-Americans in the book, but the author followed up by attacking poor people more generally) was a pointless waste.
It should be self-evident that you can lower IQ through environment (injury, developmental issues, malnourishment, etc.). So even if you believe there's a genetic ceiling to IQ, Flynn effect (and reverse) don't contradict that.
I agree it probably isn’t genetics alone; notably, I’d suspect the increase in visual-spatial skills has more to do with video games than with genetics.
I have yet to read “The Bell Curve” myself, but did they really use an argument that flew in the face of the abundant evidence of IQ increases unlinked to genetics, such as the results of better nutrition and education? Hell, America gained a few IQ points nationwide from banning leaded gasoline alone, so we also knew of environmental means to affect IQ levels. This was all known and very well established at the time of authorship. Is there an excerpt?
What is going on with Bell Curve apologists all of a sudden replying to this post? I thought the debate was slowly fading out, and then I count 5 different accounts replying within an hour.
If George Carlin was a philosopher, perhaps Mike Judge is also more than a physicist, musician, and director. Giving Brian May a run for his money.
When the collective hive mind of a society is organized around anti-intellectualism (the core ethos of America), it will subsidize a combination of stupidity (less talent) and mental laziness (lack of productive application of talent). This is how a society enters the dustbin of history and emerges as a brutal and backwards people.
America in 2023 is more intellectual than any society that existed in or before the 20th century. How extreme does intellectualism have to get before America is no longer considered anti-intellectual? Does manual labour need to be banned and does 75% of the population need post-secondary degrees?
America and western society is so taken with intellectualism that they spend their prime years for having children doing prodigious amounts of studying while their brains are in a state of peak plasticity. Americans right now are like modern-day Irish elk who need increasingly large antlers (degrees) to win the reproductive game but are so encumbered by this that it becomes difficult to reproduce at all. So fertility is sub-replacement and Americans have reacted by bringing in immigrants so they can spend more time working on their degrees.
What sort of immigrants do they bring in? University graduates!
https://fred.stlouisfed.org/series/BARTERICMP25UPZSUSA
> lack of productive application of talent
https://fred.stlouisfed.org/series/LNS12027662
?
As an aside, I think Idiocracy is the weakest of Mike Judge's works.
From a strict evolutionary perspective I have doubts that a high IQ is useful anymore.
From a strict evolutionary perspective it probably hasn't been for a long time and likely never was as significant on an individual level as people like to believe.
The only things being selected for in modern humans, if anything, are going to be things like disease resistance, maybe tolerance to some chemical contaminants in food & water, air pollution. And even then only in some parts of the world.
And sexual selection takes place faster and can lead populations to scenarios that induce natural selection.
It's useful. You have to think on the macro scale: the evolution of the entire population rather than of individuals.
On the individual level, stupider people reproduce more, so evolutionarily speaking it's more efficient to be stupid. However, if a small portion of the population has a high IQ, they can move society forward via, say, the discovery of electricity, mathematics, etc. This propels everyone forward as a whole, at the detriment of the few individuals who are too nerdy or geeky to get laid that often.
Thus from a high level perspective, there is selection pressure that works on the population of people that makes it so that our genes have the mechanisms in place to produce an occasional genius via specific combinations of traits or via simple on/off switch mutations that easily occur.
For reference this is a short and informative video on the aforementioned topic: https://youtu.be/sP2tUW0HDHA
IQ hasn't been beneficial to evolution for over 100 years. Once means-tested welfare came into existence, being low-IQ became more advantageous. The reason Europe was able to take over the world is that they taxed the poor (low-IQ) more than the rich in the dark ages and the rich out-bred the poor for at least 2 generations.
Very arguably social welfare resulted in the biggest increase in IQ levels in history as general health, nutrition, and education improved. Also social welfare improved social mobility which should cause IQ to have more of an impact rather than less of one.
I am not rejecting this point, but I have a hard time accepting it carte blanche. If anything, if IQ is lowering for genetic reasons, I would suspect birth control as a cause, especially since it's a more recent phenomenon than means-tested welfare.
One of the first European world powers after the dark ages, Portugal, was not more advanced than many of the areas it attacked except in weaponry, since Europe had been infighting while math and science were being pursued in eastern parts of the world. Regardless of what percentage of the motivation you ascribe to "we want their shit" vs. "we want them to take our religion," I don't think you can say it was an advantage or motivation driven by intelligence.
("The rich outbred the poor" also seems very dubious, labor was still very manual, so you gotta have someone to do it.)
Looking at the paper here [1], it appears there was stability, then a rise up to 2011 followed by a global drop in 2012/2013, and then more stability (Fig 1). It is implausible that things could change so quickly for everyone. A spurious effect related to methodology is surely the most likely answer? I haven't read the text in detail. Fitting a linear relationship to the above as they do in the paper seems a bit crazy to me, but I'm not a psychologist.
[1] https://www.sciencedirect.com/science/article/pii/S016028962...