The choices for both were Science, Math, English, History/Social Sciences, and Physical Education, plus "did not attend college" for the second.
Math is highly predictive of ATC performance. English is a key requirement due to the communication-heavy role. Physical Education is linked to confidence, which is a strong predictor of graduation rates.
That leaves History/Social Sciences and Science as oddballs. If you did poorly in Science or History/Social Sciences in high school, that likely didn't change in college, so you would have gotten at least 15 points by answering it the same way for both questions.
I'm not sure there was an expectation that someone would get them both right. Rather, awarding 15 points for different answers ensures that people who answered both questions the same way didn't get those points, which would likely have made the test a bit too easy to pass.
This test just looks like a big five personality test mixed with some socioeconomic and academic questions.
Or infinitely better than being an active ATC, which earned 0?
The situation here was that ATC was chronically understaffed and unable to fill positions. So an effort to boost applications makes sense even under non-DEI principles.
> "The empirically-keyed, response-option scored biodata scale demonstrated incremental validity over the computerized aptitude test battery in predicting scores representing the core technical skills of en route controllers."
I.e., the aptitude test battery is WORSE than the biodata scale.
The second citation you offered merely notes that the AT-SAT battery is a better predictor than the older OPM battery, not that it is the best.
I'd also say at a higher level that both of those papers absolutely reek of the non-reproducibility and low-N problems that plague social and psychological research. I'm not saying they're wrong. They are just not obviously definitive.
You're mistaken, it's the opposite. The first one found that AT-SAT performance was the best measure, with the biodata providing a small enhancement:
> AT-SAT scores accounted for 27% of variance in the criterion measure (β = 0.520, adjusted R² = .271, p < .001). Biodata accounted for an additional 2% of the variance in CBPM (β = 0.134; adjusted ΔR² = 0.016, ΔF = 5.040, p < .05).
> In other words, after taking AT-SAT into account, CBAS accounted for just a bit more of the variance in the criterion measure
Hence, "incremental validity."
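"Incremental validity" here is just the ΔR² from a hierarchical regression: fit the criterion on AT-SAT scores alone, then on AT-SAT plus biodata, and compare the R² values. A minimal numpy sketch with simulated data (the coefficients are loosely patterned on the quoted figures; none of this is the FAA's actual data):

```python
import numpy as np

# Simulated illustration of incremental validity: does adding a second
# predictor (biodata) raise R^2 beyond what the first (AT-SAT) explains?
rng = np.random.default_rng(0)
n = 200
atsat = rng.normal(size=n)     # primary predictor
biodata = rng.normal(size=n)   # secondary predictor
# Criterion dominated by atsat, with a small biodata contribution
cbpm = 0.52 * atsat + 0.13 * biodata + rng.normal(scale=0.8, size=n)

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_step1 = r_squared(atsat, cbpm)                              # AT-SAT alone
r2_step2 = r_squared(np.column_stack([atsat, biodata]), cbpm)  # both
print(f"R^2 (AT-SAT only): {r2_step1:.3f}")
print(f"Delta R^2 from adding biodata: {r2_step2 - r2_step1:.3f}")
```

The ΔR² is what the quoted paper is testing with its ΔF statistic: small, but statistically distinguishable from zero.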
> The second citation you offered merely notes that the AT-SAT battery is a better predictor than the older OPM battery, not that is the best.
You're right, and I can't remember which study it was that explicitly said it was the best measure. I'll post it here if I find it. However, given that each failed applicant costs the FAA hundreds of thousands of dollars, we can safely assume that there was no better measure readily available at the time, or it would have been used instead of the AT-SAT. Currently they use the ATSA instead of the AT-SAT, which is supposed to be a better predictor, and they're planning on replacing the ATSA in a year or two; it's an ongoing problem with ongoing research.
> I'd also say at a higher level that both of those papers absolutely reek of non-reproduceability and low N problems that plague social and psychological research. I'm not saying they're wrong. They are just not obviously definitive.
Given the limited number of controllers, this is going to be an issue in any study you find on the topic. You can only pull so many people off the boards to take these tests, so you're never going to have an enormous sample size.
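For a sense of scale, here's a quick numpy simulation (illustrative numbers only, not FAA data) of how noisy a ΔR² estimate gets at the sample sizes these studies can realistically recruit:

```python
import numpy as np

# The true incremental effect below is fixed, but its estimated Delta R^2
# swings widely at small n -- the low-N problem in a nutshell.
rng = np.random.default_rng(1)

def delta_r2(n):
    """Estimated Delta R^2 from one simulated sample of size n."""
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)
    y = 0.52 * x1 + 0.13 * x2 + rng.normal(scale=0.8, size=n)
    def r2(X):
        X = np.column_stack([np.ones(n), X])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        return 1 - (y - X @ b).var() / y.var()
    return r2(np.column_stack([x1, x2])) - r2(x1)

for n in (100, 1000):
    est = [delta_r2(n) for _ in range(500)]
    print(f"n={n}: Delta R^2 mean {np.mean(est):.3f}, sd {np.std(est):.3f}")
```

The spread of the estimates shrinks substantially going from 100 to 1,000 subjects, which is exactly the jump an ATC study can't make.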
Performance on the AT-SAT is not job performance.
If you have a qualification test that feels useful but also turns out to be highly non-predictive of job performance (as, for example, most college entrance exams turn out to be for college performance), you could change the qualification threshold for the test without any particular expectation of losing job performance.
In fact, it is precisely this logic that led many universities to stop using admissions tests - they just failed to predict actual performance very well at all.
No, but it was the best predictor of job performance and academy pass rate there was.
https://apps.dtic.mil/sti/pdfs/ADA566825.pdf
https://www.faa.gov/sites/faa.gov/files/data_research/resear... (page 41)
There are a fixed number of seats at the ATC academy in OKC, so it's critical to get the highest quality applicants possible to ensure that the pass rate is as high as possible, especially given that the ATC system has been understaffed for decades.
I was under the impression that AF1 flew in/out of Andrews Air Force Base, which I (possibly naively?) assumed did not use civilian ATC. But yes, that would be great :)