If you liked this article, definitely consider checking out other articles written by Kyra (Admiral_Cloudberg). She has done a ton of articles on almost all notable (near)crashes, including their root causes, investigations, and subsequent effects on the airplane industry.
For some crashes that have interesting causes (at least from an engineering perspective) beyond "maintenance failed to identify a problem before takeoff" or "pilots fail to identify problem or take wrong actions", I strongly recommend the articles on TWA 800 (1996) [1] and the near-crash of SmartLynx Estonia 9001 (2018) [2]. The first goes into great detail exactly _how_ the FAA discovered the root cause, whereas the second one involves a logic oversight in the flight computers of a modern Airbus plane.
[1]: https://admiralcloudberg.medium.com/memories-of-flame-the-cr... [2]: https://admiralcloudberg.medium.com/the-dark-side-of-logic-t...
What a great article on TWA 800. My gramps was a retired manager from TWA and when that happened he was all over it calling his buddies still working there. Thanks for the article!
I've read the accident report, a few books, and even some of the conspiracy theories and this is the best "factual" summary.
I fly a desk herding cats, I mean devs, for a living now. But as a Navy-trained aviator, this and the Colgan Air mishap in Buffalo always make me cringe. I can't entirely judge the crew, because the human mind is a strange thing that has ways of rationalizing truly abnormal situations and behavior. That said, an obvious Crew Resource Management failure compounded the issues here.
That said, I believe aerobatics and out-of-control flight training need to be a lot more common in the civilian sector for professional pilots than they are. Awareness of your angle of attack, and knowledge of how an aircraft behaves at high AOA close to (and if possible beyond) stall don't seem to be taught well in the civilian sector, or at least outside the tactical jet community. The idea that a professional pilot can't understand that a deeply stalled aircraft can be nose-up and still have a heinous sink rate is profoundly bothering to me, just like the Colgan Air pilots who couldn't recognize severe wing rock as a sign of a deeply-departed state. Both of these required (if even possible at that point) aggressive nose-down pitch inputs to break the AOA. And even then, the aircraft is going to lose a ridiculous amount of altitude before it gains enough airspeed back to give you enough pitch authority to scoop out of the ensuing dive. Horrible.
It also boggles my mind that Airbus didn't think at first to design an aircraft which avoided the split control problem they had. One person and one person only needs to be in control of the aircraft at all times.
Time and time again we learn that people do stupid things under stress. Consider Transair flight 810 [1], in which a simple engine failure during takeoff results in a complete loss of the aircraft when both pilots mistakenly identify the _wrong_ engine as malfunctioning, despite identifying the failure correctly only minutes earlier. OP has an article on that crash too, actually [2].
[1]: https://en.wikipedia.org/wiki/Transair_Flight_810 [2]: https://admiralcloudberg.medium.com/dark-waters-of-self-delu...
Another very common failure mode, and really all you can do is mandate an SOP that the crew member vocalizes which engine they're turning off, and the other crew member verifies that it's the correct engine.
The other answer to "people do stupid things under stress" is the common maxim that every emergency procedure in every aircraft starts the same way. Somewhere in the cockpit is probably a standard-issue mechanical 8-day clock, and the first thing you do in any emergency is stop and wind the clock. Not because it does anything to the aircraft, precisely the opposite. It makes you stop for a second or two for the "oh shit" moment to pass. Then you analyze what's actually going on. The other saying that's always briefed is "no fast hands in the cockpit." You may need to be expeditious, but you still vocalize what you're doing with your crew or wingman and methodically go through the emergency procedure. You get in a lot more trouble flipping switches willy-nilly based on what you think you have than if you just stop, breathe, and look at what the aircraft is telling you. I still can recite the steps from memory:
I used to fly hang gliders. For almost every issue (except for flying too fast), when you get into trouble in a glider you lower your AoA and increase speed.
You certainly learn to pay attention to the aerodynamics. Lots and lots and lots of high-bank turns. Many of them relatively low and close to stalling speed.
Some occasionally below stalling speed, if you're high enough to safely reduce margins so much that a gust of tailwind puts you below, which requires you to just calmly follow through with constant angle of attack, descending, rather than force the nose up.
Short-field landings in unfamiliar places, requiring judgements of both approach angle and terrain. Moderate to heavy turbulence all day, speed varying from stalling to redline. Retractable landing gear. Flaps if you want that source of mental load too, in all speed regimes. Constant consideration of wind. One shot landings. Mountain flying with constant consideration of where the safe exit is. Consideration of deteriorating weather. Always aware of nearest landing site.
It's really non-stop training in all the fundamental parts of flying, minus engine operation, airspace and ATC communication.
Q: How many pilots would attempt a stall recovery if the aircraft's instruments were not indicating they were stalled?
It is not required for Part 91 (General Aviation/"Private") or Part 135 (Charter) operators.
https://www.faa.gov/documentLibrary/media/Advisory_Circular/...
To be fair, it sounds like this crash caused airlines and regulators to demand much more adverse event training, including high-altitude stall recovery. But according to the article, Airbus hasn't addressed the split control issue…
There isn't a "split control issue," the split control setup is fundamental to the way the Airbus control schema works. Changing it would be a non-starter.
Upset and departure are different things. Some aircraft can seem straight and level, but still be fully departed and falling like a rock. This won't happen in a Cessna trainer; it is normally associated with faster aircraft. The F-16 famously has a deep stall mode that sees it fall like a pancake for many thousands of feet, with little hope of recovery at low altitudes.
In the US, spin recovery training is mandatory for flight instructors. That’s the only requirement I’m aware of, although all pilots do get some relatively basic training in unusual attitude recovery.
I have zero flying experience, but man, just by playing Pilotwings on the SNES and Ace Combat on the PSX, it's ingrained in me that when you stall any aircraft, be it a hang glider or a fighter jet, you put your nose down, gain speed, and recover your aircraft. I am Brazilian, and since the first time I read the accident report I couldn't believe how a professional pilot could keep pitching the nose up for minutes on end in a stall situation. It made me A LOT less trusting of the aviation industry's capacity to take me safely from point A to point B.
My lack of credentials is identical to yours (fuck Lance), but this is way too harsh. When you witness a car veer off the side of the road, "what a moron, why didn't they just turn the wheel" is a logical question, but in reality the driver may have thought to do so and been physically unable to correct a power steering failure in time.
The SNES controller had 8 buttons and was operated from the safety of your couch. I'd imagine trying to correct this sort of situation is more like playing Microsoft Flight Simulator, while you're on a roller coaster, with random features of the plane not working, hypoxia clouding your judgment, and knowing 100+ lives are at stake if you fuck anything up.
Never been anything especially serious, fortunately, but I've lost track of the number of times when something I've seen just didn't square with my current mental model of the physical world (like what day it was, to give a trivial example) and, even though I consciously noted the observation, I've basically deep-sixed it into the "that's weird" can.
It is profoundly disturbing, but we do live in an age where everything is simulated, including expertise. How much time did Captain Dubois have to save the aircraft once he was on deck? Seems like Bonin had mostly doomed them by then.
Although they were not just hand flying. They were hand flying at a moment's notice in IMC with faulty instrument data in a highly complex system. That combination of factors does seem to be a huge risk factor for accidents.
I wonder if reverting back to pilot control is really the best approach in situations with faulty instruments. Perhaps it would be safer to use algorithms that can deal with bad data in a sane way. A middle ground between blindly trusting bad data (as in the 737 MAX) and reverting to the pilot. Use other data like GPS, accelerometers, strain sensors, and fuel flow rate sensors to sense-check the primary instruments.
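The sense-check could be as simple in spirit as this toy sketch; every name, constant, and threshold below is invented for illustration, not taken from any real avionics system:

```python
# Toy sketch of the cross-check idea above. All names, numbers, and
# thresholds are invented; real air data systems are certified designs
# far beyond this.

def backup_estimate(gps_groundspeed_kts, wind_kts, inertial_delta_kts):
    """Independent guess at airspeed: GPS groundspeed corrected by the
    last known wind, nudged by inertially integrated acceleration."""
    return gps_groundspeed_kts - wind_kts + inertial_delta_kts

def airspeed_plausible(pitot_kts, estimate_kts, tolerance_kts=30.0):
    return abs(pitot_kts - estimate_kts) <= tolerance_kts

# An iced probe suddenly reading 52 kt against an estimate near cruise
# speed gets flagged, rather than trusted blindly or silently dropped:
print(airspeed_plausible(52.0, backup_estimate(470.0, 200.0, 0.0)))  # False
```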
In post-mortems involving pilot error, what strikes me is that airliner cockpit design sounds so convoluted, unintuitive, and just plain bad. The burden of disentangling this bad design is always placed on the pilots - through extensive training and rote memorization - which inevitably fails under stress.
In this particular example, consider:
- the opposing pilot inputs being signaled only by a pair of little green lights
- the cacophony of warning lights and alarms which, together, say little more than "something is wrong"
- instruments that direct a pilot to pull up during a full stall
- sensor failures with no clear indicator
- computer safeguards suddenly removed without a stated reason
Etc. And the expectation is that the crew will quickly and correctly reason about this stream of conflicting signals, while embroiled in a sudden emergency.
It smacks of pure engineer-driven design, assembled with serious attention to the technical issues, but with near-zero empathy for the humans who will be operating the contraption.
Reminds me of internal web tools at so many companies. They present giant messy forms, with checkboxes and dropdowns for every conceivable edge case, which have to be manipulated just so or the system explodes. And when something breaks, of course it's the user's fault every time.
The ECAM is where one needs to look and start debugging. The pilots here didn't really do that. Audible warnings are only really used if they're extremely time sensitive (GPWS, Stalling, Dual Input).
There is no "cacophony of warning lights". The overhead panel is a designed to have all buttons not be illuminated, so if you're checking what's wrong, you can immediately tell by there being a light indicator on it. Here this was limited to two ADIRs indicating FAULT. Nothing more.
Not relevant in this example, but another terrible user interface in the cockpit is the FMC. That thing with the '70s era green screen, where pilots program routes with cryptic codes, seemingly inherited from the Apollo guidance computer.
So many accidents seem to happen when pilots receive a route change and have to hastily re-program the FMC.
There should be a calm AI voice stating what they should be doing based on some heuristics. Based on angle of attack, engine power, and the last recorded reliable speed, I feel a simple system should be able to make a projection of the current speed and throw a warning when pilot inputs are becoming really stupid.
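As a rough illustration of that proposal (every constant below is invented; nothing here resembles certified logic), dead-reckoning the speed forward from the last trusted value might look like:

```python
# Toy version of the proposed heuristic. Every constant is invented;
# a real system would use a proper aerodynamic model.

def project_speed(speed_kts, thrust_fraction, pitch_deg, dt_s=1.0):
    # Crude energy model: thrust adds speed, sustained nose-up bleeds it.
    accel = 2.0 * thrust_fraction - 0.25 * max(pitch_deg, 0.0) - 0.5
    return speed_kts + accel * dt_s

speed = 270.0  # last recorded reliable speed, in knots
for second in range(120):  # simulate two minutes of held back stick
    speed = project_speed(speed, thrust_fraction=1.0, pitch_deg=15.0)
    if speed < 180.0:
        print(f"t+{second}s: projected speed low, check nose-up input")
        break
```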
It exists, and it's called the ECAM. The last recorded reliable airspeed is usually never used, because an upset condition can change it very drastically, very fast. These systems help resolve upsets way more often than they miss, and these engineers did their homework.
Hearing is also the first sense to go when people panic; this flight had the stall warning blaring for over a minute straight and it didn't occur to the pilots that they might not be in an overspeed, but in a stall.
(It is of course not perfect, sometimes conditions become dependent on one another and their order is not always great - see this simulated simultaneous engine fire and engine failure right after takeoff, where flying on your burning engine might be better than turning into a 1000ft glider: https://www.youtube.com/watch?v=ZRbLLO385_c)
I also wonder why airspeed is derived from the ram air pressure in a tube that can be blocked by ice and become useless. Why not use GPS primarily and fall back to that if needed? Or the other way around, even.
If you expect an instrument to routinely fail, it just seems logical to at least have a backup.
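For what it's worth, these aircraft already carry multiple air data sources; the catch is that redundancy only helps when the failures are independent. A toy 2-of-3 voter (tolerances invented) shows both the win and the trap:

```python
import statistics

def voted_airspeed(adr_a, adr_b, adr_c, tol_kts=15.0):
    """Illustrative 2-of-3 voter: keep the median if at least two sources
    agree within tolerance, otherwise declare the data unreliable (None)."""
    readings = sorted([adr_a, adr_b, adr_c])
    gaps_ok = [b - a <= tol_kts for a, b in zip(readings, readings[1:])]
    if not any(gaps_ok):
        return None  # no two sources agree: reject the lot
    return statistics.median(readings)

print(voted_airspeed(272.0, 270.0, 61.0))  # one iced probe is outvoted: 270.0
print(voted_airspeed(80.0, 75.0, 270.0))   # two probes ice alike: 80.0 (!)
```

The second case is the trap: when several probes ice over the same way at the same time, as happened on AF447, agreement no longer implies correctness.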
> Behind the scenes, the loss of valid airspeed data had triggered a shift in the Airbus’s complex flight control laws. In “normal law,” computers interpret pilots’ side stick inputs and move the control surfaces in accordance with what is reasonable at that altitude, speed, and configuration. This improves the handling of the airplane to such an extent that no particular skill is required to fly it gracefully. Normal law also comes with full flight envelope protections in roll, pitch, speed, and load factor.
> If sensor failures occur, the controls drop down a level to “alternate law.” This law contains several sub-laws with slightly different configurations, but in general, alternate law means that some or all computer moderation of control inputs remains, but flight envelope protections are removed. The autopilot and auto thrust cease to function.
> In the event of further failures, the controls can enter direct law, in which there are no flight envelope protections and side stick inputs correspond directly to the position of the control surfaces, with no adjustment by the computer. This makes the airplane fly rather like a classic airliner, similar to most older Boeing models.
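The quoted ladder is essentially a one-way state machine. A simplified sketch of the idea (the real reversion logic has many more conditions and sub-laws):

```python
# Rough sketch of the degradation ladder the quoted passage describes.

from enum import Enum

class ControlLaw(Enum):
    NORMAL = "full envelope protections, autopilot available"
    ALTERNATE = "some input moderation, protections removed"
    DIRECT = "stick maps straight to control surfaces"

DEGRADE = {ControlLaw.NORMAL: ControlLaw.ALTERNATE,
           ControlLaw.ALTERNATE: ControlLaw.DIRECT,
           ControlLaw.DIRECT: ControlLaw.DIRECT}  # nowhere lower to go

law = ControlLaw.NORMAL
law = DEGRADE[law]  # airspeed disagree: drop to ALTERNATE (AF447 stopped here)
print(law.name, "-", law.value)
```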
When you think of all of the Tesla accidents, this still seems to be the failure mode for autonomous systems. It's safer 95% of the time. But when it fails, it's because the users are so dependent on the systems that they don't even have the simplest of skills to prevent catastrophe.
> it's because the users are so dependent on the systems that they don't even have the simplest of skills to prevent catastrophe.
One pilot was pushing full nose down. The other full nose up. The system _did_ tell them that they were doing this, but in the sea of other alarms, it didn't register with them.
The lack of alarm prioritization and the lack of crew resource management training in the face of an emergency seem like the major human factors here. Half of that is in the design of the plane, the other half in how the company trains their pilots to handle severe emergencies.
I'd be hesitant to compare these. The aircraft still operated as designed. It detected that the input data was bad and turned the automation off. The pilot flying just didn't seem to grasp what that means.
Most Tesla Autopilot failures aren't drivers that don't know how to drive, it's the automation making bad inputs.
No, some warnings were also thrown away and never shown to the pilots because the automated systems thought they were erroneous (since they were so outside the flight envelope IIRC).
It's tempting to blame the inexperienced first officers, their training, or the CRM failures on this flight, but the blame for this crash has to land squarely on the atrociously bad UX of the Airbus cockpit.
Sensory overload and no clear readout of what is actually broken (pitot tube icing in this case) is bad but the fly-by-wire joystick configuration is what really doomed this flight. In Boeing airliners and many other types of aircraft the control sticks are mechanically linked together: you can physically feel if the other pilot is fighting your inputs:
> “Controls to the left,” Robert said, still worried about their bank angle. Pressing the priority button on his side stick, he took control and locked out Bonin, but Bonin immediately pressed his own priority button and assumed control again.
If the controls were mechanically linked Robert would have recognized his inputs were being overridden and would have been able to save the plane.
Someone in my extended family works at Airbus as an engineer (hence the throwaway account)
When I asked him about this accident, I brought up this specific point - the mechanical non-linkage.
Sure, the Airbus is fly by wire (there are no "mechanics"), but you can still program one joystick to mimic the other joystick, right? As far as I know, the Airbus plane actually averages the inputs..?? [0]
Anyway, he sorta-angrily gave me the same explanation as I just saw posted here as well: it was a crew management issue. (which of course may have played a huge role).
I am not a pilot so I am probably missing something. But he (the Airbus family member) did seem quite defensive about this. What portion was internalized corporate-comm "we are not at fault" reasoning? What portion was engineering hubris of "fly by wire is unquestionably superior to mechanical linkage"?
I don't know. But I do find it strange... and indefensible. When does the average of inputs make sense? I'm open to an explanation. Is there a good one?
---
[0] See https://news.ycombinator.com/item?id=4224707 from 2012.
"This input was averaged (read: canceled) with the other pilot's nose up input."
(and further down in the same sub-thread)
"The AF447 inputs were averaged."
Averaged inputs on the sticks was the worst thing I read about back when this happened. I can't imagine a circumstance where this is desirable, and it can only lead to confusion.
> When I asked him about this accident, I brought up this specific point - the mechanical non-linkage. Sure, the Airbus is fly by wire (there are no "mechanics"), but you can still program one joystick to mimic the other joystick, right?
That's come up at least twice in NTSB ship accident reports. Some ships have more than one control station. This is usually to allow driving from a control station out on a bridge wing, where the pier can be seen clearly. A high-speed ferry plowed into a dock in New York City in 2013 because of confusion over which station had control. "The NTSB concludes that the propulsion control system on the Seastreak Wall Street used poorly designed visual and audible cues to communicate critical information about mode and control transfer status."[1]
Big throttle levers at each station, but only the ones at one station at a time did anything. The levers did not move together.
The U.S. Navy went all the way to touch-screen throttles. After a collision involving confusion over which of three control stations was driving, they're going back to big handles.[2]
The Airbus system has been criticized, but it's two people sitting side by side. The ship systems have control stations much further apart.
[1] https://www.ntsb.gov/investigations/AccidentReports/Reports/...
[2] https://www.theverge.com/2019/8/11/20800111/us-navy-uss-john...
The 777 and 787 are fly-by-wire aircraft with mechanically-linked yokes. In extreme situations (one yoke is actually stuck and won't move at all), there is a breakaway on that link, and if you exert enough force upon it (like, really hulk at the thing), you can snap it. I don't know whether this has been necessary in flight, but the facility is there. The control surfaces are split between the yokes, so for example the left yoke pitch control commands the left-side elevator, making the plane still controllable with only one working yoke.
What would you have the system do instead of averaging the inputs? If you have 2 inputs when you should have 1, you warn with a light and sound, and what input do you use in the meantime?
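For concreteness, here is a simplified model of the scheme being debated. Airbus documentation describes simultaneous sidestick inputs as algebraically summed and clipped to single-stick authority (which behaves like averaging, or cancellation, when the pilots oppose each other), alongside a DUAL INPUT warning and a takeover priority button. The details below are illustrative:

```python
# Simplified model of Airbus-style dual-input handling; specifics invented.

def combined_pitch(left, right, left_locked_out=False, right_locked_out=False):
    """Stick positions range from -1.0 (full nose down) to +1.0 (full up)."""
    if left_locked_out:
        return right
    if right_locked_out:
        return left
    if left != 0.0 and right != 0.0:
        print("DUAL INPUT")  # aural warning plus the green indicator lights
    return max(-1.0, min(1.0, left + right))  # clip to one stick's authority

print(combined_pitch(-1.0, +1.0))  # opposing full inputs cancel out: 0.0
print(combined_pitch(-1.0, +1.0, right_locked_out=True))  # priority taken: -1.0
```

The priority button is the escape hatch in the quoted exchange above: holding it locks out the other stick, but the other pilot can press his own button and take priority right back.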
> It's tempting to blame the inexperienced first officers or their training but the blame for this crash has to land squarely on the atrociously bad UX of the Airbus cockpit.
The real problem is that someone who doesn't look at his most critical instrument* in instrument flying conditions, causes a stall, doesn't respond to stall warnings, doesn't communicate, refuses to hand over controls even though he doesn't understand what is happening, is in the cockpit at all. Either the training was inadequate, or deficiencies in Bonin's flying were overlooked.
Also, the PNF's hesitancy to shout "my controls" and/or hit the priority button, especially when he moved to pitch the plane down while it was probably still recoverable.
That the off-duty captain standing behind them is the first to vocalize they're at 10000 ft and stalling is indeed not great.
* The ECAM did show IF SPD DISAGREE: ADR CHECK PROC.
* The plane did shout DUAL INPUT several times, and a button to lock out the other pilot is right on the stick. You can hear it being used by Bonin.
I'd say this is a failure of crew resource management (not even technical ability) more than anything else.
And there were _several_ incidents with Airbuses that were caused by dual inputs.
I don't like to criticize someone with a different trade skill (much less when they're already dead), but to paraphrase two of my CFI friends, their summary view was "Bonin acted stupidly". He didn't communicate that he had the controls, and kept pulling the stick to nose up. The pilot-psyche element had a lot to play in this accident. He was sticking to a naive understanding of aerodynamics in panic, which was counterfactual, rather than counting on prior training experience (what we developers would call an antipattern). Relaying a similar point: his training still wasn't enough for him to gather situational awareness and apply common sense instead of panicky behavior.
Honestly, part of the mistake is also on Dubois. When flying as a team, it is essential to establish hierarchy and make sure who has authority. As the one with the lesser number of hours, Bonin should have been relegated in favor of Robert. That being said, hierarchy should not be as much of an obstacle that the one in authority disregards the suggestions of the copilot.
>It's tempting to blame the inexperienced first officers, their training, or the CRM failures on this flight, but the blame for this crash has to land squarely on the atrociously bad UX of the Airbus cockpit.
Hard disagree. No matter how bad the UX, there just isn't any conceivable situation where Bonin's holding the stick full nose-up for minutes on end wasn't suicidally dangerous, or made any sense at all.
Same sentiment heard elsewhere. Pulling the stick up in a stall midflight is a rookie mistake. Pulling up and increasing thrust is a takeoff peculiarity. Even if AF447's altimeter was faulty, they both missed something else in their panic: they fell 20,000 ft in a few minutes, going faster and faster. The g-forces are palpable enough to an experienced pilot (who relies on them sometimes when flying in complete darkness).
This incident was rookie flying behavior coupled with a complete disregard of situational awareness. The flight UX of the Airbus isn't that bad: the 330 & 340 have a much better panel layout and less clutter than their Boeing counterparts from the late '90s.
I have to agree with you. Even non-pilots who play video games have enough common sense not to do that: they notice that when they climb sharply they lose speed and eventually stall and that when they dive back down it gives them their speed back and allows them to recover.
His inexperience cannot be denied. Must have been terrifying for the pilot when the plane's safeties and automation suddenly disengaged. Uncertainty and panic must have seized him and never let go.
Armchair amateur here, but pitot tube failure is such a frequent occurrence that I'm angry we display any warning other than this one.
- It should have a needle to eject caps on it, so that pilots don’t forget the caps (Brisbane accident, among many),
- The needle should also sense, if not remove, the ice (Air France accident),
- It should wake up the pilots and reset controls to fixed values for the current altitude, since when Pitot tubes fail, autopilot and autothrottle are worse than worthless (they will automatically crash the plane).
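That last suggestion is close to what unreliable-airspeed procedures already do by hand: fly a memorized pitch and thrust for the current flight phase. A toy lookup along those lines (numbers illustrative, not from any actual checklist):

```python
# Toy pitch-and-power fallback per the suggestion above. The numbers are
# illustrative only; real memory items come from the aircraft's QRH.

def unreliable_speed_targets(altitude_ft):
    if altitude_ft >= 10000:
        return 5.0, "CLB"   # shallow pitch plus climb thrust at altitude
    return 10.0, "CLB"      # a bit more pitch down low

pitch_deg, thrust = unreliable_speed_targets(35000)
print(f"FLY PITCH {pitch_deg:.0f} deg / THRUST {thrust}")
```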
Pitot tube failures are incredibly frustrating. In one case it's theorized a wasp nest blocked the inside of one after the aircraft sat uncovered on the ground for a long time.
https://en.wikipedia.org/wiki/Birgenair_Flight_301
The article misses the key takeaway from this incident IMHO.
When the controls were pegged at full aft deflection, the stall warning would cease, because instrument readings in the deep stall were considered invalid by the computer. Whenever the pilot would start to push forward to recover, the computer stopped rejecting the readings, and started sounding the stall warning again!
So every time the pilot started to do the right thing, the airplane would start screaming "STALL STALL" at him, and he would pull the stick back again to make it stop.
I firmly believe that they would still be alive if the stall warning had either not been installed at all, or had functioned properly.
> During flight 447’s plunge toward the sea, the flight directors disappeared every time the forward airspeed dropped below 60 knots. This was because an airspeed below 60 knots while in flight is so anomalous that the computers are programmed to reject such a reading as false. Furthermore, at an angle of attack threshold which corresponded quite closely to 60 knots, the stall warning would cease for exactly the same reason. This created an unfortunate correlation, wherein Bonin would pitch up, the angle of attack and airspeed would exceed the rejection thresholds, the flight director would stop telling him to fly up, and the stall warning would cease; then if he attempted to pitch down, the angle of attack data would become valid again, the flight director would tell him to pitch up, and the stall warning would return. This perverse Pavlovian relationship could have subconsciously conditioned Bonin to believe that pitching down was causing the plane to approach the stall envelope, and that by pitching up he was actually protecting the plane against stalling. This violated basic aeronautical common sense, but by this point Bonin and common sense might as well have been on different planets.
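In pseudocode, the gating the quote describes boils down to something like this (the 60-knot threshold is from the article; the rest is simplified):

```python
# Simplified model of the validity gating described above. The 60 kt
# threshold comes from the article; the stall AoA value is invented.

def stall_warning_active(airspeed_kts, aoa_deg, stall_aoa_deg=10.0):
    if airspeed_kts < 60.0:
        return False  # data rejected as impossible, warning suppressed
    return aoa_deg > stall_aoa_deg

print(stall_warning_active(45.0, 40.0))  # deeply stalled, yet silent
print(stall_warning_active(90.0, 25.0))  # pitching down to recover: True
```

Pushing forward made the data valid again, so the warning returned precisely when Bonin did the right thing.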
Not a pilot but I believe that pulling the stick back is exactly the wrong thing to do in a stall. The pilot should have instinctively known that pulling back on the stick would not recover a stall. On the other hand, pilots are taught to "trust the instruments" but IIRC the aircraft had no angle of attack indicator, and the airspeed tubes were frozen over. So lots of bad or missing information, contributing to stress and panic, which does strange things to otherwise normal people.
I'm a pilot. Yes, you're correct, but there was so much else going on here... unless you're certain it's wrong, you're going to be inclined to listen to the airplane when it screams at you not to do something in an already terrifying unfamiliar situation.
It's important to note the stall indication itself wasn't disregarded, but rather the indicated airspeed (which is a prerequisite for the stall warning). This plane was in the exceptional situation of falling out of the sky faster than it was moving forward.
Autopilot, automation, algorithms, safeties, guarantees, envelopes... Illusions, gone at the first sign of trouble. I easily get used to such technological comforts and it is deeply traumatizing when they're taken away.
The badassery men are capable of when they accept this and drive instead of allowing themselves to be driven is the stuff of legends.
https://en.wikipedia.org/wiki/Mercury-Atlas_9
https://en.wikipedia.org/wiki/Gordon_Cooper
> Cooper lost all attitude readings.
> [a short-circuit] left the automatic stabilization and control system without electric power.
> Cooper noted that the carbon dioxide level was rising in the cabin and in his spacesuit.
> "Things are beginning to stack up a little."
> Turning to his understanding of star patterns, Cooper took manual control of the tiny capsule and successfully estimated the correct pitch for re-entry into the atmosphere.
> Cooper drew lines on the capsule window to help him check his orientation before firing the re-entry rockets.
> "So I used my wrist watch for time," he later recalled, "my eyeballs out the window for attitude."
> "Then I fired my retrorockets at the right time and landed right by the carrier."
These AA instructional videos by Captain Vanderburgh are fascinating - even to a non-pilot like myself.
They are especially relevant as we dip our toes into automobile auto-pilot and the "automation dependency" that comes along with it.
Of particular note in this video:
@ 12:30: "... tactily connected to the airplane ..."
@ 14:15: "... we see automation dependent crews, lacking confidence in their own ability to fly an airplane are turning to ther autopilot ..."
@ 17:35 - 18:15: (just listen)
[1] https://www.youtube.com/watch?v=5ESJH1NLMLs
https://admiralcloudberg.medium.com/children-of-the-magenta-...
https://www.vanityfair.com/news/business/2014/10/air-france-...
All of her articles about transportation disasters (Columbia, M/V Estonia, many other plane crashes) are very good and highly recommended.