
The Digital Doctor: Automation, Aviation and Medicine


 The story of Chesley “Sully” Sullenberger – the “Miracle on the Hudson” pilot – is a modern American legend. I’ve gotten to know Captain Sullenberger over the past several years, and he is a warm, caring, and thoughtful person who saw, in the aftermath of his feat, an opportunity to promote safety in many industries, including healthcare.

Continuing my series of interviews conducted for my upcoming book, The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine's Computer Age, here are excerpts of my interview with Sully, conducted at his house in San Francisco's East Bay on May 12, 2014.

Bob Wachter: How did people think about automation in the early days of aviation?

Sully Sullenberger:  When automation became possible in aviation, people thought, “We can eliminate human error by automating everything.” We’ve learned that automation does not eliminate errors. Rather, it changes the nature of the errors that are made, and it makes possible new kinds of errors. The paradox of cockpit automation is that it can lower the pilot’s workload in phases of flight when the workload is already low, and it can increase the workload when the workload is already high.

If you’re in the cruise portion of a long distance flight, it’s possible to have the airplane programmed to fly a specified vertical and horizontal path for many hours without much intervention at all. That relegates the pilots to the role of monitor, something that humans are not good at. Witness the TSA security agents, who are screening countless passengers looking for that one time in thousands where there’s a threat. Humans are much better doers than monitors.

The other problem with technology is, at least for now, it can only manage what has been foreseen and for which it’s been programmed. So one of the weaknesses of technology is that it has a hard time handling “black swan” events, like our flight.

BW: Why are computers so good in aviation and so mediocre in healthcare?

SS:  I think there are a number of reasons. First, we have a long history of making safety one of our highest priorities. From the very beginning we've known that aviation is an inherently risky endeavor. And, in the media age, we all see the video of the huge orange fireball as the jet fuel explodes. That gives us a sense of urgency.

The second component is that we take a systems approach. We look at it from end to end. We look at connectedness, the interrelatedness of all the things that we do, of the systems and the airplane, the human system, the technology system, and the air traffic control system. So we have to be the absolute master of the machine and all the systems, of the passengers, of the environmental conditions… All simultaneously, all continuously.

And third, we have this formal lessons-learned process that does root-cause analysis, makes findings about fact, causes and contributing factors, and makes recommendations for improving the system in terms of the designs, the policies, procedures, training, human performance, and standards. It’s a self-correcting mechanism.

BW:  In the early days of aviation, was there an overarching philosophy about the relationship between the people and the machines?

SS:  Yes. That the pilot should absolutely be informed about everything going on in his or her aircraft, and ultimately able to affect the outcome, to change it, to control it.

BW:  So the pilot would be in charge?

SS:  Always be in charge.

I asked whether there was another camp in aviation – folks who believed that the technology could, and should, fly the plane.

SS: Airbus was, and still is, different than Boeing in their automation philosophy. Airbus tended to embrace letting the airplane do a lot of things, where Boeing’s philosophy was that the pilot is and should be in control of every part of the process. For example, Airbus would set hard limits in the digital flight control system laws, the fly-by-wire laws that would prohibit the pilot from exceeding certain limitations in how fast or slow the airplane could fly, how steeply it could bank, how many degrees you could tilt the angle of the wing from horizontal or raise the nose above horizontal. Whereas Boeing takes the approach that there are rare occasions where, to avoid colliding with another airplane or crashing into the ground, you need to pull harder than the flight control system might otherwise allow. I tend to favor having the pilot directly and completely in control of the airplane. The downside of that is that every pilot has to be trained well, be highly experienced, and have a deep understanding of airplanes and how they work.
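To make the contrast concrete, here is a rough, purely illustrative sketch – not actual Airbus or Boeing flight-control code, and the limit value is an invented placeholder – of a "hard" protection that clamps the pilot's command versus a "soft" protection that warns but ultimately yields:

```python
# Illustrative sketch only -- not real flight-control logic.
# The bank-angle limit below is a hypothetical placeholder.

BANK_LIMIT_DEG = 67  # hypothetical protection limit

def hard_protection(commanded_bank_deg: float) -> float:
    """Airbus-style hard limit: the command is clamped, so the pilot
    cannot exceed the protection no matter how far the stick is held."""
    return max(-BANK_LIMIT_DEG, min(BANK_LIMIT_DEG, commanded_bank_deg))

def soft_protection(commanded_bank_deg: float) -> float:
    """Boeing-style soft limit: the system warns (a stand-in for stick
    force or an aural alert) but ultimately honors the pilot's command."""
    if abs(commanded_bank_deg) > BANK_LIMIT_DEG:
        print("WARNING: bank-angle limit exceeded")
    return commanded_bank_deg

# A pilot pulling hard to avoid a collision:
print(hard_protection(80))  # clamped to 67
print(soft_protection(80))  # allowed through, with a warning
```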

BW:  But you had that experience because you had a whole career before all of this technology. How does a young person get that? And how do we prevent the erosion of skills as the technology gets better?

SS: As we use the technology more and more – and we’re encouraged to do so by our airlines because it’s so efficient – then we get the sense that it’s almost infallible. And, because we haven’t done much manual handling of the airplane, we lose confidence in our ability to manually control the airplane as well as the automation can. That sometimes makes some pilots reluctant to quickly and effectively intervene when they see things going wrong or when the automation isn’t doing what they expect.

BW:  Is the solution forcing them to fly manually more of the time?

SS:  Yes. We have to design our systems to require our engagement. We cannot design a system that’s so hands off that we are simply required to sit there and watch it for 14 hours. That’s simply not going to work.

Another piece of the problem is that our systems should offer more options than all or nothing. I’ve been proposing an a la carte menu, with increasing or decreasing levels of technology they can use. The only question we have to ask ourselves is what level of technology is most appropriate for this phase of flight. The answer is the one that keeps us engaged and aware and able to quickly and effectively intervene, and also keeps our workload neither too high nor too low.
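Sully's "a la carte" idea can be pictured, very loosely, as choosing for each phase of flight whichever level of automation keeps the crew engaged and the workload moderate. The sketch below is purely illustrative – the levels, phases, and selection rules are my own invented placeholders, not an actual avionics interface:

```python
# Purely illustrative "a la carte" automation menu -- the levels,
# phases, and selection rules are invented placeholders.
from enum import IntEnum

class AutomationLevel(IntEnum):
    MANUAL = 0            # hand-flying on raw data
    FLIGHT_DIRECTOR = 1   # guidance cues, pilot still flying
    AUTOPILOT = 2         # autopilot flies, crew manages and monitors
    FULL_AUTO = 3         # autopilot + autothrottle + managed navigation

def choose_level(phase: str, workload: str) -> AutomationLevel:
    """Pick the level that keeps the crew engaged without overload."""
    if workload == "high":                      # busy terminal area, emergency
        return AutomationLevel.FLIGHT_DIRECTOR  # offload work, stay in the loop
    if phase == "cruise":
        return AutomationLevel.AUTOPILOT        # but not FULL_AUTO for 14 hours
    return AutomationLevel.MANUAL               # keep hand-flying skills current

print(choose_level("approach", workload="high"))
print(choose_level("cruise", workload="low"))
```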

We turned to January 15, 2009, the day Sully safely landed an Airbus A320 on the Hudson River, within a few minutes of a bird strike that disabled both the plane’s engines.

SS: What happened to us was a very rare event. And it happened late in my career, after I’d been an airline pilot for 29 years, a captain for 21 of those. In all that time, 42 years of flying, I had never experienced in flight the actual failure of even a single engine. That’s how reliable our propulsion technology is. And, in all that time I had never been in a situation in which I had doubted the outcome of a flight. I had never been challenged by a situation that I didn’t think I could handle.

BW:  Is there a strange part of one’s brain that says, “I hope that something like that would happen because I’ve been training for it my whole life”?

SS:  No.

BW:  Is that too crazy?

SS: No, it’s not, because it happens a bit in combat. I served in the Vietnam era but I was never in combat. I always wondered if I would have been up to the challenge. I think it’s just natural when you train for that to wonder: Would I be brave enough? Would I be resilient enough? Would I be a good enough leader? Would I be clever enough, strong enough to complete the mission and keep the people I was responsible for safe? I was prepared never to know the answer to that question. But I always wondered about it.

But in the airline world, it’s different. Our job is to make it calm and predictable. We make it look easy. We work so hard to plan and anticipate and have alternatives for every course of action.

And so on this flight, just 100 seconds after takeoff, I saw the birds about two seconds before we hit them. At that point we’re travelling at 316 feet per second, and there was not enough time or distance to maneuver a jet airplane away from them. It was like a Hitchcock film. I saw the birds fill the windscreen, I could hear the thumps and thuds as we struck them. And as the birds entered the center of both jet engines and began to damage them, I heard terrible noises I’d never heard before, severe vibrations and then within seconds, the burning smell came into the cabin.
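For a sense of scale (my arithmetic, not Sully's), 316 feet per second is roughly 187 knots, or about 215 mph, so a two-second warning amounts to only about 630 feet of closure:

\[
316\ \tfrac{\text{ft}}{\text{s}} \div 1.688\ \tfrac{\text{ft/s}}{\text{kt}} \approx 187\ \text{kt} \approx 215\ \text{mph}, \qquad 2\ \text{s} \times 316\ \tfrac{\text{ft}}{\text{s}} \approx 630\ \text{ft}.
\]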

We had a great advantage in that there was no ambiguity. I knew what the cause was and what that entailed, what that meant for us, and I knew it was going to be, after such a routine career, the challenge of a lifetime. Yet it was such a sudden shock, the startle factor was enormous. I could feel my blood pressure shoot up, my pulse spike, my perceptual field narrow. It was really distracting, and it was marginally debilitating. It absolutely interfered with my cognitive processing. It didn’t leave me with the ability to do the math on altitude and distance.

I had to do three things that, in retrospect, made the difference. First, I had to summon up from somewhere within me this professional calm that really isn’t calm at all. It meant having the discipline to compartmentalize and focus on the task at hand in spite of the stress.

Second, even though we had never trained for this, because I had such a well-defined paradigm in my mind about how to solve any aviation emergency, I was able to impose that paradigm on this situation and turn it into a problem that I could solve.

Finally, because of the extreme workload and time pressure I knew that there was not time for me to do everything I really needed to do. I chose to do the most important things, and try to do them very, very well. And I had the discipline to ignore everything else that I did not have time to do. Calming myself, setting clear priorities, managing the workload, load shedding… those were the key.

BW:  Did you do all that consciously, as in: “Here are all the things one might do, but here are the six that I’m going to do?” Or was it all instinct?

SS: I would say that I was able to quickly synthesize a lifetime of training and experience and intuitively understand that that was the approach I needed to take. It was partly a result of my military flight training, from being a fighter pilot. It was all very intuitive. Fly the airplane first. Make good choices. Pick the best place to land. That sort of process.

But I immediately did two very specific things that I remembered from the dual engine flameout checklist. I turned on the engine ignition so, if the engines could recover thrust, they would. And I started our aircraft’s auxiliary power unit, which is a small jet engine, a small turbine that has its own electrical generator. In a fly-by-wire airplane like the Airbus, it’s especially important to have an uninterrupted supply of electrical power because the plane does not have a mechanical, wires and pulleys connection between the flight control stick or wheel in the cockpit and the flight control surfaces on the wings and tail. Instead the pilot’s inputs are interpreted and mediated by flight control computers, which send electrical impulses to actuators that move the flight control surfaces of wings and tail. And that requires electric and hydraulic power.

Without that power, our systems and our information displays would have degraded. And the power also allowed us to keep intact all the flight envelope protection, which prevented us from getting too fast or too slow or banking too steeply.
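A highly simplified way to picture why starting the APU mattered – this is a toy model, not Airbus logic, and the mode names and limits are invented – is that the sidestick only requests a maneuver; powered flight control computers decide what the surfaces actually do and keep that request inside the protected envelope:

```python
# Toy model of fly-by-wire and its dependence on electrical power --
# NOT Airbus logic; the modes and limits are invented for illustration.

def control_mode(engine_generators: int, apu_generator: bool) -> str:
    """With electrical power, the flight control computers, displays,
    and envelope protections stay fully available."""
    if engine_generators > 0 or apu_generator:
        return "full"       # computers powered: protections and displays intact
    return "degraded"       # limited power: fewer protections, degraded displays

def command_pitch(stick_input_deg: float, mode: str) -> float:
    """The sidestick is a request; the computers decide what the control
    surfaces do, keeping the airplane inside the protected envelope."""
    if mode == "full":
        return max(-15.0, min(30.0, stick_input_deg))  # hypothetical pitch limits
    return stick_input_deg  # degraded: the guardrails are gone

mode = control_mode(engine_generators=0, apu_generator=True)  # both engines out, APU running
print(mode, command_pitch(40.0, mode))  # the pull-up request is limited by the protection
```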

As we’re heading toward the water, I’m the one controlling the airplane, but my first officer, Jeff Skiles, is monitoring the performance of the airplane and my performance. I didn’t have time to tell him how to help me. Right before landing he intuitively understood that he needed to help me judge our altitude. And so he stopped trying to use the checklist to regain thrust from what turned out to be irreparably damaged engines and he began to call out to me altitude and airspeed. You’re 200 feet, you’re 100… that kind of stuff. And if I had misjudged that height by even a fraction of a second – I might have raised the nose too early, got too slow, lost lift and then dropped it in hard. If I waited too long we would hit nose first way too fast with too great a rate of descent and that could be very bad.

And even though I was not at the minimum speed at which the Airbus flight control protections would have prevented me from raising the nose further no matter how hard I pulled, there was a little-known feature of the Airbus software – that no airline pilots, no airlines knew about, only a few Airbus engineers knew about – that prevented me from getting that last bit of maximum aerodynamic performance out of the wing. This was discovered very late in the investigation.

BW:  You didn’t perceive it?

SS:  Well, I’m not sure.

BW:  You had other things on your mind.

SS:  I wasn’t sure. But it turns out at the end I was not quite at the maximum angle the wing could be allowed to try to create lift. And yet even though I was pulling back, commanding full nose up on the side stick, the flight control computers prevented me from getting any more performance. That was something the investigators debated.

BW:  It’s interesting that you spoke about how you were glad you were in this automated envelope of protection. I might have guessed that you would have wanted at that moment to be completely off cruise control and have total control yourself?

SS:  Well, it’s not at all cruise control. The automation was not flying the defined path for me. But it put guardrails out there. So it kept me from making a gross error. It kept me from stalling the airplane.

BW:  So you were happy that it was there at that moment?

SS:  It was not something I consciously thought about but it was a good thing to have the airplane be fully operational. But it turned out that because of this little known aspect of the flight control system, we hit a little bit harder than we would have had we been able to get that last little bit of lift out of the wing right before we landed. The NTSB recommended that the airlines teach the pilots about this feature, because we didn’t know about it.

I asked Sully what lessons he drew about the relationship between people and machines from the Hudson landing?

SS:  I don’t think there’s any way that technology could have done what we did that day.  Absolutely no way.

BW:  You can’t envision a future in which the engineers have built in a “both engines fail” mode and the technology perceives it and starts doing X and Y?

SS:  I think that would be possible, but then that doesn’t take into account the whole divert decision. About where your flight path is going to intersect the earth’s surface in 208 seconds. And someone would have had to have anticipated this specific circumstance, this particular black swan event and programmed it to do that.

Theoretically, maybe technology could do some of these things. But to handle the whole thing from start to finish required a lot of innovation, a lot of – it required us to take all the things that we had learned, adapt it, apply it in a new way to solve a problem we never anticipated and never trained for and get it right the first time. In 208 seconds.
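As a back-of-the-envelope framing of the divert decision (my own sketch with made-up numbers, not data from the flight): with no thrust, time aloft is roughly altitude divided by sink rate, and the reachable footprint is groundspeed times that time – which is why "which runway, or the river?" has to be answered in seconds:

```python
# Back-of-the-envelope glide framing with hypothetical numbers --
# not figures from Flight 1549 or any aircraft manual.

def time_aloft_s(altitude_ft: float, sink_rate_fpm: float) -> float:
    """Seconds until the flight path intersects the surface."""
    return altitude_ft / (sink_rate_fpm / 60.0)

def reachable_nm(groundspeed_kt: float, seconds: float) -> float:
    """Rough distance the airplane can cover in that time."""
    return groundspeed_kt * (seconds / 3600.0)

t = time_aloft_s(altitude_ft=3000, sink_rate_fpm=900)  # hypothetical values
print(round(t), "seconds aloft,", round(reachable_nm(180, t), 1), "nm reachable")
```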

BW: I can’t help but think that one of the riskiest things about technology is that it seems inconceivable that a young pilot today will have your skills and instincts, because they’ve grown up in such a different environment.

SS:  And that’s another great challenge – we have to find a way to pass on this institutional knowledge. And not just the what, but also the why. You see pilots of my generation, especially ones who have wanted to fly since we were five years old – we just couldn’t get enough. We couldn’t learn enough about the history of our profession, about historic accidents, about why we do what we do. Because almost every procedure we have, almost every rule in the Federal Aviation rulebook, almost every bit of knowledge we have is because someone, somewhere died. Often many people did. And so we have learned these important lessons at great cost, literally bought with blood. We dare not forget and have to relearn them.

Sully then discussed the 1989 landing of United Flight 232 in Sioux City, Iowa, after a catastrophic engine failure severed all of the DC-10's hydraulic systems, rendering the plane's flight controls completely inoperable. A pilot named Al Haynes and his crew miraculously conjured up a system of using differential thrust to control the plane sufficiently to save 185 of the 297 people on board.

SS: I remember having that radio communication back and forth with Patrick Harten, the air traffic controller. I said to Patrick, “I’m not sure we can make any runway. We may end up in the Hudson.” At that moment, I was thinking, that’s just like what Al Haynes said when the air traffic controller at Sioux City said –

BW:  Did Sioux City actually cross your mind at that moment?

SS:  Yeah, I thought about that flight, because at one point as they were approaching the Sioux City airport, the controllers want to give you the widest discretion they can to solve the problem any way you think is possible. And there were multiple runways of different directional orientations at Sioux City. The controller said to Haynes, “You’re clear to land on any runway.” At that point the control of the airplane was still so tenuous that Al said, almost laughing, “You want to be particular and make it a runway, huh?” And that thought – of Al’s comment – crossed my mind.

And so, yeah, we have to pass on this knowledge.

13 replies

  1. Even the best systems and ERPs are only as good as the people who operate them. We at crmprogrammer.com experience this reality every day. Success has many fathers; failure has none. So, although the FOSS (Free Open Source Software) we provide is probably the most secure and robust solution around, it becomes a favorite punching bag nevertheless. We cater to myriad industries, including aviation. We test our software rigorously, and you may test it with your own test cases; at the end of the day, you can see for yourself whether the solution is as good as we claim it to be. We are also happy to provide the source, to ensure your peace of mind. It is really great how you have summed up the issues plaguing software solutions in healthcare. Great software does indeed need trained people to operate it.

  2. Thanks Dr. Wachter,

    when I was teaching my law & medicine students and the inevitable issue of patient safety and the "jumbo jet/day" comparison (the one you wisely alluded to) came up, I would ask the students to consider an axiom from my redneck car racing days . . . "speed costs money, just tell me how fast you want to go."

    In other words, we can have a "100% safe" health care system. It's just going to cost much more than anyone can afford, both in actual $$ and in the time required to spend on every patient. I can practice essentially error-free medicine if I am seeing four patients per day. I can complete all the JCAHO-required questionnaires on firearms, seat belts, med allergies, vaccinations, "safe at home," and CAGE profiles, all while administering smoking cessation and diabetic teaching. I can query patients multiple times per visit on their Wong-Baker pain scores and use a clunky EMR to document a 37-page encounter for a level III E&M visit.

    This will make the hospital attorneys beam with pride and the nursing supervisors ecstatic, but rather quickly the hospital administrator will be fired because revenues will dry up, and the 127 patients still sitting in the ER waiting room will be rioting in anger because they had to wait 17 hours for their kid to be evaluated for a "cold."

    Just tell me how fast you want me to go . . .

  3. When Sully’s jet went down, the FAA investigated; and it does so for near misses and other lesser events.

    When EHRs go down, no one investigates; and no one investigates the other errors, deaths, and injuries.

    Hello?

  4. The FAA would not put up with the crap that afflicts the HIT of medical care. There is zero oversight of the HIT or the vendors. To make matters worse, the hospitals are beholden to the vendors.

    There is EHR anarchy in most hospitals.

  5. I would guess that there is nothing in avionics, aircraft engineering, or mechanical engineering in general that compares with the staggering complexity of biology's interactome. See "Uncovering disease-disease relationships through the incomplete interactome," Jörg Menche et al., Science, 20 Feb 2015.

  6. “just do not make the doctor touch it except to sign something important.”

    Like your billing statement?

  7. Great job, Sully.

    Bob, if you interview doctors who are also pilots, you will learn that the universe of medical practice is vastly larger than the universe of aviation.

    The alarms on everything were created by people who become alarmed easily. One ubiquitous, commonly used piece of equipment loudly alarms just because it has been turned on! Alarm fatigue is real.

    The best piece of automation in medicine is the digital x-ray system. The worst is the EMR.

    I serve the EMR. It does not serve either the patient or me, the doctor. A total waste of time. I am not knocking the computer…just do not make the doctor touch it except to sign something important.

  8. I appreciate the sentiment from Dr. Roboto and lawyerdoctor — since the safety field began with the "we're killing a jumbo jet a day" analogy in the 1999 IOM report, there has been a constant tension between using aviation examples and not overusing them. Not only can they be annoying to clinicians, but I think some of the lessons haven't been all that helpful, in part because our worlds are so very different. For example, we embraced the model used by the aviation reporting system and told clinicians to "report everything," not recognizing that we were overwhelming our reporting systems with data, most of it non-actionable (aviation, on the other hand, is already so safe that "report everything" unearths a manageable number of actionable hazards).

    But I can tell you that the time I spent with Sully, and the day I spent with Boeing engineers who are responsible for their cockpit design, were some of the most instructive experiences I had in writing my book. Not that designing a computer system for a cockpit is the same as designing an EHR — they’re wildly different — but the embedded philosophy of user-centered design in the aviation industry was a far cry from what I saw when I spoke to several IT vendors.

    As one vivid example, when I told Sully and the Boeing folks about the volume of alerts the average clinician sees in a day (not only CPOE alerts but bedside alarms in the ICU), they were flabbergasted. We need to embrace some of these lessons in healthcare if we're going to get it right.

  9. I agree with Dr. Roboto,

    Dr. Sullenberger is a true American hero and deserves all the accolades and praise due him.

    But I too am “weary of our endless cross-industry comparisons.” Yes, flying is inherently a dangerous endeavor. So is cutting open a human and taking out the diseased parts.

    There are many differences (and pitfalls) in comparing medicine and aviation. One of the most striking is that with aviation, you start with a perfectly good airplane, check the weather, wind, elevation, takeoff weight, etc., and fly from point A to point B. If something goes wrong, either something broke (which means error in making the part, or maintaining the part), or it’s “pilot error.”

    In medicine we don’t get to start with a perfectly good airplane. We get to play the hand we are dealt, e.g., 76 y/o smoker with renal insufficiency, diabetes, CHF with ejection fraction of 25%, liver disease, on steroids for chronic arthritis, comes in the door of ER with an upper GI bleed, systolic BP 80/palp.

    If this guy dies, is it because of "doctor error"?? What "systems changes" to our hospital, our ER operations, our medical training, etc. will make him survive? This is not a function of simply applying enough technology and double/triple/quadruple checking everything to ensure a "safe" encounter for this patient.

    Now if the doctor asks for vancomycin and the nurse administers vibramycin, then ok, we have a problem that might be amenable to such “system change.” But until Boeing and Airbus start manufacturing humans, AND have control of their myriad self-destructing behaviors, then I think the comparison is going to continue to fall short.

  10. I worry that we don’t know enough about the new generation of errors that are being created as complex systems and technologies interact …

    I’m not sure what the solution is here.

    Do we need the equivalent of a black-box cockpit recorder?

  11. “We’ve learned that automation does not eliminate errors. Rather, it changes the nature of the errors that are made, and it makes possible new kinds of errors. ”

    A very good thought, which needs to be considered moving forward.

  12. Great points about automation, but …

    Not to detract from Captain Sullenberger's accomplishment – but I'm growing a little weary of our endless cross-industry comparisons.

    Commercial aviation is a great tool for dramatizing the importance of patient safety. Want to get the class to wake up? Show video of a 747 crashing. The class will wake up. We'll fill the seats at the conference session.

    Makes a nice break from the mind-numbingly boring talks about patient-centricness and meaningful usefulness.

    But the problem comes in when we try applying what is a good teaching tool to a system that is fundamentally different. Every problem starts to look like a teachable moment. Every process starts to look like a 747.

    How do we get away from that?