By Rick Schrenker

Just over four years ago, Carl V. Jones II made the following remark in a 24×7 Magazine Soapbox column: “Every year, more and more equipment becomes network-capable. As a result, the frequency of outages and reboots will also increase. In order to compensate for this increase, we must have a plan for quick response and repair.”

Jones’ remarks motivated me to respond in a related column (read it here) regarding the phenomenon known as the “normalization of deviance”—a term coined by Diane Vaughan in her study of the Challenger disaster. Although I was very impressed by Jones’ column, I saw his statement above as risking tacit acceptance of a new technological norm in which reliability—and perhaps, by extension, safety—can be expected to decrease in order to reap the benefits the technology is purported to make available. And that, of course, raises the question: “Safety at what price?”

Revisiting the definition of “normalization of deviance”: “The gradual process through which unacceptable practice or standards become acceptable. As the deviant behavior is repeated without catastrophic results, it becomes the social norm for the organization.”

At What Cost?

I chose to revisit this topic in light of one aspect of the recent Boeing crashes—namely, the move from the old norm, in which the FAA oversaw all regulatory aspects of aircraft design and development, to the current one, in which the aircraft manufacturer shares that responsibility. But is it only in hindsight that we can see a “gradual process” as resulting in an “unacceptable practice or standard?”

This raises a related question: Can organizations accept deviations from standard practices so frequently that doing so becomes a norm unto itself? And is there any evidence that this occurs in healthcare delivery?

In an article on that very topic, John Banja noted “health professionals typically justify practice deviations as necessary, or at least not opposed to accomplishing their ethically unimpeachable objective of relieving their patients’ pain and suffering.” (I encourage you to read his article for yourself.)

Banja goes on to cite examples of rationalizations to justify the deviations from practice:

  • “The rules are stupid and inefficient!”
  • “I’m breaking the rule for the good of my patient!”
  • “The rules don’t apply to me. You can trust me.”

One, in particular, caught my eye: “The work itself, along with new technology, can disrupt work behaviors and rule compliance.”

Changing Times

When I entered the field in 1979, the hospital where I worked was in the process of changing over to a new patient monitoring system. When I left in 1990, the hospital had just replaced that system. During the 11 years it was in place, there was no software to upgrade or patch—nor were there hospital network interactions to consider, wired or wireless.

Hardware-based systems didn’t change much—if at all—over their useful life. Caregivers, whether for patients or equipment, essentially worked with the same equipment for more than a decade. That was the norm. Fast-forward to 2015’s norm: “Every year, more and more equipment becomes network-capable.”

So, are you and your fellow HTM professionals following the same practices today that you did in 2015? 1990? Before and after Y2K?

Perhaps even closer to home, I challenge you to reflect on the following: Have you ever failed to follow a PM, installation, or any other procedure to the letter? If so, how did you rationalize your decision? If it didn’t result in a negative consequence, did you become more comfortable making those decisions over time? Did any of your decisions become unofficial new norms? If so, did you later replace them with other unofficial new norms?

How small can a rule-bending deviation from standard practice be made without requiring formal approval? And is that before or after the rule is bent? Given the increasing interaction between devices—coupled with how fast relatively independent systems are changing—are there situations where a change of any size must be formally approved prior to implementation? Does your department have a policy for this? Should it?

While I was writing this column, I read a report describing how the U.S. FDA’s drug approval processes have changed over the last few years. The story specifically describes the recent approval, for children, of a blood thinner previously approved only for adults, based on a single clinical trial of 38 pediatric patients. Hopefully, this decision will never be criticized in hindsight, but when and how should we make potentially risky decisions when not doing so carries risks of its own?

Finally, what’s “normal” in HTM? How rapidly is “normal” changing? And are our methods of change control up to the related technology management task?

Rick Schrenker is a systems engineering manager for Massachusetts General Hospital. Questions and comments can be directed to chief editor Keri Forsythe-Stephens at [email protected]

References:

  1. Banja J. The normalization of deviance in healthcare delivery. Bus Horiz. 2010;53(2):139. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2821100/. Last accessed May 19, 2019.
  2. “A new approval for an old blood thinner.” Axios. https://www.axios.com/fragmin-blood-thinner-approval-children-pfizer-d86af821-897e-463b-a60b-9de68ce4003f.html. Last accessed May 21, 2019.