 Getting involved in the medical equipment acquisition process can help ensure that devices are easy for hospital staff to use—thereby decreasing the chance of user errors.

We’ve tried complaining. We’ve tried educating. What else can we do about medical device user errors? Plenty! Medical technology is an essential part of the health care delivery system. Medical devices are used in patient care for diagnosis, treatment, and monitoring. When medical devices are not used safely and effectively, the quality of patient care suffers.

Safety and effectiveness are opposite sides of the same coin. A medical device is effective when it does what we want it to do: provide the intended diagnosis, treatment, or monitoring function. A medical device is safe when it doesn’t do what we don’t want it to do: cause harm to the patient, caregiver, facility, environment, and the like.

Success in using a medical device means that the appropriate levels of safety and effectiveness have been achieved. Failure means we have not been as safe or as effective as we had intended.

Many things can lead to failure in the use of a medical device. One is that the device itself can fail to function properly. That’s something that we in the clinical engineering world know a lot about. We do preventive maintenance to reduce the likelihood of failure. We inspect medical devices for evidence of failure. We repair the devices after they fail. We’re the experts in this area.

Another thing that can lead to failure is the improper use of the device, commonly referred to as “user error.” Some people prefer the term “use error” because it avoids focusing on the person using the device and opens our thinking to other factors. That makes sense in that the user himself or herself may not actually be the “cause” of the improper use. A poorly designed device interface, a noisy and stressful environment, and numerous other factors can make a medical device difficult to use properly.

However, in this article we’ll use the more common “user error” terminology and look at some things we can do about medical device user error. We already have a strong track record supporting the safe and effective use of medical technology. Here are six ways we can build on our expertise and increase our contribution to high-quality patient care.

1. Apply Murphy’s Law
“If anything can go wrong, it will.” Some people say Murphy was an optimist. In fact, in 1949 Murphy was an engineer working on a project studying the effects of sudden deceleration on the human body. When a miswired transducer delayed a test, Murphy said of the person who did the miswiring, “If there is any way to do it wrong, he’ll find it.”

Apply Murphy’s Law to medical devices: “If there is any way to misuse a medical device, someone will eventually do it.” That’s not a joke; that’s human nature. No matter how much we educate people, no matter how much we exhort them to do the right thing, no matter how much we reward them for success, no matter how much we punish them for failure, people will make mistakes.

Can the likelihood of human error be reduced? Certainly. Can it be eliminated? No. There’s no perfect machine. There’s no perfect person. There’s no perfect system in which medical devices will always be used with complete safety and effectiveness. That’s the law, and we need to get over it! We need to open our minds and consider other ways to deal with medical device user error.

2. Understand Risk
When we contemplate the many ways that things can go wrong (always keeping Murphy’s Law in mind), we need a way to prioritize them so we can focus our efforts on what’s most important. One way to do this is to look at the risk represented by each adverse event we are considering. High-risk events are more important and deserve more of our attention.

Conceptually, the risk associated with an event equals the probability (likelihood) of the event times its severity (effect). Events that are unlikely to occur (low probability) and produce little harm (low severity) represent low-risk events that we can regard as having low priority for action. Events that happen frequently (high probability) and cause great harm (high severity) are high-risk, high-priority events. All other events lie somewhere between these extremes.

Note that this definition of risk suggests two ways for reducing risk: reducing probability and reducing severity. Well-lighted highways in good repair reduce the probability of automobile accidents. Seat belts and air bags reduce the severity of automobile accidents. Successful risk management employs multiple strategies in both categories.

Sometimes we have solid quantitative data for probability and severity, but usually we have to make do with qualitative estimates. Nevertheless, this approach can help us define approximate risk levels to use in prioritizing our risk-management activities. We can start at the top and work our way down the list as far as it is economical to do so. This process is at the heart of the various Joint Commission on Accreditation of Healthcare Organizations (JCAHO) standards calling for risk assessment. JCAHO expects health care organizations to look at the ways things can go wrong, go through a prioritization process, and start working on the high-risk items.
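To put rough numbers on this, here is a minimal sketch in Python of how such a qualitative ranking might be set up; the scenarios and their 1-to-5 ratings are illustrative assumptions, not data from any actual risk assessment.

```python
# A minimal sketch of qualitative risk scoring, assuming a simple 1-5
# ordinal scale for probability and severity. The scenario names and
# ratings below are made up for illustration.

events = [
    # (description, probability 1-5, severity 1-5)
    ("Infusion pump free-flow after set misload", 2, 5),
    ("Wrong units entered for ventilator alarm limit", 3, 4),
    ("Monitor lead reversal", 4, 2),
]

def risk_score(probability, severity):
    """Risk = probability x severity, per the definition above."""
    return probability * severity

# Work the list from the top down, as far as it is economical to do so.
for description, p, s in sorted(
    events, key=lambda e: risk_score(e[1], e[2]), reverse=True
):
    print(f"{risk_score(p, s):>2}  {description}")
```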

This general approach can be extended and formalized in a technique known as Failure Mode and Effects Analysis (FMEA). Originally developed by reliability engineers in the aerospace and defense industries, and later adopted widely in automotive manufacturing, FMEA is a process to identify failure modes (ways that things can go wrong), assess the priority of each failure mode, and mitigate the high-risk modes (usually through changes in product design). FMEA is now widely used in many industries, including health care delivery. We need it in our tool kits, and we need to learn how to use it.
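As a rough illustration of how an FMEA worksheet is often organized, the sketch below assumes the common convention of scoring each failure mode from 1 to 10 for severity, occurrence, and detectability, and ranking by the resulting risk priority number (RPN). The failure modes and scores are invented for illustration only.

```python
# A sketch of a simple FMEA worksheet, assuming the common convention of
# scoring each failure mode 1-10 for severity (S), occurrence (O), and
# detectability (D) and ranking by the risk priority number RPN = S*O*D.

from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int      # 1 (negligible) .. 10 (catastrophic)
    occurrence: int    # 1 (rare) .. 10 (frequent)
    detection: int     # 1 (easily detected) .. 10 (likely to escape detection)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("Dose entered in wrong units", 9, 3, 6),
    FailureMode("Alarm volume left at minimum", 7, 4, 7),
    FailureMode("Battery not seated after cleaning", 5, 5, 3),
]

# Mitigate the highest-RPN modes first (e.g., by design or process changes).
for mode in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {mode.rpn:>3}: {mode.description}")
```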

3. Root, Hog, or Die
Have you noticed how many risk-management tools were developed in the engineering world? Engineers and other technical professionals often are adept at seeing the world as a system of interacting components. That helps us sort things out and understand how a system works. Fundamentally, engineering is about how things work (or don’t work). The “systems view” is not the only way to look at the world, of course, but it’s a valid one. It’s a talent we bring to the table when addressing, for example, medical device user error.

However, all talents need to be developed. Another tool we need in our tool kits is Root Cause Analysis (RCA). There are several approaches to RCA, all of which try to identify the root cause or, more likely, the root causes of an adverse event. FMEA is a proactive technique, typically conducted before a failure, while RCA is a reactive technique, typically conducted after a failure.

RCA methods are based on the recognition that the immediate (proximate) cause of an adverse event is usually not the root (fundamental) cause. The RCA process is essentially a series of “why” questions, repeated until one or more root causes are identified. Why did the patient receive an overdose of medication through the infusion pump? Why was the pump set up incorrectly? Why was the nurse unfamiliar with the pump? Why is the pump so difficult to use? Why did the hospital select that pump? Only when we get to the root of the matter can we begin to develop responses that are likely to be successful.
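For illustration only, here is a sketch of how that chain of “why” questions might be recorded for the infusion pump example. The answers are hypothetical, and a real RCA would typically branch into several contributing causes rather than a single chain.

```python
# A minimal sketch of recording a "five whys" chain for the infusion pump
# example above. The answers are hypothetical placeholders.

why_chain = [
    ("Why did the patient receive an overdose?",
     "The pump was set up incorrectly."),
    ("Why was the pump set up incorrectly?",
     "The nurse was unfamiliar with the pump."),
    ("Why was the nurse unfamiliar with the pump?",
     "Its interface differs from every other model in the unit."),
    ("Why is the pump so difficult to use?",
     "Usability was not weighted heavily when it was purchased."),
    ("Why did the hospital select that pump?",
     "Clinical engineering was not involved in the selection process."),
]

for depth, (question, answer) in enumerate(why_chain, start=1):
    print(f"{depth}. {question}\n   -> {answer}")

# The last answer in the chain is a candidate root cause to act on.
print(f"\nCandidate root cause: {why_chain[-1][1]}")
```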

4. Choose Wisely
It’s not uncommon for investigations of medical device user errors to identify the difficulty of using a medical device as an important factor. This can be the result of a confusing user interface, the absence of important functionality, the lack of standardization of models within a facility, incompatibility with other devices or practices, and so on.

One useful way to mitigate these problems is to improve the way we select medical devices. Once again, that’s a topic we know a lot about. Unfortunately, too many health care organizations fail to take advantage of that expertise. How do we change that situation? 1) Volunteer to help. People will rarely turn down an offer to share the workload. 2) Do our homework. Research the technology, and bring useful information to the table. 3) Develop our skills. For example, start to build a working knowledge of “human factors engineering.”

5. Dig Deeper
How many times has a medical device appeared in the shop with a paper sign reading “broken” (undoubtedly attached with white adhesive tape, the nursing equivalent of duct tape)? How’s that for a comprehensive root cause analysis?

And how often do we subsequently find that the device is, nevertheless, working completely within its design parameters? Too many of us chalk it up as “no problem found” or “user error,” and return the device to service. But think of it from the caregiver’s perspective. He or she was trying to use the device to achieve a clinical objective (diagnosis, treatment, or monitoring) and was unsuccessful. That’s a genuine failure in the patient care process, even if the device performed as designed.

Assuming that no harm came from the failure, we can refer to the event as a “near miss.” Some people use different terminology, such as “near hit” or “close call.” Regardless, we all know what we’re talking about: We dodged the bullet, but maybe not by much. And that’s worth a bit of investigation. A case like this presents us with a low-cost (no-harm, no-foul) opportunity to learn how to reduce risk in the future.

Of course, we don’t need to open a full-scale investigation for every “no problem found” service call. However, we do need to open our minds to understand the clinical perspective and to see how we can maximize our contributions to safe and effective patient care. Sadly, there may come a time when a medical device failure or user error causes real harm and a full-scale incident investigation is warranted. We need to think about that before it happens and be ready to respond professionally.

6. Think Outside the Basement
The health care delivery system is in the early stages of a fundamental restructuring. The current structure probably made sense 100 years ago when it was created. But a lot has changed in the world, in medical knowledge, and in medical technology. We’re moving toward new models of teamwork, communication, and cooperation.

One of the first steps along this path is the development of a “culture of safety” that recognizes the inevitability of failure, focuses on systematic change, values transparency, encourages learning, and drives continuous improvement.

We need to be part of these changes. We have historically contributed in many ways to safety and effectiveness in patient care. But we can do more, and we will have opportunities to do so. At this point in history, medical device user error is a key challenge. But there are and will be more challenges. If we are to become valuable and valued members of the health care delivery team, we need to “think outside the basement” and get ourselves ready to seize the opportunities that will continue to appear.

Matt Baretich is president of Baretich Engineering, which publishes the Medical Device Incident Investigation & Reporting manual and subscription service (www.baretich.com).