Nobody likes to guess wrong when the stakes—either in patient care or health system expenditures—are high.

That’s one reason biomedical and clinical engineering departments often are asked to help devise tests to evaluate one brand of pump, monitor, or other device against those of competing manufacturers.

Organizing and administering these equipment evaluations is complex, so much so that hospitals and health systems often find ways to make purchasing decisions without conducting them.

Clinical and biomedical engineering departments often take the lead in organizing and coordinating clinical trials and comparative evaluations, which, though used to save money, can be costly themselves.

Still, conducting evaluations can yield useful information, such as estimates of the learning time that will be required to operate equipment and subjective staff reactions to it. Also, departments that buy in to a testing effort cannot really complain if their original equipment favorites don’t make the cut when judged by a panel representing the whole institution.

When equipment is tested in a simulated environment—such as running a pump that feeds intravenous (IV) fluid into a jar instead of a patient—the trial is called a comparative evaluation. When the pump is feeding IV fluid into a patient, the test becomes a clinical trial. A single testing program often involves both of these elements.

Cost Compression
Sometimes, equipment evaluations are relatively simple.

Caroline Campbell, CCE, is senior project manager in clinical engineering for Clarian Health Partners, a hospital system headquartered in Indianapolis.

Campbell is currently involved in a clinical trial of sequential compression devices (SCDs). SCDs are small pumps attached to leg sleeves that compress the legs of patients at risk of forming deep leg vein blood clots. “The sleeves squeeze the leg to keep blood from pooling in surgical patients or the bed-bound,” Campbell says.

Nearly all evaluations and trials are conducted for one of two reasons: 1) a new, possibly cheaper or better, device has come on the market, or 2) an institution is having problems with its current device and wants something that may alleviate them.

Clarian’s trial with SCDs is being conducted for both of those reasons. The current technology is no longer supported by the manufacturer, forcing a change. With the older technology, Clarian, says Campbell, had been using many thigh-length compression sleeves, which cover most of the leg. But physicians think that sleeves that come only to the knee work just as well at preventing leg vein thrombosis and are less costly.

So, Clarian has put 100 of the knee-length SCDs from a single vendor on trial on live patients in a designated mix of medical and surgical units.

The trial will continue for 1 month, Campbell says, and then an evaluation committee will look at the results. The committee will use an “evaluation tool,” as Campbell calls it—a checklist that ranks factors like ease of use and acceptability in the clinical environment—to rate the leg-sleeve pumps. Based on the outcome of the evaluation, Clarian’s capital committee will decide whether or not to purchase the new pumps and sleeves.

Campbell says clinical engineering’s role in the SCDs’ evaluation was to get the devices ready for deployment. After the trial, she will make sure the devices get sent back to the manufacturer. Then, as a member of the capital committee, she will join in any decision to purchase the new SCDs. If they are purchased, clinical engineering will get the new ones deployed.

A major factor in any technology decision would be cost, Campbell says.

“We want a product that will meet the clinical needs,” Campbell says. “But, saving money is a frequent objective. I don’t think we make any decisions without knowing the monetary situation.”

Taking the Vendor’s Pulse
Clinical and biomedical engineering departments often are not the ones to initiate comparative evaluations and clinical trials, but they do become the lead departments in organizing and coordinating such evaluations.

Barry Bruns is director of biomedical engineering for the Health Alliance of Greater Cincinnati, a joint venture among six area hospitals. Bruns says his group will often check with equipment-rating agencies and other hospitals when evaluating equipment, but it also does its own evaluating.

Recently, the Alliance tested pulse oximeters. To conduct its evaluation, it chose three leading vendors and then compared those three in both simulated and clinical conditions.

New oximeter models, says Bruns, have introduced technology to get rid of “motion artifacts” (patient movement) that sometimes skew blood-oxygenation readings.

To test the accuracy of these new models, simulators were used. To test factors like ease of use as well as reliability and accuracy, the Alliance resorted to clinical trials, comparing the three brands of oximeters in use-trials on patients in all six of its hospitals’ respiratory departments.

Respiratory therapists developed a one-page checklist as an evaluative tool, Bruns says. The devices were tested for both short-term use, which might be minutes, and for more long-term use on sleeping patients, which went on for hours at a time. In the end, Bruns says, a clear winner emerged: a device that was both smaller and easier to use than the competing devices.

Bruns says some vendors were more responsive than others—another important factor. “If they’re not responsive on trial, they’re not going to be responsive when we own the device, and some were more responsive to the therapists’ questions.”

The Alliance also ran cost projections on each device, covering not only the oximeter itself but also the accessory probes that fit on patients’ fingers. “They break, and they’re expensive to replace,” Bruns says. Each oximeter cost about $2,000, and probes ran from $100 to $250. The favored unit survived the cost analysis and was the device chosen. “We are only replacing about 10 for now, but in the future we’ll do another 20 to 50,” Bruns says. “The Alliance will save money by standardizing, and then the training needs of the staff will go down.”
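
A projection of this kind reduces to simple arithmetic. In the sketch below, the purchase price and probe price come from the figures Bruns cites; the probe replacement rate and planning horizon are assumptions made only to show the calculation, not the Alliance’s actual data.

    # Rough total-cost sketch using the figures Bruns cites ($2,000 per
    # oximeter, probes at $100-$250 each). The replacement rate and planning
    # horizon are assumptions; a real projection would use the site's own
    # breakage history.
    def projected_cost(units, unit_price, probe_price,
                       probes_per_unit_per_year, years):
        """Purchase price plus projected probe replacements over the period."""
        probes_needed = units * probes_per_unit_per_year * years
        return units * unit_price + probes_needed * probe_price

    # Ten replacement units now, assuming four replacement probes per unit
    # per year at the high end of the probe price range, over three years:
    print(projected_cost(units=10, unit_price=2000, probe_price=250,
                         probes_per_unit_per_year=4, years=3))  # 50000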

Multi-Vendor Quality Assessment for Safety
Patient treatment errors are costly, and nowhere do they occur with such frequency as in the delivery of drugs to patients. In the sometimes stressful conditions at a hospital, medication mistakes are not difficult to make since they may involve nothing more than an errant keystroke.

Glenn Scales, CBET, is assistant director of the clinical engineering department at the Duke University Health System in Durham, NC. It includes the main campus medical center and two other hospitals.

Duke’s clinical engineering department is “routinely involved in making product decisions,” and that includes overseeing evaluations or clinical trials, Scales says. Late in 2003, Duke undertook an evaluation of infusion pumps.

Infusion pumps, which control the delivery of IV medications to patients, can be programmed to deliver many different drugs in different circumstances. A misstep in programming can lead to a drug-delivery error, sometimes with fatal consequences for the patient.

“If the intent is to deliver 8.5 milliliters per hour and the nurse inadvertently puts in 805, you could deliver a lethal dose,” Scales says. An equally egregious error might be made by mis-entering the dosage per body weight, he adds.

Newer infusion pumps are designed to prevent these sorts of errors by signaling when a prescription appears to be out of bounds, Scales says.
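
In software terms, that safeguard amounts to checking a programmed rate against per-drug limits before the pump will run. The sketch below illustrates the idea only; the drug names and limit values are hypothetical, not drawn from any vendor’s actual drug library.

    # Hypothetical sketch of the bounds check a "smart" infusion pump might
    # apply. Drug names and limits are invented for illustration only.
    DRUG_LIMITS = {
        # drug: (soft_max_ml_per_hr, hard_max_ml_per_hr)
        "drug_a": (50.0, 100.0),
        "drug_b": (200.0, 400.0),
    }

    def check_rate(drug, rate_ml_per_hr):
        """Classify a programmed rate as ok, a soft alert, or a hard stop."""
        soft_max, hard_max = DRUG_LIMITS[drug]
        if rate_ml_per_hr > hard_max:
            return "hard stop"   # pump refuses to run until reprogrammed
        if rate_ml_per_hr > soft_max:
            return "soft alert"  # nurse must confirm or correct the entry
        return "ok"

    # The 8.5-versus-805 slip Scales describes would trip the hard limit:
    print(check_rate("drug_a", 8.5))  # ok
    print(check_rate("drug_a", 805))  # hard stop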

“For us, infusion pump errors had become a fairly big problem,” Scales says.

To address that concern, Duke tested and compared pumps from three leading vendors. After devising a testing paradigm that would yield a measurable result in a fairly short time, the team chose volunteer clinical staff to act as equipment evaluators.

“Each device was to be evaluated by 12 different evaluators,” Scales says.

Working with pharmacology and clinical departments, Scales and his team identified four typical patient protocols, such as the delivery of an anesthetic, anticoagulant, or antibiotic to a “patient,” who might be identified as an 18-year-old male or a 17-year-old female. For each patient protocol, the team then identified 29 documentable steps in the drug-delivery process. Each of these steps was to be performed by the evaluators. The volunteer evaluators were chosen to represent a breadth of skill levels, Scales says.

Two patient rooms were set aside for the two-stage tests. In one room, a team of trainers representing a vendor would spend up to an hour teaching the evaluators how to use their particular machine. In a second room, from which vendors were barred, the newly trained evaluators were put through the 29 steps of each designated patient protocol. They might be asked to respond to an alarm on the machine or to load new tubing, Scales says. Each of the 29 steps was given a rating, based on its criticality and complexity. All of this was done under simulated conditions with the infusion pump feeding the IV solution into a beaker, he adds.

As the evaluators were completing these tasks, they were themselves being watched and graded by Scales and four other superevaluators.

“At the end, for one vendor we would wind up with 12 evaluation scores. If a nurse had done everything perfectly, the best score would be 100,” Scales says. Each evaluator would be scored by the superevaluators. When the process was completed, each vendor was given a composite score based on the performance scores of the 12 evaluators.
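
The arithmetic behind such a composite is straightforward to sketch. The weights and pass/fail data below are purely illustrative, since the article does not publish Duke’s actual criticality and complexity ratings; only the structure of 29 weighted steps, 12 evaluators per device, and a perfect score of 100 follows the account above.

    # Illustrative sketch of weighted step scoring and a per-vendor composite.
    from statistics import mean

    # One weight per documentable step; a higher weight marks a step rated
    # as more critical or complex (values invented for illustration).
    STEP_WEIGHTS = [3, 1, 2, 1, 1, 2, 3, 1, 1, 2,
                    1, 1, 3, 2, 1, 1, 2, 1, 3, 1,
                    1, 2, 1, 1, 3, 2, 1, 1, 2]   # 29 steps

    def evaluator_score(steps_passed):
        """Score one evaluator's run, scaled so a perfect run equals 100."""
        earned = sum(w for w, ok in zip(STEP_WEIGHTS, steps_passed) if ok)
        return 100.0 * earned / sum(STEP_WEIGHTS)

    def vendor_composite(all_runs):
        """Average the scores of all evaluators of one vendor's device."""
        return mean(evaluator_score(run) for run in all_runs)

    # Example: two hypothetical evaluators, one flawless run and one miss.
    perfect = [True] * 29
    one_miss = [True] * 28 + [False]
    print(round(vendor_composite([perfect, one_miss]), 1))  # 97.9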

The clinical evaluators were also asked to rank the machines on a number of factors, including usability and patient safety. These rankings were also given numeric values and were compared among vendors. The clinical evaluators were also asked to make subjective comments, and these, too, were “critical to the evaluation process,” Scales says.

By using this strict comparative format, Duke completed its simulated testing in 6 days, a process that might otherwise have taken up to 8 weeks per vendor in clinical trials, Scales says. He says his team had expected to eliminate one of the vendor candidates in this process and then clinically test the remaining two pumps against each other. Instead, he says, one pump emerged a clear winner in the simulated trials. That pump was then placed in clinical trials on patients in four operating rooms “to make sure we got it right,” Scales says.

From start to finish, the process took 6 months. Scales says he’s not sure how much the evaluations cost Duke, not to mention the vendors. “You’re talking thousands of dollars in staff time.” Eventually, he says, Duke may deploy as many as 2,500 of the new pumps at a cost that might approach $5 million.

He says the evaluation schematic worked so well that the university is considering a similar format to evaluate another tricky piece of equipment: patient ventilators.

Saying No
A comparative trial may not always be the way to go, however.

John Doyle, BMET, is assigned to handle new equipment for the Veterans Affairs (VA) Medical Center in Portland, Ore. Doyle says the Portland VA was intent on changing its physiological monitors, but the costs and staffing requirements left little room for holding comparative trials. So, Doyle and the VA staff contacted vendors, other VA hospitals, and non-VA end users of the devices, and selected a new monitor that way. Because all the monitors would be linked in a network, with each patient’s physiological data tracked from a central location, any meaningful test would have required networking the devices as well.

“It would have been too resource-intensive to set up a trial,” Doyle says. “We would have had to pull network cables. We ended up doing site visits and had different people from our team call references on both the user side and the maintenance side. We ended up going with a new vendor, and we bought over $2 million worth of the monitors, close to 100 of them.”

Doyle is an advocate of evaluations whenever they are feasible. In the past, he has helped coordinate evaluations of infusion pumps and electrosurgical units. In both of those cases, vendor choices were altered by the testing, he says. But he says there is frequently a “political” motive for doing comparative trials.

“If a device shows up in your department and you haven’t been involved in the decision-making, you may have an immediate resentment of this device despite its features and ease of use. But if you have buy-in on a new technology, that’s worth something.”

Like the others interviewed for this story, Doyle says the participation of biomedical or clinical engineering departments in comparative trials can “play a key role” in how effective the trials are.

“We look at how maintainable and reliable something is, but when we do that, we try to look through the eyes of our users. If we pick a dog, it will impact our role as well. Biomed has to put on the user’s hat.” 24×7

George Wiley is a contributing writer for 24×7.