Benchmarking. Just the word alone can strike fear into the hearts of some clinical/biomedical engineering departments across the country. Putting it into practice can be even more challenging. Yet, even though the debate continues over its relevance, or how much weight the process should carry in evaluating a department, the consensus is that benchmarking is here to stay.
“For so many of us, benchmarking in today’s world is mainly used to justify and validate a department’s existence, but this is too narrow of a viewpoint in today’s health care environment,” says Raymond Peter Zambuto, CCE, FASHE, FHIMSS, FACCEE, president of Linc Health, Holliston, Mass. “While benchmarking is necessary because it provides some productivity metrics, it’s anything but a simple exercise, and it by no means tells the whole value proposition for the department.”
At its most basic level, benchmarking represents a measure of a department’s effectiveness, a scorecard that helps clinical/biomedical engineering professionals gauge how they are doing from one year to another. While most departments benchmark themselves based on quantifiable metrics such as cost, quality, downtime, and productivity, in recent years the Association for the Advancement of Medical Instrumentation (AAMI) developed AAMI’s Benchmarking Solution, and the ECRI Institute created BiomedicalBenchmark. These newer programs allow departments to compare themselves to their competitors. Another system, offered by Thomson Reuters (TR), has been providing hospital-wide benchmarking for more than 20 years, including extensive data on hundreds of clinical/biomedical engineering departments.
“Benchmarking does allow me to compare what I’m doing in my hospital to a similar peer group in a comparable hospital, but ultimately what it’s really about is performance improvement, about doing what’s good for my department and my institution,” says Matt Baretich, PhD, PE, CCE, president of Baretich Engineering Inc, Fort Collins, Colo.
A subject matter expert for AAMI’s online self-assessment tool, Baretich says it is time for the industry to think outside the basement. “Many CE professionals feel like their work should speak for itself, but they must raise awareness of what they do and think beyond a specific piece of equipment on their workbench to demonstrate how their skills bring value to the organization in multiple ways.”
In an economy where hospital administrators often operate primarily from a dollars-and-cents perspective, it is critical for clinical engineering departments to take a proactive rather than reactive approach to benchmarking, precisely to grasp the bigger picture.
The Value of Benchmarking
“Initially, the value of benchmarking was to measure my department’s performance from year to year to see if we were improving, or if we had slipped in any area, and target those in the future,” says Karen Waninger, director of clinical engineering at Community Hospitals, Indianapolis. “From there, it has evolved to comparing my facility and department to similar programs at other hospitals.”
One of the first clinical engineering directors to use AAMI’s Benchmarking Solution, Waninger remains a strong proponent of the process.
“If we want to prove our value to the administration, we must report it, measure it, and demonstrate it—and we now have the tools to do that,” she says. “Not only can we learn and improve, but we can use the data to represent to administration that we are aware of what’s happening in the bigger environment.”
AAMI’s best practices and benchmarking measurements include budgeting, staffing, and customer service. The online survey includes an analysis tool that allows clinical engineering departments to measure their procedures and policies against similar organizations. ECRI Institute’s BiomedicalBenchmark online tool features comparative data where users can share information about equipment acquisition costs and expected life and failure rates. It also provides a tool kit of benchmarking best practices for clinical engineering.
While these programs take more of the guesswork out of a formal benchmarking process, the technology is only as good as what clinical engineering departments can input. According to Baretich, some department subscribers may have difficulty answering some of the questions asked in the benchmarking self-assessment, until they have collected certain information.
“Some clinical engineering departments are at the stage where their first performance-improvement project is just gathering the information,” he says.
Many departments also utilize customer satisfaction surveys to benchmark their performance. A clinical/biomedical engineering department’s customers generally include the administrative team, risk management, clinicians, the technicians, and, ultimately, patients, all of whom bring different perspectives and different needs to the equation.
“We’ve been doing customer service surveys since the ’90s because we were concerned with how we were perceived,” says Terrance Clemans, CCE, CBET, director of technology management at St Margaret Hospital, Franciscan Alliance, Northwest Indiana Region. “It’s one thing to say you’re providing quality service, but without asking your customers, how do you really know?”
The surveys are conducted annually at all of his hospitals, and one practical outcome, he says, has been the dissolution of one big shop in favor of smaller locations. “We are of more value when we’re closer to our customer,” Clemans says.
It can be an eye opener, Baretich says, when engineers find out that an area they thought would be important isn’t supported by customer feedback. And all of the clinical engineering department’s customers impact the final and most important customer—the patient.
“We definitely impact how the patient views the hospitals,” Waninger says. “If a nurse is dealing with broken equipment, then the patient has to wait and the clinicians are behind. If the equipment isn’t well maintained or dirty, that affects a patient’s perception.”
AAMI, she says, tracks patient-related incidents, too. “While we haven’t done as much with this measurement, it’s definitely something we may be able to build upon in the future.”
Ultimately, the main customer for clinical engineering departments is the medical equipment itself, according to Binseng Wang, ScD, CCE, FAIMBE, FACCE, vice president with ARAMARK Healthcare Clinical Technology Services, Charlotte, NC. Wang supports more than 400 clinical engineering programs in North America and has used TR benchmarking data, in addition to internal comparisons, for several years. “That’s our patient, and the best measure for us is whether or not we are managing it in a cost-effective way, ensuring it is safe and reliable,” he says.
The Standardization Problem
While AAMI’s benchmarking tool does not share individual hospitals’ metrics, it bases its analysis on categories such as size, the number of beds, and whether the hospital is a teaching hospital—all of which narrow down the comparison group.
Beyond straightforward demographics, which are relatively easy to streamline, other obstacles limit any benchmarking tool’s effectiveness: chiefly, a lack of standardization that historically has made benchmarking an apples-to-oranges comparison.
“People still say we’ve got fruit salad when it comes to benchmarking because no two departments are alike or measure the same scope of information,” Waninger says. “I believe we should just pick out the pieces of fruit that are the same and make a single dessert.”
Right now, even those who support benchmarking would concede that this is often easier to accomplish in theory than in practice. Some departments, for example, do not take care of imaging equipment, clinical laboratories, or the surgery department, and there are also great variations in how departments measure device values.
“We started out with ECRI in 1971 and were one of their first customers, and we started benchmarking internally in 1976,” Clemans says. “But even now, after all this time, departments don’t use the same standards in measuring equipment value. Some gauge the value of equipment by the purchase costs, while others look at acquisition costs, whereas we use replacement costs. It’s hard to benchmark the value of our support services if it’s based on the value of equipment purchase costs because the number will be widely different.”
According to Clemans, for devices such as imaging systems, the purchase cost is the actual price paid for a device after discounts, which can vary from 10% at one hospital to 35% at another. Acquisition cost is the cost of the device with installation included (purchase cost plus installation, or the total cost to acquire the technology), while replacement cost is based on the present outside cost to replace the device, which can rise over the length of its life cycle.
Even two hospitals that both say they manage $12 million in equipment can come up with dramatically different numbers, according to Clemans. “We are viewed as more cost-effective by how low our service costs are compared to the total value of equipment,” he says. “If everybody used the same equipment value or benchmarked value, we could take support costs and weigh them against the value of equipment to come up with an extremely accurate estimate of how one department compares to another.”
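Clemans’ point can be sketched in a few lines of code. The figures below are entirely hypothetical, invented only to show how the same department, with the same annual service cost, appears more or less cost-effective depending on which valuation basis is applied to its inventory:

```python
# Hypothetical illustration: one department's cost-of-service ratio
# (annual service cost as a percentage of total equipment value)
# computed against three different valuation bases. All dollar
# figures are invented for demonstration, not drawn from the article.

def cost_of_service_ratio(service_cost, equipment_value):
    """Return service cost as a percentage of total equipment value."""
    return 100.0 * service_cost / equipment_value

annual_service_cost = 500_000  # hypothetical annual department cost

# The same inventory valued three different ways (hypothetical numbers):
valuations = {
    "purchase cost (after discount)": 9_750_000,
    "acquisition cost (purchase + installation)": 11_000_000,
    "replacement cost (current outside pricing)": 15_000_000,
}

for basis, value in valuations.items():
    print(f"{basis}: {cost_of_service_ratio(annual_service_cost, value):.2f}%")
```

The ratio shrinks as the valuation basis grows, so a department valuing its inventory at replacement cost will look leaner than an identical one using discounted purchase cost, which is exactly why a common valuation standard matters for comparisons.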
Ideally, Clemans would like to see one common value list by model to clearly establish the worth of equipment being measured for benchmarking.
Adding to the confusion, clinical/biomedical engineering departments also count equipment differently, which limits the strength of a solid comparison group. “Some report a piece of medical equipment as one unit, whereas others break it down into all of its components and classify it as eight separate units,” Baretich says.
Beyond equipment valuation, there are differences in other measurements as well, such as payroll and compensation.
“Two technicians can have the same take-home salary and benefits package, while each hospital does accounting differently and the numbers can be radically different,” Baretich says. “Part of payroll may be in your department’s budget, while fringe benefits or the cost of vacation days may be in the human resources accounting.”
Almost everyone would like to see the profession make the necessary modifications and agree to classify items and personnel the same way, which would require a forum for developing some standardization. That effort could take years; meanwhile, many hospital leaders in belt-tightening mode still focus on only a few metrics, such as full-time employee counts or the maintenance expenses of medical devices. The consensus seems to be that focusing only on the financial performance of a department is shortsighted, given the far-reaching ramifications its contributions have for the hospital as a whole.
The Next Level
“The benchmarking push is definitely on from the financial side, there’s no doubt about that,” Wang says. “But that’s only one measure of a department’s effectiveness. We need to educate our health care leaders to look at the entire perspective and to view us just as they are judged by insurance companies—on how well they perform.
“If I’m the cheapest, then the equipment may not be reliable or safe, and if I cut staff too deeply then the service may be slow. But if our customers are being judged by a combination of metrics, we should all be judged in the same way. It’s not an exercise in black and white, and it’s really about finding a balance through a combination of measurements so we can see how well we’re doing and where we need to improve.”
“We work in a time when hospitals are seeking to reduce cost, minimize risk, and improve quality, but the latter two goals are hard to quantify in terms of dollars. Yet our work can affect quality of care, and some of that is not captured by traditional benchmarking,” Zambuto says.
“We want to compare how other hospitals are lowering operating costs and reducing the costs of medical equipment ownership, although that’s often overlooked by many hospitals who view clinical engineering as an expense, instead of measuring how CEs reduce expenses,” Clemans says.
While Wang acknowledges that no two hospitals are equal, just as every clinical/biomedical engineering department operates in its own way, he says, “We are doing ourselves a disservice if we don’t participate in benchmarking. Unless we help manage the data and make it better, we have no control over outcomes because, reliable or not, hospital administrators will use it to evaluate us.”
As Waninger points out, the Centers for Medicare & Medicaid Services is set to cut Medicare reimbursements to hospitals by 2% (beginning with the October 2012 reporting period), which means hospitals will have to earn pieces of that revenue back.
“We are very proactive in looking for ways to drive our costs of service down; so if someone has a way to do that, I want to know,” Waninger says. “At the end of the day, I want to help my administrator understand that we should be involved at the beginning of the acquisition process for new technology rather than when the equipment is off warranty.
“AAMI asks questions about what role we play in both the purchase and regulatory decisions, which are more anecdotal pieces of information. There are some intangible aspects of benchmarking, some tidbits that we can use that aren’t just straight numbers.”
She chooses to focus more on the strengths of benchmarking rather than any limitations of the process. “We can’t let fear of failure or even plain arrogance keep us from pushing ahead,” she says. “We’ll make the process better as we go, but we have to start somewhere.”
For Zambuto, the importance of benchmarking may be overstated, since there is so much room for improvement. It is time, he says, for a new model and real problem solving, which could include, for example, better design of patient rooms, alarm communications, and radio-frequency interference, as well as dealing with the myriad problems associated with networked equipment. He cites the example of one department that successfully pressed for a standardized IV pump solution based on a device with a good human-machine interface.
“This brings more value to its hospital than all the on-time preventive maintenance done in a year,” Zambuto says. “Yet, we have no metric for that, and we may never have one. And many internal metrics aren’t as relevant to hospital administration, who are much more concerned with standardized processes that reduce errors or missed work and patient care productivity. While we can start with traditional benchmarking, we cannot stop there, because at some level it becomes the floor, not the reach.”
Shelley Gabert is a contributing writer for 24×7. For more information, contact .