Quality is expensive. In the United States, we spend about $40,000 per physician just to report on quality measures. That's $15.4 billion per year. A study by Dr. Catherine MacLean and colleagues found that of the 86 MIPS/QPP measures (Merit-based Incentive Payment System/Quality Payment Program) that were relevant to general internal medicine, 35% were not valid and 28% were of uncertain validity. In other words, the measures fell into the categories of either "sounds good but no data" or "sounds good but the data refute it".
When we define value in medicine, we often use the equation: value = quality ÷ cost. To assess value, then, you have to be able to measure both quality and cost. Measuring cost is relatively easy; measuring quality is not. So we've put in place a number of measurable markers of quality, or at least of what we think are markers of quality.
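To make the equation concrete, here is a minimal sketch with made-up numbers (nothing below comes from any study): cost can be measured precisely from billing data, but quality enters only as an estimated proxy, so any error in the quality proxy flows straight into the value estimate.

```python
# Illustrative only: toy numbers, not real quality or cost data.
def value(quality: float, cost: float) -> float:
    """Value = quality / cost; both inputs must be measurable."""
    return quality / cost

cost = 1_000.0          # dollars -- knowable precisely from billing data
quality_estimate = 0.8  # a proxy score on a 0-1 scale -- the hard part

print(value(quality_estimate, cost))  # 0.0008 "value units" per dollar
# If the quality proxy is invalid, the value estimate is invalid too,
# no matter how accurately the cost was measured.
```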
This week, I met with the Medical Director of one of the Medicare carriers and asked him about Dr. MacLean's study. What he told me was that Medicare created the MIPS/QPP measures based on recommendations by the various physician professional societies. So then, whom do physicians have to blame for all of these measures? The answer is… ourselves. It was our own professional societies that created these measures. As the comic strip character Pogo once said: "We have met the enemy and he is us."
The problem is that measuring quality is not cheap. A recent study published in Health Affairs estimates that United States physician practices spend $15.4 billion every year to report quality measures. Physicians and their staff spend 15.1 hours per physician per week on quality reporting: collecting data, recording it in the medical record, and transmitting it to regulatory agencies. Broken down further, physicians themselves spend 2.6 hours per week and their staff spend 12.5 hours per week managing quality data. In all, this works out to about $40,000 per physician per year.
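As a quick back-of-the-envelope check on the figures quoted above (a sketch using only the article's own numbers; the implied physician count is my derivation, not a figure from the study):

```python
# Back-of-the-envelope check using only the figures quoted above.
physician_hours = 2.6          # physician hours per week on quality data
staff_hours = 12.5             # staff hours per physician per week
print(physician_hours + staff_hours)  # 15.1 hours per physician per week

cost_per_physician = 40_000    # dollars per physician per year
national_total = 15.4e9        # dollars per year, nationwide
implied_physicians = national_total / cost_per_physician
print(f"{implied_physicians:,.0f}")  # ~385,000 physicians covered by the estimate
```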
The cost of hospital quality measure management has not been estimated in the medical literature, but it has got to be very high. Most hospitals have multiple full-time staff members devoted to the collection and reporting of quality measures, everything from CLABSI rates (central line-associated bloodstream infections) to mortality indices to emergency department wait times to patient satisfaction.
One of the reasons for the explosion in the reporting of quality measures is the electronic medical record. In the past, auditing thousands of paper charts in physician offices to determine how many patients in a practice had a hemoglobin A1c measured, or how many patients got a flu shot each year, was simply not possible. With the advent of electronic medical records, we now have a much easier way of auditing physician practices for markers of quality. Things that can be easily tabulated from the electronic medical record have become prime targets to become quality measures, whereas things that cannot be easily searched for and tabulated have been passed over. So, instead of asking ourselves, "What really reflects the best quality of health care?", we have instead asked ourselves, "What data that we can get out of our electronic medical record might reflect quality?". Consequently, we have limited our quality measures to those things that can be searched for in the electronic medical record. Although many of these searchable markers do reflect quality, many of the best indicators of quality cannot be found by computer analysis of the electronic medical record.
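A minimal, hypothetical sketch makes the asymmetry plain (the field names below are invented for illustration, not any real EMR schema): a structured lab value or vaccination flag can be tabulated in one line of code, while a nuanced indicator of quality, such as whether a difficult goals-of-care conversation was handled well, has no field to query at all.

```python
# Hypothetical records with invented field names -- not a real EMR schema.
patients = [
    {"id": 1, "labs": {"hba1c": 6.9}, "flu_shot": True},
    {"id": 2, "labs": {},             "flu_shot": False},
    {"id": 3, "labs": {"hba1c": 8.4}, "flu_shot": True},
]

# Structured data: one line of code turns it into a "quality measure."
a1c_rate = sum("hba1c" in p["labs"] for p in patients) / len(patients)
flu_rate = sum(p["flu_shot"] for p in patients) / len(patients)
print(f"HbA1c measured: {a1c_rate:.0%}, flu shots: {flu_rate:.0%}")

# Unstructured quality -- e.g., whether a goals-of-care discussion was
# handled well -- has no field to count, so it never becomes a measure.
```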
If we as a society are going to spend this much money measuring quality, we have to be sure that what we are measuring is valid. To improve American healthcare and ensure that Americans are getting true value in their health care, we have to measure the components of healthcare delivery that truly reflect quality. We cannot afford to keep measuring things that do not.
May 20, 2018