3 Common Hang-ups of Healthcare Data Analytics
I’m generally of the opinion that it’s healthy to keep shop talk to a minimum outside the office. However, a recent get-together with friends reminded me how valuable some occasional cross-pollination can be. Somehow the conversation between me and a long-time physician friend, Dr. M, came around to his pending EHR migration. It was clear that he was feeling a mix of trepidation and hopefulness, especially about his ability to effectively manage the care tied to his contracted quality measures. Even though I’ve worked in healthcare analytics for years, three of his complaints surprised me and reminded me that not every health system or physician practice has reached the same level of reporting capability. I think we need to do more to level the playing field.
I’m a long-time data warehousing and reporting guy with a MacGyver personality, so it always surprises me when a clinician or business leader tells me that their reporting team can’t produce numbers in a meaningful way. Dr. M mentioned that the team he gets quality measures from can only report on twelve-month rolling periods, but his quality measures are scored over a specific twelve-month contract window that has only just started. So the “current” performance of his practice mostly reflects a period when they weren’t yet managing toward those measures. Huh?
On the plus side, his practice should see a nice steady trend of improvement over the next twelve months, regardless of how they actually perform at the end of the period.
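The mismatch Dr. M described is easy to demonstrate with a toy calculation. This is a minimal sketch, not anyone's actual reporting logic; the patient IDs, dates, and `measure_met` flags are all made up for illustration.

```python
from datetime import date

# Hypothetical encounter records: (patient_id, service_date, measure_met)
encounters = [
    ("p1", date(2023, 3, 10), True),   # long before the contract window opened
    ("p2", date(2023, 11, 2), False),  # also before the window
    ("p3", date(2024, 1, 15), True),   # inside the contract window
]

def rate(rows, start, end):
    """Share of encounters in [start, end] where the measure was met."""
    in_window = [met for _, d, met in rows if start <= d <= end]
    return sum(in_window) / len(in_window) if in_window else None

# What the reporting team delivers: a rolling twelve months ending "today".
rolling = rate(encounters, date(2023, 2, 1), date(2024, 1, 31))

# What the contract actually scores: a fixed window that has just begun.
contract = rate(encounters, date(2024, 1, 1), date(2024, 12, 31))
```

With this made-up data, the rolling view reports 67% while the contract window, the only number that counts, sits at 100%. Same practice, same patients, very different story, purely because the wrong window was used.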
Dr. M is hoping that when they switch to their new EHR system, the reporting will be better. I admire his optimism, but I suspect that any level of improvement will also require an upgrade to his reporting team. They need to be sure that the information they’re delivering provides meaningful and reliable insights, not a sense that the practice is already behind and performing terribly when they might be doing quite well.
Managing Quality Measures
Dr. M is part of a smaller practice, but he’s in a large metropolitan area and he primarily sees a Medicare population. They’re a higher risk group that needs to be getting the flu shot every year—and preferably getting it early in the season. Because Dr. M’s practice is small, they’ve ended up being lower down the vaccine delivery priority list than the big pharmacy chains. Consequently, he’s trained his patients to get their flu shot early in the season at the pharmacy when they’re picking up a prescription refill. Very convenient. Great customer service.
Except that when the quality improvement team asks, “How many flu shots have we given?”, the numbers look very low. Their instruction to the practice is to give more flu shots, ignoring the fact that the vast majority of the practice’s patients have already gotten their flu shots elsewhere.
You have to look deeper to understand the context of the clinical operations. Sometimes the data, on the surface, doesn’t tell the whole story.
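The gap between the surface number and the real vaccination rate comes down to which data source you count from. Here is a deliberately tiny sketch, assuming the practice could join its EHR records with an external claims feed (the patient IDs and set names are hypothetical):

```python
# Hypothetical data: shots recorded in the practice's own EHR vs. shots the
# same patients received anywhere, e.g. captured through a claims feed.
panel = {"p1", "p2", "p3", "p4"}          # the practice's patient panel
ehr_flu_shots = {"p4"}                    # administered in-office
claims_flu_shots = {"p1", "p2", "p3"}     # administered at retail pharmacies

# Counting only in-office shots makes the practice look terrible...
surface_rate = len(ehr_flu_shots & panel) / len(panel)

# ...while counting shots from any source shows the panel is fully covered.
true_rate = len((ehr_flu_shots | claims_flu_shots) & panel) / len(panel)
```

In this toy panel the surface rate is 25% and the true rate is 100%. The math is trivial; the hard part is knowing the second data source exists and wiring it in before anyone acts on the first number.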
Managing the Management of Quality Measures
Another thing Dr. M shared with me is all the work he has to do to undo things that are done automatically for him in the name of standardizing and streamlining the achievement of quality measures. For example, when his quality team previews scheduled patients, they use a standard checklist to identify all of the quality-related questions he should ask and to preschedule tests the patient would be expected to need. Except those standards are based on a collection of some 30 measures spanning a dozen different contracts, none of which looks at all 30 measures.
None of the work being done to help Dr. M achieve the quality measures is tailored to the contract the patient falls under. On average, he estimates the review team is doing twice as much work as necessary on each patient, and that doesn’t count the additional time he has to spend undoing the work they’ve done. My wife calls these long-cut short-cuts. Good intentions, but they end up making things worse in the end.
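The fix Dr. M is implicitly asking for is a contract-aware checklist rather than a one-size-fits-all one. As a rough sketch (the contract names and measure IDs below are invented, not from any real program):

```python
# Hypothetical mapping of contracts to the measures each one actually scores.
CONTRACT_MEASURES = {
    "contract_a": {"a1c_test", "flu_shot", "bp_check"},
    "contract_b": {"flu_shot", "colonoscopy"},
}

# The one-size-fits-all checklist: the union of every measure in every contract.
ALL_MEASURES = set().union(*CONTRACT_MEASURES.values())

def checklist(patient_contract):
    """Preview only the measures the patient's own contract scores."""
    return CONTRACT_MEASURES.get(patient_contract, set())
```

A patient under `contract_b` gets a two-item checklist instead of the full four, so the review team does only the work that can actually move a scored measure, and Dr. M has nothing to undo.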
For me, there were two overarching takeaways from my conversation with Dr. M: any measurement we do must be done in the context of the behavior we want to encourage, and we must heavily weigh the input of the people actually doing the work when we design measurement systems and goals.