100K Lives? Or Not?

Posted on Dec 21st, 2006 at 3:38pm.

The IHI 100K Lives Campaign brought an unprecedented level of attention and focus to getting measured results in hospital quality and safety—specifically, 3,100+ hospitals working on 6 measures to avoid 100,000 deaths over 18 months. And the results appear to be stunning—approximately 123,000 people who would have been expected to die in the 18 months between January 1, 2005 and June 14, 2006, had the risk-adjusted hospital death rates that prevailed in 2004 simply continued forward, did not die during their hospitalizations during the Campaign. The confidence interval on this estimate appears to be something like +/- 20,000.
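For readers who want to see the shape of that estimate, here is a minimal sketch of the counterfactual logic in Python. This is my own simplified illustration, not IHI's actual method (which applied risk adjustment hospital by hospital); the function name and the toy numbers are hypothetical.

```python
# A simplified illustration (NOT IHI's actual algorithm) of the "lives saved"
# counterfactual: deaths expected if the 2004 risk-adjusted mortality rate had
# simply continued forward, minus deaths actually observed during the Campaign.

def lives_saved(discharges_during_campaign: int,
                baseline_mortality_2004: float,
                observed_deaths: int) -> int:
    """Expected deaths under the 2004 baseline rate, minus observed deaths."""
    expected_deaths = discharges_during_campaign * baseline_mortality_2004
    return round(expected_deaths - observed_deaths)

# Toy numbers for a single hypothetical hospital (purely illustrative):
print(lives_saved(discharges_during_campaign=30_000,
                  baseline_mortality_2004=0.025,   # 2.5% death rate in 2004
                  observed_deaths=690))            # -> 60 deaths "avoided"
```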

Those of us who served as "field workers" in hospitals throughout the country during the Campaign know that this work has only just begun. For many of the measures, in many Campaign hospitals, implementation is nowhere near completion. Most observers expect significant additional impact on risk-adjusted hospital mortality rates, once the six measures are fully deployed. It appears that the 100K Lives Campaign is bringing about a seismic, positive shift in the quality and safety of care in US hospitals.

Or is it? Bob Wachter and Peter Pronovost aren’t so sure. Their paper in the November issue of the Joint Commission Journal on Quality and Patient Safety pointedly suggests that enthusiasm might have trumped science in IHI’s estimates of lives saved, as well as in IHI’s choice of at least one of the 6 measures. Wachter and Pronovost scold IHI for a number of faults: for promoting an intervention that is not known with 100% certainty to be effective (rapid response teams); for ignoring other, perhaps more effective interventions that could have been included in the Campaign; for using risk-adjustment methods to drive the estimate of deaths avoided; for extrapolating data from only 86% of the hospitals in the Campaign; for using unaudited, self-reported mortality data; for taking credit, as IHI, for quality and safety improvements during these 18 months, when in fact many other things were going on at the same time, including other efforts to promote 5 of the 6 Campaign interventions; and, last but not least, for not properly accounting for the fact that hospital death rates had already been dropping for some years. Don Berwick’s response in the same issue is both graceful and helpful, and I recommend that anyone who has expended a lot of effort in the Campaign read both papers.

What’s my take on the controversy? I’m not an academic heavyweight like UCSF’s Wachter or Johns Hopkins’ Pronovost. But I have been out in the field, every week, during the Campaign. And it seems to me that something happened during these 18 months. Here’s my analysis. There were about 800,000 deaths in US hospitals in 2004. Brian Jarman tells me that unadjusted Medicare death rates fell by about 0.1-0.2% per year between 1996 and 2004, and that his risk-adjusted "Hospital Standardized Mortality Rate" for US Medicare deaths dropped faster over the same period, at 3-4% per year, either because of steadily better performance in the face of increasing risk of death in the hospitalized population, or because of more aggressive coding of the risk status of hospitalized patients, or both. Using the most optimistic of Jarman’s rates against a baseline of 800,000 deaths, a 4% background annual rate of risk-adjusted decline might explain 48,000 fewer deaths during the Campaign. But not 123,000. And the idea that this dramatic change in trajectory is due to "coding creep," or to hospital CEOs fudging their mortality numbers so they can collect their bonuses? I don’t think so. Not from what I’ve observed on the ground—at the back door of many hospitals, where there has been a significant, sharp drop in the number of hearses pulling away during the period of the Campaign. That has nothing to do with "coding creep."
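To make that back-of-the-envelope arithmetic explicit, here is the calculation spelled out, using only the figures quoted above and a simple linear extrapolation (a rough sketch; Jarman's actual trend analysis is more sophisticated):

```python
# Back-of-the-envelope check: how much of the 123,000 estimate could the
# pre-existing background trend explain? All figures are from the post itself.
baseline_deaths_2004 = 800_000       # approximate US hospital deaths in 2004
background_decline = 0.04            # Jarman's most optimistic rate: 4%/year
campaign_duration_years = 1.5        # the 18-month Campaign period

# Simple linear extrapolation (compounding would change this only slightly)
trend_explains = baseline_deaths_2004 * background_decline * campaign_duration_years
print(f"Background trend explains ~{trend_explains:,.0f} fewer deaths")  # ~48,000

campaign_estimate = 123_000
print(f"Left unexplained by trend: ~{campaign_estimate - trend_explains:,.0f}")  # ~75,000
```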

And as for rapid response teams, it seems to me that the flaw in most of the published analyses is what I would call the "full implementation gap." Most organizations that implement RRTs run into several types of barriers to full implementation. The two principal barriers are 1) nurses don’t want to look as though they can’t handle the situation, so they don’t call for help; and 2) physicians don’t want the RRT called on their patients without first having a chance to intervene themselves. So many hospitals "implement" RRTs but really aren’t using the teams fully. Those institutions that are capable of executing these types of changes, system-wide, over a short time, typically see a sharp, significant decline in code blues and related deaths. Park Nicollet Health Services implemented its RRT at 440-bed Methodist Hospital over one week, house-wide, and its data on codes are compelling (see below). This, mind you, was in a hospital that already had a very low Hospital Standardized Mortality Rate.

So it would have been nice to have several positive randomized controlled trials for RRTs before recommending widespread implementation, as Wachter and Pronovost would apparently have preferred. But the hundreds of individual case studies like Park Nicollet’s, albeit not RCTs, form a rather convincing body of evidence, and they convince me that the Campaign did NOT waste the energy and effort of thousands of hospitals when it induced them to implement RRTs.

If something happened to death rates during the Campaign, why did it happen? As IHI’s leaders have said repeatedly, the Campaign was NOT the only factor in any improvement during the last couple of years. But when I look at what’s been going on in the hospitals and states I’ve been working in, the 100K Campaign is way ahead of whatever’s in second place. As for Wachter and Pronovost’s implication that hospitals would have done all these things anyway, without the Campaign (since 5 of the 6 interventions were on the CMS or JCAHO measurement sets, or otherwise on some national policy body’s radar screen), my only response would be "Yes, but…when would hospitals have done them? In my lifetime? Before I retired?" The 100K Campaign brought a truly unique sense of urgency to the national improvement agenda.

So I think something happened, and that the Campaign had a lot to do with it. Clearly, I’m not the one to settle either the "whether" or the "why" argument, and so I will leave the debate to the health services research experts, for whom this issue will no doubt generate lots of grant requests for years to come.

But while the experts worry about their grants and their publications, I worry that many doctors, particularly academics, will seize upon the questions raised by Wachter and Pronovost and use them not as reasons to learn, but as reasons to avoid taking action on ANY of the IHI Campaign planks. In other words, just as the media might have presented an overly enthusiastic account of the Campaign results, I worry that physicians’ natural skepticism will produce an overly pessimistic reading of Wachter and Pronovost’s paper, until every last question is answered by the academics. Again, my impatience comes through: "When will we get these perfect answers? What is the harm in NOT acting?"

Finally, I must say I was puzzled by the tone of Wachter and Pronovost’s paper. By describing Don Berwick as "chanting" the mantra of the Campaign ("Some Is Not a Number, Soon Is Not a Time"), by implying that IHI somehow had a "conflict of interest" in the Campaign (I’m still scratching my head on that one), and in a variety of other little ways throughout the paper, the authors convey a tone of disdainful academic detachment at best, and a sort of eyebrow-raised disapproval at worst. A lot of people must be asking, "What was that all about?" That IHI didn’t ask the academics’ opinion? That IHI generated too much enthusiasm for improvement, and got too much of the limelight?

Perhaps we should all pause and paraphrase Harry Truman: "It’s amazing how many lives you can save when you don’t care who gets the credit." Our patients need both our science and our enthusiastic application of that science.

Comments

I'm not sure that IHI deserves as much credit as Wachter and Pronovost claim it is getting, but I certainly agree with the gist of your response..."Who cares?" Anyone arguing that any of the 6 planks of the 100K Lives Campaign was not supported by science is missing the point. The fact that 5 of the 6 were further reinforced by other outside forces made the notion of waiting for scientific validation a rather, well, "academic" argument. I would not argue that we should jump on board with every new idea that comes along, but looking carefully at what IHI chose for the 100K Lives Campaign, I fail to see anything that we, as health care providers, shouldn't have been doing to begin with. The bottom line, however, was that we weren't doing them - and people were dying unnecessarily as a result. Jim, I certainly hope that your concern about skeptical physicians using Wachter and Pronovost's article to slow down the progress being made is misplaced; I fear it is not, though. I trust the two authors as scientists, but fear that they are drawing a line in the sand in the name of science that might be tough to cross as many of us begin preparations for the 5 Million Lives Campaign.

Posted by Shawn Stinson (Dec 25th, 2006 12:16pm)


I certainly support productive examination of the data at all levels of the evidence-hierarchy, even if done in the spirit of a respectful challenge. But I confess there are times when I wonder if we shouldn't also remind ourselves of some of the dicta we all learned in med school: e.g., "Treat the patient, not the lab values or the scan".

Posted by Pat Ridgely, MD (Dec 27th, 2006 7:49am)


Drs. Reinertsen and Berwick are far too kind in the content and tone of their responses to Wachter and Pronovost, and the contrast must be seen as an obvious credit to the "field workers". It is undeniable that there was significant improvement during the Campaign. How much, exactly? Who cares? How pure was the science? Who cares? The point is that IHI made something of great value happen. The question for academia is, "Why was there such a huge opportunity for immediate improvement in major healthcare outcomes?"

Posted by Frank Carlton, MD (Jan 3rd, 2007 11:08am)


Great response to the comments. While critique of any major endeavor is essential, I would be reluctant to 'let best get in the way of better', such as demanding exhaustive research before launching any intervention. I agree with your assessment of RRT implementation - my experience with implementation in 10 hospitals is that full implementation and the associated culture change is not a fast body of work, but rather one that gains momentum over time. What the Campaign work teaches us is that there is no magic bullet - our usual approach to change - but rather a constellation of work that builds and weaves toward better outcomes. Nice work, Jim.

Barbara Balik, RN, Ed.D.

Posted by Barbara Balik (Jan 12th, 2007 10:13am)
