Sessions and Themes
- Shifting the paradigm: Vikas Saini
- Magical thinking and modern medicine: Harvey Fineberg
- What makes us do it?
- What will it take to get us there?: Don Berwick
- What are the knowledge gaps in avoiding avoidable care?
- Case discussions
- What are the ethical issues?
- Medical journals and the issue of avoidable care
- The schizophrenic life of the hospital CEO
- A reason to change: Shannon Brownlee
- Social responsibility of physicians: Bernard Lown
- Behavior-changing best practices
- Global dimensions of unnecessary care: Julio Frenk
- Payment mechanisms and the culture of medicine
- Choosing Wisely and beyond: What are the next steps?
- How can patients help drive the needed change?
Magical Thinking and Modern Medicine: Harvey Fineberg
Below is Dr. Harvey Fineberg’s speech from the 2012 Avoiding Avoidable Care conference.
Avoidable Care: Magical Thinking and Modern Medicine
Harvey V. Fineberg, MD, PhD
Good morning, everyone. It is a pleasure to be here with you and to participate in a discussion that promises to be mutually illuminating and highly consequential.
As I reflect on the dilemmas that beset us in health care and that prompted today’s meeting, I cannot at the same time fail to notice that many of you are already solving this problem. We could go around this room person by person and get example after example of how this problem has been, is being, and will be solved. The difficult and central challenge for all of us is how to take the exemplars of success and replicate them, disseminate them, and multiply them throughout the system. How do we infuse the whole of America’s health care with the kind of ingenuity, perseverance, and inventiveness that is represented right here in this room? Frankly, I could stop at this point and open a discussion on that question, because what American health care most needs is for the successful innovations we already have to permeate the entire system.
When I first learned about this conference, I puzzled over the meaning and choice of the term “avoidable care.” Is it inappropriate care? Excessive treatment? Unnecessary care? Something else? In truth, all care can be avoided, although that is not something we should aspire to. It matters how we define our task because different ways of talking about the problem reveal different perspectives on what is desired and what we are aiming to avoid. If you think about the question as “What is unnecessary?” then you start from an implicit assumption that something is necessary, and the burden of proof is on those who would say it is not. If you talk about avoidability, then your implicit baseline is a restraint from intervention, and the burden is on those who would say that you should intervene. We will benefit in today’s discussion by thinking about the question from both perspectives. Our ability to communicate amongst ourselves and with others depends on the clarity of our own understanding of the concept of avoidable care.
When thinking about avoidable care, I find it helpful to conceive of a three-dimensional space that places care in the context of three key attributes. First, how much benefit comes from the care, ranging from none to a great amount. Second, how much risk is entailed in the care; is it very safe or is it very risky? And finally, how many resources are consumed in the delivery of care; is it inexpensive or is it very costly, setting aside who is going to be paying those costs? We can envision care in such a three-dimensional space, as illustrated in Figure 1.
Figure 1. Key Attributes of Care
In the bottom-front corner, the cost is high, the benefit low, and the risk great—clearly a bad corner in which to be operating. The diagonally opposite, upper-rear corner of the cube has low cost, low risk, and a great deal of potential benefit; that is the type of care and service that we want patients to experience and that we want to deliver. How do we parse the space between what is clearly desired and what is clearly not desired in order to keep ourselves away from the “avoidable care” portion of this cube? And what is it that actually defines the boundaries? This is where the burden of proof comes into play. Is the burden of proof to push you away from the bottom-front corner (prove an intervention is too costly or risky or non-beneficial, and I will avoid it) or is the burden of proof to pull you up toward the top-rear corner (prove an intervention is sufficiently beneficial, low enough in risk and reasonably affordable, and I will do it)? These two perspectives alter the standard of evidence you need to act or to refrain from action. We are going to have to come to grips with how we think about parsing the space of care for individuals and for communities along these dimensions. Do we require proof to justify action (and all else is avoidable) or do we require proof to suspend action (and all else is acceptable)? Answering either of these questions is made more difficult by the ambiguity of evidence, by variation in applicability to different individuals in different circumstances, and by individual variation in attitudes. Differing attitudes toward risk, for example, may make two patients with clinically identical circumstances fall on different sides of a divide in a space such as this. Clinical decision making is not an isolated, rational exercise, but is also immersed in a culture, a set of incentives, and a set of habits and traditions on the part of practitioners.
Our collective ability to define and live up to appropriate standards of care has enormous health and financial consequences for society. In comparison to other OECD countries, the United States is a dramatic outlier, with greater health costs and relatively poor performance on health measures such as life expectancy. These comparative data suggest that there is a great deal of opportunity for us simultaneously to save money and improve health. Many of the estimates of how much money is actually at stake in waste—waste meaning care that does not contribute to the well-being of a patient—range from hundreds of billions to a trillion dollars or more annually in our $2.5 trillion health system (Fineberg, 2012).
One area where improved safety is within reach is reducing errors related to medication, still the most common type of medical error. Avoiding medication error means getting the intended drug to the intended patient in the intended dose in the intended time through the intended route. It also means that the intended drug is actually the optimal drug for the patient, but let us just consider how we can do better with the intended drug. The combination of electronic record ordering systems and bar coding of medications has been shown to dramatically reduce drug-related errors, by more than 80% in one study (Kaushal et al., 2003).
Engaging with patients in a more concerted way can similarly produce dramatic improvements in performance. The Connected Cardiac Care program at Partners HealthCare here in Boston used health IT-facilitated self-monitoring and patient-clinician communication in a group of patients with heart failure. In this group, the annual rate of hospital readmission for heart failure declined by 51%. By some recent estimates, savings from this intervention to engage patients have amounted to approximately $8,000 per patient to date (Cosgrove et al., 2012).
The Ascension Healthcare system includes more than 40 hospitals. When they took a systematic and system-wide approach to reducing errors and improving quality, the results across a spectrum of services were striking. Compared to national averages, they were able to achieve 43% fewer bloodstream infections per central-line day of care, 65% lower rates of birth trauma, 89% lower neonatal mortality, and 94% lower rates of pressure ulcers (Pryor et al., 2011).
Another telling institutional example is at Vanderbilt University, where 80% of inpatient orders are from IT-enabled evidence-based order sets. In the care of patients on ventilators, they combined systematic order sets with a visible, real-time dashboard that tracked whether each of the seven components of care was due, had been administered, or was overdue. This yellow-green-red performance dashboard was displayed on large monitors where everyone—nurses, doctors, patients, and families—could see the status of each patient’s care. The goal of this exercise was to reduce the incidence of pneumonia associated with ventilator management. The standard order set, introduced in 2005, reduced ventilator-associated pneumonia (VAP) from 21.5 cases per 1000 ventilator days that year to 17.5 in 2006. The rate remained steady until they added the visual cue of the dashboard in August 2007, after which the rate steadily dropped to 4.6 cases per 1000 ventilator days in 2011. Between August 2007 and November 2011, Vanderbilt estimates that these steps averted 500 cases of ventilator-associated pneumonia and 75 deaths. The dollar savings exceeded $20 million. This is a case of care that is avoidable because it is preventable (personal communication, Dr. Jeff Balser, dean of the School of Medicine at Vanderbilt University).
Evidence-based standards may increase or reduce the number of interventions, as they can correct for errors of omission as well as errors of commission. In another example, Vanderbilt used an IT-based decision support model to guide selection of tests applied to bone marrow samples in patients with newly diagnosed leukemia, lymphoma, or myeloma. Based on a retrospective review of 601 patients, they found the decision guide identified an average of 0.4 additional tests per patient that otherwise would have been wrongly omitted and eliminated an average of 1.3 tests that were unnecessary but would have been ordered. This intervention combines improved clinical care with resource savings, and the net reduction of nearly one test per bone marrow sample, scaled nationally, could amount to a half billion dollars in savings (J. Balser, personal communication).
In a number of instances, evidence-based standards and checklists have been widely deployed to great advantage. Simply following systematic, evidence-based approaches to inserting and managing central lines, for example, reduced the number of central line infections in the U.S. from 43,000 in 2001 to 18,000 in 2009 (Centers for Disease Control and Prevention, 2011). Even well-documented improvements, however, can take a very long time to deploy widely. The use of beta-blockers following myocardial infarction, for example, varied greatly among hospitals years after it was established as effective (Bradley et al., 2001).
There are many reasons why improvements in care can take so long to spread. At the outset, we should acknowledge that the evidence for change is often less well founded than we would like. A survey of all ACC/AHA guidelines from 1984 through September 2008 found that a median of only 19% of class I guideline recommendations for management of heart disease (atrial fibrillation, heart failure, pacemakers) were based on a high level of evidence (Tricoci et al., 2009). Beyond the quality of evidence lies a host of reasons why improvements in care fail to materialize in a timely way—from culture to psychology to education to incentives. The aim of this conference, indeed, is to expose, explore, and find ways to overcome these obstacles to progress in care.
As we begin this conversation, I want to mention one kind of obstacle to improvement that I call magical thinking—convictions that individuals tend to hold despite evidence that should lead to contradictory or more nuanced beliefs. Here are some examples:
- New technology is good. It is certainly true that some new technology represents major progress, but not always. Americans seem to be more predisposed to think that new technology is good than people in a number of other countries. For example, a survey carried out about 10 years ago showed that 35% of Americans thought it was essential to have access to the latest technology, whereas in Germany, another technologically sophisticated country, fewer than two-thirds that number, around 20%, felt it was essential to have the latest in technology (Kim et al., 2001). Culturally, we believe in technology in America.
- Natural is good. Nature can be good, but not always. It is natural for children to die from diseases that are preventable with immunization. In this case, intervention can be very beneficial, and “letting nature take its course” is not a recipe for healthy living.
- Uncertainty is intolerable. We have a difficult time accepting that anything about our health is uncertain.
- Misunderstanding evidence. Humans have a propensity to leap to conclusions and to be misled by a range of shortcuts in thinking, framing effects, and other subtle yet powerful influences on our beliefs, well beyond what objective evidence will support (Kahneman, 2011).
- Misplaced trust. The same attributes that make the outstanding clinician trustworthy can equally make a charlatan trusted. People form very rapid judgments about others. Many psychological studies document that the amount of trust we have in others has less to do with the content of what they say, and more to do with their appearance, their style, and the way in which they present themselves. It is no surprise that personal and public trust do not necessarily attach to the most responsible authorities.
There are a number of antidotes to such magical thinking as it manifests in health and health care. The aim is essentially to replace magical thinking with systems thinking: to apprehend care in its totality and its interconnected parts; to apply evidence-based guidelines and checklists; to track performance transparently and in real time; to employ root cause analysis and rapid improvement cycles to overcome institutional inertia; and to have clear and persistent leadership. One such example of leadership was demonstrated recently by the American Board of Internal Medicine Foundation and nine health professional societies who each put forward five tests and procedures that patients should think twice about. Consumer Reports is a partner in promoting this “Choosing Wisely” campaign. The tests in question reside in the middle space of the cube in Figure 1. More professional groups are expected to join in, and these are very positive steps forward.
Effective leadership in health improvement depends on an ability to stay with the problem over time, and never to be discouraged. Successful, irrepressible leadership has been the career hallmark of Bernard Lown. Just before we began this morning, Dr. Lown mentioned to me two articles in this morning’s New York Times that I had not yet seen. The first described hospitals that bring bill collectors into their emergency rooms to dun patients for payment. This, I said, was appalling. The second article, Dr. Lown went on, reported no decline in the use of PSA testing despite recommendations to discontinue its routine use. This, I said, did not surprise me. At that, Dr. Lown intensified his gaze and leaned toward me—we were already very close. “That is the difference between you and me,” he intoned, “You are not surprised—I am outraged.”
Dr. Lown is the model we should emulate. We cannot allow familiarity with shortcomings in health care to lapse into passive acceptance. Like Dr. Lown, we must be outraged and impelled to action. We should relentlessly strive to create a virtuous cycle in which evidence, application, and measurement continually improve care and diminish the reservoir of waste. This would lead to a learning health system, a system in which science informs care, and in which the culture of care is aligned with the incentive systems for payment and reward at all levels. This alignment would serve to continually improve the quality of care and the delivery of services to all patients. If we can find a way to move from isolated examples of improvements to the collective performance of a learning health system, we really will have accomplished the purpose of this meeting beyond gathering and sharing.
I want to close with one of my favorite Bernard Lown observations. As many of you know, Bernard shared the Nobel Peace Prize in 1985 in his role as a founding leader of the International Physicians for the Prevention of Nuclear War. Around that time he was asked whether he was an optimist or a pessimist. “I am a pessimist,” he replied, “about the past because we can do nothing to change it. But I am an optimist about the future because that is ours to make.”
Conference PowerPoint presentation:
Magical thinking & modern medicine – Harvey Fineberg, MD
Bradley, E. H., E. Holmboe, J. A. Mattera, R. N. Roumanis, M. J. Radford, and H. M. Krumholz. 2001. A qualitative study of increasing β-blocker use after myocardial infarction: Why do some hospitals succeed? JAMA 285(20):2604-2611.
Centers for Disease Control and Prevention. 2011. Vital signs: Central line–associated blood stream infections — United States, 2001, 2008, and 2009. MMWR Morbidity and Mortality Weekly Report 60(8):243-248.
Cosgrove, D., M. Fisher, P. Gabow, G. Gottlieb, G. Halvorson, B. James, G. Kaplan, J. Perlin, R. Petzel, G. Steele, and J. Toussaint. 2012. A CEO checklist for high-value health care. Discussion Paper, Institute of Medicine, Washington, DC. http://www.iom.edu/Global/Perspectives/2012/CEOChecklist.aspx (accessed June 21, 2012).
Fineberg, H. V. 2012. Shattuck lecture: A successful and sustainable health system–how to get there from here. N Engl J Med 366(11):1020-1027.
Kahneman, D. 2011. Thinking fast and slow. New York: Farrar, Straus and Giroux.
Kaushal, R., K. Shojania, and D. W. Bates. 2003. Effects of computerized physician order entry and clinical decision support systems on medication safety. Arch Intern Med 163(12):1409-1416.
Kim, M., R. J. Blendon, and J. M. Benson. 2001. How interested are Americans in new medical technologies? A multicountry comparison. Health Aff (Millwood) 20(5):194-201.
Pryor, D., A. Hendrich, R. J. Henkel, J. K. Beckmann, and A. R. Tersigni. 2011. The quality ‘journey’ at Ascension Health: How we’ve prevented at least 1,500 avoidable deaths a year–and aim to do even better. Health Aff (Millwood) 30(4):604-611.
Tricoci, P., J. M. Allen, J. M. Kramer, R. M. Califf, and S. C. Smith. 2009. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA 301(8):831-841.