Episode: Nudges, Pushes, and the Ethical Challenge of Behavioral Architecture
Guest: Mitesh Patel, MD, MBA
Host: Tim Hoff
Transcript by: Tom Wagner
This interview has been lightly edited for clarity.
Tim Hoff (Host): Welcome to Ethics Talk, the AMA Journal of Ethics podcast on ethics and health and healthcare. I’m your host Tim Hoff.
Why do we make the choices about our healthcare that we make? We’ve discussed this before: our October 2019 show on decision science demonstrated how complex decisions can get, especially decisions about our own health. Dr. Brian Zikmund-Fisher explained:
Brian Zikmund-Fisher: And so, at age 28 I went into an intake appointment and asked doctors to help lay out what my choices were.
They said, “Look, you really have two choices. Choice one is that we continue to use platelet transfusions and do other kinds of supportive care. You probably have five, maybe ten years of life doing that. But eventually it’s going to stop working and you are going to die. Or you can go ahead with a bone marrow transplant. And if it works, you’ll have a long and normal life. But the process is going to be very demanding, it’s going to put you at a lot of risk of infection, and the honest truth is that you probably have a 1 in 4 chance that the procedure itself is going to end up killing you within six months to a year.”
I was 28 and my wife had just given birth to my daughter. So, I had to make that choice.
HOFF: We covered a lot of ground in that episode: how the human brain processes statistics, the nonquantifiable elements of decision-making such as quality of life, and more. But we didn’t explore one of the most common and most influential parts of decision-making in healthcare: the nudge. When they work best, nudges are inconspicuous, unobtrusive, seamlessly integrated suggestions built into how options are framed, curated, ordered, and presented. Nudges, depending on your definition of what a nudge is exactly, are everywhere: which information is displayed on prescription bottles, and where; notifications on your phone casually noting that you took twice as many steps last week as this week. Sometimes nudges benefit you. Sometimes they’re harmful, and sometimes they’re ethically neutral. But one thing is for sure: nudges are there to exert influence. And nudging doesn’t only apply to patient intake or the presentation of information. The work environments of clinicians use nudges too, in all sorts of places: default settings in electronic health records or on ventilators, for example, or which drugs are listed or not listed on a hospital’s drug formulary. Despite their subtlety, nudges have clear influences on how healthcare is delivered and received.
Mitesh Patel: 80% of our patients were getting daily imaging. That means if you were getting 14 fractions, or doses, of radiation, you could get up to 14 x-rays or CT scans, most of which are no longer covered by insurance, and your family might get hit with the bill after you pass away. Not to mention this wastes a lot of time for patients in their valuable last days of life. So they came to us and said, “This is obviously something we want to change. It’s a department priority for the year. We’ve sat down with every clinician, and we can’t move the needle more than five or ten percent. Can we work with you and come up with a nudge?” We said, “Sure, this sounds like a great opportunity. What are you thinking?” They said, “We would like to put a price on a CT scan, so that when a clinician orders unnecessary imaging they are reminded that this costs $3,258 or whatever.” We said, “That’s interesting. We had done a price transparency study before and the data had been mixed, but that was for unnecessary lab testing. Before we decide on the nudge, let’s actually understand why . . . what’s driving the behavior? There seem to be national guidelines here and the evidence is clear, so why are so many imaging tests being ordered?”
Very quickly it became apparent that the template for ordering tests didn’t have a default set. When clinicians ordered curative-intent radiation, daily imaging was the first option, and they would quickly just click on it. But here they were working with a three-page form with 15 fields, and they were just kind of copying over their behavior; the template had actually been copied over from the other one. And so then the question became, what do we need to do to address this? Do we need to get an IT analyst? This actually wasn’t Epic, our main EHR, so there were concerns about whether we could get somebody from the other company. What we realized was that it was a Word document embedded within the electronic screen, so you could literally right-click on the box, select “None,” and make that the default. So that’s what we did; it took literally five seconds.
They had been doing a bunch of things; this was the priority of the year, the project of the year, and they had education. So how were we going to demonstrate that this small change, which took five seconds and cost nothing, was what drove the needle down? What we did was design this as what’s called a stepped-wedge cluster randomized trial, which basically just means we rolled this out to all the practices but randomized the order. We decided to keep it simple, pick two groups, the university practices and the community practices, and flip a coin. The university practices got randomized to go first. Our target was 20%, because the radiation oncologists told us that about one in five cases was unusual and still needed imaging for palliative cancer patients.
Right away after turning it on, the rate dropped from about 75% to 25%, almost exactly where we wanted. Three months later we turned it on for the community practices and saw a significant drop as well. Because we rolled it out in a staggered fashion, it’s now been implemented in all our radiation oncology practices. The overall imaging rate went from about 70% to 30%. Again, it cost really nothing and actually made it easier for clinicians to do the right thing. It saves about 5,000 unnecessary imaging tests per year. If that wasn’t enough, what we found out almost unexpectedly was that patient visits were able to happen 20% faster, because we didn’t have to take patients, send them to the CT scanner or the x-ray machine, bring them back, and then wait for the read. Not only were patients able to get home faster, but the health system was able to get more patients in and care for more patients, which improves patient access and obviously improves the health system’s financial situation.
HOFF: That was Dr Mitesh Patel, the director of the Penn Medicine Nudge Unit and the Ralph Muller Presidential Associate Professor in the Perelman School of Medicine and the Wharton School of the University of Pennsylvania. Dr. Patel joined us to discuss how nudges can be used in contexts from individual patient interactions to large-scale public health campaigns, and how to identify and avoid potential ethical pitfalls of how nudges guide behavior. Dr. Patel, welcome, and thank you for being here.
PATEL: Thank you for having me on.
HOFF: The Penn Medicine Nudge Unit is the first behavioral design team embedded within a health system. So, to start, can you tell us why a health system needs a nudge unit, and what you do there to serve your colleagues and patients?
PATEL: Sure. Medical decisions are being made constantly within health systems. Clinicians are deciding whether or not to offer or prescribe tests and treatments to patients, and patients are deciding whether or not they should consider those, or whether they should ask clinicians about other concerns they may have. All of that is influenced by the way the choice architecture within the health system is designed: the way information is framed and the way choices are laid out, both to clinicians and patients. In most situations that’s done unsystematically, and sometimes it’s chaotic. It turns out there’s a lot more science now in terms of how we can use behavioral science and nudges to motivate and align the long-term goals of clinicians and patients together. And if we do it as a systematic process, we can actually rapidly improve the delivery of health care at low cost, oftentimes with really small changes to the choice environment.
HOFF: One area where it might be easy to implement this kind of thing is wearable devices. Both medical devices themselves and these medical-adjacent consumer health devices seem to present many opportunities for nudges not only to be used more but to be personalized in some way to the wearer or user. Can you give us some examples of nudging being integrated into this kind of technology, and what the merits and drawbacks are of these personalized nudges?
PATEL: Many people have gotten excited about using wearable devices to try to motivate people towards better habits and better health. That can be a real big opportunity, because a substantial portion of our longer-term health relies on our everyday health behaviors: Are you physically active and getting exercise? What do you eat? Do you smoke or not? If you have medication, do you take it? A lot of people have looked towards wearable devices to passively track some of these behaviors and then try to find ways to combine either information or nudges to motivate people in real time. Further, you can personalize these or learn from people’s behavior. It turns out there are a couple of challenges with this. Not everybody has a wearable device; right now maybe 10% of people have a wearable and use it all the time. Many people who get a wearable stop using it within a few months. Wearables claim to track a lot of different behaviors, but there hasn’t been a lot of rigorous assessment beyond things like physical activity and, more recently, heart rate. Nonetheless, the technology will evolve, and I think that will improve.
We’ve done a bunch of clinical trials looking at ways to combine social and financial incentives to try to motivate patients to be physically active, lose weight, and so on. We’ve had a lot of success in both of those realms, but really the main lesson we found is that the design of the intervention is important. Subtle changes to the way you design, let’s say, a gamification or game-based program to engage people in physical activity goals can lead to dramatically different results.
We did a national trial with overweight and obese employees of Deloitte Consulting from 40 states across the US and found that a competitive game that leveraged social incentives was really effective at improving physical activity, and that physical activity levels stayed higher even when you turned the game off. But if we designed it to be collaborative instead of competitive, for that population it wasn’t nearly as effective. The other thing we’ve learned is that oftentimes wearables by themselves are not enough. For most people, especially patients who are either less motivated or have a lot of chronic health conditions, you need to combine a wearable with some type of behavior change strategy. There are some promising ones, like the one I mentioned, that can be used more broadly.
HOFF: You said that a competitive framework was more useful for this population. Does it seem like a collaborative framework is working elsewhere, or is it just that across the board competitive incentives are a little bit more useful?
PATEL: It really varies. This gets into the idea of, “Can we personalize these things?” We did the first clinical trial ever conducted with the Framingham Heart Study cohort. It’s been around for more than 80 years outside of Boston, Massachusetts, and they’ve tracked their health and their behaviors for decades but never done an intervention until a few years ago, when we partnered with them. What we found was that a collaborative intervention among the families was effective at improving physical activity, and those physical activity rates stayed there in the three months after we turned the game off. In that situation we think a collaborative incentive worked because these are families who live together, who know each other well, and they’re more likely to work together than people in an employee population, where we took people from this consulting firm across the country and paired them with two other people they had never met before. You also might think that people who work for a consulting firm might be more competitive by nature than older family members who live in the community. We’ve done a bunch of psychometric tests in all of these studies, things like the Big Five personality test, looking at risk preferences and social networks. Right now, we’re working on trying to create algorithms that can help us predict if someone is going to be more likely to be successful with collaboration, competition, or support. We have a couple of these trials going on in diabetic populations and patients with heart disease, trying to understand how these might vary for patients who have a higher risk or a higher burden of medical conditions than someone in the community who is healthy.
HOFF: Does the effectiveness of a nudge, whether it’s collaborative or competitive, depend just on the person that’s receiving the nudge or is it also dependent on what the goal behavior is? Are collaborative nudges more effective for fitness-based changes than they are for diet-based changes or things like that?
PATEL: I think there are a lot of factors that influence them, and I think they may differ. You can’t expect that if you create some sort of nudge or incentive design for physical activity, the same impact is going to happen, or that it’s going to be effective, for weight loss or medication adherence. There are obviously some lessons you can take, but you need to adapt it for the behavior. Physical activity is something you have to do every day, whereas the flu shot is something you get once a year and then you’re done. So, you might think about a different intervention for these transactional behaviors than for these continuous behaviors.
And then there are all kinds of other things that are important. Think about the environment: we were talking about the choice environment, but there’s also the literal environment. In colder months people are less active than when it’s warmer, so you might think about different nudges in those situations. We find that patients who have used wearables before experience the intervention differently than people who have never used wearables before. Probably not surprising, but it just shows you that past experiences in general are really impactful. And then people’s sentiment and their mood vary from day to day. So, there’s a lot of opportunity, I think, to really start to hone in and personalize these nudges. We’re just really starting to scratch the surface. Most of the work so far has really been about how nudges work on a population level for a cohort of patients. I think the next area we’re particularly interested in, and starting to do some work on, is, “How can we start to tailor these nudges towards the experiences and the needs of the individual?”
HOFF: There seems to be a danger inherent to nudges when you approach the line between a “nudge” and a “push,” so to speak, especially when nudges are being used by a clinician or health care organization to guide patient behavior. Should we be worried about nudges undermining patient autonomy in that way?
PATEL: Well, I think one thing that’s surprising, that many people don’t recognize, is that whether you know it or not, you’re already being nudged. Somebody decided what information to put on your medication bottle, or on the information about your next visit or when you need to visit. The choices for you to select different things are already there. I think by default people are being nudged. Our goal is really to align those nudges with the longer-term goals of patients, and to actually not make this secret; we want this to be transparent. There’s actually good evidence to show that when nudges are transparent, they tend to be more effective, for probably two reasons. One is that in that process we understand how to make those nudges better; we learn how to fit them into the lives, scenarios, and situations of patients and clinicians. The other is that people really understand why they’re getting certain information or why the options are framed in a certain way. They’re able to understand that and align their goals with it. We created a nudge unit within our health system to do this more systematically because we saw that there were some situations where nudges were actually causing harm: patients had to take extra steps, or clinicians had to take extra steps, to be able to do the right thing. And there are opportunities to make the right thing the easy choice.
HOFF: That’s interesting, that nudges are effective when they’re transparent. There are circumstances in which you could imagine a patient who was being nudged would want to know, specifically so that they could push back against that nudge. Are there examples of that you can think of: nudges that typically receive more pushback than others, or types of behaviors that are often a little more entrenched, which nudges aren’t quite as effective at changing?
PATEL: Well, it’s important to keep in mind that not everything can or should be nudged. There are some situations where nudges are really appropriate and have a good role, where you can really align things so that we can move the needle based on guidelines and evidence. And there are others where maybe evidence doesn’t exist, or there’s really a lot of preference involved on the part of the patient and the clinician, and some conversation needs to go on to understand that. Part of our role as a nudge unit is to understand and make decisions with our team on where and what type of nudge we should use, and in which situations we shouldn’t.
For example, think about how to align someone’s prescribing behavior where there are guidelines for it. Let’s say statins: we know they reduce the risks of heart attack and stroke, there are national guidelines for which patients meet those criteria, and we can easily identify from the electronic health record whether a patient meets criteria or not. There, we can find effective nudges that align with the longer-term goals of improving care. Obviously a patient still has to agree and a clinician still has to offer, but we can try to nudge that conversation to happen.
In other scenarios, there may not be evidence around whether or not a patient should get a certain treatment or test, or you may not be able to collect data to understand whether a patient is high-risk or low-risk. Take the example of someone who just came to a health system for the first time. If you just moved across the country and just showed up at a health system, they have no information on you, so there’s really no way to pull data to understand whether you are eligible for a statin or not; that data has to be entered first. Or there might be another scenario where a patient has a question about something for which there just aren’t guidelines yet. So, we tend to focus on things where it’s a little more clear and there’s evidence backing things. I think it’s important to make sure that nudges are aligned with evidence. Then we like to get input from both clinicians and patients to help make sure that we’re framing things in the right way.
HOFF: So we’ve seen nudges used in organizational-level policy decisions, such as increasing rates of generic prescribing by adjusting default settings in EHRs, like you were talking about. But how can and should nudges be used in macro-level health policy decisions, for example on the scale of national public health?
PATEL: I think there are a lot of opportunities here. When you think about public health efforts, a lot of it has been focused on education. Right now we’re in this situation of COVID-19, and there’s been a lot of rapidly emerging evidence on whether or not we should be wearing masks and social distancing and things of that nature. But even in past decades there have been similar discussions on the risks of flu shots, and so on and so forth. The way that information is disseminated, the way it’s offered to people or framed in these messages, can have a big impact on their behaviors.
Let’s take a simple example: wearing masks to prevent the spread of COVID-19. One way to say it is that evidence suggests masks can help reduce the spread, and people should decide if they want to do that. That’s kind of the opt-in approach: people have to actively do work to feel like they’re doing something. The other approach could be that we recommend everyone wear masks unless they decide not to or don’t want to. That’s the opt-out approach, and it sets the norm, which is that everyone should be wearing masks. There’s a lot of evidence to show that the opt-out framing, really setting what the norm is, can have a big impact on people’s behavior. There are lots of opportunities, I think, for that in public health. And then certainly there are other opportunities as a lot of these efforts move digitally. There are going to be choice environments, just like there are in the electronic health record or in patient portals; it’s already happened for choosing your health insurance through Obamacare, and so on and so forth. I think there are a lot of opportunities to take similar lessons and apply them to public health efforts.
HOFF: The ways in which implicit bias affects patient-clinician interactions are well documented at this point. Physicians, for example, tend to underprescribe treatment for pain in Black patients. Nudges that call attention to a need for more equitable prescribing behavior, for instance, could help us address some of these kinds of health inequities. Are nudges being used in this way now, and if so, what do we know about how effective they’ve been in motivating equitable care for patients?
PATEL: I think this is a big opportunity. Despite many decades of effort, there still exist disparities in care by many factors, and a lot of them are related to socioeconomics and race and ethnicity. One of the ideal things about nudges is that, where the evidence supports it, you can design them to almost automate some of the care. As opposed to relying on clinicians to think about whether or not a patient should get a test or treatment, and then allowing implicit, unconscious bias to enter, we can start to automate that. For example, you mentioned some of the work we’ve done in generic prescribing, where our generic prescribing rates used to be lower than our peer institutions’. Now they’re essentially 99%. We switched from opting in to generics to opting out, where by default a generic gets sent to the pharmacy unless you check a box. There used to be disparities in who got generic prescriptions based on a variety of different factors. Now that the generic prescribing rate is 99%, those disparities are essentially down to zero. While that’s a drastic move, in other cases we’ve still seen significant increases. For example, referral to cardiac rehab, an evidence-based therapy after a heart attack or stroke that can reduce readmissions and mortality, was only being offered to 15% of patients discharged from the hospital. We found out that was because it was a very manual process: it required paper forms, and cardiologists had to do all the work. We automated that using technology and set it up as a default. All the clinician had to do was sign off before the patient was discharged; they no longer had to fill out a bunch of forms, and someone else handled the discussions. Our cardiac rehab referral rate is now 85%, and it’s been that way for two years.
You can imagine that there were large gaps and disparities in care here that were basically resolved because we were able to automate a lot of this. It’s got to be a situation where it’s clear that the patient can benefit, and there are a lot of those situations: cancer screening, flu vaccinations, statin prescribing; there are national guidelines for all of these things. Many places underperform or have disparities in care, and nudges can be really effective at helping to address those issues.
HOFF: Dr Patel, thank you very much for taking the time to join us today.
PATEL: Thank you for having me on.
HOFF: That’s our podcast this month. Thanks to Dr Mitesh Patel for joining us. Music was by the Blue Dot Sessions. For more on nudges, be sure to visit www.journalofethics.org to read this month’s issue, “Behavioral Architecture in Health Care.” Follow us on Twitter @journalofethics for our latest news and updates, and we’ll be back next month with a podcast on risk management ethics. Talk to you then.