AMA Journal of Ethics®

Illuminating the art of medicine


Virtual Mentor. June 2004, Volume 6, Number 6.

Op-Ed


Physician Report Cards and Quality Improvement

An interview with David B. Nash about physician report cards and their influence on health care quality improvement.

VM interview with David B. Nash, MD, MBA, conducted by Phil Perry, MSJ

David B. Nash, MD, MBA, is chairman of the Department of Health Policy, Jefferson Medical College, Philadelphia, Pa.

Q. What should VM readers—medical students and educators—know about physician-level report cards?

A. By way of background, physician-specific report cards have been around now for more than a decade. Probably dating back to 1991-1992 with the publication of the open heart surgery data, the so-called CABG data from both New York state and Pennsylvania, this has rapidly evolved to what have been called physician profiles.

At the hospital level, the convergence of several different streams of data was powered by computers: the economic information merged with clinical information, for example. The number of tests and procedures ordered per doctor was linked to outcome, length of stay, morbidity, mortality, and readmission rate. I think we've seen the rapid evolution over a decade of very robust physician-specific practice profiles.

With the advent of these profiles, a normal distribution of physician behavior became apparent very quickly. And what followed from the recognition of this normal distribution was that some doctors performed better than others.

The public's reception of physician report cards is very mixed. The issue today, among many, is whether people really select a physician based on the report cards. There's evidence on both sides of the issue.

Q. What is your own opinion on the value of physician report cards and their effect on quality improvement?

A. I think physician report cards are one tool in the toolbox for measuring and improving the quality of medical care. A serious challenge to the reliability of report cards is the small number of patients attributable to any given doctor. But also, it takes away from the team approach to medicine when we single out an individual doctor as ultimately responsible for the outcome. It deflects our attention from the system-ness and processes of care by focusing on any single individual. It's very hard to create a reliable, valid, reproducible report card. On the other hand, once you give doctors good information about how they stack up relative to a peer group, get out of the way because they'll trample you on the way to change.

Q. So in that regard it has been constructive?

A. In the final analysis, report cards have been a powerful stimulus to change behavior in the right direction. Probably the best example of that is the National Diabetes Quality Improvement Alliance project. The American Diabetes Association and the AMA joined together (with other groups) to give doctors a special certificate or extra recognition as being a quality provider (see http://www.ama-assn.org/ama/pub/category/11018.html).

The other place report cards have had an impact is managed care organizations, which actually pay doctors more based on their performance. That's a growing trend; it's been around for decades, and more and more managed care organizations and even some employer groups are grading physicians based on all kinds of different scales and tying reimbursement in part to the grade on those scales.

Q. What makes a report card helpful or valuable?

A. The good report cards have reliable, valid, reproducible information. They cover both clinical and nonclinical process measures. They're used in a nonpunitive way. And in certain circumstances, they are effectively tied to reimbursement. Those are the attributes of a good report card.

Q. Moving to the future, should health consumers expect some sort of national standards and a report card that's available everywhere, or will we see, region by region, local report cards?

A. I think for the next five years, you're still going to see a very regional approach to this issue, driven by the different stakeholders in different regions. For example, in Orlando, the driver is the Central Florida Health Care Coalition, an employer coalition. In other towns, it's the managed care organization that's the driver, and in still other venues, it's the hospital itself, participating actively in process improvement, that uses the report cards to drive behavior change. So, we're a long way away from a national, standardized report card document.

Q. The National Committee for Quality Assurance and their "Physician Practice Connection" is something that is mentioned as a major program. Are they a good example?

A. That's a good national example. That is about the state of the art (see http://www.ncqa.org/ppc/index.htm).

Q. You suggested it would help to have a combined clinical and managerial report card. What are some of the clinical measures?

A. Readmission rate, nosocomial infection, iatrogenic complication, length of stay, timely administration of antibiotics—it depends on what you're profiling. The indicators, for example, in the national voluntary reporting system about ACE inhibitor use, include all of those. Those could be components of a robust profile.

Q. And what are the managerial measures?

A. Managerial measures are more like utilization of resources, timeliness of admission and discharge; length of stay could be construed as managerial. Again, it's the confluence of those two streams of information, largely process measures, that is important.

Q. I detected that you think report cards are not the one and only tool for quality improvement. What else should we be working on?

A. There's a huge list. One needs to think about financial incentives; you need feedback of information using a practice profile; you need continuing education programs; you need doctor participation in the activity—the standard Eisenberg 6 rules for changing doctor behavior. [Editor's note: Eisenberg's methods for changing doctor behavior are: education, feedback, participation, administrative rules, incentives, and penalties [1]. Report cards, or closure of the feedback loop, represent only 1 of the 6 criteria.]

Q. Do you expect this physician report card trend to continue?

A. Very much so. There are dozens of pay-for-performance programs across the country, many of which are tied to physician-level reporting.

Q. What advice would you give to young physicians who are starting out and will practice in a profession with report cards?

A. First, I'd say be aware that this is occurring and practice based on the evidence. The only people who should worry about report cards usually don't.

So, mind your p's and q's. I wouldn't dramatically change what I do based on a report card, if I'm practicing based on the evidence. My feeling again: transparency in everything. Practice based on the evidence, and you won't have any problem with a report card. And the only people who have problems with report cards are the ones who aren't paying attention.

I tell people, what is report card time like at your house? Do you just say "do better next time" or do you provide the resources? Do you say "here's a tutor; I want to see your homework, let's go talk to the teacher"? It's not any different. If you just say to doctors, "here's your report card, we expect you to improve your performance," without any kind of resources, training, education, that's not going to work.


Reference

  1. Eisenberg JM. Doctors' Decisions and the Cost of Medical Care. Ann Arbor, Michigan: Health Administration Press; 1986.


David B. Nash, MD, MBA, is the Dr. Raymond C. and Doris N. Grandon Professor of Health Policy and chairman of the Department of Health Policy, Jefferson Medical College, Philadelphia, Pa.

The viewpoints expressed on this site are those of the authors and do not necessarily reflect the views and policies of the AMA.