In the Literature
Jul 2007

Outside the (Pill)box: Can Physician Performance Be Assessed by Objective Measures?

Sanjiv Bajaj, MD
Virtual Mentor. 2007;9(7):494-496. doi: 10.1001/virtualmentor.2007.9.7.jdsc1-0707.


Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. JAMA. 2006;296(9):1094-1102.

Physicians have long believed that self-directed learning and experience appropriately refine our patient care skills. We have been taught that we can analyze our own deficiencies and use these observations to guide future improvement. Our ongoing educational requirement thus consists largely of the unstructured and self-guided continuing medical education (CME) credit system. David A. Davis and colleagues challenge this belief in their Journal of the American Medical Association article, "Accuracy of Physician Self-assessment Compared With Observed Measures of Competence" [1].

The authors conducted a systematic review of studies that compared physician self-assessment with independent markers of physician competence. Because their inclusion criteria were so narrow, they found only 17 articles, three of which used two external comparisons each, resulting in a total of 20 comparisons between self- and external assessment [2]. Thirteen of these suggested either no relationship or an inverse one between the two forms of assessment; seven indicated a positive relationship. The authors concluded that physicians "have a limited ability to self assess," and that professional development "may need to focus more on external assessment" [3].

The studies that Davis et al. analyzed used several external measures of competence: objective structured clinical examinations (OSCEs); reports of encounters with standardized patients; performance on simulation training and on other exams; chart audits (one study); and the ability to explain concepts of evidence-based medicine to a blinded interviewer. Many of these measures are less than ideal. OSCEs and standardized patients, as many of us who have experienced them would attest, are often painfully artificial encounters, even for medical students. We develop strategies for maximizing our performance on these tests that we would never employ in practice. The authors' claim that "training may reduce the variation between self- and external assessments by encouraging the internalization of objective measurements or benchmarks of performance" [4] supports the conclusion that performance on these tests relies largely on test-taking strategies. We must question whether test-taking ability is a worthy pursuit for doctors. Why should we care about these objective measures if they have no bearing on clinical performance? Unfortunately, only one study looked at clinical outcomes.

Objective measurement nevertheless surrounds physicians from the time they enter medical school. Multiple-choice tests have gained a following in medical school because of their perceived objectivity and the ease of grading them. But they test knowledge at only the most basic level of connection among facts. Graded essays on medical topics would be more appropriate and would force students to synthesize and apply information.

Objective Versus Subjective Assessment

Of much greater concern is whether we desire the objectification of all aspects of medicine. Most of our field concerns subjective patient complaints. The experience and judgments physicians bring to the clinical encounter are likewise subjective. How, then, can objective measures accurately assess our performance? If we wish to determine the quality of a physician and of that physician's self-assessment, perhaps we should choose a more subjective measure.

Subjective phenomena abound in medicine. The placebo effect undoubtedly exists. It proves perplexing to many because it often leads to resolution of objective signs as well as subjective symptoms. That it has a greater effect on the latter, however, implies that our minds control the way we experience symptoms. If subjective experience mediates all symptoms, and patients are the ultimate judges of their conditions, how can we judge physician performance solely on objective measures [5]? By focusing on the objective, we neglect the "art of medicine."

A complete objectification of medicine would obviate the need for physicians. It would allow for the design of a computer algorithm that patients could use instead of visiting doctors. This program would provide perfect evidence-based recommendations in response to patient input. But we know that physicians serve an important role in filtering patient complaints, organizing them into a meaningful framework, and tailoring treatment to the patient. Moreover, effective clinicians can alleviate suffering with their words and actions. A disproportionate focus on objective measures risks losing these critical aspects of our field (and putting us out of a job).

In fact, the very studies on which we rely for data, randomized controlled trials, contain critical flaws. This type of research intuitively seems ideal for teasing out relations, but we must realize that almost nothing in medicine applies to 100 percent of patients. The fact that treatment A helps 80 percent of subjects while treatment B helps 50 percent of participants does not make A superior to B for a given individual, despite the evidence. What's to say that experience couldn't teach a physician which patients would respond better to the "less effective" treatment? Furthermore, every study carries a quantifiable statistical probability of error; a significance threshold of p < 0.05, for example, accepts up to a 1-in-20 chance of declaring an effect where none exists. Our "objective" data, therefore, cannot attain the status of "universally true."
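To make the population-versus-individual point concrete, consider a minimal simulation sketch. Nothing here comes from Davis et al.; the 80 percent and 50 percent response rates are carried over from the hypothetical above, and the assumption that a patient's responses to the two treatments are independent is purely illustrative:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical cohort: treatment A helps 80% of patients, treatment B
# helps 50%. Assume (illustratively) that each patient's responses to
# the two treatments are independent of one another.
N = 100_000
only_b = sum(
    1
    for _ in range(N)
    if random.random() >= 0.80 and random.random() < 0.50
)

print(f"Respond to B but not to A: {only_b / N:.1%}")
# Expected value: 0.20 * 0.50 = 10%. For roughly one patient in ten,
# the "less effective" treatment is the only one that works, even
# though A is clearly superior at the population level.
```

The sketch shows only that a population-level winner need not be every patient's winner; whether clinical experience can reliably identify that one-in-ten subgroup is exactly the kind of question purely objective measures struggle to answer.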

During my second year, one of my medical school professors said, "Half of what we teach you will be wrong by the time you finish residency." Every year we uncover new mechanisms that shed light on disease and invalidate old concepts, many of which appeared incontrovertible at the time. While the professor's prediction may have been an overstatement, the principle behind it holds: we cannot be certain of anything we currently believe.

I do not mean to dismiss objective measures as devoid of value. They advance our understanding and provide us with a basis for development, but they are nothing more than a basis. We must not allow the allure of easy black-and-white comparisons to obscure the haziness behind our numbers. We must not relegate the subjective to the waste bin of medicine.

Davis et al. pursue a laudable goal in evaluating physician self-assessment. Perhaps they are correct when they suggest that our current measures need more development to provide accurate data. We must design evaluation methods that account for both objective and subjective measures, while allowing for differences in physician technique. Above all, such measures should gather information about the subjective experience of the patient. These data might not be easily quantified in a study, but they might facilitate physician-guided improvement in medical practice. Perhaps we should de-emphasize objective external quantification of physician prowess and instead devote our energies to creating better methods of facilitating practitioner improvement.

References

  1. Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. JAMA. 2006;296(9):1094-1102.
  2. The review included only studies that used both self-assessment and external competency measures; it excluded studies of medical students and other health professionals, patient-focused responses, reports on the development of assessment tools, and educational evaluations.

  3. Davis et al., 1094.

  4. Davis et al., 1101.

  5. The Davis et al. study did take into account simulated patients and OSCE participants. While simulated patients and participants in an OSCE have a subjective viewpoint, their reporting is based on objective guidelines and checklists. Thus the reported scores are objective.


The viewpoints expressed in this article are those of the author(s) and do not necessarily reflect the views and policies of the AMA.