The medical student performance evaluation (MSPE), also known as the dean’s letter, summarizes a medical student’s performance at the time of application for postgraduate training. Each medical school exerts considerable effort to produce this longitudinal account of performance. For the MSPE to be valuable, it must serve as an objective and unabridged summary of the student’s performance, neither obscuring nor eliminating the very information that might predict difficulty in residency. The currently recommended MSPE format is the result of several calls over the last 20 years for standardized reporting.
The Association of American Medical Colleges (AAMC) convened an advisory committee to make recommendations to help standardize the structure and content of the dean’s letter [1]. This effort was initiated because, despite previous calls for standardization, (1) there was a lack of uniformity among letters, (2) there was a need to assess professionalism, and (3) the value of the dean’s letter to the graduate medical education (GME) community was declining. Ten years later, despite this call for standardization and uniformity, wide variability in what is transmitted from schools to postgraduate programs persists, and residency program directors still rank the MSPE as the least valuable of 16 academic selection criteria [2].
Has the Utility of the MSPE Been Optimized?
There are many sources of variability in MSPEs that undermine their usefulness to residency program directors. Edmond et al. [3] compared 532 dean’s letters with the corresponding students’ transcripts to look for concordance between the two. They found that negative indicators, such as a failing or marginal grade in a preclinical course or clerkship, a leave of absence, or a requirement to repeat an entire year of medical school, were omitted from letters 27 to 50 percent of the time. They concluded that some authors suppress negative information in their letters. More recently, in a study by Shea et al. [4], only 69 percent of MSPEs reproduced comments about student performance exactly as written. Only 13 percent of MSPEs had specific professionalism sections; the majority of comments about professionalism were embedded in other areas of the document. Mentions of gaps in study, leaves of absence, and adverse actions against students were infrequent. Moreover, despite AAMC guidelines against including a final summary recommendation, 39 percent of MSPEs still did so.
Unique to the MSPE, and not found in the transcript, are the narrative descriptions of the student’s performance during the clinical years. There is considerable variance in the meaning of grading systems across schools. Alexander et al. [5] analyzed data from 119 schools accredited by the Liaison Committee on Medical Education (LCME) and documented eight different grading systems using 27 unique sets of descriptive terminology. The percentage of students eligible for honors varied greatly among schools, ranging from 2 percent to 93 percent. Within a single school, the correlation of descriptive terminology with use of the highest grade varied between 18 and 81 percent. Furthermore, despite AAMC guidelines, only 17 percent of MSPEs provided comparative class data [4]. This variance makes the job of the program director extraordinarily difficult. Durning and Hemmer [6] call for “more credible and transparent interpretation of what grades mean” within the institution. They also call for “honest and forthright” narratives that include information that allows the reader to understand the evaluation and grading processes.
Another factor that makes the MSPE less useful is the misuse of common words. Naidich et al. [7] reported that the word “excellent” was used by 75 percent of medical schools; in some cases, 65 percent of a school’s students were classified as “excellent.” Some schools used the word “excellent” for students as low as the thirty-third percentile, while in others the term applied only to students at or above the ninety-second percentile. Naidich et al. concluded that the authors of the MSPE exaggerate the quality of their graduates, which diminishes the value of the MSPE for residency selection. Likewise, approximately 34 institutions use the term “good” to describe students in the bottom half of the class [8]. Some schools characterize the bottom 25 percent as “good,” while others apply the term to students between the twenty-fifth and fiftieth percentiles.
Despite the variability, there is evidence that the MSPE can predict performance in residency. Lurie et al. [9] correlated the four categories used in their school’s MSPEs with residency program directors’ evaluations over two consecutive years. They found that graduates in the bottom category, “good,” were likely to underperform in residency, while those in the “very good” category could either underperform or overperform during residency.
In a study of performance predictors in an anesthesia residency program, Swide et al. [10] reported that program directors frequently question the accuracy of the professional behaviors reported in the majority of MSPEs and maintain the belief that “the MSPE, in general, avoids ‘negative’ comments, rendering a section on professionalism inherently unreliable” [11]. Whether this is the result of the lack of a reliable tool for measuring professional behavior or of schools’ unwillingness to disclose information that would reflect poorly on the student or the institution is unclear. Durning et al. [12] analyzed graduates of their medical school to determine whether an appearance before the student promotions committee was predictive of performance during residency. Asking residency program directors to rate the graduates as “above average,” “average,” or “below average,” they found that students who had appeared before the student promotions committee (regardless of the cause) were at higher risk of receiving below-average performance ratings during the PGY-1 year.
Is Who Writes the MSPE Important?
Recently, questions have been raised about the authorship of the MSPE and its objectivity. Hunt argued that the role of the student affairs officer as student advocate could conflict with the intent of the MSPE as an objective evaluation document [13]. Schroth et al. [14] countered that student affairs officers are in the best position to prepare the MSPE because they have expertise in career counseling and are often the most knowledgeable about students’ academic achievements and extracurricular activities. We believe that the argument about who should write the MSPE misses the point. We agree with others that transparency in the preparation of the MSPE is critical [15]. Every school should develop and transmit clear, objective, and standardized criteria. The focus should be not on who the author of the MSPE is but on the content of the MSPE and how that content is determined.
Given that the MSPE presents an opportunity to portray a student’s longitudinal performance in medical school accurately, why does it fall short in so many cases? Tensions between accurate portrayal of performance and the best residency match outcomes occur at both the individual and the institutional level. The authors of the MSPE want to portray their students in the best possible light. Students have worked hard not only to get into medical school but also to complete a rigorous curriculum, and traditionally that hard work has been rewarded. When a student’s path through medical school has not been smooth, a truly transparent and accurate portrayal of his or her achievements may result in a less-than-desired outcome for that student. The school’s allegiance to the student may contribute to the omission of certain performance data.
At the institutional level, a school’s reputation may be influenced by the quality of the residency programs to which its students match. Although there is debate about the value of the U.S. News and World Report medical school rankings [16], many institutions care about their placement on this list. Twenty percent of a school’s ranking depends upon its selectivity (the proportion of applicants who are offered admission); a school’s ability to match its students into competitive residency positions encourages students to apply, which in turn determines how selective it can be. Portraying students positively may therefore benefit not only the student but also the school.
Where Do We Go from Here?
In the last decade, the public has demanded increased transparency about physician competence [17]. In response, the Accreditation Council for Graduate Medical Education (ACGME) initiated its Outcome Project. As Dr. Thomas Nasca, who led the project, stated, “our collective ability to assure the public and our residents that we have established specialty-specific educational outcomes and can demonstrate proficiency in those outcomes in our graduates will validate the public’s trust in the graduate medical system in the United States” [18]. The American Board of Medical Specialties has also adopted the six ACGME competency domains and requires physicians to demonstrate competence in them for maintenance of certification [19].
Why should medical schools not adopt the same standards? The continuum of medical education begins with premedical coursework and extends through continuing medical education for the practicing physician. The development of competence certainly begins before residency. The LCME, the accrediting authority for medical education programs leading to the MD degree, has recently added standards requiring schools to educate and assess students along competency domains.
Many medical schools are transforming their curricular objectives and assessment systems to meet these standards. The advantages are numerous. Students have unique strengths and weaknesses that, if targeted individually at the residency level, could result in more effective, individualized education. Students who cannot, and may never, meet the competency requirements could be identified early, facilitating an earlier exit from medical education [20]. Some of these competencies (professionalism, communication) are difficult to measure, making it all the more imperative that deficiencies be identified and addressed early in a student’s education. This can be achieved only if reliable assessment information is transmitted transparently and effectively from the medical school to the residency program. The assessment of competence is certainly not a simple task. At a minimum, we believe that each medical school has an obligation to faithfully transmit each student’s achievement in the six ACGME competency domains.
Given the conflicts that authors and institutions face, perhaps it is time for the LCME to develop a standard that mandates accurate and complete disclosure of a student’s longitudinal performance in medical school along competency domains. Ideally, the MSPE would contain (1) unedited narratives of clinical performance that explain the evaluation and grading rubric, (2) numerically comparative performance data, (3) reporting of all lapses in professionalism, academic difficulties, and gaps in training, and (4) no “code” words in the summary.
Clearly, other agendas may interfere with the intended purpose of the MSPE. The student wants to be portrayed in the most positive way in his or her residency application; the author(s) may wish to honor the student’s effort and accomplishment; the institution wants a successful match to validate its selection of medical students and its educational program; and residency program directors wish to identify the “cream of the crop.” However, the most important agenda is providing the public with the best-trained physicians, a goal that should supersede all others.
References
1. Association of American Medical Colleges. A guide to the preparation of the medical student performance evaluation; 2002. https://www.aamc.org/linkableblob/64496-6/data/mspeguide-data.pdf. Accessed November 1, 2012.
2. Green M, Jones P, Thomas JX Jr. Selection criteria for residency: results of a national program directors survey. Acad Med. 2009;84(3):362-367.
3. Edmond M, Roberson M, Hasan N. The dishonest dean’s letter: an analysis of 532 dean’s letters from 99 U.S. medical schools. Acad Med. 1999;74(9):1033-1035.
4. Shea JA, O’Grady E, Morrison G, Wagner BR, Morris JB. Medical student performance evaluations in 2005: an improvement over the former dean’s letter? Acad Med. 2008;83(3):284-291.
5. Alexander EK, Osman NY, Walling JL, Mitchell VG. Variation and imprecision of clerkship grading in U.S. medical schools. Acad Med. 2012;87(8):1070-1076.
6. Durning SJ, Hemmer PA. Commentary: grading: what is it good for? Acad Med. 2012;87(8):1002-1004.
7. Naidich JB, Lee JY, Hansen EC, Smith LG. The meaning of excellence. Acad Radiol. 2007;14(9):1121-1126.
8. Kiefer CS, Colletti JE, Bellolio MF, et al. The “good” dean’s letter. Acad Med. 2010;85(11):1705-1708.
9. Lurie SJ, Lambert DR, Grady-Weliky TA. Relationship between dean’s letter rankings and later evaluations by residency program directors. Teach Learn Med. 2007;19(3):251-256.
10. Swide C, Lasater K, Dillman D. Perceived predictive value of the medical student performance evaluation (MSPE) in anesthesiology resident selection. J Clin Anesth. 2009;21(1):38-43.
11. Swide, Lasater, Dillman, 42.
12. Durning SJ, Cohen DL, Cruess D, McManigle JM, MacDonald R. Does student promotions committee appearance predict below-average performance during internship? A seven-year study. Teach Learn Med. 2008;20(3):267-272.
13. Hunt D. Student affairs officers should not oversee preparation of the medical student performance evaluation. Acad Med. 2011;86(11):1337.
14. Schroth WS, Barrier PA, Garrity M, Kavan MG. Student affairs officers should oversee preparation of the medical student performance evaluation. Acad Med. 2011;86(11):1336.
15. Gliatto P, Karani R, Anand S. More about who should oversee preparation of the dean’s letter. Acad Med. 2012;87(6):680-681; author reply 681-682.
16. McGaghie WC, Thompson JA. America’s best medical schools: a critique of the U.S. News & World Report rankings. Acad Med. 2001;76(10):985-992.
17. Shaw K, Cassel CK, Black C, Levinson W. Shared medical regulation in a time of increasing calls for accountability and transparency: comparison of recertification in the United States, Canada, and the United Kingdom. JAMA. 2009;302(18):2008-2014.
18. Accreditation Council for Graduate Medical Education. The competencies: the ACGME and the community in 2008 and beyond. ACGME Bull. 2008(Sep):1-2.
19. American Board of Medical Specialties. Maintenance of certification competencies and criteria. http://www.abms.org/maintenance_of_certification/MOC_competencies.aspx.
20. Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR. The role of assessment in competency-based medical education. Med Teach. 2010;32(8):676-682.