• Medical School Grading System: Should We Continue on this Path toward a Pass-Fail Ecosystem?

    Questions to consider as you read the blog:

    1. Both systems, Pass/Fail (P/F) and tiered grading (A-F), are attractive in their idealized state. Can either of them truly achieve its ideals in actual practice? Which one is capable of coming closest to its ideal state?
    2. Undergraduate medical education has two roles: train learners to become doctors and prepare learners to be selected for a specialty-based residency program. Is the choice between P/F and A-F contingent on which of those two UME roles is considered to be primary?
    3. Should decisions about grading systems (P/F or A-F) be made independently by medical schools? Or should there be a broader consensus on a uniform system made by a national institution (e.g. an accreditor, a professional organization, or the U.S. Department of Education, etc.)?

    In recent years, many undergraduate medical education (UME) programs have modified their grading system for required clerkships to be dichotomous (often called pass/fail grading and referred to here as “P/F”). As of the AAMC/AACOM Curriculum SCOPE Survey 2023-2024, 21.4% of schools reported using P/F for required clerkships. The rationale for P/F is multi-faceted including: better learning outcomes, increased student motivation, improved well-being, and greater assessment validity [1]. Nonetheless, there has been pushback against P/F in clerkships and for USMLE Step 1 [2]. The crux of the objections to P/F has been that students don’t engage fully in course requirements and residency programs don’t know how to evaluate applicants – so schools should revert to tiered grading systems (referred to here as “A-F” but inclusive of Honors/High Pass/etc.).

    As a ‘thought experiment,’ picture the reverse situation: P/F were the traditional norm while A-F were a trendy innovation. In this scenario, the reformers would advocate for conversion to A-F, but what case would they make to justify an upheaval in the system? If P/F were the status quo on which medical education had been built, would A-F have sufficient justification for schools to adopt it?

    In our thought experiment, the P/F environment would focus on learning for future practice and empower students to ask questions that fill knowledge gaps. It would embolden them to engage with educational experiences that stretch their knowledge without fear of lower grades. Assessments would support learning, be designed to ensure that students achieve standards of competence, and include formats that amplify feedback and learning. Students could focus on non-cognitive skills – even those that are difficult to measure precisely. A panoply of co-curricular activities would enrich students’ residency preparation and produce indicators for program directors about which applicants would thrive in their specific residency program. The program directors, in turn, would have experience and insight in scrutinizing medical school data to identify well-suited applicants.

    If P/F were the traditional system, then what would be the argument in favor of replacing it with A-F? Advocates of A-F often posit two major benefits of tiered grading: that competition for grades impels students to excel at learning, and that grade differentiations between students allow residency programs to make informed selection decisions. But it is axiomatic that the rewards of competition are only realized if that competition is fair and valid. In a competitive environment, the “rules of the game” must be transparent and evenly enforced.

    If changing to A-F were the reform being proposed, then the burden of proof would be on proponents to demonstrate that fair competition is achievable. Yet our current, real-life competitive situation clearly demonstrates how elusive that fairness is [3,4]. Valid and bias-free assessments have always been necessary for fair grading; however, persistent, systematic unfairness in A-F is well-known [5].

    New ideas inevitably meet resistance; however, given the strong arguments in favor of P/F, we believe that the calls to return to A-F are actually rooted in nostalgia and tradition rather than logic or evidence. If P/F had been the established system for medical education, then making a compelling case for adopting A-F would be problematic. Thus, we conclude that med ed’s move to P/F, despite the disruption to long-standing practices, is justified and that resistance is predominantly a desire for the status quo.

    What do you think? Share your thoughts in the comment box below!

    References

    1. Iyer AA, Hayes C, Chang BS, Farrell SE, Fladger A, Hauer KE, Schwartzstein RM. Should Medical School Grading Be Tiered or Pass/Fail? A Scoping Review of Conceptual Arguments and Empirical Data. Acad Med. 2025 Aug 1;100(8):975-985. doi: 10.1097/ACM.0000000000006085.
    2. Warm E, Hirsh DA, Kinnear B, Besche HC. The Shadow Economy of Effort: Unintended Consequences of Pass/Fail Grading on Medical Students’ Clinical Education and Patient Care Skills. Acad Med. 2025 Apr 1;100(4):419-424. doi: 10.1097/ACM.0000000000005973.
    3. Lomis KD, Mejicano GC, Caverzagie KJ, Monrad SU, Pusic M, Hauer KE. The critical role of infrastructure and organizational culture in implementing competency-based education and individualized pathways in undergraduate medical education. Med Teach. 2021 Jul;43(sup2):S7-S16. doi: 10.1080/0142159X.2021.1924364.
    4. Ryan MS, Lomis KD, Deiorio NM, Cutrer WB, Pusic MV, Caretta-Weyer HA. Competency-Based Medical Education in a Norm-Referenced World: A Root Cause Analysis of Challenges to the Competency-Based Paradigm in Medical School. Acad Med. 2023 Nov 1;98(11):1251-1260. doi: 10.1097/ACM.0000000000005220.
    5. Hauer KE, Lucey CR. Core Clerkship Grading: The Illusion of Objectivity. Acad Med. 2019 Apr;94(4):469-472. doi: 10.1097/ACM.0000000000002413.

    Authors: Hugh A. Stoddard, M.Ed., Ph.D. (Associate Dean for Evaluation, Assessment, and Research) and Nadia Ismail, M.D., M.P.H., M.Ed. (Vice-Dean), Baylor College of Medicine, Houston, TX

  • Let’s Stop Calling It “Competency-Based Medical Education”

    Health professions education has a love for buzzwords. One of the most persistent, and arguably misleading, is “competency-based medical education” (CBME). It sounds progressive, rigorous, and student-centered (Boyd et al., 2018). However, the first question that comes to mind is “Did we graduate incompetent physicians before this movement?” And, if we’re being honest, what we call CBME today is not truly competency-based.

    So, what is competency-based medical education? According to Frank et al. (2010), competency-based education in medicine can be defined as “an educational approach that organizes the curriculum around defined competencies—observable abilities that integrate knowledge, skills, and attitudes—emphasizing outcomes rather than processes, and allowing learners to progress upon demonstration of competence rather than fixed time [Italics added for emphasis]”. The key element here is flexibility: in a true CBME system, time becomes a variable, and learners advance when they demonstrate mastery, not when the calendar dictates.

    In the current U.S. system of health professions education, time is fixed, regardless of how quickly learners master core competencies. Residents complete training in fixed durations—three years for internal medicine, five for surgery—with advancement (and the funding of many of the slots) tied to time-based milestones, not individual proficiency. Even if a resident demonstrates competence in all required entrustable professional activities (EPAs) by year two, they cannot graduate early. Conversely, if a learner struggles, extensions are rare and often stigmatized. So can we truly say this is competency-based?

    This time-based rigidity means that while competencies inform curricula, assessments, and evaluations, they do not govern progression. What we have then is competency-informed education. This isn’t just semantics; it’s about intellectual honesty. Calling our system “competency-based” implies a level of flexibility and learner-centeredness that we haven’t achieved. It sets expectations we don’t meet. And it undermines the very definition of competence.

    Language shapes policy. It influences accreditation standards, curriculum design, and public perception. If we want to be taken seriously as educators and reformers, we need to be precise. We should call our current model what it is: competency-informed medical education. That term acknowledges the value of competencies without pretending we’ve restructured the entire system around them.

    So what would it take to move from competency-informed to competency-based? We need to create flexible pathways, modular curricula, and assessment systems that allow learners to progress when they’re ready. This would take resources, which are often not available, and significant changes to the “rules” of accreditation and the funding underlying the processes. So until then, maybe we should stop using a term that doesn’t reflect reality.

    What do you think? Here are some questions to ponder:

    1. What barriers—cultural, logistical, economic, or regulatory—prevent us from implementing truly time-variable education in medical training?
    2. Are we unintentionally misleading stakeholders (students, faculty, accreditors, the public) by using the term “competency-based” inaccurately?
    3. What would it take—structurally and philosophically—for medical education to become truly competency-based rather than competency-informed?

    References

    Boyd VA, Whitehead CR, Thille P, Ginsburg S, Brydges R, Kuper A. Competency-based medical education: the discourse of infallibility. Med Educ 2018; 52: 45-57. https://doi.org/10.1111/medu.13467

    Frank JR, Mungroo R, Ahmad Y, Wang M, De Rossi S, Horsley T. Toward a definition of competency-based education in medicine: a systematic review of published definitions. Med Teach 2010; 32(8): 631–637. https://doi.org/10.3109/0142159X.2010.500898

    Author: Gary L. Beck Dallaghan, Ph.D.; Alliance for Clinical Education

  • Clinical Competency Committees in Undergraduate Medicine

    How do you fairly assess a medical student with discrepant clinical evaluations? Or a medical student with professionalism concerns despite successfully completing all academic and clinical requirements? These are some of the challenges faced by Clerkship Directors when grading students.

    Clinical competency committees (CCCs) provide a methodical approach to assessing a medical student’s progress and readiness for the next stage of training. Unlike traditional grading policies that might promote a student who meets minimum criteria within a defined block of time, CCCs evaluate a learner’s mastery of expected milestones (1).

    CCCs have consistently been used in graduate medical education to communicate expectations, standardize evaluation of trainees, identify trainees who are not on a satisfactory trajectory, and develop individualized growth plans (1). Additionally, the CCC encourages a resident to assess their current ability in various competencies, reflect on any gaps, and take accountability for future growth (1). CCCs are a requirement for accreditation of residency and fellowship programs, and the Accreditation Council for Graduate Medical Education (ACGME) has published a comprehensive guidebook for programs to use (2).

    Similar models have been used in undergraduate education (3-5). A national survey of internal medicine clerkship directors conducted by the Alliance for Academic Internal Medicine revealed that 42% of respondents had some form of a grading committee. The grading committees varied considerably in composition and purpose; however, they were primarily used to determine final grades for students at risk of failing, students with discrepant clinical evaluations, and students with professionalism concerns (6).

    The AAMC Core Entrustable Professional Activities (EPAs) provide a standardized framework to evaluate a medical student’s readiness to enter residency, regardless of specialty. The authors define an “entrusted learner” as one who demonstrates proficiency across 13 defined activities without direct supervision. Although there are similarities, the authors distinguish EPAs from competencies in that EPAs are intended to mirror real-life situations encountered by a physician during their daily workflow. Various competencies and associated milestones are integrated into each activity (7).

    Although CCCs have the advantage of offering a standardized and transparent evaluation process based on expected competencies, there may be several barriers to successful implementation. Clerkships must determine the optimal number of committee members, types of committee members, and frequency of meetings. In addition, committee members must agree on the role of the CCC in determining grades and promoting student self-reflection and growth. Members must develop a shared mental model regarding the impact of variable grading styles used by evaluators when completing clinical evaluations, methods to address discordant data, and strategies to minimize bias (7). Despite these challenges, CCCs offer a promising method for ensuring medical students are on a successful trajectory for advancing to the next level.

    What do you think?

    • Are CCCs the optimal way to evaluate students? What are some of the limitations of this strategy?
    • Does your UME program use a CCC? If so, what were some unexpected hurdles to overcome? Can you recommend some keys to success?
    • Can you think of any examples where a CCC may have provided a different outcome in a student’s evaluation?

    References

    1. Goldhamer MEJ, et al. Reimagining the Clinical Competency Committee to Enhance Education and Prepare for Competency-Based Time-Variable Advancement. J Gen Intern Med 2022;37(9):2280-90.
    2. Andolsek K, et al. Accreditation Council for Graduate Medical Education Clinical Competency Committees: A Guidebook for Programs (3rd ed). https://www.acgme.org/globalassets/acgmeclinicalcompetencycommitteeguidebook.pdf
    3. Monrad SU, et al. Competency Committees in Undergraduate Medical Education: Approaching Tensions Using a Polarity Management Framework. Acad Med 2019;94(12):1865-72. doi:10.1097/ACM.0000000000002816
    4. Murray KE, et al. Crossing the Gap: Using Competency-Based Assessment to Determine Whether Learners are Ready for the Undergraduate-to-Graduate Transition. Acad Med 2019;94(3):338-45. doi:10.1097/ACM.0000000000002535
    5. Mejicano GC, et al. Describing the Journey and Lessons Learned Implementing a Competency-Based, Time-Variable Undergraduate Medical Education Curriculum. Acad Med 2018;93:S42-S48. doi:10.1097/ACM.0000000000002068
    6. Alexandraki I, et al. Structures and Processes of Grading Committees in Internal Medicine Clerkships: Results of a National Survey. Acad Med 2025;100(1):78-85.
    7. AAMC Core Entrustable Professional Activities for Entering Residency: Curriculum Developers’ Guide 2014. https://store.aamc.org/downloadable/download/sample/sample_id/63/%20

    Author: Catherine Derber, M.D.; Eastern Virginia Medical School. Organization: Clerkship Directors in Internal Medicine