• Let’s Stop Calling It “Competency-Based Medical Education”

    Health professions education has a love for buzzwords. One of the most persistent, and arguably misleading, is “competency-based medical education” (CBME). It sounds progressive, rigorous, and student-centered (Boyd et al., 2018). However, the first question that comes to mind is, “Did we graduate incompetent physicians before this movement?” And, if we’re being honest, what we call CBME today is not truly competency-based.

    So, what is competency-based medical education? According to Frank et al. (2010), competency-based education in medicine can be defined as “an educational approach that organizes the curriculum around defined competencies—observable abilities that integrate knowledge, skills, and attitudes—emphasizing outcomes rather than processes, and allowing learners to progress upon demonstration of competence rather than fixed time [Italics added for emphasis]”. The key element here is flexibility: in a true CBME system, time becomes a variable, and learners advance when they demonstrate mastery, not when the calendar dictates.

    In the current U.S. system of health professions education, time is fixed, regardless of how quickly learners master core competencies. Residents complete training in fixed durations—three years for internal medicine, five for surgery—with advancement (and the funding of many training slots) tied to time-based milestones, not individual proficiency. Even if a resident demonstrates competence in all required entrustable professional activities (EPAs) by year two, they cannot graduate early. Conversely, if a learner struggles, extensions are rare and often stigmatized. So can we truly say this is competency-based?

    This time-based rigidity means that while competencies inform curricula, assessments, and evaluations, they do not govern progression. What we have then is competency-informed education. This isn’t just semantics; it’s about intellectual honesty. Calling our system “competency-based” implies a level of flexibility and learner-centeredness that we haven’t achieved. It sets expectations we don’t meet. And it undermines the very definition of competence.

    Language shapes policy. It influences accreditation standards, curriculum design, and public perception. If we want to be taken seriously as educators and reformers, we need to be precise. We should call our current model what it is: competency-informed medical education. That term acknowledges the value of competencies without pretending we’ve restructured the entire system around them.

    So what would it take to move from competency-informed to competency-based? We need to create flexible pathways, modular curricula, and assessment systems that allow learners to progress when they’re ready. This would take resources, which are often not available, and significant changes to the “rules” of accreditation and the funding underlying the processes. So until then, maybe we should stop using a term that doesn’t reflect reality.

    What do you think? Here are some questions to ponder:

    1. What barriers—cultural, logistical, economic, or regulatory—prevent us from implementing truly time-variable education in medical training?
    2. Are we unintentionally misleading stakeholders (students, faculty, accreditors, the public) by using the term “competency-based” inaccurately?
    3. What would it take—structurally and philosophically—for medical education to become truly competency-based rather than competency-informed?

    References

    Boyd VA, Whitehead CR, Thille P, Ginsburg S, Brydges R, Kuper A. Competency-based medical education: the discourse of infallibility. Med Educ 2018; 52: 45-57. https://doi.org/10.1111/medu.13467

    Frank JR, Mungroo R, Ahmad Y, Wang M, De Rossi S, Horsley T. Toward a definition of competency-based education in medicine: a systematic review of published definitions. Med Teach 2010; 32(8): 631–637. https://doi.org/10.3109/0142159X.2010.500898

    Author: Gary L. Beck Dallaghan, Ph.D.; Alliance for Clinical Education

  • Clinical Competency Committees in Undergraduate Medicine

    How do you fairly assess a medical student with discrepant clinical evaluations? Or a medical student with professionalism concerns despite successfully completing all academic and clinical requirements? These are some of the challenges faced by Clerkship Directors when grading students.

    Clinical competency committees (CCCs) provide a methodical approach to assessing a medical student’s progress and readiness for the next stage of training. Unlike traditional grading policies that might promote a student who meets minimum criteria within a defined block of time, CCCs evaluate a learner’s mastery of expected milestones (1).

    CCCs have consistently been used in graduate medical education to communicate expectations, standardize evaluation of trainees, identify trainees who are not on a satisfactory trajectory, and develop individualized growth plans (1). Additionally, the CCC encourages a resident to assess their current ability in various competencies, reflect on any gaps, and take accountability for future growth (1). CCCs are a requirement for accreditation of residency and fellowship programs, and the Accreditation Council for Graduate Medical Education (ACGME) has published a comprehensive guidebook for programs to use (2).

    Similar models have been used in undergraduate medical education (3-5). A national survey of internal medicine clerkship directors, conducted by the Alliance for Academic Internal Medicine, revealed that 42% of respondents had some form of a grading committee. These committees varied considerably in content and purpose; however, they were primarily used to determine the final grade of students at risk of failing, students with discrepant clinical evaluations, and students with professionalism concerns (6).

    The AAMC Core Entrustable Professional Activities (EPAs) provide a standardized framework to evaluate a medical student’s readiness to enter residency, regardless of specialty. The authors define an “entrusted learner” as one who demonstrates proficiency across 13 defined activities without direct supervision. Although there are similarities, the authors distinguish EPAs from competencies in that EPAs are intended to mirror real-life situations encountered by a physician during their daily workflow; various competencies and associated milestones are integrated into each activity (7).

    Although CCCs have the advantage of offering a standardized and transparent evaluation process based on expected competencies, there may be several barriers to successful implementation. Clerkships must determine the optimal number of committee members, types of committee members, and frequency of meetings. In addition, committee members must agree on the role of the CCC in determining grades and promoting student self-reflection and growth. Members must develop a shared mental model regarding the impact of variable grading styles used by evaluators when completing clinical evaluations, methods to address discordant data, and strategies to minimize bias (7). Despite these challenges, CCCs offer a promising method for ensuring medical students are on a successful trajectory for advancing to the next level.

    What do you think?

    • Are CCCs the optimal way to evaluate students? What are some of the limitations of this strategy?
    • Does your UME program use a CCC? If so, what were some unexpected hurdles to overcome? Can you recommend some keys to success?
    • Can you think of any examples where a CCC may have provided a different outcome in a student’s evaluation?

    References

    1. Goldhamer MEJ, et al. Reimagining the Clinical Competency Committee to Enhance Education and Prepare for Competency-Based, Time-Variable Advancement. J Gen Intern Med 2022; 37(9): 2280-90.
    2. Andolsek K, et al. Accreditation Council for Graduate Medical Education Clinical Competency Committees: A Guidebook for Programs (3rd ed). https://www.acgme.org/globalassets/acgmeclinicalcompetencycommitteeguidebook.pdf
    3. Monrad SU, et al. Competency Committees in Undergraduate Medical Education: Approaching Tensions Using a Polarity Management Framework. Acad Med 2019; 94(12): 1865-72. doi:10.1097/ACM.0000000000002816
    4. Murray KE, et al. Crossing the Gap: Using Competency-Based Assessment to Determine Whether Learners Are Ready for the Undergraduate-to-Graduate Transition. Acad Med 2019; 94(3): 338-45. doi:10.1097/ACM.0000000000002535
    5. Mejicano GC, et al. Describing the Journey and Lessons Learned Implementing a Competency-Based, Time-Variable Undergraduate Medical Education Curriculum. Acad Med 2018; 93: S42-S48. doi:10.1097/ACM.0000000000002068
    6. Alexandraki I, et al. Structures and Processes of Grading Committees in Internal Medicine Clerkships: Results of a National Survey. Acad Med 2025; 100(1): 78-85.
    7. AAMC Core Entrustable Professional Activities for Entering Residency: Curriculum Developers’ Guide 2014. https://store.aamc.org/downloadable/download/sample/sample_id/63/

    Author: Catherine Derber, M.D.; Eastern Virginia Medical School. Organization: Clerkship Directors in Internal Medicine