That is Rank!

Plus: Where Would You Sit in the MedEd Cafeteria?

👋Hey, it's Ky. Thanks for opening this week’s issue of the MedEdge!

As always, this week’s newsletter is in SOAP note format:

Subjective: Nobody Likes to Be Ranked, but Everybody Loves a Good Ranking

This past Thursday, medical student Aditya Jain published an “op-med” through Doximity, entitled “The Medical School Rankings Mess: A Lose-Lose for Students and Schools”. In the piece, Jain argues that while medical school rankings are flawed, they still serve an important purpose and should be refined, not discarded.

Jain acknowledges the obvious flaws in U.S. News & World Report-style ranking systems, including:

  • Simplistic and Spurious Metrics: Reliance on easily quantified metrics like funding or reputation doesn't fully capture educational quality or student experience.

  • Self-Reinforcing Biases: Elite schools disproportionately benefit from rankings that emphasize wealth and prestige, perpetuating inequality and reinforcing their status.

But, Jain argues, these ranking systems serve an important purpose, with several upsides, including:

  • Guide for Students: Rankings help students, especially those without connections, understand the med school landscape.

  • Fit-Finding Support: They provide insight into which schools might support specific academic or research goals.

  • Long-Term Planning: Rankings give clues about which schools can boost chances of landing a good residency.

  • Application Strategy: Rankings help students organize schools into "reach," "target," or "safety" categories, simplifying the application process.

Following the withdrawal of high-profile schools and mounting criticism, the U.S. News & World Report medical school ranking system has effectively come to an end.

Jain argues that the current “solution” is a half-measure that helps nobody: rather than provide a numerical ranking, there is now a tier-based system in which, for example, 16 programs are lumped into “Tier 1” and 36 into “Tier 2”.

Reading that, I couldn’t help but chuckle 🤣 because it sounded eerily familiar…

Several studies have reported on program directors’ opinions about the USMLE Step 1 scoring change to pass/fail … It appears that the switch to pass/fail is not a welcome change, as they believe it will be more difficult to compare applicants objectively and make the screening process more arduous.

Though unrelated, the removal of pseudo-objective ranking methods for both medical schools and medical students has elicited similar reactions.

For example, Jain argues that:

[W]hile Ivies like Harvard or Columbia can afford to forgo the exposure provided by rankings, lesser-known institutions rely on these lists to attract talented students and faculty, secure funding, and elevate their national profiles. By refusing to participate, these prestigious institutions are essentially pulling up the ladder behind them, perpetuating a cycle of inequity

Aditya Jain

Which is quite similar to:

Alarmingly, the majority of program directors (56.8%) reported that an applicant’s medical school will become a more important metric in the residency selection process. This emphasis will disadvantage students from less prestigious medical schools and international medical graduates (IMGs) who may depend on numerical Step 1 scores to demonstrate achievement

In essence, Jain and the majority of residency program directors agree: when you’re drowning in a sea of good options, a numerical rank can feel like a lifesaver. Sure, it’s not perfect, but when the alternative is treading water for days, you’ll grab whatever floats.

Objective: A Brief Tour of Debriefs

If the field of medical education were a high school cafeteria, the simulation experts would sit at the cool kids table.

I mean, let’s face it: they have robots 🤖 that spurt blood and one-way glass like in 🚓 cop movies.

But just like, well, everyone in a high school cafeteria, they are deeply misunderstood.

Underneath all of those artificial fluids and machismo is a sensitive soul that recognizes the importance of debriefing.

According to Cheng et al. (2016), debriefing is “a venue for learners to reflect on action, discuss areas for improvement, and incorporate new information with previous knowledge”.

The debrief is just as important as, if not more important than, the simulation activity itself. An effective debrief calls for:

  • Trained facilitators

  • Psychological safety

  • Ground rules

  • A learner-driven approach

… and more.

One of the best ways to ensure a successful debrief is to use an established framework as your guide.

Earlier this year, Jonathan P. Duff et al. published a helpful review of 6 such frameworks. Here’s the MedEdge overview of each:

1. Debriefing with Good Judgment

  • Superlative: 🏆 Best for Deep Reflection

  • What It Does: Combines advocacy (instructor’s observations) with inquiry (questions to explore learners’ thinking). Encourages deep reflection without judgment.

  • Breakdown:

    • Advocacy: Instructor shares their observations.

    • Inquiry: Learners explain their thought processes.

  • Sample Question: "What was going through your mind when you made that decision?"

  • Facilitator's Role: Actively engages in the debrief by providing feedback but in a way that allows learners to articulate and reflect on their own thinking.

  • Key Teaching Point: Helps learners connect their actions with outcomes and refine their decision-making.

2. PEARLS (Promoting Excellence and Reflective Learning in Simulation)

  • Superlative: 🏆 Most Structured Debrief

  • What It Does: Provides a clear, four-phase framework to guide debriefings—Reactions, Description, Analysis, and Summary.

  • Breakdown:

    • Reactions: How did the scenario make participants feel?

    • Description: What happened during the simulation?

    • Analysis: Why did it happen the way it did?

    • Summary: What are the key takeaways for future practice?

  • Sample Question: "What do you think went well during the simulation, and why?"

  • Facilitator's Role: Guides learners through each phase, helping to steer discussion and ensuring all aspects of the scenario are thoroughly reviewed.

  • Key Teaching Point: Encourages structured reflection on clinical performance, connecting emotions to learning.

3. Debriefing Diamond

  • Superlative: 🏆 Simplest Framework

  • What It Does: A straightforward four-phase model for debriefing, focusing on identifying objectives and areas for improvement.

  • Breakdown:

    • Objective: What was the goal during the scenario?

    • Adjustments: Did the learners need to change their approach?

    • Analysis: What went well and what didn’t?

    • Future Actions: What should be done differently next time?

  • Sample Question: "How did your team adjust to challenges during the scenario?"

  • Facilitator's Role: Serves as a guide to ensure participants follow the phases, keeping the discussion focused and efficient.

  • Key Teaching Point: Facilitates quick, focused debriefs that emphasize immediate learning.

4. Gather-Analyze-Summarize (GAS)

  • Superlative: 🏆 Best for Quick Reviews

  • What It Does: A streamlined debriefing process where teams gather facts, analyze performance, and summarize lessons learned.

  • Breakdown:

    • Gather: Collect relevant facts and actions.

    • Analyze: Break down what worked and what didn’t.

    • Summarize: Highlight key takeaways.

  • Sample Question: "What were the key actions that led to the scenario’s outcome?"

  • Facilitator's Role: Keeps the process on track by ensuring key facts are gathered and analyzed.

  • Key Teaching Point: Promotes efficiency in debriefing while maintaining depth in analysis.

5. Rapid Cycle Deliberate Practice (RCDP)

  • Superlative: 🏆 Best for Immediate Feedback

  • What It Does: Provides real-time feedback and allows learners to immediately practice corrections within the scenario.

  • Breakdown:

    • Immediate Correction: Errors are addressed mid-scenario.

    • Repeat Practice: Learners repeat the scenario to implement corrections.

    • Structured Learning: Encourages a cycle of doing, reflecting, and redoing.

  • Sample Question: "What do you need to adjust right now to improve this outcome?"

  • Facilitator's Role: Steps in actively during scenarios to provide immediate feedback and guidance.

  • Key Teaching Point: Ideal for high-stakes scenarios where correcting critical errors quickly is key to learning.

6. TeamGAINS

  • Superlative: 🏆 Best for Team Dynamics

  • What It Does: Focuses on improving team performance by addressing communication, coordination, and leadership within teams.

  • Breakdown:

    • Teamwork: How well did the team communicate?

    • Leadership: How was leadership distributed or handled?

    • Coordination: Did the team effectively manage the task together?

  • Sample Question: "How did your team work together to overcome challenges in the scenario?"

  • Facilitator's Role: Acts as a coach to help the team reflect on their interactions, focusing on communication and collaboration.

  • Key Teaching Point: Emphasizes the importance of team coordination and communication in clinical settings.

Assessment: Quick Quiz!

Can medical students be named in medical malpractice suits?


⏪Last Week’s Question:

Which of the following is the most accurate definition of Education Value Units?

Answer: A metric used to quantify the educational contributions of faculty, similar to clinical RVUs

Plan: Upcoming Dates & Events

Want to share an upcoming event? Respond to this email directly with the date, title, and URL for more information.

If you enjoy the MedEdge, will you forward it on or share it with a group you’re in? That should only take ~18 seconds, and it takes ~5 hours to write each newsletter 😊

New here? Click to Subscribe😄

P.S. Have you ever thought about sharing your expertise more widely, and even earning extra income doing so? If so, you’ll love the freebies I have for you on my website, kyanlynch.com