Objective
Compare the clinical performance of two student cohorts in OSCEs:
Control Group: Prepares with traditional simulation (standardized patients).
Experimental Group: Prepares with DIMEDUS AI-powered virtual patients.
Background & Rationale
The Objective Structured Clinical Examination (OSCE) is a widely used method for assessing clinical competence in medical and health professions education. It typically involves interaction with standardized patients (SPs), assessing skills such as history taking, physical exam technique, communication, and clinical reasoning.
While SP-based OSCE training is effective, it is also resource-intensive, requiring trained actors, physical space, and faculty time. As class sizes grow and logistical constraints mount, many institutions are exploring digital simulation as a scalable, accessible, and consistent alternative or supplement.
DIMEDUS provides an interactive virtual environment in which learners can engage with dynamic virtual patients across various clinical scenarios. These cases are designed to simulate realistic patient interactions and decision-making sequences, with embedded feedback and AI-driven debriefing. By integrating DIMEDUS into OSCE preparation, learners may be able to:
● Practice repeatedly at their own pace,
● Engage in self-directed learning, and
● Develop clinical reasoning in a low-stakes environment.
This study aims to evaluate whether virtual patient training can match or exceed the effectiveness of traditional SP-based preparation in improving OSCE performance, with implications for cost-efficiency, accessibility, and learning outcomes.
Methodology
Pre-test (Baseline Assessment):
OSCE-style stations or a scenario-based written assessment to measure initial clinical reasoning, diagnostic approach, and communication skills.
Intervention:
Control Group: Participates in a traditional OSCE preparation program using SPs and faculty-led debriefing.
Experimental Group: Completes a structured training program using DIMEDUS virtual cases aligned with OSCE competencies, supported by AI-driven debriefing.
Post-test (OSCE Assessment):
All students complete a standardized OSCE (e.g., 5–8 stations) scored by blinded assessors using validated rubrics (e.g., checklists, global rating scales).
Retention Test:
A repeat OSCE (or a subset of cases) is conducted 3 months later to assess long-term skill retention and transfer.
Outcome Measures
Primary Outcomes:
Diagnostic Accuracy: Appropriateness of clinical decision-making and problem identification.
Clinical Reasoning and Prioritization: Use of logical and structured decision-making frameworks.
Communication Skills: Empathy, clarity, and structure in patient interactions, assessed via standardized rubrics.
Time to Completion: Efficiency in working through stations.
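The between-group comparison implied by the post-test could be sketched as follows. This is a minimal illustration only: the scores, group sizes, and the choice of Welch's t-test are assumptions for demonstration, not the study's prespecified analysis plan, and the function name `welch_t` is hypothetical.

```python
from statistics import mean, variance
from math import sqrt

def welch_t(scores_a, scores_b):
    """Welch's t statistic and degrees of freedom for two independent
    samples with possibly unequal variances (Welch-Satterthwaite df)."""
    n1, n2 = len(scores_a), len(scores_b)
    m1, m2 = mean(scores_a), mean(scores_b)
    v1, v2 = variance(scores_a), variance(scores_b)  # sample variances
    se2 = v1 / n1 + v2 / n2                          # squared standard error
    t = (m1 - m2) / sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Hypothetical OSCE checklist scores (0-100), for illustration only
control = [72, 68, 75, 80, 66, 71, 74]
experimental = [78, 82, 74, 85, 79, 81, 76]

t, df = welch_t(control, experimental)
```

The resulting t statistic would be compared against the t distribution with the computed degrees of freedom; in practice a statistical package (e.g., SciPy's `ttest_ind` with `equal_var=False`) would supply the p-value directly, and non-parametric alternatives would apply if scores are not approximately normal.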