Using Pretrained Large Language Models for AI-Driven Assessment in Medical Education

Jacob Cole, Joshua Duncan, Rebekah Cole

Research output: Contribution to journal › Article › peer-review

Abstract

PROBLEM: Assessing students in competency-based medical education can be time-consuming and demanding for faculty, especially with large classes and complex topics. Traditional methods can lead to inconsistencies and a lack of targeted feedback. Innovative and accessible solutions to improve the efficiency, objectivity, and effectiveness of assessment in medical education are needed.

APPROACH: From September 2024 to February 2025, the authors piloted the use of large language models (LLMs) with retrieval-augmented generation to assess students' understanding of moral injury. The authors selected and uploaded 6 seminal articles on moral injury within military and veteran populations to Google Gemini 1.5 Pro. They tasked the same LLM with creating a grading rubric based on these articles to assess 165 student responses in a military medical ethics course (Uniformed Services University of the Health Sciences). The authors uploaded both the generated rubric and the student responses to each of 3 LLMs (Google Gemini 1.5 Pro, Google Gemini 2.0 Flash, and OpenAI ChatGPT-4o) with a prompt to generate scores for the student responses.
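The grading workflow described above can be sketched in outline. The code below is a minimal illustration, not the authors' implementation: the model names in the study (Google Gemini 1.5 Pro, Gemini 2.0 Flash, ChatGPT-4o) would be reached through their respective APIs, which are represented here by a placeholder callable, and the prompt wording is invented.

```python
# Sketch of the LLM-assisted grading workflow (illustration only, not the
# authors' code). A real run would wrap the Gemini or OpenAI SDK in
# `call_llm`; here a stub stands in so the pipeline shape is visible.

def build_grading_prompt(rubric: str, student_response: str) -> str:
    """Assemble the grading prompt sent to each LLM (invented wording)."""
    return (
        "You are grading a student response in a military medical ethics "
        "course. Score it against the rubric below and return a numeric "
        "score with a brief justification.\n\n"
        f"RUBRIC:\n{rubric}\n\n"
        f"STUDENT RESPONSE:\n{student_response}\n"
    )

def score_responses(rubric: str, responses: list, call_llm) -> list:
    """Send each student response to the model and collect raw outputs.

    `call_llm` is a placeholder for a real API client function.
    """
    return [call_llm(build_grading_prompt(rubric, r)) for r in responses]

# Demo with a stub in place of a real model call.
rubric = "1 pt: defines moral injury; 1 pt: cites military/veteran context."
stub = lambda prompt: "Score: 2/2"
results = score_responses(rubric, ["Moral injury is ...", "It arises when ..."], stub)
```

In the study, this loop would run once per model, with the same rubric and the 165 student responses uploaded alongside the scoring prompt.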

OUTCOMES: In the authors' expert opinion, an LLM (Google Gemini 1.5 Pro) successfully generated a grading rubric that captured the nuances of moral injury and its implications for military medical practice. The LLMs' scoring accuracy was compared against scores from 2 experienced educators to generate validity evidence. The best-performing model, OpenAI ChatGPT-4o, demonstrated interrater reliability coefficients of 0.77 and 0.68 with reviewers 1 and 2, respectively, indicating higher agreement between the LLM and each individual reviewer than between the 2 reviewers themselves (0.57).
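The abstract does not specify which agreement statistic produced the 0.77, 0.68, and 0.57 coefficients; Cohen's kappa is a common choice for interrater reliability on categorical rubric scores. The sketch below, using invented example scores, shows how such a coefficient is computed.

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Unweighted Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    # Proportion of items on which the raters gave the same score.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal distribution.
    marg_a, marg_b = Counter(rater_a), Counter(rater_b)
    expected = sum(marg_a[c] * marg_b[c] for c in marg_a | marg_b) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented example: rubric scores (1-3) from an LLM and a human reviewer.
llm_scores      = [1, 2, 3, 3, 2, 1, 2, 3, 1, 2]
reviewer_scores = [1, 2, 3, 3, 2, 2, 2, 3, 1, 3]
kappa = cohens_kappa(llm_scores, reviewer_scores)  # ≈ 0.70
```

A kappa near 0.7, as in this invented example, is conventionally read as substantial agreement, in line with the LLM-versus-reviewer values the study reports.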

NEXT STEPS: While this approach shows promise, faculty oversight is necessary to ensure ethical accountability and address potential biases. Further research is needed to optimize the integration of AI and human capabilities in assessment to ultimately enhance the quality of health care professional education and improve patient outcomes.

Original language: English
Pages (from-to): 1442-1446
Number of pages: 5
Journal: Academic Medicine
Volume: 100
Issue number: 12
DOIs
State: Published - 1 Dec 2025

Keywords

  • Humans
  • Educational Measurement/methods
  • Education, Medical/methods
  • Artificial Intelligence
  • Students, Medical
  • Competency-Based Education/methods
  • Language
  • Large Language Models
