Comparing a Script Concordance Examination to a Multiple-Choice Examination on a Core Internal Medicine Clerkship

William Kelly*, Steven Durning, Gerald Denton

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Background: Script concordance (SC) questions, in which a learner is given a brief clinical scenario and then asked whether additional information makes one hypothesis more or less likely, with answers compared to those of a panel of experts, are designed to reflect a learner's clinical reasoning.

Purpose: The purpose was to compare the reliability, validity, and learner satisfaction of a three-option modified SC examination with those of a multiple-choice question (MCQ) examination among medical students during a 3rd-year internal medicine clerkship; to compare the reliability of, and learner satisfaction with, the SC examination between medical students and a convenience sample of house staff; and to compare learner satisfaction with the SC examination between 1st- and 4th-quarter medical students.

Methods: Using a prospective cohort design, we compared the reliability of 20-item SC and MCQ examinations administered sequentially on the same day. To assess validity, scores were compared to scores on the National Board of Medical Examiners (NBME) subject examination in medicine and to a clinical performance measure. The SC and MCQ examinations were also administered to a convenience sample of internal medicine house staff. Medical students and house staff were anonymously surveyed regarding their satisfaction with the examinations.

Results: A total of 163 students completed the examinations. Among students, the initial reliability of the SC examination was half that of the MCQ examination (KR20 0.19 vs. 0.41), but among house staff (n = 15) reliability was the same (KR20 = 0.52 for both examinations). SC performance correlated with student clinical performance, whereas MCQ performance did not (r = .22, p = .005 vs. r = .11, p = .159). Students reported that SC questions were no more difficult than MCQ questions and were answered more quickly. Both examinations were considered easier than the NBME subject examination, and all three were considered equally fair. More students preferred MCQ over SC (55.8% vs. 18.0%), whereas house staff preferred SC (46% vs. 23%; p = .03).

Conclusions: This SC examination was feasible and, despite being initially less reliable and less preferred by students, was more valid than the MCQ examination because of its better correlation with clinical performance. The SC examination was more reliable and more preferred when administered to house staff.

Original language: English
Pages (from-to): 187-193
Number of pages: 7
Journal: Teaching and Learning in Medicine
Volume: 24
Issue number: 3
DOIs
State: Published - Jul 2012
Externally published: Yes

