TY - JOUR
T1 - Beyond standard checklist assessment
T2 - Question sequence may impact student performance
AU - LaRochelle, Jeff
AU - Durning, Steven J.
AU - Boulet, John R.
AU - van der Vleuten, Cees
AU - van Merrienboer, Jeroen
AU - Donkers, Jeroen
N1 - Publisher Copyright:
© 2016, The Author(s).
PY - 2016/4/1
Y1 - 2016/4/1
N2 - Introduction: Clinical encounters are often assessed using a checklist. However, without direct faculty observation, the timing and sequence of questions are not captured. We theorized that the sequence of questions can be captured and measured using coherence scores that may distinguish between low- and high-performing candidates. Methods: A logical sequence of key features was determined using the standard case checklist for an objective structured clinical examination (OSCE). An independent clinician educator reviewed each encounter to provide a global rating. Coherence scores were calculated based on question sequence. These scores were compared with global ratings and checklist scores. Results: Coherence scores were positively correlated with checklist scores and global ratings, and these correlations increased as global ratings improved. Coherence scores explained more of the variance in student performance as global ratings improved. Discussion: Logically structured question sequences may indicate a higher-performing student, and this information is often lost when using only overall checklist scores. Conclusions: The sequence in which test takers ask questions can be accurately recorded and is correlated with checklist scores and global ratings. The sequence of questions during a clinical encounter is not captured by traditional checklist scoring, and may represent an important dimension of performance.
KW - Assessment
KW - Clinical skills
KW - Medical education
UR - http://www.scopus.com/inward/record.url?scp=84990216698&partnerID=8YFLogxK
U2 - 10.1007/s40037-016-0265-5
DO - 10.1007/s40037-016-0265-5
M3 - Article
AN - SCOPUS:84990216698
SN - 2212-2761
VL - 5
SP - 95
EP - 102
JO - Perspectives on Medical Education
JF - Perspectives on Medical Education
IS - 2
ER -