TY - JOUR
T1 - Digitizing U.S. air force medical standards for the creation and validation of a readiness decision support system
AU - Uptegraft, Colby C.
AU - Barnes, Matthew G.
AU - Alford, Kevin D.
AU - McLaughlin, Christopher M.
AU - Hron, Jonathan D.
N1 - Publisher Copyright:
© 2020 Oxford University Press. All rights reserved.
PY - 2020/7/1
Y1 - 2020/7/1
N2 - Introduction Deployment-limiting medical conditions are the primary reason why service members are not medically ready. Service-specific standards guide clinicians in what conditions are restrictive for duty, fitness, and/or deployment requirements. The Air Force (AF) codifies most standards in the Medical Standards Directory (MSD). Providers manually search this document, among others, to determine if any standards are violated, a tedious and error-prone process. Digitized, standards-based decision-support tools for providers would ease this workflow. This study digitized and mapped all AF occupations to MSD occupational classes and all MSD standards to diagnosis codes and created and validated a readiness decision support system (RDSS) around this mapping. Materials and Methods A medical coder mapped all standards within the May 2018 v2 MSD to 2018 International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) codes. For the publication of new MSDs, we devised an automated update process using Amazon Web Services' Comprehend Medical and the Unified Medical Language System's Metathesaurus. We mapped Air Force Specialty Codes to occupational classes using the MSD and AF classification directories. We uploaded this mapping to a cloud-based MySQL (v5.7.23) database and built a web application to interface with it using R (v3.5+). For validation, we compared the RDSS to the record review of two subject-matter experts (SMEs) for 200 outpatient encounters in calendar year 2018. We performed four separate analyses: (1) SME vs. RDSS for any restriction; (2) SME interrater reliability for any restriction; (3) SME vs. RDSS for specific restriction(s); and (4) SME interrater reliability for categorical restriction(s). This study was approved as "Not Human Subjects Research" by the Air Force Research Laboratory (FWR20190100N) and Boston Children's Hospital (IRB-P00031397) review boards.
Results Of the 709 current medical standards in the September 2019 MSD, 631 (89.0%) were mapped to ICD-10-CM codes. These 631 standards mapped to 42,810 unique ICD codes (59.5% of all active 2019 codes) and covered 72.3% (7,823/10,821) of the diagnoses listed on AF profiles and 92.8% of profile days (90.7/97.8 million) between February 1, 2007 and January 31, 2017. The RDSS identified diagnoses warranting any restrictions with 90.8% and 90.0% sensitivity compared to SMEs A and B, respectively. For specific restrictions, the sensitivity was 85.0% and 44.8%. The specificity was poor for any restrictions (20.5%-43.4%) and near perfect for specific restrictions (≥99.5%). The interrater reliability between SMEs for all comparisons ranged from minimal to moderate (κ = 0.33-0.61). Conclusion This study demonstrated key pilot steps to digitizing and mapping AF readiness standards to existing terminologies. The RDSS showed one potential application. The sensitivity between the SMEs and RDSS demonstrated its viability as a screening tool with further refinement and study. However, its performance was not evenly distributed by special duty status or for the indication of specific restrictions. With machine-consumable medical standards integrated within existing digital infrastructure and clinical workflows, RDSSs would remove a significant administrative burden from providers and likely improve the accuracy of readiness metrics.
UR - http://www.scopus.com/inward/record.url?scp=85089613426&partnerID=8YFLogxK
U2 - 10.1093/milmed/usaa129
DO - 10.1093/milmed/usaa129
M3 - Article
C2 - 32601707
AN - SCOPUS:85089613426
SN - 0026-4075
VL - 185
SP - E1016
EP - E1023
JO - Military Medicine
JF - Military Medicine
IS - 7-8
ER -