Published in: Perspectives on Medical Education 6/2018

Open Access 01-12-2018 | Eye-Opener

Analysis of question text properties for equality monitoring


Abstract

Introduction

Ongoing monitoring of cohort demographic variation is an essential part of quality assurance in medical education assessments, yet the methods employed to explore possible underlying causes of demographic variation in performance are limited. Focussing on properties of the vignette text in single-best-answer multiple-choice questions (MCQs), we explore here the viability of conducting analyses of text properties and their relationship to candidate performance. We suggest that such analyses could become routine parts of assessment evaluation and provide an additional, equality-based measure of an assessment’s quality and fairness.

Methods

We describe how a corpus of vignettes can be compiled, and give examples of using Microsoft Word’s native readability statistics calculator and the koRpus text-analysis package for the R statistical environment to estimate the following properties of the question text: Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (Grade), word count, sentence count, and average words per sentence (WpS). We then show how these properties can be combined with equality and diversity variables, and how the process can be automated to provide ongoing monitoring.
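The readability measures listed above follow published formulas (Flesch 1948; Kincaid et al. 1975), so they can also be reproduced outside Word or koRpus. As a minimal sketch — not the authors' actual pipeline — the following Python computes FRE, Grade, word count, sentence count, and WpS for a single vignette. The syllable counter is a crude vowel-group heuristic, so its values will only approximate those of dictionary-based tools such as Word or koRpus.

```python
import re

def count_syllables(word):
    # Crude heuristic: count runs of vowels, drop a trailing silent "e".
    # Dictionary-based tools will differ on irregular words.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text):
    """Return the text properties used for monitoring a vignette."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)          # average words per sentence
    spw = syllables / len(words)               # average syllables per word
    # Published Flesch Reading Ease and Flesch-Kincaid Grade formulas:
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    grade = 0.39 * wps + 11.8 * spw - 15.59
    return {
        "words": len(words),
        "sentences": len(sentences),
        "WpS": round(wps, 2),
        "FRE": round(fre, 1),
        "Grade": round(grade, 1),
    }
```

Applied to each vignette in a compiled corpus, this yields one row of text properties per item, ready to be joined with item-level performance data.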

Conclusions

Because demographic differences in assessment performance are routinely monitored to assure equality, the ability to easily include textual analysis of question vignettes provides a useful tool for exploring possible causes of demographic variations in performance where they occur, and another means of evaluating assessment quality and fairness with respect to demographic characteristics. Microsoft Word produced data comparable to the specialized koRpus package, suggesting that routine use of word-processing software to write items and assess their text properties is viable with minimal burden. Automating the analysis for ongoing monitoring additionally offers a means of standardizing MCQ assessment items and of eliminating or controlling textual variables as a possible contributor to differential attainment between subgroups.
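To illustrate how text properties might feed such ongoing monitoring, here is a hedged sketch in Python. The data and field names (e.g. `fac_group_a` for an item's facility, the proportion correct, in one demographic subgroup) are hypothetical, and the thresholds are arbitrary examples: the idea is simply to flag items that pair a large subgroup performance gap with a high reading-grade vignette for editorial review.

```python
# Hypothetical per-item records: readability metrics joined with
# facility (proportion correct) for two demographic subgroups.
items = [
    {"id": "Q1", "grade": 8.2,  "fac_group_a": 0.81, "fac_group_b": 0.78},
    {"id": "Q2", "grade": 13.5, "fac_group_a": 0.74, "fac_group_b": 0.55},
    {"id": "Q3", "grade": 9.1,  "fac_group_a": 0.69, "fac_group_b": 0.66},
]

GAP_THRESHOLD = 0.10    # illustrative: >10-point facility gap between groups
GRADE_THRESHOLD = 12.0  # illustrative: vignette reads above this grade level

def flag_items(items):
    """Return (item id, facility gap) for items combining a large
    subgroup gap with a hard-to-read vignette."""
    flagged = []
    for item in items:
        gap = item["fac_group_a"] - item["fac_group_b"]
        if abs(gap) > GAP_THRESHOLD and item["grade"] > GRADE_THRESHOLD:
            flagged.append((item["id"], round(gap, 2)))
    return flagged

print(flag_items(items))
```

Flagging is only a screening step: a flagged item is not evidence that the text caused the gap, but a prompt to review the vignette alongside conventional item statistics.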
Metadata
Title
Analysis of question text properties for equality monitoring
Publication date
01-12-2018
Published in
Perspectives on Medical Education / Issue 6/2018
Print ISSN: 2212-2761
Electronic ISSN: 2212-277X
DOI
https://doi.org/10.1007/s40037-018-0478-x
