
01-06-2018 | Miscellaneous

Procedure-specific assessment tool for flexible pharyngo-laryngoscopy: gathering validity evidence and setting pass–fail standards

Authors: Jacob Melchiors, K. Petersen, T. Todsen, A. Bohr, Lars Konge, Christian von Buchwald

Published in: European Archives of Oto-Rhino-Laryngology | Issue 6/2018


Abstract

Objective

The attainment of specific, identifiable competencies is the primary measure of progress in modern medical education. For this system to be feasible, a method for accurately assessing competence is required. Before an assessment tool can be implemented in the training and assessment of physicians, evidence of its validity must be gathered; according to contemporary validity theory, this evidence must be collected from specific sources in a structured and rigorous manner. Flexible pharyngo-laryngoscopy (FPL) is a core procedure for the otorhinolaryngologist. We aim to evaluate the flexible pharyngo-laryngoscopy assessment tool (FLEXPAT) created in a previous study and to establish a pass–fail level for proficiency.

Methods

Eighteen physicians with different levels of experience (novice, intermediate, and experienced) were recruited to the study. Each performed an FPL on two patients. These procedures were video recorded, blinded, and assessed by two specialists. Each score was expressed as a percentage of the maximum possible score. Cronbach's α was used to analyze the internal consistency of the data, and a generalizability analysis was performed. The scores of the three groups were compared, and a pass–fail level was determined using the contrasting-groups standard-setting method.
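
Cronbach's α has a standard closed form, α = k/(k − 1) × (1 − Σs²_item / s²_total), where k is the number of checklist items, s²_item is the sample variance of each item, and s²_total is the variance of the summed score. The Python sketch below illustrates that computation on a hypothetical physicians × items score matrix; the 9-item structure, the 1–5 rating scale, and the simulated data are illustrative assumptions, not the study's data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (subjects x items) score matrix."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # sample variance per item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 18 physicians rated on 9 items (1-5 scale).
rng = np.random.default_rng(42)
ability = rng.normal(3.0, 0.8, size=(18, 1))   # shared skill level per physician
noise = rng.normal(0.0, 0.5, size=(18, 9))     # item-to-item variation
scores = np.clip(np.round(ability + noise), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```

A high α arises when items co-vary, here through the shared per-physician skill term; uncorrelated items would drive α toward zero.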

Results

Internal consistency was strong, with a Cronbach's α of 0.86. The generalizability coefficient was 0.72, which is sufficient for moderate-stakes assessment. We found a significant difference between the novice and experienced groups (p < 0.001) and a strong correlation between experience and score (Pearson's r = 0.75). The pass–fail level was established at 72% of the maximum score. Applying this level to the test population resulted in half of the intermediate group receiving a failing score.
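
The contrasting-groups method referenced above places the cut score where the score distributions of a clearly non-competent group and a clearly competent group intersect, i.e. at the score that is equally likely to have been produced by either group. Below is a minimal sketch of that idea, assuming normally distributed scores and using made-up numbers rather than the study's data.

```python
import numpy as np
from scipy.stats import norm

# Made-up percentage scores, NOT the study's data.
novice = np.array([48.0, 55.0, 58.0, 61.0, 64.0, 67.0])       # reference "fail" group
experienced = np.array([75.0, 79.0, 82.0, 86.0, 89.0, 93.0])  # reference "pass" group

mu_f, sd_f = novice.mean(), novice.std(ddof=1)
mu_p, sd_p = experienced.mean(), experienced.std(ddof=1)

# Search between the group means for the point where the two fitted
# normal densities cross; below it a score is more likely "novice",
# above it more likely "experienced".
xs = np.linspace(mu_f, mu_p, 2001)
cut = xs[np.argmin(np.abs(norm.pdf(xs, mu_f, sd_f) - norm.pdf(xs, mu_p, sd_p)))]
print(f"pass-fail level ~ {cut:.0f}% of maximum score")
```

The 72% level reported above resulted from applying this kind of analysis to the study's actual score distributions; when the two groups have equal standard deviations, the intersection reduces to the midpoint between the group means.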

Discussion

We gathered validity evidence for the FLEXPAT according to the contemporary framework described by Messick. Our results support a claim of validity and are comparable to those of other studies exploring clinical assessment tools. The high proportion of underperforming physicians in the intermediate group demonstrates the need for continued educational intervention.

Conclusion

Based on our work, we recommend the use of the FLEXPAT in clinical assessment of FPL and the application of a pass–fail level of 72% for proficiency.
Literature
1.
Sethi RKV, Kozin ED, Remenschneider AK, Lee DJ, Gray ST, Shrime MG et al (2014) Subspecialty emergency room as alternative model for otolaryngologic care: implications for emergency health care delivery. Am J Otolaryngol Head Neck Surg 35:758–765
2.
Couch ME (2010) Cummings otolaryngology—head and neck surgery, 5th edn. Elsevier, Mosby
5.
Ericsson KA (2008) Deliberate practice and acquisition of expert performance: a general overview. Acad Emerg Med 15(11):988–994
6.
McGaghie WC, Issenberg SB, Cohen ER, Barsuk JH, Wayne DB (2011) Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Acad Med 86(6):706–711
7.
McGaghie WC (2015) Mastery learning: it is time for medical education to join the 21st century. Acad Med 90(11):1438–1441
8.
Lineberry M, Soo Park Y, Cook DA, Yudkowsky R (2015) Making the case for mastery learning assessments. Acad Med 90(11):1445–1450
9.
Schuwirth LWT, van der Vleuten CPM (2011) General overview of the theories used in assessment: AMEE Guide No. 57. Med Teach 33:783–797
10.
Messick S (1989) Validity. In: Linn RL (ed) Educational measurement, 3rd edn. American Council on Education and Macmillan, New York
11.
Cook DA, Zendejas B, Hamstra SJ, Hatala R, Brydges R (2013) What counts as validity evidence? Examples and prevalence in a systematic review of simulation-based assessment. Adv Health Sci Educ 19:1–18
12.
Downing S, Yudkowsky R (2009) Assessment in health professions education. Routledge, New York
14.
15.
Bloch R, Norman G (2012) Generalizability theory for the perplexed: a practical introduction and guide: AMEE Guide No. 68. Med Teach 34(11):960–992
16.
Andersen SAW, Foghsgaard S, Konge L, Cayé-Thomasen P, Sørensen MS (2016) The effect of self-directed virtual reality simulation on dissection training performance in mastoidectomy. Laryngoscope 126(8):1883–1888
17.
Konge L, Larsen KR, Clementsen P, Arendrup H, von Buchwald C, Ringsted C (2012) Reliable and valid assessment of clinical bronchoscopy performance. Respiration 83(1):53–60
18.
Ilgen JS, Ma IWY, Hatala R, Cook DA (2015) A systematic review of validity evidence for checklists versus global rating scales in simulation-based assessment. Med Educ 49:161–173
19.
Hodges B (2013) Assessment in the post-psychometric era: learning to love the subjective and collective. Med Teach 35(7):564–568
20.
Albanese MA (2000) Challenges in using rater judgements in medical education. J Eval Clin Pract 6(3):305–319
21.
Streiner DL, Norman GR (2008) Health measurement scales: a practical guide to their development and use, 4th edn. Oxford University Press, Oxford
22.
Barton JR, Corbett S, van der Vleuten CP (2012) The validity and reliability of a direct observation of procedural skills assessment tool: assessing colonoscopic skills of senior endoscopists. Gastrointest Endosc 75(3):591–597
23.
Melchiors J, Todsen T, Nilsson P, Wennervaldt K, Charabi B, Bøttger M et al (2015) Preparing for emergency: a valid, reliable assessment tool for emergency cricothyroidotomy skills. Otolaryngol Head Neck Surg 152(2):260–265
24.
Todsen T, Tolsgaard MG, Olsen BH, Henriksen BM, Hillingsø JG, Konge L et al (2014) Reliable and valid assessment of point-of-care ultrasonography. Ann Surg 0(0):1–7
25.
Ishman SL, Brown DJ, Boss EF, Skinner ML, Tunkel DE, Stavinoha R et al (2010) Development and pilot testing of an operative competency assessment tool for pediatric direct laryngoscopy and rigid bronchoscopy. Laryngoscope 120(11):2294–2300
26.
Magill RA, Anderson D (2014) Motor learning and control: concepts and applications, 10th edn. McGraw-Hill Education, New York
27.
McKinley DW, Norcini JJ (2014) How to set standards on performance-based examinations: AMEE Guide No. 85. Med Teach 36(2):97–110
28.
Livingston SA, Zieky MJ (1982) Passing scores: a manual for setting standards of performance. Educational Testing Service, Princeton
Metadata
Publisher: Springer Berlin Heidelberg
Print ISSN: 0937-4477
Electronic ISSN: 1434-4726
DOI: https://doi.org/10.1007/s00405-018-4971-y
