Published in: Journal of Gastrointestinal Surgery 10/2010

01-10-2010 | 2010 SSAT Plenary Presentation

Comparison of Outlier Identification Methods in Hospital Surgical Quality Improvement Programs

Authors: Karl Y. Bilimoria, Mark E. Cohen, Ryan P. Merkow, Xue Wang, David J. Bentrem, Angela M. Ingraham, Karen Richards, Bruce L. Hall, Clifford Y. Ko


Abstract

Background

Surgeons and hospitals are being increasingly assessed by third parties regarding surgical quality and outcomes, and much of this information is reported publicly. Our objective was to compare various methods used to classify hospitals as outliers in established surgical quality assessment programs by applying each approach to a single data set.

Methods

Using American College of Surgeons National Surgical Quality Improvement Program data (7/2008–6/2009), hospital risk-adjusted 30-day morbidity and mortality were assessed for general surgery at 231 hospitals (cases = 217,630) and for colorectal surgery at 109 hospitals (cases = 17,251). The number of outliers (poor performers) identified using different methods and criteria was compared.

Results

The overall morbidity was 10.3% for general surgery and 25.3% for colorectal surgery. The mortality was 1.6% for general surgery and 4.0% for colorectal surgery. Programs used different methods (logistic regression, hierarchical modeling, partitioning) and criteria (P < 0.01, P < 0.05, P < 0.10) to identify outliers. When each approach was applied to this single dataset, the number of outliers identified ranged from 7 to 57 hospitals for general surgery morbidity, 1 to 57 hospitals for general surgery mortality, 4 to 27 hospitals for colorectal morbidity, and 0 to 27 hospitals for colorectal mortality.
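The sensitivity of outlier counts to the chosen significance criterion can be illustrated with a minimal sketch. This is not the authors' risk-adjustment methodology; it is a simplified stand-in that flags a hypothetical hospital as a poor performer when a one-sided exact binomial test of its observed event count against a fixed baseline rate falls below the threshold P value. All hospital data here are simulated, and the 10% baseline merely echoes the general-surgery morbidity rate reported above.

```python
import math
import random

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly via math.comb."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def count_outliers(hospitals, baseline_rate, alpha):
    """Count hospitals whose observed event count is significantly above
    the baseline rate under a one-sided exact binomial test at level alpha."""
    return sum(
        1 for cases, events in hospitals
        if binom_sf(events, cases, baseline_rate) < alpha
    )

random.seed(0)
baseline = 0.10  # hypothetical ~10% overall morbidity rate

# Simulated hospitals: most perform at baseline, ten have elevated risk.
hospitals = []
for i in range(100):
    cases = random.randint(100, 400)
    true_rate = baseline * (1.5 if i < 10 else 1.0)
    events = sum(random.random() < true_rate for _ in range(cases))
    hospitals.append((cases, events))

# The same data yield different outlier counts at different criteria.
for alpha in (0.01, 0.05, 0.10):
    print(f"alpha = {alpha}: {count_outliers(hospitals, baseline, alpha)} outliers")
```

Because each hospital's P value is fixed while the threshold grows, the outlier count can only stay the same or rise as the criterion is loosened from P < 0.01 to P < 0.10, which is one mechanism behind the wide ranges reported in the Results.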

Conclusions

There was considerable variation in the number of outliers identified using different detection approaches. Quality programs appear to be using outlier identification methods contrary to what might be expected; programs should therefore justify their methodology based on the intent of the program (i.e., quality improvement vs. reimbursement). Surgeons and hospitals should be aware of the variability in methods used to assess their performance, as these outlier designations will likely have referral and reimbursement consequences.
Metadata
Publisher
Springer-Verlag
Print ISSN: 1091-255X
Electronic ISSN: 1873-4626
DOI
https://doi.org/10.1007/s11605-010-1316-6
