Published in: Systematic Reviews 1/2016

Open Access 01-12-2016 | Methodology

SWIFT-Review: a text-mining workbench for systematic review

Authors: Brian E. Howard, Jason Phillips, Kyle Miller, Arpit Tandon, Deepak Mav, Mihir R. Shah, Stephanie Holmgren, Katherine E. Pelch, Vickie Walker, Andrew A. Rooney, Malcolm Macleod, Ruchir R. Shah, Kristina Thayer


Abstract

Background

There is growing interest in using machine learning approaches to priority-rank studies and reduce the human burden of screening literature when conducting systematic reviews. In addition, identifying addressable questions during the problem formulation phase of a systematic review can be challenging, especially for topics with a large literature base. Here, we assess the performance of the SWIFT-Review priority ranking algorithm for identifying studies relevant to a given research question. We also explore the use of SWIFT-Review during problem formulation to identify, categorize, and visualize research areas that are data rich or data poor within a large literature corpus.

Methods

Twenty case studies, including 15 public data sets, representing a range of complexity and size, were used to assess the priority ranking performance of SWIFT-Review. For each study, seed sets of manually annotated included and excluded titles and abstracts were used for machine training. The remaining references were then ranked for relevance using an algorithm that considers term frequency and latent Dirichlet allocation (LDA) topic modeling. This ranking was evaluated with respect to (1) the number of studies that must be screened in order to identify 95 % of known relevant studies and (2) the “Work Saved over Sampling” (WSS) performance metric. To assess SWIFT-Review for use in problem formulation, PubMed literature search results for 171 chemicals implicated as endocrine-disrupting chemicals (EDCs) were uploaded into SWIFT-Review (264,588 studies) and categorized based on evidence stream and health outcome. Patterns in the search results were surveyed and visualized using a variety of interactive graphics.
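The WSS metric used above measures the screening work saved, relative to screening an unordered list, when stopping once a target recall is reached. As a minimal sketch of the standard definition (introduced by Cohen et al., 2006) — not SWIFT-Review's internal code, and with an illustrative function name and input format:

```python
def wss_at_recall(ranked_relevance, target_recall=0.95):
    """Work Saved over Sampling (WSS) at a given recall level.

    ranked_relevance: 0/1 relevance flags for documents, listed in the
    order produced by the ranking algorithm (1 = relevant).
    Returns the fraction of screening effort saved compared with
    screening an unordered list.
    """
    n = len(ranked_relevance)
    needed = target_recall * sum(ranked_relevance)
    found = 0
    for screened, flag in enumerate(ranked_relevance, start=1):
        found += flag
        if found >= needed:
            # WSS = (fraction of documents left unscreened) - (1 - recall)
            return (n - screened) / n - (1 - target_recall)
    return 0.0
```

With a perfect ranking that places all 10 relevant documents at the top of a 100-document list, screening stops after 10 documents and WSS@95 = 0.90 − 0.05 = 0.85; a ranking worse than random yields a negative score.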

Results

Compared with the reported performance of other tools on the same data sets, the SWIFT-Review ranking procedure obtained the highest scores on 11 of the 15 public data sets. Overall, these results suggest that using machine learning to triage documents for screening has the potential to save, on average, more than 50 % of the screening effort ordinarily required with unordered document lists. In addition, the tagging and annotation capabilities of SWIFT-Review can be useful during scoping and problem formulation.
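The general ranking approach described in the Methods — train on a manually screened seed set, then score the remaining references using term-frequency and LDA topic features — can be sketched as follows. This is not SWIFT-Review's actual implementation; the toy corpus, labels, and parameter choices are invented for illustration.

```python
# Sketch: rank unscreened documents by combining term-frequency features
# with LDA topic proportions, training a classifier on a screened seed set.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

seed_docs = [
    "endocrine disruptor exposure in rats",         # included
    "bisphenol a reproductive toxicity study",      # included
    "bridge construction steel fatigue analysis",   # excluded
    "traffic flow optimization in urban networks",  # excluded
]
seed_labels = [1, 1, 0, 0]
unscreened = [
    "phthalate exposure and hormone levels",
    "concrete mixture design for highways",
]

# Term-frequency features over the seed vocabulary
vec = CountVectorizer()
X_seed = vec.fit_transform(seed_docs)
X_new = vec.transform(unscreened)

# LDA topic proportions per document (fixed seed for determinism)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
T_seed = lda.fit_transform(X_seed)
T_new = lda.transform(X_new)

# Concatenate both feature types, train on the seed set, score the rest
F_seed = np.hstack([X_seed.toarray(), T_seed])
F_new = np.hstack([X_new.toarray(), T_new])
clf = LogisticRegression().fit(F_seed, seed_labels)
scores = clf.predict_proba(F_new)[:, 1]   # estimated relevance
ranking = np.argsort(-scores)             # highest-scored document first
```

Reviewers would then screen documents in `ranking` order, which is what allows screening to stop early once the desired recall is reached.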

Conclusions

Text-mining and machine learning software such as SWIFT-Review can be valuable tools to reduce the human screening burden and assist in problem formulation.
Metadata
Publication date
01-12-2016
Publisher
BioMed Central
Published in
Systematic Reviews / Issue 1/2016
Electronic ISSN: 2046-4053
DOI
https://doi.org/10.1186/s13643-016-0263-z
