Open Access 01-12-2018 | Review
Data-driven methods: More than merely trendy buzzwords?
Published in: Annals of Intensive Care | Issue 1/2018
Intensive care unit (ICU) physicians face a rapidly expanding volume of data from routine practice, patient monitoring, and diagnostic or prognostic tests. However, although these data could influence their clinical decisions and management, the validity and relevance of data processing methods, particularly in the case of complex data sets (i.e. so-called big data; see Table 1 for related terminology), remain to be defined. A growing body of research suggests that emerging artificial intelligence (AI)-derived methods could help physicians access, organize and use large amounts of data more easily. Such methods have already found applications in various fields, including technology, biology, computer science and sociology [1]. However, are these approaches more than merely trendy buzzwords? Are they reliable enough to match the exponential growth of medical complexity in the critical care setting? And, last but not least, can the holistic use of the massive data sources available eventually provide clinically relevant information?
Table 1
Data-driven analysis and related terminology

Big data: Data sets whose size and complexity exceed the capacity of commonly used methodological approaches to capture, manage and process them. Big data are often characterized by their high volume, large variety and the high velocity required to process them (the "3V" definition).

Closed-loop system: A system in which some or all of its outputs are fed back as inputs. In health care, such feedback loops enable real-time analysis of patient databases and could help optimize clinical care, allowing more efficient targeting of tests and treatments and greater vigilance for adverse effects (i.e. dynamic clinical data mining).

Cross-validation: A statistical technique for assessing how the results of an analysis will generalize to an independent data set. For example, it can be used to estimate how accurately a predictive model will perform in practice.

Crowdsourcing: The practice of obtaining a needed solution by soliciting contributions from a large group of people, especially from online communities.

Data mining: The process of collecting, searching through and analysing large amounts of data in a database in order to discover patterns or relationships. Notably, this approach does not look for causality; it simply aims to detect significant data configurations.

Machine learning: Methods derived from artificial intelligence that give computers the ability to learn without being explicitly programmed. Machine learning uses data to detect patterns and adjusts programme actions accordingly.
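To make the cross-validation entry above concrete, the following is a minimal sketch of k-fold cross-validation in plain Python. The data set, the outcome labels and the "majority class" model are all hypothetical illustrations (not from the article): the point is only to show how held-out folds yield an estimate of how a predictive model would perform on independent data.

```python
import random

def k_fold_indices(n, k):
    """Split indices 0..n-1 into k roughly equal, shuffled folds."""
    idx = list(range(n))
    random.Random(0).shuffle(idx)  # fixed seed so the split is reproducible
    return [idx[i::k] for i in range(k)]

def cross_validate(xs, ys, fit, predict, k=5):
    """Return per-fold accuracy of a model trained on the remaining folds."""
    folds = k_fold_indices(len(xs), k)
    scores = []
    for i, test_idx in enumerate(folds):
        # Train on every fold except the held-out one
        train_idx = [j for m, f in enumerate(folds) if m != i for j in f]
        model = fit([xs[j] for j in train_idx], [ys[j] for j in train_idx])
        preds = [predict(model, xs[j]) for j in test_idx]
        correct = sum(p == ys[j] for p, j in zip(preds, test_idx))
        scores.append(correct / len(test_idx))
    return scores

# Toy "model": always predict the majority class seen during training
def fit_majority(xs, ys):
    return max(set(ys), key=ys.count)

def predict_majority(model, x):
    return model

# Hypothetical cohort: 1 = event (e.g. an adverse outcome), 0 = no event
ys = [1] * 30 + [0] * 70
xs = list(range(100))

scores = cross_validate(xs, ys, fit_majority, predict_majority, k=5)
print(sum(scores) / len(scores))  # mean held-out accuracy
```

In practice a real model (and a library such as scikit-learn) would replace the toy majority-class predictor, but the mechanics are identical: each fold is scored by a model that never saw it during training, guarding against the over-optimistic estimates that arise when a model is evaluated on its own training data.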