Published in: 22-09-2023 | Artificial Intelligence | Commentary
To BERT or not to BERT: advancing non-invasive prediction of tumor biomarkers using transformer-based natural language processing (NLP)
Author: Ali S. Tejani
Published in: European Radiology | Issue 11/2023
Excerpt
Applications of natural language processing (NLP) in radiology continue to grow, as radiologists realize the potential value of an expanding array of non-interpretive use cases for artificial intelligence (AI). NLP tools leveraging large neural networks, specifically transformer-based architectures, have been used for text extraction, text classification, automated dataset curation, and error recognition, among other tasks [1–3]. These models are trained on large quantities of text data, allowing creation of “foundation models,” or pre-trained models that can be adapted to other tasks by fine-tuning on task-specific data. Pre-training Bidirectional Encoder Representations from Transformers (BERT), a transformer-based language representation model, on domain-specific content has led to the creation of several foundation models shown to outperform older NLP models built on simpler machine learning algorithms. For example, BioBERT is a foundation model resulting from pre-training a BERT model on biomedical text from PubMed and PubMed Central, allowing improved performance on biomedical tasks with options to fine-tune the model for more specific text mining tasks [4]. …