Abstract

Named Entity Recognition (NER) is an important task for extracting relevant information from biomedical texts. Recently, pre-trained language models have made great progress on this task, particularly in English. However, the performance of pre-trained models in the Spanish biomedical domain has not been evaluated in an experimentation framework designed specifically for the task. We present an approach to named entity recognition in Spanish medical texts that makes use of pre-trained models from the Spanish biomedical domain. We also use data augmentation techniques to improve the identification of less frequent entities in the dataset. The domain-specific models improved the recognition of named entities in the domain, outperforming all the systems evaluated in the eHealth-KD Challenge 2021. Language models from the biomedical domain appear to be more effective at characterizing the specific terminology involved in this named entity recognition task, where most entities belong to the "concept" type and cover a large number of medical concepts. Regarding data augmentation, only back translation slightly improved the results. As expected, the most frequent entity types in the dataset are identified more accurately. Although the domain-specific language models outperformed most of the other models, the multilingual general-purpose model mBERT obtained competitive results.
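
As an illustration of the approach summarized above, the sketch below shows how a pre-trained Spanish biomedical encoder could be loaded for token-level NER and how training sentences could be augmented by back translation using the Hugging Face transformers library. The checkpoint name, the translation models, and the IOB2 label set (derived from the eHealth-KD entity types) are illustrative assumptions, not the exact configuration reported in the paper.

from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          pipeline)

# Assumed IOB2 label set based on the eHealth-KD entity types
# (Concept, Action, Predicate, Reference).
labels = ["O",
          "B-Concept", "I-Concept",
          "B-Action", "I-Action",
          "B-Predicate", "I-Predicate",
          "B-Reference", "I-Reference"]

# Assumed Spanish biomedical checkpoint; any domain-specific encoder fits here.
model_name = "PlanTL-GOB-ES/roberta-base-biomedical-clinical-es"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
# ... fine-tune on the eHealth-KD training split, e.g. with the Trainer API ...

# Back-translation augmentation: Spanish -> English -> Spanish produces
# paraphrased training sentences (entity spans must be re-aligned afterwards).
es_en = pipeline("translation", model="Helsinki-NLP/opus-mt-es-en")
en_es = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

def back_translate(sentence: str) -> str:
    english = es_en(sentence)[0]["translation_text"]
    return en_es(english)[0]["translation_text"]

In this sketch, the new sentences generated by back_translate would only be added for the less frequent entity types, which is the situation the abstract identifies as benefiting from augmentation.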

Citation

Villaplana A, Martínez R, Montalvo S. Improving Medical Entity Recognition in Spanish by Means of Biomedical Language Models. Electronics. 2023; 12(23):4872.
