J Cheminform. 2019 Jun 24;11(1):42. doi: 10.1186/s13321-019-0363-6.

Next generation community assessment of biomedical entity recognition web servers: metrics, performance, interoperability aspects of BeCalm.

Author information

1. Department of Computer Science, ESEI, University of Vigo, Campus As Lagoas, 32004, Ourense, Spain.
2. The Biomedical Research Centre (CINBIO), Campus Universitario Lagoas-Marcosende, 36310, Vigo, Spain.
3. SING Research Group, Galicia Sur Health Research Institute (ISS Galicia Sur), SERGAS-UVIGO, Vigo, Spain.
4. Department of Microbiology and Biochemistry of Dairy Products, Instituto de Productos Lácteos de Asturias (IPLA), Consejo Superior de Investigaciones Científicas (CSIC), Paseo Río Linares S/N 33300, Villaviciosa, Asturias, Spain.
5. Life Science Department, Barcelona Supercomputing Centre (BSC-CNS), C/Jordi Girona 29-31, 08034, Barcelona, Spain.
6. Joint BSC-IRB-CRG Program in Computational Biology, Parc Científic de Barcelona, C/Baldiri Reixac 10, 08028, Barcelona, Spain.
7. Institució Catalana de Recerca i Estudis Avançats (ICREA), Passeig de Lluís Companys 23, 08010, Barcelona, Spain.
8. Spanish Bioinformatics Institute INB-ISCIII ES-ELIXIR, 28029, Madrid, Spain.
9. Life Science Department, Barcelona Supercomputing Centre (BSC-CNS), C/Jordi Girona 29-31, 08034, Barcelona, Spain. martin.krallinger@bsc.es.
10. Joint BSC-IRB-CRG Program in Computational Biology, Parc Científic de Barcelona, C/Baldiri Reixac 10, 08028, Barcelona, Spain. martin.krallinger@bsc.es.
11. Biological Text Mining Unit, Structural Biology and Biocomputing Programme, Spanish National Cancer Research Centre, C/Melchor Fernández Almagro 3, 28029, Madrid, Spain. martin.krallinger@bsc.es.
12. Department of Computer Science, ESEI, University of Vigo, Campus As Lagoas, 32004, Ourense, Spain. analia@uvigo.es.
13. The Biomedical Research Centre (CINBIO), Campus Universitario Lagoas-Marcosende, 36310, Vigo, Spain. analia@uvigo.es.
14. SING Research Group, Galicia Sur Health Research Institute (ISS Galicia Sur), SERGAS-UVIGO, Vigo, Spain. analia@uvigo.es.
15. Centre of Biological Engineering (CEB), University of Minho, Campus de Gualtar, 4710-057, Braga, Portugal. analia@uvigo.es.

Abstract

BACKGROUND:

Shared tasks and community challenges are key instruments to promote research and collaboration and to determine the state of the art of biomedical and chemical text mining technologies. Traditionally, such tasks relied on comparing automatically generated results against a so-called Gold Standard dataset of manually labelled textual data, regardless of the efficiency and robustness of the underlying implementations. Due to the rapid growth of unstructured data collections, including patent databases and particularly the scientific literature, there is a pressing need to generate, assess and expose robust big data text mining solutions that semantically enrich documents in real time. To address this need, a novel track called "Technical interoperability and performance of annotation servers" was launched under the umbrella of the BioCreative text mining evaluation effort. The aim of this track was to enable the continuous assessment of technical aspects of text annotation web servers, specifically of online biomedical named entity recognition systems of interest for medicinal chemistry applications.

RESULTS:

A total of 15 out of 26 registered teams successfully implemented online annotation servers. They returned predictions during a two-month period in predefined formats and were evaluated through the BeCalm evaluation platform, specifically developed for this track. The track encompassed three levels of evaluation, i.e. data format considerations, technical metrics and functional specifications. Participating annotation servers were implemented in seven different programming languages and covered 12 general entity types. The continuous evaluation of server responses accounted for testing periods of low activity and moderate to high activity, encompassing overall 4,092,502 requests from three different document provider settings. The median response time was below 3.74 s, with a median of 10 annotations/document. Most of the servers showed great reliability and stability, being able to process over 100,000 requests in a 5-day period.
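The servers described above receive document requests and return named-entity annotations as character offsets with an entity type. A minimal sketch of that core annotation step is shown below; the lexicon, function name, and response fields (`init`, `end`, `type`) are illustrative assumptions for a dictionary-based recognizer, not the BeCalm payload specification.

```python
# Hypothetical core of an annotation server: given a document, return one
# annotation per entity match, as start/end offsets plus an entity type.
# The lexicon and field names below are assumptions for illustration only.
import re

LEXICON = {
    "aspirin": "CHEMICAL",
    "caffeine": "CHEMICAL",
    "BRCA1": "GENE",
}

def annotate(document_id: str, text: str) -> list[dict]:
    """Return a list of annotation dicts for every lexicon match in `text`."""
    annotations = []
    for term, entity_type in LEXICON.items():
        # Case-insensitive exact-string matching; a real server would use
        # a proper NER model or a large terminological resource instead.
        for match in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
            annotations.append({
                "document_id": document_id,
                "init": match.start(),          # start offset, 0-based
                "end": match.end(),             # end offset, exclusive
                "annotated_text": match.group(),
                "type": entity_type,
            })
    return annotations
```

In the track itself such a function would sit behind an HTTP endpoint, so that response time and throughput (the technical metrics evaluated here) can be measured per request.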

CONCLUSIONS:

The presented track was a novel experimental task that systematically evaluated the technical performance aspects of online entity recognition systems. It raised the interest of a significant number of participants. Future editions of the competition will address the ability to process documents in bulk as well as to annotate full-text documents.

KEYWORDS:

Annotation server; BeCalm metaserver; BioCreative; Continuous evaluation; Named entity recognition; Patent mining; REST-API; Shared task; TIPS; Text mining
