
Dataset Card for LINNAEUS

Dataset Summary

The LINNAEUS corpus consists of 100 randomly selected full-text documents from the PMC Open Access (PMCOA) document set. All mentions of species terms were manually annotated and normalized to the NCBI Taxonomy IDs of the intended species.

The original LINNAEUS corpus is available in a TAB-separated standoff format. The resource does not define training, development or test subsets.

We converted the corpus into the BioNLP shared task standoff format using a custom script, split it into 50-, 17- and 33-document training, development and test sets, and then converted these into the CoNLL format using standoff2conll.

As a full-text corpus, LINNAEUS contains comparatively many non-ASCII characters; these were mapped to ASCII using the standoff2conll -a option. The conversion was highly accurate, but due to sentence-splitting errors within entity mentions, the converted data contains four more annotations than the source data (100.09% of the original count). 99.77% of the names in the original annotation matched names in the converted data.
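For readers working with the converted files directly rather than through the datasets library, the sketch below shows one way to read CoNLL-style output. It assumes whitespace-separated columns with the token first and the tag last, and blank lines between sentences; the exact column layout produced by standoff2conll is an assumption here, not something documented on this card.

def read_conll(path):
    # Read a CoNLL-style file: one token per line, blank lines separate sentences.
    # Column layout (token first, tag last) is assumed, not taken from the release.
    sentences, tokens, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                if tokens:
                    sentences.append((tokens, tags))
                    tokens, tags = [], []
                continue
            fields = line.split()
            tokens.append(fields[0])
            tags.append(fields[-1])
    if tokens:
        sentences.append((tokens, tags))
    return sentences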

Supported Tasks and Leaderboards

This dataset is used for species Named Entity Recognition.
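A minimal loading sketch with the Hugging Face datasets library, assuming the cambridgeltl/linnaeus repository id under which this card is published; depending on your datasets version, the trust_remote_code flag may or may not be required for script-based repositories:

from datasets import load_dataset

# Load the train, validation and test splits.
# trust_remote_code=True is assumed to be needed because the repository
# ships a Python loading script; adjust for your datasets version.
ds = load_dataset("cambridgeltl/linnaeus", trust_remote_code=True)

print(ds)              # DatasetDict with train, validation and test splits
print(ds["train"][2])  # one sentence with its tokens and ner_tags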

Languages

The dataset is in English.

Dataset Structure

Data Instances

An example from the dataset is:

{'id': '2',
'tokens': ['Scp160p', 'is', 'a', '160', 'kDa', 'protein', 'in', 'the', 'yeast', 'Saccharomyces', 'cerevisiae', 'that', 'contains', '14', 'repeats', 'of', 'the', 'hnRNP', 'K', '-', 'homology', '(', 'KH', ')', 'domain', ',', 'and', 'demonstrates', 'significant', 'sequence', 'homology', 'to', 'a', 'family', 'of', 'proteins', 'collectively', 'known', 'as', 'vigilins', '.'],
'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}

Data Fields

  • id: Sentence identifier.
  • tokens: Array of tokens composing a sentence.
  • ner_tags: Array of tags, one per token: 0 marks tokens outside any species mention, 1 the first token of a species mention, and 2 subsequent tokens of the same mention (see the decoding sketch below).
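A minimal decoding sketch using a fragment of the example sentence above; the label names O, B and I are placeholders for the three tag values and are not necessarily the strings used by the loading script:

# Hypothetical label names for the three tag values described above.
label_names = ["O", "B", "I"]  # 0: outside, 1: first token, 2: subsequent tokens

example = {
    "tokens": ["the", "yeast", "Saccharomyces", "cerevisiae"],
    "ner_tags": [0, 1, 1, 2],
}

# Pair each token with its decoded tag.
decoded = [(tok, label_names[tag]) for tok, tag in zip(example["tokens"], example["ner_tags"])]
print(decoded)  # [('the', 'O'), ('yeast', 'B'), ('Saccharomyces', 'B'), ('cerevisiae', 'I')]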

Data Splits

name       train   validation   test
linnaeus   11936   4079         7143
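These sizes can be checked once the dataset is loaded (same assumptions as the loading sketch in Supported Tasks and Leaderboards):

from datasets import load_dataset

ds = load_dataset("cambridgeltl/linnaeus", trust_remote_code=True)
for split in ("train", "validation", "test"):
    print(split, len(ds[split]))  # expected: 11936, 4079 and 7143 sentences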

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

This version of the dataset is licensed under Creative Commons Attribution 4.0 International.

Citation Information

@article{crichton2017neural,
  title = {A neural network multi-task learning approach to biomedical named entity recognition},
  author = {Crichton, Gamal and Pyysalo, Sampo and Chiu, Billy and Korhonen, Anna},
  journal = {BMC Bioinformatics},
  volume = {18},
  number = {1},
  pages = {368},
  year = {2017},
  publisher = {BioMed Central},
  doi = {10.1186/s12859-017-1776-8},
  issn = {1471-2105},
  url = {https://doi.org/10.1186/s12859-017-1776-8}
}
@article{Gerner2010,
  author = {Gerner, Martin and Nenadic, Goran and Bergman, Casey M},
  doi = {10.1186/1471-2105-11-85},
  issn = {1471-2105},
  journal = {BMC Bioinformatics},
  number = {1},
  pages = {85},
  title = {{LINNAEUS: A species name identification system for biomedical literature}},
  url = {https://doi.org/10.1186/1471-2105-11-85},
  volume = {11},
  year = {2010}
}

Contributions

Thanks to @edugp for adding this dataset.
