---
dataset_info:
  features:
    - name: entities
      list:
        - name: end
          dtype: int64
        - name: start
          dtype: int64
        - name: type
          dtype: string
    - name: tokens
      sequence: string
    - name: relations
      list:
        - name: head
          dtype: int64
        - name: tail
          dtype: int64
        - name: type
          dtype: string
    - name: orig_id
      dtype: int64
  splits:
    - name: train
      num_bytes: 358752
      num_examples: 922
    - name: validation
      num_bytes: 94688
      num_examples: 231
    - name: test
      num_bytes: 114248
      num_examples: 288
  download_size: 204955
  dataset_size: 567688
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
task_categories:
  - token-classification
language:
  - en
tags:
  - relation-extraction
pretty_name: CoNLL04
size_categories:
  - 1K<n<10K
---

Dataset Card for CoNLL04

Dataset Description

Dataset Summary

The CoNLL04 dataset is a benchmark for relation extraction. It contains 1,437 sentences, each of which has at least one relation, annotated with entities and their corresponding relation types. The data in this repository was converted from the original CoNLL04 format to JSONL using the conversion script at https://github.com/lavis-nlp/spert/blob/master/scripts/conversion/convert_conll04.py

The original data can be found here: https://cogcomp.seas.upenn.edu/page/resource_view/43

The sentences in this dataset are tokenized and annotated with entity types (Peop, Loc, Org, Other) and relation types (Located_In, Work_For, OrgBased_In, Live_In, Kill).
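The dataset can be loaded with the Hugging Face datasets library. A minimal sketch, assuming the repository id phucdev/conll04 (adjust the path if the dataset is hosted under a different id):

from datasets import load_dataset

# Repository id assumed from this card; replace it if the dataset lives elsewhere.
dataset = load_dataset("phucdev/conll04")

print(dataset)                        # DatasetDict with train/validation/test splits
print(dataset["train"][0]["tokens"])  # tokens of the first training example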

Languages

The language in the dataset is English.

Dataset Structure

Data Instances

An example from the 'train' split looks as follows:

{
  "tokens": ["Newspaper", "`", "Explains", "'", "U.S.", "Interests", "Section", "Events", "FL1402001894", "Havana", "Radio", "Reloj", "Network", "in", "Spanish", "2100", "GMT", "13", "Feb", "94"],
  "entities": [
    {"type": "Loc", "start": 4, "end": 5},
    {"type": "Loc", "start": 9, "end": 10},
    {"type": "Org", "start": 10, "end": 13},
    {"type": "Other", "start": 15, "end": 17},
    {"type": "Other", "start": 17, "end": 20}
  ],
  "relations": [
    {"type": "OrgBased_In", "head": 2, "tail": 1}
  ],
  "orig_id": 3255
}

Data Fields

  • tokens: the tokenized text of the example, a sequence of string features.
  • entities: list of entities
    • type: entity type, a string feature.
    • start: start token index of the entity, an int64 feature.
    • end: exclusive end token index of the entity, an int64 feature.
  • relations: list of relations
    • type: relation type, a string feature.
    • head: index of the head entity in the entities list, an int64 feature.
    • tail: index of the tail entity in the entities list, an int64 feature.
  • orig_id: id of the example in the original data, an int64 feature.
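To make the index conventions concrete, here is a small sketch in plain Python that resolves entity spans to token text and relation head/tail indices to entities, using the example instance above; the helper name entity_text is ours and not part of the dataset:

example = {
    "tokens": ["Newspaper", "`", "Explains", "'", "U.S.", "Interests", "Section",
               "Events", "FL1402001894", "Havana", "Radio", "Reloj", "Network",
               "in", "Spanish", "2100", "GMT", "13", "Feb", "94"],
    "entities": [
        {"type": "Loc", "start": 4, "end": 5},
        {"type": "Loc", "start": 9, "end": 10},
        {"type": "Org", "start": 10, "end": 13},
        {"type": "Other", "start": 15, "end": 17},
        {"type": "Other", "start": 17, "end": 20},
    ],
    "relations": [{"type": "OrgBased_In", "head": 2, "tail": 1}],
}

def entity_text(example, entity):
    # "end" is exclusive, so the slice covers exactly the entity's tokens.
    return " ".join(example["tokens"][entity["start"]:entity["end"]])

for relation in example["relations"]:
    head = example["entities"][relation["head"]]  # head/tail index into "entities"
    tail = example["entities"][relation["tail"]]
    print(f'{entity_text(example, head)} --{relation["type"]}--> {entity_text(example, tail)}')

# prints: Radio Reloj Network --OrgBased_In--> Havana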

Citation

BibTeX:

@inproceedings{roth-yih-2004-linear,
    title = "A Linear Programming Formulation for Global Inference in Natural Language Tasks",
    author = "Roth, Dan  and
      Yih, Wen-tau",
    booktitle = "Proceedings of the Eighth Conference on Computational Natural Language Learning ({C}o{NLL}-2004) at {HLT}-{NAACL} 2004",
    month = may # " 6 - " # may # " 7",
    year = "2004",
    address = "Boston, Massachusetts, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/W04-2401",
    pages = "1--8",
}
@article{eberts-ulges2019spert,
  author       = {Markus Eberts and
                  Adrian Ulges},
  title        = {Span-based Joint Entity and Relation Extraction with Transformer Pre-training},
  journal      = {CoRR},
  volume       = {abs/1909.07755},
  year         = {2019},
  url          = {http://arxiv.org/abs/1909.07755},
  eprinttype    = {arXiv},
  eprint       = {1909.07755},
  timestamp    = {Mon, 23 Sep 2019 18:07:15 +0200},
  biburl       = {https://dblp.org/rec/journals/corr/abs-1909-07755.bib},
  bibsource    = {dblp computer science bibliography, https://dblp.org}
}

APA:

  • Roth, D., & Yih, W. (2004). A linear programming formulation for global inference in natural language tasks. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004 (pp. 1-8). Boston, Massachusetts, USA: Association for Computational Linguistics. https://aclanthology.org/W04-2401
  • Eberts, M., & Ulges, A. (2019). Span-based joint entity and relation extraction with transformer pre-training. CoRR, abs/1909.07755. http://arxiv.org/abs/1909.07755

Dataset Card Authors

@phucdev