---
language:
  - en
size_categories:
  - 100K<n<1M
dataset_info:
  features:
    - name: text
      dtype: string
    - name: id
      dtype: int64
    - name: triplets
      list:
        - name: object
          struct:
            - name: surfaceform
              dtype: string
            - name: uri
              dtype: string
        - name: predicate
          struct:
            - name: surfaceform
              dtype: string
            - name: uri
              dtype: string
        - name: subject
          struct:
            - name: surfaceform
              dtype: string
            - name: uri
              dtype: string
    - name: entities
      list:
        - name: surfaceform
          dtype: string
        - name: uri
          dtype: string
    - name: relations
      list:
        - name: surfaceform
          dtype: string
        - name: uri
          dtype: string
    - name: linearized_fully_expanded
      dtype: string
    - name: linearized_subject_collapsed
      dtype: string
  splits:
    - name: train
      num_bytes: 117206023
      num_examples: 223538
    - name: test
      num_bytes: 15597162
      num_examples: 29620
    - name: stratified_test_1K
      num_bytes: 608393
      num_examples: 1000
    - name: val
      num_bytes: 522524
      num_examples: 980
  download_size: 61105204
  dataset_size: 133934102
tags:
  - wikipedia
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: stratified_test_1K
        path: data/stratified_test_1K-*
      - split: val
        path: data/val-*
---

# Dataset Card for "wiki-nre"

## Feature

The Wiki-NRE dataset displays a significant skew in its relation distribution: the top 10 relations constitute 92% of the triplets, with the top 3 alone accounting for 69%. We have therefore created `stratified_test_1K`, a split downsampled from the test set to 1,000 examples with a balanced distribution of relations.
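
As a minimal usage sketch (assuming the `datasets` library and the repo id `saibo/wiki-nre`, which is inferred from this card and may differ), the snippet below loads the dataset, inspects the nested triplet structure, and roughly reproduces the relation-frequency check behind the stratified split:

```python
from collections import Counter

from datasets import load_dataset

# Repo id is an assumption inferred from this card; adjust if needed.
ds = load_dataset("saibo/wiki-nre")

# Each example pairs raw text with KB-linked triplets, entities, and relations.
example = ds["train"][0]
print(example["text"])
print(example["triplets"])                   # list of {subject, predicate, object}, each with surfaceform + uri
print(example["linearized_fully_expanded"])  # linearized form of the same triplets

# Rough check of the relation skew described above: count predicate URIs on the train split.
relation_counts = Counter(
    triplet["predicate"]["uri"]
    for ex in ds["train"]
    for triplet in ex["triplets"]
)
total = sum(relation_counts.values())
for uri, count in relation_counts.most_common(10):
    print(f"{uri}\t{count / total:.1%}")

# The relation-balanced 1K split is available directly:
print(len(ds["stratified_test_1K"]))  # 1000
```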

## Catalog [Optional]

A corresponding catalog (a list of the subset of entities and relations used in this dataset) can be found here: https://huggingface.co/datasets/saibo/wikinre_catalog
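
A minimal sketch of pulling the catalog alongside the main dataset (the catalog's splits and columns are not documented in this card, so the snippet only loads and prints it):

```python
from datasets import load_dataset

# Repo id taken from the link above; the catalog's internal structure is not
# described in this card, so we only load it and print its layout.
catalog = load_dataset("saibo/wikinre_catalog")
print(catalog)
```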

## Source

```bibtex
@inproceedings{trisedya-etal-2019-neural,
    title = "Neural Relation Extraction for Knowledge Base Enrichment",
    author = "Trisedya, Bayu Distiawan  and
      Weikum, Gerhard  and
      Qi, Jianzhong  and
      Zhang, Rui",
    editor = "Korhonen, Anna  and
      Traum, David  and
      M{\`a}rquez, Llu{\'\i}s",
    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P19-1023",
    doi = "10.18653/v1/P19-1023",
    pages = "229--240",
    abstract = "We study relation extraction for knowledge base (KB) enrichment. Specifically, we aim to extract entities and their relationships from sentences in the form of triples and map the elements of the extracted triples to an existing KB in an end-to-end manner. Previous studies focus on the extraction itself and rely on Named Entity Disambiguation (NED) to map triples into the KB space. This way, NED errors may cause extraction errors that affect the overall precision and recall. To address this problem, we propose an end-to-end relation extraction model for KB enrichment based on a neural encoder-decoder model. We collect high-quality training data by distant supervision with co-reference resolution and paraphrase detection. We propose an n-gram based attention model that captures multi-word entity names in a sentence. Our model employs jointly learned word and entity embeddings to support named entity disambiguation. Finally, our model uses a modified beam search and a triple classifier to help generate high-quality triples. Our model outperforms state-of-the-art baselines by 15.51{\%} and 8.38{\%} in terms of F1 score on two real-world datasets.",
}
```

[More Information Needed]