
Dataset Card for KGEditor

Supported Tasks and Leaderboards

The KGE editing task has two goals: correcting erroneous knowledge stored in a KGE model and injecting new knowledge into it. In response to these objectives, we design two sub-tasks, EDIT and ADD. In the EDIT sub-task, we edit wrong factual knowledge that is stored in the KG embeddings; in the ADD sub-task, we inject brand-new knowledge into the model without re-training the whole model.

Dataset Summary

We build four datasets for the EDIT and ADD sub-tasks based on two benchmark datasets, FB15k-237 and WN18RR. First, we train KG embedding models with language models. For the EDIT sub-task, we sample hard triples as candidates following the procedure described in the paper. For the ADD sub-task, we leverage the original training sets of FB15k-237 and WN18RR to build the pre-train dataset (original pre-train data) and take the triples from the standard inductive setting as the knowledge to add, since they are unseen during pre-training.

Dataset Structure

Data Instances

An example from E-FB15k237 (note that we have converted the IDs to text for easier understanding):

{
  "ori": ["Jennifer Connelly", "type of union", "Marriage"],
  "cor": ["Stephen Sondheim", "type of union", "Marriage"],
  "process": ["[MASK]", "type of union", "Marriage"],
  "label": "Jennifer Connelly"
}
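
To make the field relationships concrete, here is a minimal sketch in plain Python (the helper is ours, not part of the dataset's tooling) that rebuilds the process triple by masking the entity where cor differs from ori, with the original entity as the label:

# Illustration only: this helper is not from the dataset's tooling.
# Assumption: exactly one entity in "cor" differs from "ori".
def build_process(ori, cor, mask_token="[MASK]"):
    """Mask the corrupted entity; the original entity becomes the label."""
    process = list(cor)
    label = None
    for i, (o, c) in enumerate(zip(ori, cor)):
        if o != c:
            process[i] = mask_token  # hide the wrong entity
            label = o                # gold entity the model should restore
    return process, label

ori = ["Jennifer Connelly", "type of union", "Marriage"]
cor = ["Stephen Sondheim", "type of union", "Marriage"]
process, label = build_process(ori, cor)
print(process)  # ['[MASK]', 'type of union', 'Marriage']
print(label)    # Jennifer Connelly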

An example from A-FB15k237:

{
  "triples": ["Darryl F. Zanuck", "place of death", "Palm Springs"],
  "label": "Palm Springs",
  "head": 0
}
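
The head flag tells which position of the triple holds the unseen entity. The sketch below shows our reading, inferred only from this example (the label "Palm Springs" is the tail and head is 0); treat the 0/1 convention as an assumption:

# Illustration only; the 0/1 convention below is an assumption inferred
# from the example above, not documented behavior.
def masked_query(instance, mask_token="[MASK]"):
    """Mask the unseen entity so that predicting it recovers `label`."""
    h, r, t = instance["triples"]
    if instance["head"] == 1:      # assumed: head entity is the unseen one
        return [mask_token, r, t]
    return [h, r, mask_token]      # assumed: tail entity is the unseen one

instance = {
    "triples": ["Darryl F. Zanuck", "place of death", "Palm Springs"],
    "label": "Palm Springs",
    "head": 0,
}
print(masked_query(instance))  # ['Darryl F. Zanuck', 'place of death', '[MASK]']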

Data Fields

The data fields are the same across all splits. For the EDIT sub-task:

  • ori: the original fact from the pre-train dataset.
  • cor: the corrupted triple.
  • process: the corrupted triple with the wrong entity replaced by the [MASK] token.
  • label: a classification label; its scope is the entire set of entities.

For the ADD sub-task:

  • triples: the new knowledge to be injected into the model.
  • label: a classification label; its scope is the entire set of entities.
  • head: a flag indicating whether the head or the tail entity is the one that does not appear in the pre-train data.

Data Splits

Dataset      Pre-train   Train   Test    L-Test
E-FB15k237   310,117     3,087   3,087   7,051
A-FB15k237   215,082     2,000   -       16,872
E-WN18RR     93,003      1,491   1,401   5,003
A-WN18RR     69,721      2,000   -       10,000
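
Note that the splits are not stored in a single file format (the train files are JSON, while the test files are tab-separated), so datasets.load_dataset cannot infer one builder for the whole repository; loading each split explicitly works. A minimal sketch with hypothetical file names (check the repository listing for the actual paths of the subset you need):

from datasets import load_dataset

# The file names below are placeholders; substitute the actual paths for
# the subset you need (E-FB15k237, A-FB15k237, E-WN18RR, or A-WN18RR).
train = load_dataset("json", data_files={"train": "train.json"})["train"]
test = load_dataset("csv", data_files={"test": "test.txt"}, sep="\t")["test"]

print(train[0])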

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

For the EDIT sub-task, our data (E-FB15k237 and E-WN18RR) are based on FB15k-237 and WN18RR.

For the ADD sub-task, our data (A-FB15k237 and A-WN18RR) follow the same inductive settings as in the paper.

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

@article{DBLP:journals/corr/abs-2301-10405,
  author    = {Siyuan Cheng and
               Ningyu Zhang and
               Bozhong Tian and
               Zelin Dai and
               Feiyu Xiong and
               Wei Guo and
               Huajun Chen},
  title     = {Editing Language Model-based Knowledge Graph Embeddings},
  journal   = {CoRR},
  volume    = {abs/2301.10405},
  year      = {2023},
  url       = {https://doi.org/10.48550/arXiv.2301.10405},
  doi       = {10.48550/arXiv.2301.10405},
  eprinttype = {arXiv},
  eprint    = {2301.10405},
  timestamp = {Thu, 26 Jan 2023 17:49:16 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2301-10405.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Contributions

[More Information Needed]
