---
dataset_info:
  features:
    - name: input
      dtype: string
    - name: output
      dtype: string
    - name: paragraph
      dtype: string
    - name: source
      dtype: string
  splits:
    - name: train
      num_bytes: 538343
      num_examples: 230
  download_size: 146541
  dataset_size: 538343
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
language:
  - en
pretty_name: wiki-kg-prob
---

# wiki-kg-prob

LAMA (LAnguage Model Analysis)-style knowledge probing built on the WikiMIA dataset.
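Each row in the `train` split (230 examples) carries four string fields: a cloze-style probe (`input`), the gold answer (`output`), the supporting Wikipedia `paragraph`, and its `source`. A minimal sketch of consuming one such row — the field names come from the schema above, but the example row and the helper function are invented for illustration, not taken from the repository:

```python
# Sketch of using one wiki-kg-prob row for LAMA-style knowledge probing.
# Field names match the dataset schema; the example row itself is invented.

def build_probe(example: dict) -> tuple[str, str]:
    """Return the (prompt, gold_answer) pair for a single dataset row."""
    return example["input"], example["output"]

# Invented illustrative row with the four string features.
row = {
    "input": "The capital of France is [MASK].",
    "output": "Paris",
    "paragraph": "Paris is the capital and most populous city of France.",
    "source": "https://en.wikipedia.org/wiki/Paris",
}

prompt, answer = build_probe(row)
print(prompt)  # → The capital of France is [MASK].
print(answer)  # → Paris
```

With the `datasets` library installed, the split can presumably be loaded via `load_dataset("oneonlee/wiki-kg-prob", split="train")` (repo id assumed from the GitHub account below).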

## GitHub

https://github.com/oneonlee/wiki-kg-prob

## References

```bibtex
@inproceedings{shi2024detecting,
    title = {Detecting Pretraining Data from Large Language Models},
    author = {Weijia Shi and Anirudh Ajith and Mengzhou Xia and Yangsibo Huang and Daogao Liu and Terra Blevins and Danqi Chen and Luke Zettlemoyer},
    booktitle = {The Twelfth International Conference on Learning Representations},
    year = {2024},
    url = {https://openreview.net/forum?id=zWqr3MQuNs}
}

@inproceedings{petroni-etal-2019-language,
    title = "Language Models as Knowledge Bases?",
    author = {Petroni, Fabio and Rockt{\"a}schel, Tim and Riedel, Sebastian and Lewis, Patrick and Bakhtin, Anton and Wu, Yuxiang and Miller, Alexander},
    editor = "Inui, Kentaro and Jiang, Jing and Ng, Vincent and Wan, Xiaojun",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D19-1250",
    doi = "10.18653/v1/D19-1250",
    pages = "2463--2473"
}
```