---
license: mit
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 129624
      num_examples: 10000
    - name: validation_top1
      num_bytes: 10754
      num_examples: 1000
    - name: test_top1
      num_bytes: 10948
      num_examples: 1000
    - name: validation_1_10
      num_bytes: 11618
      num_examples: 1000
    - name: test_1_10
      num_bytes: 11692
      num_examples: 1000
    - name: validation_10_20
      num_bytes: 13401
      num_examples: 1000
    - name: test_10_20
      num_bytes: 13450
      num_examples: 1000
    - name: validation_20_30
      num_bytes: 15112
      num_examples: 1000
    - name: test_20_30
      num_bytes: 15069
      num_examples: 1000
    - name: validation_bottom50
      num_bytes: 15204
      num_examples: 1000
    - name: test_bottom50
      num_bytes: 15076
      num_examples: 1000
  download_size: 241234
  dataset_size: 261948
language:
  - en
viewer: true
task_categories:
  - text-generation
size_categories:
  - 1K<n<10K
---

# WikiSpell

## Description

This dataset is a custom implementation of the WikiSpell dataset introduced in *Character-Aware Models Improve Visual Text Rendering* by Liu et al. (2023).

As in the original WikiSpell dataset, the training set consists of 5,000 words sampled uniformly from the 50% least common Wiktionary words (drawn from this Wiktionary extraction), and 5,000 words sampled in proportion to their frequency from the 50% most common Wiktionary words.

The validation and test data are split into 5 sets, each sampled from a different frequency band of the corpus:

- 1% most common words
- 1-10% most common words
- 10-20% most common words
- 20-30% most common words
- 50% least common words

Unlike the original WikiSpell dataset, which used mC4, word frequencies here are computed over the first 100k sentences of OpenWebText (Skylion007/openwebtext).
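
For intuition, here is a rough sketch of the construction described above: count word occurrences over the first 100k OpenWebText records, rank a word list by those counts, and draw the two training halves. The whitespace tokenization, toy word list, and sample sizes are illustrative stand-ins, not the exact recipe used to build this dataset.

```python
import random
from collections import Counter
from itertools import islice

from datasets import load_dataset

# Count word occurrences over the first 100k OpenWebText records
# (plain whitespace tokenization is a stand-in for the real segmentation).
stream = load_dataset("Skylion007/openwebtext", split="train", streaming=True)
counts = Counter()
for record in islice(stream, 100_000):
    counts.update(record["text"].lower().split())

# Toy stand-in for the Wiktionary word list.
words = ["the", "house", "cat", "serendipity", "obsequious", "flummox"]

# Rank by corpus frequency; words unseen in the corpus count as zero.
ranked = sorted(words, key=lambda w: counts[w], reverse=True)
half = len(ranked) // 2
common, rare = ranked[:half], ranked[half:]

# Training mixture: uniform draws from the rare half,
# frequency-weighted draws from the common half (add-one avoids zero weights).
train = random.sample(rare, k=3) + random.choices(
    common, weights=[counts[w] + 1 for w in common], k=3
)
```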

## Usage

This dataset is used to test spelling in large language models. Labels should be computed from the `text` field, as in the following snippet (where `ds` is the loaded dataset):

```python
sample = ds["train"][0]
label = " ".join(sample["text"])  # joining a string spells it out: "cat" -> "c a t"
```

The labels themselves are not stored in the dataset files.
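
To materialize labels for every split at once, a `map` call works as well. A minimal sketch, assuming the dataset id matches this repository's path (`RomanCast/WikiSpell_custom`):

```python
from datasets import load_dataset

# Dataset id assumed from this repository's path.
ds = load_dataset("RomanCast/WikiSpell_custom")

def add_label(sample):
    # Spell the word out character by character, separated by spaces.
    sample["label"] = " ".join(sample["text"])
    return sample

ds = ds.map(add_label)  # applies to every split of the DatasetDict
print(ds["test_top1"][0])
```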

## Citation

If you use this dataset, please cite the original paper introducing WikiSpell:

```bibtex
@inproceedings{liu-etal-2023-character,
    title = "Character-Aware Models Improve Visual Text Rendering",
    author = "Liu, Rosanne  and
      Garrette, Dan  and
      Saharia, Chitwan  and
      Chan, William  and
      Roberts, Adam  and
      Narang, Sharan  and
      Blok, Irina  and
      Mical, Rj  and
      Norouzi, Mohammad  and
      Constant, Noah",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-long.900",
    pages = "16270--16297",
}
```