---
annotations_creators:
  - other
language_creators:
  - found
language:
  - en
license:
  - cc-by-4.0
multilinguality:
  - multilingual
paperswithcode_id: null
pretty_name: wikipedia pages chunked for fill-mask
size_categories:
  - 10M<n<100M
source_datasets:
  - original
task_categories:
  - fill-mask
---

Preprocessed version of rcds/wikipedia-persons-masked.


Dataset Description

  • Homepage:
  • Repository:
  • Paper:
  • Leaderboard:
  • Point of Contact:

Dataset Summary

Contains ~70k pages from Wikipedia, each describing a person. For each page, the person described in the text is masked with a <mask> token.

Supported Tasks and Leaderboards

The dataset supports the fill-mask task, but can also be used for other tasks such as question answering, e.g. "Who is <mask>?".
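
A masked chunk can be fed directly to a standard fill-mask pipeline. A minimal sketch, assuming the 512-token version and the xlm-roberta-large checkpoint whose tokenizer was used for chunking (the input text here is invented):

from transformers import pipeline

# fill-mask pipeline with the model whose tokenizer produced the 512-token chunks
fill = pipeline('fill-mask', model='xlm-roberta-large')

# invented example: a chunk where the person's name was replaced by the mask token
chunk = "<mask> was a physicist who developed the theory of general relativity."

for prediction in fill(chunk):
    print(prediction['token_str'], prediction['score'])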

Languages

English only.

Dataset Structure

The /data directory contains different versions of the full dataset: original and paraphrased texts, each chunked to 4096 and to 512 tokens.

Use the dataset like this:

from datasets import load_dataset

# load the original (non-paraphrased) chunks; 'type' and 'size' are
# parameters of this dataset's loading script
dataset = load_dataset('rcds/wikipedia-persons-masked', split='train', type='original', size='512')

Data Fields

Columns are:

  • texts: the text chunks
  • masks: the name behind each mask token in the chunk (see the sketch below)
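
For instance, a single record can be inspected like this (a sketch, assuming the dataset was loaded as shown above):

record = dataset[0]

# one text chunk, with the person's name replaced by mask tokens
print(record['texts'])

# the name behind each mask in that chunk
print(record['masks'])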

Data Splits

There are no predefined splits; all data is in the default train split.
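
If a held-out split is needed, it can be derived locally, for example with the datasets library's train_test_split (a sketch):

# carve a 10% test split out of the single train split
splits = dataset.train_test_split(test_size=0.1, seed=42)
train_set, test_set = splits['train'], splits['test']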

Dataset Creation

Created by chunking each page with the tokenizer from allenai/longformer-base-4096 for the 4096-tokens-per-chunk version, and with the xlm-roberta-large tokenizer for the 512-token version. Chunks are split to fit those token budgets, with split points chosen so that no word is cut in half. A possible improvement: the last chunk of a page can be much shorter than the limit, and could absorb part of the previous chunk to carry more tokens.
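
The exact preprocessing code is not part of this card, but here is a minimal sketch of the chunking idea, assuming whitespace word boundaries and counting tokens word by word (which approximates, rather than reproduces, the real tokenization):

from transformers import AutoTokenizer

# tokenizer for the 4096-token version; use xlm-roberta-large for the 512-token one
tokenizer = AutoTokenizer.from_pretrained('allenai/longformer-base-4096')

def chunk_text(text, max_tokens=4096):
    # greedily pack whole words into chunks of at most max_tokens tokens,
    # so no word is ever split across two chunks
    chunks, current, length = [], [], 0
    for word in text.split():
        n = len(tokenizer.tokenize(word))
        if current and length + n > max_tokens:
            chunks.append(' '.join(current))
            current, length = [], 0
        current.append(word)
        length += n
    if current:
        chunks.append(' '.join(current))
    return chunks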

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

TODO add citation

Contributions

Thanks to @skatinger for adding this dataset.