
Dataset Card for "wiki40b"

Dataset Summary

Cleaned-up text for 40+ Wikipedia language editions of pages corresponding to entities. The dataset has train/dev/test splits per language. It is cleaned by page filtering to remove disambiguation, redirect, deleted, and non-entity pages. Each example contains the Wikidata ID of the entity and the full Wikipedia article after page processing that removes non-content sections and structured objects.

Supported Tasks and Leaderboards

More Information Needed

Languages

More Information Needed

Dataset Structure

Data Instances

en

  • Size of downloaded dataset files: 0.00 MB
  • Size of the generated dataset: 9988.05 MB
  • Total amount of disk used: 9988.05 MB

An example of 'train' looks as follows.
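
The actual record is omitted here, so the sketch below is hypothetical: the values are placeholders consistent with the schema listed under Data Fields, and the _START_ARTICLE_ / _START_SECTION_ / _START_PARAGRAPH_ tokens are the structure markers that wiki40b embeds in the text field.

    {
        "wikidata_id": "Q123456",
        "text": "_START_ARTICLE_ Example Title _START_SECTION_ Example Section _START_PARAGRAPH_ First sentence of the paragraph..._NEWLINE_Next line of the same paragraph...",
        "version_id": "1234567890123456789"
    }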


Data Fields

The data fields are the same among all splits.

en

  • wikidata_id: a string feature.
  • text: a string feature.
  • version_id: a string feature.
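
A minimal loading sketch with the Hugging Face datasets library. The wiki40b builder does not support streaming, and depending on the datasets version it may need an Apache Beam runner to prepare the data, so treat this as a sketch under those assumptions rather than a guaranteed recipe:

    from datasets import load_dataset

    # Load the English configuration. Passing streaming=True would fail,
    # since the wiki40b builder is not streamable.
    ds = load_dataset("wiki40b", "en", split="train")

    print(ds.features)               # wikidata_id, text, version_id (all strings)
    example = ds[0]
    print(example["wikidata_id"], example["text"][:200])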

Data Splits

name  train    validation  test
en    2926536  163597      162274
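
As a sanity check against the table above, a short sketch (same loading caveats as before) that prints each split's size:

    from datasets import load_dataset

    ds = load_dataset("wiki40b", "en")   # DatasetDict with train/validation/test
    for split_name, split in ds.items():
        print(split_name, len(split))
    # Expected per the table: train 2926536, validation 163597, test 162274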

Dataset Creation

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed

Annotations

Annotation process

More Information Needed

Who are the annotators?

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

More Information Needed

Citation Information

@inproceedings{guo-etal-2020-wiki,
    title = "Wiki-40B: Multilingual Language Model Dataset",
    author = "Guo, Mandy and Dai, Zihang and Vrandečić, Denny and Al-Rfou, Rami",
    booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference (LREC 2020)",
    year = "2020",
}

Contributions

Thanks to @jplu, @patrickvonplaten, @thomwolf, @albertvillanova, @lhoestq for adding this dataset.
