---
dataset_info:
  config_name: continuation
  features:
    - name: input
      dtype: string
    - name: output
      dtype: string
  splits:
    - name: test
      num_bytes: 4455412
      num_examples: 72831
  download_size: 1417573
  dataset_size: 4455412
configs:
  - config_name: continuation
    data_files:
      - split: test
        path: continuation/test-*
---

# Dataset Card for wiki_bio

This is a preprocessed version of the wiki_bio dataset for running benchmarks in LM-Polygraph.

## Uses

### Direct Use

This dataset is intended for running benchmarks with LM-Polygraph.

### Out-of-Scope Use

This dataset is already preprocessed and should not be used as a source for further dataset preprocessing.

## Dataset Structure

This dataset contains the "continuation" subset, which corresponds to the main dataset used in LM-Polygraph. It may also contain other subsets, corresponding to the instruct methods used in LM-Polygraph.

Each subset contains up to two splits, train and test; the continuation subset here provides only a test split. Each split contains two string columns: "input", the processed input for LM-Polygraph, and "output", the corresponding processed output.
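
For example, the test split can be loaded and inspected with the `datasets` library (a minimal sketch; the repository id below is a placeholder for wherever this dataset is hosted on the Hub):

```python
from datasets import load_dataset

# Placeholder repository id: replace with the actual Hub path of this dataset.
dataset = load_dataset("<namespace>/wiki_bio", "continuation", split="test")

# Each example carries two string fields: "input" and "output".
example = dataset[0]
print(example["input"])
print(example["output"])
```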

## Dataset Creation

### Curation Rationale

This dataset was created to separate the dataset-creation code from the benchmarking code.

### Source Data

#### Data Collection and Processing

Data is collected from https://huggingface.co/datasets/wiki_bio and processed with the build_dataset.py script in this repository.
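
As a rough illustration only (a hypothetical sketch, not the actual build_dataset.py; the field handling below is an assumption about the source schema), a continuation-style input/output pair could be derived from each source record like this:

```python
from datasets import load_dataset

def build_continuation_examples(split: str = "test"):
    # Hypothetical sketch: the real build_dataset.py may construct the
    # "input" and "output" columns differently.
    source = load_dataset("wiki_bio", split=split)
    examples = []
    for record in source:
        # wiki_bio records pair structured infobox data ("input_text")
        # with a reference biography text ("target_text").
        prompt = record["input_text"]["context"].strip()
        target = record["target_text"].strip()
        examples.append({"input": prompt, "output": target})
    return examples
```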

#### Who are the source data producers?

The people who created https://huggingface.co/datasets/wiki_bio.

## Bias, Risks, and Limitations

This dataset carries the same biases, risks, and limitations as its source dataset, https://huggingface.co/datasets/wiki_bio.

### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset.