---
dataset_info:
  features:
  - name: document
    dtype: string
  - name: summary
    dtype: string
  splits:
  - name: train
    num_bytes: 23380
    num_examples: 100
  - name: validation
    num_bytes: 23634
    num_examples: 100
  - name: test
    num_bytes: 24038
    num_examples: 100
  download_size: 55197
  dataset_size: 71052
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
# Dataset Card for Dataset Name

This is a tiny version of https://huggingface.co/datasets/Harvard/gigaword, used for testing purposes.
## Dataset Details

### Dataset Description

This is a tiny version of https://huggingface.co/datasets/Harvard/gigaword.
It was created by selecting only the first 100 samples from each split.

- **Language(s) (NLP):** English
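The truncation step can be sketched as follows. This is a minimal sketch in plain Python (the card does not specify which tooling was actually used); the splits are modeled as lists of example dicts with made-up text, and the helper `take_first` is hypothetical:

```python
# Sketch of how the tiny dataset is built: keep only the first 100
# examples of each split. Plain-Python stand-in for whatever tooling
# was actually used (the card does not say).

def take_first(split, n=100):
    """Return the first n examples of a split (a list of dicts)."""
    return split[:n]

# Hypothetical full splits, modeled as lists of {document, summary} dicts.
full = {
    "train": [{"document": f"doc {i}", "summary": f"sum {i}"} for i in range(500)],
    "validation": [{"document": f"doc {i}", "summary": f"sum {i}"} for i in range(300)],
    "test": [{"document": f"doc {i}", "summary": f"sum {i}"} for i in range(300)],
}

tiny = {name: take_first(examples) for name, examples in full.items()}

for name, examples in tiny.items():
    print(name, len(examples))  # each split now holds 100 examples
```

With the real data, the same effect is achieved by slicing each split to its first 100 rows before re-uploading.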
## Uses

It is intended to be used only for testing purposes.
### Direct Use

Use it when you want to test that your code works, without downloading the full dataset.
## Dataset Structure

The dataset contains two string columns: `document` and `summary`. `document` holds the source document, and `summary` is a summary of that document.
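A single row can be pictured as a two-field record. The sketch below uses made-up text (real rows come from Gigaword) just to show the expected shape:

```python
# One example from the dataset: two string fields.
# The text is invented for illustration; it is not an actual Gigaword row.
example = {
    "document": "a long source sentence taken from a news article .",
    "summary": "short headline-style summary",
}

print(sorted(example))  # → ['document', 'summary']
```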
## Curation Rationale

To allow testing code without waiting a long time to download the full Gigaword dataset.