---
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: target
    dtype: string
  - name: input_tokens
    dtype: int64
  - name: target_tokens
    dtype: int64
  - name: subset
    dtype: string
  - name: language
    dtype: string
  splits:
  - name: train
    num_bytes: 3338029493
    num_examples: 187221
  - name: validation
    num_bytes: 218403099
    num_examples: 14542
  - name: test
    num_bytes: 201638368
    num_examples: 12467
  download_size: 1982559322
  dataset_size: 3758070960
task_categories:
- summarization
language:
- en
- de
- fr
- it
- es
size_categories:
- 100K<n<1M
license: apache-2.0
tags:
- chemistry
- biology
---
# Dataset Card for "sumstew"
## TL;DR:
Sumstew is an abstractive, multilingual dataset with a balanced number of samples from a diverse set of summarization datasets. Input sizes range up to 16384 tokens.
Samples are filtered using a diverse set of heuristics to encourage high coverage, accuracy, and factual consistency. Code to reproduce the dataset is available at *TODO*.
## Dataset Description
- **Dataset Identifier**: sumstew
- **Dataset Summary**: "SumStew" is a rich multilingual dataset for text summarization. It incorporates diverse data sources such as cnn_dailymail, samsum, mlsum (de, fr, es, it), klexikon, xlsum (fr, en, es), govreport, sciqa, piqa, pubmed_qa, multinews, laysum, booksum, dialogsum, fanpage (it), and ilpost (it). The data has been curated by filtering on n-gram overlap between the source and target documents and normalized to prevent undue bias. Every instance in the dataset is prefixed by an instruction (title, summary, or qa).
## Task Information
- **Task Categories**: The tasks covered by this dataset are primarily summarization tasks.
- **Languages**: This dataset supports multiple languages including English (en), German (de), French (fr), Italian (it), and Spanish (es).
## Dataset Structure
- **Data Instances**: Each data instance in the dataset comprises six fields: 'prompt', 'target', 'input_tokens', 'target_tokens', 'subset', and 'language'.
  - 'prompt': The input text for the task. (dtype: string)
  - 'target': The expected output for the task. (dtype: string)
  - 'input_tokens': The number of tokens in the prompt. (dtype: int64)
  - 'target_tokens': The number of tokens in the target. (dtype: int64)
  - 'subset': The subset of the dataset the instance belongs to. (dtype: string)
  - 'language': The language of the instance. (dtype: string)
- **Data Splits**: The dataset is split into three parts (see the loading sketch after this list):
  - 'train' set: 187221 examples
  - 'validation' set: 14542 examples
  - 'test' set: 12467 examples
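
As a quick orientation, the snippet below shows one way to load the dataset and inspect these fields with the `datasets` library. The repo id `Joemgu/sumstew` is an assumption based on this card's location; adjust it if the dataset is hosted under a different path.

```python
from datasets import load_dataset

# Repo id is an assumption; replace with the actual hub path if it differs.
ds = load_dataset("Joemgu/sumstew")

print(ds)  # shows the train/validation/test splits and their sizes

sample = ds["train"][0]
for field in ("prompt", "target", "input_tokens", "target_tokens", "subset", "language"):
    value = sample[field]
    # Truncate long strings so the preview stays readable.
    preview = value[:80] if isinstance(value, str) else value
    print(f"{field}: {preview}")
```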
## Dataset Statistics
- **Max Document Length**: The maximum document length is 16384 tokens, measured with the mlong-t5 tokenizer.
- **Max Output Length**: The maximum output length is 1024 tokens, measured with the same tokenizer.
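
Because per-example token counts are stored in the 'input_tokens' and 'target_tokens' fields, length-based filtering does not require re-tokenizing. A minimal sketch, assuming the repo id above; the thresholds are illustrative, not part of the dataset:

```python
from datasets import load_dataset

ds = load_dataset("Joemgu/sumstew", split="train")

# Keep only examples that fit a smaller context window,
# e.g. for models limited to 4096 input tokens.
short_ds = ds.filter(
    lambda ex: ex["input_tokens"] <= 4096 and ex["target_tokens"] <= 512
)
print(len(short_ds), "of", len(ds), "examples fit the reduced budget")
```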
## Additional Information
- **Data Collection**: The data has been collected from a variety of sources spanning different languages and domains, ensuring a diverse and comprehensive dataset.
- **Data Cleaning**: The dataset has been filtered by checking the n-gram overlap between the source and target documents and dropping samples with too much or too little overlap, as well as through normalization (see the sketch after this list).
- **Known Limitations**: As the dataset is generated from diverse sources, the inherent biases or limitations of those sources may persist in this dataset as well.
- **Usage Scenarios**: This dataset can be used for training and evaluating models on tasks like summarization and question-answering, in a multilingual context.
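
The exact filtering code is not published here (the reproduction link above is still a TODO), but the overlap check can be sketched as follows. The n-gram size and both thresholds are illustrative assumptions, not the values used to build the dataset:

```python
def ngrams(text: str, n: int = 3) -> set:
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i : i + n]) for i in range(len(words) - n + 1)}

def keep_sample(source: str, target: str,
                lo: float = 0.1, hi: float = 0.8, n: int = 3) -> bool:
    """Drop pairs whose target n-gram overlap with the source is too low
    (likely unfaithful) or too high (likely a near-extractive copy).
    lo, hi, and n are hypothetical values for illustration."""
    tgt = ngrams(target, n)
    if not tgt:
        return False
    overlap = len(tgt & ngrams(source, n)) / len(tgt)
    return lo <= overlap <= hi
```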
## Credits
At this point I want to thank every creator of the underlying datasets (there are too many for me to count). If there are any issues concerning licensing, or you want your data removed from the dataset, feel free to DM me on Twitter (link in profile).
Special thanks to [@pszemraj](https://huggingface.co/pszemraj) for the inspiration.
If you are interested in collaboration or consulting for your project, feel free to DM https://twitter.com/StutterBuddy