---
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: target
    dtype: string
  - name: input_tokens
    dtype: int64
  - name: target_tokens
    dtype: int64
  - name: subset
    dtype: string
  - name: language
    dtype: string
  splits:
  - name: train
    num_bytes: 3338029493
    num_examples: 187221
  - name: validation
    num_bytes: 218403099
    num_examples: 14542
  - name: test
    num_bytes: 201638368
    num_examples: 12467
  download_size: 1982559322
  dataset_size: 3758070960
task_categories:
- summarization
language:
- en
- de
- fr
- it
- es
size_categories:
- 100K<n<1M
license: apache-2.0
tags:
- chemistry
- biology
---
# Dataset Card for "sumstew"

## TL;DR: 

Sumstew is an abstractive, multilingual summarization dataset with a balanced number of samples drawn from a diverse set of source datasets. Input lengths range up to 16,384 tokens.
Samples were filtered using a diverse set of heuristics to encourage high coverage, accuracy, and factual consistency. Code to reproduce the dataset available at *TODO*

## Dataset Description

- **Dataset Identifier**: sumstew
- **Dataset Summary**: "SumStew" is a rich multilingual dataset for text summarization. It incorporates diverse data sources such as cnn_dailymail, samsum, mlsum (de, fr, es, it), klexikon, xlsum (fr, en, es), govreport, sciqa, piqa, pubmed_qa, multinews, laysum, booksum, dialogsum, fanpage (it), and ilpost (it). The data has been curated by filtering on n-gram overlap between source and target documents, and normalized to prevent undue bias toward any single source. Every instance in this dataset is prefixed by an instruction (title, summary, or qa).

## Task Information

- **Task Categories**: The tasks covered by this dataset are primarily summarization tasks.
- **Languages**: This dataset supports multiple languages including English (en), German (de), French (fr), Italian (it), and Spanish (es).

## Dataset Structure

- **Data Instances**: Each data instance comprises six fields - 'prompt', 'target', 'input_tokens', 'target_tokens', 'subset', and 'language' (see the loading sketch after this list).
    - 'prompt': The input text for the task, including the instruction prefix. (dtype: string)
    - 'target': The expected output for the task. (dtype: string)
    - 'input_tokens': The token count of the prompt. (dtype: int64)
    - 'target_tokens': The token count of the target. (dtype: int64)
    - 'subset': The subset of the dataset the instance belongs to. (dtype: string)
    - 'language': The language of the instance. (dtype: string)
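
For illustration, a minimal sketch of loading the dataset and inspecting one instance (the hub id below is a placeholder assumption; substitute the actual repository path of this dataset):

```python
# Minimal sketch: load the dataset and print one training instance.
# "sumstew" is a placeholder id; replace it with the full hub path of this repo.
from datasets import load_dataset

ds = load_dataset("sumstew")

example = ds["train"][0]
for field in ("prompt", "target", "input_tokens", "target_tokens", "subset", "language"):
    value = example[field]
    # Truncate long strings so the prompt/target previews stay readable.
    print(field, ":", value[:80] if isinstance(value, str) else value)
```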

- **Data Splits**: The dataset is divided into three splits:
    - 'train' set: 187,221 examples
    - 'validation' set: 14,542 examples
    - 'test' set: 12,467 examples

## Dataset Statistics

- **Max Document Length**: The maximum document length is 16,384 tokens, measured with an mLongT5 tokenizer.
- **Max Output Length**: The maximum output length is 1,024 tokens, measured with an mLongT5 tokenizer.
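
As a sketch of how these limits can be checked, the snippet below counts tokens for a sample. The card does not name an exact checkpoint, so `google/long-t5-tglobal-base` is used here purely as a stand-in assumption for the mLongT5 tokenizer:

```python
# Sketch: verify a sample fits the card's stated length limits.
# The tokenizer checkpoint is an assumption, not the authors' exact choice.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")

MAX_INPUT_TOKENS = 16_384
MAX_TARGET_TOKENS = 1_024

def within_limits(example: dict) -> bool:
    n_in = len(tokenizer(example["prompt"]).input_ids)
    n_out = len(tokenizer(example["target"]).input_ids)
    return n_in <= MAX_INPUT_TOKENS and n_out <= MAX_TARGET_TOKENS
```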

## Additional Information

- **Data Collection**: The data has been collected from a variety of sources spanning different languages and domains, ensuring a diverse and comprehensive dataset.
- **Data Cleaning**: The dataset has been filtered by checking the n-gram overlap between source and target documents and dropping samples with too much or too little overlap, and further cleaned through normalization (see the sketch after this list).
- **Known Limitations**: As the dataset is generated from diverse sources, the inherent biases or limitations of those sources may persist in this dataset as well.
- **Usage Scenarios**: This dataset can be used for training and evaluating models on tasks like summarization and question-answering, in a multilingual context.
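
A minimal sketch of the overlap filter described above; the n-gram size and the thresholds are illustrative assumptions, not the values actually used to build the dataset:

```python
# Sketch of an n-gram overlap filter: drop pairs whose target shares too many
# n-grams with the source (too extractive) or too few (likely unfaithful).
# The values of n, lo, and hi are illustrative assumptions.
def ngrams(text: str, n: int = 3) -> set:
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def keep_sample(source: str, target: str,
                n: int = 3, lo: float = 0.1, hi: float = 0.7) -> bool:
    src, tgt = ngrams(source, n), ngrams(target, n)
    if not tgt:
        return False  # target too short to judge
    overlap = len(src & tgt) / len(tgt)  # fraction of target n-grams found in source
    return lo <= overlap <= hi
```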

## Credits

At this point I want to thank every creator of the underlying datasets (there are too many for me to count). If there are any issues concerning licensing, or if you want your data removed from the dataset, feel free to DM me over Twitter (link in profile).
Special thanks to @pszemraj [https://huggingface.co/pszemraj] for the inspiration.

If you are interested in collaboration or consulting for your project, feel free to DM https://twitter.com/StutterBuddy