Sasha Luccioni committed on
Commit
63209c7
1 Parent(s): 28afab6

Eval metadata batch 3: Reddit, Rotten Tomatoes, SemEval 2010, Sentiment 140, SMS Spam, Snips, SQuAD, SQuAD v2, Timit ASR (#4337)


* Eval metadata batch 3: Quora, Reddit, Rotten Tomatoes, SemEval 2010, Sentiment 140, SMS Spam, Snips, SQuAD, SQuAD v2, Timit ASR

* Update datasets/quora/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update README.md

removing ROUGE args

* Update datasets/rotten_tomatoes/README.md

Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>

* Update datasets/rotten_tomatoes/README.md

Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>

* Update datasets/squad/README.md

Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>

* Update datasets/squad_v2/README.md

Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>

* Update datasets/squad/README.md

Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>

* Update datasets/squad_v2/README.md

Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>

* Update datasets/squad_v2/README.md

Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>

* Update README.md

removing eval for quora

Co-authored-by: sashavor <sasha.luccioni@huggingface.co>
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>

Commit from https://github.com/huggingface/datasets/commit/8ccf58b77343f323ba6654250f88b69699a57b8e

Files changed (1)
  1. README.md +15 -3
README.md CHANGED

@@ -19,6 +19,18 @@ task_categories:
 - summarization
 task_ids:
 - summarization-other-reddit-posts-summarization
+train-eval-index:
+- config: default
+  task: summarization
+  task_id: summarization
+  splits:
+    train_split: train
+  col_mapping:
+    content: text
+    summary: target
+  metrics:
+  - type: rouge
+    name: Rouge
 ---
 
 # Dataset Card for Reddit Webis-TLDR-17
@@ -49,7 +61,7 @@ task_ids:
 
 ## Dataset Description
 
-- **Homepage:** [https://webis.de/data/webis-tldr-17.html](https://webis.de/data/webis-tldr-17.html)
+- **Homepage:** [https://webis.de/data/webis-tldr-17.html](https://webis.de/data/webis-tldr-17.html)
 - **Repository:** [https://github.com/webis-de/webis-tldr-17-corpus](https://github.com/webis-de/webis-tldr-17-corpus)
 - **Paper:** [https://aclanthology.org/W17-4508]
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -81,7 +93,7 @@ Known ROUGE scores achieved for the Webis-TLDR-17:
 
 ### Languages
 
-English
+English
 
 ## Dataset Structure
 
@@ -176,7 +188,7 @@ This dataset has been created to serve as a source of large-scale summarization
 
 Reddit users write TL;DRs with various intentions, such as providing a “true” summary, asking questions or for help, or forming judgments and conclusions. As noted in the paper introducing the dataset, although the first kind of TL;DR posts are most important for training summarization models, yet, the latter allow for various alternative summarization-related tasks.
 
-Although filtering was performed abusive language maybe still be present.
+Although filtering was performed, abusive language may still be present.
 
 ## Additional Information
 
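The `train-eval-index` entry added in this commit declares which dataset columns an evaluation job should read as input and target for the Reddit card (`content` → `text`, `summary` → `target`). A minimal sketch of how such a `col_mapping` can be applied to an example, using a hypothetical helper rather than any real `datasets` API:

```python
# Hypothetical helper (not part of the `datasets` library): rename the
# columns of one example according to a train-eval-index col_mapping.
def apply_col_mapping(example, col_mapping):
    """Return a copy of `example` with keys renamed per `col_mapping`."""
    return {col_mapping.get(key, key): value for key, value in example.items()}

# The Reddit card maps `content` -> `text` and `summary` -> `target`.
col_mapping = {"content": "text", "summary": "target"}
row = {"content": "A long Reddit post ...", "summary": "tl;dr of the post"}
print(apply_col_mapping(row, col_mapping))
# {'text': 'A long Reddit post ...', 'target': 'tl;dr of the post'}
```

Columns not listed in the mapping pass through unchanged, so the same sketch works for configs that only remap a subset of columns.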