Mario Šaško committed
Commit: f6a8a5a
Parent(s): 724c8fd

Update gigaword card and info (#3775)


* Fix gigaword url

* Update card

* Revert url change

* Regenerate info

Commit from https://github.com/huggingface/datasets/commit/ff3227aa40059ed101191a36aa7273c63633159d

Files changed (2):
  1. README.md +37 -29
  2. dataset_infos.json +1 -1
README.md CHANGED
@@ -1,11 +1,27 @@
 ---
+annotations_creators:
+- found
+language_creators:
+- found
 languages:
 - en
+licenses:
+- mit
+multilinguality:
+- monolingual
+size_categories:
+- 100K<n<1M
+source_datasets:
+- extended|gigaword_2003
+task_categories:
+- conditional-text-generation
+task_ids:
+- summarization
 paperswithcode_id: null
-pretty_name: gigaword
+pretty_name: Gigaword
 ---

-# Dataset Card for "gigaword"
+# Dataset Card for Gigaword

 ## Table of Contents
 - [Dataset Description](#dataset-description)
@@ -33,10 +49,10 @@ pretty_name: gigaword

 ## Dataset Description

-- **Homepage:** [https://github.com/harvardnlp/sent-summary](https://github.com/harvardnlp/sent-summary)
-- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+- **Repository:** [Gigaword repository](https://github.com/harvardnlp/sent-summary)
+- **Leaderboard:** [Gigaword leaderboard](https://paperswithcode.com/sota/text-summarization-on-gigaword)
+- **Paper:** [A Neural Attention Model for Abstractive Sentence Summarization](https://arxiv.org/abs/1509.00685)
+- **Point of Contact:** [Alexander Rush](mailto:arush@cornell.edu)
 - **Size of downloaded dataset files:** 551.61 MB
 - **Size of the generated dataset:** 918.35 MB
 - **Total amount of disk used:** 1469.96 MB
@@ -48,35 +64,23 @@ around 4 million articles. Use the 'org_data' provided by
 https://github.com/microsoft/unilm/ which is identical to
 https://github.com/harvardnlp/sent-summary but with better format.

-There are two features:
-  - document: article.
-  - summary: headline.
-
 ### Supported Tasks and Leaderboards

-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+- `summarization`: This dataset can be used for summarization, where given a document, the goal is to predict its summary. Model performance is evaluated using the [ROUGE](https://huggingface.co/metrics/rouge) metric. The leaderboard for this task is available [here](https://paperswithcode.com/sota/text-summarization-on-gigaword).

 ### Languages

-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+English.

 ## Dataset Structure

-We show detailed information for up to 5 configurations of the dataset.
-
 ### Data Instances

-#### default
-
-- **Size of downloaded dataset files:** 551.61 MB
-- **Size of the generated dataset:** 918.35 MB
-- **Total amount of disk used:** 1469.96 MB
-
 An example of 'train' looks as follows.
 ```
 {
-    "document": "train source",
-    "summary": "train target"
+    'document': "australia 's current account deficit shrunk by a record #.## billion dollars -lrb- #.## billion us -rrb- in the june quarter due to soaring commodity prices , figures released monday showed .",
+    'summary': 'australian current account deficit narrows sharply'
 }
 ```

@@ -84,7 +88,6 @@ An example of 'train' looks as follows.

 The data fields are the same among all splits.

-#### default
 - `document`: a `string` feature.
 - `summary`: a `string` feature.

@@ -104,17 +107,24 @@ The data fields are the same among all splits.

 #### Initial Data Collection and Normalization

-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+From the paper:
+> For our training set, we pair the headline of each article with its first sentence to create an input-summary pair. While the model could in theory be trained on any pair, Gigaword contains many spurious headline-article pairs. We therefore prune training based on the following heuristic filters: (1) Are there no non-stop-words in common? (2) Does the title contain a byline or other extraneous editing marks? (3) Does the title have a question mark or colon? After applying these filters, the training set consists of roughly J = 4 million title-article pairs. We apply a minimal preprocessing step using PTB tokenization, lower-casing, replacing all digit characters with #, and replacing of word types seen less than 5 times with UNK. We also remove all articles from the time-period of the DUC evaluation.
+> The complete input training vocabulary consists of 119 million word tokens and 110K unique word types with an average sentence size of 31.3 words. The headline vocabulary consists of 31 million tokens and 69K word types with the average title of length 8.3 words (note that this is significantly shorter than the DUC summaries). On average there are 4.6 overlapping word types between the headline and the input; although only 2.6 in the
+> first 75-characters of the input.

 #### Who are the source language producers?

-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+From the paper:
+> For training data for both tasks, we utilize the annotated Gigaword data set (Graff et al., 2003; Napoles et al., 2012), which consists of standard Gigaword, preprocessed with Stanford CoreNLP tools (Manning et al., 2014).

 ### Annotations

 #### Annotation process

-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+Annotations are inherited from the annotated Gigaword data set.
+
+Additional information from the paper:
+> Our model only uses annotations for tokenization and sentence separation, although several of the baselines use parsing and tagging as well.

 #### Who are the annotators?

@@ -150,8 +160,7 @@ The data fields are the same among all splits.

 ### Citation Information

-```
-
+```bibtex
 @article{graff2003english,
   title={English gigaword},
   author={Graff, David and Kong, Junbo and Chen, Ke and Maeda, Kazuaki},
@@ -171,7 +180,6 @@ The data fields are the same among all splits.
   author={Rush, Alexander M. and Chopra, Sumit and Weston, Jason},
   year={2015}
 }
-
 ```


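The card edited above documents two string features and three splits. As a quick sanity check, here is a minimal sketch, not part of this commit, assuming a version of the 🤗 `datasets` library contemporary with it, that loads Gigaword and prints one training example in the format shown in the card:

```python
# Minimal sketch (assumes `pip install datasets`): load the dataset
# whose card is edited above and inspect its splits and features.
from datasets import load_dataset

dataset = load_dataset("gigaword")

# Split sizes should match the regenerated dataset_infos.json below:
# train=3,803,957, validation=189,651, test=1,951 examples.
for split_name, split in dataset.items():
    print(split_name, split.num_rows)

# Each example carries the two string features described in the card.
example = dataset["train"][0]
print(example["document"])  # article first sentence (lower-cased, digits replaced with #)
print(example["summary"])   # headline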
dataset_infos.json CHANGED
@@ -1 +1 @@
- {"default": {"description": "\nHeadline-generation on a corpus of article pairs from Gigaword consisting of\naround 4 million articles. Use the 'org_data' provided by\nhttps://github.com/microsoft/unilm/ which is identical to\nhttps://github.com/harvardnlp/sent-summary but with better format.\n\nThere are two features:\n - document: article.\n - summary: headline.\n\n", "citation": "\n@article{graff2003english,\n title={English gigaword},\n author={Graff, David and Kong, Junbo and Chen, Ke and Maeda, Kazuaki},\n journal={Linguistic Data Consortium, Philadelphia},\n volume={4},\n number={1},\n pages={34},\n year={2003}\n}\n\n@article{Rush_2015,\n title={A Neural Attention Model for Abstractive Sentence Summarization},\n url={http://dx.doi.org/10.18653/v1/D15-1044},\n DOI={10.18653/v1/d15-1044},\n journal={Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},\n publisher={Association for Computational Linguistics},\n author={Rush, Alexander M. and Chopra, Sumit and Weston, Jason},\n year={2015}\n}\n", "homepage": "https://github.com/harvardnlp/sent-summary", "license": "", "features": {"document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": {"input": "document", "output": "summary"}, "builder_name": "gigaword", "config_name": "default", "version": {"version_str": "1.2.0", "description": null, "datasets_version_to_prepare": null, "major": 1, "minor": 2, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 451514, "num_examples": 1951, "dataset_name": "gigaword"}, "train": {"name": "train", "num_bytes": 916673137, "num_examples": 3803957, "dataset_name": "gigaword"}, "validation": {"name": "validation", "num_bytes": 45838081, "num_examples": 189651, "dataset_name": "gigaword"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1USoQ8lJgN8kAWnUnRrupMGrPMLlDVqlV": {"num_bytes": 578402958, "checksum": "bc0c4a2e1aa19cf2123688b87bc2d778c0d8fc24a4090e3c10a27c5faa1b898b"}}, "download_size": 578402958, "dataset_size": 962962732, "size_in_bytes": 1541365690}}
+ {"default": {"description": "\nHeadline-generation on a corpus of article pairs from Gigaword consisting of\naround 4 million articles. Use the 'org_data' provided by\nhttps://github.com/microsoft/unilm/ which is identical to\nhttps://github.com/harvardnlp/sent-summary but with better format.\n\nThere are two features:\n - document: article.\n - summary: headline.\n\n", "citation": "\n@article{graff2003english,\n title={English gigaword},\n author={Graff, David and Kong, Junbo and Chen, Ke and Maeda, Kazuaki},\n journal={Linguistic Data Consortium, Philadelphia},\n volume={4},\n number={1},\n pages={34},\n year={2003}\n}\n\n@article{Rush_2015,\n title={A Neural Attention Model for Abstractive Sentence Summarization},\n url={http://dx.doi.org/10.18653/v1/D15-1044},\n DOI={10.18653/v1/d15-1044},\n journal={Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},\n publisher={Association for Computational Linguistics},\n author={Rush, Alexander M. and Chopra, Sumit and Weston, Jason},\n year={2015}\n}\n", "homepage": "https://github.com/harvardnlp/sent-summary", "license": "", "features": {"document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "document", "output": "summary"}, "task_templates": null, "builder_name": "gigaword", "config_name": "default", "version": {"version_str": "1.2.0", "description": null, "major": 1, "minor": 2, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 915249388, "num_examples": 3803957, "dataset_name": "gigaword"}, "validation": {"name": "validation", "num_bytes": 45767096, "num_examples": 189651, "dataset_name": "gigaword"}, "test": {"name": "test", "num_bytes": 450782, "num_examples": 1951, "dataset_name": "gigaword"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1USoQ8lJgN8kAWnUnRrupMGrPMLlDVqlV": {"num_bytes": 578402958, "checksum": "bc0c4a2e1aa19cf2123688b87bc2d778c0d8fc24a4090e3c10a27c5faa1b898b"}}, "download_size": 578402958, "post_processing_size": null, "dataset_size": 961467266, "size_in_bytes": 1539870224}}