system HF staff committed on
Commit ea7d319 (0 parents)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,158 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - other-Microsoft Research Data License
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - extended|other-Open-American-National-Corpus-(OANC1)
+ task_categories:
+ - conditional-text-generation
+ task_ids:
+ - summarization
+ ---
+
+ # Dataset Card for MsrTextCompression
+
+ ## Table of Contents
+ - [Dataset Card for MsrTextCompression](#dataset-card-for-msrtextcompression)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://msropendata.com/datasets/f8ce2ec9-7fbd-48f7-a8bb-2d2279373563
+ - **Repository:**
+ - **Paper:** https://www.microsoft.com/en-us/research/wp-content/uploads/2016/09/Sentence_Compression_final-1.pdf
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ This dataset contains sentences and short paragraphs with corresponding shorter (compressed) versions. There are up to five compressions for each input text, together with quality judgements of their meaning preservation and grammaticality. The dataset is derived using source texts from the Open American National Corpus (www.anc.org) and crowd-sourcing.
+
+ ### Supported Tasks and Leaderboards
+
+ Text Summarization
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ The dataset contains approximately 6,000 source texts with multiple compressions (about 26,000 pairs of source and compressed texts), representing business letters, newswire, journals, and technical documents sampled from the Open American National Corpus (OANC1).
+
+ - Each source text is accompanied by up to five crowd-sourced rewrites constrained to a preset compression ratio and annotated with quality judgments. Multiple rewrites permit study of the impact of operations on human compression quality and facilitate automatic evaluation.
+ - This dataset is the first to provide compressions at the multi-sentence (two-sentence paragraph) level, which may provide a stepping stone toward whole-document summarization.
+ - Many of these two-sentence paragraphs are compressed both as paragraphs and separately sentence-by-sentence, offering data that may yield insights into the impact of multi-sentence operations on human compression quality.
+
+ | Description | Source | Target | Average CPS | Meaning Quality | Grammar Quality |
+ | :---------- | -----: | -----: | ----------: | --------------: | --------------: |
+ | 1-Sentence  | 3764   | 15523  | 4.12        | 2.78            | 2.81            |
+ | 2-Sentence  | 2405   | 10900  | 4.53        | 2.78            | 2.83            |
+
+ **Note**: Average CPS = average compressions per source text.
+
+ ### Data Fields
+
+ ```
+ {'domain': 'Newswire',
+  'source_id': '106',
+  'source_text': '" Except for this small vocal minority, we have just not gotten a lot of groundswell against this from members, " says APA president Philip G. Zimbardo of Stanford University.',
+  'targets': {'compressed_text': ['"Except for this small vocal minority, we have not gotten a lot of groundswell against this," says APA president Zimbardo.',
+    '"Except for a vocal minority, we haven\'t gotten much groundswell from members, " says Philip G. Zimbardo of Stanford University.',
+    'APA president of Stanford has stated that except for a vocal minority they have not gotten a lot of pushback from members.',
+    'APA president Philip G. Zimbardo of Stanford says they have not had much opposition against this.'],
+   'judge_id': ['2', '22', '10', '0'],
+   'num_ratings': [3, 3, 3, 3],
+   'ratings': [[6, 6, 6], [11, 6, 6], [6, 11, 6], [6, 11, 11]]}}
+ ```
+
+ - source_id: index of the article in the original dataset
+ - source_text: uncompressed original text
+ - domain: source of the article
+ - targets:
+   - compressed_text: compressed version of `source_text`
+   - judge_id: anonymized ids of the crowdworkers who proposed the compressions
+   - num_ratings: number of ratings available for each proposed compression
+   - ratings: encoded quality judgments; see the ratings system below
+
+ Ratings system (excerpted from the authors' README):
+
+ - 6 = Most important meaning, flawless language (3 on meaning and 3 on grammar in the paper's terminology)
+ - 7 = Most important meaning, minor errors (3 on meaning and 2 on grammar)
+ - 9 = Most important meaning, disfluent or incomprehensible (3 on meaning and 1 on grammar)
+ - 11 = Much meaning, flawless language (2 on meaning and 3 on grammar)
+ - 12 = Much meaning, minor errors (2 on meaning and 2 on grammar)
+ - 14 = Much meaning, disfluent or incomprehensible (2 on meaning and 1 on grammar)
+ - 21 = Little or no meaning, flawless language (1 on meaning and 3 on grammar)
+ - 22 = Little or no meaning, minor errors (1 on meaning and 2 on grammar)
+ - 24 = Little or no meaning, disfluent or incomprehensible (1 on meaning and 1 on grammar)
+
+ See **README.txt** in the data archive for additional details.
+
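+ Because each rating code jointly encodes a meaning score and a grammar score, it can be decoded with a small lookup table. A minimal sketch (the mapping is transcribed from the key above; the helper itself is ours, not part of the dataset):
+
+ ```
+ # Decode a raw rating code into (meaning, grammar) scores per the key above.
+ RATING_KEY = {
+     6: (3, 3), 7: (3, 2), 9: (3, 1),
+     11: (2, 3), 12: (2, 2), 14: (2, 1),
+     21: (1, 3), 22: (1, 2), 24: (1, 1),
+ }
+
+ def decode_rating(code):
+     return RATING_KEY[code]
+
+ # e.g. the third target's ratings [6, 11, 6] in the example above decode to
+ # [(3, 3), (2, 3), (3, 3)].
+ ```
+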
+ ### Data Splits
+
+ There are 4,936 source texts in the training set, 447 in the validation set, and 785 in the test set.
+
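+ The raw data must be downloaded manually from Microsoft (see the manual download instructions in the loader script below), so loading requires a `data_dir`. A minimal sketch, assuming the three TSVs are already in place:
+
+ ```
+ from datasets import load_dataset
+
+ # data_dir must contain train.tsv, valid.tsv, and test.tsv (manual download).
+ ds = load_dataset("msr_text_compression", data_dir="~/.manual_dir/msr_text_compression")
+
+ example = ds["train"][0]
+ print(example["source_text"])                 # uncompressed original
+ print(example["targets"]["compressed_text"])  # its crowd-sourced compressions
+
+ # Compressions per source text over the train split (cf. Average CPS above).
+ cps = sum(len(ex["targets"]["compressed_text"]) for ex in ds["train"]) / len(ds["train"])
+ ```
+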
+ ## Dataset Creation
+
+ ### Annotations
+
+ #### Annotation process
+
+ Compressions were created using UHRS, an in-house crowd-sourcing system similar to Amazon’s Mechanical Turk, in two annotation rounds, one for shortening and a second to rate compression quality:
+
+ 1. In the first round, five workers were tasked with abridging each source text by at least 25%, while remaining grammatical and fluent and retaining the meaning of the original.
+ 2. In the second round, 3-5 judges (raters) were asked to evaluate the grammaticality of each compression on a scale from 1 (major errors, disfluent) through 3 (fluent), and analogously its meaning preservation on a scale from 1 (orthogonal to the original) through 3 (most important meaning preserved).
+
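+ These quality judgments make it possible to retain only the best compressions. A sketch, assuming `ds` was loaded as in the Data Splits section (the threshold is illustrative):
+
+ ```
+ # Keep examples with at least one compression unanimously rated 6
+ # (most important meaning preserved, flawless language).
+ def has_flawless_compression(example):
+     return any(all(r == 6 for r in ratings) for ratings in example["targets"]["ratings"])
+
+ high_quality = ds["train"].filter(has_flawless_compression)
+ ```
+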
+ ## Additional Information
+
+ ### Licensing Information
+
+ Microsoft Research Data License Agreement
+
+ ### Citation Information
+
+ @inproceedings{Toutanova2016ADA,
+   title={A Dataset and Evaluation Metrics for Abstractive Compression of Sentences and Short Paragraphs},
+   author={Kristina Toutanova and Chris Brockett and Ke M. Tran and Saleema Amershi},
+   booktitle={EMNLP},
+   year={2016}
+ }
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "This dataset contains sentences and short paragraphs with corresponding shorter (compressed) versions. There are up to five compressions for each input text, together with quality judgements of their meaning preservation and grammaticality. The dataset is derived using source texts from the Open American National Corpus (www.anc.org) and crowd-sourcing. \n", "citation": "@inproceedings{Toutanova2016ADA,\n title={A Dataset and Evaluation Metrics for Abstractive Compression of Sentences and Short Paragraphs},\n author={Kristina Toutanova and Chris Brockett and Ke M. Tran and Saleema Amershi},\n booktitle={EMNLP},\n year={2016}\n}\n", "homepage": "https://msropendata.com/datasets/f8ce2ec9-7fbd-48f7-a8bb-2d2279373563", "license": "Microsoft Research Data License Agreement", "features": {"source_id": {"dtype": "string", "id": null, "_type": "Value"}, "domain": {"dtype": "string", "id": null, "_type": "Value"}, "source_text": {"dtype": "string", "id": null, "_type": "Value"}, "targets": {"feature": {"compressed_text": {"dtype": "string", "id": null, "_type": "Value"}, "judge_id": {"dtype": "string", "id": null, "_type": "Value"}, "num_ratings": {"dtype": "int64", "id": null, "_type": "Value"}, "ratings": {"feature": {"dtype": "int64", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "msr_text_compression", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5001312, "num_examples": 4936, "dataset_name": "msr_text_compression"}, "validation": {"name": "validation", "num_bytes": 449691, "num_examples": 447, "dataset_name": "msr_text_compression"}, "test": {"name": "test", "num_bytes": 804536, "num_examples": 785, "dataset_name": "msr_text_compression"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 6255539, "size_in_bytes": 6255539}}
dummy/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:afacde32a52bd68d9726a38758f8922e9de720f3b9cb97fe7ecdd23c8da834e7
+ size 2141
msr_text_compression.py ADDED
@@ -0,0 +1,150 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """MSR Abstractive Text Compression dataset."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import os
+ from collections import namedtuple
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{Toutanova2016ADA,
+ title={A Dataset and Evaluation Metrics for Abstractive Compression of Sentences and Short Paragraphs},
+ author={Kristina Toutanova and Chris Brockett and Ke M. Tran and Saleema Amershi},
+ booktitle={EMNLP},
+ year={2016}
+ }
+ """
+
+ _DESCRIPTION = """\
+ This dataset contains sentences and short paragraphs with corresponding shorter (compressed) versions. There are up to five compressions for each input text, together with quality judgements of their meaning preservation and grammaticality. The dataset is derived using source texts from the Open American National Corpus (www.anc.org) and crowd-sourcing.
+ """
+
+ _HOMEPAGE = "https://msropendata.com/datasets/f8ce2ec9-7fbd-48f7-a8bb-2d2279373563"
+
+ _LICENSE = "Microsoft Research Data License Agreement"
+
+
+ _SOURCE_LABELS = ["source_id", "domain", "source_text"]
+ _COMPRESSION_LABELS = ["compressed_text", "judge_id", "num_ratings", "ratings"]
+ SourceInfo = namedtuple("SourceInfo", _SOURCE_LABELS)
+ CompressionInfo = namedtuple("CompressionInfo", _COMPRESSION_LABELS)
+
+
+ class MsrTextCompression(datasets.GeneratorBasedBuilder):
+     """This dataset contains sentences and short paragraphs with corresponding shorter (compressed) versions. There are up to five compressions for each input text, together with quality judgements of their meaning preservation and grammaticality. The dataset is derived using source texts from the Open American National Corpus (www.anc.org) and crowd-sourcing."""
+
+     VERSION = datasets.Version("1.1.0")
+     _ENCODING = "utf-8-sig"
+
+     @property
+     def manual_download_instructions(self):
+         return """\
+         You should download the dataset from https://www.microsoft.com/en-us/download/details.aspx?id=54262
+         The webpage requires registration.
+         Unzip the archive and place the files from the extracted RawData folder, renamed to
+         train.tsv, valid.tsv, and test.tsv, in a directory of your choice,
+         which will be used as the manual_dir, e.g. `~/.manual_dir/msr_text_compression`.
+         The data can then be loaded via:
+         `datasets.load_dataset("msr_text_compression", data_dir="~/.manual_dir/msr_text_compression")`.
+         """
+
+     def _info(self):
+         # Define features
+         source = {k: datasets.Value("string") for k in _SOURCE_LABELS}
+         target = {
+             "compressed_text": datasets.Value("string"),
+             "judge_id": datasets.Value("string"),
+             "num_ratings": datasets.Value("int64"),
+             "ratings": datasets.Sequence(datasets.Value("int64")),
+         }
+         targets = {"targets": datasets.Sequence(target)}
+         feature_dict = {**source, **targets}
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(feature_dict),
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
+         if not os.path.exists(data_dir):
+             raise FileNotFoundError(
+                 "{} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('msr_text_compression', data_dir=...)` per the manual download instructions: {}".format(
+                     data_dir, self.manual_download_instructions
+                 )
+             )
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"input_file": os.path.join(data_dir, "train.tsv")},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"input_file": os.path.join(data_dir, "valid.tsv")},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"input_file": os.path.join(data_dir, "test.tsv")},
+             ),
+         ]
+
+     def _parse_source(self, s):
+         source_id, domain, text = [x.strip() for x in s.split("\t")]
+         return SourceInfo(source_id, domain, text)._asdict()
+
+     def _parse_ratings(self, num_ratings, ratings):
+         """Parses raw ratings into a list of ints.
+
+         Args:
+             num_ratings: int
+             ratings: List[str]
+
+         Returns:
+             List[int] with len == num_ratings
+         """
+
+         # ratings contains both numeric ratings (actual ratings) and qualitative descriptions;
+         # we only wish to keep the numeric ratings
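+         # Illustrative example (values hypothetical; layout implied by the assert below):
+         #   num_ratings = 3, ratings = ["6", "<label>", "11", "<label>", "6", "<label>"]
+         #   -> stride = len(ratings) // num_ratings = 2 keeps indices 0, 2, 4: [6, 11, 6]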
+         assert num_ratings * 2 == len(ratings)
+
+         return [int(r) for r in ratings[:: len(ratings) // num_ratings]]
+
+     def _parse_target(self, target):
+         text, judge, num_ratings, *ratings = [t.strip() for t in target.split("\t")]
+         num_ratings = int(num_ratings)
+         ratings = self._parse_ratings(num_ratings, ratings)
+         return CompressionInfo(text, judge, num_ratings, ratings)._asdict()
+
+     def _generate_examples(self, input_file):
+         """Yields examples.
+
+         Files are encoded with BOM markers, hence the use of utf-8-sig as the codec.
+         """
+         with open(input_file, encoding=self._ENCODING) as f:
+             for id_, line in enumerate(f):
+                 source_info, *targets_info = line.split("|||")
+
+                 source = self._parse_source(source_info)
+                 targets = {"targets": [self._parse_target(target) for target in targets_info]}
+
+                 yield id_, {**source, **targets}