Languages: English
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: expert-generated
Annotations Creators: expert-generated
Source Datasets: original
ArXiv: arxiv:1911.12237
License: cc-by-nc-nd-4.0
Commit a6ebbbc committed by system (HF staff), 0 parent(s)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +164 -0
  3. dataset_infos.json +1 -0
  4. dummy/samsum/0.0.0/dummy_data.zip +3 -0
  5. samsum.py +118 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,164 @@
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
languages:
- en
licenses:
- cc-by-nc-nd-4-0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- summarization
---

# Dataset Card for SAMSum Corpus

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://arxiv.org/abs/1911.12237v2
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/1911.12237v2
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]

### Dataset Summary

The SAMSum dataset contains about 16k messenger-like conversations with summaries. The conversations were created and written down by linguists fluent in English, who were asked to produce conversations similar to those they write on a daily basis, reflecting the topic distribution of their real-life messenger conversations. The style and register are diversified: conversations may be informal, semi-formal or formal, and they may contain slang, emoticons and typos. Each conversation was then annotated with a summary, intended to be a concise, third-person brief of what the people talked about.
The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).

### Supported Tasks and Leaderboards

[Needs More Information]

### Languages

English

## Dataset Structure

### Data Instances

The dataset consists of 16369 conversations, distributed uniformly into four groups based on the number of utterances per conversation: 3-6, 7-12, 13-18 and 19-30. Each utterance is prefixed with the name of the speaker. Most conversations (about 75%) are dialogues between two interlocutors; the rest take place between three or more people.

The first instance in the training set:
{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}

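Below is a minimal sketch of loading the corpus with the `datasets` library and reproducing this first training example; it assumes the dataset resolves under the name `samsum` (the loading script added in this commit) and that `py7zr` is installed so the 7z archive can be read:

# Sketch: load SAMSum and print the first training example.
# Assumes `pip install datasets py7zr`; "samsum" is the builder/config name
# used by the loading script in this commit.
from datasets import load_dataset

samsum = load_dataset("samsum")
print(samsum)              # DatasetDict with train / test / validation splits
print(samsum["train"][0])  # the instance shown above
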

### Data Fields

- dialogue: text of the dialogue.
- summary: human-written summary of the dialogue.
- id: unique id of an example.

### Data Splits

- train: 14732
- validation: 818
- test: 819

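The split sizes above can be checked after loading; a small sketch under the same assumptions as the previous snippet:

# Sketch: confirm the number of examples per split (14732 / 818 / 819).
from datasets import load_dataset

samsum = load_dataset("samsum")
for split_name, split in samsum.items():
    print(split_name, split.num_rows)
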

## Dataset Creation

### Curation Rationale

In the paper:
> In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
> As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.

### Source Data

#### Initial Data Collection and Normalization

In the paper:
> We asked linguists to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, discussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora.

#### Who are the source language producers?

Linguists.

### Annotations

#### Annotation process

In the paper:
> Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.

#### Who are the annotators?

Language experts.

### Personal and Sensitive Information

None, see above: Initial Data Collection and Normalization.


## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

non-commercial licence: CC BY-NC-ND 4.0

### Citation Information

@inproceedings{gliwa-etal-2019-samsum,
    title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
    author = "Gliwa, Bogdan and
      Mochol, Iwona and
      Biesek, Maciej and
      Wawer, Aleksander",
    booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/D19-5409",
    doi = "10.18653/v1/D19-5409",
    pages = "70--79"
}
dataset_infos.json ADDED
@@ -0,0 +1 @@
{"samsum": {"description": "\nSAMSum Corpus contains over 16k chat dialogues with manually annotated\nsummaries.\nThere are two features:\n - dialogue: text of dialogue.\n - summary: human written summary of the dialogue.\n - id: id of a example.\n", "citation": "\n@article{gliwa2019samsum,\n title={SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization},\n author={Gliwa, Bogdan and Mochol, Iwona and Biesek, Maciej and Wawer, Aleksander},\n journal={arXiv preprint arXiv:1911.12237},\n year={2019}\n}\n", "homepage": "https://arxiv.org/abs/1911.12237v2", "license": "CC BY-NC-ND 4.0", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "dialogue": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "samsum", "config_name": "samsum", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 9479141, "num_examples": 14732, "dataset_name": "samsum"}, "test": {"name": "test", "num_bytes": 534492, "num_examples": 819, "dataset_name": "samsum"}, "validation": {"name": "validation", "num_bytes": 516431, "num_examples": 818, "dataset_name": "samsum"}}, "download_checksums": {"https://arxiv.org/src/1911.12237v2/anc/corpus.7z": {"num_bytes": 2944100, "checksum": "a97674c66726f66b98a08ca5e8868fb8af9d4843f2b05c4f839bc5cfe91e8899"}}, "download_size": 2944100, "post_processing_size": null, "dataset_size": 10530064, "size_in_bytes": 13474164}}
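The metadata above is machine-readable; a short sketch, assuming dataset_infos.json sits in the current directory next to the loading script, that prints the per-split example counts it records:

# Sketch: read dataset_infos.json and list the recorded split sizes.
import json

with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

for split_name, split_info in infos["samsum"]["splits"].items():
    print(split_name, split_info["num_examples"])  # train 14732, test 819, validation 818
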
dummy/samsum/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a354d835d342e473712250a1484124f0c3e9ca154bc6404f4c8a713507d4183b
size 11382
samsum.py ADDED
@@ -0,0 +1,118 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""SAMSum dataset."""

from __future__ import absolute_import, division, print_function

import json

import py7zr

import datasets


_CITATION = """
@article{gliwa2019samsum,
  title={SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization},
  author={Gliwa, Bogdan and Mochol, Iwona and Biesek, Maciej and Wawer, Aleksander},
  journal={arXiv preprint arXiv:1911.12237},
  year={2019}
}
"""

_DESCRIPTION = """
SAMSum Corpus contains over 16k chat dialogues with manually annotated
summaries.
There are three features:
  - dialogue: text of dialogue.
  - summary: human written summary of the dialogue.
  - id: id of an example.
"""

_HOMEPAGE = "https://arxiv.org/abs/1911.12237v2"

_LICENSE = "CC BY-NC-ND 4.0"

_URLs = "https://arxiv.org/src/1911.12237v2/anc/corpus.7z"


class Samsum(datasets.GeneratorBasedBuilder):
    """SAMSum Corpus dataset."""

    VERSION = datasets.Version("1.1.0")

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="samsum"),
    ]

    def _info(self):
        features = datasets.Features(
            {
                "id": datasets.Value("string"),
                "dialogue": datasets.Value("string"),
                "summary": datasets.Value("string"),
            }
        )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        my_urls = _URLs
        path = dl_manager.download_and_extract(my_urls)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": (path, "train.json"),
                    "split": "train",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": (path, "test.json"),
                    "split": "test",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": (path, "val.json"),
                    "split": "val",
                },
            ),
        ]

    def _generate_examples(self, filepath, split):
        """Yields examples."""
        path, fname = filepath

        with py7zr.SevenZipFile(path, "r") as z:
            # readall() extracts the archive members to in-memory file objects
            # keyed by member name; the split's JSON file is parsed from there.
            for name, bio in z.readall().items():
                if name == fname:
                    data = json.load(bio)
                    for example in data:
                        yield example["id"], example
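
A hedged usage sketch for the builder above: pointing load_dataset at a local copy of the script should behave like loading by name, provided py7zr is installed so the 7z archive can be read. The script path below is an assumption; adjust it to wherever samsum.py lives.

# Sketch: exercise the samsum.py builder from a local checkout.
# The path "./samsum.py" is an assumption; requires `pip install datasets py7zr`.
from datasets import load_dataset

samsum = load_dataset("./samsum.py")
print(samsum["train"].features)   # id, dialogue and summary, all string-valued
print(len(samsum["validation"]))  # 818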