Axel578 committed
Commit 83a0dbc
1 Parent(s): ce9eedc

Update README.md

Files changed (1):
  1. README.md +197 -3
README.md CHANGED
@@ -1,7 +1,201 @@
  ---
  language:
  - en
- pretty_name: Suoerpf
  size_categories:
- - 1K<n<10K
- ---
  ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
  language:
  - en
+ license:
+ - cc-by-nc-nd-4.0
+ multilinguality:
+ - monolingual
  size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - summarization
+ task_ids: []
+ paperswithcode_id: samsum-corpus
+ pretty_name: SAMSum Corpus
+ tags:
+ - conversations-summarization
+ dataset_info:
+   features:
+   - name: id
+     dtype: string
+   - name: dialogue
+     dtype: string
+   - name: summary
+     dtype: string
+   config_name: samsum
+   splits:
+   - name: train
+     num_bytes: 9479141
+     num_examples: 14732
+   - name: test
+     num_bytes: 534492
+     num_examples: 819
+   download_size: 2944100
+   dataset_size: 10530064
+ train-eval-index:
+ - config: samsum
+   task: summarization
+   task_id: summarization
+   splits:
+     eval_split: test
+   col_mapping:
+     dialogue: text
+     summary: target
+ ---
+
+ # Dataset Card for SAMSum Corpus
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://arxiv.org/abs/1911.12237v2
+ - **Repository:** [Needs More Information]
+ - **Paper:** https://arxiv.org/abs/1911.12237v2
+ - **Leaderboard:** [Needs More Information]
+ - **Point of Contact:** [Needs More Information]
+
+ ### Dataset Summary
+
+ The SAMSum dataset contains about 16k messenger-like conversations with summaries. The conversations were created and written down by linguists fluent in English, who were asked to produce conversations similar to those they write on a daily basis, reflecting the topic distribution of their real-life messenger conversations. The style and register are diversified: conversations may be informal, semi-formal, or formal, and may contain slang, emoticons, and typos. The conversations were then annotated with summaries, under the assumption that a summary should be a concise, third-person brief of what the people talked about.
+ The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).
+
+ ### Supported Tasks and Leaderboards
+
+ [Needs More Information]
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ The dataset consists of 16369 conversations distributed uniformly into 4 groups based on the number of utterances per conversation: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations (about 75%) are dialogues between two interlocutors; the rest are between three or more people.
+
+ The first instance in the training set:
+ {'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
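As the instance above shows, the `dialogue` field is a single string: utterances are separated by `\r\n` and each utterance is prefixed with the speaker's name and a colon. A minimal sketch of splitting such a string into (speaker, utterance) pairs (the `parse_dialogue` helper is illustrative, not part of the dataset tooling):

```python
def parse_dialogue(dialogue: str) -> list[tuple[str, str]]:
    """Split a SAMSum dialogue string into (speaker, utterance) pairs."""
    turns = []
    for line in dialogue.split("\r\n"):
        # Split on the first ": " so colons inside the utterance survive.
        speaker, _, utterance = line.partition(": ")
        turns.append((speaker, utterance))
    return turns

# The first training instance quoted above.
example = {
    "id": "13818513",
    "summary": "Amanda baked cookies and will bring Jerry some tomorrow.",
    "dialogue": "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)",
}

turns = parse_dialogue(example["dialogue"])
# turns[0] == ("Amanda", "I baked cookies. Do you want some?")
```

Note that speaker names are free text, so this split is a heuristic rather than a guaranteed parse.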
+
+ ### Data Fields
+
+ - dialogue: text of the dialogue.
+ - summary: human-written summary of the dialogue.
+ - id: unique id of an example.
+
+ ### Data Splits
+
+ - train: 14732
+ - val: 818
+ - test: 819
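These split sizes add up to the 16369 conversations reported in the Data Instances section; a quick arithmetic check (sizes taken from this card, no download needed):

```python
# Split sizes as listed in this dataset card.
splits = {"train": 14732, "val": 818, "test": 819}

total = sum(splits.values())           # full corpus size
train_share = splits["train"] / total  # roughly 0.90
```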
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ From the paper:
+ > In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
+ > As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.
128
+
129
+ ### Source Data
130
+
131
+ #### Initial Data Collection and Normalization
132
+
133
+ In paper:
134
+ > We asked linguists to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, discussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora.
135
+
136
+ #### Who are the source language producers?
137
+
138
+ linguists
139
+
140
+ ### Annotations
141
+
142
+ #### Annotation process
143
+
144
+ In paper:
145
+ > Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one ref- erence summary.
146
+
147
+ #### Who are the annotators?
148
+
149
+ language experts
150
+
151
+ ### Personal and Sensitive Information
152
+
153
+ None, see above: Initial Data Collection and Normalization
154
+
155
+ ## Considerations for Using the Data
156
+
157
+ ### Social Impact of Dataset
158
+
159
+ [Needs More Information]
160
+
161
+ ### Discussion of Biases
162
+
163
+ [Needs More Information]
164
+
165
+ ### Other Known Limitations
166
+
167
+ [Needs More Information]
168
+
169
+ ## Additional Information
170
+
171
+ ### Dataset Curators
172
+
173
+ [Needs More Information]
174
+
175
+ ### Licensing Information
176
+
177
+ non-commercial licence: CC BY-NC-ND 4.0
178
+
179
+ ### Citation Information
180
+
181
+ ```
182
+ @inproceedings{gliwa-etal-2019-samsum,
183
+ title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
184
+ author = "Gliwa, Bogdan and
185
+ Mochol, Iwona and
186
+ Biesek, Maciej and
187
+ Wawer, Aleksander",
188
+ booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
189
+ month = nov,
190
+ year = "2019",
191
+ address = "Hong Kong, China",
192
+ publisher = "Association for Computational Linguistics",
193
+ url = "https://www.aclweb.org/anthology/D19-5409",
194
+ doi = "10.18653/v1/D19-5409",
195
+ pages = "70--79"
196
+ }
197
+ ```
198
+
199
+ ### Contributions
200
+
201
+ Thanks to [@cccntu](https://github.com/cccntu) for adding this dataset.