knkarthick committed on
Commit
f8628cb
1 Parent(s): fcb4a8d

Update README.md

Files changed (1)
  1. README.md +85 -1
README.md CHANGED
@@ -1,3 +1,87 @@
  ---
- license: mit
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - en
+ licenses:
+ - cc-by-nc-nd-4.0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - summarization
+ - topic modeling
+ - one liner summary
+ - email subject
+ - meeting title
+ task_ids:
+ - summarization-other-conversations-summarization
+ paperswithcode_id: samsum-corpus
+ pretty_name: SAMSum Corpus
  ---
+ # Dataset Card for SAMSum Corpus
+ ## Dataset Description
+ ### Links
+ - **Homepage:** https://arxiv.org/abs/1911.12237v2
+ - **Repository:** https://arxiv.org/abs/1911.12237v2
+ - **Paper:** https://arxiv.org/abs/1911.12237v2
+ - **Point of Contact:** https://huggingface.co/knkarthick
+
+ ### Dataset Summary
+ The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified: conversations could be informal, semi-formal or formal, and they may contain slang words, emoticons and typos. The conversations were then annotated with summaries; it was assumed that a summary should be a concise brief of what people talked about in the conversation, written in the third person.
+ The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).
+
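+ As a quick orientation, here is a minimal sketch of loading the corpus with the `datasets` library; the repository id `knkarthick/samsum` is an assumption based on this repo, not something stated in the card.
+ ```python
+ # Minimal sketch. The repo id "knkarthick/samsum" is an assumption.
+ from datasets import load_dataset
+
+ # Download and cache all splits of the corpus.
+ dataset = load_dataset("knkarthick/samsum")
+
+ # Report how many conversations each split contains.
+ for split_name, split in dataset.items():
+     print(f"{split_name}: {len(split)} conversations")
+ ```
+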
+ ### Languages
+ English
+
+ ## Dataset Structure
+ ### Data Instances
+ The SAMSum dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations); the rest are between three or more people.
+ The first instance in the training set:
+ {'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
+
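+ To make the structure above concrete, the sketch below reloads the train split, prints its first instance, and buckets dialogues by utterance count; treating each non-empty line of `dialogue` as one utterance is an assumption based on the example above, and the repo id is again assumed.
+ ```python
+ # Sketch only: repo id and the one-utterance-per-line convention are assumptions.
+ from collections import Counter
+ from datasets import load_dataset
+
+ train = load_dataset("knkarthick/samsum", split="train")
+ print(train[0])  # should match the first training instance shown above
+
+ def bucket(n_utterances: int) -> str:
+     # Groups listed in the card: 3-6, 7-12, 13-18 and 19-30 utterances.
+     if n_utterances <= 6:
+         return "3-6"
+     if n_utterances <= 12:
+         return "7-12"
+     if n_utterances <= 18:
+         return "13-18"
+     return "19-30"
+
+ counts = Counter(
+     bucket(len([line for line in example["dialogue"].splitlines() if line.strip()]))
+     for example in train
+ )
+ print(counts)
+ ```
+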
+ ### Data Fields
+ - dialogue: text of the dialogue.
+ - summary: human-written summary of the dialogue.
+ - id: unique id of an example.
+
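+ The fields above map onto a flat, all-string schema. The following is a sketch of the expected `datasets.Features`; the concrete feature types are an assumption rather than something stated in this card.
+ ```python
+ # Sketch of the expected schema; the feature types are an assumption.
+ from datasets import Features, Value, load_dataset
+
+ expected = Features(
+     {
+         "id": Value("string"),
+         "dialogue": Value("string"),
+         "summary": Value("string"),
+     }
+ )
+
+ train = load_dataset("knkarthick/samsum", split="train")
+ print(train.features)               # schema actually exposed by the hosted data
+ print(train.features == expected)   # True if the assumption above holds
+ ```
+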
+ ### Data Splits
+ - train: 14732
+ - val: 818
+ - test: 819
+
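+ Since summarization is the primary task listed above, here is a hedged sketch of running an off-the-shelf summarization pipeline on one test dialogue and comparing it with the reference summary; the checkpoint `facebook/bart-large-cnn` is an assumption (a generic summarizer, not necessarily fine-tuned on this corpus).
+ ```python
+ # Sketch only: the model checkpoint is a generic choice, not prescribed by this card.
+ from datasets import load_dataset
+ from transformers import pipeline
+
+ example = load_dataset("knkarthick/samsum", split="test")[0]
+
+ summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
+ generated = summarizer(example["dialogue"], max_length=60, min_length=10)[0]["summary_text"]
+
+ print("reference:", example["summary"])
+ print("generated:", generated)
+ ```
+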
+ ## Dataset Creation
+ ### Curation Rationale
+ From the paper:
+ In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
+ As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.
+
+ ### Who are the source language producers?
+ Linguists.
+ ### Who are the annotators?
+ Language experts.
+ ### Annotation process
+ From the paper:
+ Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.
+
+ ## Licensing Information
+ Non-commercial licence: CC BY-NC-ND 4.0
+
+ ## Citation Information
+
+ ```
+ @inproceedings{gliwa-etal-2019-samsum,
+     title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
+     author = "Gliwa, Bogdan and Mochol, Iwona and Biesek, Maciej and Wawer, Aleksander",
+     booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
+     year = "2019",
+     address = "Hong Kong, China",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1911.12237v2",
+ }
+ ```
+ ## Contributions
+ Thanks to [@knkarthick](https://huggingface.co/knkarthick) for adding this dataset.