dipanjanS committed
Commit fbf0944
Parent: 0459b4b

Update README.md

Files changed (1)
  1. README.md +99 -31
README.md CHANGED
@@ -1,33 +1,101 @@
  ---
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: dialogue
-     dtype: string
-   - name: summary
-     dtype: string
-   - name: topic
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 11439628
-     num_examples: 12460
-   - name: validation
-     num_bytes: 446639
-     num_examples: 500
-   - name: test
-     num_bytes: 1367451
-     num_examples: 1500
-   download_size: 7116819
-   dataset_size: 13253718
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
-   - split: validation
-     path: data/validation-*
-   - split: test
-     path: data/test-*
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ language:
+ - en
+ license: cc-by-nc-sa-4.0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - summarization
+ - text2text-generation
+ - text-generation
+ task_ids: []
+ pretty_name: DIALOGSum Corpus
+ tags:
+ - dialogue-summary
+ - one-liner-summary
+ - meeting-title
+ - email-subject
  ---
+ # Dataset Card for DIALOGSum Corpus
+ ## Dataset Description
+ ### Links
+ - **Homepage:** https://aclanthology.org/2021.findings-acl.449
+ - **Repository:** https://github.com/cylnlp/dialogsum
+ - **Paper:** https://aclanthology.org/2021.findings-acl.449
+ - **Point of Contact:** https://huggingface.co/knkarthick
+
+ ### Dataset Summary
+ DialogSum is a large-scale dialogue summarization dataset consisting of 13,460 dialogues (plus 100 holdout dialogues reserved for topic generation), each with a manually labelled summary and topic.
+ __This is a copy of the original dataset from the links above, used solely for research and training demos.__
+
+ ### Languages
+ English
+
+ ## Dataset Structure
+ ### Data Instances
+ DialogSum is a large-scale dialogue summarization dataset consisting of 13,460 dialogues (plus 1,000 additional test dialogues) split into train, validation, and test sets.
+ The first instance in the training set:
+ {'id': 'train_0', 'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.", 'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor.", 'topic': 'get a check-up'}
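+
+ A minimal loading sketch (assuming the Hugging Face `datasets` library; `"username/dialogsum"` below is a placeholder for the Hub id of this copy):
+ ```python
+ from datasets import load_dataset
+
+ # Placeholder repo id -- replace with the actual Hub path of this copy.
+ ds = load_dataset("username/dialogsum")
+
+ sample = ds["train"][0]
+ print(sample["id"])       # e.g. 'train_0'
+ print(sample["topic"])    # e.g. 'get a check-up'
+ print(sample["summary"])
+
+ # Dialogue turns are newline-separated and tagged with #Person1#/#Person2# markers.
+ for turn in sample["dialogue"].split("\n")[:3]:
+     print(turn)
+ ```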
+ ### Data Fields
+ - dialogue: text of the dialogue.
+ - summary: human-written summary of the dialogue.
+ - topic: human-written topic/one-liner for the dialogue.
+ - id: unique file id of an example.
+
+ ### Data Splits
+ - train: 12460
+ - validation: 500
+ - test: 1500
+ - holdout: 100 [only 3 features: id, dialogue, topic]
+
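+ The split sizes above can be checked against the loaded `DatasetDict` (again a sketch with a placeholder Hub id; the 100-example holdout is not part of the configured train/validation/test splits, so it will not appear here):
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("username/dialogsum")  # placeholder Hub id
+
+ # Expected from the card: train=12460, validation=500, test=1500.
+ for split_name, split in ds.items():
+     print(split_name, len(split), list(split.features))
+ ```
+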
+ ## Dataset Creation
+ ### Curation Rationale
+ From the paper:
+ We collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, and travel. Most conversations take place between friends, colleagues, and between service providers and customers.
+
+ Compared with previous datasets, dialogues from DialogSum have distinct characteristics:
+ - They occur under rich real-life scenarios, including more diverse task-oriented scenarios;
+ - They have clear communication patterns and intents, which makes them valuable summarization sources;
+ - They have a reasonable length, which suits the purpose of automatic summarization.
+
+ We ask annotators to summarize each dialogue based on the following criteria:
+ - Convey the most salient information;
+ - Be brief;
+ - Preserve important named entities within the conversation;
+ - Be written from an observer perspective;
+ - Be written in formal language.
+ ### Who are the source language producers?
+ Linguists.
+ ### Who are the annotators?
+ Language experts.
+
+ ## Licensing Information
+ CC BY-NC-SA 4.0
+ ## Citation Information
+ ```
+ @inproceedings{chen-etal-2021-dialogsum,
+     title = "{D}ialog{S}um: {A} Real-Life Scenario Dialogue Summarization Dataset",
+     author = "Chen, Yulong and
+       Liu, Yang and
+       Chen, Liang and
+       Zhang, Yue",
+     booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
+     month = aug,
+     year = "2021",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2021.findings-acl.449",
+     doi = "10.18653/v1/2021.findings-acl.449",
+     pages = "5062--5074",
+ }
+ ```
+ ## Contributions
+ Thanks to [@cylnlp](https://github.com/cylnlp) for adding this dataset.