---
pretty_name: Claire English Dialogue Dataset (CEDD)
license: cc-by-nc-sa-4.0
language:
  - en
multilinguality:
  - monolingual
size_categories:
  - 100M<n<1B
task_categories:
  - text-generation
  - text2text-generation
task_ids:
  - language-modeling
  - dialogue-modeling
  - dialogue-generation
tags:
  - conversational
  - text-generation
  - conditional-text-generation
  - dialogue-modeling
  - dialogue-generation
viewer: true
configs:
- config_name: default
  sample_by: paragraph
  data_files:
  - split: train
    path: "EN/*/train.txt"
  - split: test
    path: "EN/*/test.txt"
---


# Claire English Dialogue Dataset (CEDD) <br />*A collection of English dialogue transcripts*

This is the first packaged version of the datasets used to train the English variants of the Claire family of large language models
([OpenLLM-France/Claire-7B-EN-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-EN-0.1)). (A related French dataset can be found [here](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1).)

The Claire English Dialogue Dataset (CEDD) is a collection of transcripts of English dialogues from various sources, including parliamentary proceedings, interviews, broadcasts, meetings, task-oriented assistance dialogues, and free conversations.
Each dialogue is split into speech turns, and each speech turn is labeled with the name of the speaker, or a unique identifier if the speaker is unknown.

* [Dataset composition](#dataset-composition)
  * [Data sources](#data-sources)
* [Example use (python)](#example-use-python)
* [Important notes](#important-notes)
* [License](#license)
* [Citations](#citations)
* [Contact](#contact)


## Dataset composition

CEDD can be broken down into:
* 962,550 conversations in total (812,705 in train, 11,992 in test)
* 20,863,917 speech turns in total (18,576,327 in train, 359,527 in test)
* around 864M words

It is a collection of several independent datasets, classified by the types of conversations they contain. This categorization is designed to more evenly balance the influence of different styles of dialogue on model training and to facilitate future applications of CEDD for which certain types of dialogue might be more helpful than others.

For more information, you can look at the following documents:
* [EN/metadata.csv](EN/metadata.csv) contains further statistics on the different subcorpora (broken down by train/test splits); a short snippet for inspecting it is shown below.
<!-- * XXX -->
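
For a quick look at these per-subcorpus statistics, a minimal sketch is given below (it assumes `pandas` and `huggingface_hub` are installed and makes no assumption about the CSV's column layout):

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download EN/metadata.csv from the dataset repository and load it with pandas
csv_path = hf_hub_download(
    repo_id="OpenLLM-France/Claire-Dialogue-English-0.1",
    filename="EN/metadata.csv",
    repo_type="dataset",
)
metadata = pd.read_csv(csv_path)
print(metadata.head())  # inspect the per-subcorpus statistics
```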

### Data sources

<table>
<thead>
<tr>
  <th>Dataset</th>
  <th>Description</th>
  <th>Words</th>
  <th>Turns</th>
  <th>Conversations</th>
  <th>License (and conditions)</th>
</tr>
</thead>
<tbody>
<tr>
  <td colspan="6"><h4>Parliamentary Proceedings</h4></td></tr>
<tr>
  <td><a href="https://www.statmt.org/europarl/">Europarl</a></td>
  <td>The Europarl parallel corpus</td>
  <td>56M</td>
  <td>214K</td>
  <td>11K</td>
  <td>No copyright restrictions. If you use this data in your research, please contact phi@jhu.edu</td>
</tr>
<tr>
  <td colspan="6"><h4>Spoken Dialogue</h4></td></tr>
<tr>
  <td><a href="https://anc.org/data/oanc/contents/#charlotte">Charlotte Narratives</a></td>
  <td>The Charlotte Narrative and Conversation Collection (CNCC) contains 95 narratives, conversations and interviews representative of the residents of Mecklenburg County, North Carolina and surrounding North Carolina communities.</td>
  <td>200K</td>
  <td>2.7K</td>
  <td>93</td>
  <td><a href="https://anc.org/data/oanc/download/">Available for download and use for research and development, including commercial development</a></td>
</tr>
<tr>
  <td><a href="https://anc.org/data/oanc/contents/#switchboard">Switchboard</a></td>
  <td>The corpus consists of approximately 260 hours of speech and was originally collected by Texas Instruments in 1990-1, under DARPA sponsorship.</td>
  <td>3M</td>
  <td>290K</td>
  <td>2320</td>
  <td><a href="https://catalog.ldc.upenn.edu/LDC97S62">LDC User Ageement for Non-Members</a></td>
</tr>
   
<tr>
  <td colspan="6"><h4>Broadcast</h4></td></tr>
<tr>
  <td><a href="https://huggingface.co/datasets/Salesforce/dialogstudio">MediaSum</a> <a href="https://huggingface.co/datasets/ccdv/mediasum">(GitHub)</a></td>
  <td>MediaSum dataset for summarization. A collection of transcripts of CNN and NPR interviews with short summaries.</td>
  <td>720M</td>
  <td>13M</td>
  <td>458K</td>
  <td><a href="https://github.com/zcgzcgzcg1/MediaSum">For research purposes only</a></td>
</tr>

<tr>
  <td colspan="6"><h4>Meetings</h4></td></tr>
<tr>
  <td><a href="https://github.com/guokan-shang/ami-and-icsi-corpora">AMI</a> <a href="https://groups.inf.ed.ac.uk/ami/corpus/">(project page)</a></td>
  <td>The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings.</td>
  <td>712K</td>
  <td>75K</td>
  <td>139</td>
  <td><a href="https://groups.inf.ed.ac.uk/ami/corpus/">CC BY 4.0</a></td>
</tr>
<tr>
  <td><a href="https://github.com/guokan-shang/ami-and-icsi-corpora">ICSI</a> <a href="https://groups.inf.ed.ac.uk/ami/icsi/">(project page)</a></td>
  <td>About 70 hours of meeting recordings.</td>
  <td>804K</td>
  <td>64K</td>
  <td>&lt;1K</td>
  <td><a href="https://groups.inf.ed.ac.uk/ami/icsi/">CC BY 4.0</a></td>
</tr>

<tr>
  <td colspan="6"><h4>Assistance</h4></td></tr>
<tr>
  <td><a href="https://huggingface.co/datasets/Salesforce/dialogstudio/tree/main/conversational_recommendation/Redial">ReDial</a> <a href="https://redialdata.github.io/website/">(GitHub)</a></td>
  <td>ReDial (Recommendation Dialogues) is an annotated dataset of dialogues, where users recommend movies to each other.</td>
  <td>1.5M</td>
  <td>139K</td>
  <td>11K</td>
  <td><a href="https://redialdata.github.io/website/">CC BY 4.0</a></td>
</tr>
<tr>
  <td><a href="https://huggingface.co/datasets/Salesforce/dialogstudio/tree/main/conversational_recommendation/OpenDialKG">OpenDialKG</a> <a href="https://github.com/facebookresearch/opendialkg">(GitHub)</a></td>
  <td>OpenDialKG is a dataset of conversations between two crowdsourcing agents engaging in a dialog about a given topic.</td>
  <td>1M</td>
  <td>84K</td>
  <td>12K</td>
  <td><a href="https://github.com/facebookresearch/opendialkg">CC-BY-NC-4.0</a></td>
</tr>
<tr>
  <td><a href="https://huggingface.co/datasets/Salesforce/dialogstudio/tree/main/task_oriented/ABCD">ABCD</a> <a href="https://github.com/asappresearch/abcd">(GitHub)</a></td>
  <td>Action-Based Conversations Dataset.</td>
  <td>1.5M</td>
  <td>142K</td>
  <td>10K</td>
  <td><a href="https://github.com/asappresearch/abcd/blob/master/LICENSE">MIT</a></td>
</tr>
<tr>
  <td><a href="https://huggingface.co/datasets/Salesforce/dialogstudio/tree/main/task_oriented/AirDialogue">AirDialogue</a> <a href="https://github.com/google/airdialogue">(GitHub)</a></td>
  <td>AirDialogue is a benchmark dataset for goal-oriented dialogue generation research.</td>
  <td>37M</td>
  <td>4.6M</td>
  <td>361K</td>
  <td><a href="https://github.com/google/airdialogue/blob/master/LICENSE">Apache License 2.0</a></td>
</tr>
<tr>
  <td><a href="https://huggingface.co/datasets/Salesforce/dialogstudio/tree/main/task_oriented/MULTIWOZ2_2">MULTIWOZ2_2</a> <a href="https://huggingface.co/datasets/pfb30/multi_woz_v22">(pfb30)</a></td>
  <td>Multi-Domain Wizard-of-Oz dataset (MultiWOZ), a fully-labeled collection of human-human written conversations spanning over multiple domains and topics.</td>
  <td>1.9M</td>
  <td>143K</td>
  <td>10.4K</td>
  <td><a href="https://huggingface.co/datasets/pfb30/multi_woz_v22">Apache License 2.0</a></td>
</tr>
<tr>
  <td><a href="https://huggingface.co/datasets/Salesforce/dialogstudio/tree/main/task_oriented/MulDoGO">MulDoGO2</a> <a href="https://github.com/awslabs/multi-domain-goal-oriented-dialogues-dataset">(GitHub)</a></td>
  <td>Conversations from the airline, fastfood, finance, insurance, media, and software domains.</td>
  <td>10M</td>
  <td>892K</td>
  <td>63K</td>
  <td><a href="https://github.com/awslabs/multi-domain-goal-oriented-dialogues-dataset/blob/master/LICENSE.txt">CDLA Permissive License</a></td>
</tr>

<tr>
  <td colspan="6"><h4>Free Chat</h4></td></tr>
<tr>
  <td><a href="https://huggingface.co/datasets/Salesforce/dialogstudio/tree/main/open_domain/chitchat-dataset">Chit-Chat</a> <a href="https://github.com/BYU-PCCL/chitchat-dataset">(GitHub)</a></td>
  <td>Open-domain conversational dataset from the BYU Perception, Control & Cognition lab's Chit-Chat Challenge.</td>
  <td>2.3M</td>
  <td>258K</td>
  <td>7.1K</td>
  <td><a href="https://github.com/BYU-PCCL/chitchat-dataset/blob/master/LICENSE">MIT License</a></td>
</tr>
<tr>
  <td><a href="https://huggingface.co/datasets/li2017dailydialog/daily_dialog">DailyDialog</a></td>
  <td>High-quality multi-turn dialog dataset.</td>
  <td>1.2M</td>
  <td>102K</td>
  <td>13K</td>
  <td><a href="https://huggingface.co/datasets/li2017dailydialog/daily_dialog">CC BY-NC-SA 4.0</a></td>
</tr>


<tr>
  <td colspan="6"><h4>Misc</h4></td></tr>
<tr>
  <td><a href="http://www.phon.ox.ac.uk/AudioBNC#Access">British National Corpus (BNC)</a></td>
  <td>Collection of samples of written and spoken language from a wide range of sources, designed to represent a wide cross-section of British English, both spoken and written, from the late twentieth century.</td>
  <td>110M</td>
  <td>663K</td>
  <td>0.9K</td>
  <td><a href="http://www.natcorp.ox.ac.uk/docs/licence.html">BCN License</a></td>
</tr>

</tbody>
</table>



## Example use (python)

In the examples below, `sample_by="paragraph"` is important to ensure that each sample corresponds to a full conversation (rather than a single speech turn).

Load dataset from HuggingFace cache (downloaded under `~/.cache/huggingface/datasets`):
```python
from datasets import load_dataset

dataset = load_dataset("OpenLLM-France/Claire-Dialogue-English-0.1", sample_by="paragraph", streaming=True)
```

Load dataset from raw text files:
```python
from datasets import load_dataset
import glob

path = "path/to/dataset"
train_files = glob.glob(path + "/*/train.txt")
test_files = glob.glob(path + "/*/test.txt")

dataset = load_dataset("text", data_files={"train": train_files, "test": test_files}, sample_by="paragraph", streaming=True)
```

Iterate on the dataset:
```python
for sample in dataset["train"]:
    train_conversation = sample["text"]
    ...

for sample in dataset["test"]:
    test_conversation = sample["text"]
    ...
```
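
Each sample is a raw multi-line string. As a minimal sketch for turning such a string into (speaker, text) pairs, assuming the speaker-label format described under [Important notes](#important-notes) below (the `conversation` string here is invented for illustration):

```python
import re

# Hypothetical conversation string in the normalized CEDD format (see "Important notes")
conversation = (
    "[speaker001:] Hello, how are you today?\n"
    "[speaker002:] Fine, thanks. [LAUGHTER]\n"
    "[speaker001:] Glad to hear it."
)

# Each line starts with a "[<speaker>:]" label; capture the speaker and the rest of the line
turn_pattern = re.compile(r"\[([^\]]+):\]\s*(.*)")

turns = []
for line in conversation.split("\n"):
    match = turn_pattern.match(line)
    if match:
        turns.append((match.group(1), match.group(2)))

print(turns)
# [('speaker001', 'Hello, how are you today?'),
#  ('speaker002', 'Fine, thanks. [LAUGHTER]'),
#  ('speaker001', 'Glad to hear it.')]
```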


## Important notes

All datasets were normalized into text files so that:
* Conversations are separated by a single blank line.
* Each line corresponds to a single speech turn.
* Each line begins with a speaker label of the form "`[***:]`".
* When speaker names are anonymized or otherwise unknown, speakers are distinguished by numbers in the following format: "**`[speaker001:]`**", "**`[speaker002:]`**", … <br /> Otherwise, speakers are labeled with their names or roles, e.g. "`[Paul:]`", "`[John King:]`", "`[White House Correspondent:]`" (see the example after this list).
* There are no parentheses: special annotations are always between square brackets.
* Common tags include:
    * "**`[PII]`**": Personally Identifiable Information (e.g., an anonymized name)
    * "`[NOISE]`": distinct ambient noises
    * "`[LAUGHTER]`": laughter
<!-- * Truncated words are sometimes marked with "-" (ex: "je suis dé- décidé") -->
* Depending on the data source, the data may or may not include punctuation marks and uppercase letters.
* The data were normalized in various ways, including Unicode NFC normalization, conversion of non-breaking spaces to regular spaces, and standardization of punctuation marks (`…` -> `...`, `«`/`»`/`“`/`”`/`″`/`„` -> `"`). <!-- `’`/`‘`/`‛`/`ʿ` -> `'`,  `ᵉ`/`ᵉʳ` -> `e`/`er`, `‚` -> `,` -->
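
To illustrate the format, a normalized text file might contain passages like the following (the speaker names and content are invented for illustration; the two conversations are separated by a blank line, and the second shows a source without punctuation or capitalization):

```
[Jane Doe:] Welcome back to the program.
[speaker001:] Thanks for having me. [LAUGHTER]
[Jane Doe:] Let's start with your latest project.

[speaker001:] did you catch the game last night
[speaker002:] i did it went right down to the wire
```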

<!-- Those details are described in the paper:
[_«&nbsp;The Claire French Dialogue Dataset&nbsp;»_](https://arxiv.org/abs/2311.16840) (2023).-->


## License

Given that some of the corpora used for training are only available under CC-BY-NC-SA licenses,
Claire-Dialogue-English-0.1 is made available under the [CC-BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/).


## Citations

When using the CEDD corpus, please cite this page:

<!-- ✍ Julie Hunter, Jérôme Louradour, Virgile Rennard, Ismaïl Harrando, Guokan Shang, Jean-Pierre Lorré  (2023)
[The Claire French Dialogue Dataset](https://arxiv.org/abs/2311.16840) -->

```bibtex
@misc{openllm2024claire_en,
  author = {Julie Hunter and Jérôme Louradour and Virgile Rennard and Ismaïl Harrando and Guokan Shang and Jean-Pierre Lorré},
  title = {The Claire English Dialogue Dataset},
  year = {2024},
  publisher = {HuggingFace},
  journal = {HuggingFace},
  howpublished = {\url{https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1}},
}
```

You should also provide citations for all of the original corpora. They are listed below.

* **Europarl**
  * Philipp Koehn (2005). [Europarl: A Parallel Corpus for Statistical Machine Translation](https://aclanthology.org/2005.mtsummit-papers.11/). _Proceedings of Machine Translation Summit X: Papers_, Phuket, Thailand.
* **Charlotte Narratives**
  * [OANC link](https://anc.org/data/oanc/contents/#charlotte).
* **Switchboard**
  * John J. Godfrey, Edward Holliman (1993). [Switchboard-1 Release 2](https://catalog.ldc.upenn.edu/LDC97S62), Linguistic Data Consortium (LDC), Philadelphia.
* **MediaSum**
  * Zhu, Chenguang and Liu, Yang and Mei, Jie and Zeng, Michael (2021). [MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization](https://aclanthology.org/2021.naacl-main.474/). North American Chapter of the Association for Computational Linguistics (NAACL), Mexico City, Mexico, 2021.
* **AMI**
  * I. McCowan, J. Carletta, W. Kraaij, S. Ashby, S. Bourban, M. Flynn, M. Guillemot, T. Hain, J. Kadlec, V. Karaiskos, M.Kronenthal, G. Lathoud, M. Lincoln, A. Lisowska, W. Post, D. Reidsma, and P. Wellner (2005). [The AMI meeting corpus](https://d1wqtxts1xzle7.cloudfront.net/50793769/The_AMI_meeting_corpus20161208-17868-1xaka8f-libre.pdf?1481255943=&response-content-disposition=inline%3B+filename%3DThe_AMI_Meeting_Corpus.pdf&Expires=1725287059&Signature=BtJK8AeKwsBmEEJZDF5C2ISWnB8Ss~IWyi1DLBrLS0A5JOVYcvTCdyn63ANd~dZYeIp3W23PuQOPHQfJYhkf1i2TryegDH82JL2v7ODCtKEWmmpXEGyAdBMdPQPdvu3M2lXEccqFaOq~4-2uzAb7goPkGl0~ZdLV1Jsy5ybc3epkMoZwNV947QNKWuW4t-dsfZJaGx8JeoX6GdpzgdmKGC7wcMnD-3uvYugoTggv-5htWofL~pvZ-mUZ9hAORcEbs3nYm-w9TyqhCwE2au~LyiD6nzaEbZCyiIICulsltNIYtu1X1AYRv7ECpw-9KOgiAENzx-7b~UoDg9TSY2x8Ow__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA), _Proc. International Conference on Methods and Techniques in Behavioral Research_. 2005. p. 1-4.
* **ICSI**
  * Adam Janin, Don Baron, Jane Edwards, Dan Ellis, David Gelbart, Nelson Morgan, Barbara Peskin, Thilo Pfau, Elizabeth Shriberg, Andreas Stolcke, et al. (2003). [The ICSI meeting corpus](https://d1wqtxts1xzle7.cloudfront.net/71218943/icassp03-janin-libre.pdf?1633309989=&response-content-disposition=inline%3B+filename%3DThe_ICSI_meeting_corpus.pdf&Expires=1725287256&Signature=Uh44rCSC1WPAwavIeqA2zouS7H4-XiED1HSHtU45KJuC06w94tuj3khieSS6ZkFavB1swZXCZOp4rZ8fHSpjDB~E-iYStkYB8HlSy1sAUWJ86XONkBem6VeTV6vzJRxdBzj3KLZL3BNubWc6ypOMsorjymoTthbmHyH1zJXjeHbmD1R4ZRLZ2eThImTqN3CE2uXtC8JIzn9vCfGV0cpyRd4JPYTpRojcIHivlSOyY8msZ2syA8-Ca1efmtBDo96EV9PQuDKrKdlbzGj2M1bD9sF3i1W~mrpIp~xPwz3ElHv~lZchrG-56e2wOutPHYFT7vBjMc1FCV0CWah46ATaqA__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA). In _2003 IEEE International Conference on Acoustics, Speech, and Signal Processing_ (ICASSP’03), volume 1. IEEE. 
* **ReDial**
  * Li, Raymond and Kahou, Samira Ebrahimi and Schulz, Hannes and Michalski, Vincent and Charlin, Laurent and Pal, Chris (2018). [Towards Deep Conversational Recommendations](https://proceedings.neurips.cc/paper/2018/file/800de15c79c8d840f4e78d3af937d4d4-Paper.pdf). _Advances in Neural Information Processing Systems 31 (NeurIPS 2018)_, Montreal.
* **OpenDialKG**
  * Seungwhan Moon, Pararth Shah, Anuj Kumar, Rajen Subba (2019). [OpenDialKG: Explainable Conversational Reasoning with Attention-based Walks over Knowledge Graphs](https://aclanthology.org/P19-1081/). _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL)_, Florence, Italy.
* **ABCD**
  * Derek Chen, Howard Chen, Yi Yang, Alexander Lin, Zhou Yu (2021). [Action-Based Conversations Dataset: A Corpus for Building More In-Depth Task-Oriented Dialogue Systems](https://aclanthology.org/2021.naacl-main.239/). _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)_, Online.
* **AirDialogue**
  * Wei Wei, Quoc Le, Andrew Dai, Jia Li (2018). [AirDialogue: An Environment for Goal-Oriented Dialogue Research](https://aclanthology.org/D18-1419/). _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, Brussels, Belgium.
* **MULTIWOZ2_2**
  * Xiaoxue Zang, Abhinav Rastogi, Srinivas Sunkara, Raghav Gupta, Jianguo Zhang, Jindong Chen (2020). [MultiWOZ 2.2: A Dialogue Dataset with Additional Annotation Corrections and State Tracking Baselines](https://arxiv.org/abs/2007.12720). _arXiv preprint arXiv:2007.12720_.
* **MultiDoGO**
  * Denis Peskov, Nancy Clarke, Jason Krone, Brigi Fodor, Yi Zhang, Adel Youssef, Mona Diab (2019). [Multi-Domain Goal-Oriented Dialogues (MultiDoGO): Strategies toward Curating and Annotating Large Scale Dialogue Data](https://www.aclweb.org/anthology/D19-1460). _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, Hong Kong, China.
* **Chit-Chat**
  * Myers, Will and Etchart, Tyler and Fulda, Nancy (2020). [Conversational Scaffolding: An Analogy-based Approach to Response Prioritization in Open-domain Dialogs](https://www.scitepress.org/Papers/2020/89399/89399.pdf). _Proceedings of the 12th International Conference on Agents and Artificial Intelligence (ICAART 2020)_, volume 2, pages 69-78.
* **DailyDialog**
  * Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, Shuzi Niu (2017). [DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset](https://aclanthology.org/I17-1099/). _Proceedings of the Eighth International Joint Conference on Natural Language Processing (IJCNLP)_, Taipei, Taiwan.
* **British National Corpus (BNC)**
  *  [The British National Corpus online](http://www.natcorp.ox.ac.uk/).


Our versions of MediaSum, ReDial, OpenDialKG, ABCD, AirDialogue, MultiWOZ 2.2, MultiDoGO and Chit-Chat were collected from the DialogStudio compilation, which should also be cited when using these datasets:
* **DialogStudio**
  *  Zhang, Jianguo and Qian, Kun and Liu, Zhiwei and Heinecke, Shelby and Meng, Rui and Liu, Ye and Yu, Zhou and Savarese, Silvio and Xiong, Caiming (2023). [DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection for Conversational AI](https://arxiv.org/abs/2307.10172). _arXiv preprint arXiv:2307.10172_.

## Contact

contact@openllm-france.fr