---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids: []
paperswithcode_id: irc-disentanglement
pretty_name: IRC Disentanglement
tags:
- conversation-disentanglement
dataset_info:
- config_name: ubuntu
  features:
  - name: id
    dtype: int32
  - name: raw
    dtype: string
  - name: ascii
    dtype: string
  - name: tokenized
    dtype: string
  - name: date
    dtype: string
  - name: connections
    sequence: int32
  splits:
  - name: train
    num_bytes: 56012854
    num_examples: 220616
  - name: validation
    num_bytes: 3081479
    num_examples: 12510
  - name: test
    num_bytes: 3919900
    num_examples: 15010
  download_size: 118470210
  dataset_size: 63014233
- config_name: channel_two
  features:
  - name: id
    dtype: int32
  - name: raw
    dtype: string
  - name: ascii
    dtype: string
  - name: tokenized
    dtype: string
  - name: connections
    sequence: int32
  splits:
  - name: dev
    num_bytes: 197505
    num_examples: 1001
  - name: pilot
    num_bytes: 92663
    num_examples: 501
  - name: test
    num_bytes: 186823
    num_examples: 1001
  - name: pilot_dev
    num_bytes: 290175
    num_examples: 1501
  - name: all_
    num_bytes: 496524
    num_examples: 2602
  download_size: 118470210
  dataset_size: 1263690
---


# Dataset Card for IRC Disentanglement

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
  - [Acknowledgments](#acknowledgments)

## Dataset Description

- **Homepage:** https://jkk.name/irc-disentanglement/
- **Repository:** https://github.com/jkkummerfeld/irc-disentanglement/tree/master/data
- **Paper:** https://aclanthology.org/P19-1374/
- **Leaderboard:** NA
- **Point of Contact:** jkummerf@umich.edu

### Dataset Summary

Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. This dataset provides 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. It is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context.

Note that the GitHub repository for the dataset also contains several useful tools for:

- Conversion (e.g. extracting conversations from graphs)
- Evaluation
- Preprocessing
- Word embeddings trained on the full Ubuntu logs in 2018

### Supported Tasks and Leaderboards

Conversational Disentanglement

### Languages

English (en)

## Dataset Structure

### Data Instances

For Ubuntu:

`data["train"][1050]`

```
{
  'ascii': "[03:57] <Xophe> (also, I'm guessing that this isn't a good place to report minor but annoying bugs... what is?)",
  'connections': [1048, 1054, 1055, 1072, 1073],
  'date': '2004-12-25',
  'id': 1050,
  'raw': "[03:57] <Xophe> (also, I'm guessing that this isn't a good place to report minor but annoying bugs... what is?)",
  'tokenized': "<s> ( also , i 'm guessing that this is n't a good place to report minor but annoying bugs ... what is ?) </s>"
}
```

For Channel_two:

`data["dev"][50]` (the channel_two config has `dev`, `pilot`, `test`, `pilot_dev`, and `all_` splits rather than `train`)

```
{
  'ascii': "[01:04] <Felicia> Chanel: i don't know off hand sorry",
  'connections': [49, 53],
  'id': 50,
  'raw': "[01:04] <Felicia> Chanel: i don't know off hand sorry",
  'tokenized': "<s> <user> : i do n't know off hand sorry </s>"
}
```

### Data Fields

'id' : The id of the message; this is the value that appears in the 'connections' lists of associated messages.

'raw' : The original message from the IRC log, as downloaded.

'ascii' : The raw message converted to ASCII (unconvertable characters are replaced with a special word).

'tokenized' : The same message with automatic tokenisation and replacement of rare words with placeholder symbols.

'connections' : The ids of linked messages.

(Ubuntu only) 'date' : The date the messages are from. The labelling for each date only starts after the first 1,000 messages of that date.
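
Since 'connections' encodes the reply-structure edges between messages, whole conversations can be recovered as connected components of the reply graph. The sketch below is illustrative only: the function name and toy messages are made up here, and the official conversion tools in the repository may behave differently (e.g. in how they handle context messages).

```python
from collections import defaultdict

def disentangle(messages):
    """Group messages into conversations by treating each
    'connections' list as undirected reply edges and taking
    connected components of the resulting graph."""
    graph = defaultdict(set)
    for msg in messages:
        graph[msg["id"]]  # ensure isolated messages get a node
        for other in msg["connections"]:
            graph[msg["id"]].add(other)
            graph[other].add(msg["id"])
    seen, conversations = set(), []
    for start in graph:
        if start in seen:
            continue
        # Depth-first traversal of one component.
        stack, component = [start], []
        seen.add(start)
        while stack:
            node = stack.pop()
            component.append(node)
            for nbr in graph[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    stack.append(nbr)
        conversations.append(sorted(component))
    return conversations

# Toy example (ids and links are illustrative, not from the corpus):
msgs = [
    {"id": 0, "connections": [1]},
    {"id": 1, "connections": [0, 2]},
    {"id": 2, "connections": [1]},
    {"id": 3, "connections": [4]},
    {"id": 4, "connections": [3]},
]
print(disentangle(msgs))  # [[0, 1, 2], [3, 4]]
```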

### Data Splits


The dataset has 4 parts:

| Part          | Number of Annotated Messages                |
| ------------- | ------------------------------------------- |
| Train         | 67,463                                      |
| Dev           |  2,500                                      |
| Test          |  5,000                                      |
| Channel 2     |  2,600                                      |
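
As a quick sanity check, the four parts sum to the 77,563 annotated messages quoted in the summary:

```python
# Counts copied from the table above.
parts = {"train": 67_463, "dev": 2_500, "test": 5_000, "channel_two": 2_600}
total = sum(parts.values())
print(total)  # 77563
```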


## Dataset Creation

### Curation Rationale

IRC is a synchronous chat setting with a long history of use.
Several channels log all messages and make them publicly available.
The Ubuntu channel is particularly heavily used and has been the subject of several academic studies.

Data was selected from the channel in order to capture the diversity of situations in the channel (e.g. when there are many users or very few users).
For full details, see the [annotation information page](https://github.com/jkkummerfeld/irc-disentanglement/blob/master/data/READ.history.md).

### Source Data

#### Initial Data Collection and Normalization

Data was collected from the Ubuntu IRC channel logs, which are publicly available at [https://irclogs.ubuntu.com/](https://irclogs.ubuntu.com/).
The raw files are included, as well as two other versions:

- ASCII, converted using the script [make-txt.py](https://github.com/jkkummerfeld/irc-disentanglement/blob/master/tools/preprocessing/make-txt.py)
- Tok, tokenised text with rare words replaced by UNK using the script [dstc8-tokenise.py](https://github.com/jkkummerfeld/irc-disentanglement/blob/master/tools/preprocessing/dstc8-tokenise.py)
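
The ASCII-conversion step can be pictured roughly as follows. This is a hypothetical sketch, not the actual make-txt.py: the placeholder token name is invented here, and the real script may differ in details.

```python
def to_ascii(text, placeholder="<unk-char>"):
    """Keep ASCII characters; replace anything unconvertable with a
    placeholder word (placeholder name is hypothetical)."""
    out = []
    for ch in text:
        out.append(ch if ord(ch) < 128 else placeholder)
    return "".join(out)

print(to_ascii("café"))  # caf<unk-char>
```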

The raw channel two data is from prior work [(Elsner and Charniak, 2008)](https://www.aclweb.org/anthology/P08-1095.pdf).

#### Who are the source language producers?

The text is from a large group of internet users asking questions and providing answers related to Ubuntu.

### Annotations

#### Annotation process

The data is expert-annotated as follows:

- Train: one annotation per file in general; a small portion is double-annotated and adjudicated
- Dev and Channel 2: double-annotated and adjudicated
- Test: triple-annotated and adjudicated

| Part          | Annotators      | Adjudication?                         |
| ------------- | --------------- | ------------------------------------- |
| Train         | 1 or 2 per file | For files with 2 annotators (only 10) |
| Dev           | 2               | Yes                                   |
| Test          | 3               | Yes                                   |
| Channel 2     | 2               | Yes                                   |

#### Who are the annotators?

Students and a postdoc at the University of Michigan.
Everyone involved went through a training process with feedback to learn the annotation guidelines.

### Personal and Sensitive Information

No content is removed or obfuscated.
The dataset therefore probably contains personal information shared by users.

## Considerations for Using the Data

### Social Impact of Dataset

The raw data is already available online, and the annotations do not provide significant additional information that could have a direct social impact.

### Discussion of Biases

The data is mainly from a single technical domain (Ubuntu tech support) that probably has a demographic skew of some sort.
Given that users are only identified by their self-selected usernames, it is difficult to know more about the authors.

### Other Known Limitations

Being focused on a single language and a single channel means that the data is likely capturing a particular set of conventions in communication.
Those conventions may not apply to other channels, or beyond IRC.

## Additional Information

### Dataset Curators

Jonathan K. Kummerfeld

### Licensing Information

Creative Commons Attribution 4.0

### Citation Information

```
@inproceedings{kummerfeld-etal-2019-large,
    title = "A Large-Scale Corpus for Conversation Disentanglement",
    author = "Kummerfeld, Jonathan K.  and
      Gouravajhala, Sai R.  and
      Peper, Joseph J.  and
      Athreya, Vignesh  and
      Gunasekara, Chulaka  and
      Ganhotra, Jatin  and
      Patel, Siva Sankalp  and
      Polymenakos, Lazaros C  and
      Lasecki, Walter",
    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P19-1374",
    doi = "10.18653/v1/P19-1374",
    pages = "3846--3856",
    arxiv = "https://arxiv.org/abs/1810.11118",
    software = "https://jkk.name/irc-disentanglement",
    data = "https://jkk.name/irc-disentanglement",
    abstract = "Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. We created a new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. Our data is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. We use our data to re-examine prior work, in particular, finding that 89{\%} of conversations in a widely used dialogue corpus are either missing messages or contain extra messages. Our manually-annotated data presents an opportunity to develop robust data-driven methods for conversation disentanglement, which will help advance dialogue research.",
}
```

### Contributions

Thanks to [@dhruvjoshi1998](https://github.com/dhruvjoshi1998) for adding this dataset.

Thanks to [@jkkummerfeld](https://github.com/jkkummerfeld) for improvements to the documentation.


### Acknowledgments

This material is based in part upon work supported by IBM under contract 4915012629. Any opinions, findings, conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of IBM.