parquet-converter committed
Commit d8f0886
1 Parent(s): 6f3397f

Update parquet files

README.md DELETED
@@ -1,328 +0,0 @@
1
- ---
2
- annotations_creators:
3
- - expert-generated
4
- language_creators:
5
- - found
6
- language:
7
- - en
8
- license:
9
- - cc-by-4.0
10
- multilinguality:
11
- - monolingual
12
- size_categories:
13
- - 10K<n<100K
14
- source_datasets:
15
- - original
16
- task_categories:
17
- - token-classification
18
- task_ids: []
19
- paperswithcode_id: irc-disentanglement
20
- pretty_name: IRC Disentanglement
21
- tags:
22
- - conversation-disentanglement
23
- dataset_info:
24
- - config_name: ubuntu
25
- features:
26
- - name: id
27
- dtype: int32
28
- - name: raw
29
- dtype: string
30
- - name: ascii
31
- dtype: string
32
- - name: tokenized
33
- dtype: string
34
- - name: date
35
- dtype: string
36
- - name: connections
37
- sequence: int32
38
- splits:
39
- - name: train
40
- num_bytes: 56012854
41
- num_examples: 220616
42
- - name: validation
43
- num_bytes: 3081479
44
- num_examples: 12510
45
- - name: test
46
- num_bytes: 3919900
47
- num_examples: 15010
48
- download_size: 118470210
49
- dataset_size: 63014233
50
- - config_name: channel_two
51
- features:
52
- - name: id
53
- dtype: int32
54
- - name: raw
55
- dtype: string
56
- - name: ascii
57
- dtype: string
58
- - name: tokenized
59
- dtype: string
60
- - name: connections
61
- sequence: int32
62
- splits:
63
- - name: dev
64
- num_bytes: 197505
65
- num_examples: 1001
66
- - name: pilot
67
- num_bytes: 92663
68
- num_examples: 501
69
- - name: test
70
- num_bytes: 186823
71
- num_examples: 1001
72
- - name: pilot_dev
73
- num_bytes: 290175
74
- num_examples: 1501
75
- - name: all_
76
- num_bytes: 496524
77
- num_examples: 2602
78
- download_size: 118470210
79
- dataset_size: 1263690
80
- ---
81
-
82
-
83
- # Dataset Card for IRC Disentanglement
84
-
85
- ## Table of Contents
86
- - [Dataset Description](#dataset-description)
87
- - [Dataset Summary](#dataset-summary)
88
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
89
- - [Languages](#languages)
90
- - [Dataset Structure](#dataset-structure)
91
- - [Data Instances](#data-instances)
92
- - [Data Fields](#data-fields)
93
- - [Data Splits](#data-splits)
94
- - [Dataset Creation](#dataset-creation)
95
- - [Curation Rationale](#curation-rationale)
96
- - [Source Data](#source-data)
97
- - [Annotations](#annotations)
98
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
99
- - [Considerations for Using the Data](#considerations-for-using-the-data)
100
- - [Social Impact of Dataset](#social-impact-of-dataset)
101
- - [Discussion of Biases](#discussion-of-biases)
102
- - [Other Known Limitations](#other-known-limitations)
103
- - [Additional Information](#additional-information)
104
- - [Dataset Curators](#dataset-curators)
105
- - [Licensing Information](#licensing-information)
106
- - [Citation Information](#citation-information)
107
- - [Contributions](#contributions)
108
- - [Acknowledgments](#acknowledgments)
109
-
110
- ## Dataset Description
111
-
112
- - **Homepage:** https://jkk.name/irc-disentanglement/
113
- - **Repository:** https://github.com/jkkummerfeld/irc-disentanglement/tree/master/data
114
- - **Paper:** https://aclanthology.org/P19-1374/
115
- - **Leaderboard:** NA
116
- - **Point of Contact:** jkummerf@umich.edu
117
-
118
- ### Dataset Summary
119
-
120
- Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. This new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. The dataset is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context.
121
-
122
- Note, the Github repository for the dataset also contains several useful tools for:
123
-
124
- - Conversion (e.g. extracting conversations from graphs)
125
- - Evaluation
126
- - Preprocessing
127
- - Word embeddings trained on the full Ubuntu logs in 2018
128
-
129
- ### Supported Tasks and Leaderboards
130
-
131
- Conversational Disentanglement
132
-
133
- ### Languages
134
-
135
- English (en)
136
-
137
- ## Dataset Structure
138
-
139
- ### Data Instances
140
-
141
- For Ubuntu:
142
-
143
- data["train"][1050]
144
-
145
- ```
146
- {
147
-
148
- 'ascii': "[03:57] <Xophe> (also, I'm guessing that this isn't a good place to report minor but annoying bugs... what is?)",
149
-
150
- 'connections': [1048, 1054, 1055, 1072, 1073],
151
-
152
- 'date': '2004-12-25',
153
-
154
- 'id': 1050,
155
-
156
- 'raw': "[03:57] <Xophe> (also, I'm guessing that this isn't a good place to report minor but annoying bugs... what is?)",
157
-
158
- 'tokenized': "<s> ( also , i 'm guessing that this is n't a good place to report minor but annoying bugs ... what is ?) </s>"
159
-
160
- }
161
- ```
162
-
163
- For Channel_two:
164
-
165
- data["train"][50]
166
-
167
- ```
168
- {
169
-
170
- 'ascii': "[01:04] <Felicia> Chanel: i don't know off hand sorry",
171
-
172
- 'connections': [49, 53],
173
-
174
- 'id': 50,
175
-
176
- 'raw': "[01:04] <Felicia> Chanel: i don't know off hand sorry",
177
-
178
- 'tokenized': "<s> <user> : i do n't know off hand sorry </s>"
179
-
180
- }
181
- ```
182
-
183
- ### Data Fields
184
-
185
- 'id' : The id of the message, this is the value that would be in the 'connections' of associated messages.
186
-
187
- 'raw' : The original message from the IRC log, as downloaded.
188
-
189
- 'ascii' : The raw message converted to ascii (unconvertable characters are replaced with a special word).
190
-
191
- 'tokenized' : The same message with automatic tokenisation and replacement of rare words with placeholder symbols.
192
-
193
- 'connections' : The indices of linked messages.
194
-
195
- (only ubuntu) 'date' : The date the messages are from. The labelling for each date only start after the first 1000 messages of that date.
196
-
197
- ### Data Splits
198
-
199
-
200
- The dataset has 4 parts:
201
-
202
- | Part | Number of Annotated Messages |
203
- | ------------- | ------------------------------------------- |
204
- | Train | 67,463 |
205
- | Dev | 2,500 |
206
- | Test | 5,000 |
207
- | Channel 2 | 2,600 |
208
-
209
-
210
- ## Dataset Creation
211
-
212
- ### Curation Rationale
213
-
214
- IRC is a synchronous chat setting with a long history of use.
215
- Several channels log all messages and make them publicly available.
216
- The Ubuntu channel is particularly heavily used and has been the subject of several academic studies.
217
-
218
- Data was selected from the channel in order to capture the diversity of situations in the channel (e.g. when there are many users or very few users).
219
- For full details, see the [annotation information page](https://github.com/jkkummerfeld/irc-disentanglement/blob/master/data/READ.history.md).
220
-
221
- ### Source Data
222
-
223
- #### Initial Data Collection and Normalization
224
-
225
- Data was collected from the Ubuntu IRC channel logs, which are publicly available at [https://irclogs.ubuntu.com/](https://irclogs.ubuntu.com/).
226
- The raw files are included, as well as two other versions:
227
-
228
- - ASCII, converted using the script [make_txt.py](https://github.com/jkkummerfeld/irc-disentanglement/blob/master/tools/preprocessing/make-txt.py)
229
- - Tok, tokenised text with rare words replaced by UNK using the script [dstc8-tokenise.py](https://github.com/jkkummerfeld/irc-disentanglement/blob/master/tools/preprocessing/dstc8-tokenise.py)
230
-
231
- The raw channel two data is from prior work [(Elsner and Charniak, 2008)](https://www.aclweb.org/anthology/P08-1095.pdf)].
232
-
233
- #### Who are the source language producers?
234
-
235
- The text is from a large group of internet users asking questions and providing answers related to Ubuntu.
236
-
237
- ### Annotations
238
-
239
- #### Annotation process
240
-
241
- The data is expert annotated with:
242
-
243
- - Training, one annotation per line in general, a small portion is double-annotated and adjudicated
244
- - Dev, Channel 2, double annotated and adjudicated
245
- - Test, triple annotated and adjudicated
246
-
247
- | Part | Annotators | Adjudication? |
248
- | ------------- | --------------- | ------------------------------------- |
249
- | Train | 1 or 2 per file | For files with 2 annotators (only 10) |
250
- | Dev | 2 | Yes |
251
- | Test | 3 | Yes |
252
- | Channel 2 | 2 | Yes |
253
-
254
- #### Who are the annotators?
255
-
256
- Students and a postdoc at the University of Michigan.
257
- Everyone involved went through a training process with feedback to learn the annotation guidelines.
258
-
259
- ### Personal and Sensitive Information
260
-
261
- No content is removed or obfuscated.
262
- There is probably personal information in the dataset from users.
263
-
264
- ## Considerations for Using the Data
265
-
266
- ### Social Impact of Dataset
267
-
268
- The raw data is already available online and the annotations do not significantly provide additional information that could have a direct social impact.
269
-
270
- ### Discussion of Biases
271
-
272
- The data is mainly from a single technical domain (Ubuntu tech support) that probably has a demographic skew of some sort.
273
- Given that users are only identified by their self-selected usernames, it is difficult to know more about the authors.
274
-
275
- ### Other Known Limitations
276
-
277
- Being focused on a single language and a single channel means that the data is likely capturing a particular set of conventions in communication.
278
- Those conventions may not apply to other channels, or beyond IRC.
279
-
280
- ## Additional Information
281
-
282
- ### Dataset Curators
283
-
284
- Jonathan K. Kummerfeld
285
-
286
- ### Licensing Information
287
-
288
- Creative Commons Attribution 4.0
289
-
290
- ### Citation Information
291
-
292
- ```
293
- @inproceedings{kummerfeld-etal-2019-large,
294
- title = "A Large-Scale Corpus for Conversation Disentanglement",
295
- author = "Kummerfeld, Jonathan K. and
296
- Gouravajhala, Sai R. and
297
- Peper, Joseph J. and
298
- Athreya, Vignesh and
299
- Gunasekara, Chulaka and
300
- Ganhotra, Jatin and
301
- Patel, Siva Sankalp and
302
- Polymenakos, Lazaros C and
303
- Lasecki, Walter",
304
- booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
305
- month = jul,
306
- year = "2019",
307
- address = "Florence, Italy",
308
- publisher = "Association for Computational Linguistics",
309
- url = "https://aclanthology.org/P19-1374",
310
- doi = "10.18653/v1/P19-1374",
311
- pages = "3846--3856",
312
- arxiv = "https://arxiv.org/abs/1810.11118",
313
- software = "https://jkk.name/irc-disentanglement",
314
- data = "https://jkk.name/irc-disentanglement",
315
- abstract = "Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. We created a new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. Our data is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. We use our data to re-examine prior work, in particular, finding that 89{\%} of conversations in a widely used dialogue corpus are either missing messages or contain extra messages. Our manually-annotated data presents an opportunity to develop robust data-driven methods for conversation disentanglement, which will help advance dialogue research.",
316
- }
317
- ```
318
-
319
- ### Contributions
320
-
321
- Thanks to [@dhruvjoshi1998](https://github.com/dhruvjoshi1998) for adding this dataset.
322
-
323
- Thanks to [@jkkummerfeld](https://github.com/jkkummerfeld) for improvements to the documentation.
324
-
325
-
326
- ### Acknowledgments
327
-
328
- This material is based in part upon work supported by IBM under contract 4915012629. Any opinions, findings, conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of IBM.
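
The deleted card's Data Instances examples correspond to usage along these lines (a minimal sketch, assuming the `datasets` library and the pre-conversion `irc_disentangle` loading script shown further down this diff):

```
from datasets import load_dataset

# "ubuntu" is the default config; "channel_two" is the other one.
data = load_dataset("irc_disentangle", "ubuntu")

# Reproduces the instance shown in the deleted card.
example = data["train"][1050]
print(example["ascii"])        # the ASCII-converted message text
print(example["connections"])  # ids of linked messages: [1048, 1054, 1055, 1072, 1073]
```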
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"ubuntu": {"description": "Disentangling conversations mixed together in a single stream of messages is\na difficult task, made harder by the lack of large manually annotated\ndatasets. This new dataset of 77,563 messages manually annotated with\nreply-structure graphs that both disentangle conversations and define\ninternal conversation structure. The dataset is 16 times larger than all\npreviously released datasets combined, the first to include adjudication of\nannotation disagreements, and the first to include context.\n", "citation": "@inproceedings{kummerfeld-etal-2019-large,\n title = \"A Large-Scale Corpus for Conversation Disentanglement\",\n author = \"Kummerfeld, Jonathan K. and\n Gouravajhala, Sai R. and\n Peper, Joseph J. and\n Athreya, Vignesh and\n Gunasekara, Chulaka and\n Ganhotra, Jatin and\n Patel, Siva Sankalp and\n Polymenakos, Lazaros C and\n Lasecki, Walter\",\n booktitle = \"Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics\",\n month = jul,\n year = \"2019\",\n address = \"Florence, Italy\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/P19-1374\",\n doi = \"10.18653/v1/P19-1374\",\n pages = \"3846--3856\",\n arxiv = \"https://arxiv.org/abs/1810.11118\",\n software = \"https://jkk.name/irc-disentanglement\",\n data = \"https://jkk.name/irc-disentanglement\",\n abstract = \"Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. We created a new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. Our data is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. We use our data to re-examine prior work, in particular, finding that 89% of conversations in a widely used dialogue corpus are either missing messages or contain extra messages. Our manually-annotated data presents an opportunity to develop robust data-driven methods for conversation disentanglement, which will help advance dialogue research.\",\n}\n", "homepage": "https://jkk.name/irc-disentanglement/", "license": "Creative Commons Attribution 4.0 International Public License", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "raw": {"dtype": "string", "id": null, "_type": "Value"}, "ascii": {"dtype": "string", "id": null, "_type": "Value"}, "tokenized": {"dtype": "string", "id": null, "_type": "Value"}, "date": {"dtype": "string", "id": null, "_type": "Value"}, "connections": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "irc_disentangle", "config_name": "ubuntu", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 56012854, "num_examples": 220616, "dataset_name": "irc_disentangle"}, "validation": {"name": "validation", "num_bytes": 3081479, "num_examples": 12510, "dataset_name": "irc_disentangle"}, "test": {"name": "test", "num_bytes": 3919900, "num_examples": 15010, "dataset_name": "irc_disentangle"}}, "download_checksums": {"https://github.com/jkkummerfeld/irc-disentanglement/tarball/master": {"num_bytes": 118470210, "checksum": "e5232a65f5e97805a366a19b4c0b127dfcf91981a8681b33855bfb6c72706c2f"}}, "download_size": 118470210, "post_processing_size": null, "dataset_size": 63014233, "size_in_bytes": 181484443}, "channel_two": {"description": "Disentangling conversations mixed together in a single stream of messages is\na difficult task, made harder by the lack of large manually annotated\ndatasets. This new dataset of 77,563 messages manually annotated with\nreply-structure graphs that both disentangle conversations and define\ninternal conversation structure. The dataset is 16 times larger than all\npreviously released datasets combined, the first to include adjudication of\nannotation disagreements, and the first to include context.\n", "citation": "@inproceedings{kummerfeld-etal-2019-large,\n title = \"A Large-Scale Corpus for Conversation Disentanglement\",\n author = \"Kummerfeld, Jonathan K. and\n Gouravajhala, Sai R. and\n Peper, Joseph J. and\n Athreya, Vignesh and\n Gunasekara, Chulaka and\n Ganhotra, Jatin and\n Patel, Siva Sankalp and\n Polymenakos, Lazaros C and\n Lasecki, Walter\",\n booktitle = \"Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics\",\n month = jul,\n year = \"2019\",\n address = \"Florence, Italy\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/P19-1374\",\n doi = \"10.18653/v1/P19-1374\",\n pages = \"3846--3856\",\n arxiv = \"https://arxiv.org/abs/1810.11118\",\n software = \"https://jkk.name/irc-disentanglement\",\n data = \"https://jkk.name/irc-disentanglement\",\n abstract = \"Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. We created a new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. Our data is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. We use our data to re-examine prior work, in particular, finding that 89% of conversations in a widely used dialogue corpus are either missing messages or contain extra messages. Our manually-annotated data presents an opportunity to develop robust data-driven methods for conversation disentanglement, which will help advance dialogue research.\",\n}\n", "homepage": "https://jkk.name/irc-disentanglement/", "license": "Creative Commons Attribution 4.0 International Public License", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "raw": {"dtype": "string", "id": null, "_type": "Value"}, "ascii": {"dtype": "string", "id": null, "_type": "Value"}, "tokenized": {"dtype": "string", "id": null, "_type": "Value"}, "connections": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "irc_disentangle", "config_name": "channel_two", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"dev": {"name": "dev", "num_bytes": 197505, "num_examples": 1001, "dataset_name": "irc_disentangle"}, "pilot": {"name": "pilot", "num_bytes": 92663, "num_examples": 501, "dataset_name": "irc_disentangle"}, "test": {"name": "test", "num_bytes": 186823, "num_examples": 1001, "dataset_name": "irc_disentangle"}, "pilot_dev": {"name": "pilot_dev", "num_bytes": 290175, "num_examples": 1501, "dataset_name": "irc_disentangle"}, "all_": {"name": "all_", "num_bytes": 496524, "num_examples": 2602, "dataset_name": "irc_disentangle"}}, "download_checksums": {"https://github.com/jkkummerfeld/irc-disentanglement/tarball/master": {"num_bytes": 118470210, "checksum": "e5232a65f5e97805a366a19b4c0b127dfcf91981a8681b33855bfb6c72706c2f"}}, "download_size": 118470210, "post_processing_size": null, "dataset_size": 1263690, "size_in_bytes": 119733900}}
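
The removed dataset_infos.json carried the per-split sizes; before this commit they could be inspected directly (a small sketch, assuming a pre-conversion checkout of the repository):

```
import json

# Inspect the per-split metadata that this commit removes.
with open("dataset_infos.json") as f:
    infos = json.load(f)

print(infos["ubuntu"]["splits"]["train"]["num_examples"])     # 220616
print(infos["channel_two"]["splits"]["dev"]["num_examples"])  # 1001
```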
 
 
irc_disentangle.py DELETED
@@ -1,272 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """Dataset of disentangled IRC"""
-
-
- import glob
- import os
- from pathlib import Path
-
- import datasets
-
-
- _CITATION = """\
- @inproceedings{kummerfeld-etal-2019-large,
-     title = "A Large-Scale Corpus for Conversation Disentanglement",
-     author = "Kummerfeld, Jonathan K. and
-       Gouravajhala, Sai R. and
-       Peper, Joseph J. and
-       Athreya, Vignesh and
-       Gunasekara, Chulaka and
-       Ganhotra, Jatin and
-       Patel, Siva Sankalp and
-       Polymenakos, Lazaros C and
-       Lasecki, Walter",
-     booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
-     month = jul,
-     year = "2019",
-     address = "Florence, Italy",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/P19-1374",
-     doi = "10.18653/v1/P19-1374",
-     pages = "3846--3856",
-     arxiv = "https://arxiv.org/abs/1810.11118",
-     software = "https://jkk.name/irc-disentanglement",
-     data = "https://jkk.name/irc-disentanglement",
-     abstract = "Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. We created a new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. Our data is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. We use our data to re-examine prior work, in particular, finding that 89{\%} of conversations in a widely used dialogue corpus are either missing messages or contain extra messages. Our manually-annotated data presents an opportunity to develop robust data-driven methods for conversation disentanglement, which will help advance dialogue research.",
- }
- """
-
- _DESCRIPTION = """\
- Disentangling conversations mixed together in a single stream of messages is
- a difficult task, made harder by the lack of large manually annotated
- datasets. This new dataset of 77,563 messages manually annotated with
- reply-structure graphs that both disentangle conversations and define
- internal conversation structure. The dataset is 16 times larger than all
- previously released datasets combined, the first to include adjudication of
- annotation disagreements, and the first to include context.
- """
-
- _HOMEPAGE = "https://jkk.name/irc-disentanglement/"
-
- _LICENSE = "Creative Commons Attribution 4.0 International Public License"
-
- _URL = "https://github.com/jkkummerfeld/irc-disentanglement/tarball/master"
-
-
- class IRCDisentangle(datasets.GeneratorBasedBuilder):
-     """IRCDisentangle dataset"""
-
-     VERSION = datasets.Version("1.0.0")
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(
-             name="ubuntu",
-             version=VERSION,
-             description="This part of the dataset is the annotated conversations from the Ubuntu channel",
-         ),
-         datasets.BuilderConfig(
-             name="channel_two",
-             version=VERSION,
-             description="This part of the dataset is the annotated conversations from Channel Two",
-         ),
-     ]
-
-     DEFAULT_CONFIG_NAME = "ubuntu"
-
-     def _info(self):
-         if self.config.name == "ubuntu":
-             features = datasets.Features(
-                 {
-                     "id": datasets.Value("int32"),
-                     "raw": datasets.Value("string"),
-                     "ascii": datasets.Value("string"),
-                     "tokenized": datasets.Value("string"),
-                     "date": datasets.Value("string"),
-                     "connections": datasets.features.Sequence(datasets.Value("int32")),
-                 }
-             )
-         elif self.config.name == "channel_two":
-             features = datasets.Features(
-                 {
-                     "id": datasets.Value("int32"),
-                     "raw": datasets.Value("string"),
-                     "ascii": datasets.Value("string"),
-                     "tokenized": datasets.Value("string"),
-                     "connections": datasets.features.Sequence(datasets.Value("int32")),
-                 }
-             )
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         dl_dir = dl_manager.download_and_extract(_URL)
-         filepath = os.path.join(dl_dir, "jkkummerfeld-irc-disentanglement-35f0a40", "data")
-         split_names = {datasets.Split.TRAIN: "train", datasets.Split.VALIDATION: "dev", datasets.Split.TEST: "test"}
-         if self.config.name == "ubuntu":
-             return [
-                 datasets.SplitGenerator(
-                     name=split,
-                     gen_kwargs={
-                         "filepath": os.path.join(filepath, split_name),
-                         "split": split_name,
-                     },
-                 )
-                 for split, split_name in split_names.items()
-             ]
-         elif self.config.name == "channel_two":
-             filepath = os.path.join(filepath, "channel-two")
-             return [
-                 datasets.SplitGenerator(
-                     name="dev",
-                     gen_kwargs={
-                         "filepath": filepath,
-                         "split": "dev",
-                     },
-                 ),
-                 datasets.SplitGenerator(
-                     name="pilot",
-                     gen_kwargs={
-                         "filepath": filepath,
-                         "split": "pilot",
-                     },
-                 ),
-                 datasets.SplitGenerator(
-                     name="test",
-                     gen_kwargs={
-                         "filepath": filepath,
-                         "split": "test",
-                     },
-                 ),
-                 datasets.SplitGenerator(
-                     name="pilot_dev",
-                     gen_kwargs={
-                         "filepath": filepath,
-                         "split": "pilot-dev",
-                     },
-                 ),
-                 datasets.SplitGenerator(
-                     name="all_",
-                     gen_kwargs={
-                         "filepath": filepath,
-                         "split": "all",
-                     },
-                 ),
-             ]
-
-     def _generate_examples(self, filepath, split):
-         """Yields examples."""
-
-         if self.config.name == "ubuntu":
-             # run the loop below once per date
-             all_files = sorted(glob.glob(os.path.join(filepath, "*.annotation.txt")))
-             all_dates = [Path(filename).name[:10] for filename in all_files]
-             all_info = [Path(filename).name[10:-15] for filename in all_files]
-         elif self.config.name == "channel_two":
-             # run the loop once (there are no dates for this config)
-             all_dates = ["_"]
-             all_info = ["_"]
-
-         last_id = 0
-         id_ = 0
-
-         for date, info in zip(all_dates, all_info):
-             if self.config.name == "ubuntu":
-                 # load the files for a given date (plus any additional info) for each split
-                 raw_path = os.path.join(filepath, f"{date}{info}.raw.txt")
-                 ascii_path = os.path.join(filepath, f"{date}{info}.ascii.txt")
-                 tok_path = os.path.join(filepath, f"{date}{info}.tok.txt")
-                 annot_path = os.path.join(filepath, f"{date}{info}.annotation.txt")
-             elif self.config.name == "channel_two":
-                 # load the files of the requested split
-                 raw_path = os.path.join(filepath, f"channel-two.{split}.raw.txt")
-                 ascii_path = os.path.join(filepath, f"channel-two.{split}.ascii.txt")
-                 tok_path = os.path.join(filepath, f"channel-two.{split}.tok.txt")
-                 annot_path = os.path.join(filepath, f"channel-two.{split}.annotation.txt")
-
-             with open(raw_path, encoding="utf-8") as f_raw, open(ascii_path, encoding="utf-8") as f_ascii, open(
-                 tok_path, encoding="utf-8"
-             ) as f_tok, open(annot_path, encoding="utf-8") as f_annot:
-
-                 # split each file into lines
-                 raw_sentences = f_raw.read().split("\n")
-                 ascii_sentences = f_ascii.read().split("\n")
-                 tok_sentences = f_tok.read().split("\n")
-                 annot_lines = f_annot.read().split("\n")
-
-                 assert (
-                     len(raw_sentences) == len(ascii_sentences) == len(tok_sentences)
-                 ), "Sizes do not match: %d vs %d vs %d for Raw Sentences vs Ascii Sentences vs Tokenized Sentences." % (
-                     len(raw_sentences),
-                     len(ascii_sentences),
-                     len(tok_sentences),
-                 )
-
-                 annotation_pairs = []
-
-                 # parse each annotation line into a pair of message ids
-                 for annot in annot_lines:
-                     line = annot.split(" ")
-                     if len(line) > 1:
-                         annotation_pairs.append((int(line[0]), int(line[1])))
-
-                 annotations = dict()
-                 for row in range(last_id, last_id + len(raw_sentences)):
-                     annotations[row] = set()
-
-                 for (a, b) in annotation_pairs:
-                     # required for dummy data creation
-                     if last_id + a not in annotations:
-                         annotations[last_id + a] = set()
-                     if last_id + b not in annotations:
-                         annotations[last_id + b] = set()
-
-                     # add annotation 'b' to a's annotation set, and vice versa
-                     annotations[last_id + a].add(last_id + b)
-                     annotations[last_id + b].add(last_id + a)
-
-                 for i in range(len(raw_sentences)):
-                     # yield all 3 forms of the chat message, the date (if applicable), and the annotation set for that sentence
-                     if self.config.name == "ubuntu":
-                         yield id_, {
-                             "id": id_,
-                             "raw": raw_sentences[i],
-                             "ascii": ascii_sentences[i],
-                             "tokenized": tok_sentences[i],
-                             "date": date,
-                             "connections": sorted(annotations[id_]),
-                         }
-                     elif self.config.name == "channel_two":
-                         yield id_, {
-                             "id": id_,
-                             "raw": raw_sentences[i],
-                             "ascii": ascii_sentences[i],
-                             "tokenized": tok_sentences[i],
-                             "connections": sorted(annotations[i]),
-                         }
-                     id_ += 1
-
-             # continue counting from the position where the last file left off
-             last_id = id_
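
The heart of the deleted loader is the symmetric reply-graph construction in `_generate_examples`: each annotated pair links both messages to each other. In isolation, that logic behaves like this (a standalone sketch using made-up ids, without the per-file id offset `last_id`):

```
# Mirror of the loop over annotation_pairs in the deleted script:
# each annotated pair (a, b) links both messages to each other.
annotation_pairs = [(1048, 1050), (1050, 1054)]  # made-up ids

annotations = {}
for a, b in annotation_pairs:
    annotations.setdefault(a, set()).add(b)
    annotations.setdefault(b, set()).add(a)

assert sorted(annotations[1050]) == [1048, 1054]
```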
ubuntu/test/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6a3a4dec7f1cfe05fd47a5b00ec5969cff5a53e33f3120a3b5d522ed3e88c83
+ size 2116104
ubuntu/train/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a7ef4cc537e2c88d3202e2404b7dc76df177856a94d6edaedd89b418ab896d52
+ size 29452339
ubuntu/validation/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a5aeada72085ce29c5f24127918bc9255665f3be0a5112568d504be903741d80
+ size 1645562
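
After this conversion, the splits can be read straight from the Parquet files (a sketch, assuming `pandas` with `pyarrow` installed and a local checkout with the LFS objects above pulled):

```
import pandas as pd

# The LFS pointers above resolve to ordinary Parquet files once pulled.
df = pd.read_parquet("ubuntu/train/0000.parquet")
print(len(df))              # 220616 rows expected, per the removed metadata
print(df.columns.tolist())  # presumably: id, raw, ascii, tokenized, date, connections
```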