system HF staff committed on
Commit
0814fb9
0 Parent(s):

Update files from the datasets library (from 1.6.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.6.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,189 @@
+ ---
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - cc-by-3-0
+ multilinguality:
+ - monolingual
+ size_categories:
+   task1_qa:
+   - 100K<n<1M
+   task2_recs:
+   - n>1M
+   task3_qarecs:
+   - 100K<n<1M
+   task4_reddit:
+   - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - sequence-modeling
+ task_ids:
+ - dialogue-modeling
+ ---
+
+ # Dataset Card for MDD
+
+ ## Table of Contents
+ - [Dataset Card for MDD](#dataset-card-for-mdd)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+     - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [The bAbI project](https://research.fb.com/downloads/babi/)
+ - **Repository:**
+ - **Paper:** [arXiv Paper](https://arxiv.org/pdf/1511.06931.pdf)
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ The Movie Dialog dataset (MDD) is designed to measure how well models can perform goal-oriented and non-goal-oriented dialog centered around the topic of movies (question answering, recommendation and discussion), drawn from various movie review sources such as MovieLens and OMDb.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ The data is in English, as written by users on the OMDb and MovieLens websites.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An instance from the `task3_qarecs` config's `train` split:
+
+ ```
+ {'dialogue_turns': {'speaker': [0, 1, 0, 1, 0, 1], 'utterance': ["I really like Jaws, Bottle Rocket, Saving Private Ryan, Tommy Boy, The Muppet Movie, Face/Off, and Cool Hand Luke. I'm looking for a Documentary movie.", 'Beyond the Mat', 'Who is that directed by?', 'Barry W. Blaustein', 'I like Jon Fauer movies more. Do you know anything else?', 'Cinematographer Style']}}
+ ```
+
+ An instance from the `task4_reddit` config's `cand_valid` split:
+
+ ```
+ {'dialogue_turns': {'speaker': [0], 'utterance': ['MORTAL KOMBAT !']}}
+ ```
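+
+ Such instances can be reproduced with the `datasets` library; the snippet below is a minimal sketch and assumes the loading script is available under the dataset name `mdd`:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the QA-plus-recommendations configuration (the dataset name "mdd" is an assumption here).
+ dataset = load_dataset("mdd", "task3_qarecs")
+
+ # One example: parallel lists of speaker ids and utterances.
+ example = dataset["train"][0]
+ print(example["dialogue_turns"]["speaker"])
+ print(example["dialogue_turns"]["utterance"])
+ ```
+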
+ ### Data Fields
+
+ For all configurations:
+ - `dialogue_turns`: a dictionary feature containing:
+   - `speaker`: an integer (`0` or `1`) indicating which speaker wrote the utterance.
+   - `utterance`: a `string` feature containing the text of the utterance.
+
+ ### Data Splits
+
+ The splits and corresponding sizes are:
+
+ |config      |  train| test|validation|cand_valid|cand_test|
+ |:-----------|------:|----:|---------:|---------:|--------:|
+ |task1_qa    |  96185| 9952|      9968|         -|        -|
+ |task2_recs  |1000000|10000|     10000|         -|        -|
+ |task3_qarecs| 952125| 4915|      5052|         -|        -|
+ |task4_reddit| 945198|10000|     10000|     10000|    10000|
+
+ The `cand_valid` and `cand_test` splits contain negative candidates for the `task4_reddit` configuration: the true response is ranked against these candidates, and hits@k (or another ranking metric) is reported (see the paper).
+
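+ As a purely illustrative sketch (not code shipped with the dataset), ranking a true response against the negative candidates with hits@k could look like this; the scores are hypothetical model outputs:
+
+ ```python
+ def hits_at_k(true_score, candidate_scores, k=10):
+     """Return 1 if the true response ranks within the top k against the negative candidates."""
+     rank = 1 + sum(1 for score in candidate_scores if score > true_score)
+     return int(rank <= k)
+
+
+ # Hypothetical scores for one dialogue: the true response vs. four negative candidates.
+ print(hits_at_k(0.82, [0.91, 0.40, 0.33, 0.75], k=2))  # -> 1 (the true response ranks 2nd)
+ ```
+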
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The construction of the tasks depended on some existing datasets:
+
+ 1) MovieLens. The data was downloaded from http://grouplens.org/datasets/movielens/20m/ on May 27th, 2015.
+
+ 2) OMDb. The data was downloaded from http://beforethecode.com/projects/omdb/download.aspx on May 28th, 2015.
+
+ 3) For `task4_reddit`, the data is a processed subset (movie subreddit only) of the data available at:
+ https://www.reddit.com/r/datasets/comments/3bxlg7
+
+ #### Who are the source language producers?
+
+ Users of the MovieLens, OMDb, and Reddit websites, among others.
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston (at Facebook Research).
+
+ ### Licensing Information
+
+ ```
+ Creative Commons Attribution 3.0 License
+ ```
+
+ ### Citation Information
+
+ ```
+ @misc{dodge2016evaluating,
+ title={Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems},
+ author={Jesse Dodge and Andreea Gane and Xiang Zhang and Antoine Bordes and Sumit Chopra and Alexander Miller and Arthur Szlam and Jason Weston},
+ year={2016},
+ eprint={1511.06931},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"task1_qa": {"description": "The Movie Dialog dataset (MDD) is designed to measure how well\nmodels can perform at goal and non-goal orientated dialog\ncentered around the topic of movies (question answering,\nrecommendation and discussion).\n\n", "citation": "@misc{dodge2016evaluating,\n title={Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems},\n author={Jesse Dodge and Andreea Gane and Xiang Zhang and Antoine Bordes and Sumit Chopra and Alexander Miller and Arthur Szlam and Jason Weston},\n year={2016},\n eprint={1511.06931},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://research.fb.com/downloads/babi/", "license": "Creative Commons Attribution 3.0 License", "features": {"dialogue_turns": {"feature": {"speaker": {"dtype": "int32", "id": null, "_type": "Value"}, "utterance": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "mdd", "config_name": "task1_qa", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 8621120, "num_examples": 96185, "dataset_name": "mdd"}, "test": {"name": "test", "num_bytes": 894590, "num_examples": 9952, "dataset_name": "mdd"}, "validation": {"name": "validation", "num_bytes": 892540, "num_examples": 9968, "dataset_name": "mdd"}}, "download_checksums": {"http://www.thespermwhale.com/jaseweston/babi/movie_dialog_dataset.tgz": {"num_bytes": 135614957, "checksum": "59194c0ac331e2672a68f152a86571be79bde3938bb6ace3eecba7df1a06a23f"}}, "download_size": 135614957, "post_processing_size": null, "dataset_size": 10408250, "size_in_bytes": 146023207}, "task2_recs": {"description": "The Movie Dialog dataset (MDD) is designed to measure how well\nmodels can perform at goal and non-goal orientated dialog\ncentered around the topic of movies (question answering,\nrecommendation and discussion).\n\n", "citation": "@misc{dodge2016evaluating,\n title={Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems},\n author={Jesse Dodge and Andreea Gane and Xiang Zhang and Antoine Bordes and Sumit Chopra and Alexander Miller and Arthur Szlam and Jason Weston},\n year={2016},\n eprint={1511.06931},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://research.fb.com/downloads/babi/", "license": "Creative Commons Attribution 3.0 License", "features": {"dialogue_turns": {"feature": {"speaker": {"dtype": "int32", "id": null, "_type": "Value"}, "utterance": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "mdd", "config_name": "task2_recs", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 205936579, "num_examples": 1000000, "dataset_name": "mdd"}, "test": {"name": "test", "num_bytes": 2064509, "num_examples": 10000, "dataset_name": "mdd"}, "validation": {"name": "validation", "num_bytes": 2057290, "num_examples": 10000, "dataset_name": "mdd"}}, "download_checksums": {"http://www.thespermwhale.com/jaseweston/babi/movie_dialog_dataset.tgz": {"num_bytes": 135614957, "checksum": "59194c0ac331e2672a68f152a86571be79bde3938bb6ace3eecba7df1a06a23f"}}, "download_size": 135614957, "post_processing_size": null, "dataset_size": 210058378, "size_in_bytes": 345673335}, "task3_qarecs": 
{"description": "The Movie Dialog dataset (MDD) is designed to measure how well\nmodels can perform at goal and non-goal orientated dialog\ncentered around the topic of movies (question answering,\nrecommendation and discussion).\n\n", "citation": "@misc{dodge2016evaluating,\n title={Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems},\n author={Jesse Dodge and Andreea Gane and Xiang Zhang and Antoine Bordes and Sumit Chopra and Alexander Miller and Arthur Szlam and Jason Weston},\n year={2016},\n eprint={1511.06931},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://research.fb.com/downloads/babi/", "license": "Creative Commons Attribution 3.0 License", "features": {"dialogue_turns": {"feature": {"speaker": {"dtype": "int32", "id": null, "_type": "Value"}, "utterance": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "mdd", "config_name": "task3_qarecs", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 356789364, "num_examples": 952125, "dataset_name": "mdd"}, "test": {"name": "test", "num_bytes": 1730291, "num_examples": 4915, "dataset_name": "mdd"}, "validation": {"name": "validation", "num_bytes": 1776506, "num_examples": 5052, "dataset_name": "mdd"}}, "download_checksums": {"http://www.thespermwhale.com/jaseweston/babi/movie_dialog_dataset.tgz": {"num_bytes": 135614957, "checksum": "59194c0ac331e2672a68f152a86571be79bde3938bb6ace3eecba7df1a06a23f"}}, "download_size": 135614957, "post_processing_size": null, "dataset_size": 360296161, "size_in_bytes": 495911118}, "task4_reddit": {"description": "The Movie Dialog dataset (MDD) is designed to measure how well\nmodels can perform at goal and non-goal orientated dialog\ncentered around the topic of movies (question answering,\nrecommendation and discussion).\n\n", "citation": "@misc{dodge2016evaluating,\n title={Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems},\n author={Jesse Dodge and Andreea Gane and Xiang Zhang and Antoine Bordes and Sumit Chopra and Alexander Miller and Arthur Szlam and Jason Weston},\n year={2016},\n eprint={1511.06931},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://research.fb.com/downloads/babi/", "license": "Creative Commons Attribution 3.0 License", "features": {"dialogue_turns": {"feature": {"speaker": {"dtype": "int32", "id": null, "_type": "Value"}, "utterance": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "mdd", "config_name": "task4_reddit", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 497864160, "num_examples": 945198, "dataset_name": "mdd"}, "test": {"name": "test", "num_bytes": 5220295, "num_examples": 10000, "dataset_name": "mdd"}, "validation": {"name": "validation", "num_bytes": 5372702, "num_examples": 10000, "dataset_name": "mdd"}, "cand_valid": {"name": "cand_valid", "num_bytes": 1521633, "num_examples": 10000, "dataset_name": "mdd"}, "cand_test": {"name": "cand_test", "num_bytes": 1567235, "num_examples": 10000, "dataset_name": "mdd"}}, "download_checksums": {"http://tinyurl.com/p6tyohj": {"num_bytes": 192209920, "checksum": 
"6316a6a5c563bc3c133a4a1e611d8ca638c61582f331c500697d9090efd215bb"}}, "download_size": 192209920, "post_processing_size": null, "dataset_size": 511546025, "size_in_bytes": 703755945}}
dummy/task1_qa/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:651a8c0c1e7a5c3dbd78933e1abcab136cd7cb12d44528c9763a62748570713c
+ size 2409
dummy/task2_recs/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:99b4b1d34684fda2501de0daaef3941183dd44cf1d12c06ea9ec5a170bd23caf
+ size 2971
dummy/task3_qarecs/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa743ea722b5de64d4149ae0b92f2bef28b5bf699502d99d80fd88af1d1920c9
+ size 3664
dummy/task4_reddit/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e094a2c26e4c4ffeccd447e34f39dd3b2af56a49f0a5a520adb87eb8c14e8000
+ size 4989
mdd.py ADDED
@@ -0,0 +1,234 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Movie Dialog Dataset."""
+
+
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @misc{dodge2016evaluating,
+ title={Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems},
+ author={Jesse Dodge and Andreea Gane and Xiang Zhang and Antoine Bordes and Sumit Chopra and Alexander Miller and Arthur Szlam and Jason Weston},
+ year={2016},
+ eprint={1511.06931},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ """
+
+
+ _DESCRIPTION = """\
+ The Movie Dialog dataset (MDD) is designed to measure how well
+ models can perform at goal and non-goal orientated dialog
+ centered around the topic of movies (question answering,
+ recommendation and discussion).
+
+ """
+
+ _HOMEPAGE = "https://research.fb.com/downloads/babi/"
+
+ _LICENSE = """Creative Commons Attribution 3.0 License"""
+
+ ZIP_URL = "http://www.thespermwhale.com/jaseweston/babi/movie_dialog_dataset.tgz"
+ REDDIT_URL = "http://tinyurl.com/p6tyohj"
+ dir = "movie_dialog_dataset/"
50
+ dir2 = ""
51
+ paths = {
52
+ "task1_qa": {
53
+ "train": dir + "task1_qa/task1_qa_train.txt",
54
+ "dev": dir + "task1_qa/task1_qa_dev.txt",
55
+ "test": dir + "task1_qa/task1_qa_test.txt",
56
+ },
57
+ "task2_recs": {
58
+ "train": dir + "task2_recs/task2_recs_train.txt",
59
+ "dev": dir + "task2_recs/task2_recs_dev.txt",
60
+ "test": dir + "task2_recs/task2_recs_test.txt",
61
+ },
62
+ "task3_qarecs": {
63
+ "train": dir + "task3_qarecs/task3_qarecs_train.txt",
64
+ "dev": dir + "task3_qarecs/task3_qarecs_dev.txt",
65
+ "test": dir + "task3_qarecs/task3_qarecs_test.txt",
66
+ },
67
+ "task4_reddit": {
68
+ "train": "task4_reddit/task4_reddit_train.txt",
69
+ "dev": "task4_reddit/task4_reddit_dev.txt",
70
+ "test": "task4_reddit/task4_reddit_test.txt",
71
+ "cand_valid": "task4_reddit/task4_reddit_cand-valid.txt",
72
+ "cand_test": "task4_reddit/task4_reddit_cand-test.txt",
73
+ },
74
+ }
75
+
76
+
77
+ class Mdd(datasets.GeneratorBasedBuilder):
78
+ """The Movie Dialog Dataset"""
79
+
80
+ VERSION = datasets.Version("1.1.0")
81
+
82
+ BUILDER_CONFIGS = [
83
+ datasets.BuilderConfig(
84
+ name="task1_qa", version=VERSION, description="This part of my dataset covers task1_qa part of the dataset"
85
+ ),
86
+ datasets.BuilderConfig(
87
+ name="task2_recs",
88
+ version=VERSION,
89
+ description="This part of my dataset covers task2_recs part of the dataset",
90
+ ),
91
+ datasets.BuilderConfig(
92
+ name="task3_qarecs",
93
+ version=VERSION,
94
+ description="This part of my dataset covers task3_qarecs part of the dataset",
95
+ ),
96
+ datasets.BuilderConfig(
97
+ name="task4_reddit",
98
+ version=VERSION,
99
+ description="This part of my dataset covers task4_reddit part of the dataset",
100
+ ),
101
+ ]
102
+
103
+ def _info(self):
104
+ features = datasets.Features(
105
+ {
106
+ "dialogue_turns": datasets.Sequence(
107
+ {
108
+ "speaker": datasets.Value("int32"),
109
+ "utterance": datasets.Value("string"),
110
+ }
111
+ ),
112
+ }
113
+ )
114
+ return datasets.DatasetInfo(
115
+ # This is the description that will appear on the datasets page.
116
+ description=_DESCRIPTION,
117
+ # This defines the different columns of the dataset and their types
118
+ features=features, # Here we define them above because they are different between the two configurations
119
+ # If there's a common (input, target) tuple from the features,
120
+ # specify them here. They'll be used if as_supervised=True in
121
+ # builder.as_dataset.
122
+ supervised_keys=None,
123
+ # Homepage of the dataset for documentation
124
+ homepage=_HOMEPAGE,
125
+ # License for the dataset if available
126
+ license=_LICENSE,
127
+ # Citation for the dataset
128
+ citation=_CITATION,
129
+ )
130
+
131
+ def _split_generators(self, dl_manager):
132
+ """Returns SplitGenerators."""
133
+ if self.config.name != "task4_reddit":
134
+ my_urls = ZIP_URL # Cannot download just one single type as it is a compressed file.
135
+ else:
136
+ my_urls = REDDIT_URL
137
+ data_dir = dl_manager.download_and_extract(my_urls)
138
+ splits = [
139
+ datasets.SplitGenerator(
140
+ name=datasets.Split.TRAIN,
141
+ # These kwargs will be passed to _generate_examples
142
+ gen_kwargs={
143
+ "filepath": os.path.join(data_dir, paths[self.config.name]["train"]),
144
+ },
145
+ ),
146
+ datasets.SplitGenerator(
147
+ name=datasets.Split.TEST,
148
+ # These kwargs will be passed to _generate_examples
149
+ gen_kwargs={
150
+ "filepath": os.path.join(data_dir, paths[self.config.name]["test"]),
151
+ },
152
+ ),
153
+ datasets.SplitGenerator(
154
+ name=datasets.Split.VALIDATION,
155
+ # These kwargs will be passed to _generate_examples
156
+ gen_kwargs={
157
+ "filepath": os.path.join(data_dir, paths[self.config.name]["dev"]),
158
+ },
159
+ ),
160
+ ]
161
+ if self.config.name == "task4_reddit":
162
+ splits += [
163
+ datasets.SplitGenerator(
164
+ name=datasets.Split("cand_valid"),
165
+ # These kwargs will be passed to _generate_examples
166
+ gen_kwargs={
167
+ "filepath": os.path.join(data_dir, paths[self.config.name]["cand_valid"]),
168
+ },
169
+ ),
170
+ datasets.SplitGenerator(
171
+ name=datasets.Split("cand_test"),
172
+ # These kwargs will be passed to _generate_examples
173
+ gen_kwargs={
174
+ "filepath": os.path.join(data_dir, paths[self.config.name]["cand_test"]),
175
+ },
176
+ ),
177
+ ]
178
+ return splits
179
+
180
+ def _generate_examples(self, filepath):
181
+ if "cand" not in filepath:
182
+ with open(filepath, encoding="utf-8") as f:
183
+ dialogue_turns = []
184
+ example_idx = 0
185
+ for idx, line in enumerate(f):
186
+ if line.strip() == "":
187
+ if dialogue_turns != []:
188
+ yield example_idx, {"dialogue_turns": dialogue_turns}
189
+ example_idx += 1
190
+ dialogue_turns = []
191
+ elif line.strip().split()[0] == "1": # New convo
192
+ if dialogue_turns != []: # Already some convo, flush it out
193
+ yield example_idx, {"dialogue_turns": dialogue_turns}
194
+ example_idx += 1
195
+ dialogue_turns = []
196
+ exchange = line[len(line.split()[0]) :].strip().split("\t") # Skip the number in the front
197
+ sp1 = exchange[0]
198
+ sp2 = exchange[-1] # Might contain multiple tabs in between.
199
+ dialogue_turns.append({"speaker": 0, "utterance": sp1})
200
+ dialogue_turns.append({"speaker": 1, "utterance": sp2})
201
+ else:
202
+ exchange = line[len(line.split()[0]) :].strip().split("\t") # Skip the number in the front
203
+ sp1 = exchange[0]
204
+ sp2 = exchange[-1] # Might contain multiple tabs in between.
205
+ dialogue_turns.append({"speaker": 0, "utterance": sp1})
206
+ dialogue_turns.append({"speaker": 1, "utterance": sp2})
207
+ else:
208
+ if dialogue_turns != []:
209
+ yield example_idx, {"dialogue_turns": dialogue_turns}
210
+ else:
211
+ with open(filepath, encoding="utf-8") as f:
212
+ dialogue_turns = []
213
+ example_idx = 0
214
+ for idx, line in enumerate(f):
215
+ if line.strip() == "":
216
+ if dialogue_turns != []:
217
+ yield example_idx, {"dialogue_turns": dialogue_turns}
218
+ example_idx += 1
219
+ dialogue_turns = []
220
+ elif line.strip().split()[0] == "1": # New convo
221
+ if dialogue_turns != []: # Already some convo, flush it out
222
+ yield example_idx, {"dialogue_turns": dialogue_turns}
223
+ example_idx += 1
224
+ dialogue_turns = []
225
+ exchange = line[len(line.split()[0]) :].strip() # Skip the number in the front
226
+ sp1 = exchange
227
+ dialogue_turns.append({"speaker": 0, "utterance": sp1})
228
+ else:
229
+ exchange = line[len(line.split()[0]) :].strip() # Skip the number in the front
230
+ sp1 = exchange
231
+ dialogue_turns.append({"speaker": 0, "utterance": sp1})
232
+ else: # Last line, new example
233
+ if dialogue_turns != []:
234
+ yield example_idx, {"dialogue_turns": dialogue_turns}