system HF staff committed on
Commit
aa49e8f
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,188 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - gpl-3-0+
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - sequence-modeling
+ - text-retrieval
+ task_ids:
+   all:
+   - dialogue-modeling
+   - utterance-retrieval
+   happy:
+   - dialogue-modeling
+   - utterance-retrieval
+   offmychest:
+   - dialogue-modeling
+   - utterance-retrieval
+ ---
+
+ # Dataset Card for PEC
+
+ ## Table of Contents
+ - [Dataset Card for PEC](#dataset-card-for-pec)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Repository:** [PEC repository](https://github.com/zhongpeixiang/PEC)
+ - **Paper:** [Towards Persona-Based Empathetic Conversational Models](https://www.aclweb.org/anthology/2020.emnlp-main.531/)
+ - **Point of Contact:** [Peixiang Zhong](mailto:zhongpeixiang@gmail.com)
+
+ ### Dataset Summary
+
+ The PEC dataset is an English-language dataset of open-domain conversations gathered from two subreddits, *happy* and *offmychest*. PEC contains around 350K persona-based empathetic conversations. Each utterance is associated with a speaker, and each speaker has a persona consisting of multiple persona sentences. The conversations in PEC are more empathetic than casual conversations: those in the *happy* domain are mostly positive, whereas those in the *offmychest* domain are mostly negative.
+
+ ### Supported Tasks and Leaderboards
+
+ - `dialogue-modeling`, `utterance-retrieval`: this dataset can be used to train a generative or retrieval-based conversational model (see the usage sketch below).
+
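+ A minimal usage sketch, assuming the standard `datasets` loading API; the configuration names `happy`, `offmychest`, and `all` are the ones defined in this repository:
+
+ ```
+ from datasets import load_dataset
+
+ # Each configuration corresponds to one domain; "all" concatenates both.
+ dataset = load_dataset("pec", "happy")
+
+ # Each example has the fields documented under "Data Fields" below.
+ print(dataset["train"][0]["response"])
+ ```
+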
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A typical data example comprises a list of context utterances, a list of context speakers, a response to the context, the response speaker, and the persona of the response speaker.
+
+ An example from PEC looks as follows:
+ ```
+ {'context': ['found out this morning i got a job promotion ! ! !'],
+ 'context_speakers': ['HeWentToJared91'],
+ 'personas': [
+ "i ca n't stand working in the ugli .",
+ 'i ’ve always liked my eyes except for the fact that they ca n’t shoot lasers',
+ 'i feel really bad about myself as a person right now , and i could really use a hand .',
+ 'i drank a coffee , and it just made me feel even more exhausted .',
+ 'i want a natsuki t shirt',
+ "i 've dealt with depression in the past .",
+ 'i love red dead 2'],
+ 'response': "you look like a nice person ! we 're proud of you , and i bet you earned that promotion !",
+ 'response_speaker': 'tylock'}
+ ```
+
+ ### Data Fields
+
+ - `context`: a list of strings, each denoting a context utterance.
+ - `context_speakers`: a list of strings, each denoting the speaker of the corresponding context utterance.
+ - `response`: a string denoting the response to the `context`.
+ - `response_speaker`: a string denoting the speaker of `response`.
+ - `personas`: a list of strings, each denoting a persona sentence of `response_speaker`.
+
+ ### Data Splits
+
+ The data is split into training, validation and test sets for each of the three domains. Note that the *all* domain is the concatenation of the *happy* and *offmychest* domains.
+
+ | domain     | Train  | Valid | Test  |
+ | ---------- | ------ | ----- | ----- |
+ | happy      | 157195 | 19829 | 22730 |
+ | offmychest | 123968 | 16004 | 15324 |
+ | all        | 281163 | 35833 | 38054 |
+
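+ These counts can be checked programmatically; a short sketch, again assuming the standard `datasets` loading API:
+
+ ```
+ from datasets import load_dataset
+
+ ds = load_dataset("pec", "all")
+ # Expected: {'train': 281163, 'validation': 35833, 'test': 38054}
+ print({split: len(ds[split]) for split in ds})
+ ```
+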
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ PEC was built to provide a testbed for machines to learn persona-based empathetic responding. In our empirical analysis, we found that different personas have different styles of empathetic responding. This dataset can also be used to investigate the link between persona and empathy in human conversations. According to our human assessment, the conversations on the happy and offmychest subreddits are significantly more empathetic than casual conversations.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The data was obtained from the [pushshift API](https://pushshift.io/using-bigquery-with-reddit-data/) via Google BigQuery.
+
+ #### Who are the source language producers?
+
+ The language producers are users of the [r/happy](https://www.reddit.com/r/happy/) and [r/offmychest](https://www.reddit.com/r/offmychest/) subreddits between 2012 and 2020. No further demographic information was available from the data source.
+
+ ### Annotations
+
+ #### Annotation process
+
+ The dataset does not contain any additional annotations.
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ The dataset includes the speaker IDs of users on the *happy* and *offmychest* subreddits.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ The purpose of this dataset is to help develop more personalised and empathetic conversational systems, an important milestone towards truly human-like conversational agents.
+
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ A small portion of the dataset contains sexist, hateful, or harassing content. The persona sentences are noisy.
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The dataset was initially created by Peixiang Zhong, Chen Zhang, Hao Wang, Yong Liu, and Chunyan Miao, in work done jointly at Nanyang Technological University and Alibaba Group.
+
+ ### Licensing Information
+
+ The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data, which is unclear.
+
+ ### Citation Information
+
+ ```
+ @inproceedings{zhong-etal-2020-towards,
+     title = "Towards Persona-Based Empathetic Conversational Models",
+     author = "Zhong, Peixiang and
+       Zhang, Chen and
+       Wang, Hao and
+       Liu, Yong and
+       Miao, Chunyan",
+     booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
+     year = "2020",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.emnlp-main.531",
+     pages = "6556--6566"
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"happy": {"description": "A dataset of around 350K persona-based empathetic conversations. \nEach speaker is associated with a persona, which comprises multiple persona sentences. \nThe response of each conversation is empathetic.\n", "citation": "@inproceedings{zhong-etal-2020-towards,\n title = \"Towards Persona-Based Empathetic Conversational Models\",\n author = \"Zhong, Peixiang and\n Zhang, Chen and\n Wang, Hao and\n Liu, Yong and\n Miao, Chunyan\",\n booktitle = \"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)\",\n year = \"2020\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/2020.emnlp-main.531\",\n pages = \"6556--6566\"}\n", "homepage": "https://github.com/zhongpeixiang/PEC", "license": "", "features": {"personas": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "context": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "context_speakers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "response": {"dtype": "string", "id": null, "_type": "Value"}, "response_speaker": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "pec", "config_name": "happy", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 643196978, "num_examples": 157195, "dataset_name": "pec"}, "test": {"name": "test", "num_bytes": 92003042, "num_examples": 22730, "dataset_name": "pec"}, "validation": {"name": "validation", "num_bytes": 81132088, "num_examples": 19829, "dataset_name": "pec"}}, "download_checksums": {"https://dl.dropboxusercontent.com/s/u04fzuhsnxd0uvw/hf_pec.zip": {"num_bytes": 252434681, "checksum": "5daa1e0a1569a8927f045191ed1939fe25769860fd7d78dc414bf5583dab0bf1"}}, "download_size": 252434681, "post_processing_size": null, "dataset_size": 816332108, "size_in_bytes": 1068766789}, "offmychest": {"description": "A dataset of around 350K persona-based empathetic conversations. \nEach speaker is associated with a persona, which comprises multiple persona sentences. \nThe response of each conversation is empathetic.\n", "citation": "@inproceedings{zhong-etal-2020-towards,\n title = \"Towards Persona-Based Empathetic Conversational Models\",\n author = \"Zhong, Peixiang and\n Zhang, Chen and\n Wang, Hao and\n Liu, Yong and\n Miao, Chunyan\",\n booktitle = \"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)\",\n year = \"2020\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/2020.emnlp-main.531\",\n pages = \"6556--6566\"}\n", "homepage": "https://github.com/zhongpeixiang/PEC", "license": "", "features": {"personas": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "context": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "context_speakers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "response": {"dtype": "string", "id": null, "_type": "Value"}, "response_speaker": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "pec", "config_name": "offmychest", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 518616402, "num_examples": 123968, "dataset_name": "pec"}, "test": {"name": "test", "num_bytes": 64173390, "num_examples": 15324, "dataset_name": "pec"}, "validation": {"name": "validation", "num_bytes": 66675909, "num_examples": 16004, "dataset_name": "pec"}}, "download_checksums": {"https://dl.dropboxusercontent.com/s/u04fzuhsnxd0uvw/hf_pec.zip": {"num_bytes": 252434681, "checksum": "5daa1e0a1569a8927f045191ed1939fe25769860fd7d78dc414bf5583dab0bf1"}}, "download_size": 252434681, "post_processing_size": null, "dataset_size": 649465701, "size_in_bytes": 901900382}, "all": {"description": "A dataset of around 350K persona-based empathetic conversations. \nEach speaker is associated with a persona, which comprises multiple persona sentences. \nThe response of each conversation is empathetic.\n", "citation": "@inproceedings{zhong-etal-2020-towards,\n title = \"Towards Persona-Based Empathetic Conversational Models\",\n author = \"Zhong, Peixiang and\n Zhang, Chen and\n Wang, Hao and\n Liu, Yong and\n Miao, Chunyan\",\n booktitle = \"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)\",\n year = \"2020\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/2020.emnlp-main.531\",\n pages = \"6556--6566\"}\n", "homepage": "https://github.com/zhongpeixiang/PEC", "license": "", "features": {"personas": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "context": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "context_speakers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "response": {"dtype": "string", "id": null, "_type": "Value"}, "response_speaker": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "pec", "config_name": "all", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1162655628, "num_examples": 281163, "dataset_name": "pec"}, "test": {"name": "test", "num_bytes": 156310498, "num_examples": 38054, "dataset_name": "pec"}, "validation": {"name": "validation", "num_bytes": 147940164, "num_examples": 35833, "dataset_name": "pec"}}, "download_checksums": {"https://dl.dropboxusercontent.com/s/u04fzuhsnxd0uvw/hf_pec.zip": {"num_bytes": 252434681, "checksum": "5daa1e0a1569a8927f045191ed1939fe25769860fd7d78dc414bf5583dab0bf1"}}, "download_size": 252434681, "post_processing_size": null, "dataset_size": 1466906290, "size_in_bytes": 1719340971}}
dummy/all/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bcabe146ec6671bf6eba1745fba27adfa148ca09bc81aae3000f9820f5596ae6
+ size 10830
dummy/happy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bcabe146ec6671bf6eba1745fba27adfa148ca09bc81aae3000f9820f5596ae6
+ size 10830
dummy/offmychest/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bcabe146ec6671bf6eba1745fba27adfa148ca09bc81aae3000f9820f5596ae6
+ size 10830
pec.py ADDED
@@ -0,0 +1,176 @@
+ """PEC: a dataset of around 350K persona-based empathetic conversations from Reddit."""
+ from __future__ import absolute_import, division, print_function
+
+ import os
+
+ import datasets
+
+
+ # BibTeX citation for the PEC paper.
+ _CITATION = """\
+ @inproceedings{zhong2020towards,
+     title = "Towards Persona-Based Empathetic Conversational Models",
+     author = "Zhong, Peixiang and
+       Zhang, Chen and
+       Wang, Hao and
+       Liu, Yong and
+       Miao, Chunyan",
+     booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
+     year = "2020",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.emnlp-main.531",
+     pages = "6556--6566"}
+ """
+
+ _DESCRIPTION = """\
+ A dataset of around 350K persona-based empathetic conversations. Each speaker is associated with a persona, which comprises multiple persona sentences. The response of each conversation is empathetic.
+ """
+
+ _URL = "https://dl.dropboxusercontent.com/s/u04fzuhsnxd0uvw/hf_pec.zip"
+
+ # PEC has three configurations ("happy", "offmychest", and their union "all"),
+ # so a custom BuilderConfig class carries the extra `domain` attribute.
+
+
+ class PECConfig(datasets.BuilderConfig):
+     """BuilderConfig for PEC."""
+
+     def __init__(self, domain="all", **kwargs):
+         """
+         Args:
+             domain: the domain of the dataset: "happy", "offmychest", or "all".
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(PECConfig, self).__init__(version=datasets.Version("1.0.0", ""), **kwargs)
+         self.domain = domain
+
+
+ class PEC(datasets.GeneratorBasedBuilder):
+     """Persona-based Empathetic Conversations (PEC) dataset builder."""
+
+     VERSION = datasets.Version("1.0.0")
+     # One BuilderConfig per domain; "all" concatenates "happy" and "offmychest".
+     BUILDER_CONFIG_CLASS = PECConfig
+     BUILDER_CONFIGS = [
+         PECConfig(name=domain, description="A subset of PEC dataset: {}".format(domain), domain=domain)
+         for domain in ["happy", "offmychest", "all"]
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types.
+             features=datasets.Features(
+                 {
+                     "personas": datasets.features.Sequence(datasets.Value("string")),
+                     "context": datasets.features.Sequence(datasets.Value("string")),
+                     "context_speakers": datasets.features.Sequence(datasets.Value("string")),
+                     "response": datasets.Value("string"),
+                     "response_speaker": datasets.Value("string"),
+                 }
+             ),
+             # There is no canonical (input, target) tuple, so no supervised keys.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation.
+             homepage="https://github.com/zhongpeixiang/PEC",
+             citation=_CITATION,
+         )
+
+     def _load_persona(self, paths):
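+         # Format of persona.txt, as inferred from the parsing logic below
+         # (it is not documented in this script): blocks of
+         #     <speaker id>
+         #     <persona sentence 1>
+         #     ...
+         #     ********************
+         # i.e. a speaker line, that speaker's persona sentences, and a
+         # separator line of asterisks before the next speaker.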
+         persona = {}
+         is_speaker = True
+         sentences = []
+         for path in paths:
+             with open(path, encoding="utf-8") as f:
+                 for row in f:
+                     if "********************" not in row:
+                         if is_speaker:
+                             # First line of a block names the speaker.
+                             speaker = row.strip()
+                             is_speaker = False
+                         else:
+                             sentences.append(row.strip())
+                     else:
+                         # Separator line: store the finished persona and reset.
+                         persona[speaker] = sentences
+                         is_speaker = True
+                         sentences = []
+         return persona
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # dl_manager is a datasets.download.DownloadManager that can be used to
+         # download and extract URLs.
+         dl_dir = dl_manager.download_and_extract(_URL)
+         data_dir = os.path.join(dl_dir, "hf_pec")
+         domains = ["happy", "offmychest"] if self.config.domain == "all" else [self.config.domain]  # multiple domains
+         persona_paths = [os.path.join(data_dir, domain, "persona.txt") for domain in domains]
+         persona = self._load_persona(persona_paths)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": [os.path.join(data_dir, domain, "train.txt") for domain in domains],
+                     "split": "train",
+                     "persona": persona,
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": [os.path.join(data_dir, domain, "test.txt") for domain in domains],
+                     "split": "test",
+                     "persona": persona,
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepath": [os.path.join(data_dir, domain, "valid.txt") for domain in domains],
+                     "split": "dev",
+                     "persona": persona,
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split, persona):
+         """Yields (key, example) tuples from the conversation files."""
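+         # Format of the conversation files, as inferred from the parsing logic
+         # below (it is not documented in this script): each conversation is a
+         # block of
+         #     <speaker>---+---<utterance>
+         # lines terminated by a separator line of asterisks; an utterance may
+         # spill over several physical lines. The block's last utterance is
+         # treated as the response, and the preceding ones as the context.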
+         context_speakers = []
+         context = []
+         example_id = 0
+         for fpath in filepath:
+             with open(fpath, encoding="utf-8") as f:
+                 for id_, row in enumerate(f):
+                     if row.strip() == "":
+                         continue
+                     if "********************" not in row:
+                         if "---+---" in row:
+                             speaker, utterance = row.split("---+---")
+                             context_speakers.append(speaker.strip())
+                             context.append(utterance.strip())
+                         else:
+                             # The utterance contains an inline newline; append the
+                             # continuation to the previous utterance.
+                             context[-1] = context[-1] + " " + row.strip()
+                     else:
+                         # End of a conversation: the last utterance is the response.
+                         response_speaker = context_speakers.pop()
+                         response = context.pop()
+                         yield example_id, {
+                             "personas": persona[response_speaker],
+                             "context_speakers": context_speakers,
+                             "context": context,
+                             "response_speaker": response_speaker,
+                             "response": response,
+                         }
+                         context_speakers = []
+                         context = []
+                         example_id += 1