system HF Staff committed on
Commit 5e666e8
1 Parent(s): aae87bb

Update files from the datasets library (from 1.2.1)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.1

Files changed (1):
  1. README.md +161 -33
README.md CHANGED
@@ -15,8 +15,12 @@ source_datasets:
 - original
 task_categories:
 - other
 task_ids:
- - other-other-Coached Conversation Preference
 ---
 
 # Dataset Card for Coached Conversational Preference Elicitation
@@ -46,55 +50,168 @@ task_ids:
 
 ## Dataset Description
 
- - **Homepage:** [Google Research](https://research.google/tools/datasets/coached-conversational-preference-elicitation/)
- - **Repository:**
 - **Paper:** [Aclweb](https://www.aclweb.org/anthology/W19-5941/)
- - **Leaderboard:**
- - **Point of Contact:**
 
 ### Dataset Summary
 
- [More Information Needed]
 
 ### Supported Tasks and Leaderboards
 
- [More Information Needed]
 
 ### Languages
 
- [More Information Needed]
 
 ## Dataset Structure
 
 ### Data Instances
 
- [More Information Needed]
 
 ### Data Fields
 
 Each conversation has the following fields:
 
- * conversationId: A unique random ID for the conversation. The ID has no meaning.
- * utterances: An array of utterances by the workers.
 
 Each utterance has the following fields:
 
- * index: A 0-based index indicating the order of the utterances in the conversation.
- * speaker: Either USER or ASSISTANT, indicating which role generated this utterance.
- * text: The raw text as written by the ASSISTANT, or transcribed from the spoken recording of USER.
- * segments: An array of semantic annotations of spans in the text.
 
 Each semantic annotation segment has the following fields:
 
- * startIndex: The position of the start of the annotation in the utterance text.
- * endIndex: The position of the end of the annotation in the utterance text.
- * text: The raw text that has been annotated.
- * annotations: An array of annotation details for this segment.
 
 Each annotation has two fields:
 
- * annotationType: The class of annotation (see ontology below).
- * entityType: The class of the entity to which the text refers (see ontology below).
 
 **EXPLANATION OF ONTOLOGY**
 
@@ -102,22 +219,25 @@ In the corpus, preferences and the entities that these preferences refer to are
 
 Annotation types fall into four categories:
 
- * ENTITY_NAME: These mark the names of relevant entities mentioned.
- * ENTITY_PREFERENCE: These are defined as statements indicating that the dialog participant does or does not like the relevant entity in general, or that they do or do not like some aspect of the entity. This may also be thought of the participant having some sentiment about what is being discussed.
- * ENTITY_DESCRIPTION: Neutral descriptions that describe an entity but do not convey an explicit liking or disliking.
- * ENTITY_OTHER: Other relevant statements about an entity that convey relevant information of how the participant relates to the entity but do not provide a sentiment. Most often, these relate to whether a participant has seen a particular movie, or knows a lot about a given entity.
 
 Entity types are marked as belonging to one of four categories:
 
- * MOVIE_GENRE_OR_CATEGORY for genres or general descriptions that capture a particular type or style of movie.
- * MOVIE_OR_SERIES for the full or partial name of a movie or series of movies.
- * PERSON for the full or partial name of an actual person.
- * SOMETHING_ELSE for other important proper nouns, such as the names of characters or locations.
-
 
 ### Data Splits
 
- [More Information Needed]
 
 ## Dataset Creation
 
@@ -171,8 +291,16 @@ Entity types are marked as belonging to one of four categories:
 
 ### Licensing Information
 
- [More Information Needed]
 
 ### Citation Information
 
- [More Information Needed]
 - original
 task_categories:
 - other
+ - sequence-modeling
+ - structure-prediction
 task_ids:
+ - other-other-Conversational Recommendation
+ - dialogue-modeling
+ - parsing
 ---
 
 # Dataset Card for Coached Conversational Preference Elicitation
 
 
 ## Dataset Description
 
+ - **Homepage:** [Coached Conversational Preference Elicitation Homepage](https://research.google/tools/datasets/coached-conversational-preference-elicitation/)
+ - **Repository:** [Coached Conversational Preference Elicitation Repository](https://github.com/google-research-datasets/ccpe)
 - **Paper:** [Aclweb](https://www.aclweb.org/anthology/W19-5941/)
 
 ### Dataset Summary
 
+ A dataset consisting of 502 English dialogs with 12,000 annotated utterances between a user and an assistant discussing movie preferences in natural language. It was collected using a Wizard-of-Oz methodology between two paid crowd-workers, where one worker plays the role of an 'assistant' while the other plays the role of a 'user'. The 'assistant' elicits the 'user's' preferences about movies following a Coached Conversational Preference Elicitation (CCPE) method: it asks questions designed to minimize, as much as possible, the bias in the terminology the 'user' employs to convey their preferences, and to obtain these preferences in natural language. Each dialog is annotated with entity mentions, preferences expressed about entities, descriptions of entities provided, and other statements about entities.
 
 ### Supported Tasks and Leaderboards
 
+ * `other-other-Conversational Recommendation`: The dataset can be used to train a model for conversational recommendation based on coached conversational preference elicitation.
 
 ### Languages
 
+ The text in the dataset is in English. The associated BCP-47 code is `en`.
 
 ## Dataset Structure
 
 ### Data Instances
 
+ A typical data point comprises a series of utterances between the 'assistant' and the 'user'. Each utterance is annotated with the categories described in the Data Fields section.
+ 
+ An example from the Coached Conversational Preference Elicitation dataset looks as follows:
+ 
+ ```
+ {'conversationId': 'CCPE-6faee',
+  'utterances': {'index': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
+   'segments': [{'annotations': [{'annotationType': [], 'entityType': []}], 'endIndex': [0], 'startIndex': [0], 'text': ['']},
+    {'annotations': [{'annotationType': [0], 'entityType': [0]}, {'annotationType': [1], 'entityType': [0]}], 'endIndex': [20, 27], 'startIndex': [14, 0], 'text': ['comedy', 'I really like comedy movies']},
+    {'annotations': [{'annotationType': [0], 'entityType': [0]}], 'endIndex': [24], 'startIndex': [16], 'text': ['comedies']},
+    {'annotations': [{'annotationType': [1], 'entityType': [0]}], 'endIndex': [15], 'startIndex': [0], 'text': ['I love to laugh']},
+    {'annotations': [{'annotationType': [], 'entityType': []}], 'endIndex': [0], 'startIndex': [0], 'text': ['']},
+    {'annotations': [{'annotationType': [0], 'entityType': [1]}, {'annotationType': [1], 'entityType': [1]}], 'endIndex': [21, 21], 'startIndex': [8, 0], 'text': ['Step Brothers', 'I liked Step Brothers']},
+    {'annotations': [{'annotationType': [], 'entityType': []}], 'endIndex': [0], 'startIndex': [0], 'text': ['']},
+    {'annotations': [{'annotationType': [1], 'entityType': [1]}], 'endIndex': [32], 'startIndex': [0], 'text': ['Had some amazing one-liners that']},
+    {'annotations': [{'annotationType': [], 'entityType': []}], 'endIndex': [0], 'startIndex': [0], 'text': ['']},
+    {'annotations': [{'annotationType': [0], 'entityType': [1]}, {'annotationType': [1], 'entityType': [1]}], 'endIndex': [15, 15], 'startIndex': [13, 0], 'text': ['RV', "I don't like RV"]},
+    {'annotations': [{'annotationType': [], 'entityType': []}], 'endIndex': [0], 'startIndex': [0], 'text': ['']},
+    {'annotations': [{'annotationType': [1], 'entityType': [1]}, {'annotationType': [1], 'entityType': [1]}], 'endIndex': [48, 66], 'startIndex': [18, 50], 'text': ['It was just so slow and boring', "I didn't like it"]},
+    {'annotations': [{'annotationType': [0], 'entityType': [1]}], 'endIndex': [63], 'startIndex': [33], 'text': ['Jurassic World: Fallen Kingdom']},
+    {'annotations': [{'annotationType': [0], 'entityType': [1]}, {'annotationType': [3], 'entityType': [1]}], 'endIndex': [52, 52], 'startIndex': [22, 0], 'text': ['Jurassic World: Fallen Kingdom', 'I have seen the movie Jurassic World: Fallen Kingdom']},
+    {'annotations': [{'annotationType': [], 'entityType': []}], 'endIndex': [0], 'startIndex': [0], 'text': ['']},
+    {'annotations': [{'annotationType': [1], 'entityType': [1]}, {'annotationType': [1], 'entityType': [1]}, {'annotationType': [1], 'entityType': [1]}], 'endIndex': [24, 125, 161], 'startIndex': [0, 95, 135], 'text': ['I really like the actors', 'I just really like the scenery', 'the dinosaurs were awesome']}],
+   'speaker': [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
+   'text': ['What kinds of movies do you like?',
+    'I really like comedy movies.',
+    'Why do you like comedies?',
+    "I love to laugh and comedy movies, that's their whole purpose. Make you laugh.",
+    'Alright, how about a movie you liked?',
+    'I liked Step Brothers.',
+    'Why did you like that movie?',
+    'Had some amazing one-liners that still get used today even though the movie was made awhile ago.',
+    'Well, is there a movie you did not like?',
+    "I don't like RV.",
+    'Why not?',
+    "And I just didn't It was just so slow and boring. I didn't like it.",
+    'Ok, then have you seen the movie Jurassic World: Fallen Kingdom',
+    'I have seen the movie Jurassic World: Fallen Kingdom.',
+    'What is it about these kinds of movies that you like or dislike?',
+    'I really like the actors. I feel like they were doing their best to make the movie better. And I just really like the scenery, and the the dinosaurs were awesome.']}}
+ ```
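The `startIndex`/`endIndex` pairs in a segment are plain character offsets into the utterance text, so Python slicing recovers the annotated spans. A minimal sketch, using the second utterance and its segment dict copied from the example record above:

```python
# Recover annotated spans by slicing the utterance text with the
# startIndex/endIndex character offsets from the example record above.
utterance_text = "I really like comedy movies."
segment = {
    "startIndex": [14, 0],
    "endIndex": [20, 27],
    "text": ["comedy", "I really like comedy movies"],
}

spans = [
    utterance_text[start:end]
    for start, end in zip(segment["startIndex"], segment["endIndex"])
]
print(spans)  # ['comedy', 'I really like comedy movies']
```

The slices match the segment's own `text` field, which makes this a convenient consistency check when preprocessing the corpus.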
 
 ### Data Fields
 
 Each conversation has the following fields:
 
+ * `conversationId`: A unique random ID for the conversation. The ID has no meaning.
+ * `utterances`: An array of utterances by the workers.
 
 Each utterance has the following fields:
 
+ * `index`: A 0-based index indicating the order of the utterances in the conversation.
+ * `speaker`: Either USER or ASSISTANT, indicating which role generated this utterance.
+ * `text`: The raw text as written by the ASSISTANT, or transcribed from the spoken recording of USER.
+ * `segments`: An array of semantic annotations of spans in the text.
 
 Each semantic annotation segment has the following fields:
 
+ * `startIndex`: The position of the start of the annotation in the utterance text.
+ * `endIndex`: The position of the end of the annotation in the utterance text.
+ * `text`: The raw text that has been annotated.
+ * `annotations`: An array of annotation details for this segment.
 
 Each annotation has two fields:
 
+ * `annotationType`: The class of annotation (see ontology below).
+ * `entityType`: The class of the entity to which the text refers (see ontology below).
 
 **EXPLANATION OF ONTOLOGY**
 
 Annotation types fall into four categories:
 
+ * `ENTITY_NAME` (0): These mark the names of relevant entities mentioned.
+ * `ENTITY_PREFERENCE` (1): These are defined as statements indicating that the dialog participant does or does not like the relevant entity in general, or that they do or do not like some aspect of the entity. This may also be thought of as the participant having some sentiment about what is being discussed.
+ * `ENTITY_DESCRIPTION` (2): Neutral descriptions that describe an entity but do not convey an explicit liking or disliking.
+ * `ENTITY_OTHER` (3): Other relevant statements about an entity that convey relevant information about how the participant relates to the entity but do not provide a sentiment. Most often, these relate to whether a participant has seen a particular movie, or knows a lot about a given entity.
 
 Entity types are marked as belonging to one of four categories:
 
+ * `MOVIE_GENRE_OR_CATEGORY` (0): For genres or general descriptions that capture a particular type or style of movie.
+ * `MOVIE_OR_SERIES` (1): For the full or partial name of a movie or series of movies.
+ * `PERSON` (2): For the full or partial name of an actual person.
+ * `SOMETHING_ELSE` (3): For other important proper nouns, such as the names of characters or locations.
 
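Since the stored `annotationType` and `entityType` values are the integer codes listed above, a plain lookup table is enough to decode them back to label names. A small sketch, with the label order following the (0)–(3) numbering in this section:

```python
# Decode integer annotation codes into the label names from the ontology above.
ANNOTATION_TYPES = ["ENTITY_NAME", "ENTITY_PREFERENCE",
                    "ENTITY_DESCRIPTION", "ENTITY_OTHER"]
ENTITY_TYPES = ["MOVIE_GENRE_OR_CATEGORY", "MOVIE_OR_SERIES",
                "PERSON", "SOMETHING_ELSE"]

# One annotation taken from the example record
# ("I have seen the movie Jurassic World: Fallen Kingdom").
annotation = {"annotationType": [3], "entityType": [1]}

decoded = [
    (ANNOTATION_TYPES[a], ENTITY_TYPES[e])
    for a, e in zip(annotation["annotationType"], annotation["entityType"])
]
print(decoded)  # [('ENTITY_OTHER', 'MOVIE_OR_SERIES')]
```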
 ### Data Splits
 
+ There is a single split of the dataset, named `train`, which contains the whole dataset.
+ 
+ |                     | Train |
+ | ------------------- | ----- |
+ | Input Conversations | 502   |
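Each record in the single `train` split can be rendered back into a readable dialog. In the example record above the opening question comes from the assistant, so this sketch assumes `speaker` value 1 maps to ASSISTANT and 0 to USER; that mapping is inferred from the example, not stated explicitly by the card:

```python
# Rebuild a transcript from the parallel 'speaker' and 'text' arrays.
# The 1 -> ASSISTANT / 0 -> USER mapping is an assumption inferred from
# the example record, where the assistant asks the opening question.
SPEAKERS = {0: "USER", 1: "ASSISTANT"}

utterances = {  # first three utterances of the example record
    "speaker": [1, 0, 1],
    "text": [
        "What kinds of movies do you like?",
        "I really like comedy movies.",
        "Why do you like comedies?",
    ],
}

transcript = [
    f"{SPEAKERS[s]}: {t}"
    for s, t in zip(utterances["speaker"], utterances["text"])
]
print("\n".join(transcript))
```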
 
 ## Dataset Creation
 
 ### Licensing Information
 
+ [Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/)
 
 ### Citation Information
 
+ ```
+ @inproceedings{radlinski-etal-2019-ccpe,
+   title = {Coached Conversational Preference Elicitation: A Case Study in Understanding Movie Preferences},
+   author = {Filip Radlinski and Krisztian Balog and Bill Byrne and Karthik Krishnamoorthi},
+   booktitle = {Proceedings of the Annual Meeting of the Special Interest Group on Discourse and Dialogue ({SIGDIAL})},
+   year = 2019
+ }
+ ```