parquet-converter committed
Commit c6a4a1c
1 Parent(s): 0162890

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
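The removed `.gitattributes` rules tell git-LFS which glob patterns to store as pointer files rather than in-tree blobs. A minimal sketch of how such a rule set matches a path's basename (`is_lfs_tracked` is a hypothetical helper for illustration, not part of this repo):

```python
from fnmatch import fnmatch

# A few of the removed patterns above (not the full list).
lfs_patterns = ["*.parquet", "*.tar.*", "*tfevents*"]

def is_lfs_tracked(path, patterns=lfs_patterns):
    # gitattributes patterns without a '/' match against the basename.
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, p) for p in patterns)

print(is_lfs_tracked("default/blended_skill_talk-train.parquet"))  # True
print(is_lfs_tracked("README.md"))                                 # False
```

Note this sketch ignores gitattributes subtleties such as negation and directory-scoped patterns; it only illustrates the glob matching.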
README.md DELETED
@@ -1,230 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- language_creators:
- - crowdsourced
- language:
- - en
- license:
- - unknown
- multilinguality:
- - monolingual
- pretty_name: BlendedSkillTalk
- size_categories:
- - 1K<n<10K
- source_datasets:
- - original
- task_categories:
- - conversational
- task_ids:
- - dialogue-generation
- paperswithcode_id: blended-skill-talk
- dataset_info:
-   features:
-   - name: personas
-     sequence: string
-   - name: additional_context
-     dtype: string
-   - name: previous_utterance
-     sequence: string
-   - name: context
-     dtype: string
-   - name: free_messages
-     sequence: string
-   - name: guided_messages
-     sequence: string
-   - name: suggestions
-     sequence:
-     - name: convai2
-       dtype: string
-     - name: empathetic_dialogues
-       dtype: string
-     - name: wizard_of_wikipedia
-       dtype: string
-   - name: guided_chosen_suggestions
-     sequence: string
-   - name: label_candidates
-     sequence:
-       sequence: string
-   splits:
-   - name: train
-     num_bytes: 10831361
-     num_examples: 4819
-   - name: validation
-     num_bytes: 43961658
-     num_examples: 1009
-   - name: test
-     num_bytes: 44450102
-     num_examples: 980
-   download_size: 38101408
-   dataset_size: 99243121
- ---
-
- # Dataset Card for "blended_skill_talk"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://parl.ai/projects/bst/](https://parl.ai/projects/bst/)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [Can You Put it All Together: Evaluating Conversational Agents' Ability to Blend Skills](https://arxiv.org/abs/2004.08449v1)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 36.34 MB
- - **Size of the generated dataset:** 14.38 MB
- - **Total amount of disk used:** 50.71 MB
-
- ### Dataset Summary
-
- A dataset of 7k conversations explicitly designed to exhibit multiple conversation modes: displaying personality, having empathy, and demonstrating knowledge.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### default
-
- - **Size of downloaded dataset files:** 36.34 MB
- - **Size of the generated dataset:** 14.38 MB
- - **Total amount of disk used:** 50.71 MB
-
- An example of 'train' looks as follows.
- ```
- {
-     'personas': ['my parents don t really speak english , but i speak italian and english.', 'i have three children.'],
-     'additional_context': 'Backstreet Boys',
-     'previous_utterance': ['Oh, I am a BIG fan of the Backstreet Boys! Have you ever seen them performing live?', "No,I listen to their music a lot, mainly the unbreakable which is the Backstreet Boys' sixth studio album. "],
-     'context': 'wizard_of_wikipedia',
-     'free_messages': ['you are very knowledgeable, do you prefer nsync or bsb?', "haha kids of this days don't know them, i'm 46 and i still enjoying them, my kids only listen k-pop", "italian?haha that's strange, i only talk english and a little spanish "],
-     'guided_messages': ["i don't have a preference, they are both great. All 3 of my kids get annoyed when I listen to them though.", 'Sometimes I sing their songs in Italian, that really annoys them lol.', 'My parents barely speak English, so I was taught both. By the way, what is k-pop?'],
-     'suggestions': {'convai2': ["i don't have a preference , both are pretty . do you have any hobbies ?", "do they the backstreet boys ? that's my favorite group .", 'are your kids interested in music ?'], 'empathetic_dialogues': ['I actually just discovered Imagine Dragons. I love them!', "Hahaha that just goes to show ya, age is just a umber!'", 'That would be hard! Do you now Spanish well?'], 'wizard_of_wikipedia': ['NSYNC Also had Lance Bass and Joey Fatone, sometimes called the Fat One.', 'Yes, there are a few K-Pop songs that I have heard good big in the USA. It is the most popular in South Korea and has Western elements of pop.', 'English, beleive it or not.']},
-     'guided_chosen_suggestions': ['convai2', '', ''],
-     'label_candidates': []}
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### default
- - `personas`: a `list` of `string` features.
- - `additional_context`: a `string` feature.
- - `previous_utterance`: a `list` of `string` features.
- - `context`: a `string` feature.
- - `free_messages`: a `list` of `string` features.
- - `guided_messages`: a `list` of `string` features.
- - `suggestions`: a dictionary feature containing:
-   - `convai2`: a `string` feature.
-   - `empathetic_dialogues`: a `string` feature.
-   - `wizard_of_wikipedia`: a `string` feature.
- - `guided_chosen_suggestions`: a `list` of `string` features.
- - `label_candidates`: a `list` of `lists` of `string` features.
-
- ### Data Splits
-
- | name  |train|validation|test|
- |-------|----:|---------:|---:|
- |default| 4819|      1009| 980|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Citation Information
-
- ```
- @misc{smith2020evaluating,
-     title={Can You Put it All Together: Evaluating Conversational Agents' Ability to Blend Skills},
-     author={Eric Michael Smith and Mary Williamson and Kurt Shuster and Jason Weston and Y-Lan Boureau},
-     year={2020},
-     eprint={2004.08449},
-     archivePrefix={arXiv},
-     primaryClass={cs.CL}
- }
-
- ```
-
- ### Contributions
-
- Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
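As a quick sanity check on the card's numbers (split sizes copied from the Data Splits table in the deleted README above), the three splits sum to roughly the "7k conversations" quoted in the summary:

```python
# Split sizes as stated in the dataset card's Data Splits table.
splits = {"train": 4819, "validation": 1009, "test": 980}
total = sum(splits.values())
print(total)  # 6808 conversations, i.e. the "7k" quoted in the summary
```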
blended_skill_talk.py DELETED
@@ -1,146 +0,0 @@
- """TODO(blended_skill_talk): Add a description here."""
-
-
- import json
-
- import datasets
-
-
- # TODO(blended_skill_talk): BibTeX citation
- _CITATION = """\
- @misc{smith2020evaluating,
-     title={Can You Put it All Together: Evaluating Conversational Agents' Ability to Blend Skills},
-     author={Eric Michael Smith and Mary Williamson and Kurt Shuster and Jason Weston and Y-Lan Boureau},
-     year={2020},
-     eprint={2004.08449},
-     archivePrefix={arXiv},
-     primaryClass={cs.CL}
- }
- """
-
- # TODO(blended_skill_talk):
- _DESCRIPTION = """\
- A dataset of 7k conversations explicitly designed to exhibit multiple conversation modes: displaying personality, having empathy, and demonstrating knowledge.
- """
- _URL = "http://parl.ai/downloads/blended_skill_talk/blended_skill_talk.tar.gz"
-
- _TASK = ["convai2", "empathetic_dialogues", "wizard_of_wikipedia"]
-
-
- class BlendedSkillTalk(datasets.GeneratorBasedBuilder):
-     """TODO(blended_skill_talk): Short description of my dataset."""
-
-     # TODO(blended_skill_talk): Set up version.
-     VERSION = datasets.Version("1.0.0")
-
-     def _info(self):
-         # TODO(blended_skill_talk): Specifies the datasets.DatasetInfo object
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # datasets.features.FeatureConnectors
-             features=datasets.Features(
-                 {
-                     "personas": datasets.features.Sequence(datasets.Value("string")),
-                     "additional_context": datasets.Value("string"),
-                     "previous_utterance": datasets.features.Sequence(datasets.Value("string")),
-                     "context": datasets.Value("string"),
-                     "free_messages": datasets.features.Sequence(datasets.Value("string")),
-                     "guided_messages": datasets.features.Sequence(datasets.Value("string")),
-                     "suggestions": datasets.features.Sequence({task: datasets.Value("string") for task in _TASK}),
-                     "guided_chosen_suggestions": datasets.features.Sequence(datasets.Value("string")),
-                     "label_candidates": datasets.features.Sequence(
-                         datasets.features.Sequence(datasets.Value("string"))
-                     ),
-                     # These are the features of your dataset like images, labels ...
-                 }
-             ),
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage="https://parl.ai/projects/bst/",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # TODO(blended_skill_talk): Downloads the data and defines the splits
-         # dl_manager is a datasets.download.DownloadManager that can be used to
-         # download and extract URLs
-         archive = dl_manager.download(_URL)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": "train.json",
-                     "files": dl_manager.iter_archive(archive),
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": "valid.json",
-                     "files": dl_manager.iter_archive(archive),
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": "test.json",
-                     "files": dl_manager.iter_archive(archive),
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, filepath, files):
-         """Yields examples."""
-         # TODO(blended_skill_talk): Yields (key, example) tuples from the dataset
-         for path, f in files:
-             if path == filepath:
-                 data = json.load(f)
-                 for id_, row in enumerate(data):
-                     personas = [row["personas"][1][0], row["personas"][1][1]]
-                     dialogs = [dialog[1] for dialog in row["dialog"]]
-                     free_messages = []
-                     guided_messages = []
-
-                     for i in range(len(dialogs) // 2):
-                         free_messages.append(dialogs[2 * i])
-                         guided_messages.append(dialogs[2 * i + 1])
-                     context = row["context_dataset"]
-                     add_context = row["additional_context"] if context == "wizard_of_wikipedia" else ""
-                     previous_utterance = [row["free_turker_utterance"], row["guided_turker_utterance"]]
-                     suggestions = row["suggestions"]
-                     convai_suggestions = []
-                     empathetic_suggestions = []
-                     wow_suggestions = []
-                     for i in range(len(suggestions) // 2):
-                         convai_suggestions.append(suggestions[2 * i + 1]["convai2"])
-                         empathetic_suggestions.append(suggestions[2 * i + 1]["empathetic_dialogues"])
-                         wow_suggestions.append(suggestions[2 * i + 1]["wizard_of_wikipedia"])
-                     chosen_suggestions = row["chosen_suggestions"]
-                     guided_chosen_suggestions = []
-                     for i in range(len(chosen_suggestions) // 2):
-                         guided_chosen_suggestions.append(chosen_suggestions[2 * i + 1])
-                     label_candidates = row["label_candidates"] if "label_candidates" in row else []
-                     yield id_, {
-                         "personas": personas,
-                         "additional_context": add_context,
-                         "previous_utterance": previous_utterance,
-                         "context": context,
-                         "free_messages": free_messages,
-                         "guided_messages": guided_messages,
-                         "suggestions": {
-                             "convai2": convai_suggestions,
-                             "empathetic_dialogues": empathetic_suggestions,
-                             "wizard_of_wikipedia": wow_suggestions,
-                         },
-                         "guided_chosen_suggestions": guided_chosen_suggestions,
-                         "label_candidates": label_candidates,
-                     }
-                 break
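The turn-splitting loop in `_generate_examples` above relies on dialog turns alternating between the "free" worker and the "guided" worker: even indices become `free_messages`, odd indices become `guided_messages`. A standalone sketch of that loop (toy turns, not real data):

```python
# Dialog turns alternate free-worker / guided-worker.
dialogs = ["hi there", "hello!", "how are you?", "fine, thanks"]

free_messages, guided_messages = [], []
for i in range(len(dialogs) // 2):
    free_messages.append(dialogs[2 * i])       # even index: free worker
    guided_messages.append(dialogs[2 * i + 1])  # odd index: guided worker

print(free_messages)    # ['hi there', 'how are you?']
print(guided_messages)  # ['hello!', 'fine, thanks']
```

The same even/odd convention explains why the suggestion loops index `suggestions[2 * i + 1]`: suggestions are only recorded for the guided worker's turns.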
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "A dataset of 7k conversations explicitly designed to exhibit multiple conversation modes: displaying personality, having empathy, and demonstrating knowledge.\n", "citation": "@misc{smith2020evaluating,\n title={Can You Put it All Together: Evaluating Conversational Agents' Ability to Blend Skills},\n author={Eric Michael Smith and Mary Williamson and Kurt Shuster and Jason Weston and Y-Lan Boureau},\n year={2020},\n eprint={2004.08449},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://parl.ai/projects/bst/", "license": "", "features": {"personas": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "additional_context": {"dtype": "string", "id": null, "_type": "Value"}, "previous_utterance": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "free_messages": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "guided_messages": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "suggestions": {"feature": {"convai2": {"dtype": "string", "id": null, "_type": "Value"}, "empathetic_dialogues": {"dtype": "string", "id": null, "_type": "Value"}, "wizard_of_wikipedia": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "guided_chosen_suggestions": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "label_candidates": {"feature": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "blended_skill_talk", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10831361, "num_examples": 4819, "dataset_name": "blended_skill_talk"}, "validation": {"name": "validation", "num_bytes": 43961658, "num_examples": 1009, "dataset_name": "blended_skill_talk"}, "test": {"name": "test", "num_bytes": 44450102, "num_examples": 980, "dataset_name": "blended_skill_talk"}}, "download_checksums": {"http://parl.ai/downloads/blended_skill_talk/blended_skill_talk.tar.gz": {"num_bytes": 38101408, "checksum": "5fbed0068ee89e2d43b93c3ecb341e784617033efa5e8e911a219d4eda6134a6"}}, "download_size": 38101408, "post_processing_size": null, "dataset_size": 99243121, "size_in_bytes": 137344529}}
default/blended_skill_talk-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:489a36c661a9967628a1004eeba1fa1a222634daac3e7084a8d32d1b3b5aeabd
+ size 2402775
default/blended_skill_talk-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1aad7460938a5028e48d4f32ddb98345f9f1334ee4db4ea33463e0960fc648b4
+ size 5876072
default/blended_skill_talk-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:abe885b5c159433fdc359649d9189fc2f43e732cccd1bd7bf36a9b775966a9f6
+ size 2618794
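The three ADDED `.parquet` entries above are git-LFS pointer files, not the parquet data itself: each records the spec version, the sha256 `oid` of the real blob, and its `size` in bytes. A minimal sketch of parsing that key-value format (the oid and size are copied from the test-split pointer above):

```python
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:489a36c661a9967628a1004eeba1fa1a222634daac3e7084a8d32d1b3b5aeabd
size 2402775"""

# Each pointer line is "<key> <value>"; split on the first space only.
meta = dict(line.split(" ", 1) for line in pointer.splitlines())
print(meta["size"])  # 2402775 (bytes of the real parquet blob)
```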