parquet-converter committed
Commit b270533
1 parent: be2fcc0

Update parquet files
.gitattributes CHANGED
@@ -2,3 +2,5 @@ personachat_truecased_full_train.json filter=lfs diff=lfs merge=lfs -text
  personachat_truecased_full_valid.json filter=lfs diff=lfs merge=lfs -text
  personachat_truecased_sample_train.json filter=lfs diff=lfs merge=lfs -text
  personachat_truecased_sample_valid.json filter=lfs diff=lfs merge=lfs -text
+ full/personachat_truecased-train.parquet filter=lfs diff=lfs merge=lfs -text
+ full/personachat_truecased-validation.parquet filter=lfs diff=lfs merge=lfs -text
.gitignore DELETED
@@ -1,2 +0,0 @@
- venv
- .idea
README.md DELETED
@@ -1,65 +0,0 @@
- # A More Natural PersonaChat
-
- ## Dataset Summary
-
- This dataset is a true-cased version of the PersonaChat dataset by Zhang et al. (2018).
- The original PersonaChat dataset is all lower case, and has extra space around each
- clause/sentence separating punctuation mark. This version of the dataset has more of a
- natural language look, with sentence capitalization, proper noun capitalization, and
- normalized whitespace. Also, each dialogue turn includes a pool of distractor
- candidate responses, which can be used by a multiple choice regularization loss during
- training.
-
- As an example, here is an utterance from the original PersonaChat dataset:
-
- ```
- "i really like celine dion . what about you ?"
- ```
-
- In this dataset, that example is:
-
- ```
- "I really like Celine Dion. What about you?"
- ```
-
- ## Languages
-
- The text in the dataset is in English (**en**).
-
- ## Data Fields
-
- Each instance of the dataset represents a conversational utterance that a
- crowdworker made, while pretending to have a certain personality. Each instance has
- these fields:
-
- | Field Name      | Datatype       | Description |
- |-----------------|----------------|-------------|
- | `conv_id`       | int            | A unique identifier for the instance's conversation. |
- | `utterance_idx` | int            | The index of the instance in the conversation. |
- | `personality`   | list of string | Sentences describing the personality of the current speaker. |
- | `history`       | list of string | The conversation's utterances so far, alternating between speakers with one utterance per speaker. |
- | `candidates`    | list of string | A list of utterances including distractor utterances as well as the true utterance the speaker gave, given their personality and the conversation history thus far. The true utterance is always the last utterance in this list. |
-
- ## Dataset Curation
-
- The dataset was sourced from HuggingFace's version of the dataset used in the code for their
- ConvAI 2018 submission, which was described in their [blog article](https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313)
- on that submission. This version of the dataset has had extra white spaces removed,
- and a StanfordNLP [stanza](https://stanfordnlp.github.io/stanza/) NLP pipeline was
- used to conduct part-of-speech tagging to identify proper nouns, which were then
- capitalized. The pipeline was also used to conduct sentence segmentation, allowing
- the beginning of sentences to then be capitalized. Finally, all instances of the
- pronoun "I" were capitalized, along with its contractions.
-
- ## Citation Information
-
- For the PersonaChat dataset, please cite:
-
- ```
- @article{zhang2018personalizing,
-   title={Personalizing dialogue agents: I have a dog, do you have pets too?},
-   author={Zhang, Saizheng and Dinan, Emily and Urbanek, Jack and Szlam, Arthur and Kiela, Douwe and Weston, Jason},
-   journal={arXiv preprint arXiv:1801.07243},
-   year={2018}
- }
- ```
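The Celine Dion example in the deleted README illustrates the whitespace fix: the original corpus puts a space before every clause/sentence-separating punctuation mark. The whitespace half of the cleanup can be sketched with a small regex pass (`normalize_spacing` is a hypothetical helper, not from the dataset's tooling; the true-casing half requires an NLP pipeline such as stanza and is not shown):

```python
import re

def normalize_spacing(text: str) -> str:
    # Drop the extra space PersonaChat places before
    # clause/sentence-separating punctuation marks.
    return re.sub(r"\s+([.,!?;:])", r"\1", text)

print(normalize_spacing("i really like celine dion . what about you ?"))
```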
dataset_infos.json DELETED
@@ -1,176 +0,0 @@
- {
-   "full": {
-     "description": "A version of the PersonaChat dataset that has been true-cased, and also has been given more normalized punctuation.\nThe original PersonaChat dataset is in all lower case, and has extra space around each clause/sentence separating\npunctuation mark. This version of the dataset has more of a natural language look, with sentence capitalization,\nproper noun capitalization, and normalized whitespace. Also, each dialogue turn includes a pool of distractor\ncandidate responses, which can be used by a multiple choice regularization loss during training.\n",
-     "citation": "@article{zhang2018personalizing,\n title={Personalizing dialogue agents: I have a dog, do you have pets too?},\n author={Zhang, Saizheng and Dinan, Emily and Urbanek, Jack and Szlam, Arthur and Kiela, Douwe and Weston, Jason},\n journal={arXiv preprint arXiv:1801.07243},\n year={2018}\n}\n",
-     "homepage": "",
-     "license": "Like the original PersonaChat dataset, this dataset is released under the CC BY 4.0 license.",
-     "features": {
-       "personality": {
-         "feature": {
-           "dtype": "string",
-           "id": null,
-           "_type": "Value"
-         },
-         "length": -1,
-         "id": null,
-         "_type": "Sequence"
-       },
-       "candidates": {
-         "feature": {
-           "dtype": "string",
-           "id": null,
-           "_type": "Value"
-         },
-         "length": -1,
-         "id": null,
-         "_type": "Sequence"
-       },
-       "history": {
-         "feature": {
-           "dtype": "string",
-           "id": null,
-           "_type": "Value"
-         },
-         "length": -1,
-         "id": null,
-         "_type": "Sequence"
-       },
-       "conv_id": {
-         "dtype": "int32",
-         "id": null,
-         "_type": "Value"
-       },
-       "utterance_idx": {
-         "dtype": "int32",
-         "id": null,
-         "_type": "Value"
-       }
-     },
-     "post_processed": null,
-     "supervised_keys": null,
-     "builder_name": "personachat_truecased",
-     "config_name": "full",
-     "version": {
-       "version_str": "1.0.0",
-       "description": null,
-       "major": 1,
-       "minor": 0,
-       "patch": 0
-     },
-     "splits": {
-       "train": {
-         "name": "train",
-         "num_bytes": 208267262,
-         "num_examples": 131438,
-         "dataset_name": "personachat_truecased"
-       },
-       "validation": {
-         "name": "validation",
-         "num_bytes": 12968847,
-         "num_examples": 7801,
-         "dataset_name": "personachat_truecased"
-       }
-     },
-     "download_checksums": {
-       "./personachat_truecased_full_train.json": {
-         "num_bytes": 193210313,
-         "checksum": "cea0e87a230ecbf69ef7937a6012e12060a7f4a2bd9a1adc44d3141cb57938f3"
-       },
-       "./personachat_truecased_full_valid.json": {
-         "num_bytes": 11995403,
-         "checksum": "2277648a4b773d81cf3406eb872deff9489a930c7140b5a1a1bed79a48317562"
-       }
-     },
-     "download_size": 205205716,
-     "post_processing_size": null,
-     "dataset_size": 221236109,
-     "size_in_bytes": 426441825
-   },
-   "sample": {
-     "description": "A version of the PersonaChat dataset that has been true-cased, and also has been given more normalized punctuation.\nThe original PersonaChat dataset is in all lower case, and has extra space around each clause/sentence separating\npunctuation mark. This version of the dataset has more of a natural language look, with sentence capitalization,\nproper noun capitalization, and normalized whitespace. Also, each dialogue turn includes a pool of distractor\ncandidate responses, which can be used by a multiple choice regularization loss during training.\n",
-     "citation": "@article{zhang2018personalizing,\n title={Personalizing dialogue agents: I have a dog, do you have pets too?},\n author={Zhang, Saizheng and Dinan, Emily and Urbanek, Jack and Szlam, Arthur and Kiela, Douwe and Weston, Jason},\n journal={arXiv preprint arXiv:1801.07243},\n year={2018}\n}\n",
-     "homepage": "",
-     "license": "Like the original PersonaChat dataset, this dataset is released under the CC BY 4.0 license.",
-     "features": {
-       "personality": {
-         "feature": {
-           "dtype": "string",
-           "id": null,
-           "_type": "Value"
-         },
-         "length": -1,
-         "id": null,
-         "_type": "Sequence"
-       },
-       "candidates": {
-         "feature": {
-           "dtype": "string",
-           "id": null,
-           "_type": "Value"
-         },
-         "length": -1,
-         "id": null,
-         "_type": "Sequence"
-       },
-       "history": {
-         "feature": {
-           "dtype": "string",
-           "id": null,
-           "_type": "Value"
-         },
-         "length": -1,
-         "id": null,
-         "_type": "Sequence"
-       },
-       "conv_id": {
-         "dtype": "int32",
-         "id": null,
-         "_type": "Value"
-       },
-       "utterance_idx": {
-         "dtype": "int32",
-         "id": null,
-         "_type": "Value"
-       }
-     },
-     "post_processed": null,
-     "supervised_keys": null,
-     "builder_name": "personachat_truecased",
-     "config_name": "sample",
-     "version": {
-       "version_str": "1.0.0",
-       "description": null,
-       "major": 1,
-       "minor": 0,
-       "patch": 0
-     },
-     "splits": {
-       "train": {
-         "name": "train",
-         "num_bytes": 22552,
-         "num_examples": 14,
-         "dataset_name": "personachat_truecased"
-       },
-       "validation": {
-         "name": "validation",
-         "num_bytes": 24568,
-         "num_examples": 15,
-         "dataset_name": "personachat_truecased"
-       }
-     },
-     "download_checksums": {
-       "./personachat_truecased_sample_train.json": {
-         "num_bytes": 21396,
-         "checksum": "cba1d219748010a15e47bdb5fbc78903bd77c52ffa2ff8fdb96e6e68d1747e5a"
-       },
-       "./personachat_truecased_sample_valid.json": {
-         "num_bytes": 23092,
-         "checksum": "f0f956b6e6b359949073dc47117e192140a289ec974f2edb77c230f5a63e6420"
-       }
-     },
-     "download_size": 44488,
-     "post_processing_size": null,
-     "dataset_size": 47120,
-     "size_in_bytes": 91608
-   }
- }
personachat_truecased_full_valid.json → full/personachat_truecased-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2277648a4b773d81cf3406eb872deff9489a930c7140b5a1a1bed79a48317562
- size 11995403
+ oid sha256:4f0c7b8d13b3b324fc74590b1a925775168d3a704084c8515bb4261984ab4e50
+ size 97063692
personachat_truecased_sample_train.json → full/personachat_truecased-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:cba1d219748010a15e47bdb5fbc78903bd77c52ffa2ff8fdb96e6e68d1747e5a
- size 21396
+ oid sha256:9a4fa997d6f0ad272ac1c32fd17c55f42dff71abb5ff8e4cfe33a1cbee9e988b
+ size 4303155
personachat_truecased.py DELETED
@@ -1,96 +0,0 @@
- import json
-
- import datasets
- from datasets.features import Sequence
-
-
- _URLS = {
-     "full": {
-         "train": "./personachat_truecased_full_train.json",
-         "valid": "./personachat_truecased_full_valid.json"
-     },
-     "sample": {
-         "train": "./personachat_truecased_sample_train.json",
-         "valid": "./personachat_truecased_sample_valid.json"
-     }
- }
-
- _DESCRIPTION = """\
- A version of the PersonaChat dataset that has been true-cased, and also has been given more normalized punctuation.
- The original PersonaChat dataset is in all lower case, and has extra space around each clause/sentence separating
- punctuation mark. This version of the dataset has more of a natural language look, with sentence capitalization,
- proper noun capitalization, and normalized whitespace. Also, each dialogue turn includes a pool of distractor
- candidate responses, which can be used by a multiple choice regularization loss during training.
- """
-
- _CITATION = """\
- @article{zhang2018personalizing,
-   title={Personalizing dialogue agents: I have a dog, do you have pets too?},
-   author={Zhang, Saizheng and Dinan, Emily and Urbanek, Jack and Szlam, Arthur and Kiela, Douwe and Weston, Jason},
-   journal={arXiv preprint arXiv:1801.07243},
-   year={2018}
- }
- """
-
- _LICENSE = "Like the original PersonaChat dataset, this dataset is released under the CC BY 4.0 license."
-
-
- class PersonachatTruecased(datasets.GeneratorBasedBuilder):
-     """
-     Version of the PersonaChat dataset that includes true-casing, normalized punctuation, and candidate distractor
-     responses for each dialogue turn, for including a multiple choice regularization loss while training.
-     """
-
-     VERSION = datasets.Version("1.0.0")
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="full", version=VERSION, description="The full dataset."),
-         datasets.BuilderConfig(
-             name="sample", version=VERSION, description="A small sample of the dataset, useful for testing."
-         )
-     ]
-
-     DEFAULT_CONFIG_NAME = "full"
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features({
-                 "personality": Sequence(datasets.Value("string")),
-                 "candidates": Sequence(datasets.Value("string")),
-                 "history": Sequence(datasets.Value("string")),
-                 "conv_id": datasets.Value("int32"),
-                 "utterance_idx": datasets.Value("int32")
-             }),
-             citation=_CITATION,
-             license=_LICENSE
-         )
-
-     def _split_generators(self, dl_manager: datasets.DownloadManager):
-         split_paths = dl_manager.download(_URLS[self.config.name])
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"data_path": split_paths["train"]}
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={"data_path": split_paths["valid"]}
-             )
-         ]
-
-     def _generate_examples(self, data_path: str):
-         with open(data_path) as f:
-             data = json.load(f)
-         for conv_id, conv in enumerate(data):
-             personality = conv["personality"]
-             for utterance_idx, utterance in enumerate(conv["utterances"]):
-                 id_ = f"{conv_id}-{utterance_idx}"
-                 yield id_, {
-                     "personality": personality,
-                     "candidates": utterance["candidates"],
-                     "history": utterance["history"],
-                     "conv_id": conv_id,
-                     "utterance_idx": utterance_idx
-                 }
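The deleted script's `_generate_examples` flattens each conversation into one example per utterance, keyed `"<conv_id>-<utterance_idx>"`. A standalone sketch of that flattening logic, using toy in-memory data rather than the real JSON files:

```python
def generate_examples(data):
    # Flatten conversations into per-utterance examples, one id per turn,
    # mirroring the generator logic of the deleted loading script.
    for conv_id, conv in enumerate(data):
        for utterance_idx, utt in enumerate(conv["utterances"]):
            yield f"{conv_id}-{utterance_idx}", {
                "personality": conv["personality"],
                "candidates": utt["candidates"],
                "history": utt["history"],
                "conv_id": conv_id,
                "utterance_idx": utterance_idx,
            }

# Toy conversation in the JSON files' structure (hypothetical data).
toy = [{
    "personality": ["I like dogs."],
    "utterances": [{"candidates": ["Hi!", "Hello!"], "history": ["Hey"]}],
}]
examples = list(generate_examples(toy))
```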
personachat_truecased_full_train.json DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:cea0e87a230ecbf69ef7937a6012e12060a7f4a2bd9a1adc44d3141cb57938f3
- size 193210313
personachat_truecased_sample_valid.json DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:f0f956b6e6b359949073dc47117e192140a289ec974f2edb77c230f5a63e6420
- size 23092
sample/personachat_truecased-train.parquet ADDED
Binary file (16.1 kB).
 
sample/personachat_truecased-validation.parquet ADDED
Binary file (16.9 kB).