parquet-converter committed
Commit 5e65b1a
1 Parent(s): 6f27df5

Update parquet files

README.md DELETED
@@ -1,269 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language:
- - ja
- - en
- language_creators:
- - expert-generated
- license:
- - cc-by-sa-4.0
- multilinguality:
- - translation
- pretty_name: JSICK
- size_categories:
- - 10K<n<100K
- source_datasets:
- - extended|sick
- tags:
- - semantic-textual-similarity
- - sts
- task_categories:
- - sentence-similarity
- - text-classification
- task_ids:
- - natural-language-inference
- - semantic-similarity-scoring
- ---
-
- # Dataset Card for JSICK
-
- ## Table of Contents
- - [Dataset Card for JSICK](#dataset-card-for-jsick)
-   - [Table of Contents](#table-of-contents)
-   - [Dataset Description](#dataset-description)
-     - [Dataset Summary](#dataset-summary)
-       - [Japanese Sentences Involving Compositional Knowledge (JSICK) Dataset.](#japanese-sentences-involving-compositional-knowledge-jsick-dataset)
-       - [JSICK-stress Test set](#jsick-stress-test-set)
-     - [Languages](#languages)
-   - [Dataset Structure](#dataset-structure)
-     - [Data Instances](#data-instances)
-       - [base](#base)
-       - [stress](#stress)
-     - [Data Fields](#data-fields)
-       - [base](#base-1)
-       - [stress](#stress-1)
-     - [Data Splits](#data-splits)
-     - [Annotations](#annotations)
-   - [Additional Information](#additional-information)
-     - [Licensing Information](#licensing-information)
-     - [Citation Information](#citation-information)
-     - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** https://github.com/verypluming/JSICK
- - **Repository:** https://github.com/verypluming/JSICK
- - **Paper:** https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00518/113850/Compositional-Evaluation-on-Japanese-Textual
- - **Paper:** https://www.jstage.jst.go.jp/article/pjsai/JSAI2021/0/JSAI2021_4J3GS6f02/_pdf/-char/ja
-
- ### Dataset Summary
-
- From official [GitHub](https://github.com/verypluming/JSICK):
-
- #### Japanese Sentences Involving Compositional Knowledge (JSICK) Dataset.
-
- JSICK is a Japanese NLI and STS dataset built by manually translating the English dataset [SICK (Marelli et al., 2014)](https://aclanthology.org/L14-1314/) into Japanese.
- We hope that our dataset will be useful for research on more advanced models that are capable of appropriately performing multilingual compositional inference.
-
-
- #### JSICK-stress Test set
-
- The JSICK-stress test set is a dataset for investigating whether models capture word order and case particles in Japanese.
- It is provided by transforming the syntactic structures of sentence pairs in JSICK, where we analyze whether models are attentive to word order and case particles when predicting entailment labels and similarity scores.
-
- The JSICK test set contains 1666, 797, and 1006 sentence pairs (A, B) whose premise sentences A (the column `sentence_A_Ja_origin`) follow the basic word order involving
- ga-o (nominative-accusative), ga-ni (nominative-dative), and ga-de (nominative-instrumental/locative) relations, respectively.
-
- We provide the JSICK-stress test set by transforming the syntactic structures of these pairs in the following three ways:
- - `scrum_ga_o`: a scrambled pair, where the word order of premise sentence A is scrambled into o-ga, ni-ga, and de-ga order, respectively.
- - `ex_ga_o`: a rephrased pair, where only the case particles (ga, o, ni, de) in premise A are swapped.
- - `del_ga_o`: a rephrased pair, where only the case particles (ga, o, ni) in premise A are deleted.
-
-
- ### Languages
-
- The language data in JSICK is in Japanese and English.
-
-
- ## Dataset Structure
-
-
- ### Data Instances
- To load a specific configuration, pass its name to `load_dataset` via the `name` argument:
-
- ```python
- import datasets as ds
-
- dataset: ds.DatasetDict = ds.load_dataset("hpprc/jsick")
- print(dataset)
- # DatasetDict({
- #     train: Dataset({
- #         features: ['id', 'premise', 'hypothesis', 'label', 'score', 'premise_en', 'hypothesis_en', 'label_en', 'score_en', 'corr_entailment_labelAB_En', 'corr_entailment_labelBA_En', 'image_ID', 'original_caption', 'semtag_short', 'semtag_long'],
- #         num_rows: 4500
- #     })
- #     test: Dataset({
- #         features: ['id', 'premise', 'hypothesis', 'label', 'score', 'premise_en', 'hypothesis_en', 'label_en', 'score_en', 'corr_entailment_labelAB_En', 'corr_entailment_labelBA_En', 'image_ID', 'original_caption', 'semtag_short', 'semtag_long'],
- #         num_rows: 4927
- #     })
- # })
-
- dataset: ds.DatasetDict = ds.load_dataset("hpprc/jsick", name="stress")
- print(dataset)
- # DatasetDict({
- #     test: Dataset({
- #         features: ['id', 'premise', 'hypothesis', 'label', 'score', 'sentence_A_Ja_origin', 'entailment_label_origin', 'relatedness_score_Ja_origin', 'rephrase_type', 'case_particles'],
- #         num_rows: 900
- #     })
- # })
- ```
-
-
- #### base
-
- An example looks as follows:
-
- ```json
- {
-   'id': 1,
-   'premise': '子供たちのグループが庭で遊んでいて、後ろの方には年を取った男性が立っている',
-   'hypothesis': '庭にいる男の子たちのグループが遊んでいて、男性が後ろの方に立っている',
-   'label': 1, // (neutral)
-   'score': 3.700000047683716,
-   'premise_en': 'A group of kids is playing in a yard and an old man is standing in the background',
-   'hypothesis_en': 'A group of boys in a yard is playing and a man is standing in the background',
-   'label_en': 1, // (neutral)
-   'score_en': 4.5,
-   'corr_entailment_labelAB_En': 'nan',
-   'corr_entailment_labelBA_En': 'nan',
-   'image_ID': '3155657768_b83a7831e5.jpg',
-   'original_caption': 'A group of children playing in a yard , a man in the background .',
-   'semtag_short': 'nan',
-   'semtag_long': 'nan',
- }
- ```
-
- #### stress
-
- An example looks as follows:
-
- ```json
- {
-   'id': '5818_de_d',
-   'premise': '女性火の近くダンスをしている',
-   'hypothesis': '火の近くでダンスをしている女性は一人もいない',
-   'label': 2, // (contradiction)
-   'score': 4.0,
-   'sentence_A_Ja_origin': '女性が火の近くでダンスをしている',
-   'entailment_label_origin': 2,
-   'relatedness_score_Ja_origin': 3.700000047683716,
-   'rephrase_type': 'd',
-   'case_particles': 'de'
- }
- ```
-
- ### Data Fields
-
- #### base
-
- A version adopting the column names of a typical NLI dataset.
-
- | Name                       | Description                                                                                                                               |
- | -------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
- | id                         | The ids (the same as the original SICK).                                                                                                  |
- | premise                    | The first sentence in Japanese.                                                                                                           |
- | hypothesis                 | The second sentence in Japanese.                                                                                                          |
- | label                      | The entailment label in Japanese.                                                                                                         |
- | score                      | The relatedness score in the range [1-5] in Japanese.                                                                                     |
- | premise_en                 | The first sentence in English.                                                                                                            |
- | hypothesis_en              | The second sentence in English.                                                                                                           |
- | label_en                   | The original entailment label in English.                                                                                                 |
- | score_en                   | The original relatedness score in the range [1-5] in English.                                                                             |
- | semtag_short               | The linguistic phenomena tags in Japanese.                                                                                                |
- | semtag_long                | The details of linguistic phenomena tags in Japanese.                                                                                     |
- | image_ID                   | The original image in the [8K ImageFlickr dataset](https://www.kaggle.com/datasets/adityajn105/flickr8k).                                 |
- | original_caption           | The original caption in the [8K ImageFlickr dataset](https://www.kaggle.com/datasets/adityajn105/flickr8k).                               |
- | corr_entailment_labelAB_En | The corrected entailment label from A to B in English by [(Kalouli et al., 2017)](http://vcvpaiva.github.io/includes/pubs/2017-iwcs.pdf). |
- | corr_entailment_labelBA_En | The corrected entailment label from B to A in English by [(Kalouli et al., 2017)](http://vcvpaiva.github.io/includes/pubs/2017-iwcs.pdf). |
189
-
190
- #### stress
191
-
192
- | Name | Description |
193
- | --------------------------- | ------------------------------------------------------------------------------------------------- |
194
- | id | Ids (the same with original SICK). |
195
- | premise | The first sentence in Japanese. |
196
- | hypothesis | The second sentence in Japanese. |
197
- | label | The entailment label in Japanese |
198
- | score | The relatedness score in the range [1-5] in Japanese. |
199
- | sentence_A_Ja_origin | The original premise sentences A from the JSICK test set. |
200
- | entailment_label_origin | The original entailment labels. |
201
- | relatedness_score_Ja_origin | The original relatedness scores. |
202
- | rephrase_type | The type of transformation applied to the syntactic structures of the sentence pairs. |
203
- | case_particles | The grammatical particles in Japanese that indicate the function or role of a noun in a sentence. |
204
-
205
-
206
- ### Data Splits
207
-
208
- | name | train | validation | test |
209
- | --------------- | ----: | ---------: | ----: |
210
- | base | 4,500 | | 4,927 |
211
- | original | 4,500 | | 4,927 |
212
- | stress | | | 900 |
213
- | stress-original | | | 900 |
214
-
215
-
216
- ### Annotations
217
-
218
- To annotate the JSICK dataset, they used the crowdsourcing platform "Lancers" to re-annotate entailment labels and similarity scores for JSICK.
219
- They had six native Japanese speakers as annotators, who were randomly selected from the platform.
220
- The annotators were asked to fully understand the guidelines and provide the same labels as gold labels for ten test questions.
221
-
222
- For entailment labels, they adopted annotations that were agreed upon by a majority vote as gold labels and checked whether the majority judgment vote was semantically valid for each example.
223
- For similarity scores, they used the average of the annotation results as gold scores.
224
- The raw annotations with the JSICK dataset are [publicly available](https://github.com/verypluming/JSICK/blob/main/jsick/jsick-all-annotations.tsv).
225
- The average annotation time was 1 minute per pair, and Krippendorff's alpha for the entailment labels was 0.65.
226
-
227
-
228
- ## Additional Information
229
-
230
- - [verypluming/JSICK](https://github.com/verypluming/JSICK)
231
- - [Compositional Evaluation on Japanese Textual Entailment and Similarity](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00518/113850/Compositional-Evaluation-on-Japanese-Textual)
232
- - [JSICK: 日本語構成的推論・類似度データセットの構築](https://www.jstage.jst.go.jp/article/pjsai/JSAI2021/0/JSAI2021_4J3GS6f02/_article/-char/ja)
233
-
234
- ### Licensing Information
235
-
236
- CC BY-SA 4.0
237
-
238
- ### Citation Information
239
-
240
- ```bibtex
241
- @article{yanaka-mineshima-2022-compositional,
242
- title = "Compositional Evaluation on {J}apanese Textual Entailment and Similarity",
243
- author = "Yanaka, Hitomi and
244
- Mineshima, Koji",
245
- journal = "Transactions of the Association for Computational Linguistics",
246
- volume = "10",
247
- year = "2022",
248
- address = "Cambridge, MA",
249
- publisher = "MIT Press",
250
- url = "https://aclanthology.org/2022.tacl-1.73",
251
- doi = "10.1162/tacl_a_00518",
252
- pages = "1266--1284",
253
- }
254
-
255
- @article{谷中 瞳2021,
256
- title={JSICK: 日本語構成的推論・類似度データセットの構築},
257
- author={谷中 瞳 and 峯島 宏次},
258
- journal={人工知能学会全国大会論文集},
259
- volume={JSAI2021},
260
- number={ },
261
- pages={4J3GS6f02-4J3GS6f02},
262
- year={2021},
263
- doi={10.11517/pjsai.JSAI2021.0_4J3GS6f02}
264
- }
265
- ```
266
-
267
- ### Contributions
268
-
269
- Thanks to [Hitomi Yanaka](https://hitomiyanaka.mystrikingly.com/) and [Koji Mineshima](https://abelard.flet.keio.ac.jp/person/minesima/index-j.html) for creating this dataset.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
base/jsick-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dbdacbe035475b857ddb2ed95ade89271a8aa2a633687aca5d5c4593009eee9b
+ size 490960
base/jsick-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e800bc1798bdeb01a5571d359f1b359bfdaf96cb6da759e5dcdd63caddc85dd7
+ size 468526
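
The added files are Git LFS pointers to parquet shards, one per split under each configuration directory. They can be read without the removed jsick.py loading script; a minimal sketch using the generic `parquet` builder of the `datasets` library, with paths relative to a local checkout of this repository (pyarrow required):

```python
import datasets as ds

# Load the "base" configuration directly from the converted parquet files.
dataset = ds.load_dataset(
    "parquet",
    data_files={
        "train": "base/jsick-train.parquet",
        "test": "base/jsick-test.parquet",
    },
)
print(dataset)
```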
jsick.py DELETED
@@ -1,208 +0,0 @@
- import datasets as ds
- import pandas as pd
-
- _CITATION = """\
- @article{yanaka-mineshima-2022-compositional,
-     title = "Compositional Evaluation on {J}apanese Textual Entailment and Similarity",
-     author = "Yanaka, Hitomi and Mineshima, Koji",
-     journal = "Transactions of the Association for Computational Linguistics",
-     volume = "10",
-     year = "2022",
-     address = "Cambridge, MA",
-     publisher = "MIT Press",
-     url = "https://aclanthology.org/2022.tacl-1.73",
-     doi = "10.1162/tacl_a_00518",
-     pages = "1266--1284",
- }
- """
-
- _DESCRIPTION = """\
- Japanese Sentences Involving Compositional Knowledge (JSICK) Dataset.
- JSICK is the Japanese NLI and STS dataset by manually translating the English dataset SICK (Marelli et al., 2014) into Japanese.
- We hope that our dataset will be useful in research for realizing more advanced models that are capable of appropriately performing multilingual compositional inference.
- (from official website)
- """
-
- _HOMEPAGE = "https://github.com/verypluming/JSICK"
-
- _LICENSE = "CC BY-SA 4.0"
-
- _URLS = {
-     "base": "https://raw.githubusercontent.com/verypluming/JSICK/main/jsick/jsick.tsv",
-     "stress": "https://raw.githubusercontent.com/verypluming/JSICK/main/jsick-stress/jsick-stress-all-annotations.tsv",
- }
-
-
- class JSICKDataset(ds.GeneratorBasedBuilder):
-     VERSION = ds.Version("1.0.0")
-     DEFAULT_CONFIG_NAME = "base"
-
-     BUILDER_CONFIGS = [
-         ds.BuilderConfig(
-             name="base",
-             version=VERSION,
-             description="A version adopting the column names of a typical NLI dataset.",
-         ),
-         ds.BuilderConfig(
-             name="original",
-             version=VERSION,
-             description="The original version retaining the unaltered column names.",
-         ),
-         ds.BuilderConfig(
-             name="stress",
-             version=VERSION,
-             description="The dataset to investigate whether models capture word order and case particles in Japanese.",
-         ),
-         ds.BuilderConfig(
-             name="stress-original",
-             version=VERSION,
-             description="The original version of JSICK-stress Test set retaining the unaltered column names.",
-         ),
-     ]
-
-     def _info(self) -> ds.DatasetInfo:
-         labels = ds.ClassLabel(names=["entailment", "neutral", "contradiction"])
-         if self.config.name == "base":
-             features = ds.Features(
-                 {
-                     "id": ds.Value("int32"),
-                     "premise": ds.Value("string"),
-                     "hypothesis": ds.Value("string"),
-                     "label": labels,
-                     "score": ds.Value("float32"),
-                     "premise_en": ds.Value("string"),
-                     "hypothesis_en": ds.Value("string"),
-                     "label_en": labels,
-                     "score_en": ds.Value("float32"),
-                     "corr_entailment_labelAB_En": ds.Value("string"),
-                     "corr_entailment_labelBA_En": ds.Value("string"),
-                     "image_ID": ds.Value("string"),
-                     "original_caption": ds.Value("string"),
-                     "semtag_short": ds.Value("string"),
-                     "semtag_long": ds.Value("string"),
-                 }
-             )
-         elif self.config.name == "original":
-             features = ds.Features(
-                 {
-                     "pair_ID": ds.Value("int32"),
-                     "sentence_A_Ja": ds.Value("string"),
-                     "sentence_B_Ja": ds.Value("string"),
-                     "entailment_label_Ja": labels,
-                     "relatedness_score_Ja": ds.Value("float32"),
-                     "sentence_A_En": ds.Value("string"),
-                     "sentence_B_En": ds.Value("string"),
-                     "entailment_label_En": labels,
-                     "relatedness_score_En": ds.Value("float32"),
-                     "corr_entailment_labelAB_En": ds.Value("string"),
-                     "corr_entailment_labelBA_En": ds.Value("string"),
-                     "image_ID": ds.Value("string"),
-                     "original_caption": ds.Value("string"),
-                     "semtag_short": ds.Value("string"),
-                     "semtag_long": ds.Value("string"),
-                 }
-             )
-
-         elif self.config.name == "stress":
-             features = ds.Features(
-                 {
-                     "id": ds.Value("string"),
-                     "premise": ds.Value("string"),
-                     "hypothesis": ds.Value("string"),
-                     "label": labels,
-                     "score": ds.Value("float32"),
-                     "sentence_A_Ja_origin": ds.Value("string"),
-                     "entailment_label_origin": labels,
-                     "relatedness_score_Ja_origin": ds.Value("float32"),
-                     "rephrase_type": ds.Value("string"),
-                     "case_particles": ds.Value("string"),
-                 }
-             )
-
-         elif self.config.name == "stress-original":
-             features = ds.Features(
-                 {
-                     "pair_ID": ds.Value("string"),
-                     "sentence_A_Ja": ds.Value("string"),
-                     "sentence_B_Ja": ds.Value("string"),
-                     "entailment_label_Ja": labels,
-                     "relatedness_score_Ja": ds.Value("float32"),
-                     "sentence_A_Ja_origin": ds.Value("string"),
-                     "entailment_label_origin": labels,
-                     "relatedness_score_Ja_origin": ds.Value("float32"),
-                     "rephrase_type": ds.Value("string"),
-                     "case_particles": ds.Value("string"),
-                 }
-             )
-
-         return ds.DatasetInfo(
-             description=_DESCRIPTION,
-             citation=_CITATION,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             features=features,
-         )
-
-     def _split_generators(self, dl_manager: ds.DownloadManager):
-         if self.config.name in ["base", "original"]:
-             url = _URLS["base"]
-         elif self.config.name in ["stress", "stress-original"]:
-             url = _URLS["stress"]
-
-         data_path = dl_manager.download_and_extract(url)
-         df: pd.DataFrame = pd.read_table(data_path, sep="\t", header=0)
-
-         if self.config.name in ["stress", "stress-original"]:
-             df = df[
-                 [
-                     "pair_ID",
-                     "sentence_A_Ja",
-                     "sentence_B_Ja",
-                     "entailment_label_Ja",
-                     "relatedness_score_Ja",
-                     "sentence_A_Ja_origin",
-                     "entailment_label_origin",
-                     "relatedness_score_Ja_origin",
-                     "rephrase_type",
-                     "case_particles",
-                 ]
-             ]
-
-         if self.config.name in ["base", "stress"]:
-             df = df.rename(
-                 columns={
-                     "pair_ID": "id",
-                     "sentence_A_Ja": "premise",
-                     "sentence_B_Ja": "hypothesis",
-                     "entailment_label_Ja": "label",
-                     "relatedness_score_Ja": "score",
-                     "sentence_A_En": "premise_en",
-                     "sentence_B_En": "hypothesis_en",
-                     "entailment_label_En": "label_en",
-                     "relatedness_score_En": "score_en",
-                 }
-             )
-
-         if self.config.name in ["base", "original"]:
-             return [
-                 ds.SplitGenerator(
-                     name=ds.Split.TRAIN,
-                     gen_kwargs={"df": df[df["data"] == "train"].drop("data", axis=1)},
-                 ),
-                 ds.SplitGenerator(
-                     name=ds.Split.TEST,
-                     gen_kwargs={"df": df[df["data"] == "test"].drop("data", axis=1)},
-                 ),
-             ]
-
-         elif self.config.name in ["stress", "stress-original"]:
-             return [
-                 ds.SplitGenerator(
-                     name=ds.Split.TEST,
-                     gen_kwargs={"df": df},
-                 ),
-             ]
-
-     def _generate_examples(self, df: pd.DataFrame):
-         for i, row in enumerate(df.to_dict("records")):
-             yield i, row
original/jsick-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5993d57775641363fe43fba991d04ea2222c48162524e88092fd937d58034749
+ size 491980
original/jsick-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:33b09a3c2a9b91d39be1f80639d1bc423126312e0327d453b5dc59c0eddb649b
+ size 469546
poetry.lock DELETED
The diff for this file is too large to render. See raw diff
 
pyproject.toml DELETED
@@ -1,23 +0,0 @@
- [tool.poetry]
- name = "datasets-jsick"
- version = "0.1.0"
- description = ""
- authors = ["hppRC <hpp.ricecake@gmail.com>"]
- readme = "README.md"
- packages = []
-
- [tool.poetry.dependencies]
- python = "^3.8.1"
- datasets = "^2.11.0"
-
-
- [tool.poetry.group.dev.dependencies]
- black = "^22.12.0"
- isort = "^5.11.4"
- flake8 = "^6.0.0"
- mypy = "^0.991"
- pytest = "^7.2.0"
-
- [build-system]
- requires = ["poetry-core"]
- build-backend = "poetry.core.masonry.api"
stress-original/jsick-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2863f1fd6fb1d07351973dcb93aceb55428d65229926066a8cd6039a2efabc52
+ size 85842
stress/jsick-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fda0695e52c3dc375951341549429ee74fad71457bb60f713afbc9391bc9cdd6
+ size 85562