Commit a7b4daa (0 parents), committed by system (HF staff)

Update files from the datasets library (from 1.6.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.6.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,207 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ - found
+ - machine-generated
+ languages:
+ - en
+ - de
+ - es
+ - fr
+ - it
+ - nl
+ - pl
+ - pt
+ - ru
+ - zh
+ licenses:
+ - custom
+ multilinguality:
+ - multilingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - extended|other-sts-b
+ task_categories:
+ - text-scoring
+ task_ids:
+ - semantic-similarity-scoring
+ ---
+
+ # Dataset Card for STSb Multi MT
+
+ ## Table of Contents
+ - [Dataset Card for STSb Multi MT](#dataset-card-for-stsb-multi-mt)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+     - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Repository**: https://github.com/PhilipMay/stsb-multi-mt
+ - **Homepage (original dataset):** https://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark
+ - **Paper about original dataset:** https://arxiv.org/abs/1708.00055
+ - **Leaderboard:** https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark#Results
+ - **Point of Contact:** [Open an issue on GitHub](https://github.com/PhilipMay/stsb-multi-mt/issues/new)
+
+ ### Dataset Summary
+
+ > STS Benchmark comprises a selection of the English datasets used in the STS tasks organized
+ > in the context of SemEval between 2012 and 2017. The selection of datasets include text from
+ > image captions, news headlines and user forums. ([source](https://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark))
+
+ These are different multilingual translations and the English original of the [STSbenchmark dataset](https://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark). Translation has been done with [deepl.com](https://www.deepl.com/). It can be used to train [sentence embeddings](https://github.com/UKPLab/sentence-transformers) like [T-Systems-onsite/cross-en-de-roberta-sentence-transformer](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer).
+
+ **Examples of Use**
+
+ Load the German dev split:
+ ```python
+ from datasets import load_dataset
+ dataset = load_dataset("stsb_multi_mt", name="de", split="dev")
+ ```
+
+ Load the English train split:
+ ```python
+ from datasets import load_dataset
+ dataset = load_dataset("stsb_multi_mt", name="en", split="train")
+ ```
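The `similarity_score` in each record ranges from 0 to 5. When these sentence pairs are used to train sentence embeddings, the score is commonly rescaled to [0, 1] before serving as a regression label. A minimal sketch of that preprocessing step (the `normalize_record` helper is illustrative, not part of the dataset or the `datasets` library):

```python
def normalize_record(record, max_score=5.0):
    """Rescale a raw STSb similarity score from [0, max_score] to [0, 1]."""
    score = record["similarity_score"]
    if not 0.0 <= score <= max_score:
        raise ValueError(f"similarity_score out of range: {score}")
    return {
        "texts": [record["sentence1"], record["sentence2"]],
        "label": score / max_score,
    }

# A record shaped like the examples in this card.
example = {
    "sentence1": "A man is playing a large flute.",
    "sentence2": "A man is playing a flute.",
    "similarity_score": 3.8,
}
print(normalize_record(example)["label"])  # ≈ 0.76
```

With the `datasets` library, the same transformation can be applied to a whole split via `dataset.map(normalize_record)`.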
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ Available languages are: de, en, es, fr, it, nl, pl, pt, ru, zh
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ This dataset provides pairs of sentences and a score of their similarity.
+
+ Score | Example sentence pair | Explanation
+ ------|---------|------------
+ 5 | *The bird is bathing in the sink.<br/>Birdie is washing itself in the water basin.* | The two sentences are completely equivalent, as they mean the same thing.
+ 4 | *Two boys on a couch are playing video games.<br/>Two boys are playing a video game.* | The two sentences are mostly equivalent, but some unimportant details differ.
+ 3 | *John said he is considered a witness but not a suspect.<br/>“He is not a suspect anymore.” John said.* | The two sentences are roughly equivalent, but some important information differs or is missing.
+ 2 | *They flew out of the nest in groups.<br/>They flew into the nest together.* | The two sentences are not equivalent, but share some details.
+ 1 | *The woman is playing the violin.<br/>The young lady enjoys listening to the guitar.* | The two sentences are not equivalent, but are on the same topic.
+ 0 | *The black dog is running through the snow.<br/>A race car driver is driving his car through the mud.* | The two sentences are completely dissimilar.
+
+ An example:
+ ```json
+ {
+     "sentence1": "A man is playing a large flute.",
+     "sentence2": "A man is playing a flute.",
+     "similarity_score": 3.8
+ }
+ ```
+
+ ### Data Fields
+
+ - `sentence1`: The 1st sentence as a `str`.
+ - `sentence2`: The 2nd sentence as a `str`.
+ - `similarity_score`: The similarity score as a `float`, ranging from `0.0` to `5.0`.
+
+ ### Data Splits
+
+ - train with 5749 samples
+ - dev with 1500 samples
+ - test with 1379 samples
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ See the [LICENSE](https://github.com/PhilipMay/stsb-multi-mt/blob/main/LICENSE) file and the [download page of the original dataset](https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark).
+
+ ### Citation Information
+
+ ```bibtex
+ @InProceedings{huggingface:dataset:stsb_multi_mt,
+     title = {Machine translated multilingual STS benchmark dataset.},
+     author={Philip May},
+     year={2021},
+     url={https://github.com/PhilipMay/stsb-multi-mt}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@PhilipMay](https://github.com/PhilipMay) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"en": {"description": "This is a multilingual translation of the STSbenchmark dataset. Translation has been done with deepl.com.\n", "citation": "@InProceedings{huggingface:dataset:stsb_multi_mt,\ntitle = {Machine translated multilingual STS benchmark dataset.},\nauthor={Philip May},\nyear={2021},\nurl={https://github.com/PhilipMay/stsb-multi-mt}\n}\n", "homepage": "https://github.com/PhilipMay/stsb-multi-mt", "license": "custom license - see project page", "features": {"sentence1": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2": {"dtype": "string", "id": null, "_type": "Value"}, "similarity_score": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "stsb_multi_mt", "config_name": "en", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 731803, "num_examples": 5749, "dataset_name": "stsb_multi_mt"}, "test": {"name": "test", "num_bytes": 164466, "num_examples": 1379, "dataset_name": "stsb_multi_mt"}, "dev": {"name": "dev", "num_bytes": 210072, "num_examples": 1500, "dataset_name": "stsb_multi_mt"}}, "download_checksums": {"https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-en-train.csv": {"num_bytes": 708863, "checksum": "e1e84fec60bbb598735552f54a35f4949904a484750fd2cb11e2720e49f63da6"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-en-dev.csv": {"num_bytes": 204675, "checksum": "d29586e96558c4eb52cf5ea5d14e9c24d3bf0e44f111b017caba43a5adc33226"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-en-test.csv": {"num_bytes": 158891, "checksum": "11523b625219e94e9ca05d2816b5f02cac1614c5894fe657376fa0806378d053"}}, "download_size": 1072429, "post_processing_size": null, "dataset_size": 1106341, "size_in_bytes": 2178770}, "de": {"description": "This is a multilingual translation of the STSbenchmark dataset. 
Translation has been done with deepl.com.\n", "citation": "@InProceedings{huggingface:dataset:stsb_multi_mt,\ntitle = {Machine translated multilingual STS benchmark dataset.},\nauthor={Philip May},\nyear={2021},\nurl={https://github.com/PhilipMay/stsb-multi-mt}\n}\n", "homepage": "https://github.com/PhilipMay/stsb-multi-mt", "license": "custom license - see project page", "features": {"sentence1": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2": {"dtype": "string", "id": null, "_type": "Value"}, "similarity_score": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "stsb_multi_mt", "config_name": "de", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 867473, "num_examples": 5749, "dataset_name": "stsb_multi_mt"}, "test": {"name": "test", "num_bytes": 193333, "num_examples": 1379, "dataset_name": "stsb_multi_mt"}, "dev": {"name": "dev", "num_bytes": 247077, "num_examples": 1500, "dataset_name": "stsb_multi_mt"}}, "download_checksums": {"https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-de-train.csv": {"num_bytes": 847845, "checksum": "7a393283b1ec86d9919452bd793db2938831169b29f1584660c3d20e2ee2a53e"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-de-dev.csv": {"num_bytes": 242726, "checksum": "40ddf84ce3d2dfb957e11687edea30e5567c542219998f906ee552cfd3ef6769"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-de-test.csv": {"num_bytes": 188602, "checksum": "a92ce1015b201a342784ab7e34e024ceb0a0eeb479231fbadd05f756afb817fd"}}, "download_size": 1279173, "post_processing_size": null, "dataset_size": 1307883, "size_in_bytes": 2587056}, "es": {"description": "This is a multilingual translation of the STSbenchmark dataset. 
Translation has been done with deepl.com.\n", "citation": "@InProceedings{huggingface:dataset:stsb_multi_mt,\ntitle = {Machine translated multilingual STS benchmark dataset.},\nauthor={Philip May},\nyear={2021},\nurl={https://github.com/PhilipMay/stsb-multi-mt}\n}\n", "homepage": "https://github.com/PhilipMay/stsb-multi-mt", "license": "custom license - see project page", "features": {"sentence1": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2": {"dtype": "string", "id": null, "_type": "Value"}, "similarity_score": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "stsb_multi_mt", "config_name": "es", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 887101, "num_examples": 5749, "dataset_name": "stsb_multi_mt"}, "test": {"name": "test", "num_bytes": 194616, "num_examples": 1379, "dataset_name": "stsb_multi_mt"}, "dev": {"name": "dev", "num_bytes": 245250, "num_examples": 1500, "dataset_name": "stsb_multi_mt"}}, "download_checksums": {"https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-es-train.csv": {"num_bytes": 864921, "checksum": "c0b1e809c1d5e51f95e90753656a371a5c92fff9f459ba01be20ce23f3ecc8b4"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-es-dev.csv": {"num_bytes": 240097, "checksum": "7afece0711add2fb35bc65874bdf0fbbacbd5c784ebb3b926f4099f3989317d4"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-es-test.csv": {"num_bytes": 189142, "checksum": "2b6f60e63f19806436cdfd8fe314f91144f97e403d1d50f20ff9ffcb323d5b2f"}}, "download_size": 1294160, "post_processing_size": null, "dataset_size": 1326967, "size_in_bytes": 2621127}, "fr": {"description": "This is a multilingual translation of the STSbenchmark dataset. 
Translation has been done with deepl.com.\n", "citation": "@InProceedings{huggingface:dataset:stsb_multi_mt,\ntitle = {Machine translated multilingual STS benchmark dataset.},\nauthor={Philip May},\nyear={2021},\nurl={https://github.com/PhilipMay/stsb-multi-mt}\n}\n", "homepage": "https://github.com/PhilipMay/stsb-multi-mt", "license": "custom license - see project page", "features": {"sentence1": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2": {"dtype": "string", "id": null, "_type": "Value"}, "similarity_score": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "stsb_multi_mt", "config_name": "fr", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 910195, "num_examples": 5749, "dataset_name": "stsb_multi_mt"}, "test": {"name": "test", "num_bytes": 200446, "num_examples": 1379, "dataset_name": "stsb_multi_mt"}, "dev": {"name": "dev", "num_bytes": 254083, "num_examples": 1500, "dataset_name": "stsb_multi_mt"}}, "download_checksums": {"https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-fr-train.csv": {"num_bytes": 888493, "checksum": "29cece1d7e713cbc5c6940d4a0bbba7764da093e6d0b9da8fee26662b8af1d0a"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-fr-dev.csv": {"num_bytes": 249002, "checksum": "d9f0ed00811dc0d91663f195790bccc4627e0da3575e79e4e06ac777ec38039f"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-fr-test.csv": {"num_bytes": 195020, "checksum": "f7da31a1d88557344afcb26430dc1c6cb85e76b564333de1a29f8008205b39b2"}}, "download_size": 1332515, "post_processing_size": null, "dataset_size": 1364724, "size_in_bytes": 2697239}, "it": {"description": "This is a multilingual translation of the STSbenchmark dataset. 
Translation has been done with deepl.com.\n", "citation": "@InProceedings{huggingface:dataset:stsb_multi_mt,\ntitle = {Machine translated multilingual STS benchmark dataset.},\nauthor={Philip May},\nyear={2021},\nurl={https://github.com/PhilipMay/stsb-multi-mt}\n}\n", "homepage": "https://github.com/PhilipMay/stsb-multi-mt", "license": "custom license - see project page", "features": {"sentence1": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2": {"dtype": "string", "id": null, "_type": "Value"}, "similarity_score": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "stsb_multi_mt", "config_name": "it", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 871526, "num_examples": 5749, "dataset_name": "stsb_multi_mt"}, "test": {"name": "test", "num_bytes": 191647, "num_examples": 1379, "dataset_name": "stsb_multi_mt"}, "dev": {"name": "dev", "num_bytes": 243144, "num_examples": 1500, "dataset_name": "stsb_multi_mt"}}, "download_checksums": {"https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-it-train.csv": {"num_bytes": 849516, "checksum": "2e6563f62debb6c9a1233dc411424083b8694c8fe0a6d85723c8ce1ec8cfd894"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-it-dev.csv": {"num_bytes": 237958, "checksum": "ee64355376825a84ee6cc82b36d9f22b7f03a77d73ea97756cb7aaf85079552b"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-it-test.csv": {"num_bytes": 186156, "checksum": "e6f0be0552dd3c9e49ac794505086056b07f9cf2be68615e961107841a540248"}}, "download_size": 1273630, "post_processing_size": null, "dataset_size": 1306317, "size_in_bytes": 2579947}, "nl": {"description": "This is a multilingual translation of the STSbenchmark dataset. 
Translation has been done with deepl.com.\n", "citation": "@InProceedings{huggingface:dataset:stsb_multi_mt,\ntitle = {Machine translated multilingual STS benchmark dataset.},\nauthor={Philip May},\nyear={2021},\nurl={https://github.com/PhilipMay/stsb-multi-mt}\n}\n", "homepage": "https://github.com/PhilipMay/stsb-multi-mt", "license": "custom license - see project page", "features": {"sentence1": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2": {"dtype": "string", "id": null, "_type": "Value"}, "similarity_score": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "stsb_multi_mt", "config_name": "nl", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 833667, "num_examples": 5749, "dataset_name": "stsb_multi_mt"}, "test": {"name": "test", "num_bytes": 182904, "num_examples": 1379, "dataset_name": "stsb_multi_mt"}, "dev": {"name": "dev", "num_bytes": 234887, "num_examples": 1500, "dataset_name": "stsb_multi_mt"}}, "download_checksums": {"https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-nl-train.csv": {"num_bytes": 810934, "checksum": "4d60f75ec8e7a51c79f2fd7207747cf567bb16812f12f798031b119a14018f7c"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-nl-dev.csv": {"num_bytes": 229492, "checksum": "d2b2a895824d6bd60a28dbf9c22248df6e65603a389c08943255b7c2c694f8f2"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-nl-test.csv": {"num_bytes": 177327, "checksum": "a0ee96b6ee9ed0181664e687f6b42f6273a87c885f34833a1e5a06368759cb35"}}, "download_size": 1217753, "post_processing_size": null, "dataset_size": 1251458, "size_in_bytes": 2469211}, "pl": {"description": "This is a multilingual translation of the STSbenchmark dataset. 
Translation has been done with deepl.com.\n", "citation": "@InProceedings{huggingface:dataset:stsb_multi_mt,\ntitle = {Machine translated multilingual STS benchmark dataset.},\nauthor={Philip May},\nyear={2021},\nurl={https://github.com/PhilipMay/stsb-multi-mt}\n}\n", "homepage": "https://github.com/PhilipMay/stsb-multi-mt", "license": "custom license - see project page", "features": {"sentence1": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2": {"dtype": "string", "id": null, "_type": "Value"}, "similarity_score": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "stsb_multi_mt", "config_name": "pl", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 828433, "num_examples": 5749, "dataset_name": "stsb_multi_mt"}, "test": {"name": "test", "num_bytes": 181266, "num_examples": 1379, "dataset_name": "stsb_multi_mt"}, "dev": {"name": "dev", "num_bytes": 231758, "num_examples": 1500, "dataset_name": "stsb_multi_mt"}}, "download_checksums": {"https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-pl-train.csv": {"num_bytes": 808667, "checksum": "87a9c4443badde50c56e5fe4e198ee6bfd35c01cc82c487a9a97921c2a2e8415"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-pl-dev.csv": {"num_bytes": 227287, "checksum": "0caa1ef060e886a41f6ae05bdead735089ed14e9715e63a47583ba2f13b4a17b"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-pl-test.csv": {"num_bytes": 176382, "checksum": "abea78b1b3c4a39017da96d5074f4d61c1b825590bfb65e50d64216a7c68de59"}}, "download_size": 1212336, "post_processing_size": null, "dataset_size": 1241457, "size_in_bytes": 2453793}, "pt": {"description": "This is a multilingual translation of the STSbenchmark dataset. 
Translation has been done with deepl.com.\n", "citation": "@InProceedings{huggingface:dataset:stsb_multi_mt,\ntitle = {Machine translated multilingual STS benchmark dataset.},\nauthor={Philip May},\nyear={2021},\nurl={https://github.com/PhilipMay/stsb-multi-mt}\n}\n", "homepage": "https://github.com/PhilipMay/stsb-multi-mt", "license": "custom license - see project page", "features": {"sentence1": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2": {"dtype": "string", "id": null, "_type": "Value"}, "similarity_score": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "stsb_multi_mt", "config_name": "pt", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 854356, "num_examples": 5749, "dataset_name": "stsb_multi_mt"}, "test": {"name": "test", "num_bytes": 189163, "num_examples": 1379, "dataset_name": "stsb_multi_mt"}, "dev": {"name": "dev", "num_bytes": 240559, "num_examples": 1500, "dataset_name": "stsb_multi_mt"}}, "download_checksums": {"https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-pt-train.csv": {"num_bytes": 832314, "checksum": "8d769a45cf9435b840b3567dc8e327bee437a6a496eec0a1840d97262e032f60"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-pt-dev.csv": {"num_bytes": 235432, "checksum": "8f063a119e9e4cd8eaa6de42ff125984867c54bfbacef14fce493783bbe6b113"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-pt-test.csv": {"num_bytes": 183762, "checksum": "9ba9018a1919efd24d243112ecbb3c448e2d361eebaa1c9aadf8988e3767e9be"}}, "download_size": 1251508, "post_processing_size": null, "dataset_size": 1284078, "size_in_bytes": 2535586}, "ru": {"description": "This is a multilingual translation of the STSbenchmark dataset. 
Translation has been done with deepl.com.\n", "citation": "@InProceedings{huggingface:dataset:stsb_multi_mt,\ntitle = {Machine translated multilingual STS benchmark dataset.},\nauthor={Philip May},\nyear={2021},\nurl={https://github.com/PhilipMay/stsb-multi-mt}\n}\n", "homepage": "https://github.com/PhilipMay/stsb-multi-mt", "license": "custom license - see project page", "features": {"sentence1": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2": {"dtype": "string", "id": null, "_type": "Value"}, "similarity_score": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "stsb_multi_mt", "config_name": "ru", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1391674, "num_examples": 5749, "dataset_name": "stsb_multi_mt"}, "test": {"name": "test", "num_bytes": 300007, "num_examples": 1379, "dataset_name": "stsb_multi_mt"}, "dev": {"name": "dev", "num_bytes": 386268, "num_examples": 1500, "dataset_name": "stsb_multi_mt"}}, "download_checksums": {"https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-ru-train.csv": {"num_bytes": 1373901, "checksum": "9b2fa623b9a0fa827151a47f4713c5ebc56313c069ff47855e393d505951aa30"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-ru-dev.csv": {"num_bytes": 382231, "checksum": "8e4de0dcb2d87c1a2adc7bfdbe8ce4a7e0a1e10e3deba947e26069490ab66abd"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-ru-test.csv": {"num_bytes": 295513, "checksum": "87a92ee27b26e724c4e4923d744d8198f9eef49698be7b514734118a755c4fce"}}, "download_size": 2051645, "post_processing_size": null, "dataset_size": 2077949, "size_in_bytes": 4129594}, "zh": {"description": "This is a multilingual translation of the STSbenchmark dataset. 
Translation has been done with deepl.com.\n", "citation": "@InProceedings{huggingface:dataset:stsb_multi_mt,\ntitle = {Machine translated multilingual STS benchmark dataset.},\nauthor={Philip May},\nyear={2021},\nurl={https://github.com/PhilipMay/stsb-multi-mt}\n}\n", "homepage": "https://github.com/PhilipMay/stsb-multi-mt", "license": "custom license - see project page", "features": {"sentence1": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2": {"dtype": "string", "id": null, "_type": "Value"}, "similarity_score": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "stsb_multi_mt", "config_name": "zh", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 694424, "num_examples": 5749, "dataset_name": "stsb_multi_mt"}, "test": {"name": "test", "num_bytes": 154834, "num_examples": 1379, "dataset_name": "stsb_multi_mt"}, "dev": {"name": "dev", "num_bytes": 195821, "num_examples": 1500, "dataset_name": "stsb_multi_mt"}}, "download_checksums": {"https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-zh-train.csv": {"num_bytes": 669068, "checksum": "6d44b5faa6c88e76f0c5f39fcd5963622c121b1d106f139a33b5b1df24c20ce2"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-zh-dev.csv": {"num_bytes": 189283, "checksum": "f24a3580938f840d82ee3d520de3af2be355b389850f21d0cc5c228ba3f6cdc9"}, "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data/stsb-zh-test.csv": {"num_bytes": 148541, "checksum": "a583195cb30b9d99ae5fe1d03cd3e544c4c2a1c48d27713fb0686c7bae9fbb19"}}, "download_size": 1006892, "post_processing_size": null, "dataset_size": 1045079, "size_in_bytes": 2051971}}
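The `dataset_infos.json` blob above is plain JSON and can be inspected with the standard library. The sketch below pulls the per-split example counts out of a trimmed, illustrative fragment that mirrors its shape (only the `en` config and the fields needed here are kept):

```python
import json

# Trimmed fragment with the same shape as dataset_infos.json above
# (all other configs and fields omitted for brevity).
infos_json = """
{
  "en": {
    "splits": {
      "train": {"name": "train", "num_examples": 5749},
      "test": {"name": "test", "num_examples": 1379},
      "dev": {"name": "dev", "num_examples": 1500}
    }
  }
}
"""

infos = json.loads(infos_json)
sizes = {split: meta["num_examples"] for split, meta in infos["en"]["splits"].items()}
print(sizes)  # {'train': 5749, 'test': 1379, 'dev': 1500}
```

The full file carries the same structure for every language config, along with checksums and byte sizes for the downloaded CSVs.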
dummy/de/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e481ed2b8647212f5318253eb0e9ba7964116b340fa81ff9a1bc28fcff71ebaa
+ size 1590
dummy/en/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cfde930ddac504c25974c1e5e3910dc527a49f13e2275127f4f8900c4986e88b
+ size 1461
dummy/es/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:822f35f91cddd0a644518a07679867327dde378039455ebdf6620fb8ea9b2e98
+ size 1540
dummy/fr/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2118b914abfffc59d959854d89a2d978b5abffbf39bddf50d214fc561e76f68b
+ size 1569
dummy/it/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ae644488f18cb63b0d5ce44fe6b8ba505d35aead4d54dc546c393fcda30333b2
+ size 1573
dummy/nl/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4bce9bdd2817ca0ea5d9a0cbea396e689ccb288da7c7578aa2e0f179fff022f2
+ size 1478
dummy/pl/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:af3dfb9a41c269c283a90978d3154e74ae3aa7929625cf74cbc949cc424143f7
+ size 1580
dummy/pt/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ca490db1efc81d3b6f4979e2294f1d28ee26522b93ce11432a7a8e3e112314f5
+ size 1550
dummy/ru/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0221f1e2ba621562b02aadfecb5af31955fbb4712f4d8a188a354b80e4ec5d7
+ size 1766
dummy/zh/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cdf8fe491f012f6ca119ebaca152e4aacfaa1f1f24fde7872eaea9f3571da610
+ size 1542
stsb_multi_mt.py ADDED
@@ -0,0 +1,196 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """These are different multilingual translations and the English original of the STSbenchmark dataset."""
+
+
+ import csv
+
+ import datasets
+
+
+ _CITATION = """\
+ @InProceedings{huggingface:dataset:stsb_multi_mt,
+ title = {Machine translated multilingual STS benchmark dataset.},
+ author={Philip May},
+ year={2021},
+ url={https://github.com/PhilipMay/stsb-multi-mt}
+ }
+ """
+
+ _DESCRIPTION = """\
+ These are different multilingual translations and the English original of the STSbenchmark dataset. \
+ Translation has been done with deepl.com.
+ """
+
+ _HOMEPAGE = "https://github.com/PhilipMay/stsb-multi-mt"
+
+ _LICENSE = "custom license - see project page"
+
+ _BASE_URL = "https://raw.githubusercontent.com/PhilipMay/stsb-multi-mt/main/data"
+
+
+ class StsbMultiMt(datasets.GeneratorBasedBuilder):
+     """These are different multilingual translations and the English original of the STSbenchmark dataset.
+
+     Translation has been done with deepl.com.
+     """
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="en",
+             version=VERSION,
+             description="This is the original English STS benchmark data set.",
+         ),
+         datasets.BuilderConfig(
+             name="de",
+             version=VERSION,
+             description="This is the German STS benchmark data set.",
+         ),
+         datasets.BuilderConfig(
+             name="es",
+             version=VERSION,
+             description="This is the Spanish STS benchmark data set.",
+         ),
+         datasets.BuilderConfig(
+             name="fr",
+             version=VERSION,
+             description="This is the French STS benchmark data set.",
+         ),
+         datasets.BuilderConfig(
+             name="it",
+             version=VERSION,
+             description="This is the Italian STS benchmark data set.",
+         ),
+         # here seems to be an issue - see https://github.com/PhilipMay/stsb-multi-mt/issues/1
+         # datasets.BuilderConfig(name="ja", version=VERSION, description="This is the Japanese STS benchmark data set."),
+         datasets.BuilderConfig(
+             name="nl",
+             version=VERSION,
+             description="This is the Dutch STS benchmark data set.",
+         ),
+         datasets.BuilderConfig(
+             name="pl",
+             version=VERSION,
+             description="This is the Polish STS benchmark data set.",
+         ),
+         datasets.BuilderConfig(
+             name="pt",
+             version=VERSION,
+             description="This is the Portuguese STS benchmark data set.",
94
+ ),
95
+ datasets.BuilderConfig(
96
+ name="ru",
97
+ version=VERSION,
98
+ description="This is the Russian STS benchmark data set.",
99
+ ),
100
+ datasets.BuilderConfig(
101
+ name="zh",
102
+ version=VERSION,
103
+ description="This is the Chinese (simplified) STS benchmark data set.",
104
+ ),
105
+ ]
106
+
107
+ def _info(self):
108
+ features = datasets.Features(
109
+ {
110
+ "sentence1": datasets.Value("string"),
111
+ "sentence2": datasets.Value("string"),
112
+ "similarity_score": datasets.Value("float"),
113
+ }
114
+ )
115
+ return datasets.DatasetInfo(
116
+ # This is the description that will appear on the datasets page.
117
+ description=_DESCRIPTION,
118
+ # This defines the different columns of the dataset and their types
119
+ features=features, # Here we define them above because they are different between the two configurations
120
+ # If there's a common (input, target) tuple from the features,
121
+ # specify them here. They'll be used if as_supervised=True in
122
+ # builder.as_dataset.
123
+ supervised_keys=None,
124
+ # Homepage of the dataset for documentation
125
+ homepage=_HOMEPAGE,
126
+ # License for the dataset if available
127
+ license=_LICENSE,
128
+ # Citation for the dataset
129
+ citation=_CITATION,
130
+ )
131
+
132
+ def _split_generators(self, dl_manager):
133
+ """Returns SplitGenerators."""
134
+ urls_to_download = {
135
+ "train": "{}/stsb-{}-train.csv".format(_BASE_URL, self.config.name),
136
+ "dev": "{}/stsb-{}-dev.csv".format(_BASE_URL, self.config.name),
137
+ "test": "{}/stsb-{}-test.csv".format(_BASE_URL, self.config.name),
138
+ }
139
+ downloaded_files = dl_manager.download(urls_to_download)
140
+ return [
141
+ datasets.SplitGenerator(
142
+ name=datasets.Split.TRAIN,
143
+ # These kwargs will be passed to _generate_examples
144
+ gen_kwargs={
145
+ "filepath": downloaded_files["train"],
146
+ },
147
+ ),
148
+ datasets.SplitGenerator(
149
+ name=datasets.Split.TEST,
150
+ # These kwargs will be passed to _generate_examples
151
+ gen_kwargs={
152
+ "filepath": downloaded_files["test"],
153
+ },
154
+ ),
155
+ datasets.SplitGenerator(
156
+ name=datasets.NamedSplit("dev"),
157
+ # These kwargs will be passed to _generate_examples
158
+ gen_kwargs={
159
+ "filepath": downloaded_files["dev"],
160
+ },
161
+ ),
162
+ ]
163
+
164
+ def _generate_examples(
165
+ self,
166
+ filepath, # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
167
+ ):
168
+ """ Yields examples as (key, example) tuples. """
169
+ # This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
170
+ # The `key` is here for legacy reason (tfds) and is not important in itself.
171
+ with open(filepath, newline="", encoding="utf-8") as csvfile:
172
+ csv_dict_reader = csv.DictReader(
173
+ csvfile,
174
+ fieldnames=["sentence1", "sentence2", "similarity_score"],
175
+ dialect="excel",
176
+ )
177
+ for id_, row in enumerate(csv_dict_reader):
178
+ # do asserts
179
+ assert "sentence1" in row
180
+ assert isinstance(row["sentence1"], str)
181
+ assert len(row["sentence1"].strip()) > 0
182
+ assert "sentence2" in row
183
+ assert isinstance(row["sentence2"], str)
184
+ assert len(row["sentence2"].strip()) > 0
185
+ assert "similarity_score" in row
186
+ assert isinstance(row["similarity_score"], str)
187
+ assert len(row["similarity_score"].strip()) > 0
188
+
189
+ # convert similarity_score from str to float
190
+ row["similarity_score"] = float(row["similarity_score"])
191
+
192
+ # do more asserts
193
+ assert row["similarity_score"] >= 0.0
194
+ assert row["similarity_score"] <= 5.0
195
+
196
+ yield id_, row
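
The parsing logic of `_generate_examples` can be tried out in isolation. The sketch below runs the same `csv.DictReader` setup on an in-memory CSV instead of a downloaded split file; the sample row is made up for illustration and is not taken from the real dataset files.

```python
import csv
import io

# Headerless CSV row in the shape the loader expects:
# sentence1,sentence2,similarity_score (score as a string, 0.0-5.0).
sample_csv = io.StringIO("A plane is taking off.,An air plane is taking off.,5.0\r\n")

reader = csv.DictReader(
    sample_csv,
    fieldnames=["sentence1", "sentence2", "similarity_score"],  # files have no header row
    dialect="excel",
)

examples = []
for id_, row in enumerate(reader):
    # mirror the loader: the score arrives as a string and is cast to float
    row["similarity_score"] = float(row["similarity_score"])
    examples.append((id_, row))
```

Because `fieldnames` is passed explicitly, `DictReader` treats the very first line as data rather than as a header, which is why the loader can consume the raw CSV files directly.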