parquet-converter committed
Commit 1fc07fb
Parent: 7dcc7d4

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
 
README.md DELETED
@@ -1,215 +0,0 @@
- ---
- annotations_creators:
- - machine-generated
- language_creators:
- - machine-generated
- language:
- - it
- language_bcp47:
- - it-IT
- license:
- - unknown
- multilinguality:
- - monolingual
- size_categories:
- - unknown
- source_datasets:
- - extended|squad
- task_categories:
- - question-answering
- task_ids:
- - open-domain-qa
- - extractive-qa
- paperswithcode_id: squad-it
- pretty_name: SQuAD-it
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: context
-     dtype: string
-   - name: question
-     dtype: string
-   - name: answers
-     sequence:
-     - name: text
-       dtype: string
-     - name: answer_start
-       dtype: int32
-   splits:
-   - name: train
-     num_bytes: 50864824
-     num_examples: 54159
-   - name: test
-     num_bytes: 7858336
-     num_examples: 7609
-   download_size: 8776531
-   dataset_size: 58723160
- ---
-
- # Dataset Card for "squad_it"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://github.com/crux82/squad-it](https://github.com/crux82/squad-it)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 8.37 MB
- - **Size of the generated dataset:** 56.07 MB
- - **Total amount of disk used:** 64.44 MB
-
- ### Dataset Summary
-
- SQuAD-it is derived from the SQuAD dataset and it is obtained through semi-automatic translation of the SQuAD dataset
- into Italian. It represents a large-scale dataset for open question answering processes on factoid questions in Italian.
- The dataset contains more than 60,000 question/answer pairs derived from the original English dataset. The dataset is
- split into training and test sets to support the replicability of the benchmarking of QA systems:
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### default
-
- - **Size of downloaded dataset files:** 8.37 MB
- - **Size of the generated dataset:** 56.07 MB
- - **Total amount of disk used:** 64.44 MB
-
- An example of 'train' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "answers": "{\"answer_start\": [243, 243, 243, 243, 243], \"text\": [\"evitare di essere presi di mira dal boicottaggio\", \"evitare di essere pres...",
-     "context": "\"La crisi ha avuto un forte impatto sulle relazioni internazionali e ha creato una frattura all' interno della NATO. Alcune nazi...",
-     "id": "5725b5a689a1e219009abd28",
-     "question": "Perchè le nazioni europee e il Giappone si sono separati dagli Stati Uniti durante la crisi?"
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### default
- - `id`: a `string` feature.
- - `context`: a `string` feature.
- - `question`: a `string` feature.
- - `answers`: a dictionary feature containing:
-   - `text`: a `string` feature.
-   - `answer_start`: an `int32` feature.
-
- ### Data Splits
-
- | name | train | test |
- | ------- | ----: | ---: |
- | default | 54159 | 7609 |
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Citation Information
-
- ```
- @InProceedings{10.1007/978-3-030-03840-3_29,
- author="Croce, Danilo and Zelenanska, Alexandra and Basili, Roberto",
- editor="Ghidini, Chiara and Magnini, Bernardo and Passerini, Andrea and Traverso, Paolo",
- title="Neural Learning for Question Answering in Italian",
- booktitle="AI*IA 2018 -- Advances in Artificial Intelligence",
- year="2018",
- publisher="Springer International Publishing",
- address="Cham",
- pages="389--402",
- isbn="978-3-030-03840-3"
- }
-
- ```
-
-
- ### Contributions
-
- Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
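
The deleted card above fixes the schema (`id`, `context`, `question`, nested `answers`) and the split sizes. As a minimal sketch of how that schema is consumed in practice, assuming the dataset remains reachable under the `squad_it` id on the Hub after this conversion:

```python
# Minimal sketch, assuming the canonical "squad_it" dataset id still resolves after the Parquet conversion.
from datasets import load_dataset

squad_it = load_dataset("squad_it")   # DatasetDict with "train" and "test" splits
print(squad_it)                       # expected sizes per the card: 54159 train / 7609 test

example = squad_it["train"][0]        # one record with the fields described above
print(example["question"])
print(example["answers"]["text"], example["answers"]["answer_start"])  # parallel lists
```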
 
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "SQuAD-it is derived from the SQuAD dataset and it is obtained through semi-automatic translation of the SQuAD dataset\ninto Italian. It represents a large-scale dataset for open question answering processes on factoid questions in Italian.\n The dataset contains more than 60,000 question/answer pairs derived from the original English dataset. The dataset is\n split into training and test sets to support the replicability of the benchmarking of QA systems:\n", "citation": "@InProceedings{10.1007/978-3-030-03840-3_29,\n author={Croce, Danilo and Zelenanska, Alexandra and Basili, Roberto},\n editor={Ghidini, Chiara and Magnini, Bernardo and Passerini, Andrea and Traverso, Paolo\",\n title={Neural Learning for Question Answering in Italian},\n booktitle={AI*IA 2018 -- Advances in Artificial Intelligence},\n year={2018},\n publisher={Springer International Publishing},\n address={Cham},\n pages={389--402},\n isbn={978-3-030-03840-3}\n}\n", "homepage": "https://github.com/crux82/squad-it", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "answer_start": {"dtype": "int32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": [{"task": "question-answering-extractive", "question_column": "question", "context_column": "context", "answers_column": "answers"}], "builder_name": "squad_it", "config_name": "default", "version": {"version_str": "0.1.0", "description": null, "major": 0, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 50864824, "num_examples": 54159, "dataset_name": "squad_it"}, "test": {"name": "test", "num_bytes": 7858336, "num_examples": 7609, "dataset_name": "squad_it"}}, "download_checksums": {"https://github.com/crux82/squad-it/raw/master/SQuAD_it-train.json.gz": {"num_bytes": 7725286, "checksum": "75d4d2832961f7a0f76a43d7e919e56a880ccc55de434ec90ae82cd67bec5d25"}, "https://github.com/crux82/squad-it/raw/master/SQuAD_it-test.json.gz": {"num_bytes": 1051245, "checksum": "25986c617cc7d58e82e916755b8a5684e5efae69835332858a6534a304cd293c"}}, "download_size": 8776531, "post_processing_size": null, "dataset_size": 58723160, "size_in_bytes": 67499691}}
 
 
default/squad_it-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a51f7ac2dcb6da735c22ade26d6331a5c639e4b9e19d42c2e7e8b37d2d489fcb
+ size 1624270
default/squad_it-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:916a74dd1fa22162d91f521a190f9f9b637307bd0a79cbe96b471d12235a7a75
+ size 12172540
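
Both added entries are Git LFS pointer files; the actual Parquet data lives in LFS storage (about 1.6 MB for test and 12.2 MB for train, per the `size` fields). A hedged sketch of reading the materialized files directly, assuming a local checkout where `git lfs pull` has replaced the pointers with the real Parquet files:

```python
# Sketch only: paths assume the repository layout shown above and a completed `git lfs pull`.
import pandas as pd  # read_parquet needs pyarrow (or fastparquet) installed

train_df = pd.read_parquet("default/squad_it-train.parquet")
test_df = pd.read_parquet("default/squad_it-test.parquet")

print(train_df.shape, test_df.shape)  # expected: (54159, 4) and (7609, 4)
print(train_df.columns.tolist())      # ['id', 'context', 'question', 'answers']
```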
squad_it.py DELETED
@@ -1,116 +0,0 @@
- """TODO(squad_it): Add a description here."""
-
-
- import json
-
- import datasets
- from datasets.tasks import QuestionAnsweringExtractive
-
-
- # TODO(squad_it): BibTeX citation
- _CITATION = """\
- @InProceedings{10.1007/978-3-030-03840-3_29,
- author={Croce, Danilo and Zelenanska, Alexandra and Basili, Roberto},
- editor={Ghidini, Chiara and Magnini, Bernardo and Passerini, Andrea and Traverso, Paolo},
- title={Neural Learning for Question Answering in Italian},
- booktitle={AI*IA 2018 -- Advances in Artificial Intelligence},
- year={2018},
- publisher={Springer International Publishing},
- address={Cham},
- pages={389--402},
- isbn={978-3-030-03840-3}
- }
- """
-
- # TODO(squad_it):
- _DESCRIPTION = """\
- SQuAD-it is derived from the SQuAD dataset and it is obtained through semi-automatic translation of the SQuAD dataset
- into Italian. It represents a large-scale dataset for open question answering processes on factoid questions in Italian.
- The dataset contains more than 60,000 question/answer pairs derived from the original English dataset. The dataset is
- split into training and test sets to support the replicability of the benchmarking of QA systems:
- """
-
- _URL = "https://github.com/crux82/squad-it/raw/master/"
- _URLS = {
-     "train": _URL + "SQuAD_it-train.json.gz",
-     "test": _URL + "SQuAD_it-test.json.gz",
- }
-
-
- class SquadIt(datasets.GeneratorBasedBuilder):
-     """TODO(squad_it): Short description of my dataset."""
-
-     # TODO(squad_it): Set up version.
-     VERSION = datasets.Version("0.1.0")
-
-     def _info(self):
-         # TODO(squad_it): Specifies the datasets.DatasetInfo object
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # datasets.features.FeatureConnectors
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "context": datasets.Value("string"),
-                     "question": datasets.Value("string"),
-                     "answers": datasets.features.Sequence(
-                         {
-                             "text": datasets.Value("string"),
-                             "answer_start": datasets.Value("int32"),
-                         }
-                     ),
-                     # These are the features of your dataset like images, labels ...
-                 }
-             ),
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage="https://github.com/crux82/squad-it",
-             citation=_CITATION,
-             task_templates=[
-                 QuestionAnsweringExtractive(
-                     question_column="question", context_column="context", answers_column="answers"
-                 )
-             ],
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # TODO(squad_it): Downloads the data and defines the splits
-         # dl_manager is a datasets.download.DownloadManager that can be used to
-         # download and extract URLs
-         urls_to_download = _URLS
-         downloaded_files = dl_manager.download_and_extract(urls_to_download)
-
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
-             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
-         ]
-
-     def _generate_examples(self, filepath):
-         """Yields examples."""
-         # TODO(squad_it): Yields (key, example) tuples from the dataset
-         with open(filepath, encoding="utf-8") as f:
-             data = json.load(f)
-             for example in data["data"]:
-                 for paragraph in example["paragraphs"]:
-                     context = paragraph["context"].strip()
-                     for qa in paragraph["qas"]:
-                         question = qa["question"].strip()
-                         id_ = qa["id"]
-
-                         answer_starts = [answer["answer_start"] for answer in qa["answers"]]
-                         answers = [answer["text"].strip() for answer in qa["answers"]]
-
-                         yield id_, {
-                             "context": context,
-                             "question": question,
-                             "id": id_,
-                             "answers": {
-                                 "answer_start": answer_starts,
-                                 "text": answers,
-                             },
-                         }
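
For reference, the deleted `_generate_examples` method above flattens the nested SQuAD layout (`data` / `paragraphs` / `qas`) into one record per question. A standalone sketch of the same traversal, assuming one of the source files from the homepage (for example `SQuAD_it-train.json.gz`) has been downloaded locally:

```python
# Standalone sketch of the flattening done by _generate_examples above.
# The local path is an assumption; the source files come from https://github.com/crux82/squad-it.
import gzip
import json


def iter_squad_it(path):
    """Yield (id, record) pairs from a SQuAD-format JSON file, optionally gzip-compressed."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt", encoding="utf-8") as f:
        data = json.load(f)
    for article in data["data"]:
        for paragraph in article["paragraphs"]:
            context = paragraph["context"].strip()
            for qa in paragraph["qas"]:
                yield qa["id"], {
                    "id": qa["id"],
                    "context": context,
                    "question": qa["question"].strip(),
                    "answers": {
                        "answer_start": [a["answer_start"] for a in qa["answers"]],
                        "text": [a["text"].strip() for a in qa["answers"]],
                    },
                }


# Example usage (path is hypothetical):
for key, record in iter_squad_it("SQuAD_it-train.json.gz"):
    print(key, record["question"])
    break
```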