parquet-converter committed on
Commit 79dc61d
1 Parent(s): 839230b

Update parquet files

.gitattributes DELETED
@@ -1,30 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- dev_wiki.json filter=lfs diff=lfs merge=lfs -text
- test_wiki.json filter=lfs diff=lfs merge=lfs -text
- train_wiki.json filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,126 +0,0 @@
- ---
- language_creators:
- - found
- language:
- - en
- license:
- - mit
- multilinguality:
- - monolingual
- size_categories:
- - 100K<n<1M
- source_datasets:
- - extended|wikipedia
- task_categories:
- - fill-mask
- - other
- - text-generation
- task_ids:
- - language-modeling
- - masked-language-modeling
- pretty_name: Wiki-Convert
- YAML tags:
- - {}
- - found
- language_bcp47:
- - en-US
- tags:
- - numeracy
- - natural-language-understanding
- - tokenization
- ---
-
- # Dataset Card Creation Guide
-
- ## Table of Contents
- - [Dataset Card Creation Guide](#dataset-card-creation-guide)
-   - [Table of Contents](#table-of-contents)
-   - [Dataset Description](#dataset-description)
-     - [Dataset Summary](#dataset-summary)
-     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-     - [Languages](#languages)
-   - [Dataset Structure](#dataset-structure)
-     - [Data Instances](#data-instances)
-     - [Data Fields](#data-fields)
-     - [Data Splits](#data-splits)
-   - [Dataset Creation](#dataset-creation)
-     - [Curation Rationale](#curation-rationale)
-     - [Source Data](#source-data)
-       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
-       - [Who are the source language producers?](#who-are-the-source-language-producers)
-     - [Annotations](#annotations)
-       - [Annotation process](#annotation-process)
-       - [Who are the annotators?](#who-are-the-annotators)
-     - [Personal and Sensitive Information](#personal-and-sensitive-information)
-   - [Considerations for Using the Data](#considerations-for-using-the-data)
-     - [Social Impact of Dataset](#social-impact-of-dataset)
-     - [Discussion of Biases](#discussion-of-biases)
-     - [Other Known Limitations](#other-known-limitations)
-   - [Additional Information](#additional-information)
-     - [Dataset Curators](#dataset-curators)
-     - [Licensing Information](#licensing-information)
-     - [Citation Information](#citation-information)
-     - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Repository:** [Github](https://github.com/avi-jit/numeracy-literacy)
- - **Paper:** [Anthology](https://aclanthology.org/2021.emnlp-main.557)
- - **Point of Contact:** [Avijit Thawani](mailto:thawani@isi.edu)
-
- ### Dataset Summary
-
- Wiki-Convert is a dataset of 900,000+ sentences from English Wikipedia with precise number annotations. It relies on Wiki contributors' annotations in the form of the [{{Convert}}](https://en.wikipedia.org/wiki/Template:Convert) template.
-
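The dataset can be loaded with the `datasets` library; a minimal sketch, assuming the `usc-isi/WikiConvert` dataset id used by the loader script in this repository (recent `datasets` versions may additionally require `trust_remote_code=True` for script-based datasets):

```python
# Minimal loading sketch (the usc-isi/WikiConvert id is taken from the
# loader script in this repo; "plain_text" is its only config).
from datasets import load_dataset

wiki_convert = load_dataset("usc-isi/WikiConvert", "plain_text")
print(wiki_convert)              # DatasetDict with train/validation/test splits
print(wiki_convert["train"][0])  # one sentence with its number annotation
```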
75
- ### Supported Tasks and Leaderboards
-
- - `sequence-modeling`: The dataset can be used to train a model for language modeling, which consists in predicting a held-out or next token from its context. Success on this task is typically measured by achieving a low [perplexity](https://huggingface.co/transformers/perplexity.html), as sketched below.
-
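As a rough illustration of the metric (not the paper's exact evaluation setup), perplexity is the exponential of the mean token-level negative log-likelihood; a sketch with an off-the-shelf GPT-2 checkpoint, assuming `torch` and `transformers` are installed:

```python
# Sketch: sentence perplexity under GPT-2 (illustrative only).
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

enc = tokenizer("UB-117 carried 10 torpedoes.", return_tensors="pt")
with torch.no_grad():
    # Passing labels=input_ids makes the model return the mean cross-entropy.
    loss = model(**enc, labels=enc["input_ids"]).loss
print(math.exp(loss.item()))  # perplexity = exp(mean negative log-likelihood)
```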
79
- ### Languages
-
- The dataset is extracted from English Wikipedia and hence overwhelmingly contains English text.
-
- ## Dataset Structure
-
- ### Data Instances
-
- Each row in the JSON file contains metadata about the source Wikipedia sentence, along with annotations for a single number, e.g., `number: 10` in the example below. The annotations are inspired by Numeracy-600K and take the form of a `length` and an `offset` from the beginning of the sentence; the sketch after the example shows how to use them.
-
- ```
- {
-     'id': 1080801, 'UNIQUE_STORY_INDEX': '1080801', 'offset': 83, 'length': 2, 'magnitude': 0, 'comment': "Like all Type UB III submarines, UB-117 carried 10 torpedoes and was armed with a  10 cms deck gun. ''", 'number': 10
- }
- ```
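Since `offset` and `length` index into `comment`, the annotated number can be recovered by slicing; a small sketch over the instance above:

```python
# Sketch: recover the annotated number via comment[offset:offset+length].
row = {
    "offset": 83,
    "length": 2,
    "comment": "Like all Type UB III submarines, UB-117 carried 10 torpedoes "
               "and was armed with a  10 cms deck gun. ''",
    "number": 10,
}
span = row["comment"][row["offset"] : row["offset"] + row["length"]]
assert span == str(row["number"])  # "10" -- the second occurrence in the text
```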
94
-
- Please refer to https://github.com/avi-jit/numeracy-literacy for more details.
-
- ### Data Splits
-
- |                 |  Train  |  Dev   |  Test  |
- | --------------- | :-----: | :----: | :----: |
- | Input Sentences | 739,583 | 92,447 | 92,449 |
-
- ## License
-
- Provided under the MIT License.
-
- ## Citation
-
- ```
- @inproceedings{thawani-etal-2021-numeracy,
-     title = "Numeracy enhances the Literacy of Language Models",
-     author = "Thawani, Avijit and
-       Pujara, Jay and
-       Ilievski, Filip",
-     booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
-     month = nov,
-     year = "2021",
-     address = "Online and Punta Cana, Dominican Republic",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2021.emnlp-main.557",
-     pages = "6960--6967",
-     abstract = "Specialized number representations in NLP have shown improvements on numerical reasoning tasks like arithmetic word problems and masked number prediction. But humans also use numeracy to make better sense of world concepts, e.g., you can seat 5 people in your {`}room{'} but not 500. Does a better grasp of numbers improve a model{'}s understanding of other concepts and words? This paper studies the effect of using six different number encoders on the task of masked word prediction (MWP), as a proxy for evaluating literacy. To support this investigation, we develop Wiki-Convert, a 900,000 sentence dataset annotated with numbers and units, to avoid conflating nominal and ordinal number occurrences. We find a significant improvement in MWP for sentences containing numbers, that exponent embeddings are the best number encoders, yielding over 2 points jump in prediction accuracy over a BERT baseline, and that these enhanced literacy skills also generalize to contexts without annotated numbers. We release all code at https://git.io/JuZXn.",
- }
- ```
-
- Thanks to [@avi-jit](https://github.com/avi-jit) for adding this dataset.
WikiConvert.py DELETED
@@ -1,134 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- """Wiki-Convert: Language Modelling with Cardinal Number Annotations."""
-
-
- import json
- import sys
-
- import datasets
-
-
- logger = datasets.logging.get_logger(__name__)
-
-
- _CITATION = """\
- @inproceedings{thawani-etal-2021-numeracy,
-     title = "Numeracy enhances the Literacy of Language Models",
-     author = "Thawani, Avijit and
-       Pujara, Jay and
-       Ilievski, Filip",
-     booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
-     month = nov,
-     year = "2021",
-     address = "Online and Punta Cana, Dominican Republic",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2021.emnlp-main.557",
-     pages = "6960--6967",
-     abstract = "Specialized number representations in NLP have shown improvements on numerical reasoning tasks like arithmetic word problems and masked number prediction. But humans also use numeracy to make better sense of world concepts, e.g., you can seat 5 people in your {`}room{'} but not 500. Does a better grasp of numbers improve a model{'}s understanding of other concepts and words? This paper studies the effect of using six different number encoders on the task of masked word prediction (MWP), as a proxy for evaluating literacy. To support this investigation, we develop Wiki-Convert, a 900,000 sentence dataset annotated with numbers and units, to avoid conflating nominal and ordinal number occurrences. We find a significant improvement in MWP for sentences containing numbers, that exponent embeddings are the best number encoders, yielding over 2 points jump in prediction accuracy over a BERT baseline, and that these enhanced literacy skills also generalize to contexts without annotated numbers. We release all code at https://git.io/JuZXn.",
- }
- """
-
- _DESCRIPTION = """\
- Language Modelling with Cardinal Number Annotations.
- """
-
- _URL = "https://huggingface.co/datasets/usc-isi/WikiConvert/resolve/main/"
- _URLS = {
-     "train": _URL + "train_wiki.json",
-     "dev": _URL + "dev_wiki.json",
-     "test": _URL + "test_wiki.json",
- }
-
-
- class WikiConvertConfig(datasets.BuilderConfig):
-     """BuilderConfig for WikiConvert."""
-
-     def __init__(self, **kwargs):
-         """BuilderConfig for WikiConvert.
-
-         Args:
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(WikiConvertConfig, self).__init__(**kwargs)
-
-
- class WikiConvert(datasets.GeneratorBasedBuilder):
-     """WikiConvert: Language Modelling with Cardinal Number Annotations. Version 1.1."""
-
-     BUILDER_CONFIGS = [
-         WikiConvertConfig(
-             name="plain_text",
-             version=datasets.Version("1.0.0", ""),
-             description="Plain text",
-         ),
-     ]
-
-     def _info(self):
-         # Example row: {"id": 1336448, "UNIQUE_STORY_INDEX": "1336448", "offset": 24,
-         # "length": 1, "magnitude": 0, "comment": "The floral cup is about 2 mm long
-         # and covered with silky white hairs.", "number": 2}
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("int32"),
-                     "UNIQUE_STORY_INDEX": datasets.Value("string"),
-                     "offset": datasets.Value("int32"),
-                     "length": datasets.Value("int32"),
-                     "magnitude": datasets.Value("int32"),
-                     "comment": datasets.Value("string"),
-                     "number": datasets.Value("int64"),
-                 }
-             ),
-             # No default supervised_keys; use offset and length to locate the annotated number.
-             supervised_keys=None,
-             homepage="https://github.com/avi-jit/numeracy-literacy/",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         downloaded_files = dl_manager.download_and_extract(_URLS)
-
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
-             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
-             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
-         ]
-
-     def _generate_examples(self, filepath):
-         """Yield the examples in their raw (text) form."""
-         logger.info("generating examples from = %s", filepath)
-         key = 0
-         with open(filepath, encoding="utf-8") as f:
-             # Each split is a single JSON array of annotated rows.
-             ds = json.load(f)
-         for row in ds:
-             yield key, {
-                 "id": row["id"],
-                 "UNIQUE_STORY_INDEX": row["UNIQUE_STORY_INDEX"],
-                 "offset": row["offset"],
-                 "length": row["length"],
-                 "magnitude": row["magnitude"],
-                 "comment": row["comment"],
-                 # Clamp to the platform's max int so the value fits the int64 feature.
-                 "number": min(sys.maxsize, row["number"]),
-             }
-             key += 1
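A note on the `min(sys.maxsize, row["number"])` clamp in `_generate_examples`: the `number` feature is declared `int64`, and on a typical 64-bit CPython build `sys.maxsize` equals 2**63 - 1, so arbitrarily large Wikipedia numbers are capped at the largest representable int64. A tiny sketch (the 64-bit build is an assumption):

```python
# Sketch: the clamp keeps out-of-range values within int64.
import sys

huge = 10**30                  # e.g. a number too large for int64
print(sys.maxsize)             # 9223372036854775807 on 64-bit CPython
print(min(sys.maxsize, huge))  # clamped to sys.maxsize
```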
test_wiki.json → plain_text/wiki_convert-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8a34d3fc96525cee13b4af524e3738e03f6106c418893550219c22f50c445bfa
- size 21514898
+ oid sha256:da5fab49325f61c124a202a2637f6557df0cfb646325ef5beab6818b08eb3cd6
+ size 7562915
dev_wiki.json → plain_text/wiki_convert-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:575c39ca5b3fd1324a73e313ee598cf51152e930cd7a691ecbfbb71bd70add30
- size 21920689
+ oid sha256:f09a1991694fedaab74fcb89b239da972a31f71084f377dd39879d109abfd97f
+ size 67569824
train_wiki.json → plain_text/wiki_convert-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:04f12fa35ab84166d6c008f75674e75b864bc4d5ff976ff697b3307726b77c95
- size 177344688
+ oid sha256:31ee3237d67e0737446ca92a43c4b3ec51a82ecb40a30d795b0c63a86ac5b781
+ size 8039239
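With the splits converted to Parquet, each file can also be read directly, bypassing the loader script; a sketch assuming the standard Hugging Face `resolve/main` URL layout for the renamed paths above, plus `pandas` with an fsspec HTTP backend installed:

```python
# Sketch: read one converted split straight from the Hub as Parquet.
# The URL layout is an assumption based on the renamed paths in this commit.
import pandas as pd

url = (
    "https://huggingface.co/datasets/usc-isi/WikiConvert/"
    "resolve/main/plain_text/wiki_convert-test.parquet"
)
df = pd.read_parquet(url)
print(df.columns.tolist())  # id, UNIQUE_STORY_INDEX, offset, length, ...
print(len(df))
```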