Commit 66cdf1b (0 parents), committed by HF staff user "system"

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,211 @@
+ ---
+ annotations_creators:
+ - machine-generated
+ language_creators:
+ - crowdsourced
+ languages:
+ - en
+ licenses:
+ - other-university-of-washington-academic
+ multilinguality:
+ - monolingual
+ size_categories:
+ - n>1M
+ source_datasets:
+ - original
+ task_categories:
+ - other
+ task_ids:
+ - other-stuctured-to-text
+ - other-other-relation-extraction
+ ---
+
+ # Dataset Card for Ollie
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [Ollie](https://knowitall.github.io/ollie/)
+ - **Repository:** [Github](https://github.com/knowitall/ollie)
+ - **Paper:** [Aclweb](https://www.aclweb.org/anthology/D12-1048/)
+
+ ### Dataset Summary
+
+ The Ollie dataset includes two configs for the data
+ used to train the Ollie information extraction algorithm, for 18M
+ sentences and 3M sentences respectively.
+
+ This data is for academic use only. From the authors:
+
+ Ollie is a program that automatically identifies and extracts binary
+ relationships from English sentences. Ollie is designed for Web-scale
+ information extraction, where target relations are not specified in
+ advance.
+
+ Ollie is our second-generation information extraction system. Whereas
+ ReVerb operates on flat sequences of tokens, Ollie works with the
+ tree-like (graph with only small cycles) representation using
+ Stanford's compression of the dependencies. This allows Ollie to
+ capture expressions that ReVerb misses, such as long-range relations.
+
+ Ollie also captures context that modifies a binary relation. Presently
+ Ollie handles attribution (He said/she believes) and enabling
+ conditions (if X then).
+
+ More information is available at the Ollie homepage:
+ https://knowitall.github.io/ollie/
+
+ ### Languages
+ en
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ There are two configurations for the dataset: ollie_lemmagrep, which
+ contains 18M sentences from web searches for a subset of the ReVerb
+ relationships (110,000 relationships), and ollie_patterned, a
+ 3M-sentence subset of ollie_lemmagrep derived from patterns according
+ to the Ollie paper.
+
+ An example of an ollie_lemmagrep record:
+
+ ```
+ {'arg1': 'adobe reader',
+ 'arg2': 'pdf',
+ 'chunk': 'B-NP I-NP I-NP I-NP B-PP B-NP I-NP B-VP B-PP B-NP I-NP O B-VP B-NP I-NP I-NP I-NP B-VP I-VP I-VP O',
+ 'pos': 'JJ NNS CC NNS IN PRP$ NN VBP IN NNP NN CC VB DT NNP NNP NNP TO VB VBN .',
+ 'rel': 'be require to view',
+ 'search_query': 'require reader pdf adobe view',
+ 'sentence': 'Many documents and reports on our site are in PDF format and require the Adobe Acrobat Reader to be viewed .',
+ 'sentence_cnt': '9',
+ 'words': 'many,document,and,report,on,our,site,be,in,pdf,format,and,require,the,adobe,acrobat,reader,to,be,view'}
+ ```
+
+ An example of an ollie_patterned record:
+
+ ```
+ {'arg1': 'english',
+ 'arg2': 'internet',
+ 'parse': '(in_IN_6), advmod(important_JJ_4, most_RBS_3); nsubj(language_NN_5, English_NNP_0); cop(language_NN_5, being_VBG_1); det(language_NN_5, the_DT_2); amod(language_NN_5, important_JJ_4); prep_in(language_NN_5, era_NN_9); punct(language_NN_5, ,_,_10); conj(language_NN_5, education_NN_12); det(era_NN_9, the_DT_7); nn(era_NN_9, Internet_NNP_8); amod(education_NN_12, English_JJ_11); nsubjpass(enriched_VBN_15, language_NN_5); aux(enriched_VBN_15, should_MD_13); auxpass(enriched_VBN_15, be_VB_14); punct(enriched_VBN_15, ._._16)',
+ 'pattern': '{arg1} <nsubj< {rel:NN} >prep_in> {slot0:NN} >nn> {arg2}',
+ 'rel': 'be language of',
+ 'search_query': 'english language internet',
+ 'sentence': 'English being the most important language in the Internet era , English education should be enriched .',
+ 'slot0': 'era'}
+ ```
+
+
+ ### Data Fields
+
+ For ollie_lemmagrep:
+ * rel: the relationship phrase/verb phrase. This may be empty, which represents the "be" relationship.
+ * arg1: the first argument in the relationship
+ * arg2: the second argument in the relationship
+ * chunk: a tag for each token in the sentence, showing the POS chunks
+ * pos: the part-of-speech tags of the sentence
+ * sentence: the sentence
+ * sentence_cnt: the number of copies of this sentence encountered
+ * search_query: a combination of rel, arg1, arg2
+ * words: the lemmas of the words of the sentence, separated by commas
+
+ For ollie_patterned:
+ * rel: the relationship phrase/verb phrase
+ * arg1: the first argument in the relationship
+ * arg2: the second argument in the relationship
+ * slot0: the third argument in the relationship, which might be empty
+ * pattern: a parse pattern for the relationship
+ * parse: a dependency parse for the sentence
+ * search_query: a combination of rel, arg1, arg2
+ * sentence: the sentence
+
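The tab-separated column layout behind these fields follows the repository's loading script (arg1, arg2, search_query, sentence, words, pos, chunk, sentence_cnt for 8-column rows, with rel prepended when a relation phrase is present). A minimal sketch of parsing one raw lemmagrep line into a record, using made-up sample values adapted from the example above:

```python
# Field order for 8-column rows (no explicit relation phrase),
# mirroring the repository's loading script.
FIELDS_NO_REL = ["arg1", "arg2", "search_query", "sentence", "words", "pos", "chunk", "sentence_cnt"]


def parse_lemmagrep_line(line: str) -> dict:
    """Parse one tab-separated lemmagrep line into a field dict."""
    cols = [c.strip() for c in line.rstrip("\n").split("\t")]
    if len(cols) == 8:
        # No explicit relation phrase: this encodes the "be" relationship.
        record = dict(zip(FIELDS_NO_REL, cols))
        record["rel"] = ""
    else:
        # 9-column rows carry the relation phrase in the first column.
        record = dict(zip(["rel"] + FIELDS_NO_REL, cols))
    return record


# Illustrative 9-column line (values shortened from the record example above).
sample = "\t".join([
    "be require to view", "adobe reader", "pdf",
    "require reader pdf adobe view",
    "Many documents on our site require the Adobe Acrobat Reader to be viewed .",
    "many,document,on,our,site,require,the,adobe,acrobat,reader,to,be,view",
    "JJ NNS IN PRP$ NN VBP DT NNP NNP NNP TO VB VBN .",
    "B-NP I-NP B-PP B-NP I-NP B-VP B-NP I-NP I-NP I-NP B-VP I-VP I-VP O",
    "9",
])
record = parse_lemmagrep_line(sample)
print(record["rel"], "|", record["arg1"], "|", record["arg2"])
```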
+ ### Data Splits
+
+ There are no splits.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ This dataset was created as part of research on open information extraction.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ See the research paper on Ollie. The training data is extracted from web pages (ClueWeb09).
+
+ #### Who are the source language producers?
+
+ The Ollie authors at the University of Washington, and data from ClueWeb09 and the open web.
+
+ ### Annotations
+
+ #### Annotation process
+
+ The various parsers and code from the Ollie algorithm.
+
+ #### Who are the annotators?
+
+ Machine annotated.
+
+ ### Personal and Sensitive Information
+
+ Unknown, but the data likely includes the names of famous individuals.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ The goal of this work is to help machines learn to extract information from open domains.
+
+ ### Discussion of Biases
+
+ Since the data is gathered from the web, it is likely to contain biased text and relationships.
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The authors of Ollie at the University of Washington.
+
+ ### Licensing Information
+
+ The University of Washington academic license: https://raw.githubusercontent.com/knowitall/ollie/master/LICENSE
+
+
+ ### Citation Information
+ @inproceedings{ollie-emnlp12,
+ author = {Mausam and Michael Schmitz and Robert Bart and Stephen Soderland and Oren Etzioni},
+ title = {Open Language Learning for Information Extraction},
+ booktitle = {Proceedings of Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CONLL)},
+ year = {2012}
+ }
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"ollie_lemmagrep": {"description": "The Ollie dataset includes two configs for the data\nused to train the Ollie information extraction algorithm, for 18M\nsentences and 3M sentences respectively. \n\nThis data is for academic use only. From the authors:\n\nOllie is a program that automatically identifies and extracts binary\nrelationships from English sentences. Ollie is designed for Web-scale\ninformation extraction, where target relations are not specified in\nadvance.\n\nOllie is our second-generation information extraction system. Whereas\nReVerb operates on flat sequences of tokens, Ollie works with the\ntree-like (graph with only small cycles) representation using\nStanford's compression of the dependencies. This allows Ollie to\ncapture expressions that ReVerb misses, such as long-range relations.\n\nOllie also captures context that modifies a binary relation. Presently\nOllie handles attribution (He said/she believes) and enabling\nconditions (if X then).\n\nMore information is available at the Ollie homepage:\nhttps://knowitall.github.io/ollie/\n", "citation": "@inproceedings{ollie-emnlp12,\n author = {Mausam and Michael Schmitz and Robert Bart and Stephen Soderland and Oren Etzioni},\n title = {Open Language Learning for Information Extraction},\n booktitle = {Proceedings of Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CONLL)},\n year = {2012}\n}", "homepage": "https://knowitall.github.io/ollie/", "license": "The University of Washington academic license:\nhttps://raw.githubusercontent.com/knowitall/ollie/master/LICENSE\n", "features": {"arg1": {"dtype": "string", "id": null, "_type": "Value"}, "arg2": {"dtype": "string", "id": null, "_type": "Value"}, "rel": {"dtype": "string", "id": null, "_type": "Value"}, "search_query": {"dtype": "string", "id": null, "_type": "Value"}, "sentence": {"dtype": "string", "id": null, "_type": "Value"}, "words": {"dtype": "string", "id": null, "_type": "Value"}, "pos": {"dtype": "string", "id": null, "_type": "Value"}, "chunk": {"dtype": "string", "id": null, "_type": "Value"}, "sentence_cnt": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "ollie", "config_name": "ollie_lemmagrep", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 12324648919, "num_examples": 18674630, "dataset_name": "ollie"}}, "download_checksums": {"http://knowitall.cs.washington.edu/ollie/data/lemmagrep.txt.bz2": {"num_bytes": 1789363108, "checksum": "76ed3141fd95597c889eea1c05eb655914e76c72746893b856a00f2a422cbbab"}}, "download_size": 1789363108, "post_processing_size": null, "dataset_size": 12324648919, "size_in_bytes": 14114012027}, "ollie_patterned": {"description": "The Ollie dataset includes two configs for the data\nused to train the Ollie information extraction algorithm, for 18M\nsentences and 3M sentences respectively. \n\nThis data is for academic use only. From the authors:\n\nOllie is a program that automatically identifies and extracts binary\nrelationships from English sentences. Ollie is designed for Web-scale\ninformation extraction, where target relations are not specified in\nadvance.\n\nOllie is our second-generation information extraction system. Whereas\nReVerb operates on flat sequences of tokens, Ollie works with the\ntree-like (graph with only small cycles) representation using\nStanford's compression of the dependencies. This allows Ollie to\ncapture expressions that ReVerb misses, such as long-range relations.\n\nOllie also captures context that modifies a binary relation. Presently\nOllie handles attribution (He said/she believes) and enabling\nconditions (if X then).\n\nMore information is available at the Ollie homepage:\nhttps://knowitall.github.io/ollie/\n", "citation": "@inproceedings{ollie-emnlp12,\n author = {Mausam and Michael Schmitz and Robert Bart and Stephen Soderland and Oren Etzioni},\n title = {Open Language Learning for Information Extraction},\n booktitle = {Proceedings of Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CONLL)},\n year = {2012}\n}", "homepage": "https://knowitall.github.io/ollie/", "license": "The University of Washington academic license:\nhttps://raw.githubusercontent.com/knowitall/ollie/master/LICENSE\n", "features": {"rel": {"dtype": "string", "id": null, "_type": "Value"}, "arg1": {"dtype": "string", "id": null, "_type": "Value"}, "arg2": {"dtype": "string", "id": null, "_type": "Value"}, "slot0": {"dtype": "string", "id": null, "_type": "Value"}, "search_query": {"dtype": "string", "id": null, "_type": "Value"}, "pattern": {"dtype": "string", "id": null, "_type": "Value"}, "sentence": {"dtype": "string", "id": null, "_type": "Value"}, "parse": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "ollie", "config_name": "ollie_patterned", "version": "1.1.0", "splits": {"train": {"name": "train", "num_bytes": 2930309084, "num_examples": 3048961, "dataset_name": "ollie"}}, "download_checksums": {"http://knowitall.cs.washington.edu/ollie/data/patterned-all.txt.bz2": {"num_bytes": 387514061, "checksum": "a99e0907ff4c20f4a02a1a86453097affa73d6ab4160441c9b7203d756348f0d"}}, "download_size": 387514061, "post_processing_size": null, "dataset_size": 2930309084, "size_in_bytes": 3317823145}}
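The metadata above records the per-config split sizes and download sizes. A small sketch of reading those numbers out of a dataset_infos.json-style dict; the excerpt below embeds just the relevant keys (values taken from the file above), whereas in a checkout you would `json.load` the file itself:

```python
import json

# Minimal excerpt of dataset_infos.json, keeping only split and download sizes.
infos = json.loads("""
{"ollie_lemmagrep": {"splits": {"train": {"num_examples": 18674630}}, "download_size": 1789363108},
 "ollie_patterned": {"splits": {"train": {"num_examples": 3048961}}, "download_size": 387514061}}
""")

for name, info in infos.items():
    n = info["splits"]["train"]["num_examples"]
    gb = info["download_size"] / 1e9
    print(f"{name}: {n:,} train examples, {gb:.2f} GB download")
```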
dummy/ollie_lemmagrep/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fccecb3e1e445fa0f92decf51ad5bea61e09ceefb214ecfa35d679b7c0c08adb
+ size 1377
dummy/ollie_patterned/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:40e86e5493788240bf91a1f3a2b78aa4d135cd824a0b94ae88e64ae34ca3eff8
+ size 2123
ollie.py ADDED
@@ -0,0 +1,192 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Ollie"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import bz2
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{ollie-emnlp12,
+   author = {Mausam and Michael Schmitz and Robert Bart and Stephen Soderland and Oren Etzioni},
+   title = {Open Language Learning for Information Extraction},
+   booktitle = {Proceedings of Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CONLL)},
+   year = {2012}
+ }"""
+
+
+ _DESCRIPTION = """The Ollie dataset includes two configs for the data
+ used to train the Ollie information extraction algorithm, for 18M
+ sentences and 3M sentences respectively.
+
+ This data is for academic use only. From the authors:
+
+ Ollie is a program that automatically identifies and extracts binary
+ relationships from English sentences. Ollie is designed for Web-scale
+ information extraction, where target relations are not specified in
+ advance.
+
+ Ollie is our second-generation information extraction system. Whereas
+ ReVerb operates on flat sequences of tokens, Ollie works with the
+ tree-like (graph with only small cycles) representation using
+ Stanford's compression of the dependencies. This allows Ollie to
+ capture expressions that ReVerb misses, such as long-range relations.
+
+ Ollie also captures context that modifies a binary relation. Presently
+ Ollie handles attribution (He said/she believes) and enabling
+ conditions (if X then).
+
+ More information is available at the Ollie homepage:
+ https://knowitall.github.io/ollie/
+ """
+
+
+ _LICENSE = """The University of Washington academic license:
+ https://raw.githubusercontent.com/knowitall/ollie/master/LICENSE
+ """
+
+ _URLs = {
+     "ollie_lemmagrep": "http://knowitall.cs.washington.edu/ollie/data/lemmagrep.txt.bz2",
+     "ollie_patterned": "http://knowitall.cs.washington.edu/ollie/data/patterned-all.txt.bz2",
+ }
+
+
+ class Ollie(datasets.GeneratorBasedBuilder):
+     """Ollie dataset for knowledge bases and knowledge graphs and underlying sentences."""
+
+     VERSION = datasets.Version("0.1.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="ollie_lemmagrep", description="The Ollie training data", version="1.1.0"),
+         datasets.BuilderConfig(
+             name="ollie_patterned", description="The Ollie data used in the Ollie paper.", version="1.1.0"
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = "ollie_lemmagrep"
+
+     def _info(self):
+         if self.config.name == "ollie_lemmagrep":
+             features = datasets.Features(
+                 {
+                     "arg1": datasets.Value("string"),
+                     "arg2": datasets.Value("string"),
+                     "rel": datasets.Value("string"),
+                     "search_query": datasets.Value("string"),
+                     "sentence": datasets.Value("string"),
+                     "words": datasets.Value("string"),
+                     "pos": datasets.Value("string"),
+                     "chunk": datasets.Value("string"),
+                     "sentence_cnt": datasets.Value("string"),
+                 }
+             )
+         else:
+             features = datasets.Features(
+                 {
+                     "rel": datasets.Value("string"),
+                     "arg1": datasets.Value("string"),
+                     "arg2": datasets.Value("string"),
+                     "slot0": datasets.Value("string"),
+                     "search_query": datasets.Value("string"),
+                     "pattern": datasets.Value("string"),
+                     "sentence": datasets.Value("string"),
+                     "parse": datasets.Value("string"),
+                 }
+             )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage="https://knowitall.github.io/ollie/",
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         my_urls = _URLs[self.config.name]
+         data_dir = dl_manager.download_and_extract(my_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": data_dir,
+                     "split": "train",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """Yields examples from the Ollie predicates and sentences."""
+
+         with bz2.open(filepath, "rt") as f:
+             id_ = -1
+             if self.config.name == "ollie_lemmagrep":
+                 for row in f:
+                     row = row.strip().split("\t")
+                     id_ += 1
+                     # An 8-column row has no explicit relation phrase (the "be" relationship).
+                     if len(row) == 8:
+                         yield id_, {
+                             "arg1": row[0].strip(),
+                             "arg2": row[1].strip(),
+                             "rel": "",
+                             "search_query": row[2].strip(),
+                             "sentence": row[3].strip(),
+                             "words": row[4].strip(),
+                             "pos": row[5].strip(),
+                             "chunk": row[6].strip(),
+                             "sentence_cnt": row[7].strip(),
+                         }
+                     else:
+                         yield id_, {
+                             "arg1": row[1].strip(),
+                             "arg2": row[2].strip(),
+                             "rel": row[0].strip(),
+                             "search_query": row[3].strip(),
+                             "sentence": row[4].strip(),
+                             "words": row[5].strip(),
+                             "pos": row[6].strip(),
+                             "chunk": row[7].strip(),
+                             "sentence_cnt": row[8].strip(),
+                         }
+             else:
+                 for row in f:
+                     row = row.strip().split("\t")
+                     id_ += 1
+                     # A 7-column row has no slot0 (third) argument.
+                     if len(row) == 7:
+                         yield id_, {
+                             "rel": row[0].strip(),
+                             "arg1": row[1].strip(),
+                             "arg2": row[2].strip(),
+                             "slot0": "",
+                             "search_query": row[3].strip(),
+                             "pattern": row[4].strip(),
+                             "sentence": row[5].strip(),
+                             "parse": row[6].strip(),
+                         }
+                     else:
+                         yield id_, {
+                             "rel": row[0].strip(),
+                             "arg1": row[1].strip(),
+                             "arg2": row[2].strip(),
+                             "slot0": row[7].strip(),
+                             "search_query": row[3].strip(),
+                             "pattern": row[4].strip(),
+                             "sentence": row[5].strip(),
+                             "parse": row[6].strip(),
+                         }