phucdev committed
Commit 23082db
1 Parent(s): 45c27b6

Add loading script and README.md

Files changed (2):
  1. README.md +267 -0
  2. gids.py +180 -0
README.md ADDED
@@ -0,0 +1,267 @@
---
annotations_creators:
- other
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: Google-IISc Distant Supervision (GIDS) dataset for distantly-supervised
  relation extraction
size_categories:
- 10K<n<100K
source_datasets:
- extended|other
tags:
- relation extraction
task_categories:
- text-classification
task_ids:
- multi-class-classification
dataset_info:
- config_name: gids
  features:
  - name: sentence
    dtype: string
  - name: subj_id
    dtype: string
  - name: obj_id
    dtype: string
  - name: subj_text
    dtype: string
  - name: obj_text
    dtype: string
  - name: relation
    dtype:
      class_label:
        names:
          '0': NA
          '1': /people/person/education./education/education/institution
          '2': /people/person/education./education/education/degree
          '3': /people/person/place_of_birth
          '4': /people/deceased_person/place_of_death
  splits:
  - name: train
    num_bytes: 5088421
    num_examples: 11297
  - name: validation
    num_bytes: 844784
    num_examples: 1864
  - name: test
    num_bytes: 2568673
    num_examples: 5663
  download_size: 8941490
  dataset_size: 8501878
- config_name: gids_formatted
  features:
  - name: token
    sequence: string
  - name: subj_start
    dtype: int32
  - name: subj_end
    dtype: int32
  - name: obj_start
    dtype: int32
  - name: obj_end
    dtype: int32
  - name: relation
    dtype:
      class_label:
        names:
          '0': NA
          '1': /people/person/education./education/education/institution
          '2': /people/person/education./education/education/degree
          '3': /people/person/place_of_birth
          '4': /people/deceased_person/place_of_death
  splits:
  - name: train
    num_bytes: 7075362
    num_examples: 11297
  - name: validation
    num_bytes: 1173957
    num_examples: 1864
  - name: test
    num_bytes: 3573706
    num_examples: 5663
  download_size: 8941490
  dataset_size: 11823025
---
# Dataset Card for "gids"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [RE-DS-Word-Attention-Models](https://github.com/SharmisthaJat/RE-DS-Word-Attention-Models/tree/master/Data/GIDS)
- **Paper:** [Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention](https://arxiv.org/abs/1804.06987)
- **Size of downloaded dataset files:** 8.94 MB
- **Size of the generated dataset:** 11.82 MB

### Dataset Summary
Google-IISc Distant Supervision (GIDS) is a dataset for distantly-supervised relation extraction.
GIDS is seeded from the human-judged Google relation extraction corpus.
See the paper for full details: [Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention](https://arxiv.org/abs/1804.06987)

Note:
- There is a formatted version that you can load with `datasets.load_dataset('gids', name='gids_formatted')`. This version is tokenized with spaCy, removes the underscores in the entity mentions, and provides entity offsets (see the example below).

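A minimal usage sketch (assuming the dataset is available under the `gids` repository id used in the note above):

```python
from datasets import load_dataset

# Default configuration: raw sentences plus entity ids and mention strings.
gids = load_dataset("gids", name="gids")

# Formatted configuration: spaCy-tokenized sentences with entity offsets.
gids_formatted = load_dataset("gids", name="gids_formatted")

print(gids["train"][0])            # {'sentence': ..., 'subj_id': ..., ...}
print(gids_formatted["train"][0])  # {'token': [...], 'subj_start': ..., ...}
```
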
### Supported Tasks and Leaderboards
- **Tasks:** Relation Classification
- **Leaderboards:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 8.94 MB
- **Size of the generated dataset:** 11.82 MB

#### gids
An example of 'train' looks as follows (entity ids abbreviated):
```json
{
    "sentence": "Tom_Thabane resigned in October last year to form the All_Basotho_Convention -LRB- ABC -RRB- , crossing the floor with 17 members of parliament , causing constitutional monarch King Letsie III to dissolve parliament and call the snap election .",
    "subj_id": "/m/...",
    "obj_id": "/m/...",
    "subj_text": "All_Basotho_Convention",
    "obj_text": "Tom_Thabane",
    "relation": "NA"
}
```

#### gids_formatted
An example of 'train' looks as follows:
```json
{
    "relation": "NA",
    "token": ["Tom", "Thabane", "resigned", "in", "October", "last", "year", "to", "form", "the", "All", "Basotho", "Convention", "-LRB-", "ABC", "-RRB-", ",", "crossing", "the", "floor", "with", "17", "members", "of", "parliament", ",", "causing", "constitutional", "monarch", "King", "Letsie", "III", "to", "dissolve", "parliament", "and", "call", "the", "snap", "election", "."],
    "subj_start": 10,
    "subj_end": 13,
    "obj_start": 0,
    "obj_end": 2
}
```

### Data Fields
The data fields are the same among all splits.

#### gids
- `sentence`: the sentence, a `string` feature.
- `subj_id`: the id of the relation subject mention, a `string` feature.
- `obj_id`: the id of the relation object mention, a `string` feature.
- `subj_text`: the text of the relation subject mention, a `string` feature.
- `obj_text`: the text of the relation object mention, a `string` feature.
- `relation`: the relation label of this instance, a `string` classification label.

#### gids_formatted
- `token`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
- `subj_start`: the 0-based index of the start token of the relation subject mention, an `int` feature.
- `subj_end`: the 0-based index of the end token of the relation subject mention, exclusive, an `int` feature.
- `obj_start`: the 0-based index of the start token of the relation object mention, an `int` feature.
- `obj_end`: the 0-based index of the end token of the relation object mention, exclusive, an `int` feature (see the snippet after this list).
- `relation`: the relation label of this instance, a `string` classification label.

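Because the end indices are exclusive, a mention can be recovered by slicing the token list directly, without a `+1` adjustment. A small sketch using a truncated version of the example above:

```python
example = {
    "token": ["Tom", "Thabane", "resigned", "in", "October", "last", "year",
              "to", "form", "the", "All", "Basotho", "Convention"],
    "subj_start": 10, "subj_end": 13,
    "obj_start": 0, "obj_end": 2,
}

# End offsets are exclusive, so a plain slice yields the mention tokens.
subj = " ".join(example["token"][example["subj_start"]:example["subj_end"]])
obj = " ".join(example["token"][example["obj_start"]:example["obj_end"]])
print(subj)  # All Basotho Convention
print(obj)   # Tom Thabane
```
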
### Data Splits

|      | Train | Dev  | Test |
|------|-------|------|------|
| GIDS | 11297 | 1864 | 5663 |

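These counts can be verified after loading (same `gids` repository-id assumption as above):

```python
from datasets import load_dataset

gids = load_dataset("gids")
print({split: gids[split].num_rows for split in gids})
# Expected: {'train': 11297, 'validation': 1864, 'test': 5663}
```
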
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@article{DBLP:journals/corr/abs-1804-06987,
  author     = {Sharmistha Jat and
                Siddhesh Khandelwal and
                Partha P. Talukdar},
  title      = {Improving Distantly Supervised Relation Extraction using Word and
                Entity Based Attention},
  journal    = {CoRR},
  volume     = {abs/1804.06987},
  year       = {2018},
  url        = {http://arxiv.org/abs/1804.06987},
  eprinttype = {arXiv},
  eprint     = {1804.06987},
  timestamp  = {Fri, 15 Nov 2019 17:16:02 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1804-06987.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset.
gids.py ADDED
@@ -0,0 +1,180 @@
# coding=utf-8
# Copyright 2022 The current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""The Google-IISc Distant Supervision (GIDS) dataset for distantly-supervised relation extraction"""

import csv

import datasets

_CITATION = """\
@article{DBLP:journals/corr/abs-1804-06987,
  author     = {Sharmistha Jat and
                Siddhesh Khandelwal and
                Partha P. Talukdar},
  title      = {Improving Distantly Supervised Relation Extraction using Word and
                Entity Based Attention},
  journal    = {CoRR},
  volume     = {abs/1804.06987},
  year       = {2018},
  url        = {http://arxiv.org/abs/1804.06987},
  eprinttype = {arXiv},
  eprint     = {1804.06987}
}
"""

_DESCRIPTION = """\
Google-IISc Distant Supervision (GIDS) is a new dataset for distantly-supervised relation extraction.
GIDS is seeded from the human-judged Google relation extraction corpus.
"""

_HOMEPAGE = ""

_LICENSE = ""

# The HuggingFace datasets library doesn't host the datasets but only points to the original files
# This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
_URLs = {
    "train": "https://raw.githubusercontent.com/SharmisthaJat/RE-DS-Word-Attention-Models/master/Data/GIDS/train.tsv",
    "validation": "https://raw.githubusercontent.com/SharmisthaJat/RE-DS-Word-Attention-Models/master/Data/GIDS/dev.tsv",
    "test": "https://raw.githubusercontent.com/SharmisthaJat/RE-DS-Word-Attention-Models/master/Data/GIDS/test.tsv",
}
_VERSION = datasets.Version("1.0.0")

_CLASS_LABELS = [
    "NA",
    "/people/person/education./education/education/institution",
    "/people/person/education./education/education/degree",
    "/people/person/place_of_birth",
    "/people/deceased_person/place_of_death",
]


def replace_underscore_in_span(text, start, end):
    """Replace underscores with spaces inside text[start:end], leaving the rest of the string unchanged."""
    cleaned_text = text[:start] + text[start:end].replace("_", " ") + text[end:]
    return cleaned_text

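# Illustration (comment only, so nothing runs at import time): the raw TSV files
# use underscores instead of spaces inside entity mentions, e.g.
#   replace_underscore_in_span("Tom_Thabane resigned", 0, 11)
#   returns "Tom Thabane resigned"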

class GIDS(datasets.GeneratorBasedBuilder):
    """Google-IISc Distant Supervision (GIDS) is a new dataset for distantly-supervised relation extraction."""

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name="gids", version=_VERSION, description="GIDS dataset."
        ),
        datasets.BuilderConfig(
            name="gids_formatted", version=_VERSION, description="Formatted GIDS dataset."
        ),
    ]

    DEFAULT_CONFIG_NAME = "gids"  # type: ignore

    def _info(self):
        if self.config.name == "gids_formatted":
            features = datasets.Features(
                {
                    "token": datasets.Sequence(datasets.Value("string")),
                    "subj_start": datasets.Value("int32"),
                    "subj_end": datasets.Value("int32"),
                    "obj_start": datasets.Value("int32"),
                    "obj_end": datasets.Value("int32"),
                    "relation": datasets.ClassLabel(names=_CLASS_LABELS),
                }
            )
        else:
            features = datasets.Features(
                {
                    "sentence": datasets.Value("string"),
                    "subj_id": datasets.Value("string"),
                    "obj_id": datasets.Value("string"),
                    "subj_text": datasets.Value("string"),
                    "obj_text": datasets.Value("string"),
                    "relation": datasets.ClassLabel(names=_CLASS_LABELS),
                }
            )

        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=_DESCRIPTION,
            # This defines the different columns of the dataset and their types.
            # The features are defined above because they differ between the two configurations.
            features=features,
            # If there's a common (input, target) tuple from the features,
            # specify them here. They'll be used if as_supervised=True in
            # builder.as_dataset.
            supervised_keys=None,
            # Homepage of the dataset for documentation
            homepage=_HOMEPAGE,
            # License for the dataset if available
            license=_LICENSE,
            # Citation for the dataset
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name

        # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs
        # It can accept any type or nested list/dict and will give back the same structure with the URLs replaced with paths to local files.
        # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
        downloaded_files = dl_manager.download_and_extract(_URLs)

        return [
            datasets.SplitGenerator(name=i, gen_kwargs={"filepath": downloaded_files[str(i)]})
            for i in [datasets.Split.TRAIN, datasets.Split.VALIDATION, datasets.Split.TEST]
        ]

    def _generate_examples(self, filepath):
        """Yields examples."""
        # This method receives as arguments the `gen_kwargs` defined in the previous `_split_generators` method.
        # It is in charge of opening the given file and yielding (key, example) tuples from the dataset.
        # The key is not important, it's more here for legacy reasons (legacy from tfds).
        if self.config.name == "gids_formatted":
            from spacy.lang.en import English

            word_splitter = English()
        else:
            word_splitter = None
        with open(filepath, encoding="utf-8") as f:
            data = csv.reader(f, delimiter="\t")
            # Each row is: subj_id, obj_id, subj_text, obj_text, relation, sentence
            for id_, example in enumerate(data):
                text = example[5].strip()[:-9].strip()  # remove the trailing '###END###' marker from the text
                subj_text = example[2]
                obj_text = example[3]
                rel_type = example[4]

                if self.config.name == "gids_formatted":
                    subj_char_start = text.find(subj_text)
                    assert subj_char_start != -1, f"Did not find <{subj_text}> in the text"
                    subj_char_end = subj_char_start + len(subj_text)
                    obj_char_start = text.find(obj_text)
                    assert obj_char_start != -1, f"Did not find <{obj_text}> in the text"
                    obj_char_end = obj_char_start + len(obj_text)
                    text = replace_underscore_in_span(text, subj_char_start, subj_char_end)
                    text = replace_underscore_in_span(text, obj_char_start, obj_char_end)
                    doc = word_splitter(text)
                    word_tokens = [t.text for t in doc]
                    # alignment_mode="expand" snaps character offsets that fall inside a token to the full token
                    subj_span = doc.char_span(subj_char_start, subj_char_end, alignment_mode="expand")
                    obj_span = doc.char_span(obj_char_start, obj_char_end, alignment_mode="expand")

                    yield id_, {
                        "token": word_tokens,
                        "subj_start": subj_span.start,
                        "subj_end": subj_span.end,
                        "obj_start": obj_span.start,
                        "obj_end": obj_span.end,
                        "relation": rel_type,
                    }
                else:
                    yield id_, {
                        "sentence": text,
                        "subj_id": example[0],
                        "obj_id": example[1],
                        "subj_text": subj_text,
                        "obj_text": obj_text,
                        "relation": rel_type,
                    }