phucdev committed on
Commit
2f8afd8
1 Parent(s): 6ed9217

Add science_ie default config that only converts the original structure to a dictionary format

Files changed (2)
  1. README.md +127 -55
  2. science_ie.py +178 -113
README.md CHANGED
@@ -51,7 +51,7 @@ dataset_info:
51
  - name: test
52
  num_bytes: 399069
53
  num_examples: 838
54
- download_size: 391944
55
  dataset_size: 1788822
56
  - config_name: re
57
  features:
@@ -80,8 +80,8 @@ dataset_info:
80
  '2': Hyponym-of
81
  splits:
82
  - name: train
83
- num_bytes: 11738520
84
- num_examples: 24558
85
  - name: validation
86
  num_bytes: 2347796
87
  num_examples: 4838
@@ -89,7 +89,57 @@ dataset_info:
89
  num_bytes: 2835275
90
  num_examples: 6618
91
  download_size: 13704567
92
- dataset_size: 16921591
93
  - config_name: subtask_a
94
  features:
95
  - name: id
@@ -105,7 +155,7 @@ dataset_info:
105
  '2': I
106
  splits:
107
  - name: train
108
- num_bytes: 1185670
109
  num_examples: 2388
110
  - name: validation
111
  num_bytes: 204095
@@ -114,7 +164,7 @@ dataset_info:
114
  num_bytes: 399069
115
  num_examples: 838
116
  download_size: 13704567
117
- dataset_size: 1788834
118
  - config_name: subtask_b
119
  features:
120
  - name: id
@@ -131,7 +181,7 @@ dataset_info:
131
  '3': T
132
  splits:
133
  - name: train
134
- num_bytes: 1185670
135
  num_examples: 2388
136
  - name: validation
137
  num_bytes: 204095
@@ -140,7 +190,7 @@ dataset_info:
140
  num_bytes: 399069
141
  num_examples: 838
142
  download_size: 13704567
143
- dataset_size: 1788834
144
  - config_name: subtask_c
145
  features:
146
  - name: id
@@ -157,7 +207,7 @@ dataset_info:
157
  '2': H
158
  splits:
159
  - name: train
160
- num_bytes: 20103682
161
  num_examples: 2388
162
  - name: validation
163
  num_bytes: 3575511
@@ -166,17 +216,7 @@ dataset_info:
166
  num_bytes: 6431513
167
  num_examples: 838
168
  download_size: 13704567
169
- dataset_size: 30110706
170
- configs:
171
- - config_name: ner
172
- data_files:
173
- - split: train
174
- path: ner/train-*
175
- - split: validation
176
- path: ner/validation-*
177
- - split: test
178
- path: ner/test-*
179
- default: true
180
  ---
181
 
182
  # Dataset Card for ScienceIE
@@ -212,8 +252,6 @@ configs:
212
  - **Repository:** [https://github.com/ScienceIE/scienceie.github.io](https://github.com/ScienceIE/scienceie.github.io)
213
  - **Paper:** [SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications](https://arxiv.org/abs/1704.02853)
214
  - **Leaderboard:** [https://competitions.codalab.org/competitions/15898](https://competitions.codalab.org/competitions/15898)
215
- - **Size of downloaded dataset files:** 13.7 MB
216
- - **Size of generated dataset files:** 17.4 MB
217
 
218
  ### Dataset Summary
219
 
@@ -236,7 +274,8 @@ There are three subtasks:
236
  - HYPONYM-OF: The relationship between two keyphrases A and B is HYPONYM-OF if the semantic field of A is included within that of B. One example is Red HYPONYM-OF Color.
237
  - SYNONYM-OF: The relationship between two keyphrases A and B is SYNONYM-OF if they both denote the same semantic field, for example Machine Learning SYNONYM-OF ML.
238
 
239
- Note: In this repository the documents were split into sentences using spaCy, resulting in a 2388, 400, 838 split. The `id` consists of the document id and the example index within the document separated by an underscore, e.g. `S0375960115004120_1`. This should enable you to reconstruct the documents from the sentences.
 
240
 
241
  ### Supported Tasks and Leaderboards
242
 
@@ -251,11 +290,31 @@ The language in the dataset is English.
251
 
252
  ### Data Instances
253

254
  #### subtask_a
255
- - **Size of downloaded dataset files:** 13.7 MB
256
- - **Size of the generated dataset:** 17.4 MB
257
-
258
- An example of 'train' looks as follows:
259
  ```json
260
  {
261
  "id": "S0375960115004120_1",
@@ -264,10 +323,7 @@ An example of 'train' looks as follows:
264
  }
265
  ```
266
  #### subtask_b
267
- - **Size of downloaded dataset files:** 13.7 MB
268
- - **Size of the generated dataset:** 17.4 MB
269
-
270
- An example of 'train' looks as follows:
271
  ```json
272
  {
273
  "id": "S0375960115004120_2",
@@ -277,10 +333,7 @@ An example of 'train' looks as follows:
277
  ```
278
 
279
  #### subtask_c
280
- - **Size of downloaded dataset files:** 13.7 MB
281
- - **Size of the generated dataset:** 30.1 MB
282
-
283
- An example of 'train' looks as follows:
284
  ```json
285
  {
286
  "id": "S0375960115004120_3",
@@ -292,10 +345,7 @@ Note: The tag sequence consists of vectors for each token, that encode what the
292
  and every other token in the sequence is for the first token in each key phrase.
293
 
294
  #### ner
295
- - **Size of downloaded dataset files:** 13.7 MB
296
- - **Size of the generated dataset:** 17.4 MB
297
-
298
- An example of 'train' looks as follows:
299
  ```json
300
  {
301
  "id": "S0375960115004120_4",
@@ -305,10 +355,7 @@ An example of 'train' looks as follows:
305
  ```
306
 
307
  #### re
308
- - **Size of downloaded dataset files:** 13.7 MB
309
- - **Size of the generated dataset:** 16.4 MB
310
-
311
- An example of 'train' looks as follows:
312
  ```json
313
  {
314
  "id": "S0375960115004120_5",
@@ -325,12 +372,36 @@ An example of 'train' looks as follows:
325
 
326
  ### Data Fields
327

328
  #### subtask_a
329
  - `id`: the instance id of this sentence, a `string` feature.
330
  - `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
331
  - `tags`: the list of tags of this sentence marking a token as being outside, at the beginning, or inside a key phrase, a `list` of classification labels.
332
 
333
- ```python
334
  {"O": 0, "B": 1, "I": 2}
335
  ```
336
 
@@ -339,7 +410,7 @@ An example of 'train' looks as follows:
339
  - `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
340
  - `tags`: the list of tags of this sentence marking a token as being outside a key phrase, or being part of a material, process or task, a `list` of classification labels.
341
 
342
- ```python
343
  {"O": 0, "M": 1, "P": 2, "T": 3}
344
  ```
345
 
@@ -348,7 +419,7 @@ An example of 'train' looks as follows:
348
  - `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
349
  - `tags`: one vector per token; for the first token of each key phrase, the vector encodes its relationship to every other token in the sequence, a `list` of `list`s of classification labels.
350
 
351
- ```python
352
  {"O": 0, "S": 1, "H": 2}
353
  ```
354
 
@@ -357,7 +428,7 @@ An example of 'train' looks as follows:
357
  - `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
358
  - `tags`: the list of NER tags of this sentence, a `list` of classification labels.
359
 
360
- ```python
361
  {"O": 0, "B-Material": 1, "I-Material": 2, "B-Process": 3, "I-Process": 4, "B-Task": 5, "I-Task": 6}
362
  ```
363
 
@@ -372,19 +443,20 @@ An example of 'train' looks as follows:
372
  - `arg2_type`: the key phrase type of the relation arg2 mention, a `string` feature.
373
  - `relation`: the relation label of this instance, a classification label.
374
 
375
- ```python
376
  {"O": 0, "Synonym-of": 1, "Hyponym-of": 2}
377
  ```
378
 
379
  ### Data Splits
380
 
381
- | | Train | Dev | Test |
382
- |-----------|-------|------|------|
383
- | subtask_a | 2388 | 400 | 838 |
384
- | subtask_b | 2388 | 400 | 838 |
385
- | subtask_c | 2388 | 400 | 838 |
386
- | ner | 2388 | 400 | 838 |
387
- | re | 24558 | 4838 | 6618 |
 
388
 
389
  ## Dataset Creation
390
 
 
51
  - name: test
52
  num_bytes: 399069
53
  num_examples: 838
54
+ download_size: 13704567
55
  dataset_size: 1788822
56
  - config_name: re
57
  features:
 
80
  '2': Hyponym-of
81
  splits:
82
  - name: train
83
+ num_bytes: 11737101
84
+ num_examples: 24556
85
  - name: validation
86
  num_bytes: 2347796
87
  num_examples: 4838
 
89
  num_bytes: 2835275
90
  num_examples: 6618
91
  download_size: 13704567
92
+ dataset_size: 16920172
93
+ - config_name: science_ie
94
+ features:
95
+ - name: id
96
+ dtype: string
97
+ - name: text
98
+ dtype: string
99
+ - name: keyphrases
100
+ list:
101
+ - name: id
102
+ dtype: string
103
+ - name: start
104
+ dtype: int32
105
+ - name: end
106
+ dtype: int32
107
+ - name: type
108
+ dtype:
109
+ class_label:
110
+ names:
111
+ '0': Material
112
+ '1': Process
113
+ '2': Task
114
+ - name: type_
115
+ dtype: string
116
+ - name: relations
117
+ list:
118
+ - name: arg1
119
+ dtype: string
120
+ - name: arg2
121
+ dtype: string
122
+ - name: relation
123
+ dtype:
124
+ class_label:
125
+ names:
126
+ '0': O
127
+ '1': Synonym-of
128
+ '2': Hyponym-of
129
+ - name: relation_
130
+ dtype: string
131
+ splits:
132
+ - name: train
133
+ num_bytes: 640060
134
+ num_examples: 350
135
+ - name: validation
136
+ num_bytes: 112588
137
+ num_examples: 50
138
+ - name: test
139
+ num_bytes: 206857
140
+ num_examples: 100
141
+ download_size: 13704567
142
+ dataset_size: 959505
143
  - config_name: subtask_a
144
  features:
145
  - name: id
 
155
  '2': I
156
  splits:
157
  - name: train
158
+ num_bytes: 1185658
159
  num_examples: 2388
160
  - name: validation
161
  num_bytes: 204095
 
164
  num_bytes: 399069
165
  num_examples: 838
166
  download_size: 13704567
167
+ dataset_size: 1788822
168
  - config_name: subtask_b
169
  features:
170
  - name: id
 
181
  '3': T
182
  splits:
183
  - name: train
184
+ num_bytes: 1185658
185
  num_examples: 2388
186
  - name: validation
187
  num_bytes: 204095
 
190
  num_bytes: 399069
191
  num_examples: 838
192
  download_size: 13704567
193
+ dataset_size: 1788822
194
  - config_name: subtask_c
195
  features:
196
  - name: id
 
207
  '2': H
208
  splits:
209
  - name: train
210
+ num_bytes: 20102706
211
  num_examples: 2388
212
  - name: validation
213
  num_bytes: 3575511
 
216
  num_bytes: 6431513
217
  num_examples: 838
218
  download_size: 13704567
219
+ dataset_size: 30109730
220
  ---
221
 
222
  # Dataset Card for ScienceIE
 
252
  - **Repository:** [https://github.com/ScienceIE/scienceie.github.io](https://github.com/ScienceIE/scienceie.github.io)
253
  - **Paper:** [SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications](https://arxiv.org/abs/1704.02853)
254
  - **Leaderboard:** [https://competitions.codalab.org/competitions/15898](https://competitions.codalab.org/competitions/15898)
 
 
255
 
256
  ### Dataset Summary
257
 
 
274
  - HYPONYM-OF: The relationship between two keyphrases A and B is HYPONYM-OF if the semantic field of A is included within that of B. One example is Red HYPONYM-OF Color.
275
  - SYNONYM-OF: The relationship between two keyphrases A and B is SYNONYM-OF if they both denote the same semantic field, for example Machine Learning SYNONYM-OF ML.
276
 
277
+ Note: The default config `science_ie` converts the original .txt & .ann files to a dictionary format that is easier to use.
278
+ For every other configuration, the documents were split into sentences using spaCy, resulting in a 2388/400/838 train/dev/test split. The `id` consists of the document id and the example index within the document, separated by an underscore, e.g. `S0375960115004120_1`. This makes it possible to reconstruct the documents from the sentences.
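As a rough sketch of how such a reconstruction could look (the dataset id `phucdev/science_ie` below is an assumption, not stated in this card; the `ner` config is used as an example):

```python
# Sketch only: regroup the sentence-level examples of the `ner` config into documents.
# Assumption: the dataset can be loaded under the id "phucdev/science_ie".
from collections import defaultdict

from datasets import load_dataset

sentences = load_dataset("phucdev/science_ie", "ner", split="train")

documents = defaultdict(list)
for example in sentences:
    # e.g. "S0375960115004120_1" -> document id "S0375960115004120", sentence index 1
    doc_id, sent_idx = example["id"].rsplit("_", 1)
    documents[doc_id].append((int(sent_idx), example["tokens"]))

# restore the original sentence order within each document
documents = {
    doc_id: [tokens for _, tokens in sorted(parts)]
    for doc_id, parts in documents.items()
}
```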
279
 
280
  ### Supported Tasks and Leaderboards
281
 
 
290
 
291
  ### Data Instances
292
 
293
+ #### science_ie
294
+ An example of "train" looks as follows:
295
+ ```json
296
+ {
297
+ "id": "S221266781300018X",
298
+ "text": "Amodel are proposed for modeling data-centric Web services which are powered by relational databases and interact with users according to logical formulas specifying input constraints, control-flow constraints and state/output/action rules. The Linear Temporal First-Order Logic (LTL-FO) formulas over inputs, states, outputs and actions are used to express the properties to be verified.We have proven that automatic verification of LTL-FO properties of data-centric Web services under input-bounded constraints is decidable by reducing Web services to data-centric Web applications. Thus, we can verify Web service specifications using existing verifier designed for Web applications.",
299
+ "keyphrases": [
300
+ {
301
+ "id": "T1", "start": 24, "end": 58, "type": 2, "type_": "Task"
302
+ },
303
+ ...,
304
+ {"id": "T3", "start": 245, "end": 278, "type": 1, "type_": "Process"},
305
+ {"id": "T4", "start": 280, "end": 286, "type": 1, "type_": "Process"},
306
+ ...
307
+ ],
308
+ "relations": [
309
+ {"arg1": "T4", "arg2": "T3", "relation": 1, "relation_": "Synonym-of"},
310
+ {"arg1": "T3", "arg2": "T4", "relation": 1, "relation_": "Synonym-of"}
311
+ ]
312
+ }
313
+ ```
314
+
315
+
316
  #### subtask_a
317
+ An example of "train" looks as follows:
318
  ```json
319
  {
320
  "id": "S0375960115004120_1",
 
323
  }
324
  ```
325
  #### subtask_b
326
+ An example of "train" looks as follows:
327
  ```json
328
  {
329
  "id": "S0375960115004120_2",
 
333
  ```
334
 
335
  #### subtask_c
336
+ An example of "train" looks as follows:
337
  ```json
338
  {
339
  "id": "S0375960115004120_3",
 
345
  and every other token in the sequence is for the first token in each key phrase.
346
 
347
  #### ner
348
+ An example of "train" looks as follows:
349
  ```json
350
  {
351
  "id": "S0375960115004120_4",
 
355
  ```
356
 
357
  #### re
358
+ An example of "train" looks as follows:
359
  ```json
360
  {
361
  "id": "S0375960115004120_5",
 
372
 
373
  ### Data Fields
374
 
375
+ #### science_ie
376
+ - `id`: the instance id of this document, a `string` feature.
377
+ - `text`: the text of this document, a `string` feature.
378
+ - `keyphrases`: the list of keyphrases of this document, a `list` of `dict`.
379
+ - `id`: the instance id of this keyphrase, a `string` feature.
380
+ - `start`: the character offset start of this keyphrase, an `int` feature.
381
+ - `end`: the character offset end of this keyphrase, exclusive, an `int` feature.
382
+ - `type`: the key phrase type of this keyphrase, a classification label.
383
+ - `type_`: the key phrase type of this keyphrase, a `string` feature.
384
+ - `relations`: the list of relations of this document, a `list` of `dict`.
385
+ - `arg1`: the instance id of the first keyphrase, a `string` feature.
386
+ - `arg2`: the instance id of the second keyphrase, a `string` feature.
387
+ - `relation`: the relation label of this instance, a classification label.
388
+ - `relation_`: the relation label of this instance, a `string` feature.
389
+
390
+ Keyphrase types:
391
+ ```json
392
+ {"O": 0, "Material": 1, "Process": 2, "Task": 3}
393
+ ```
394
+ Relation types:
395
+ ```json
396
+ {"O": 0, "Synonym-of": 1, "Hyponym-of": 2}
397
+ ```
398
+
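A short sketch of how these fields fit together, assuming `example` is a single record from the `science_ie` config (the variable name is illustrative):

```python
# `start`/`end` are character offsets with an exclusive end, so slicing the
# document text recovers each keyphrase's surface string.
keyphrases = {kp["id"]: kp for kp in example["keyphrases"]}

for kp in example["keyphrases"]:
    print(kp["id"], kp["type_"], example["text"][kp["start"]:kp["end"]])

# relations refer to keyphrases by id via `arg1`/`arg2`
for rel in example["relations"]:
    arg1, arg2 = keyphrases[rel["arg1"]], keyphrases[rel["arg2"]]
    print(example["text"][arg1["start"]:arg1["end"]],
          rel["relation_"],
          example["text"][arg2["start"]:arg2["end"]])
```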
399
  #### subtask_a
400
  - `id`: the instance id of this sentence, a `string` feature.
401
  - `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
402
  - `tags`: the list of tags of this sentence marking a token as being outside, at the beginning, or inside a key phrase, a `list` of classification labels.
403
 
404
+ ```json
405
  {"O": 0, "B": 1, "I": 2}
406
  ```
407
 
 
410
  - `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
411
  - `tags`: the list of tags of this sentence marking a token as being outside a key phrase, or being part of a material, process or task, a `list` of classification labels.
412
 
413
+ ```json
414
  {"O": 0, "M": 1, "P": 2, "T": 3}
415
  ```
416
 
 
419
  - `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
420
  - `tags`: one vector per token; for the first token of each key phrase, the vector encodes its relationship to every other token in the sequence, a `list` of `list`s of classification labels.
421
 
422
+ ```json
423
  {"O": 0, "S": 1, "H": 2}
424
  ```
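As an illustrative sketch (assuming `example` is one `subtask_c` record), the tag matrix can be read as follows; `tags[i][j]` is non-`O` when the key phrase starting at token `i` stands in a relation to the one starting at token `j`:

```python
label_names = ["O", "S", "H"]  # matches the mapping above

for i, row in enumerate(example["tags"]):
    for j, tag in enumerate(row):
        name = label_names[tag] if isinstance(tag, int) else tag
        if name != "O":
            print(example["tokens"][i], "->", example["tokens"][j], ":", name)
```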
425
 
 
428
  - `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
429
  - `tags`: the list of NER tags of this sentence, a `list` of classification labels.
430
 
431
+ ```json
432
  {"O": 0, "B-Material": 1, "I-Material": 2, "B-Process": 3, "I-Process": 4, "B-Task": 5, "I-Task": 6}
433
  ```
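To map the integer tags back to these names you can use the `ClassLabel` feature attached to the column; a minimal sketch, assuming `dataset` holds the `ner` config's train split:

```python
tag_feature = dataset.features["tags"].feature  # the underlying ClassLabel

example = dataset[0]
labels = [tag_feature.int2str(tag) for tag in example["tags"]]
print(list(zip(example["tokens"], labels)))
```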
434
 
 
443
  - `arg2_type`: the key phrase type of the relation arg2 mention, a `string` feature.
444
  - `relation`: the relation label of this instance, a classification label.
445
 
446
+ ```json
447
  {"O": 0, "Synonym-of": 1, "Hyponym-of": 2}
448
  ```
449
 
450
  ### Data Splits
451
 
452
+ | | Train | Dev | Test |
453
+ |------------|-------|------|------|
454
+ | science_ie | 350 | 50 | 100 |
455
+ | subtask_a | 2388 | 400 | 838 |
456
+ | subtask_b | 2388 | 400 | 838 |
457
+ | subtask_c | 2388 | 400 | 838 |
458
+ | ner | 2388 | 400 | 838 |
459
+ | re         | 24556 | 4838 | 6618 |
460
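A quick way to verify these counts (the dataset id `phucdev/science_ie` is an assumption):

```python
from datasets import load_dataset

for config in ["science_ie", "subtask_a", "subtask_b", "subtask_c", "ner", "re"]:
    ds = load_dataset("phucdev/science_ie", config)
    print(config, {split: ds[split].num_rows for split in ds})
```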
 
461
  ## Dataset Creation
462
 
science_ie.py CHANGED
@@ -13,13 +13,11 @@
13
  # limitations under the License.
14
  """ScienceIE is a dataset for the SemEval task of extracting key phrases and relations between them from scientific documents"""
15
 
16
-
17
  import glob
18
  import datasets
19
 
20
  from pathlib import Path
21
  from itertools import permutations
22
- from spacy.lang.en import English
23
 
24
  # Find for instance the citation on arxiv or on the dataset repo/website
25
  _CITATION = """\
@@ -92,9 +90,10 @@ class ScienceIE(datasets.GeneratorBasedBuilder):
92
  """ScienceIE is a dataset for the task of extracting key phrases and relations between them from scientific
93
  documents"""
94
 
95
- VERSION = datasets.Version("1.0.0")
96
 
97
  BUILDER_CONFIGS = [
 
98
  datasets.BuilderConfig(name="subtask_a", version=VERSION,
99
  description="Subtask A of ScienceIE for tokens being outside, at the beginning, "
100
  "or inside a key phrase"),
@@ -107,10 +106,40 @@ class ScienceIE(datasets.GeneratorBasedBuilder):
107
  datasets.BuilderConfig(name="re", version=VERSION, description="Relation extraction part of ScienceIE"),
108
  ]
109
 
110
- DEFAULT_CONFIG_NAME = "ner"
111
 
112
  def _info(self):
113
- if self.config.name == "subtask_a":
114
  features = datasets.Features(
115
  {
116
  "id": datasets.Value("string"),
@@ -199,8 +228,12 @@ class ScienceIE(datasets.GeneratorBasedBuilder):
199
  def _generate_examples(self, dir_path):
200
  # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.
201
  annotation_files = glob.glob(dir_path + "/**/*.ann", recursive=True)
202
- word_splitter = English()
203
- word_splitter.add_pipe('sentencizer')
204
  for f_anno_file in annotation_files:
205
  doc_example_idx = 0
206
  f_anno_path = Path(f_anno_file)
@@ -209,7 +242,10 @@ class ScienceIE(datasets.GeneratorBasedBuilder):
209
  with open(f_anno_path, mode="r", encoding="utf8") as f_anno, \
210
  open(f_text_path, mode="r", encoding="utf8") as f_text:
211
  text = f_text.read().strip()
212
- doc = word_splitter(text)
213
  entities = []
214
  synonym_groups = []
215
  hyponyms = []
@@ -242,120 +278,149 @@ class ScienceIE(datasets.GeneratorBasedBuilder):
242
  print("Spans don't match for anno " + line.strip() + " in file " + f_anno_file)
243
  char_start = int(start)
244
  char_end = int(end)
245
- entity_span = doc.char_span(char_start, char_end, alignment_mode="expand")
246
- start = entity_span.start
247
- end = entity_span.end
248
- entities.append({
249
- "id": identifier,
250
- "start": start,
251
- "end": end,
252
- "char_start": char_start,
253
- "char_end": char_end,
254
- "type": key_type
255
- })
256
- # check if any annotation is lost during sentence splitting
257
- synonym_groups_used = [False for _ in synonym_groups]
258
- hyponyms_used = [False for _ in hyponyms]
259
- for sent in doc.sents:
260
- token_offset = sent.start
261
- tokens = [token.text for token in sent]
262
- tags = ["O" for _ in tokens]
263
- sent_entities = []
264
- sent_entity_ids = []
265
- for entity in entities:
266
- if entity["start"] >= sent.start and entity["end"] <= sent.end:
267
- sent_entity = {k: v for k, v in entity.items()}
268
- sent_entity["start"] -= token_offset
269
- sent_entity["end"] -= token_offset
270
- sent_entities.append(sent_entity)
271
- sent_entity_ids.append(entity["id"])
272
- for entity in sent_entities:
273
- tags[entity["start"]] = "B-" + entity["type"]
274
- for i in range(entity["start"] + 1, entity["end"]):
275
- tags[i] = "I-" + entity["type"]
276
-
277
- relations = []
278
- entity_pairs_in_relation = []
279
  for idx, synonym_group in enumerate(synonym_groups):
280
- if all(entity_id in sent_entity_ids for entity_id in synonym_group):
281
- synonym_groups_used[idx] = True
282
- for arg1_id, arg2_id in permutations(synonym_group, 2):
283
  relations.append(
284
- generate_relation(sent_entities, arg1_id, arg2_id, relation="Synonym-of"))
285
- entity_pairs_in_relation.append((arg1_id, arg2_id))
286
- for idx, hyponym in enumerate(hyponyms):
287
- if hyponym["arg1_id"] in sent_entity_ids and hyponym["arg2_id"] in sent_entity_ids:
288
- hyponyms_used[idx] = True
289
- relations.append(
290
- generate_relation(sent_entities, hyponym["arg1_id"], hyponym["arg2_id"],
291
- relation="Hyponym-of"))
292
 
293
- entity_pairs_in_relation.append((arg1_id, arg2_id))
294
- entity_pairs = [(arg1["id"], arg2["id"]) for arg1, arg2 in permutations(sent_entities, 2)
295
- if (arg1["id"], arg2["id"]) not in entity_pairs_in_relation]
296
- for arg1_id, arg2_id in entity_pairs:
297
- relations.append(generate_relation(sent_entities, arg1_id, arg2_id, relation="O"))
298
 
299
- if self.config.name == "subtask_a":
300
- doc_example_idx += 1
301
- key = f"{doc_id}_{doc_example_idx}"
302
- # Yields examples as (key, example) tuples
303
- yield key, {
304
- "id": key,
305
- "tokens": tokens,
306
- "tags": [tag[0] for tag in tags]
307
- }
308
- elif self.config.name == "subtask_b":
309
- doc_example_idx += 1
310
- key = f"{doc_id}_{doc_example_idx}"
311
- # Yields examples as (key, example) tuples
312
- key_phrase_tags = []
313
- for tag in tags:
314
- if tag == "O":
315
- key_phrase_tags.append(tag)
316
- else:
317
- # use first letter of key phrase type
318
- key_phrase_tags.append(tag[2])
319
- yield key, {
320
- "id": key,
321
- "tokens": tokens,
322
- "tags": key_phrase_tags
323
- }
324
- elif self.config.name == "subtask_c":
325
- doc_example_idx += 1
326
- key = f"{doc_id}_{doc_example_idx}"
327
- tag_vectors = [["O" for _ in tokens] for _ in tokens]
328
- for relation in relations:
329
- tag = relation["relation"][0]
330
- if tag != "O":
331
- tag_vectors[relation["arg1_start"]][relation["arg2_start"]] = tag
332
- # Yields examples as (key, example) tuples
333
- yield key, {
334
- "id": key,
335
- "tokens": tokens,
336
- "tags": tag_vectors
337
- }
338
- elif self.config.name == "re":
339
- for relation in relations:
340
  doc_example_idx += 1
341
  key = f"{doc_id}_{doc_example_idx}"
342
  # Yields examples as (key, example) tuples
343
- example = {
344
  "id": key,
345
- "tokens": tokens
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
346
  }
347
- for k, v in relation.items():
348
- example[k] = v
349
- yield key, example
350
- else: # NER config
351
- doc_example_idx += 1
352
- key = f"{doc_id}_{doc_example_idx}"
353
- # Yields examples as (key, example) tuples
354
- yield key, {
355
- "id": key,
356
- "tokens": tokens,
357
- "tags": tags
358
- }
359
 
360
  assert all(synonym_groups_used) and all(hyponyms_used), \
361
  f"Annotations were lost: {len([e for e in synonym_groups_used if e])} synonym annotations," \
 
13
  # limitations under the License.
14
  """ScienceIE is a dataset for the SemEval task of extracting key phrases and relations between them from scientific documents"""
15
 
 
16
  import glob
17
  import datasets
18
 
19
  from pathlib import Path
20
  from itertools import permutations
 
21
 
22
  # Find for instance the citation on arxiv or on the dataset repo/website
23
  _CITATION = """\
 
90
  """ScienceIE is a dataset for the task of extracting key phrases and relations between them from scientific
91
  documents"""
92
 
93
+ VERSION = datasets.Version("1.1.0")
94
 
95
  BUILDER_CONFIGS = [
96
+ datasets.BuilderConfig(name="science_ie", version=VERSION, description="Full ScienceIE dataset"),
97
  datasets.BuilderConfig(name="subtask_a", version=VERSION,
98
  description="Subtask A of ScienceIE for tokens being outside, at the beginning, "
99
  "or inside a key phrase"),
 
106
  datasets.BuilderConfig(name="re", version=VERSION, description="Relation extraction part of ScienceIE"),
107
  ]
108
 
109
+ DEFAULT_CONFIG_NAME = "science_ie"
110
 
111
  def _info(self):
112
+ if self.config.name == "science_ie":
113
+ features = datasets.Features(
114
+ {
115
+ "id": datasets.Value("string"),
116
+ "text": datasets.Value("string"),
117
+ "keyphrases": [
118
+ {
119
+ "id": datasets.Value("string"),
120
+ "start": datasets.Value("int32"),
121
+ "end": datasets.Value("int32"),
122
+ "type": datasets.features.ClassLabel(
123
+ names=[
124
+ "Material",
125
+ "Process",
126
+ "Task"
127
+ ]
128
+ ),
129
+ "type_": datasets.Value("string")
130
+ }
131
+ ],
132
+ "relations": [
133
+ {
134
+ "arg1": datasets.Value("string"),
135
+ "arg2": datasets.Value("string"),
136
+ "relation": datasets.features.ClassLabel(names=["O", "Synonym-of", "Hyponym-of"]),
137
+ "relation_": datasets.Value("string")
138
+ }
139
+ ]
140
+ }
141
+ )
142
+ elif self.config.name == "subtask_a":
143
  features = datasets.Features(
144
  {
145
  "id": datasets.Value("string"),
 
228
  def _generate_examples(self, dir_path):
229
  # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.
230
  annotation_files = glob.glob(dir_path + "/**/*.ann", recursive=True)
231
+ if self.config.name != "science_ie":
232
+ from spacy.lang.en import English
233
+ word_splitter = English()
234
+ word_splitter.add_pipe('sentencizer')
235
+ else:
236
+ word_splitter = None
237
  for f_anno_file in annotation_files:
238
  doc_example_idx = 0
239
  f_anno_path = Path(f_anno_file)
 
242
  with open(f_anno_path, mode="r", encoding="utf8") as f_anno, \
243
  open(f_text_path, mode="r", encoding="utf8") as f_text:
244
  text = f_text.read().strip()
245
+ if word_splitter:
246
+ doc = word_splitter(text)
247
+ else:
248
+ doc = None
249
  entities = []
250
  synonym_groups = []
251
  hyponyms = []
 
278
  print("Spans don't match for anno " + line.strip() + " in file " + f_anno_file)
279
  char_start = int(start)
280
  char_end = int(end)
281
+ if doc:
282
+ entity_span = doc.char_span(char_start, char_end, alignment_mode="expand")
283
+ start = entity_span.start
284
+ end = entity_span.end
285
+ entities.append({
286
+ "id": identifier,
287
+ "start": start,
288
+ "end": end,
289
+ "char_start": char_start,
290
+ "char_end": char_end,
291
+ "type": key_type,
292
+ "type_": key_type
293
+ })
294
+ else:
295
+ entities.append({
296
+ "id": identifier,
297
+ "start": char_start,
298
+ "end": char_end,
299
+ "type": key_type,
300
+ "type_": key_type
301
+ })
302
+ if self.config.name == "science_ie":
303
+ # just to pass the assertion at the end of the method; the check is not relevant for this config
304
+ synonym_groups_used = [True for _ in synonym_groups]
305
+ hyponyms_used = [True for _ in hyponyms]
306
+ gen_relations = []
307
  for idx, synonym_group in enumerate(synonym_groups):
308
+ for arg1_id, arg2_id in permutations(synonym_group, 2):
309
+ gen_relations.append(dict(arg1=arg1_id, arg2=arg2_id, relation="Synonym-of",
310
+ relation_="Synonym-of"))
311
+ for hyponym in hyponyms:
312
+ gen_relations.append(dict(arg1=hyponym["arg1_id"], arg2=hyponym["arg2_id"],
313
+ relation="Hyponym-of", relation_="Hyponym-of"))
314
+ yield doc_id, {
315
+ "id": doc_id,
316
+ "text": text,
317
+ "keyphrases": entities,
318
+ "relations": gen_relations
319
+ }
320
+ else:
321
+ # check if any annotation is lost during sentence splitting
322
+ synonym_groups_used = [False for _ in synonym_groups]
323
+ hyponyms_used = [False for _ in hyponyms]
324
+ for sent in doc.sents:
325
+ token_offset = sent.start
326
+ tokens = [token.text for token in sent]
327
+ tags = ["O" for _ in tokens]
328
+ sent_entities = []
329
+ sent_entity_ids = []
330
+ for entity in entities:
331
+ if entity["start"] >= sent.start and entity["end"] <= sent.end:
332
+ sent_entity = {k: v for k, v in entity.items()}
333
+ sent_entity["start"] -= token_offset
334
+ sent_entity["end"] -= token_offset
335
+ sent_entities.append(sent_entity)
336
+ sent_entity_ids.append(entity["id"])
337
+ for entity in sent_entities:
338
+ tags[entity["start"]] = "B-" + entity["type"]
339
+ for i in range(entity["start"] + 1, entity["end"]):
340
+ tags[i] = "I-" + entity["type"]
341
+
342
+ relations = []
343
+ entity_pairs_in_relation = []
344
+ for idx, synonym_group in enumerate(synonym_groups):
345
+ if all(entity_id in sent_entity_ids for entity_id in synonym_group):
346
+ synonym_groups_used[idx] = True
347
+ for arg1_id, arg2_id in permutations(synonym_group, 2):
348
+ relations.append(
349
+ generate_relation(sent_entities, arg1_id, arg2_id, relation="Synonym-of"))
350
+ entity_pairs_in_relation.append((arg1_id, arg2_id))
351
+ for idx, hyponym in enumerate(hyponyms):
352
+ if hyponym["arg1_id"] in sent_entity_ids and hyponym["arg2_id"] in sent_entity_ids:
353
+ hyponyms_used[idx] = True
354
  relations.append(
355
+ generate_relation(sent_entities, hyponym["arg1_id"], hyponym["arg2_id"],
356
+ relation="Hyponym-of"))
 
 
 
 
 
 
357
 
358
+ entity_pairs_in_relation.append((arg1_id, arg2_id))
359
+ entity_pairs = [(arg1["id"], arg2["id"]) for arg1, arg2 in permutations(sent_entities, 2)
360
+ if (arg1["id"], arg2["id"]) not in entity_pairs_in_relation]
361
+ for arg1_id, arg2_id in entity_pairs:
362
+ relations.append(generate_relation(sent_entities, arg1_id, arg2_id, relation="O"))
363
 
364
+ if self.config.name == "subtask_a":
365
  doc_example_idx += 1
366
  key = f"{doc_id}_{doc_example_idx}"
367
  # Yields examples as (key, example) tuples
368
+ yield key, {
369
  "id": key,
370
+ "tokens": tokens,
371
+ "tags": [tag[0] for tag in tags]
372
+ }
373
+ elif self.config.name == "subtask_b":
374
+ doc_example_idx += 1
375
+ key = f"{doc_id}_{doc_example_idx}"
376
+ # Yields examples as (key, example) tuples
377
+ key_phrase_tags = []
378
+ for tag in tags:
379
+ if tag == "O":
380
+ key_phrase_tags.append(tag)
381
+ else:
382
+ # use first letter of key phrase type
383
+ key_phrase_tags.append(tag[2])
384
+ yield key, {
385
+ "id": key,
386
+ "tokens": tokens,
387
+ "tags": key_phrase_tags
388
+ }
389
+ elif self.config.name == "subtask_c":
390
+ doc_example_idx += 1
391
+ key = f"{doc_id}_{doc_example_idx}"
392
+ tag_vectors = [["O" for _ in tokens] for _ in tokens]
393
+ for relation in relations:
394
+ tag = relation["relation"][0]
395
+ if tag != "O":
396
+ tag_vectors[relation["arg1_start"]][relation["arg2_start"]] = tag
397
+ # Yields examples as (key, example) tuples
398
+ yield key, {
399
+ "id": key,
400
+ "tokens": tokens,
401
+ "tags": tag_vectors
402
+ }
403
+ elif self.config.name == "re":
404
+ for relation in relations:
405
+ doc_example_idx += 1
406
+ key = f"{doc_id}_{doc_example_idx}"
407
+ # Yields examples as (key, example) tuples
408
+ example = {
409
+ "id": key,
410
+ "tokens": tokens
411
+ }
412
+ for k, v in relation.items():
413
+ example[k] = v
414
+ yield key, example
415
+ else: # NER config
416
+ doc_example_idx += 1
417
+ key = f"{doc_id}_{doc_example_idx}"
418
+ # Yields examples as (key, example) tuples
419
+ yield key, {
420
+ "id": key,
421
+ "tokens": tokens,
422
+ "tags": tags
423
  }
424
 
425
  assert all(synonym_groups_used) and all(hyponyms_used), \
426
  f"Annotations were lost: {len([e for e in synonym_groups_used if e])} synonym annotations," \