Tasks: Token Classification
Sub-tasks: named-entity-recognition
Languages: English
Multilinguality: monolingual
Size Categories: 1K<n<10K
Language Creators: found
Annotations Creators: expert-generated
License: cc-by-4.0

instructions for participants + typos
README.md CHANGED
---
# Dataset for the first Workshop on Information Extraction from Scientific Publications (WIESP/2022)

## Dataset Description
Datasets are in JSON Lines format (each line is a JSON dictionary).
The datasets are formatted similarly to the CoNLL2003 format in that they associate each token with an NER tag. The tags follow the "B-" and "I-" convention from the IOB2 syntax.

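For illustration, a single line could look like this hypothetical record (the tokens, tags, and the "Telescope" entity type here are made up for the sketch, not taken from the actual dataset):
```
{"unique_id": "example_0001", "tokens": ["We", "used", "data", "from", "the", "Hubble", "Space", "Telescope"], "ner_tags": ["O", "O", "O", "O", "O", "B-Telescope", "I-Telescope", "I-Telescope"]}
```
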
The following keys are not strictly needed by the participants:
- `"ner_ids"`: the pre-computed list of ids corresponding to the ner_tags, as given by the dictionary in ner_tags.json (see the sketch after this list)
- `"label_studio_id"`, `"section"`, `"bibcode"`: references for internal NASA/ADS use.

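A minimal sketch of recovering the tag/id mapping, assuming `ner_tags.json` holds the {tag: id} dictionary mentioned above (the exact layout of that file is an assumption here):
```
import json

# assumption: ner_tags.json maps tag strings to integer ids, e.g. {"O": 0, ...}
with open("./ner_tags.json", "r") as f:
    tag_to_id = json.load(f)
id_to_tag = {i: t for t, i in tag_to_id.items()}

# a record's "ner_ids" can then be recovered from its "ner_tags":
# ner_ids = [tag_to_id[t] for t in record["ner_tags"]]
```
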
## Instructions for Workshop participants:
Predictions must be given in the same JSON Lines format and must include the same `"unique_id"` and `"tokens"` keys from the dataset, as well as the list of predicted NER tags under the `"pred_ner_tags"` key.

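A minimal sketch of producing such a file, where `my_model()` is a hypothetical stand-in for your own tagger and the output file name is arbitrary:
```
import json

# my_model() is a hypothetical stand-in for your own tagger
with open("./WIESP2022-NER-DEV.jsonl", "r") as f_in, \
     open("./my-predictions.jsonl", "w") as f_out:
    for line in f_in:
        record = json.loads(line)
        pred = {
            "unique_id": record["unique_id"],
            "tokens": record["tokens"],
            "pred_ner_tags": my_model(record["tokens"]),
        }
        f_out.write(json.dumps(pred) + "\n")
```
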
### How-To
How to compute your scores on the training data:
1. If not already done, convert your predictions to a Huggingface dataset with the format described above.
2. Pass the references and predictions datasets to the `compute_MCC()` and `compute_seqeval()` functions (from the `.py` files with the same names); see the sketch after this list.

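A sketch of step 2, assuming the functions can be imported from `compute_MCC.py` and `compute_seqeval.py` and take the two datasets as (references, predictions); the exact import paths, call signatures, and file names below are assumptions, so check those files:
```
from datasets import Dataset

# assumptions: import paths, call signature, and file names
from compute_MCC import compute_MCC
from compute_seqeval import compute_seqeval

references = Dataset.from_json("./WIESP2022-NER-TRAINING.jsonl")  # hypothetical file name
predictions = Dataset.from_json("./my-predictions.jsonl")         # your predictions file

print(compute_seqeval(references, predictions))
print(compute_MCC(references, predictions))
```
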
How to load the data (assuming `./WIESP2022-NER-DEV.jsonl` is in the current directory; change as needed).

How to load into Python (as a list of dictionaries):
```
import json
# one JSON dictionary per line
with open("./WIESP2022-NER-DEV.jsonl", 'r') as f:
    wiesp_dev_json = [json.loads(l) for l in list(f)]
```

How to load into Huggingface (as a Huggingface Dataset):
```
from datasets import Dataset
wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
```