Datasets:
Tasks:
Token Classification
Sub-tasks:
named-entity-recognition
Languages:
English
Multilinguality:
monolingual
Size Categories:
1K<n<10K
Language Creators:
found
Annotations Creators:
expert-generated
Tags:
License:
The following keys are not strictly needed by the participants:
- `"label_studio_id"`, `"section"`, `"bibcode"`: references for internal NASA/ADS use.

## Instructions for Workshop participants:

Predictions must be given in the same JSON Lines format, must include the same `"unique_id"` and `"tokens"` keys from the dataset, as well as the list of predicted NER tags under the `"pred_ner_tags"` key.

### How-To

How to compute your scores on the training data:

2. pass the references and predictions datasets to the `compute_MCC()` and `compute_seqeval()` functions (from the `.py` files with the same names).

How to load the data (assuming `./WIESP2022-NER-DEV.jsonl` is in the current directory; change as needed):

- in Python (as a list of dictionaries):
```
import json
with open("./WIESP2022-NER-DEV.jsonl", 'r') as f:
    wiesp_dev_json = [json.loads(l) for l in f]
```
- into Hugging Face (as a Hugging Face `Dataset`):
```
from datasets import Dataset
wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
```