---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset for the first Workshop on Information Extraction from Scientific Publications (WIESP/2022)

## Dataset Descriptions

Datasets are in JSON Lines format (each line is a JSON dictionary).
The datasets are formatted similarly to the CoNLL2003 format in that they associate each token with an NER tag. The tags follow the "B-" and "I-" convention from the IOB2 syntax.

Each entry consists of a dictionary with the following keys:
- `"unique_id"`: a unique identifier for this data sample. Must be included in the predictions.
- `"tokens"`: the list of tokens (strings) that form the text of this sample. Must be included in the predictions.
- `"ner_tags"`: the list of NER tags (in IOB2 format).

The following keys are not strictly needed by the participants:
- `"ner_ids"`: the pre-computed list of ids corresponding to the `ner_tags`, as given by the dictionary in `ner_tags.json`.
- `"label_studio_id"`, `"section"`, `"bibcode"`: references for internal NASA/ADS use.

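For concreteness, here is a minimal sketch of what one entry might look like. All values below (the id, tokens, and the `Telescope` label name) are invented for illustration; they are not taken from the dataset's actual label set.

```python
import json

# Hypothetical entry in the WIESP JSON Lines format (invented values).
entry = {
    "unique_id": "example_0001",
    "tokens": ["Observations", "with", "the", "Hubble", "Space", "Telescope"],
    # IOB2: "B-" opens an entity span, "I-" continues it, "O" marks
    # tokens outside any entity.
    "ner_tags": ["O", "O", "O", "B-Telescope", "I-Telescope", "I-Telescope"],
}

# Each token is paired with exactly one tag.
assert len(entry["tokens"]) == len(entry["ner_tags"])

# One line of a .jsonl file is the JSON serialization of one such dictionary.
print(json.dumps(entry))
```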
## How-To

How to load the data (assuming `./WIESP2022-NER-DEV.jsonl` is in the current directory; change as needed).

How to load into Python (as a list of dictionaries):
```python
import json
with open('./WIESP2022-NER-DEV.jsonl', 'r') as f:
    wiesp_dev_json = [json.loads(line) for line in f]
```
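Once loaded, the `ner_ids` of an entry can be recomputed from its `ner_tags` using the dictionary in `ner_tags.json`. A minimal sketch, using an invented three-tag mapping and entry rather than the file's actual contents:

```python
# Hypothetical tag-to-id mapping in the style of ner_tags.json
# (the real file defines the full WIESP label set).
tag_to_id = {"O": 0, "B-Telescope": 1, "I-Telescope": 2}

# Recompute ner_ids from ner_tags for one invented entry.
ner_tags = ["O", "B-Telescope", "I-Telescope"]
ner_ids = [tag_to_id[t] for t in ner_tags]
print(ner_ids)  # [0, 1, 2]
```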

How to load into Hugging Face (as a Hugging Face `Dataset`):
```python
from datasets import Dataset
wiesp_dev_from_json = Dataset.from_json(path_or_paths='./WIESP2022-NER-DEV.jsonl')
```