Tasks: Token Classification
Sub-tasks: named-entity-recognition
Languages: English
Multilinguality: monolingual
Size Categories: 1K<n<10K
Language Creators: found
Annotations Creators: expert-generated
License: cc-by-4.0
## Dataset Description

Datasets are in JSON Lines format (each line is a JSON dictionary).

The datasets are formatted similarly to the CoNLL-2003 format in that they associate each token with an NER tag. The tags follow the "B-" and "I-" convention of the IOB2 tagging scheme.

Each entry consists of a dictionary with the following keys:

- `"unique_id"`: a unique identifier for this data sample. Must be included in the predictions.
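To make the IOB2 convention concrete, here is a small sketch that groups a tagged token sequence into entity spans. The token and tag values below are illustrative examples, not taken from the dataset:

```python
def iob2_to_spans(tokens, tags):
    """Group IOB2-tagged tokens into (entity_type, text) spans.

    "B-X" begins an entity of type X, "I-X" continues it, and "O"
    marks a token outside any entity.
    """
    spans, current_type, current_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_tokens:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            current_tokens.append(token)
        else:  # "O", or an I- tag that does not continue the open span
            if current_tokens:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_tokens:
        spans.append((current_type, " ".join(current_tokens)))
    return spans

# Hypothetical example sentence with made-up entity types:
spans = iob2_to_spans(
    ["Hubble", "Space", "Telescope", "observed", "M31"],
    ["B-Telescope", "I-Telescope", "I-Telescope", "O", "B-CelestialObject"],
)
print(spans)
```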
```
from datasets import Dataset

wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
```
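The same file can also be read without the `datasets` library, since JSON Lines is just one JSON dictionary per line. A minimal sketch (the stand-in file below only carries the `"unique_id"` key shown above; with the real data you would pass `"./WIESP2022-NER-DEV.jsonl"` instead):

```python
import json
import tempfile

def read_jsonl(path):
    """Load a JSON Lines file: one JSON dictionary per line."""
    with open(path, "r") as f:
        return [json.loads(line) for line in f if line.strip()]

# Demonstrate on a tiny stand-in file rather than the real dataset.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as tmp:
    tmp.write('{"unique_id": "sample-1"}\n{"unique_id": "sample-2"}\n')
    path = tmp.name

samples = read_jsonl(path)
print(len(samples), samples[0]["unique_id"])
```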
## File list

```
└── WIESP2022-NER-TRAINING.jsonl : 1753 samples for training.
└── WIESP2022-NER-DEV.jsonl : 20 samples for development.
└── WIESP2022-NER-VALIDATION-NO-LABELS.jsonl : 1366 samples for validation without the NER labels. Used for the WIESP2022 workshop.
└── README.MD : this file.
└── scoring-scripts/ : scripts used to evaluate submissions.
    └── compute_MCC.py : computes the Matthews correlation coefficient between two datasets.
    └── compute_seqeval.py : computes the seqeval scores (precision, recall, and F1, overall and for each class) between two datasets.
```
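For intuition about the MCC metric mentioned above, here is the underlying binary formula from scratch. This is only an illustration of the formula, not the repository's `compute_MCC.py` (which operates on full datasets), and the input labels are made up:

```python
import math

def mcc(y_true, y_pred):
    """Binary Matthews correlation coefficient, computed from the
    confusion-matrix counts: (TP*TN - FP*FN) / sqrt of the product
    of the four marginal totals."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Made-up gold vs. predicted labels:
print(mcc([1, 1, 0, 0], [1, 0, 0, 0]))
```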