---
license: cc-by-4.0
language:
- en
task_categories:
- token-classification
---
# Dataset for the first Workshop on Information Extraction from Scientific Publications (WIESP/2022)

## Dataset Description
Datasets are in JSON Lines format (each line is a JSON dictionary). The datasets are formatted similarly to the CoNLL-2003 format in that they associate each token with an NER tag. The tags follow the "B-" and "I-" convention of the IOB2 scheme.
Each entry consists of a dictionary with the following keys:

- `"unique_id"`: a unique identifier for this data sample. Must be included in the predictions.
- `"tokens"`: the list of tokens (strings) that form the text of this sample. Must be included in the predictions.
- `"ner_tags"`: the list of NER tags (in IOB2 format).

The following keys are not strictly needed by the participants:

- `"ner_ids"`: the pre-computed list of ids corresponding to the `ner_tags`, as given by the dictionary in `ner_tags.json`.
- `"label_studio_id"`, `"section"`, `"bibcode"`: references for internal NASA/ADS use.
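For illustration, an entry might look like the following (the tokens, tag names, and ids here are hypothetical examples, not actual values from the dataset):

```python
import json

# Hypothetical entry; tokens, tag names, and ids are illustrative assumptions,
# not actual values from the WIESP dataset.
entry = {
    "unique_id": "example_0001",
    "tokens": ["We", "observed", "NGC", "1275", "with", "Chandra", "."],
    "ner_tags": ["O", "O", "B-CelestialObject", "I-CelestialObject", "O", "B-Telescope", "O"],
    "ner_ids": [0, 0, 3, 4, 0, 7, 0],
}

# One JSON Lines record is just this dictionary serialized on a single line.
line = json.dumps(entry)
```

Note that the token, tag, and id lists are aligned: they all have one element per token.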
## Instructions for Workshop participants

Predictions must be given in the same JSON Lines format, must include the same `"unique_id"` and `"tokens"` keys from the dataset, and must provide the list of predicted NER tags under the `"pred_ner_tags"` key.
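Concretely, a predictions file can be written like this (a minimal sketch; the sample values and the output filename are hypothetical):

```python
import json

# Hypothetical predictions: one dictionary per sample, keeping "unique_id" and
# "tokens" from the dataset and adding "pred_ner_tags" (values are illustrative).
predictions = [
    {
        "unique_id": "example_0001",
        "tokens": ["We", "observed", "NGC", "1275", "."],
        "pred_ner_tags": ["O", "O", "B-CelestialObject", "I-CelestialObject", "O"],
    },
]

# JSON Lines: one JSON dictionary per line.
with open("predictions.jsonl", "w") as f:
    for pred in predictions:
        f.write(json.dumps(pred) + "\n")
```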
## How-To

How to compute your scores on the training data:

- If not already done, convert your predictions to a Hugging Face dataset with the format described above.
- Pass the references and predictions datasets to the `compute_MCC()` and `compute_seqeval()` functions (from the `.py` files with the same names).
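The seqeval scores are entity-level: a prediction counts as correct only when both the entity type and the exact token span match a reference entity. A minimal sketch of that matching logic (an illustration of the idea only, not the actual `compute_seqeval.py`):

```python
def iob2_entities(tags):
    """Extract (type, start, end) entity spans from one IOB2 tag sequence."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and etype != tag[2:]):
            # "B-" always starts a new entity; a stray "I-" is treated as a start.
            if etype is not None:
                entities.append((etype, start, i))
            etype, start = tag[2:], i
        elif tag == "O":
            if etype is not None:
                entities.append((etype, start, i))
            etype, start = None, None
    if etype is not None:
        entities.append((etype, start, len(tags)))
    return entities

def entity_f1(ref_tag_lists, pred_tag_lists):
    """Micro F1 over exact (sample, type, span) entity matches."""
    ref, pred = set(), set()
    for k, (r, p) in enumerate(zip(ref_tag_lists, pred_tag_lists)):
        ref |= {(k,) + e for e in iob2_entities(r)}
        pred |= {(k,) + e for e in iob2_entities(p)}
    tp = len(ref & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

For example, a prediction that tags only the first token of a two-token entity gets no credit for that entity, which is why entity-level scores are stricter than token-level accuracy.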
How to load the data (assuming `./WIESP2022-NER-DEV.jsonl` is in the current directory, change as needed):

- in Python (as a list of dictionaries):

```python
import json

with open("./WIESP2022-NER-DEV.jsonl", "r") as f:
    wiesp_dev_json = [json.loads(line) for line in f]
```

- into Hugging Face (as a `datasets.Dataset`):

```python
from datasets import Dataset

wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
```
## File list

```
├── WIESP2022-NER-TRAINING.jsonl : 1753 samples for training.
├── WIESP2022-NER-DEV.jsonl : 20 samples for development.
├── WIESP2022-NER-VALIDATION-NO-LABELS.jsonl : 1366 samples for validation without the NER labels. Used for the WIESP2022 workshop.
├── README.MD : this file.
└── scoring-scripts/ : scripts used to evaluate submissions.
    ├── compute_MCC.py : computes the Matthews correlation coefficient between two datasets.
    └── compute_seqeval.py : computes the seqeval scores (precision, recall, f1, overall and for each class) between two datasets.
```