---
annotations_creators:
  - expert-generated
language_creators:
  - found
languages:
  - en
licenses:
  - cc-by-4.0
multilinguality:
  - monolingual
pretty_name: |
  WIESP2022-NER
size_categories:
  - 1K<n<10K
source_datasets: []
task_categories:
  - token-classification
task_ids:
  - named-entity-recognition
---

# Dataset for the first Workshop on Information Extraction from Scientific Publications (WIESP/2022)

## Dataset Description

Datasets are in JSON Lines format (each line is a JSON dictionary). The datasets are formatted similarly to the CoNLL-2003 format in that they associate each token with an NER tag. The tags follow the "B-" and "I-" convention of the IOB2 scheme.
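To make the B-/I- convention concrete, here is a minimal, hypothetical decoder that groups IOB2-tagged tokens into entity spans (the `Telescope` label and the tokens are illustrative, not taken from the dataset):

```python
def iob2_spans(tokens, tags):
    """Group tokens into (label, text) entity spans based on B-/I- prefixes."""
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):            # "B-" always opens a new span
            if current:
                spans.append(current)
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current and tag[2:] == current[0]:
            current[1].append(tok)          # "I-" continues the open span
        else:                               # "O" (or a stray "I-") closes it
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(label, " ".join(toks)) for label, toks in spans]
```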

Each entry consists of a dictionary with the following keys:

- `"unique_id"`: a unique identifier for this data sample. Must be included in the predictions.
- `"tokens"`: the list of tokens (strings) that form the text of this sample. Must be included in the predictions.
- `"ner_tags"`: the list of NER tags (in IOB2 format).

The following keys are not strictly needed by the participants:

- `"ner_ids"`: the pre-computed list of ids corresponding to the `ner_tags`, as given by the dictionary in `ner_tags.json`.
- `"label_studio_id"`, `"section"`, `"bibcode"`: references for internal NASA/ADS use.
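For illustration, a single line of the JSONL files has this shape (the `unique_id`, tokens, and tags below are invented for this sketch, not real dataset content, and the optional keys are omitted):

```python
import json

# One hypothetical JSONL line with only the required keys.
line = ('{"unique_id": "sample-0001", '
        '"tokens": ["Observations", "with", "Chandra"], '
        '"ner_tags": ["O", "O", "B-Telescope"]}')

entry = json.loads(line)
assert len(entry["tokens"]) == len(entry["ner_tags"])  # one tag per token
```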

### Instructions for Workshop participants

Predictions must be given in the same JSON Lines format. Each entry must include the same `"unique_id"` and `"tokens"` values as the dataset, as well as the list of predicted NER tags under the `"pred_ner_tags"` key.
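A minimal sketch of writing such a predictions file with the standard library (the entries and predicted tags here are placeholders, not real model output):

```python
import json

# Hypothetical dataset entries and model predictions.
references = [
    {"unique_id": "sample-0001",
     "tokens": ["Observations", "with", "Chandra"],
     "ner_tags": ["O", "O", "B-Telescope"]},
]
predicted_tags = [["O", "O", "B-Telescope"]]

with open("predictions.jsonl", "w") as f:
    for ref, pred in zip(references, predicted_tags):
        f.write(json.dumps({
            "unique_id": ref["unique_id"],   # carried over from the dataset
            "tokens": ref["tokens"],         # carried over from the dataset
            "pred_ner_tags": pred,           # the model's IOB2 tags
        }) + "\n")
```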

## How-To

How to compute your scores on the training data:

  1. If not already done, convert your predictions to a Hugging Face dataset with the format described above.
  2. Pass the reference and prediction datasets to the `compute_MCC()` and `compute_seqeval()` functions (from the `.py` files with the same names).
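For intuition about the first metric, here is a self-contained sketch of the multiclass Matthews correlation coefficient computed over flattened tag lists. It is only illustrative; the provided `compute_MCC.py` script should be used for official scores:

```python
import math
from collections import Counter

def mcc(true_tags, pred_tags):
    """Multiclass MCC over two equal-length lists of tag strings."""
    assert len(true_tags) == len(pred_tags)
    s = len(true_tags)                                      # total tokens
    c = sum(t == p for t, p in zip(true_tags, pred_tags))   # correct tokens
    t_counts = Counter(true_tags)                           # true count per class
    p_counts = Counter(pred_tags)                           # predicted count per class
    classes = set(true_tags) | set(pred_tags)
    cov = c * s - sum(p_counts[k] * t_counts[k] for k in classes)
    denom = math.sqrt((s * s - sum(v * v for v in p_counts.values()))
                      * (s * s - sum(v * v for v in t_counts.values())))
    return cov / denom if denom else 0.0
```

Perfect agreement yields 1.0, chance-level agreement yields about 0.0, and total disagreement approaches -1.0.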

How to load the data (assuming `./WIESP2022-NER-DEV.jsonl` is in the current directory; change as needed):

- In Python (as a list of dictionaries):

```python
import json

with open("./WIESP2022-NER-DEV.jsonl", "r") as f:
    wiesp_dev_json = [json.loads(line) for line in f]
```

- Into Hugging Face (as a Hugging Face `Dataset`):

```python
from datasets import Dataset

wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
```

## File list

```
├── WIESP2022-NER-TRAINING.jsonl : 1753 samples for training.
├── WIESP2022-NER-DEV.jsonl : 20 samples for development.
├── WIESP2022-NER-VALIDATION-NO-LABELS.jsonl : 1366 samples for validation, without the NER labels. Used for the WIESP2022 workshop.
├── README.MD : this file.
└── scoring-scripts/ : scripts used to evaluate submissions.
    ├── compute_MCC.py : computes the Matthews correlation coefficient between two datasets.
    └── compute_seqeval.py : computes the seqeval scores (precision, recall, F1; overall and per class) between two datasets.
```