---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- en
licenses:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'WIESP2022-NER'
size_categories:
- 1K<n<10K
source_datasets: []
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---

# Dataset for the first <a href="https://ui.adsabs.harvard.edu/WIESP/" style="color:blue">Workshop on Information Extraction from Scientific Publications (WIESP/2022)</a>

## Dataset Description
Datasets with text fragments from astrophysics papers, provided by the [NASA Astrophysical Data System](https://ui.adsabs.harvard.edu/), with manually tagged astronomical facilities and other entities of interest (e.g., celestial objects).
Datasets are in JSON Lines format (each line is a JSON dictionary).
The datasets are formatted similarly to the CoNLL-2003 format: each token is associated with an NER tag. The tags follow the "B-" and "I-" convention from the [IOB2 syntax](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_%28tagging%29), illustrated in the sketch below.
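
A minimal illustration of the IOB2 convention (the entity label `Facility` is a placeholder chosen for this example, not necessarily one of the dataset's actual classes):

```python
# Illustrative IOB2 tagging: a multi-token entity opens with a "B-" tag
# and continues with "I-" tags; tokens outside any entity are tagged "O".
# "Facility" is a placeholder label for this sketch.
tokens = ["Observations", "from", "the", "Hubble", "Space", "Telescope"]
ner_tags = ["O", "O", "O", "B-Facility", "I-Facility", "I-Facility"]

for token, tag in zip(tokens, ner_tags):
    print(f"{token:>12}  {tag}")
```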

Each entry consists of a dictionary with the following keys:
- `"unique_id"`: a unique identifier for this data sample. Must be included in the predictions.
- `"tokens"`: the list of tokens (strings) that form the text of this sample. Must be included in the predictions.
- `"ner_tags"`: the list of NER tags, in the IOB2 format described above.

The following keys are not strictly needed by the participants:
- `"label_studio_id"`, `"section"`, `"bibcode"`: references for internal NASA/ADS use.
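
As a sketch, one line of a labelled `.jsonl` file looks roughly like this (values invented for illustration; real entries are longer and also carry the internal NASA/ADS keys above):

```python
import json

# Hypothetical sample line; "Facility" is again a placeholder label.
line = '{"unique_id": "example_0001", "tokens": ["We", "observed", "with", "VLT"], "ner_tags": ["O", "O", "O", "B-Facility"]}'
entry = json.loads(line)
print(entry["unique_id"], entry["tokens"], entry["ner_tags"])
```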

## Instructions for Workshop participants:
How to load the data:
(assuming `./WIESP2022-NER-DEV.jsonl` is in the current directory; change as needed)
- python (as a list of dictionaries):
```python
import json

# each line of the JSON Lines file is one sample
with open("./WIESP2022-NER-DEV.jsonl", 'r') as f:
    wiesp_dev_json = [json.loads(line) for line in f]
```
- Huggingface (as a `datasets.Dataset`):
```python
from datasets import Dataset
wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
```

(NOTE: currently loading from the Huggingface Dataset Hub directly does not work. You need to clone the repository locally.)
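
For example, with git (assuming the repository lives at `fgrezes/WIESP2022-NER` on the Hub; adjust the path to the one shown on the dataset page):

```
git clone https://huggingface.co/datasets/fgrezes/WIESP2022-NER
```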
 
How to compute your scores on the training data:
1. Format your predictions as a list of dictionaries, each with the same `"unique_id"` and `"tokens"` keys from the dataset, as well as the list of predicted NER tags under the `"pred_ner_tags"` key (see `WIESP2022-NER-DEV-sample-predictions.jsonl` for an example).
2. Pass the references and predictions datasets to the `compute_MCC()` and `compute_seqeval()` functions (from the `.py` files with the same names), as sketched after this list.
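
A minimal sketch of step 2, assuming the scoring scripts are importable from the current directory; the exact signatures of `compute_MCC()` and `compute_seqeval()` should be checked against the `.py` files themselves:

```python
from datasets import Dataset

# Hypothetical call pattern; see scoring-scripts/compute_MCC.py and
# scoring-scripts/compute_seqeval.py for the actual signatures.
from compute_MCC import compute_MCC
from compute_seqeval import compute_seqeval

references = Dataset.from_json(path_or_paths="./WIESP2022-NER-TRAINING.jsonl")
predictions = Dataset.from_json(path_or_paths="./my-predictions.jsonl")  # placeholder file name

print(compute_MCC(references, predictions))
print(compute_seqeval(references, predictions))
```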

Requirements to run the scoring scripts:
- [NumPy](https://numpy.org/install/)
- [scikit-learn](https://scikit-learn.org/stable/install.html)
- [seqeval](https://github.com/chakki-works/seqeval#installation)
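
All three are available from PyPI, so in a standard Python environment they can be installed with pip:

```
pip install numpy scikit-learn seqeval
```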

To get scores on the validation data, zip your predictions file (a single `.jsonl` file formatted following the same instructions as above) and upload the `.zip` file to the [Codalab](https://codalab.lisn.upsaclay.fr/competitions/5062) competition.
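
The zipping step can be done from python, for instance (`my-predictions.jsonl` is a placeholder for your own predictions file):

```python
import zipfile

# Bundle the single predictions file into a .zip archive for upload.
with zipfile.ZipFile("submission.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write("my-predictions.jsonl")
```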

## File list
```
├── WIESP2022-NER-TRAINING.jsonl : 1753 samples for training.
├── WIESP2022-NER-DEV.jsonl : 20 samples for development.
├── WIESP2022-NER-DEV-sample-predictions.jsonl : an example file with properly formatted predictions on the development data.
├── WIESP2022-NER-VALIDATION-NO-LABELS.jsonl : 1366 samples for validation without the NER labels. Used for the WIESP2022 workshop.
├── README.MD : this file.
└── scoring-scripts/ : scripts used to evaluate submissions.
    ├── compute_MCC.py : computes the Matthews correlation coefficient between two datasets.
    └── compute_seqeval.py : computes the seqeval scores (precision, recall, f1, overall and for each class) between two datasets.
```