Datasets: adsabs/WIESP2022-NER
Tasks: Token Classification
Modalities: Text
Formats: parquet
Sub-tasks: named-entity-recognition
Languages: English
Size: 1K - 10K
License:
added latest files, python syntax color
README.md CHANGED
@@ -19,7 +19,7 @@ task_ids:
 - named-entity-recognition
 ---
 # Dataset for the first <a href="https://ui.adsabs.harvard.edu/WIESP/" style="color:blue">Workshop on Information Extraction from Scientific Publications (WIESP/2022)</a>.
-
+
 
 ## Dataset Description
 Datasets with text fragments from astrophysics papers, provided by the [NASA Astrophysical Data System](https://ui.adsabs.harvard.edu/) with manually tagged astronomical facilities and other entities of interest (e.g., celestial objects).
@@ -36,20 +36,26 @@ The following keys are not strictly needed by the participants:
 - `"label_studio_id"`, `"section"`, `"bibcode"`: references for internal NASA/ADS use.
 
 ## Instructions for Workshop participants:
-How to load the data:
-
+How to load the data using the Huggingface library:
+```python
+from datasets import load_dataset
+dataset = load_dataset("adsabs/WIESP2022-NER")
+```
+
+How to load the data if you cloned the repository locally:
+(assuming `./WIESP2022-NER-DEV.jsonl` is in the current directory; change as needed)
 - in python (as a list of dictionaries):
-```
+```python
 import json
 with open("./WIESP2022-NER-DEV.jsonl", 'r') as f:
     wiesp_dev_json = [json.loads(l) for l in f]
 ```
 - into Huggingface (as a Huggingface Dataset):
-```
+```python
 from datasets import Dataset
 wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
 ```
-
+
 
 How to compute your scores on the training data:
 1. format your predictions as a list of dictionaries, each with the same `"unique_id"` and `"tokens"` keys from the dataset, as well as the list of predicted NER tags under the `"pred_ner_tags"` key (see `WIESP2022-NER-DEV-sample-predictions.jsonl` for an example).
@@ -68,8 +74,11 @@ To get scores on the validation data, zip your predictions file (a single `.jsonl` file)
 ├── WIESP2022-NER-DEV.jsonl : 20 samples for development.
 ├── WIESP2022-NER-DEV-sample-predictions.jsonl : an example file with properly formatted predictions on the development data.
 ├── WIESP2022-NER-VALIDATION-NO-LABELS.jsonl : 1366 samples for validation without the NER labels. Used for the WIESP2022 workshop.
-├──
-├──
+├── WIESP2022-NER-VALIDATION.jsonl : 1366 samples for validation.
+├── WIESP2022-NER-TESTING-NO-LABELS.jsonl : 2505 samples for testing without the NER labels. Used for the WIESP2022 workshop.
+├── WIESP2022-NER-TESTING.jsonl : 2505 samples for testing.
+├── README.MD : this file.
+├── tag_definitions.md : short descriptions and examples of the tags used in the task.
 └── scoring-scripts/ : scripts used to evaluate submissions.
     ├── compute_MCC.py : computes the Matthews correlation coefficient between two datasets.
     └── compute_seqeval.py : computes the seqeval scores (precision, recall, f1, overall and for each class) between two datasets.
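As a worked example of the scoring step above, here is a minimal sketch of producing a properly formatted predictions file from the development set; `predict_tags` and the output file name are illustrative placeholders, not part of the dataset tooling:

```python
import json

def predict_tags(tokens):
    # Placeholder "model" that tags every token as outside any entity ("O").
    # Replace with real model output; it must return one NER tag per token.
    return ["O"] * len(tokens)

# Read the development samples (one JSON object per line).
with open("./WIESP2022-NER-DEV.jsonl", "r") as f:
    samples = [json.loads(line) for line in f]

# Each prediction keeps the dataset's "unique_id" and "tokens" keys and adds
# the predicted tag sequence under the "pred_ner_tags" key.
with open("./my-predictions.jsonl", "w") as f:
    for sample in samples:
        record = {
            "unique_id": sample["unique_id"],
            "tokens": sample["tokens"],
            "pred_ner_tags": predict_tags(sample["tokens"]),
        }
        f.write(json.dumps(record) + "\n")
```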
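To package the result for submission (the validation instructions above ask for a single zipped `.jsonl` file), the standard library is enough; again the file names are illustrative:

```python
from zipfile import ZipFile

# Bundle the predictions file into a zip archive for upload.
with ZipFile("my-predictions.zip", "w") as zf:
    zf.write("my-predictions.jsonl")
```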
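The reference implementation for the first metric is `scoring-scripts/compute_MCC.py`; purely as an illustration of the metric itself, the Matthews correlation coefficient can be computed over the flattened token-level tags, here with scikit-learn. The tag names below are made up; see `tag_definitions.md` for the real tag set:

```python
from sklearn.metrics import matthews_corrcoef

# Illustrative gold and predicted IOB tag sequences (invented tag names).
gold = [["B-Facility", "I-Facility", "O"], ["O", "B-CelestialObject"]]
pred = [["B-Facility", "O", "O"], ["O", "B-CelestialObject"]]

# MCC is a single token-level score, so flatten the sequences first.
flat_gold = [tag for seq in gold for tag in seq]
flat_pred = [tag for seq in pred for tag in seq]
print(matthews_corrcoef(flat_gold, flat_pred))
```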
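Similarly, the scores that `compute_seqeval.py` reports are presumably built on the `seqeval` library, which evaluates at the entity level; a self-contained sketch with the same invented tag names:

```python
from seqeval.metrics import classification_report, f1_score

# Illustrative gold and predicted IOB tag sequences (invented tag names).
gold = [["B-Facility", "I-Facility", "O"], ["O", "B-CelestialObject"]]
pred = [["B-Facility", "I-Facility", "O"], ["B-CelestialObject", "O"]]

print(f1_score(gold, pred))               # overall F1 over entities
print(classification_report(gold, pred))  # per-class precision, recall, F1
```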