fgrezes committed on
Commit 8078a6f · 1 Parent(s): a35c275

added latest files, python syntax color

Files changed (1)
README.md +17 -8
README.md CHANGED
@@ -19,7 +19,7 @@ task_ids:
 - named-entity-recognition
 ---
 # Dataset for the first <a href="https://ui.adsabs.harvard.edu/WIESP/" style="color:blue">Workshop on Information Extraction from Scientific Publications (WIESP/2022)</a>.
-**(NOTE: loading from the Huggingface Dataset Hub directly does not work. You need to clone the repository locally.)**
+
 
 ## Dataset Description
 Datasets with text fragments from astrophysics papers, provided by the [NASA Astrophysical Data System](https://ui.adsabs.harvard.edu/) with manually tagged astronomical facilities and other entities of interest (e.g., celestial objects).
@@ -36,20 +36,26 @@ The following keys are not strictly needed by the participants:
 - `"label_studio_id"`, `"section"`, `"bibcode"`: references for internal NASA/ADS use.
 
 ## Instructions for Workshop participants:
-How to load the data:
-(assuming `./WIESP2022-NER-DEV.jsonl` is in the current directory, change as needed)
+How to load the data using the Huggingface library:
+```python
+from datasets import load_dataset
+dataset = load_dataset("adsabs/WIESP2022-NER")
+```
+
+How to load the data if you cloned the repository locally:
+(assuming you `./WIESP2022-NER-DEV.jsonl` is in the current directory, change as needed)
 - python (as list of dictionaries):
-```
+```python
 import json
 with open("./WIESP2022-NER-DEV.jsonl", 'r') as f:
     wiesp_dev_json = [json.loads(l) for l in list(f)]
 ```
 - into Huggingface (as a Huggingface Dataset):
-```
+```python
 from datasets import Dataset
 wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
 ```
-(NOTE: loading from the Huggingface Dataset Hub directly does not work. You need to clone the repository locally.)
+
 
 How to compute your scores on the training data:
 1. format your predictions as a list of dictionaries, each with the same `"unique_id"` and `"tokens"` keys from the dataset, as well as the list of predicted NER tags under the `"pred_ner_tags"` key (see `WIESP2022-NER-DEV-sample-predictions.jsonl` for an example).
@@ -68,8 +74,11 @@ To get scores on the validation data, zip your predictions file (a single `.json
 ├── WIESP2022-NER-DEV.jsonl : 20 samples for development.
 ├── WIESP2022-NER-DEV-sample-predictions.jsonl : an example file with properly formatted predictions on the development data.
 ├── WIESP2022-NER-VALIDATION-NO-LABELS.jsonl : 1366 samples for validation without the NER labels. Used for the WIESP2022 workshop.
-├── README.MD: this file.
-├── tag_definitions.txt: short descriptions and examples of the tags used in the task.
+├── WIESP2022-NER-VALIDATION.jsonl : 1366 samples for validation
+├── WIESP2022-NER-TESTING-NO-LABELS.jsonl : 2505 samples for testing without the NER labels. Used for the WIESP2022 workshop.
+├── WIESP2022-NER-TESTING.jsonl : 2505 samples for testing
+├── README.MD : this file.
+├── tag_definitions.md : short descriptions and examples of the tags used in the task.
 └── scoring-scripts/ : scripts used to evaluate submissions.
 ├── compute_MCC.py : computes the Matthews correlation coefficient between two datasets.
 └── compute_seqeval.py : computes the seqeval scores (precision, recall, f1, overall and for each class) between two datasets.
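For reference, a minimal sketch of the prediction format described in the diff above: a list of dictionaries with the dataset's `"unique_id"` and `"tokens"` keys plus a `"pred_ner_tags"` key, written as JSON Lines. The all-`"O"` tags and the output file name below are placeholders, not part of the dataset or the commit:

```python
import json
from datasets import Dataset

# Load the development split from the cloned repository.
dev = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")

# Write one JSON object per line with the keys the scoring scripts expect.
with open("./my-dev-predictions.jsonl", "w") as f:
    for sample in dev:
        prediction = {
            "unique_id": sample["unique_id"],
            "tokens": sample["tokens"],
            # Placeholder tags; replace with your model's output (one tag per token).
            "pred_ner_tags": ["O"] * len(sample["tokens"]),
        }
        f.write(json.dumps(prediction) + "\n")
```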
 
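The repository's `scoring-scripts/compute_seqeval.py` and `compute_MCC.py` are the authoritative scorers; the sketch below only illustrates the same metrics by calling the seqeval and scikit-learn libraries directly. It assumes the gold tags are stored under a `"ner_tags"` key, which is not shown in this diff:

```python
import json
from seqeval.metrics import classification_report
from sklearn.metrics import matthews_corrcoef

def read_jsonl(path):
    with open(path) as f:
        return [json.loads(line) for line in f]

truth = read_jsonl("./WIESP2022-NER-DEV.jsonl")
preds = read_jsonl("./my-dev-predictions.jsonl")  # file produced by the sketch above

# Align predictions with the reference samples via "unique_id".
preds_by_id = {p["unique_id"]: p for p in preds}
y_true = [sample["ner_tags"] for sample in truth]  # assumed gold-label key
y_pred = [preds_by_id[sample["unique_id"]]["pred_ner_tags"] for sample in truth]

# Entity-level precision, recall and F1, overall and per class (seqeval).
print(classification_report(y_true, y_pred))

# Matthews correlation coefficient over the flattened token-level tags.
flat_true = [tag for tags in y_true for tag in tags]
flat_pred = [tag for tags in y_pred for tag in tags]
print(matthews_corrcoef(flat_true, flat_pred))
```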