fgrezes committed
Commit f62035c • 1 Parent(s): d168d0f

fixed readme

Files changed (1)
  1. README.md +0 -16
README.md CHANGED
@@ -3,15 +3,9 @@ annotations_creators:
 - expert-generated
 language_creators:
 - found
-<<<<<<< HEAD
-languages:
-- en
-licenses:
-=======
 language:
 - en
 license:
->>>>>>> 297571f844c69c59b0a7d6325ad12c86b64aa523
 - cc-by-4.0
 multilinguality:
 - monolingual
@@ -25,10 +19,7 @@ task_ids:
 - named-entity-recognition
 ---
 # Dataset for the first <a href="https://ui.adsabs.harvard.edu/WIESP/" style="color:blue">Workshop on Information Extraction from Scientific Publications (WIESP/2022)</a>.
-<<<<<<< HEAD
-=======
 **(NOTE: loading from the Huggingface Dataset Hub directly does not work. You need to clone the repository locally.)**
->>>>>>> 297571f844c69c59b0a7d6325ad12c86b64aa523
 
 ## Dataset Description
 Datasets with text fragments from astrophysics papers, provided by the [NASA Astrophysical Data System](https://ui.adsabs.harvard.edu/) with manually tagged astronomical facilities and other entities of interest (e.g., celestial objects).
@@ -58,11 +49,7 @@ with open("./WIESP2022-NER-DEV.jsonl", 'r') as f:
 from datasets import Dataset
 wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
 ```
-<<<<<<< HEAD
-(NOTE: currently loading from the Huggingface Dataset Hub directly does not work. You need to clone the repository locally)
-=======
 (NOTE: loading from the Huggingface Dataset Hub directly does not work. You need to clone the repository locally.)
->>>>>>> 297571f844c69c59b0a7d6325ad12c86b64aa523
 
 How to compute your scores on the training data:
 1. format your predictions as a list of dictionaries, each with the same `"unique_id"` and `"tokens"` keys from the dataset, as well as the list of predicted NER tags under the `"pred_ner_tags"` key (see `WIESP2022-NER-DEV-sample-predictions.jsonl` for an example).
@@ -82,10 +69,7 @@ To get scores on the validation data, zip your predictions file (a single `.json
 ├── WIESP2022-NER-DEV-sample-predictions.jsonl : an example file with properly formatted predictions on the development data.
 ├── WIESP2022-NER-VALIDATION-NO-LABELS.jsonl : 1366 samples for validation without the NER labels. Used for the WIESP2022 workshop.
 ├── README.MD: this file.
-<<<<<<< HEAD
-=======
 ├── tag_definitions.txt: short descriptions and examples of the tags used in the task.
->>>>>>> 297571f844c69c59b0a7d6325ad12c86b64aa523
 └── scoring-scripts/ : scripts used to evaluate submissions.
     ├── compute_MCC.py : computes the Matthews correlation coefficient between two datasets.
     └── compute_seqeval.py : computes the seqeval scores (precision, recall, f1, overall and for each class) between two datasets.
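For reference, the loading recipe the README keeps after this commit (clone locally, then read the `.jsonl` files) looks like this end to end. This is a sketch: the clone URL is inferred from the dataset name and may differ from the actual repository address.

```python
import json
from datasets import Dataset

# The README notes that loading from the Huggingface Dataset Hub directly
# does not work, so clone the repository first, e.g.:
#   git clone https://huggingface.co/datasets/adsabs/WIESP2022-NER
# (clone URL inferred from the dataset name; adjust to the actual repository)

# Plain JSON-lines parsing: one sample per line.
with open("./WIESP2022-NER-DEV.jsonl", "r") as f:
    wiesp_dev_json = [json.loads(line) for line in f]

# Or load straight into a Hugging Face Dataset object.
wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
```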
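Step 1 of the scoring instructions (keep the dataset's `"unique_id"` and `"tokens"` keys, add `"pred_ner_tags"`) can be sketched as below, continuing from the loading sketch above. The all-`"O"` tags are placeholders standing in for a real model's output.

```python
import json

# Each entry keeps the dataset's "unique_id" and "tokens" keys and adds
# "pred_ner_tags": one predicted tag per token. The all-"O" tags below are
# placeholders standing in for the output of a real NER model.
predictions = [
    {
        "unique_id": sample["unique_id"],
        "tokens": sample["tokens"],
        "pred_ner_tags": ["O"] * len(sample["tokens"]),
    }
    for sample in wiesp_dev_json
]

# One JSON object per line, matching WIESP2022-NER-DEV-sample-predictions.jsonl.
with open("./my-predictions.jsonl", "w") as f:
    for entry in predictions:
        f.write(json.dumps(entry) + "\n")
```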
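The repository's `scoring-scripts/` are the reference implementations. The sketch below only approximates what they compute, calling the `seqeval` and `scikit-learn` libraries directly; the gold-label key `"ner_tags"` is an assumption about the dataset fields.

```python
from seqeval.metrics import classification_report, f1_score
from sklearn.metrics import matthews_corrcoef

# Gold tags from the labeled data, predicted tags from the predictions above.
# The "ner_tags" key is an assumption; check the dataset fields for the name.
y_true = [sample["ner_tags"] for sample in wiesp_dev_json]
y_pred = [entry["pred_ner_tags"] for entry in predictions]

# seqeval: entity-level precision, recall, and F1, overall and per class.
print(classification_report(y_true, y_pred))
print("overall F1:", f1_score(y_true, y_pred))

# Matthews correlation coefficient over the flattened token-level tags.
flat_true = [tag for tags in y_true for tag in tags]
flat_pred = [tag for tags in y_pred for tag in tags]
print("MCC:", matthews_corrcoef(flat_true, flat_pred))
```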