Commit a9414b8 by fgrezes
Parents (2): baf3073, 297571f

Merge branch 'main' of https://huggingface.co/datasets/fgrezes/WIESP2022-NER into main

Files changed (2):
  1. README.md +18 -0
  2. tag_definitions.txt +100 -0
README.md CHANGED
@@ -3,9 +3,15 @@ annotations_creators:
 - expert-generated
 language_creators:
 - found
+<<<<<<< HEAD
 languages:
 - en
 licenses:
+=======
+language:
+- en
+license:
+>>>>>>> 297571f844c69c59b0a7d6325ad12c86b64aa523
 - cc-by-4.0
 multilinguality:
 - monolingual
@@ -19,6 +25,10 @@ task_ids:
 - named-entity-recognition
 ---
 # Dataset for the first <a href="https://ui.adsabs.harvard.edu/WIESP/" style="color:blue">Workshop on Information Extraction from Scientific Publications (WIESP/2022)</a>.
+<<<<<<< HEAD
+=======
+**(NOTE: loading from the Huggingface Dataset Hub directly does not work. You need to clone the repository locally.)**
+>>>>>>> 297571f844c69c59b0a7d6325ad12c86b64aa523
 
 ## Dataset Description
 Datasets with text fragments from astrophysics papers, provided by the [NASA Astrophysical Data System](https://ui.adsabs.harvard.edu/) with manually tagged astronomical facilities and other entities of interest (e.g., celestial objects).
@@ -48,7 +58,11 @@ with open("./WIESP2022-NER-DEV.jsonl", 'r') as f:
 from datasets import Dataset
 wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
 ```
+<<<<<<< HEAD
 (NOTE: currently loading from the Huggingface Dataset Hub directly does not work. You need to clone the repository locally)
+=======
+(NOTE: loading from the Huggingface Dataset Hub directly does not work. You need to clone the repository locally.)
+>>>>>>> 297571f844c69c59b0a7d6325ad12c86b64aa523
 
 How to compute your scores on the training data:
 1. format your predictions as a list of dictionaries, each with the same `"unique_id"` and `"tokens"` keys from the dataset, as well as the list of predicted NER tags under the `"pred_ner_tags"` key (see `WIESP2022-NER-DEV-sample-predictions.jsonl` for an example).
@@ -68,6 +82,10 @@ To get scores on the validation data, zip your predictions file (a single `.json
 ├── WIESP2022-NER-DEV-sample-predictions.jsonl : an example file with properly formatted predictions on the development data.
 ├── WIESP2022-NER-VALIDATION-NO-LABELS.jsonl : 1366 samples for validation without the NER labels. Used for the WIESP2022 workshop.
 ├── README.MD: this file.
+<<<<<<< HEAD
+=======
+├── tag_definitions.txt: short descriptions and examples of the tags used in the task.
+>>>>>>> 297571f844c69c59b0a7d6325ad12c86b64aa523
 └── scoring-scripts/ : scripts used to evaluate submissions.
 ├── compute_MCC.py : computes the Matthews correlation coefficient between two datasets.
 └── compute_seqeval.py : computes the seqeval scores (precision, recall, f1, overall and for each class) between two datasets.
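
The README excerpt above shows how to load the development split and describes the expected prediction format and scoring. The following is a minimal sketch tying the two together, assuming the repository has been cloned locally, assuming the gold labels are stored under a `"ner_tags"` key (the diff only names `"unique_id"`, `"tokens"` and `"pred_ner_tags"`), and using the standalone `seqeval` package rather than the repository's `scoring-scripts/compute_seqeval.py`:

```python
# Sketch only: the "ner_tags" key name is an assumption (not confirmed by the diff),
# and seqeval is used directly instead of the repository's compute_seqeval.py script.
import json

from datasets import Dataset
from seqeval.metrics import classification_report, f1_score

# Load the development split from the locally cloned JSONL file.
wiesp_dev = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")

# Format predictions as described in the README: one dict per sample with the
# same "unique_id" and "tokens", plus predicted tags under "pred_ner_tags".
# Here the "prediction" is just the trivial all-"O" baseline.
predictions = [
    {
        "unique_id": sample["unique_id"],
        "tokens": sample["tokens"],
        "pred_ner_tags": ["O"] * len(sample["tokens"]),
    }
    for sample in wiesp_dev
]

# Write the predictions file in the format expected by the scoring scripts.
with open("my-predictions.jsonl", "w") as f:
    for pred in predictions:
        f.write(json.dumps(pred) + "\n")

# Score against the gold labels (per-class precision/recall/F1 plus overall F1).
gold = [sample["ner_tags"] for sample in wiesp_dev]   # assumed key name
pred = [p["pred_ner_tags"] for p in predictions]
print(classification_report(gold, pred))
print("overall F1:", f1_score(gold, pred))
```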
tag_definitions.txt ADDED
@@ -0,0 +1,100 @@
+Person
+A named person or their initials.
+Example: Andrea M. Ghez, Ghez A.
+
+Organization
+A named organization that is not an observatory.
+Example: NASA, University of Toledo
+
+Location
+A named location on Earth.
+Example: Canada
+
+Entity of Future Interest
+A general catch-all for things that may be worth thinking about in the future; often terms related to gravitational waves.
+
+Observatory
+A group of telescopes, often at the same location.
+Example: Keck Observatory, Fermi
+
+Telescope
+A "bucket" to catch light.
+Example: Hubble Space Telescope, Discovery Channel Telescope
+
+Mission
+A spacecraft, not itself a telescope or observatory, that carries multiple instruments.
+Example: WIND
+
+Instrument
+A device, often (but not always) placed on a telescope, used to make a measurement.
+Example: Infrared Array Camera, NIRCam
+
+Wavelength
+A portion of the electromagnetic spectrum. Can be communicated as a particular wavelength, a name, or a particular transition.
+Example: 656.46 nm, H-alpha
+
+Archive
+A curated collection of the literature or data. Very similar to Database.
+Example: NASA ADS, MAST
+
+Collaboration
+An organizational entity containing multiple organizations, observatories, and/or countries.
+Example: the Planck Collaboration
+
+Survey
+An organized search of the sky, often dedicated to large-scale science projects.
+Example: 2MASS, SDSS
+
+Grant
+An allocation of money and/or time for a research project.
+Example: grant No. 12345, ADAP grant 12345
+
+Fellowship
+A grant focused towards students and/or early-career researchers.
+Example: Hubble Fellowship
+
+Database
+A curated set of data. Very similar to Archive.
+Example: Simbad database
+
+Citation
+A reference to previous work in the literature.
+Example: Allen et al. 2012
+
+Celestial Object
+A named object in the sky.
+Example: ONC, Andromeda galaxy
+
+Celestial Region
+A defined region projected onto the sky, or celestial coordinates.
+Example: GOODS field, l=2, b=15.
+
+Celestial Object Region
+A named area on/in a celestial body.
+Example: Inner galaxy
+
+Event
+A conference, workshop, or other event that often brings scientists together.
+Example: Protostars and Planets VI
+
+Formula
+A mathematical formula or equation.
+Example: F = Gm1m2/r^2, z=2.3
+
+URL
+A link to a website.
+Example: https://www.astropy.org/
+
+Identifier
+A unique identifier for data, images, etc.
+Example: ALMA 123.12345
+
+Tag
+An HTML tag.
+Example: <bold></bold>
+
+Text Garbage
+Incorrect text, often multiple punctuation marks with no inner text.
+Example: ,,,
+
+
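
These names are the entity classes used in the NER annotations. A small sketch of how they would typically be consumed, assuming the labels follow the IOB2 scheme (e.g. `B-Telescope`, `I-Telescope`, `O`) and are stored under a `"ner_tags"` key in the JSONL files; both the scheme and the key name are assumptions, not stated in tag_definitions.txt itself:

```python
# Sketch only: assumes IOB2-style labels stored under an assumed "ner_tags" key
# in the development file named in the README above.
import json
from collections import Counter

entity_counts = Counter()
with open("./WIESP2022-NER-DEV.jsonl", "r") as f:
    for line in f:
        sample = json.loads(line)
        for tag in sample["ner_tags"]:      # assumed key name
            if tag.startswith("B-"):        # count each entity mention once
                entity_counts[tag[2:]] += 1  # strip the "B-" prefix to get the class

# Print mention counts per entity class defined in tag_definitions.txt.
for entity, count in entity_counts.most_common():
    print(f"{entity}: {count}")
```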