wzkariampuzha committed
Commit a54f83a
1 Parent(s): 3df90d1

Update README.md

Files changed (1):
  1. README.md +13 -57
README.md CHANGED
@@ -1,6 +1,8 @@
  ---
  annotations_creators:
- - expert-generated
  language_creators:
  - found
  languages:
@@ -10,17 +12,14 @@ licenses:
  multilinguality:
  - monolingual
  size_categories:
- - 10K<n<100K
- source_datasets:
- - extended|conll2003
  task_categories:
  - structure-prediction
  task_ids:
  - named-entity-recognition
- paperswithcode_id: conll
  ---

- # Dataset Card for "conllpp"

  ## Table of Contents
  - [Dataset Description](#dataset-description)
@@ -50,54 +49,18 @@ paperswithcode_id: conll

  - **Homepage:** [Github](https://github.com/ZihanWangKi/CrossWeigh)
  - **Repository:** [Github](https://github.com/ZihanWangKi/CrossWeigh)
- - **Paper:** [Aclweb](https://www.aclweb.org/anthology/D19-1519)
- - **Leaderboard:**
- - **Point of Contact:**

  ### Dataset Summary

- CoNLLpp is a corrected version of the CoNLL2003 NER dataset where labels of 5.38% of the sentences in the test set
- have been manually corrected. The training set and development set from CoNLL2003 is included for completeness. One
- correction on the test set for example, is:
-
- ```
- {
- "tokens": ["SOCCER", "-", "JAPAN", "GET", "LUCKY", "WIN", ",", "CHINA", "IN", "SURPRISE", "DEFEAT", "."],
- "original_ner_tags_in_conll2003": ["O", "O", "B-LOC", "O", "O", "O", "O", "B-PER", "O", "O", "O", "O"],
- "corrected_ner_tags_in_conllpp": ["O", "O", "B-LOC", "O", "O", "O", "O", "B-LOC", "O", "O", "O", "O"],
- }
- ```
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed]
-
- ### Languages
-
- [More Information Needed]
-
- ## Dataset Structure
-
- We show detailed information for up to 5 configurations of the dataset.
-
- ### Data Instances
-
- #### conllpp
-
- - **Size of downloaded dataset files:** 4.63 MB
- - **Size of the generated dataset:** 9.78 MB
- - **Total amount of disk used:** 14.41 MB

  An example of 'train' looks as follows.
  ```
- This example was too long and was cropped:
-
  {
- "chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
- "id": "0",
- "ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- "pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
- "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."]
  }
  ```

@@ -108,15 +71,13 @@ The data fields are the same among all splits.
  #### conllpp
  - `id`: a `string` feature.
  - `tokens`: a `list` of `string` features.
- - `pos_tags`: a `list` of classification labels, with possible values including `"` (0), `''` (1), `#` (2), `$` (3), `(` (4).
- - `chunk_tags`: a `list` of classification labels, with possible values including `O` (0), `B-ADJP` (1), `I-ADJP` (2), `B-ADVP` (3), `I-ADVP` (4).
  - `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4).

  ### Data Splits

  | name |train|validation|test|
  |---------|----:|---------:|---:|
- |conll2003|14041| 3250|3453|

  ## Dataset Creation

@@ -130,15 +91,11 @@ The data fields are the same among all splits.

  [More Information Needed]

- #### Who are the source language producers?
-
- [More Information Needed]
-
  ### Annotations

  #### Annotation process

- [More Information Needed]

  #### Who are the annotators?

@@ -177,5 +134,4 @@ The data fields are the same among all splits.
  [More Information Needed]

  ### Contributions
-
- Thanks to [@ZihanWangKi](https://github.com/ZihanWangKi) for adding this dataset.
 
  ---
  annotations_creators:
+ - train: programmatically-generated
+ - val: programmatically-generated
+ - test: programmatically-generated, expert-validated
  language_creators:
  - found
  languages:
 
  multilinguality:
  - monolingual
  size_categories:
+ - 10K<n<100K

  task_categories:
  - structure-prediction
  task_ids:
  - named-entity-recognition

  ---

+ # Dataset Card for "EpiSet4NER by [NIH NCATS GARD](https://rarediseases.info.nih.gov/)"

  ## Table of Contents
  - [Dataset Description](#dataset-description)
 

  - **Homepage:** [Github](https://github.com/ZihanWangKi/CrossWeigh)
  - **Repository:** [Github](https://github.com/ZihanWangKi/CrossWeigh)
+ - **Paper:** Pending

  ### Dataset Summary

+ EpiSet4NER is a named-entity recognition dataset for epidemiological information about rare diseases, created by NIH NCATS GARD. It labels entities such as locations, epidemiologic types (e.g. prevalence), and epidemiologic statistics. The training and validation sets were labeled programmatically, and the test set was programmatically labeled and then validated by experts.

  An example of 'train' looks as follows.
  ```
  {
+ "id": "333",
+ "tokens": ["Conclusions", "The", "birth", "prevalence", "of", "CLD", "in", "the", "northern", "Netherlands", "was", "21.1/10,000", "births", "."],
+ "ner_tags": [0, 0, 0, 3, 0, 0, 0, 0, 0, 1, 0, 5, 6, 0]
  }
  ```
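
The integer `ner_tags` in the example row above can be decoded into string labels. The label list below is an assumption inferred from this one example (tag 1 marks a location, 3 an epidemiologic type, 5/6 a statistic); check the dataset's published features for the authoritative names.

```python
# Decode the integer `ner_tags` of the example row into string labels.
# LABELS is an assumption inferred from the example, not from the card.
LABELS = ["O", "B-LOC", "I-LOC", "B-EPI", "I-EPI", "B-STAT", "I-STAT"]

tokens = ["Conclusions", "The", "birth", "prevalence", "of", "CLD", "in",
          "the", "northern", "Netherlands", "was", "21.1/10,000", "births", "."]
ner_tags = [0, 0, 0, 3, 0, 0, 0, 0, 0, 1, 0, 5, 6, 0]

def decode(tokens, tag_ids, labels=LABELS):
    """Pair each token with its string label."""
    return [(tok, labels[t]) for tok, t in zip(tokens, tag_ids)]

for tok, label in decode(tokens, ner_tags):
    if label != "O":
        print(f"{tok}\t{label}")
```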
  #### conllpp
  - `id`: a `string` feature.
  - `tokens`: a `list` of `string` features.
  - `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4).
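
Since `ner_tags` uses IOB2 encoding (a `B-` tag opens an entity and an `I-` tag of the same type continues it), contiguous tags can be grouped into entity spans. A minimal sketch, independent of the exact label set (the example labels here are hypothetical):

```python
# Group IOB2-tagged tokens into (entity_type, text) spans.
# Works for any IOB2 label set; the example labels below are hypothetical.
def extract_spans(tokens, labels):
    spans, current = [], None
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            if current:
                spans.append(current)
            current = [lab[2:], [tok]]          # open a new entity
        elif lab.startswith("I-") and current and current[0] == lab[2:]:
            current[1].append(tok)              # continue the open entity
        else:
            if current:
                spans.append(current)
            current = None                      # "O" or inconsistent tag
    if current:
        spans.append(current)
    return [(etype, " ".join(toks)) for etype, toks in spans]

tokens = ["The", "prevalence", "was", "21.1/10,000", "births", "."]
labels = ["O", "B-EPI", "O", "B-STAT", "I-STAT", "O"]
print(extract_spans(tokens, labels))  # [('EPI', 'prevalence'), ('STAT', '21.1/10,000 births')]
```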
  ### Data Splits

  | name |train|validation|test|
  |---------|----:|---------:|---:|
+ |EpiSet |14041| 3250|3453|

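
A quick sanity check on the split sizes listed in the table above (the counts come from the table; the total and shares are computed):

```python
# Totals and proportions for the split sizes given in the data-splits table.
splits = {"train": 14041, "validation": 3250, "test": 3453}
total = sum(splits.values())
print(total)  # 20744
for name, n in splits.items():
    print(f"{name}: {n} examples ({n / total:.1%})")
```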
  ## Dataset Creation

  [More Information Needed]

  ### Annotations

  #### Annotation process

+ See here and then here

  #### Who are the annotators?

  [More Information Needed]

  ### Contributions
+ Thanks to [@William Kariampuzha](https://github.com/wzkariampuzha) at NCATS/Axle Informatics for adding this dataset.