wzkariampuzha committed on
Commit c609ced
1 Parent(s): 485b677

Update README.md

Files changed (1)
  1. README.md +31 -29

README.md CHANGED
@@ -48,12 +48,7 @@ task_ids:
 
 ### Dataset Summary
 
- EpiSet4NER is a bronze-standard dataset for epidemiological entity recognition of location, epidemiologic types, and created using weakly-supervised teaching methods
-
- locations, epidemiological identifiers (e.g. "prevalence", "annual incidence", "estimated occurrence") and epidemiological rates (e.g. "1.7 per 1,000,000 live births", "2.1:1.000.000", "one in five million", "0.03%")
-
- These are the V3 training (456 abstracts), validation (114 abstracts), and programmatically generated test (50 abstracts) set. The training set was copied to ```datasets/EpiCustomV3``` and renamed *train.tsv*. The validation set was copied to ```datasets/EpiCustomV3``` and ```datasets/Large_DatasetV3``` and renamed *val.tsv*. The V3 test set (uncorrected) is important as it is used by *Find efficacy of test predictions.ipynb* to find the efficacy of the programmatic labeling, but was otherwise not used with the model.
- [NIH NCATS GARD](https://rarediseases.info.nih.gov/)
 
 An example of 'train' looks as follows.
 ```
@@ -66,78 +61,85 @@ An example of 'train' looks as follows.
 
 ### Data Fields
 
 The data fields are the same among all splits.
- - `id`: a `string` feature.
 - `tokens`: a `list` of `string` features.
 - `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-LOC` (1), `I-LOC` (2), `B-EPI` (3), `I-EPI` (4),`B-STAT` (5),`I-STAT` (6).
 
- ### Data Splits by number of tokens
 
 |name |train |validation|test|
 |---------|-----:|----:|----:|
- |EpiSet |117888|31262|13910|
 
- ## Dataset Creation
 
- This bronze-standard dataset was created from 620 rare disease abstracts
- Programmatic Labeling using statistical and rule-based methods (Weakly Supervised Teaching)
- ![Text Labeling](https://raw.githubusercontent.com/ncats/epi4GARD/master/EpiExtract4GARD/datasets/EpiCustomV3/Text%20Labeling4.png)
 
 | Evaluation Level | Entity | Precision | Recall | F1 |
 |:----------------:|:------------------------:|:---------:|:------:|:-----:|
 | Entity-Level | Overall | 0.559 | 0.662 | 0.606 |
 | | Location | 0.597 | 0.661 | 0.627 |
 | | Epidemiologic Identifier | 0.854 | 0.911 | 0.882 |
 | | Epidemiologic Rate | 0.175 | 0.255 | 0.207 |
- |------------------|--------------------------|-----------|--------|-------|
- | Token-Level | Overall | 0.805 | 0.710 | 0.755 |
 | | Location | 0.868 | 0.713 | 0.783 |
 | | Epidemiologic Type | 0.908 | 0.908 | 0.908 |
 | | Epidemiologic Rate | 0.739 | 0.645 | 0.689 |
 
 ### Curation Rationale
 
- [More Information Needed]
 
 ### Source Data
 
 #### Initial Data Collection and Normalization
 
- A sample of 500 disease names were gathered from ~6061 rare diseases tracked by GARD.
 
 ### Annotations
 
 #### Annotation process
 
- See here and then here
 
 #### Who are the annotators?
 
- [More Information Needed]
 
 ### Personal and Sensitive Information
 
- [More Information Needed]
 
 ## Considerations for Using the Data
 
 ### Social Impact of Dataset
 
- [More Information Needed]
-
- ### Discussion of Biases
 
- Generates *whole_abstract_set.csv* and *positive_abstract_set.csv*. *whole_abstract_set.csv* is a dataset created by sampling 500 rare disease names and their synonyms from *GARD.csv* until ≥50 abstracts had been returned or the search results were exhausted. Although ~25,000 abstracts were expected, 7699 unique abstracts were returned due to the limited research on rare diseases. After running each of these through the LSTM RNN classifier, the *positive_abstract_set.csv* was created from the abstracts which had an epidemiological probability >50%. *positive_abstract_set.csv* will be passed to *create_labeled_dataset_V2.ipynb*
 
-
- ### Other Known Limitations
-
- [More Information Needed]
 
 ## Additional Information
 
 ### Dataset Curators
 
- [More Information Needed]
 
 ### Licensing Information
 
 
 
 ### Dataset Summary
 
+ EpiSet4NER is a bronze-standard dataset for epidemiological entity recognition of locations, epidemiologic types (e.g. "prevalence", "annual incidence", "estimated occurrence"), and epidemiologic rates (e.g. "1.7 per 1,000,000 live births", "2.1:1.000.000", "one in five million", "0.03%") created by the [Genetic and Rare Diseases Information Center (GARD)](https://rarediseases.info.nih.gov/), a program in the [National Center for Advancing Translational Sciences](https://ncats.nih.gov/), one of the 27 institutes and centers of the [National Institutes of Health](https://www.nih.gov/). It was labeled programmatically using spaCy NER and rule-based methods. This weakly supervised teaching method allowed us to construct this imprecise dataset with minimal manual effort and still achieve satisfactory performance on a multi-type token classification problem. The test set was manually corrected by 3 NCATS researchers and a GARD curator (a genetic and rare disease expert).
 
 An example of 'train' looks as follows.
 ```
 
 ### Data Fields
 
 The data fields are the same among all splits.
+ - `id`: a `string` feature that indicates the sentence number.
 - `tokens`: a `list` of `string` features.
 - `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-LOC` (1), `I-LOC` (2), `B-EPI` (3), `I-EPI` (4), `B-STAT` (5), `I-STAT` (6).
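The `ner_tags` ids follow the IOB2 scheme listed above. A minimal sketch of decoding them back into entity spans; the example sentence and tags here are illustrative, not an actual dataset row:

```python
# Label ids in the order given by the dataset card (assumed mapping).
ID2TAG = ["O", "B-LOC", "I-LOC", "B-EPI", "I-EPI", "B-STAT", "I-STAT"]

def decode_entities(tokens, ner_tags):
    """Group IOB2-tagged tokens into (entity_type, text) spans."""
    spans, current_type, current_tokens = [], None, []
    for token, tag_id in zip(tokens, ner_tags):
        tag = ID2TAG[tag_id]
        if tag.startswith("B-"):
            # A B- tag always opens a new span, closing any open one.
            if current_type:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            current_tokens.append(token)
        else:
            # O tag (or a stray I-) closes the open span.
            if current_type:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type:
        spans.append((current_type, " ".join(current_tokens)))
    return spans

# Hypothetical example sentence, not taken from the dataset.
tokens = ["The", "prevalence", "in", "France", "is", "0.03%", "."]
tags = [0, 3, 0, 1, 0, 5, 0]
print(decode_entities(tokens, tags))
# → [('EPI', 'prevalence'), ('LOC', 'France'), ('STAT', '0.03%')]
```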
 
+ ### Data Splits
 
 |name |train |validation|test|
 |---------|-----:|----:|----:|
+ |EpiSet abstracts|456|114|50|
+ |EpiSet tokens |117888|31262|13910|
 
+ ## Dataset Creation
+ ![EpiSet Creation Flowchart](https://raw.githubusercontent.com/ncats/epi4GARD/master/EpiExtract4GARD/datasets/EpiCustomV3/EpiSet%20Flowchart%20FINAL.png)
+ *Figure 1:* Creation of EpiSet4NER by NIH/NCATS
+ Comparing the programmatically labeled test set to the manually corrected test set allowed us to measure the precision, recall, and F1 of the programmatic labeling.
 
+ *Table 1:* Programmatic labeling of EpiSet4NER
 | Evaluation Level | Entity | Precision | Recall | F1 |
 |:----------------:|:------------------------:|:---------:|:------:|:-----:|
 | Entity-Level | Overall | 0.559 | 0.662 | 0.606 |
 | | Location | 0.597 | 0.661 | 0.627 |
 | | Epidemiologic Identifier | 0.854 | 0.911 | 0.882 |
 | | Epidemiologic Rate | 0.175 | 0.255 | 0.207 |
+ | Token-Level | Overall | 0.805 | 0.710 | 0.755 |
 | | Location | 0.868 | 0.713 | 0.783 |
 | | Epidemiologic Type | 0.908 | 0.908 | 0.908 |
 | | Epidemiologic Rate | 0.739 | 0.645 | 0.689 |
 
+ An example of the text labeling:
+ ![Text Labeling](https://raw.githubusercontent.com/ncats/epi4GARD/master/EpiExtract4GARD/datasets/EpiCustomV3/Text%20Labeling4.png)
+ *Figure 2:* Text Labeling using spaCy and rule-based labeling. Ideal labeling is bolded on the left. Actual programmatic output is on the right. [Abstract citation](https://pubmed.ncbi.nlm.nih.gov/33649778/)
+
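The F1 scores in Table 1 are the harmonic mean of the corresponding precision and recall. A quick sanity check of the entity-level rows (values transcribed from the table; agreement is within rounding to three decimals):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# (precision, recall, reported F1) for the entity-level rows of Table 1.
rows = [
    (0.559, 0.662, 0.606),  # Overall
    (0.597, 0.661, 0.627),  # Location
    (0.854, 0.911, 0.882),  # Epidemiologic Identifier
    (0.175, 0.255, 0.207),  # Epidemiologic Rate
]
for p, r, reported in rows:
    # The reported values are rounded, so compare with a small tolerance.
    assert abs(f1(p, r) - reported) < 1e-3
```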
 ### Curation Rationale
 
+ To train ML/DL models that automate the process of rare disease epidemiological curation. This information is crucial to patients & families, researchers, grantors, and policy makers, primarily for funding purposes.
 
 ### Source Data
+ 620 rare disease abstracts, covering 488 diseases, classified as epidemiological by an LSTM RNN rare disease epi classifier. See Figure 1.
 
 #### Initial Data Collection and Normalization
 
+ A random sample of 500 disease names was gathered from a list of ~6061 rare diseases tracked by GARD, and abstracts were retrieved until ≥50 abstracts had been returned for each disease or the EBI RESTful API results were exhausted. Though we requested ~25,000 abstracts from PubMed's database, only 7699 unique abstracts were returned, covering 488 diseases. Of these 7699 abstracts, only 620 were classified as epidemiological by the LSTM RNN epidemiological classifier.
 
 ### Annotations
 
 #### Annotation process
 
+ Programmatic labeling. See [here](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/create_labeled_dataset_V2.ipynb) and then [here](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/modify_existing_labels.ipynb). The test set was manually corrected after creation.
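The actual rules live in the notebooks linked above; purely as an illustration of the kind of rule-based pattern involved, a hypothetical sketch of flagging epidemiologic rate strings (these regexes are my own, not the dataset's real labeling rules):

```python
import re

# Hypothetical patterns for rate-like strings; word-based rates such as
# "one in five million" would need additional patterns.
RATE_PATTERNS = [
    re.compile(r"\d+(?:\.\d+)?\s*%"),             # e.g. "0.03%"
    re.compile(r"\d+(?:\.\d+)?\s+per\s+[\d,]+"),  # e.g. "1.7 per 1,000,000"
    re.compile(r"\d+(?:\.\d+)?\s*:\s*[\d.,]+"),   # e.g. "2.1:1.000.000"
]

def find_rates(text):
    """Return every substring that looks like an epidemiologic rate."""
    hits = []
    for pattern in RATE_PATTERNS:
        hits.extend(match.group() for match in pattern.finditer(text))
    return hits

print(find_rates("The prevalence was 1.7 per 1,000,000 live births (0.03%)."))
```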
 
 #### Who are the annotators?
 
+ Programmatic labeling was done by [@William Kariampuzha](https://github.com/wzkariampuzha), one of the NCATS researchers.
+ The test set was manually corrected by 2 more NCATS researchers and a GARD curator (a genetic and rare disease expert).
 
 ### Personal and Sensitive Information
 
+ None. These are freely available abstracts from PubMed.
 
 ## Considerations for Using the Data
 
 ### Social Impact of Dataset
 
+ Assisting the 25-30 million Americans with rare diseases. The dataset can additionally be useful for Orphanet or CDC researchers/curators.
 
+ ### Discussion of Biases and Limitations
 
+ - There were errors in the source file that contained synonyms of rare disease names, which may have led to some unrelated abstracts being included in the training, validation, and test sets.
+ - The abstracts were gathered through the EBI API and are thus subject to any biases the EBI API has. The NCBI API returns very different results, as shown by an API analysis here.
+ - The [long short-term memory recurrent neural network epi classifier](https://pubmed.ncbi.nlm.nih.gov/34457147/) was used to sift the 7699 rare disease abstracts. This model had a hold-out validation F1 score of 0.886 and a test F1 of 0.701 (measured against a GARD curator who used full-text articles to determine the truth value of each epidemiological abstract). With 620 epi abstracts filtered from the 7699 original rare disease abstracts, there are likely several false positive and false negative epi abstracts.
+ - Tokenization was done by spaCy, which may or may not be a limitation for current and future models trained on this set.
+ - The programmatic labeling was very imprecise, as seen in Table 1. This is likely the largest limitation of the [BioBERT-based model](https://huggingface.co/ncats/EpiExtract4GARD) trained on this set.
+ - The test set was difficult to validate even for general NCATS researchers, which is why we relied on a rare disease expert to verify our modifications. As identifying epidemiological information is quite difficult for non-expert humans, this set, and especially a possible future gold-standard dataset, represents a challenging gauntlet for NLP systems to compete on.
 
 ## Additional Information
 
 ### Dataset Curators
 
+ NIH GARD
 
 ### Licensing Information