---
language: en
---

# SciBERT finetuned on JNLPBA for the NER downstream task
## Language Model
[SciBERT](https://arxiv.org/pdf/1903.10676.pdf) is a pretrained language model based on BERT, trained by the
[Allen Institute for AI](https://allenai.org/) on papers from the corpus of
[Semantic Scholar](https://www.semanticscholar.org/).
The corpus size is 1.14M papers (3.1B tokens). SciBERT has its own vocabulary (scivocab), built to best match
the training corpus.

## Downstream task
[`allenai/scibert_scivocab_cased`](https://huggingface.co/allenai/scibert_scivocab_cased#) has been finetuned for the Named Entity
Recognition (NER) downstream task. The code used to train the NER model can be found [here](https://github.com/fran-martinez/bio_ner_bert).
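
For orientation, a fine-tuning run of this kind typically starts by loading the base SciBERT checkpoint with a freshly initialized token-classification head sized to the 11 JNLPBA labels. The sketch below is an illustration, not an excerpt from the linked training code, and the label ordering is an assumption (the labels themselves mirror the class table in the Data section):

````python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# BIO labels of the JNLPBA task (see the class table below).
# The index order here is illustrative; the fine-tuned model may use a different one.
labels = [
    "O",
    "B-protein", "I-protein",
    "B-cell_type", "I-cell_type",
    "B-DNA", "I-DNA",
    "B-cell_line", "I-cell_line",
    "B-RNA", "I-RNA",
]

# Start from the base SciBERT checkpoint; the token-classification head is
# randomly initialized and learned during fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_cased")
model = AutoModelForTokenClassification.from_pretrained(
    "allenai/scibert_scivocab_cased",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
````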

### Data
The corpus used to fine-tune the NER model is the [BioNLP / JNLPBA shared task](http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004).

- The training data consist of 2,000 PubMed abstracts with term/word annotations, corresponding to 18,546 samples (sentences).
- The evaluation data consist of 404 PubMed abstracts with term/word annotations, corresponding to 3,856 samples (sentences).

The word-level classes and their distribution (number of examples per class) in the training and evaluation datasets are shown below:

| Class Label | # training examples| # evaluation examples|
|:--------------|--------------:|----------------:|
|O | 382,963 | 81,647 |
|B-protein | 30,269 | 5,067 |
|I-protein | 24,848 | 4,774 |
|B-cell_type | 6,718 | 1,921 |
|I-cell_type | 8,748 | 2,991 |
|B-DNA | 9,533 | 1,056 |
|I-DNA | 15,774 | 1,789 |
|B-cell_line | 3,830 | 500 |
|I-cell_line | 7,387 | 989 |
|B-RNA | 951 | 118 |
|I-RNA | 1,530 | 187 |
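
If you want to recompute a distribution like the one above, the JNLPBA corpus is also distributed through the `datasets` library. The snippet below is a hedged sketch: it assumes the dataset is published on the Hugging Face Hub under the id `jnlpba`, with a `train` split and CoNLL-style `tokens`/`ner_tags` columns, which may differ from the exact files used for this model.

````python
from collections import Counter
from datasets import load_dataset

# Assumption: the JNLPBA corpus is available on the Hub as "jnlpba"
# with CoNLL-style "tokens" and "ner_tags" columns.
dataset = load_dataset("jnlpba")

# Count word-level tags in the training split to recover a class
# distribution comparable to the table above.
label_names = dataset["train"].features["ner_tags"].feature.names
counts = Counter(
    label_names[tag]
    for example in dataset["train"]
    for tag in example["ner_tags"]
)
print(counts)
````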

### Model
An exhaustive hyperparameter search was done.
The hyperparameters that provided the best results are:

- Max sequence length: 128
- Number of epochs: 6
- Batch size: 32
- Dropout: 0.3
- Optimizer: Adam

The learning rate was 5e-5 with a linearly decreasing schedule and a warmup at the beginning of training,
using a warmup ratio of 0.1 of the total training steps (a sketch of this setup is shown below).

The model from the epoch with the best F1-score was selected; in this case, the model from epoch 5.
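
The following is a minimal sketch of an optimizer and learning-rate schedule matching the description above (Adam at 5e-5, linear decay, 10% warmup). It is an illustration, not the exact training code; `model` and `train_dataloader` are placeholders.

````python
import torch
from transformers import get_linear_schedule_with_warmup

# Placeholders: `model` and `train_dataloader` are assumed to exist already
# (batch size 32, batches containing input_ids, attention_mask and labels).
num_epochs = 6
num_training_steps = num_epochs * len(train_dataloader)
num_warmup_steps = int(0.1 * num_training_steps)  # warmup ratio of 0.1

optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)

# Linear warmup over the first 10% of steps, then linear decay to zero.
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=num_warmup_steps,
    num_training_steps=num_training_steps,
)

for epoch in range(num_epochs):
    for batch in train_dataloader:
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
````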

### Evaluation
The following table shows the evaluation metrics calculated at the span/entity level:

|           | precision| recall| f1-score|
|:---------|-----------:|---------:|---------:|
| cell_line | 0.5205 | 0.7100 | 0.6007 |
| cell_type | 0.7736 | 0.7422 | 0.7576 |
| protein | 0.6953 | 0.8459 | 0.7633 |
| DNA | 0.6997 | 0.7894 | 0.7419 |
| RNA | 0.6985 | 0.8051 | 0.7480 |
| | | | |
| **micro avg** | 0.6984 | 0.8076 | 0.7490 |
| **macro avg** | 0.7032 | 0.8076 | 0.7498 |

The macro F1-score is 0.7498, compared to 0.7728 reported by the Allen Institute for AI in their
[paper](https://arxiv.org/pdf/1903.10676.pdf). This drop in performance could be due to
several reasons; one hypothesis is that the authors used an additional conditional random field,
while this model uses a regular classification layer with softmax activation on top of the SciBERT model.

At the word level, this model achieves a precision of 0.7742, a recall of 0.8536 and an F1-score of 0.8093.
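
A common way to compute such span/entity-level metrics is the `seqeval` library, which scores complete BIO spans, while word-level metrics correspond to a plain token-wise report. The snippet below is a hedged sketch of how both views can be computed, using a toy pair of gold/predicted tag sequences rather than the actual evaluation data.

````python
from seqeval.metrics import classification_report as span_report
from sklearn.metrics import classification_report as token_report

# Toy example: gold and predicted BIO tags for two sentences
# (illustrative only, not the actual evaluation data).
y_true = [["O", "B-protein", "I-protein", "O"], ["B-cell_type", "I-cell_type", "O"]]
y_pred = [["O", "B-protein", "I-protein", "O"], ["B-cell_type", "O", "O"]]

# Span/entity level: an entity counts as correct only if its full span and type match.
print(span_report(y_true, y_pred))

# Word level: every token is scored independently.
flat_true = [tag for sent in y_true for tag in sent]
flat_pred = [tag for sent in y_pred for tag in sent]
print(token_report(flat_true, flat_pred))
````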

### Model usage in inference
Use the pipeline:
````python
from transformers import pipeline

text = "Mouse thymus was used as a source of glucocorticoid receptor from normal CS lymphocytes."

nlp_ner = pipeline("ner",
                   model='fran-martinez/scibert_scivocab_cased_ner_jnlpba',
                   tokenizer='fran-martinez/scibert_scivocab_cased_ner_jnlpba')

nlp_ner(text)

"""
Output:
---------------------------
[
 {'word': 'glucocorticoid',
  'score': 0.9894881248474121,
  'entity': 'B-protein'},

 {'word': 'receptor',
  'score': 0.989505410194397,
  'entity': 'I-protein'},

 {'word': 'normal',
  'score': 0.7680378556251526,
  'entity': 'B-cell_type'},

 {'word': 'cs',
  'score': 0.5176806449890137,
  'entity': 'I-cell_type'},

 {'word': 'lymphocytes',
  'score': 0.9898491501808167,
  'entity': 'I-cell_type'}
]
"""
````
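
Depending on the installed `transformers` version, the pipeline can also merge subword pieces and consecutive tokens into whole entity spans. The following is a hedged variant of the call above; the `aggregation_strategy` argument exists in recent releases (older ones used `grouped_entities=True`):

````python
from transformers import pipeline

# Group tokens into complete entity spans (recent transformers versions).
nlp_ner = pipeline("ner",
                   model='fran-martinez/scibert_scivocab_cased_ner_jnlpba',
                   tokenizer='fran-martinez/scibert_scivocab_cased_ner_jnlpba',
                   aggregation_strategy="simple")

nlp_ner("Mouse thymus was used as a source of glucocorticoid receptor from normal CS lymphocytes.")
````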

Alternatively, load the model and tokenizer directly:
````python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Example
text = "Mouse thymus was used as a source of glucocorticoid receptor from normal CS lymphocytes."

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("fran-martinez/scibert_scivocab_cased_ner_jnlpba")
model = AutoModelForTokenClassification.from_pretrained("fran-martinez/scibert_scivocab_cased_ner_jnlpba")

# Get input for BERT
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)

# Predict
with torch.no_grad():
    outputs = model(input_ids)

# From the output, take the first element of the tuple (the logits).
# Then drop the [CLS] and [SEP] tokens (first and last positions).
predictions = outputs[0].argmax(axis=-1)[0][1:-1]

# Map label class indexes to string labels.
for token, pred in zip(tokenizer.tokenize(text), predictions):
    print(token, '->', model.config.id2label[pred.numpy().item()])

"""
Output:
---------------------------
mouse -> O
thymus -> O
was -> O
used -> O
as -> O
a -> O
source -> O
of -> O
glucocorticoid -> B-protein
receptor -> I-protein
from -> O
normal -> B-cell_type
cs -> I-cell_type
lymphocytes -> I-cell_type
. -> O
"""
````