Files changed (1)
  1. README.md +32 -2
README.md CHANGED
@@ -1,5 +1,35 @@
  ---
  license: mit
  ---
- #Model takes as input two strings. String1 is NER label. String1 must be phrase for entity. String2 is short text where String1 is searched for semantically.
- #model outputs list of zeros and ones corresponding to the occurance of NER and corresponing to tokens(tokens given by transformer tokenizer) of the Sring2, not to words.

  ---
  license: mit
+ datasets:
+ - bigbio/chemdner
+ - ncbi_disease
+ - jnlpba
+ - bigbio/n2c2_2018_track2
+ - bigbio/bc5cdr
+ language:
+ - en
+ metrics:
+ - precision
+ - recall
+ - f1
+ pipeline_tag: token-classification
+ tags:
+ - token-classification
+ - biology
+ - medical
+ - zero-shot
+ - few-shot
  ---
+ # Zero- and few-shot NER for biomedical texts
+
+ ## Model description
+ The model takes two strings as input. String1 is the NER label and must be a phrase describing the entity. String2 is a short text in which String1 is searched for semantically.
+ The model outputs a list of zeros and ones marking occurrences of the named entity; the entries correspond to the tokens of String2 (as produced by the transformer tokenizer), not to its words.
+
+ ## Example of usage
+
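The usage section is left empty in this revision. Below is a minimal sketch of how a label/text pair is typically fed to a 🤗 Transformers token-classification model of this kind; the checkpoint identifier, the example label and text, and the assumption of a binary (0/1) label set are illustrative and not taken from this repository.

```python
# Illustrative sketch only; the model id below is a placeholder.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "<this-model-id>"  # placeholder: the actual repository id is not given here
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

label = "Disease"  # String1: the NER label phrase
text = "No signs of diabetes or hypertension were observed."  # String2: text to search

# Encode the two strings as one sequence pair, as described in the model description.
inputs = tokenizer(label, text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # (1, sequence_length, num_labels)

# One 0/1 prediction per token of the encoded pair; the positions belonging to
# String2 indicate whether each token is part of the searched entity.
predictions = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, pred)
```

Since the zeros and ones refer to tokenizer tokens rather than words, word-level spans have to be reconstructed from the token predictions; with a fast tokenizer, `inputs.word_ids()` can help map tokens back to words.
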
+ ## Code availability
+
+ The code used for training and testing the model is available at https://github.com/br-ai-ns-institute/Zero-ShotNER.
+
+ ## Citation