sultan committed

Commit 4ebc154
1 Parent(s): 101ce71

Update README.md

Files changed (1):
  1. README.md +3 -0
README.md CHANGED
@@ -23,8 +23,11 @@ models.
  # Model Description
  - This model is fine-tuned on the SQuAD2.0 dataset and then on the BioASQ8B-Factoid training dataset. We convert the BioASQ8B-Factoid training dataset to SQuAD1.1 format and train and evaluate our model (BioM-ELECTRA-Base-SQuAD2) on this dataset.
 
+ - You can use this model for prediction (inference) directly, without fine-tuning it. To try it out, enter a PubMed abstract in the context box of this model card, ask a couple of biomedical questions about the given context, and see how it performs compared with the original ELECTRA model. This model should also be useful for building pandemic QA systems (e.g., for COVID-19).
+
  - Please note that this version (PyTorch) is different from the one we used in our BioASQ9B participation (TensorFlow with layer-wise decay). We combine all five batches of the BioASQ8B testing dataset into one dev.json file.
 
+
  - Below are the unofficial results of our models against the original ELECTRA base and large:
 
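The README states that the BioASQ8B-Factoid training data was converted to SQuAD1.1 format before fine-tuning. A minimal sketch of such a conversion, assuming the public BioASQ JSON layout (`questions` entries with `body`, `type`, `exact_answer`, and `snippets[].text`); the function name and the answer-matching strategy are illustrative, not the authors' exact script:

```python
def bioasq_factoid_to_squad(bioasq):
    """Convert BioASQ factoid questions into a SQuAD1.1-style dict (sketch).

    For each snippet of each factoid question, keep the first acceptable
    answer that literally occurs in the snippet text, so that the
    `answer_start` character offset is valid.
    """
    paragraphs = []
    for q in bioasq["questions"]:
        if q.get("type") != "factoid":
            continue
        # exact_answer may be a nested list of acceptable answer strings
        answers = q["exact_answer"]
        if answers and isinstance(answers[0], list):
            answers = [a for sub in answers for a in sub]
        for i, snippet in enumerate(q.get("snippets", [])):
            context = snippet["text"]
            qas = []
            for ans in answers:
                start = context.find(ans)
                if start != -1:
                    qas.append({
                        "id": f"{q['id']}_{i}",
                        "question": q["body"],
                        "answers": [{"text": ans, "answer_start": start}],
                    })
                    break  # one answer span per snippet is enough here
            if qas:
                paragraphs.append({"context": context, "qas": qas})
    return {"version": "1.1",
            "data": [{"title": "BioASQ", "paragraphs": paragraphs}]}
```

The resulting dict can be serialized with `json.dump` and fed to any SQuAD1.1-compatible training or evaluation script; snippets whose listed answers do not appear verbatim are simply skipped.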