sultan committed on
Commit 6bb11dd
1 Parent(s): 7b4612b

Update README.md

Files changed (1)
  1. README.md +10 -9
README.md CHANGED
@@ -60,20 +60,21 @@ To reproduce results in Google Colab:
 - Run this code:
 
 ```shell
-python /content/transformers/examples/pytorch/question-answering/run_qa.py --model_name_or_path sultan/BioM-ELECTRA-Large-SQuAD2 \\
---do_eval \\
---version_2_with_negative \\
---per_device_eval_batch_size 8 \\
---dataset_name squad_v2 \\
---overwrite_output_dir \\
---fp16 \\
+python /content/transformers/examples/pytorch/question-answering/run_qa.py --model_name_or_path sultan/BioM-ELECTRA-Large-SQuAD2 \
+--do_eval \
+--version_2_with_negative \
+--per_device_eval_batch_size 8 \
+--dataset_name squad_v2 \
+--overwrite_output_dir \
+--fp16 \
 --output_dir out
 ```
 
-You don't need to download the SQuAD2 dataset. The code will download it from the HuggingFace datasets hub.
+- You don't need to download the SQuAD2 dataset. The code will download it from the HuggingFace datasets hub.
 
-Check our GitHub repo at https://github.com/salrowili/BioM-Transformers for TensorFlow and GluonNLP checkpoints.
+- Check our GitHub repo at https://github.com/salrowili/BioM-Transformers for TensorFlow and GluonNLP checkpoints.
 
+- We added examples for fine-tuning BioM-ELECTRA-Large on SQuAD and BioASQ7B using TensorFlow and TPU here: https://github.com/salrowili/BioM-Transformers/tree/main/examples. In this example we show that we achieve an 88.22 score on SQuAD2.0, since the TensorFlow code has a layer-wise decay feature.
 
 # Acknowledgment
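The added bullet credits the 88.22 SQuAD2.0 score partly to the TensorFlow code's layer-wise decay feature. As background, a minimal sketch of how layer-wise learning-rate decay can assign per-parameter rates — the parameter names, `base_lr`, `decay`, and layer count below are illustrative assumptions, not the repo's actual implementation:

```python
import re

def layerwise_lr(param_names, base_lr=3e-5, decay=0.9, n_layers=24):
    """Map parameter names (e.g. 'encoder.layer.7.output.dense.weight')
    to learning rates decayed geometrically from the top layer down.
    Depth convention (hypothetical): 0 = embeddings, 1..n_layers = encoder
    layers, n_layers = task head, so lower layers get smaller rates."""
    lrs = {}
    for name in param_names:
        m = re.search(r"encoder\.layer\.(\d+)\.", name)
        if m:
            depth = int(m.group(1)) + 1   # encoder layer i sits at depth i+1
        elif name.startswith("embeddings"):
            depth = 0                     # embeddings decay the most
        else:
            depth = n_layers              # task head keeps the full base lr
        lrs[name] = base_lr * decay ** (n_layers - depth)
    return lrs

# Example: lower layers receive progressively smaller learning rates.
params = [
    "embeddings.word_embeddings.weight",
    "encoder.layer.0.attention.self.query.weight",
    "encoder.layer.23.output.dense.weight",
    "qa_outputs.weight",
]
for name, lr in layerwise_lr(params).items():
    print(f"{name}: {lr:.3e}")
```

The resulting map can feed optimizer parameter groups, so early layers (closer to pretrained features) change slowly while the task head trains at the full rate.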