---
language: en
tags:
  - qnli
  - glue
  - torchdistill
license: apache-2.0
datasets:
  - qnli
metrics:
  - accuracy
---

`bert-large-uncased` fine-tuned on the QNLI dataset, using torchdistill and Google Colab.
The hyperparameters are the same as those in Hugging Face's example and/or the BERT paper, and the training configuration (including hyperparameters) is available here.
I submitted prediction files to the GLUE leaderboard, and the overall GLUE score was 79.1.
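Below is a minimal sketch of loading this checkpoint for QNLI-style inference with `transformers`. The repository id is assumed from the author's username and this model card; adjust it if the model is hosted under a different name.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed repo id for this checkpoint; change if it differs.
model_id = "yoshitomo-matsubara/bert-large-uncased-qnli"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# QNLI pairs a question with a sentence; the model predicts whether the
# sentence answers the question (entailment) or not (not_entailment).
question = "What is the capital of France?"
sentence = "Paris is the capital and most populous city of France."

inputs = tokenizer(question, sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Label names depend on the checkpoint's config (may be LABEL_0 / LABEL_1).
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```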