This is a bert-base-uncased model fine-tuned on the QQP (Quora Question Pairs) task of the GLUE benchmark, using torchdistill and Google Colab.
The hyperparameters match those in Hugging Face's example and/or the original BERT paper, and the training configuration (including hyperparameters) is available here.
I submitted prediction files to the GLUE leaderboard, and the overall GLUE score was 77.9.
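A minimal sketch of using a checkpoint like this for QQP-style duplicate-question classification with the `transformers` library. The model id below is a placeholder (the base `bert-base-uncased` checkpoint), not this fine-tuned model's repo id; substitute the actual repository name when loading this model.

```python
# Sketch: pairwise sentence classification in the QQP style.
# NOTE: "bert-base-uncased" is a placeholder model id; replace it with
# this checkpoint's actual Hugging Face repository name.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "bert-base-uncased"  # placeholder, not the fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)
model.eval()

# QQP inputs are question pairs; the tokenizer joins them with [SEP].
q1 = "How can I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"
inputs = tokenizer(q1, q2, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 2)

pred = logits.argmax(dim=-1).item()  # 0 = not duplicate, 1 = duplicate
print(pred)
```

With the placeholder base model the prediction is meaningless; only the fine-tuned checkpoint produces useful QQP labels.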
