
bert-large-uncased fine-tuned on the SST-2 dataset, using torchdistill and Google Colab.
The hyperparameters match those in Hugging Face's example and/or the BERT paper, and the training configuration (including hyperparameters) is available here.
I submitted prediction files to the GLUE leaderboard, and the overall GLUE score was 80.2.
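A minimal sketch of loading this checkpoint for inference with the `transformers` library. The model ID is taken from this card; the `pipeline` call and the example sentence are illustrative, not part of the original training setup.

```python
from transformers import pipeline

MODEL_ID = "yoshitomo-matsubara/bert-large-uncased-sst2"


def build_classifier():
    # Downloads the fine-tuned weights from the Hugging Face Hub on first use.
    return pipeline("text-classification", model=MODEL_ID)


if __name__ == "__main__":
    classifier = build_classifier()
    # SST-2 is binary sentiment classification (positive / negative).
    print(classifier("A gripping, beautifully acted film."))
```

The same checkpoint can also be loaded with `AutoTokenizer` and `AutoModelForSequenceClassification` if you need direct access to logits.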

