bert-base-uncased finetuned on MNLI
Model Details and Training Data
We started from the pretrained bert-base-uncased model and fine-tuned it on the MultiNLI (MNLI) dataset.
The training hyperparameters were kept the same as in Devlin et al., 2019: learning rate = 2e-5, training epochs = 3, max sequence length = 128, and batch size = 32.
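For reference, here is a minimal sketch of how this setup could be reproduced with the Hugging Face `transformers` and `datasets` libraries. The hyperparameters are the ones stated above; the `Trainer`-based script itself, the `multi_nli` dataset id, and the `output_dir` name are assumptions, not the authors' exact training code.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# MNLI is a 3-way classification task: entailment / neutral / contradiction.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3
)

mnli = load_dataset("multi_nli")

def tokenize(batch):
    # Premise/hypothesis pairs, truncated to max sequence length 128
    # as stated in this card.
    return tokenizer(
        batch["premise"],
        batch["hypothesis"],
        truncation=True,
        max_length=128,
        padding="max_length",
    )

mnli = mnli.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bert-base-uncased-mnli",  # hypothetical output path
    learning_rate=2e-5,                   # as stated in this card
    num_train_epochs=3,                   # as stated in this card
    per_device_train_batch_size=32,       # as stated in this card
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=mnli["train"],
    eval_dataset=mnli["validation_matched"],
)
trainer.train()
```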
The evaluation results are reported in the table below.
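Loading the fine-tuned checkpoint for inference could look like the following sketch. The repository id is a placeholder for this card's actual model id, and the label mapping depends on how `id2label` was set during training.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-username/bert-base-uncased-mnli"  # hypothetical placeholder id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode a premise/hypothesis pair the same way as during fine-tuning.
inputs = tokenizer(
    "A soccer game with multiple males playing.",  # premise
    "Some men are playing a sport.",               # hypothesis
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its label name
# (entailment / neutral / contradiction, per the model's config).
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```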