
# bert-base-uncased finetuned on MNLI

## Model Details and Training Data

We took the pretrained bert-base-uncased model and fine-tuned it on the MultiNLI (MNLI) dataset.
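A minimal sketch of loading the model for premise/hypothesis classification with Transformers; the Hub repository id below is hypothetical and should be replaced with this model's actual id:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "your-org/bert-base-uncased-mnli"  # hypothetical Hub id; substitute the real one

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

# NLI inputs are (premise, hypothesis) pairs; the tokenizer joins them with [SEP].
inputs = tokenizer(
    "A soccer game with multiple males playing.",
    "Some men are playing a sport.",
    return_tensors="pt",
)
probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```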

The training hyperparameters were kept the same as in Devlin et al., 2019: learning rate = 2e-5, training epochs = 3, max_sequence_len = 128, and batch_size = 32.
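For reference, these hyperparameters map onto `transformers.TrainingArguments` roughly as follows; this is a sketch, not the exact training script used for this model:

```python
from transformers import TrainingArguments

# Hyperparameters from Devlin et al., 2019, as listed above.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-mnli",
    learning_rate=2e-5,
    num_train_epochs=3,
    per_device_train_batch_size=32,
)

# max_sequence_len = 128 is applied at tokenization time, e.g.:
# tokenizer(premise, hypothesis, max_length=128, truncation=True, padding="max_length")
```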

## Evaluation Results

The evaluation results are shown in the table below.

| Test Corpus | Accuracy |
| ----------- | -------- |
| Matched     | 0.8456   |
| Mismatched  | 0.8484   |
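The official MNLI test labels are hidden behind the GLUE leaderboard, so the sketch below measures accuracy on the publicly labeled matched/mismatched validation splits instead. It reuses the hypothetical `MODEL_ID` from above and assumes the model's label ids match the `multi_nli` dataset's scheme (0 = entailment, 1 = neutral, 2 = contradiction):

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "your-org/bert-base-uncased-mnli"  # hypothetical Hub id
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID).eval()

def accuracy(split: str) -> float:
    data = load_dataset("multi_nli", split=split)
    correct = 0
    for ex in data:
        inputs = tokenizer(ex["premise"], ex["hypothesis"],
                           truncation=True, max_length=128, return_tensors="pt")
        with torch.no_grad():
            pred = model(**inputs).logits.argmax(dim=-1).item()
        # Assumes the model's label ids align with multi_nli's labels.
        correct += int(pred == ex["label"])
    return correct / len(data)

print("matched:", accuracy("validation_matched"))
print("mismatched:", accuracy("validation_mismatched"))
```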