
DistilRoBERTa for NLI

Model description

This model can be used for Natural Language Inference (NLI) tasks. It is a version of distilroberta-base fine-tuned on multi_nli and the English portion of xnli.
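How to use

A minimal usage sketch with the transformers library. The label names come from the model's config (`id2label`); the exact order stored there should be checked against the model's config.json, since the entailment/neutral/contradiction mapping below is an assumption:

```python
# Hypothetical usage sketch for matekadlicsko/distilroberta-nli.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer


def predict_nli(premise, hypothesis, model, tokenizer):
    """Return the predicted NLI label and its probability for a premise/hypothesis pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    label_id = int(probs.argmax())
    return model.config.id2label[label_id], probs[label_id].item()


if __name__ == "__main__":
    name = "matekadlicsko/distilroberta-nli"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)
    label, score = predict_nli(
        "A man is playing a guitar.", "A person is making music.", model, tokenizer
    )
    print(label, score)
```

The same pair classification can also be run through `pipeline("text-classification")` by passing the premise and hypothesis joined with the tokenizer's separator token.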

Model performance

The model's performance on NLI tasks is as follows:

  • Accuracy on MNLI validation matched: TODO
  • Accuracy on MNLI validation mismatched: TODO
