XLM-RoBERTa large model whole word masking finetuned on SQuAD

Pretrained with a masked language modeling (MLM) objective and fine-tuned on English and Russian QA datasets.

QA datasets used

SQuAD + SberQuAD

The original SberQuAD paper is recommended reading.

Evaluation results

The results obtained on SberQuAD are the following:

f1 = 84.3
exact_match = 65.3
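
The model can be used for extractive QA with the Hugging Face `transformers` question-answering pipeline. A minimal sketch follows; note that `MODEL_ID` is a placeholder, since the exact Hub repository id is not stated in this card, and `answer_question` is a hypothetical helper for illustration.

```python
# Placeholder repo id -- substitute the actual Hub id of this checkpoint.
MODEL_ID = "your-namespace/xlm-roberta-large-qa"

def answer_question(question: str, context: str, model_id: str = MODEL_ID) -> str:
    """Run extractive QA and return the predicted answer span."""
    # Imported lazily so importing this module does not download the model.
    from transformers import pipeline
    qa = pipeline("question-answering", model=model_id, tokenizer=model_id)
    result = qa(question=question, context=context)
    return result["answer"]

if __name__ == "__main__":
    context = "SberQuAD is a Russian reading-comprehension dataset modeled on SQuAD."
    print(answer_question("What is SberQuAD modeled on?", context))
```

Because the model handles both English and Russian, the same pipeline call works with Russian questions and contexts without any changes.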
