---
license: apache-2.0
datasets:
  - tay-yozhik/NaturalText
language:
  - ru
---

# NaturalRoBERTa

This is a pre-trained RoBERTa-type model. NaturalRoBERTa was trained on a dataset collected from open sources: three news sub-corpora of the Taiga corpus (Lenta.ru, Interfax, N+1) and Russian Wikipedia texts.
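As a RoBERTa-type checkpoint, the model can be used for masked-token prediction out of the box. Below is a minimal usage sketch with the 🤗 Transformers `fill-mask` pipeline; it assumes the checkpoint is published on the Hub as `tay-yozhik/NaturalRoBERTa` (an assumption inferred from this repository's name, not stated in the card).

```python
from transformers import pipeline

# Load the checkpoint for masked-token prediction.
# NOTE: the model id below is an assumption based on the repo name.
fill_mask = pipeline("fill-mask", model="tay-yozhik/NaturalRoBERTa")

# Build a prompt using whatever mask token the tokenizer defines
# (for RoBERTa-style tokenizers this is usually "<mask>").
masked = f"Столица России — {fill_mask.tokenizer.mask_token}."  # "The capital of Russia is <mask>."

# Print the top predicted tokens with their scores.
for prediction in fill_mask(masked):
    print(prediction["token_str"], round(prediction["score"], 3))
```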

## Evaluation

The model was evaluated on the RussianSuperGLUE benchmark tasks:

| Task    | Result        | Metric                           |
|---------|---------------|----------------------------------|
| LiDiRus | 0.0           | Matthews Correlation Coefficient |
| RCB     | 0.217 / 0.484 | F1 / Accuracy                    |
| PARus   | 0.498         | Accuracy                         |
| TERRa   | 0.487         | Accuracy                         |
| RUSSE   | 0.587         | Accuracy                         |
| RWSD    | 0.669         | Accuracy                         |
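The card does not describe the evaluation setup. As a rough illustration only, the sketch below shows how a RussianSuperGLUE task such as TERRa (a textual-entailment task with premise/hypothesis pairs) could be prepared for fine-tuning; both the model id `tay-yozhik/NaturalRoBERTa` and the dataset id `RussianNLP/russian_super_glue` are assumptions, not confirmed by this card.

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed identifiers; verify against the Hub before running.
MODEL_ID = "tay-yozhik/NaturalRoBERTa"
DATASET_ID = "RussianNLP/russian_super_glue"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# TERRa is binary entailment, hence two labels.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

terra = load_dataset(DATASET_ID, "terra")

def encode(batch):
    # Encode premise/hypothesis pairs as a single sequence-pair input.
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True)

encoded = terra.map(encode, batched=True)
# From here, the standard 🤗 Trainer fine-tuning recipe applies,
# reporting accuracy on the validation split.
```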