⚠️ Disclaimer: This model is in the early stages of development and may produce low-quality predictions. For better results, consider using the recommended Russian natural language inference models available here.
RuBERT-tiny-nli v0
This model is an initial attempt to fine-tune RuBERT-tiny2 for two-way (binary) natural language inference, using the Textual Entailment Recognition for Russian (TERRa) dataset. It aims to improve understanding of Russian text, but its performance is currently limited.
Usage
How to run the model for NLI:
```python
# !pip install transformers sentencepiece --quiet
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = 'Marwolaeth/rubert-tiny-nli-terra-v0'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
if torch.cuda.is_available():
    model.cuda()

# An example from the base model card
premise1 = 'Сократ - человек, а все люди смертны.'  # "Socrates is a man, and all men are mortal."
hypothesis1 = 'Сократ никогда не умрёт.'            # "Socrates will never die."

with torch.inference_mode():
    prediction = model(
        **tokenizer(premise1, hypothesis1, return_tensors='pt').to(model.device)
    )
p = torch.softmax(prediction.logits, -1).cpu().numpy()[0]
print({v: p[k] for k, v in model.config.id2label.items()})
# {'not_entailment': 0.7698182, 'entailment': 0.23018183}

# An example concerning sentiments
premise2 = 'Я ненавижу желтые занавески'       # "I hate yellow curtains."
hypothesis2 = 'Мне нравятся желтые занавески'  # "I like yellow curtains."

with torch.inference_mode():
    prediction = model(
        **tokenizer(premise2, hypothesis2, return_tensors='pt').to(model.device)
    )
p = torch.softmax(prediction.logits, -1).cpu().numpy()[0]
print({v: p[k] for k, v in model.config.id2label.items()})
# {'not_entailment': 0.60584205, 'entailment': 0.3941579}
```
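For convenience, the snippet above can be wrapped in a small helper. This is only an illustrative sketch; the `predict_entailment` name is not part of the model's API.

```python
def predict_entailment(premise: str, hypothesis: str) -> dict:
    """Return {label: probability} for a single premise/hypothesis pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors='pt').to(model.device)
    with torch.inference_mode():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, -1).cpu().numpy()[0]
    # id2label maps class indices to 'not_entailment' / 'entailment'
    return {label: float(probs[i]) for i, label in model.config.id2label.items()}

print(predict_entailment(premise1, hypothesis1))
# Should roughly match the first example above
```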
Model Performance Metrics
The following metrics summarize model performance on the TERRa validation set:
| Metric | Value |
|---|---|
| Validation Loss | 0.6261 |
| Validation Accuracy | 66.78% |
| Validation F1 Score | 66.67% |
| Validation Precision | 66.67% |
| Validation Recall | 66.67% |
| Validation Runtime* | 0.7043 s |
| Samples per Second* | 435.88 |
| Steps per Second* | 14.20 |
*Measured on a T4 GPU in Google Colab.
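The sketch below shows one way these numbers could be reproduced. It assumes TERRa is available as the `terra` configuration of the Russian SuperGLUE dataset on the Hugging Face Hub and that its label names match the model's (`entailment` / `not_entailment`); adjust the dataset identifier if your setup differs.

```python
# Sketch: recomputing validation accuracy on TERRa (dataset id is an assumption).
from datasets import load_dataset

terra = load_dataset('RussianNLP/russian_super_glue', 'terra', split='validation')
label_names = terra.features['label'].names  # e.g. ['entailment', 'not_entailment']

correct = 0
for example in terra:
    inputs = tokenizer(example['premise'], example['hypothesis'],
                       return_tensors='pt', truncation=True).to(model.device)
    with torch.inference_mode():
        pred_id = model(**inputs).logits.argmax(-1).item()
    # Compare by label name so the dataset's and the model's label orders need not match.
    if model.config.id2label[pred_id] == label_names[example['label']]:
        correct += 1

print(f'Validation accuracy: {correct / len(terra):.4f}')
```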
Model tree for Marwolaeth/rubert-tiny-nli-terra-v0
Base model: cointegrated/rubert-tiny2