---
license: mit
language:
- ru
metrics:
- f1
- roc_auc
- precision
- recall
pipeline_tag: text-classification
tags:
- sentiment-analysis
- multi-class-classification
- sentiment analysis
- rubert
- sentiment
- bert
- russian
- multiclass
- classification
datasets:
- sismetanin/rureviews
- RuSentiment
- LinisCrowd2015
- LinisCrowd2016
- KaggleRussianNews
---
This is a [RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased) model fine-tuned for __sentiment classification__ of short __Russian__ texts.
The task is __multi-class classification__ with the following labels:
```yaml
0: neutral
1: positive
2: negative
```
Mapping of the labels to their Russian names:
```yaml
neutral: нейтральный
positive: позитивный
negative: негативный
```
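The same label names are exposed through the model config, so the integer-to-name mapping does not have to be hard-coded. A minimal check, assuming the `id2label` field of the config mirrors the table above:
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("seara/rubert-base-cased-russian-sentiment")
print(config.id2label)
# Expected, per the mapping above: {0: 'neutral', 1: 'positive', 2: 'negative'}
```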
## Usage
```python
from transformers import pipeline
model = pipeline(model="seara/rubert-base-cased-russian-sentiment")
model("Привет, ты мне нравишься!")
# [{'label': 'positive', 'score': 0.9818321466445923}]
```
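If you need the probabilities for all three classes rather than only the top one, the model can also be used without the pipeline. A minimal sketch with the plain `AutoTokenizer`/`AutoModelForSequenceClassification` API (the printed scores are illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "seara/rubert-base-cased-russian-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "Привет, ты мне нравишься!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze()

# Report every class with its probability, using the config's id2label mapping
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], round(p.item(), 4))
```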
## Dataset
This model was trained on the union of the following datasets:
- Kaggle Russian News Dataset
- Linis Crowd 2015
- Linis Crowd 2016
- RuReviews
- RuSentiment
An overview of the training data can be found in [S. Smetanin's GitHub repository](https://github.com/sismetanin/sentiment-analysis-in-russian).
__Download links for all Russian sentiment datasets collected by Smetanin can be found in this [repository](https://github.com/searayeah/russian-sentiment-emotion-datasets).__
## Training
Training was done in this [project](https://github.com/searayeah/bert-russian-sentiment-emotion) with the following parameters (a fine-tuning sketch follows at the end of this section):
```yaml
tokenizer.max_length: 256
batch_size: 32
optimizer: adam
lr: 0.00001
weight_decay: 0
epochs: 2
```
Train/validation/test splits are 80%/10%/10%.
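A minimal fine-tuning sketch with the `transformers` `Trainer`, using the hyperparameters listed above. This is not the exact training code (which lives in the linked project); the tiny inline dataset is a hypothetical stand-in for the real 80% train split, and `Trainer`'s default AdamW optimizer stands in for `optimizer: adam`:
```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "DeepPavlov/rubert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(
    base_model,
    num_labels=3,
    id2label={0: "neutral", 1: "positive", 2: "negative"},
)

# Hypothetical toy data standing in for the combined sentiment datasets
raw = Dataset.from_dict({
    "text": ["Обычный день.", "Отличный сервис!", "Ужасный товар."],
    "label": [0, 1, 2],
})

def preprocess(batch):
    # Truncate to the 256-token limit used during fine-tuning
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = raw.map(preprocess, batched=True)

args = TrainingArguments(
    output_dir="rubert-russian-sentiment",
    learning_rate=1e-5,                # lr
    weight_decay=0.0,                  # weight_decay
    num_train_epochs=2,                # epochs
    per_device_train_batch_size=32,    # batch_size
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```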
## Eval results (on test split)
| |neutral|positive|negative|macro avg|weighted avg|
|---------|-------|--------|--------|---------|------------|
|precision|0.72 |0.85 |0.75 |0.77 |0.77 |
|recall |0.75 |0.84 |0.72 |0.77 |0.77 |
|f1-score |0.73 |0.84 |0.73 |0.77 |0.77 |
|auc-roc |0.86 |0.96 |0.92 |0.91 |0.91 |
|support |5196 |3831 |3599 |12626 |12626 |