---
language: ["ru"]
tags:
- russian
- classification
- toxicity
- multilabel
widget:
- text: "Иди ты нафиг!"
---
This is the [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) model fine-tuned to classify toxicity and inappropriateness in short informal Russian texts, such as comments on social networks.

The problem is formulated as multilabel classification with the following classes:
- `non-toxic`: the text does NOT contain insults, obscenities, or threats, in the sense of the [OK ML Cup](https://cups.mail.ru/ru/tasks/1048) competition.
- `insult`
- `obscenity`
- `threat`
- `dangerous`: the text is inappropriate in the sense of [Babakov et al.](https://arxiv.org/abs/2103.05345), i.e. it can harm the reputation of the speaker.

A text can be considered safe if it is BOTH `non-toxic` and NOT `dangerous`. 
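
Under a simple independence assumption (not stated explicitly in the card, but consistent with the aggregation used in the code below), the probability that a text is safe can be estimated as `P(safe) = P(non-toxic) * (1 - P(dangerous))`; the aggregate score returned in the Usage section is its complement, `1 - P(non-toxic) * (1 - P(dangerous))`.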

## Usage

The function below estimates the probability that the text is either toxic OR dangerous:
```python
# !pip install transformers sentencepiece --quiet
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_checkpoint = 'cointegrated/rubert-tiny-toxicity'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)
if torch.cuda.is_available():
    model.cuda()
    
def text2toxicity(text, aggregate=True):
    """ Calculate toxicity of a text (if aggregate=True) or a vector of toxicity aspects (if aggregate=False)"""
    with torch.no_grad():
        inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True).to(model.device)
        proba = torch.sigmoid(model(**inputs).logits).cpu().numpy()
    if isinstance(text, str):
        proba = proba[0]
    if aggregate:
        # Probability of (toxic OR dangerous) = 1 - P(non-toxic) * P(not dangerous)
        return 1 - proba.T[0] * (1 - proba.T[-1])
    return proba

print(text2toxicity('я люблю нигеров', True))
# 0.57240640889815

print(text2toxicity('я люблю нигеров', False))
# [9.9336821e-01 6.1555761e-03 1.2781911e-03 9.2758919e-04 5.6955177e-01]

print(text2toxicity(['я люблю нигеров', 'я люблю африканцев'], True))
# [0.5724064  0.20111847]

print(text2toxicity(['я люблю нигеров', 'я люблю африканцев'], False))
# [[9.9336821e-01 6.1555761e-03 1.2781911e-03 9.2758919e-04 5.6955177e-01]
#  [9.9828428e-01 1.1138428e-03 1.1492912e-03 4.6551935e-04 1.9974548e-01]]
```
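
If per-label decisions are needed, the probabilities returned with `aggregate=False` can be thresholded directly. The sketch below assumes the label order matches the class list above and uses an arbitrary 0.5 cutoff for illustration; it is not part of the original card.
```python
# Hypothetical helper built on top of text2toxicity(); the 0.5 threshold and the
# label order are assumptions, so verify/tune them for your own use case.
LABELS = ['non-toxic', 'insult', 'obscenity', 'threat', 'dangerous']

def text2flags(text, threshold=0.5):
    proba = text2toxicity(text, aggregate=False)
    return {label: bool(p >= threshold) for label, p in zip(LABELS, proba)}

print(text2flags('я люблю африканцев'))
# e.g. {'non-toxic': True, 'insult': False, 'obscenity': False, 'threat': False, 'dangerous': False}
```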

## Training

The model has been trained on the joint dataset of [OK ML Cup](https://cups.mail.ru/ru/tasks/1048) and [Babakov et al.](https://arxiv.org/abs/2103.05345) with the `Adam` optimizer, a learning rate of `1e-5`, and a batch size of `128` for `5` epochs. The data was not filtered in any way. A text was considered inappropriate if its inappropriateness score was higher than 0.2. The per-label ROC AUC on the dev set is:
```
non-toxic  : 0.9909
insult     : 0.9882
obscenity  : 0.9824
threat     : 0.9868
dangerous  : 0.7758
```
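
A minimal sketch of this fine-tuning setup is shown below. The placeholder dataset, the collate function, and the use of `problem_type='multi_label_classification'` (which makes `transformers` apply a BCE-with-logits loss over the five labels) are assumptions for illustration; they are not taken from the original training code.
```python
# Hypothetical reconstruction of the fine-tuning loop described above.
# `train_pairs` is a placeholder: (text, [non-toxic, insult, obscenity, threat, dangerous])
# pairs with float labels; the real joint OK ML Cup + Babakov et al. data is not reproduced here.
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

base_checkpoint = 'cointegrated/rubert-tiny'
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    base_checkpoint,
    num_labels=5,
    problem_type='multi_label_classification',  # BCEWithLogitsLoss over multilabel targets
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

train_pairs = [('пример текста', [1.0, 0.0, 0.0, 0.0, 0.0])]  # placeholder data

def collate(batch):
    texts, labels = zip(*batch)
    enc = tokenizer(list(texts), truncation=True, padding=True, return_tensors='pt')
    enc['labels'] = torch.tensor(labels, dtype=torch.float)
    return enc

loader = DataLoader(train_pairs, batch_size=128, shuffle=True, collate_fn=collate)

model.train()
for epoch in range(5):
    for batch in loader:
        optimizer.zero_grad()
        loss = model(**batch).loss  # BCE loss over the five labels
        loss.backward()
        optimizer.step()
```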