amarkv committed
Commit 2db9271
1 Parent(s): 0139ea5

Update README.md

Files changed (1): README.md (+10 -10)

# Training

[*Skoltech/russian-inappropriate-messages*](https://huggingface.co/Skoltech/russian-inappropriate-messages) was finetuned on a multiclass dataset with four classes (*check the exact mapping between idx and label in* `model.config`).

1) OK label -- the message is acceptable in context and does not intend to offend or otherwise harm the speaker's reputation.
2) Toxic label -- the message might be seen as offensive in the given context.
3) Severe toxic label -- the message is offensive, full of anger, and was written to provoke a fight or other discomfort.
4) Risks label -- the message touches on sensitive topics (e.g. religion, politics) and can harm the speaker's reputation.
 
The model was finetuned on a soon-to-be-posted dataset of dialogs.
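
The mapping between class index and label lives in `model.config` (`id2label`). As a minimal sketch of how a predicted index resolves to one of the four labels above (the mapping and logits here are illustrative assumptions, not the actual config values):

```python
# Hypothetical idx -> label mapping; read the authoritative one from
# model.config.id2label, as the order may differ.
id2label = {0: "OK", 1: "toxic", 2: "severe_toxic", 3: "risks"}

# Placeholder logits for a single message.
logits = [2.1, 0.3, -1.2, 0.5]

# The predicted class is the argmax over the four logits.
predicted_idx = max(range(len(logits)), key=lambda i: logits[i])
print(id2label[predicted_idx])  # OK
```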
 
# Evaluation results

The model achieves the following results on the validation datasets (to be posted soon):
 
|         | OK - F1-score | TOXIC - F1-score | SEVERE TOXIC - F1-score | RISKS - F1-score |
|---------|---------------|------------------|-------------------------|------------------|
| twitter | 0.896         | 0.348            | 0.490                   | 0.591            |
| chats   | 0.940         | 0.295            | 0.729                   | 0.46             |
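
The table reports per-class F1. As a pure-Python sketch of how per-class F1 is computed from predictions (the labels below are made-up toy data, not the validation sets):

```python
def per_class_f1(y_true, y_pred, num_classes):
    """F1 = 2 * precision * recall / (precision + recall), computed per class."""
    scores = []
    for c in range(num_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return scores

# Toy data: class indices 0=OK, 1=toxic, 2=severe toxic, 3=risks.
y_true = [0, 0, 1, 2, 3, 3]
y_pred = [0, 0, 1, 3, 3, 3]
print(per_class_f1(y_true, y_pred, 4))  # approximately [1.0, 1.0, 0.0, 0.8]
```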
 
# Use in transformers
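
A usage sketch for this section, assuming the standard transformers sequence-classification API. The checkpoint id below is a stand-in (the parent model linked above), since this excerpt does not name the finetuned checkpoint -- substitute the actual model id:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Stand-in checkpoint id -- replace with the finetuned model's id.
model_name = "Skoltech/russian-inappropriate-messages"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Tokenize a single Russian message and run one forward pass.
inputs = tokenizer("Пример сообщения для классификации", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Resolve the argmax index to a human-readable label via the config.
predicted_idx = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_idx])
```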