---
license: bigscience-bloom-rail-1.0
language:
- fr
- en
pipeline_tag: text-classification
---


Bloomz-3b-guardrail
---------------------

We introduce the Bloomz-3b-guardrail model, a fine-tuned version of the [Bloomz-3b-sft-chat](https://huggingface.co/cmarkea/bloomz-3b-sft-chat) model. It is designed to detect the toxicity of a text along five modes:

* Obscene: Content that is offensive, indecent, or morally inappropriate, especially in relation to social norms or standards of decency.
* Sexual explicit: Content that presents explicit sexual aspects in a clear and detailed manner.
* Identity attack: Content that aims to attack, denigrate, or harass someone based on their identity, especially related to characteristics such as race, gender, sexual orientation, religion, ethnic origin, or other personal aspects.
* Insult: Offensive, disrespectful, or hurtful content used to attack or denigrate a person.
* Threat: Content that presents a direct threat to an individual.

Training
--------

The training dataset consists of 500k comments in English and 500k comments in French (translated with Google Translate), each annotated with a toxicity severity gradient. The dataset is provided by [Jigsaw](https://jigsaw.google.com/) as part of a Kaggle competition: [Jigsaw Unintended Bias in Toxicity Classification](https://www.kaggle.com/competitions/jigsaw-unintended-bias-in-toxicity-classification/data). Since the scores represent severity gradients, a regression objective was preferred, using the following loss function:
$$loss=l_{\mathrm{obscene}}+l_{\mathrm{sexual\_explicit}}+l_{\mathrm{identity\_attack}}+l_{\mathrm{insult}}+l_{\mathrm{threat}}$$
with
$$l_i=\frac{1}{\vert\mathcal{O}\vert}\sum_{o\in\mathcal{O}}\vert\mathrm{score}_{i,o}-\sigma(\mathrm{logit}_{i,o})\vert$$
where $\sigma$ is the sigmoid function and $\mathcal{O}$ is the set of training observations.
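
This is not the training code itself, but a minimal sketch of this loss, assuming the model emits one logit per toxicity mode and the targets are severity scores in [0, 1], batched as `(batch_size, 5)` tensors:

```python
import torch

def guardrail_loss(logits: torch.Tensor, scores: torch.Tensor) -> torch.Tensor:
    """Sum over the five toxicity modes of the mean absolute error
    between the target severity score and sigmoid(logit).

    logits, scores: tensors of shape (batch_size, 5), one column per mode
    (obscene, sexual_explicit, identity_attack, insult, threat).
    """
    per_mode_mae = (scores - torch.sigmoid(logits)).abs().mean(dim=0)  # l_i for each mode
    return per_mode_mae.sum()  # loss = sum of the five l_i terms
```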

Benchmark
---------

As the scores range from 0 to 1, a performance measure such as MAE or RMSE can be difficult to interpret. The Pearson correlation coefficient was therefore chosen as the metric. It ranges from -1 to 1, where 0 indicates no correlation, -1 a perfect negative correlation, and 1 a perfect positive correlation. The goal is to quantify how well the model's scores correlate with the scores assigned by judges on 750 comments not seen during training.

| Model                                                                         | Language | Obscene (x100) | Sexual explicit (x100) | Identity attack (x100) | Insult (x100) | Threat (x100) | Mean (x100) |
|-------------------------------------------------------------------------------|----------|:--------------:|:----------------------:|:----------------------:|:-------------:|:-------------:|:-----------:|
| [Bloomz-560m-guardrail](https://huggingface.co/cmarkea/bloomz-560m-guardrail) | French   | 62                      | 73                            | 73                            | 68                   | 61                   | 67   |
| [Bloomz-560m-guardrail](https://huggingface.co/cmarkea/bloomz-560m-guardrail) | English  | 63                      | 61                            | 63                            | 67                   | 55                   | 62   |
| [Bloomz-3b-guardrail](https://huggingface.co/cmarkea/bloomz-3b-guardrail)     | French   | 72                      | 82                            | 80                            | 78                   | 77                   | 78   |
| [Bloomz-3b-guardrail](https://huggingface.co/cmarkea/bloomz-3b-guardrail)     | English  | 76                      | 78                            | 77                            | 75                   | 79                   | 77   |

With a mean correlation of approximately 65 for the 560m model and approximately 77 for the 3b model, the outputs are strongly correlated with the judges' scores.
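
For reference, the per-mode correlation in the table corresponds to a computation along these lines (the arrays below are placeholder values, not the benchmark data):

```python
import numpy as np

# Placeholder scores for one toxicity mode, one entry per held-out comment.
model_scores = np.array([0.05, 0.80, 0.30, 0.95])
judge_scores = np.array([0.00, 0.75, 0.40, 1.00])

# Pearson correlation coefficient between the model's scores and the judges' scores.
pearson_r = np.corrcoef(model_scores, judge_scores)[0, 1]
print(f"Pearson correlation (x100): {100 * pearson_r:.0f}")
```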

How to Use Bloomz-3b-guardrail
--------------------------------

The following example uses the Pipeline API of the Transformers library.

```python
from transformers import pipeline

guardrail = pipeline("text-classification", "cmarkea/bloomz-3b-guardrail")

list_text = [...]  # list of input texts to score
result = guardrail(
    list_text,
    return_all_scores=True,  # required to return a score for all five toxicity modes
    function_to_apply='sigmoid'  # maps the logits to scores between 0 and 1
)
```
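
The pipeline returns, for each input text, a list of `{"label": ..., "score": ...}` dictionaries, one per toxicity mode (the label names come from the model's `id2label` configuration). A possible way to post-process the result, as a sketch:

```python
# Assumes `list_text` and `result` from the snippet above.
for text, mode_scores in zip(list_text, result):
    # Map each toxicity mode to its score in [0, 1].
    scores = {entry["label"]: entry["score"] for entry in mode_scores}
    print(text, scores)
```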

Citation
--------

```bibtex
@online{DeBloomzGuard,
  AUTHOR = {Cyrile Delestre},
  ORGANIZATION = {Cr{\'e}dit Mutuel Ark{\'e}a},
  URL = {https://huggingface.co/cmarkea/bloomz-3b-guardrail},
  YEAR = {2023},
  KEYWORDS = {NLP ; Transformers ; LLM ; Bloomz},
}
```