---
language:
- en
license: apache-2.0
---
# Text Classification Toxicity
This model is a fine-tuned version of [MiniLMv2-L6-H384](https://huggingface.co/nreimers/MiniLMv2-L6-H384-distilled-from-BERT-Large) on the [Jigsaw 1st Kaggle competition](https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge) dataset, using [unitary/toxic-bert](https://huggingface.co/unitary/toxic-bert) as the teacher model.
The original unquantized model can be found [here](https://huggingface.co/minuva/MiniLMv2-toxic-jigsaw-lite).
The model contains only two labels (toxicity and severe toxicity). For the model with all labels, refer to this [page](https://huggingface.co/minuva/MiniLMv2-toxic-jigsaw).
# Optimum
## Installation
Install from source:
```bash
python -m pip install "optimum[onnxruntime]@git+https://github.com/huggingface/optimum.git"
```
## Run the Model
```py
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
# Load the exported ONNX model and its tokenizer from the Hub
model = ORTModelForSequenceClassification.from_pretrained('minuva/MiniLMv2-toxic-jigsaw-lite-onnx', provider="CPUExecutionProvider")
tokenizer = AutoTokenizer.from_pretrained('minuva/MiniLMv2-toxic-jigsaw-lite-onnx', use_fast=True, model_max_length=256, truncation=True, padding='max_length')

pipe = pipeline(task='text-classification', model=model, tokenizer=tokenizer)

texts = ["This is pure trash"]
pipe(texts)
# [{'label': 'toxic', 'score': 0.6553249955177307}]
```
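The repository also contains an optimized and quantized graph (`model_optimized_quantized.onnx`, used in the ONNX Runtime section below). A hedged sketch of loading it through Optimum's `file_name` argument:
```py
# Load the quantized graph instead of the default ONNX file
model = ORTModelForSequenceClassification.from_pretrained(
    'minuva/MiniLMv2-toxic-jigsaw-lite-onnx',
    file_name="model_optimized_quantized.onnx",  # file name taken from the ONNX Runtime section below
    provider="CPUExecutionProvider",
)
```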
# ONNX Runtime only
A lighter deployment option that requires only the `tokenizers` and `onnxruntime` packages.
## Installation
```bash
pip install tokenizers
pip install onnxruntime
git clone https://huggingface.co/minuva/MiniLMv2-toxic-jigsaw-lite-onnx
```
## Load the Model
```py
import os
import numpy as np
import json
from tokenizers import Tokenizer
from onnxruntime import InferenceSession
model_name = "minuva/MiniLMv2-toxic-jigsaw-lite-onnx"
tokenizer = Tokenizer.from_pretrained(model_name)
tokenizer.enable_padding()
tokenizer.enable_truncation(max_length=256)
batch_size = 16

texts = ["This is pure trash"]
outputs = []

model = InferenceSession("MiniLMv2-toxic-jigsaw-lite-onnx/model_optimized_quantized.onnx", providers=['CPUExecutionProvider'])

with open(os.path.join("MiniLMv2-toxic-jigsaw-lite-onnx", "config.json"), "r") as f:
    config = json.load(f)

output_names = [output.name for output in model.get_outputs()]
input_names = [input.name for input in model.get_inputs()]

for subtexts in np.array_split(np.array(texts), len(texts) // batch_size + 1):
    encodings = tokenizer.encode_batch(list(subtexts))
    inputs = {
        "input_ids": np.vstack([encoding.ids for encoding in encodings]),
        "attention_mask": np.vstack([encoding.attention_mask for encoding in encodings]),
        "token_type_ids": np.vstack([encoding.type_ids for encoding in encodings]),
    }

    for input_name in input_names:
        if input_name not in inputs:
            raise ValueError(f"Input name {input_name} not found in inputs")

    inputs = {input_name: inputs[input_name] for input_name in input_names}
    output = np.squeeze(
        np.stack(model.run(output_names=output_names, input_feed=inputs)),
        axis=0,
    )
    outputs.append(output)

outputs = np.concatenate(outputs, axis=0)
# Multi-label head: sigmoid yields an independent probability per label
scores = 1 / (1 + np.exp(-outputs))

results = []
for item in scores:
    labels = []
    item_scores = []  # renamed to avoid shadowing the `scores` array above
    for idx, s in enumerate(item):
        labels.append(config["id2label"][str(idx)])
        item_scores.append(float(s))
    results.append({"labels": labels, "scores": item_scores})

# Keep only the highest-scoring label per text
res = []
for result in results:
    joined = list(zip(result['labels'], result['scores']))
    max_score = max(joined, key=lambda x: x[1])
    res.append(max_score)

print(res)
# [('toxic', 0.6553249955177307)]
```
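The snippet above keeps only the top label per text. If you instead want a binary decision per label, a hedged sketch applying a decision threshold (0.5 is an assumed cutoff, not a value published with this model):
```py
# Hedged sketch: threshold each label's probability independently.
# 0.5 is an assumed default, not a value published with this model.
THRESHOLD = 0.5
flags = [
    {label: score >= THRESHOLD for label, score in zip(r["labels"], r["scores"])}
    for r in results
]
# For the sample text, flags[0]["toxic"] is True (score ~0.655)
```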
# Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 48
- eval_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- warmup_ratio: 0.1
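A minimal sketch mapping these values onto `transformers.TrainingArguments`; the `output_dir` is a placeholder, and the distillation training loop itself is not shown:
```py
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="minilmv2-toxic-jigsaw-lite",  # placeholder, not from this card
    learning_rate=6e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    warmup_ratio=0.1,
)
```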
# Metrics (comparison with teacher model)
| Teacher (params) | Student (params) | Set (metric) | Score (teacher) | Score (student) |
|------------------|------------------|--------------|-----------------|-----------------|
| unitary/toxic-bert (110M) | MiniLMv2-toxic-jigsaw-lite (23M) | Test (ROC_AUC) | 0.982677 | 0.9806 |
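For reference, a score like the one above can be computed with `sklearn.metrics.roc_auc_score` on the model's sigmoid probabilities. The arrays below are toy placeholders, not the Jigsaw test split, and macro averaging over the two labels is an assumption of this sketch:
```py
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy placeholders: columns are (toxic, severe_toxic); not real Jigsaw data
y_true = np.array([[1, 0], [0, 0], [1, 1], [0, 0]])
y_score = np.array([[0.9, 0.2], [0.1, 0.0], [0.8, 0.7], [0.3, 0.1]])
print(roc_auc_score(y_true, y_score, average="macro"))  # 1.0 on this toy data
```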
# Deployment
Check our [fast-nlp-text-toxicity repository](https://github.com/minuva/fast-nlp-text-toxicity) for a FastAPI- and ONNX-based server to deploy this model on CPU devices.