
johnpaulbin/toxic-MiniLM-L6-H384-uncased

Tests whether a sentence is toxic. Works only for English sentences.

Usage

Basic classification. Labels: [NOT TOXIC, TOXIC]

Install setfit:

```
pip install setfit
```

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("johnpaulbin/beanbox-toxic")

inpt = ""  #@param {type:"string"}

# predict_proba returns one row of class probabilities per input,
# in label order [NOT TOXIC, TOXIC]
out = model.predict_proba([inpt])

if out[0][0] > out[0][1]:
    print("Not toxic")
else:
    print("Toxic!")

print(f"NOT TOXIC: {out[0][0]}\nTOXIC: {out[0][1]}")
```
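For batches of sentences, the same thresholding logic can be factored into a small helper. This is a sketch, not part of the model's API: `to_labels` is a hypothetical function, and the probability rows below are made-up examples, assuming the `[NOT TOXIC, TOXIC]` column order stated above.

```python
# Hypothetical helper: map each probability row from predict_proba
# to a label string, assuming column order [NOT TOXIC, TOXIC].
def to_labels(probs, threshold=0.5):
    return ["TOXIC" if row[1] >= threshold else "NOT TOXIC" for row in probs]

# Made-up probability rows for illustration (not real model output):
print(to_labels([[0.9, 0.1], [0.2, 0.8]]))  # → ['NOT TOXIC', 'TOXIC']
```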