Description: Toxic comment classification
Original dataset: https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification
---
Try querying this adapter for free in LoRA Land at https://predibase.com/lora-land!
Adapter category: Sentiment Detection
Adapter name: Toxicity Detection (Jigsaw)
---
Sample input:

You are a helpful, precise, detailed, and concise artificial intelligence assistant. You are a very intelligent and sensitive, having a keen ability to discern whether or not a text message is toxic. You can also be trusted with following the instructions given to you precisely, without deviations.
In this task, you are asked to decide whether or not comment text is toxic.
Toxic content harbors negativity towards a person or a group, for instance:
- stereotyping (especially using negative stereotypes)
- disparaging a person's gender -- as in "male", "female", "men", "women"
- derogatory language or slurs
- racism -- as in discriminating toward people who are "black", "white"
- cultural appropriation
- mockery or ridicule
- sexual objectification
- homophobia -- bullying people who are "homosexual", "gay", "lesbian"
- historical insensitivity
- disrespecting religion -- as in "christian", "jewish", "muslim"
- saying that certain groups are less worthy of respect
- insensitivity to health conditions -- as in "psychiatric/mental illness"

Read the comment text provided and predict whether or not the comment text is toxic. If comment text is toxic according to the instructions, then the answer is "yes" (return "yes"); otherwise, the answer is "no" (return "no").
Output the answer only as a "yes" or a "no"; do not provide explanations.
Please, never return empty output; always return a "yes" or a "no" answer.
You will be evaluated based on the following criteria:
- The generated answer is always "yes" or "no" (never the empty string, "").
- The generated answer is correct for the comment text presented to you.
### Comment Text: This one is the tumor that was cut away from Woodward a different kind of reporting is what is going on now not factual
### Comment Text Is Toxic (Yes/No):
---
Sample output: no
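
The adapter expects this exact prompt format at inference time. Below is a minimal sketch of a prompt-builder helper; the names INSTRUCTIONS and build_prompt are illustrative, not part of this repository, and INSTRUCTIONS must be filled with the instruction block from the sample input above, verbatim.

# Illustrative helper (not part of this repo): wrap a comment in the
# yes/no prompt format shown under "Sample input" above.
# Paste the full instruction block (everything before the first
# "### Comment Text:" line) into INSTRUCTIONS, verbatim.
INSTRUCTIONS = "<paste instruction block here>"

def build_prompt(comment: str) -> str:
    """Template a comment into the adapter's expected prompt."""
    return (
        f"{INSTRUCTIONS}\n"
        f"### Comment Text: {comment}\n"
        "### Comment Text Is Toxic (Yes/No):"
    )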
---
Try using this adapter yourself!

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
peft_model_id = "predibase/jigsaw"

# Load the base model and tokenizer, then attach the LoRA adapter
# (loading the adapter requires the peft package to be installed).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
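
Once the base model, tokenizer, and adapter are loaded, classification is a plain generate call. The snippet below is a hedged sketch, not a documented recipe: it reuses the illustrative build_prompt helper from above, and the decoding settings (max_new_tokens=3, greedy decoding) are assumptions rather than values published for this adapter.

import torch

# Classify the sample comment from above; the adapter is trained to
# answer "yes" or "no".
prompt = build_prompt(
    "This one is the tumor that was cut away from Woodward "
    "a different kind of reporting is what is going on now not factual"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=3, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
answer = tokenizer.decode(
    output_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
).strip()
print(answer)  # per the sample output above: "no"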