LionGuard

LionGuard is a classifier for detecting unsafe content in the Singapore context. It computes text embeddings with a pre-trained BAAI English embedding model and classifies them with a trained ridge classifier. The classifier detects toxic content, defined as content that is rude, disrespectful, or profane, including the use of slurs.

Usage

  1. Install the transformers, onnxruntime, and huggingface_hub libraries.
pip install transformers onnxruntime huggingface_hub
  2. Run inference (see the sketch below).
python inference.py '["Example text 1"]'
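
The inference.py script itself is not shown on this card. Below is a minimal sketch of what such a script might look like, assuming the classifier is shipped as an ONNX file (here named model.onnx) and that embeddings come from a BAAI English model such as BAAI/bge-large-en-v1.5. The repository ID, file name, and embedding model are assumptions for illustration, and PyTorch is also needed for the embedding step.

```python
# Minimal sketch of a possible inference.py. The repo ID, ONNX file name, and
# embedding model below are assumptions, not confirmed by this model card.
# Requires torch in addition to the libraries installed above.
import json
import sys

import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from transformers import AutoModel, AutoTokenizer

EMBEDDING_MODEL = "BAAI/bge-large-en-v1.5"  # assumed BAAI English embedding model
CLASSIFIER_REPO = "your-org/lionguard"      # placeholder repository ID
CLASSIFIER_FILE = "model.onnx"              # placeholder classifier file name


def embed(texts):
    """Embed texts with the BAAI model (CLS pooling, L2-normalised)."""
    tokenizer = AutoTokenizer.from_pretrained(EMBEDDING_MODEL)
    model = AutoModel.from_pretrained(EMBEDDING_MODEL)
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    outputs = model(**inputs)
    cls = outputs.last_hidden_state[:, 0].detach().numpy()
    return cls / np.linalg.norm(cls, axis=1, keepdims=True)


def classify(texts):
    """Score texts with the ONNX ridge classifier."""
    model_path = hf_hub_download(CLASSIFIER_REPO, CLASSIFIER_FILE)
    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: embed(texts).astype(np.float32)})


if __name__ == "__main__":
    # Usage: python inference.py '["Example text 1"]'
    texts = json.loads(sys.argv[1])
    print(classify(texts))
```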