
# LionGuard

LionGuard is a classifier for detecting unsafe content in the Singapore context. It embeds text with a pre-trained BAAI English embedding model and classifies the embeddings with a trained ridge classifier. The classifier detects content encouraging public harm, defined as content that promotes, facilitates, or encourages harmful public acts, vice, or organised crime.
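
The two-stage design can be sketched as follows. This is a minimal illustration, not the shipped implementation: the exact BAAI checkpoint (`BAAI/bge-large-en-v1.5` is assumed here) and the classifier weights are assumptions, and the snippet fits a throwaway ridge classifier on dummy labels purely so it runs end to end.

```python
# Minimal sketch of the two-stage pipeline, assuming the
# BAAI/bge-large-en-v1.5 checkpoint; the real repo ships its own
# trained classifier weights.
import numpy as np
import torch
from sklearn.linear_model import RidgeClassifier
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-large-en-v1.5")
encoder = AutoModel.from_pretrained("BAAI/bge-large-en-v1.5")

def embed(texts):
    """Stage 1: embed text with the BAAI encoder ([CLS] pooling, L2-normalised)."""
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**inputs)
    cls = outputs.last_hidden_state[:, 0]
    return torch.nn.functional.normalize(cls, dim=-1).numpy()

# Stage 2: a ridge classifier over the embeddings. It is fitted on dummy
# labels here only to make the snippet runnable; predictions under this
# dummy fit are meaningless and shown for shape only.
clf = RidgeClassifier()
clf.fit(np.random.randn(8, 1024), np.array([0, 1] * 4))
print(clf.predict(embed(["Example text 1"])))
```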

## Usage

1. Install the `transformers`, `onnxruntime`, and `huggingface_hub` libraries:

   ```bash
   pip install transformers onnxruntime huggingface_hub
   ```

2. Run inference (a sketch of a programmatic equivalent follows below):

   ```bash
   python inference.py '["Example text 1"]'
   ```
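
If you would rather call the model from your own code than via `inference.py`, the inference step might look roughly like the sketch below, assuming the trained classifier is published as an ONNX file on the Hub. The repo id and filename are placeholders, not the actual artifact names; consult `inference.py` in the repository for the real ones.

```python
# Hypothetical programmatic version of step 2. The repo id and filename
# are placeholders; see inference.py for the actual names.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download

# Download the ONNX classifier from the Hub (placeholder artifact names).
model_path = hf_hub_download(repo_id="your-org/lionguard", filename="model.onnx")
session = ort.InferenceSession(model_path)

# `embeddings` would come from the BAAI encoder (see the sketch above);
# random values stand in here so the expected shape is visible.
embeddings = np.random.randn(1, 1024).astype(np.float32)
input_name = session.get_inputs()[0].name
scores = session.run(None, {input_name: embeddings})
print(scores)
```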