
LionGuard

LionGuard is a classifier for detecting unsafe content in the Singapore context. It uses pre-trained BAAI English embeddings and a trained ridge classifier on top of them. This variant detects content encouraging self-harm, defined as content that promotes or depicts acts of self-harm, such as suicide, cutting, and eating disorders.
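As a rough illustration of this two-stage design (not the exact training code), the embed-then-classify approach could be sketched as follows. The sentence-transformers and scikit-learn usage, the embedding model name, and the toy labels are all assumptions for illustration; the repository ships its own trained artifacts.

```python
# Conceptual sketch of the embed-then-classify design (illustrative only;
# the training texts and labels below are placeholders).
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import RidgeClassifier

# Pre-trained BAAI English embedding model (assumed name for illustration).
embedder = SentenceTransformer("BAAI/bge-large-en-v1.5")

# Toy labelled data: 1 = unsafe (self-harm) content, 0 = safe.
train_texts = ["example of unsafe text", "example of safe text"]
train_labels = [1, 0]

# Stage 1: turn text into dense embeddings.
X_train = embedder.encode(train_texts, normalize_embeddings=True)

# Stage 2: fit a ridge classifier on top of the embeddings.
clf = RidgeClassifier()
clf.fit(X_train, train_labels)

# Scoring new text follows the same two stages.
X_new = embedder.encode(["Example text 1"], normalize_embeddings=True)
print(clf.decision_function(X_new))  # higher score => more likely unsafe
```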

Usage

  1. Install the transformers, onnxruntime and huggingface_hub libraries.
pip install transformers onnxruntime huggingface_hub
  2. Run inference (a rough programmatic sketch of this step follows below).
python inference.py '["Example text 1"]'
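For reference, a programmatic equivalent of the inference step might look roughly like the following. This is a sketch only: the repository id, the ONNX file name, and the embedding model name are placeholders, and it additionally assumes torch is installed for the transformers embedding model. Check inference.py in this repository for the actual file names and input signature.

```python
# Sketch of a programmatic version of inference.py (illustrative only;
# repo id, ONNX file name, and model name are assumptions).
import numpy as np
import onnxruntime as ort
import torch
from huggingface_hub import hf_hub_download
from transformers import AutoModel, AutoTokenizer

texts = ["Example text 1"]

# 1. Embed the input with the pre-trained BAAI English embedding model.
tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-large-en-v1.5")
embedder = AutoModel.from_pretrained("BAAI/bge-large-en-v1.5")
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = embedder(**batch)
# BGE-style models use the [CLS] token embedding, L2-normalised.
embeddings = outputs.last_hidden_state[:, 0]
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1).numpy()

# 2. Score the embeddings with the ridge classifier exported to ONNX.
#    "lionguard/self-harm" and "classifier.onnx" are placeholder names.
classifier_path = hf_hub_download("lionguard/self-harm", "classifier.onnx")
session = ort.InferenceSession(classifier_path)
input_name = session.get_inputs()[0].name
scores = session.run(None, {input_name: embeddings.astype(np.float32)})[0]
print(scores)  # per-text score for the self-harm category
```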