LionGuard
LionGuard is a classifier for detecting unsafe content in the Singapore context. It combines pre-trained BAAI English embeddings with a trained ridge classifier. This classifier detects content encouraging self-harm, defined as content that promotes or depicts acts of self-harm, such as suicide, cutting, and eating disorders.
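The two-stage design described above (embed the text, then apply a linear ridge-style decision rule) can be sketched as follows. The embedding dimensions, weights, and threshold below are hypothetical stand-ins for illustration; the real model uses BAAI embeddings and trained weights hosted on the Hugging Face Hub.

```python
def ridge_score(embedding, weights, bias):
    """Linear decision function of the kind learned by ridge classification."""
    return sum(w * x for w, x in zip(weights, embedding)) + bias

def classify(embedding, weights, bias, threshold=0.0):
    """Flag text as unsafe when the linear score exceeds the threshold."""
    return ridge_score(embedding, weights, bias) > threshold

# Hypothetical 4-dim embedding and weights (real BAAI embeddings are much larger).
emb = [0.2, -0.5, 0.9, 0.1]
w = [1.0, -0.3, 0.8, 0.0]
b = -0.5
print(classify(emb, w, b))  # True: score = 0.2 + 0.15 + 0.72 - 0.5 = 0.57 > 0
```

In practice the embedding vector comes from the BAAI embedding model and the weights from the trained classifier; this sketch only illustrates the decision rule.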
Usage
- Install the `transformers`, `onnxruntime`, and `huggingface_hub` libraries:

  ```shell
  pip install transformers onnxruntime huggingface_hub
  ```
- Run inference:

  ```shell
  python inference.py '["Example text 1"]'
  ```