LionGuard
LionGuard is a classifier for detecting unsafe content in the Singapore context. It uses pre-trained BAAI English embeddings and performs classification with a trained Ridge Classification model. This classifier detects the presence of content encouraging public harm, defined as content that promotes, facilitates, or encourages harmful public acts, vice or organized crime.
Usage
- Install the `transformers`, `onnxruntime` and `huggingface_hub` libraries:
  `pip install transformers onnxruntime huggingface_hub`
- Run inference:
  `python inference.py '["Example text 1"]'`
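To illustrate the second stage of the pipeline described above, the sketch below fits a ridge classifier in closed form on toy vectors standing in for text embeddings. This is only a minimal illustration of the technique: LionGuard's real pipeline first encodes text with a pre-trained BAAI English embedding model, and its trained weights are not reproduced here; the 4-dimensional vectors, labels, and function names below are made up for this example.

```python
import numpy as np

def fit_ridge(X, y, alpha=1.0):
    """Closed-form ridge fit on {-1, +1} labels: w = (X^T X + alpha*I)^-1 X^T y."""
    n_features = X.shape[1]
    A = X.T @ X + alpha * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

def predict(X, w):
    """Label 1 (unsafe) where the linear score is positive, else 0 (safe)."""
    return (X @ w > 0).astype(int)

# Toy "embeddings": first two rows play the unsafe class, last two the safe class.
X = np.array([
    [1.0, 0.9, 0.1, 0.0],
    [0.9, 1.0, 0.0, 0.1],
    [0.0, 0.1, 1.0, 0.9],
    [0.1, 0.0, 0.9, 1.0],
])
y = np.array([1.0, 1.0, -1.0, -1.0])  # {-1, +1} encoding of the two classes

w = fit_ridge(X, y)
print(predict(X, w))  # recovers the training labels: [1 1 0 0]
```

A ridge classifier like this is a common choice on top of frozen embeddings because it trains quickly and the L2 penalty (`alpha`) keeps the weights stable when embedding dimensions are correlated.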