LionGuard

LionGuard is a classifier for detecting unsafe content in the Singapore context. It uses pre-trained BAAI English embeddings and performs classification with a trained ridge classifier. This classifier detects the presence of hateful content, defined as content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste.
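
The sketch below illustrates this two-stage pipeline (embed, then classify). It is illustrative only: the embedding checkpoint (BAAI/bge-large-en-v1.5) and classifier artefact name (ridge_classifier.joblib) are assumptions, not names taken from this repository.

    # Minimal sketch of the embed-then-classify pipeline described above.
    # The checkpoint and classifier file names below are assumptions.
    import joblib
    import torch
    from transformers import AutoModel, AutoTokenizer

    # Stage 1: embed the input text with a pre-trained BAAI English embedding model.
    tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-large-en-v1.5")
    model = AutoModel.from_pretrained("BAAI/bge-large-en-v1.5")

    texts = ["Example text 1"]
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # BGE models use the L2-normalised [CLS] token embedding as the sentence vector.
    embeddings = torch.nn.functional.normalize(outputs.last_hidden_state[:, 0], dim=-1)

    # Stage 2: score the embeddings with the trained ridge classifier.
    classifier = joblib.load("ridge_classifier.joblib")  # hypothetical artefact name
    print(classifier.predict(embeddings.numpy()))  # assumed labels: 1 = hateful, 0 = safe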

Usage

  1. Install the transformers, xgboost and huggingface_hub libraries:

         pip install transformers xgboost huggingface_hub

  2. Run inference:

         python inference.py '["Example text 1"]'
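
If inference.py is not already available locally, it can be fetched from the Hub first. The repo id below is a placeholder; substitute this model's actual id.

    # Download the inference script from the Hub; "govtech/lionguard" is a
    # placeholder repo id, not confirmed by this model card.
    from huggingface_hub import hf_hub_download

    script_path = hf_hub_download(repo_id="govtech/lionguard", filename="inference.py")
    print(script_path)  # pass this path to `python` along with the JSON list of texts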