Content Classifier API
A FastAPI-based content classification service using an ONNX model for threat detection and sentiment analysis.
Features
- Content threat classification
- Sentiment analysis
- RESTful API with automatic documentation
- Health check endpoints
- Model information endpoints
- Docker support for easy deployment
API Endpoints
- POST /predict - Classify text content
- GET / - API status
- GET /health - Health check
- GET /model-info - Model information
- GET /docs - Interactive API documentation (Swagger)
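The status endpoints can also be exercised from Python, for example with the requests library. A minimal sketch, assuming the service is running locally on port 8000 (requests is not listed among this project's stated dependencies):

import requests

BASE_URL = "http://localhost:8000"  # assumed local deployment

# Confirm the service is up
health = requests.get(f"{BASE_URL}/health")
print(health.status_code, health.json())

# Inspect metadata about the loaded ONNX model
info = requests.get(f"{BASE_URL}/model-info")
print(info.json())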
Installation
- Install dependencies:
pip install -r requirements.txt
- Run the application:
python app.py
The API will be available at http://localhost:8000
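Running python app.py implies that app.py starts its own Uvicorn server. A minimal sketch of what such an entry point might look like; the actual code in this repository may differ:

from fastapi import FastAPI
import uvicorn

app = FastAPI()  # in the real app.py, the /predict, /health, and /model-info routes are registered on this object

if __name__ == "__main__":
    # Serve on port 8000 to match the URL above
    uvicorn.run(app, host="0.0.0.0", port=8000)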
Usage
Example Request
curl -X POST "http://localhost:8000/predict" \
-H "Content-Type: application/json" \
-d '{"text": "This is a sample text to classify"}'
Example Response
{
"is_threat": false,
"final_confidence": 0.75,
"threat_prediction": 0.25,
"sentiment_analysis": {
"label": "POSITIVE",
"score": 0.5
},
"onnx_prediction": {
"threat_probability": 0.25,
"raw_output": [[0.75, 0.25]]
},
"models_used": ["contextClassifier.onnx"],
"raw_predictions": {
"onnx": {
"threat_probability": 0.25,
"raw_output": [[0.75, 0.25]]
},
"sentiment": {
"label": "POSITIVE",
"score": 0.5
}
}
}
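The same request can be issued from Python and the key fields read directly off the response. A minimal sketch, assuming the service is reachable at http://localhost:8000 and using the requests library:

import requests

resp = requests.post(
    "http://localhost:8000/predict",
    json={"text": "This is a sample text to classify"},
)
resp.raise_for_status()
result = resp.json()

# is_threat carries the combined decision; final_confidence and the per-model
# scores (onnx_prediction, sentiment_analysis) show how it was reached
verdict = "Threat detected" if result["is_threat"] else "No threat"
print(f"{verdict} (confidence {result['final_confidence']:.2f})")
print("Sentiment:", result["sentiment_analysis"]["label"])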
Docker Deployment
- Build the Docker image:
docker build -t content-classifier .
- Run the container:
docker run -p 8000:8000 content-classifier
Hugging Face Spaces Deployment
To deploy on Hugging Face Spaces:
- Create a new Space on Hugging Face
- Upload all files to your Space repository
- The Space will automatically build and deploy
Model Requirements
The ONNX model should accept text inputs and return classification predictions. You may need to adjust the preprocessing and postprocessing functions in app.py based on your specific model requirements.
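As an illustration, preprocessing and postprocessing for a text-classification ONNX model often look roughly like the sketch below. The function names, the tokenizer argument, and the assumed (1, 2) output shape are examples only, not the exact code in app.py:

import numpy as np

def preprocess(text: str, tokenizer, max_length: int = 128) -> dict:
    # Turn raw text into the numpy feed dict an ONNX Runtime session expects;
    # tokenizer is assumed to behave like a Hugging Face tokenizer
    encoded = tokenizer(
        text,
        padding="max_length",
        truncation=True,
        max_length=max_length,
        return_tensors="np",
    )
    return dict(encoded)

def postprocess(raw_output: np.ndarray) -> dict:
    # Assumes a (1, 2) array of [benign, threat] scores,
    # matching the raw_output shown in the example response
    probs = raw_output[0]
    return {
        "threat_probability": float(probs[1]),
        "raw_output": raw_output.tolist(),
    }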
Configuration
You can modify the following in app.py:
- MODEL_PATH: Path to your ONNX model file
- max_length: Maximum text length for processing
- Preprocessing and postprocessing logic based on your model's requirements
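These settings typically appear as module-level constants near the top of app.py. A sketch of the assumed layout; the names come from this section and the model filename from the example response, while the max_length value is a placeholder:

# Configuration constants in app.py (illustrative values)
MODEL_PATH = "contextClassifier.onnx"  # path to your ONNX model file
max_length = 512                       # maximum text length for processing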