Content Classifier API

A FastAPI-based content classification service using an ONNX model for threat detection and sentiment analysis.

Features

  • Content threat classification
  • Sentiment analysis
  • RESTful API with automatic documentation
  • Health check endpoints
  • Model information endpoints
  • Docker support for easy deployment

API Endpoints

  • POST /predict - Classify text content
  • GET / - API status
  • GET /health - Health check
  • GET /model-info - Model information
  • GET /docs - Interactive API documentation (Swagger)

Installation

  1. Install dependencies:

     pip install -r requirements.txt

  2. Run the application:

     python app.py

The API will be available at http://localhost:8000

Usage

Example Request

curl -X POST "http://localhost:8000/predict" \
     -H "Content-Type: application/json" \
     -d '{"text": "This is a sample text to classify"}'
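The same request can be made from Python using only the standard library (the helper names below are illustrative):

```python
import json
import urllib.request

def build_request(text, url="http://localhost:8000/predict"):
    # Assemble a POST request with the JSON body the /predict endpoint expects
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def classify(text):
    # Send the request and decode the JSON response
    with urllib.request.urlopen(build_request(text)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

With the server running, `classify("This is a sample text to classify")` returns the response dictionary shown below.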

Example Response

{
    "is_threat": false,
    "final_confidence": 0.75,
    "threat_prediction": 0.25,
    "sentiment_analysis": {
        "label": "POSITIVE",
        "score": 0.5
    },
    "onnx_prediction": {
        "threat_probability": 0.25,
        "raw_output": [[0.75, 0.25]]
    },
    "models_used": ["contextClassifier.onnx"],
    "raw_predictions": {
        "onnx": {
            "threat_probability": 0.25,
            "raw_output": [[0.75, 0.25]]
        },
        "sentiment": {
            "label": "POSITIVE",
            "score": 0.5
        }
    }
}
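The response fields can be consumed directly with the standard library. Parsing the example response above:

```python
import json

# The example response shown above, verbatim
response_text = """
{
    "is_threat": false,
    "final_confidence": 0.75,
    "threat_prediction": 0.25,
    "sentiment_analysis": {
        "label": "POSITIVE",
        "score": 0.5
    },
    "onnx_prediction": {
        "threat_probability": 0.25,
        "raw_output": [[0.75, 0.25]]
    },
    "models_used": ["contextClassifier.onnx"],
    "raw_predictions": {
        "onnx": {
            "threat_probability": 0.25,
            "raw_output": [[0.75, 0.25]]
        },
        "sentiment": {
            "label": "POSITIVE",
            "score": 0.5
        }
    }
}
"""

result = json.loads(response_text)
verdict = "threat" if result["is_threat"] else "benign"
summary = f"{verdict} (confidence {result['final_confidence']})"
```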

Docker Deployment

  1. Build the Docker image:

     docker build -t content-classifier .

  2. Run the container:

     docker run -p 8000:8000 content-classifier

Hugging Face Spaces Deployment

To deploy on Hugging Face Spaces:

  1. Create a new Space on Hugging Face
  2. Upload all files to your Space repository
  3. The Space will automatically build and deploy

Model Requirements

The ONNX model should accept text inputs and return classification predictions. You may need to adjust the preprocessing and postprocessing functions in app.py based on your specific model requirements.
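As an illustration, a postprocessing function for a two-class model whose raw output matches the example response above (a `[[benign, threat]]` pair, with a softmax fallback in case the model emits logits rather than probabilities) might look like this; adapt it to your model's actual output shape:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a flat list of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def postprocess(raw_output, threshold=0.5):
    # raw_output is assumed to be [[p_benign, p_threat]] as in the
    # example response; if the values do not sum to 1, treat them
    # as logits and normalize with softmax first.
    probs = raw_output[0]
    if not math.isclose(sum(probs), 1.0, abs_tol=1e-3):
        probs = softmax(probs)
    threat_probability = probs[1]
    return {
        "threat_probability": threat_probability,
        "is_threat": threat_probability >= threshold,
    }
```

With the example output `[[0.75, 0.25]]`, this yields a threat probability of 0.25 and `is_threat` False, matching the response shown earlier.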

Configuration

You can modify the following in app.py:

  • MODEL_PATH: Path to your ONNX model file
  • max_length: Maximum text length for processing
  • Preprocessing and postprocessing logic based on your model's requirements
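Near the top of app.py, the first two settings might look like this (values are illustrative; the model file name comes from the example response):

```python
MODEL_PATH = "contextClassifier.onnx"  # path to your ONNX model file
max_length = 512  # illustrative: maximum text length for processing
```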