---
title: Content Classifier
emoji: 🔍
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
license: mit
app_port: 7860
---

# Content Classifier API

A FastAPI-based content classification service using an ONNX model for threat detection and sentiment analysis.

## Features

- Content threat classification
- Sentiment analysis
- RESTful API with automatic documentation
- Health check endpoints
- Model information endpoints
- Docker support for easy deployment

## API Endpoints

- `POST /predict` - Classify text content
- `GET /` - API status
- `GET /health` - Health check
- `GET /model-info` - Model information
- `GET /docs` - Interactive API documentation (Swagger)

## Installation

1. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

2. Run the application:

   ```bash
   python app.py
   ```

The API will be available at `http://localhost:8000`.

## Usage

### Example Request

```bash
curl -X POST "http://localhost:8000/predict" \
     -H "Content-Type: application/json" \
     -d '{"text": "This is a sample text to classify"}'
```
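
The same request can be made from Python using only the standard library; the endpoint URL below assumes the local setup described above:

```python
import json
import urllib.request

API_URL = "http://localhost:8000/predict"  # adjust to your deployment

def build_payload(text):
    """Encode the request body exactly as the curl example does."""
    return json.dumps({"text": text}).encode("utf-8")

def classify(text, url=API_URL):
    """POST the text to /predict and return the parsed JSON response."""
    req = urllib.request.Request(
        url,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```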

### Example Response

```json
{
    "is_threat": false,
    "final_confidence": 0.75,
    "threat_prediction": 0.25,
    "sentiment_analysis": {
        "label": "POSITIVE",
        "score": 0.5
    },
    "onnx_prediction": {
        "threat_probability": 0.25,
        "raw_output": [[0.75, 0.25]]
    },
    "models_used": ["contextClassifier.onnx"],
    "raw_predictions": {
        "onnx": {
            "threat_probability": 0.25,
            "raw_output": [[0.75, 0.25]]
        },
        "sentiment": {
            "label": "POSITIVE",
            "score": 0.5
        }
    }
}
```
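
A minimal sketch of consuming this response in Python, using the headline fields from the example above:

```python
def summarize(response):
    """Pull the headline fields out of a /predict response dict."""
    return {
        "is_threat": response["is_threat"],
        "confidence": response["final_confidence"],
        "sentiment": response["sentiment_analysis"]["label"],
    }

# Fields taken from the example response above.
example = {
    "is_threat": False,
    "final_confidence": 0.75,
    "sentiment_analysis": {"label": "POSITIVE", "score": 0.5},
}
print(summarize(example))
# {'is_threat': False, 'confidence': 0.75, 'sentiment': 'POSITIVE'}
```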

## Docker Deployment

1. Build the Docker image:

   ```bash
   docker build -t content-classifier .
   ```

2. Run the container:

   ```bash
   docker run -p 8000:8000 content-classifier
   ```

## Hugging Face Spaces Deployment

To deploy on Hugging Face Spaces:

  1. Create a new Space on Hugging Face
  2. Upload all files to your Space repository
  3. The Space will automatically build and deploy

## Model Requirements

The ONNX model should accept text inputs and return classification predictions. You may need to adjust the preprocessing and postprocessing functions in app.py based on your specific model requirements.
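
As a concrete illustration, a postprocessing step that maps the model's raw output to the response fields might look like the following. The class order (index 1 = threat) is an assumption inferred from the example response above, not something guaranteed by every model:

```python
def postprocess(raw_output, threshold=0.5):
    """Map a raw [[p_benign, p_threat]] model output to API response fields.

    Assumes index 1 is the threat class, matching the example response
    where raw_output [[0.75, 0.25]] yields threat_probability 0.25.
    """
    threat_probability = float(raw_output[0][1])
    return {
        "is_threat": threat_probability >= threshold,
        "threat_probability": threat_probability,
        "raw_output": raw_output,
    }
```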

## Configuration

You can modify the following in `app.py`:

- `MODEL_PATH`: Path to your ONNX model file
- `max_length`: Maximum text length for processing
- Preprocessing and postprocessing logic based on your model's requirements
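
For example, `max_length` might be applied as a simple length cap before inference; the default value shown here is illustrative, not the one shipped in `app.py`:

```python
MODEL_PATH = "contextClassifier.onnx"  # path to your ONNX model file
max_length = 512                       # illustrative default

def truncate(text: str, limit: int = max_length) -> str:
    """Trim input text to the configured maximum length before inference."""
    return text[:limit]
```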