---
title: FeatureLab
emoji: 🐠
colorFrom: gray
colorTo: pink
sdk: gradio
sdk_version: 5.49.1
app_file: app.py
pinned: false
license: mit
short_description: Minimal feature-detection with Classical and Deep Learning
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
# FeatureLab Mini – Classic & DL Detectors
FeatureLab now exposes a production-friendly layout: FastAPI serves the detector runtime over HTTP/WebSocket while Gradio rides on top for internal demos.
## Runtime Overview
```
FastAPI (/v1/detect/*) <-- shared numpy/CV runtime --> Gradio UI (/)
```
- **Classical path**: Canny, Harris, Probabilistic Hough, Line Segment Detector (LSD), contour-based ellipse fitting.
- **Deep path**: ONNX models (HED, SuperPoint, SOLD2, etc.) auto-loaded from `./models`.
- **Responses**: base64 PNG overlays, rich feature metadata, timings, model info.
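A rough sketch of how `app.py` can wire the two layers together; the detector function and endpoint body below are illustrative assumptions, not the project's actual identifiers:
```python
# Sketch only: run_edges and the endpoint body are placeholders for the
# shared runtime, not FeatureLab's real module layout.
import cv2
import gradio as gr
import numpy as np
from fastapi import FastAPI

app = FastAPI(title="FeatureLab")

def run_edges(image: np.ndarray) -> np.ndarray:
    """Shared numpy/CV runtime used by both the API and the demo."""
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    return cv2.Canny(gray, 50, 150)

@app.post("/v1/detect/edges")
async def detect_edges(payload: dict) -> dict:
    # The real handler decodes payload["image"] (base64), runs the shared
    # runtime, and returns overlays/timings as documented below; elided here.
    return {}

# Gradio rides on top of the same FastAPI app for internal demos.
demo = gr.Interface(fn=run_edges, inputs=gr.Image(), outputs=gr.Image())
app = gr.mount_gradio_app(app, demo, path="/")
```
Running `python app.py` (or `uvicorn app:app`) then serves both the JSON API and the demo UI from one process.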
## Run locally
```bash
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
python app.py # FastAPI + Gradio on http://localhost:7860
```
## HTTP API
- `POST /v1/detect/edges|corners|lines|ellipses` (a Python client sketch follows this list)
- Body:
```json
{
  "image": "<base64 png/jpeg>",
  "params": {
    "canny_low": 50,
    "canny_high": 150,
    "line_detector": "lsd",
    "...": "..."
  },
  "mode": "classical|dl|both",
  "compare": false,
  "dl_model": "hed.onnx"
}
```
- Response:
```json
{
  "overlay": "<png base64>",
  "overlays": { "classical": "...", "dl": "..." },
  "features": { "classical": {...}, "dl": {...} },
  "timings": { "classical": 7.2, "dl": 18.5, "total": 25.7 },
  "fps_estimate": 38.9,
  "model": { "name": "opencv-classical", "version": "4.10.0" },
  "models": { "classical": {...}, "dl": {...} }
}
```
- Classical line detector toggle: set `params.line_detector` to `"lsd"` to run OpenCV's Line Segment Detector instead of Probabilistic Hough.
- Multipart uploads: `POST /v1/detect/<detector>/upload` with `file`, optional `params` (JSON string), `mode`, `compare`, `dl_model`.
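For illustration, a minimal Python client for the JSON and multipart endpoints above (the local URL assumes `python app.py` is running; `sample.png` stands in for any test image):
```python
import base64
import json

import requests  # pip install requests

BASE = "http://localhost:7860/v1/detect"

# JSON body variant, matching the request schema above.
with open("sample.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(
    f"{BASE}/edges",
    json={
        "image": image_b64,
        "params": {"canny_low": 50, "canny_high": 150},
        "mode": "classical",
        "compare": False,
    },
    timeout=30,
)
resp.raise_for_status()
result = resp.json()

# The overlay comes back as base64 PNG; write it to disk.
with open("overlay.png", "wb") as f:
    f.write(base64.b64decode(result["overlay"]))
print(result["timings"])

# Multipart upload variant ("params" travels as a JSON string).
with open("sample.png", "rb") as f:
    upload = requests.post(
        f"{BASE}/edges/upload",
        files={"file": f},
        data={"mode": "classical", "params": json.dumps({"canny_low": 50})},
        timeout=30,
    )
print(upload.json()["timings"])
```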
## WebSocket API
- Connect to `/v1/detect/stream`.
- Send JSON payloads with the same shape as HTTP.
- Receive the detection response for each frame, which makes it suitable for webcam or other live sources; see the client sketch below.
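A small asyncio client sketch; it assumes the third-party `websockets` package, but any WebSocket client works:
```python
import asyncio
import base64
import json

import websockets  # pip install websockets

async def stream_once() -> None:
    uri = "ws://localhost:7860/v1/detect/stream"
    with open("sample.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    payload = {"image": image_b64, "params": {}, "mode": "classical"}

    async with websockets.connect(uri) as ws:
        # In a real setup this loop would pull successive webcam frames.
        for _ in range(3):
            await ws.send(json.dumps(payload))
            response = json.loads(await ws.recv())
            print(response["timings"], response.get("fps_estimate"))

asyncio.run(stream_once())
```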
## Gradio Demo
- Still bundled for quick experiments (webcam capture, parameter sliders).
- Fully decoupled: the UI calls the same runtime, so React/Tauri front-ends can swap in later without touching detector code.
## Deploying
- Hugging Face Spaces (Gradio) still works; FastAPI runs inside the Space process.
- For container/desktop targets, run `uvicorn app:app` or embed the FastAPI router into your existing service (see the sketch below).
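For the embed-it-yourself route, a sketch is shown below; the import path `featurelab.api` and the router name are assumptions:
```python
# Hypothetical: the actual module and router names may differ.
from fastapi import FastAPI

from featurelab.api import detect_router  # assumed import path

service = FastAPI(title="my-existing-service")
service.include_router(detect_router, prefix="/v1")

# Serve with: uvicorn my_service:service --host 0.0.0.0 --port 8000
```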
GPU/Core ML acceleration (ONNX) is optional; drop models into `./models` to enable the DL paths. Future upgrades toward Core ML / PyTorch backends can reuse the same API surface.
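The auto-load convention can be pictured as a simple glob over `./models` (a sketch with onnxruntime; the real loader may differ):
```python
# Sketch: every *.onnx file under ./models becomes selectable as "dl_model"
# by its filename. The discovery logic here is an assumption.
from pathlib import Path

import onnxruntime as ort

sessions = {
    path.name: ort.InferenceSession(str(path), providers=["CPUExecutionProvider"])
    for path in Path("models").glob("*.onnx")
}
print(list(sessions))  # e.g. ["hed.onnx", "superpoint.onnx"]
```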