---
title: Ai Fast Image Server
emoji: π
colorFrom: gray
colorTo: blue
sdk: gradio
sdk_version: 5.47.2
app_file: app.py
license: mit
pinned: false
suggested_hardware: a10g-small
duplicated_from: hysts/SD-XL
load_balancing_strategy: random
---
# AI Fast Image Server

A lightweight Gradio app that serves fast **text-to-image** generation using either:

- **SDXL Base 1.0 + LCM** (default), or
- **SSD-1B + LCM LoRA** (enable via a flag in `app.py`)

The app targets **very few inference steps** (e.g., 4) for speed while keeping good image quality. It falls back to **CPU** automatically if CUDA isn't available.
---
## Features

- ⚡ **Fast sampling** with **LCM** schedulers
- 🔁 **Deterministic** results via seed
- 🖥️ **Auto GPU/CPU** selection (no brittle `nvidia-smi` checks)
- 🔒 Optional **secret token** gate to prevent abuse
- 🧩 Switch between **SDXL** and **SSD-1B + LCM LoRA** with a flag
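The token gate can be as simple as a constant-time comparison against a secret from the environment. A minimal sketch, assuming a hypothetical `APP_TOKEN` variable and `is_authorized` helper (not the actual names in `app.py`):

```python
import hmac
import os
from typing import Optional


def is_authorized(token: str, expected: Optional[str] = None) -> bool:
    """Gate requests on a shared secret; empty/unset secrets always deny."""
    if expected is None:
        expected = os.environ.get("APP_TOKEN", "")  # hypothetical env var
    # hmac.compare_digest avoids timing side channels in the comparison.
    return bool(expected) and hmac.compare_digest(token, expected)
```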
---
## Requirements

Dependencies are pinned for compatibility (notably `diffusers==0.23.0` + `huggingface_hub==0.14.1`):

```txt
accelerate==0.24.1
diffusers==0.23.0
gradio==3.39.0
huggingface_hub==0.14.1
invisible-watermark==0.2.0
Pillow==10.1.0
torch==2.1.0
transformers==4.35.0
safetensors==0.4.0
numpy>=1.23
| ipython |