---
title: RUSH AGENTS RUSH Backend
emoji: 🔥
colorFrom: red
colorTo: yellow
sdk: docker
sdk_version: latest
python_version: '3.11'
pinned: false
---

# Rush Agents Rush Backend

FastAPI server driving the fire-suppression simulation.

## What It Does

- Accepts model selections and starts a new simulation.
- Places a fire on the map and generates water wells.
- Runs the tick-based AI loop with coalition voting, movement, and extinguishing.
- Streams state updates and events over WebSockets.
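The tick loop described above can be sketched roughly as follows. This is a toy illustration only: `Agent`, `coalition_vote`, and `run_tick` are hypothetical names, not the actual `app/simulation.py` API, and the real tuning lives in that module.

```python
# Toy sketch of one simulation tick: agents vote on a target, step toward
# it, and extinguish the fire when standing on it. Names are illustrative.
from dataclasses import dataclass
from collections import Counter


@dataclass
class Agent:
    name: str
    x: int
    y: int


def coalition_vote(agents, options):
    """Each agent votes for its nearest option cell; the majority wins."""
    votes = Counter(
        min(options, key=lambda c: abs(c[0] - a.x) + abs(c[1] - a.y))
        for a in agents
    )
    return votes.most_common(1)[0][0]


def run_tick(agents, fire, intensity, extinguish_rate=1.0):
    """Advance one tick; return the fire intensity after extinguishing."""
    target = coalition_vote(agents, [fire])
    for a in agents:
        a.x += (target[0] > a.x) - (target[0] < a.x)  # step one cell toward target
        a.y += (target[1] > a.y) - (target[1] < a.y)
    at_fire = sum(1 for a in agents if (a.x, a.y) == fire)
    return max(0.0, intensity - extinguish_rate * at_fire)
```

The real loop also grows the fire each tick and routes agents through water wells; see `app/simulation.py` for the actual behavior.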

## Key Endpoints

- `GET /wake` - health and readiness check
- `GET /available-models` - list available models for the UI
- `POST /start-simulation` - create a new simulation
- `POST /place-fire` - place the fire and spawn water sources
- `WS /ws/{simulation_id}` - stream live simulation ticks
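A typical client walks these endpoints in order: wake, start, place fire, then connect to the WebSocket. The sketch below is illustrative; the JSON payload fields (`models`, `simulation_id`, `x`, `y`) and the response shape are assumptions, so check the route handlers for the real request schemas.

```python
def ws_url(base: str, simulation_id: str) -> str:
    """Derive the /ws/{simulation_id} WebSocket URL from an HTTP base URL."""
    scheme = "wss" if base.startswith("https") else "ws"
    return f"{scheme}://{base.split('://', 1)[1]}/ws/{simulation_id}"


def start_run(base: str, models: list[str]) -> str:
    """Health-check, create a simulation, place a fire, return the WS URL."""
    import requests  # third-party; imported lazily so ws_url stands alone

    assert requests.get(f"{base}/wake", timeout=5).ok
    sim = requests.post(f"{base}/start-simulation", json={"models": models}).json()
    # "simulation_id" as a response field is a guess; inspect the real response.
    requests.post(
        f"{base}/place-fire",
        json={"simulation_id": sim["simulation_id"], "x": 10, "y": 10},
    )
    return ws_url(base, sim["simulation_id"])
```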

## Environment Variables

- `HUGGINGFACE_API_TOKEN` or `HF_API_TOKEN`: required for Hugging Face router model calls.
- `ALLOWED_ORIGINS`: CORS whitelist.

## Local Run

```bash
cd backend
pip install -r requirements.txt
python -m uvicorn app.main:app --reload --port 8000
```

## Notes

- Simulation state is held in memory, so it does not survive a restart.
- Fire growth, extinguish rate, and movement are tuned in `app/simulation.py`.
- Model decisions are generated in `app/groq_client.py` via `https://router.huggingface.co/v1/chat/completions`.
- `/available-models` is backed by `app/hf_spaces.py`, which filters a preferred model list against the live Hugging Face router catalog.
- This `backend/` app is the local development copy; the Hugging Face Space runtime uses the root `app/` package.
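The router call mentioned in the notes can be assembled as below. `build_request` is an illustrative helper, not the actual `app/groq_client.py` code; the body follows the OpenAI-compatible chat-completions format that the `/v1/chat/completions` path implies.

```python
# Hedged sketch: assemble (but do not send) a chat-completions request
# for the Hugging Face router. Field names follow the OpenAI-style schema.
ROUTER_URL = "https://router.huggingface.co/v1/chat/completions"


def build_request(model: str, prompt: str, token: str):
    """Return (headers, body) for a router chat-completions POST."""
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, body
```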