---
title: RADHA
emoji: π
colorFrom: purple
colorTo: pink
sdk: docker
pinned: false
---
# R.A.D.H.A - Responsive And Deeply Human Assistant
An intelligent AI assistant built with FastAPI, LangChain, Groq AI, and a modern glass-morphism web UI. RADHA provides two chat modes (General and Realtime with web search), streaming responses, text-to-speech, voice input, and learns from your personal data files. Everything runs on one server with one command.
## Table of Contents
- Quick Start
- Features
- How It Works (Full Workflow)
- Architecture
- Project Structure
- API Endpoints
- Configuration
- Technologies Used
- Frontend Guide
- Troubleshooting
- Developer
## Quick Start

### Prerequisites

- Python 3.10+ with pip
- OS: Windows, macOS, or Linux
- API keys (set in a `.env` file):
  - `GROQ_API_KEY` (required) - Get from https://console.groq.com. You can use multiple Groq API keys (`GROQ_API_KEY_2`, `GROQ_API_KEY_3`, ...) for automatic fallback when one hits rate limits or fails.
  - `TAVILY_API_KEY` (optional, for Realtime mode) - Get from https://tavily.com
### Installation

1. Clone or download this repository.

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Create a `.env` file in the project root:

   ```env
   GROQ_API_KEY=your_groq_api_key_here
   # Optional: multiple keys for fallback when one hits rate limit
   # GROQ_API_KEY_2=second_key
   # GROQ_API_KEY_3=third_key
   TAVILY_API_KEY=your_tavily_api_key_here
   # Optional
   GROQ_MODEL=llama-3.3-70b-versatile
   ASSISTANT_NAME=Radha
   RADHA_USER_TITLE=Sir
   TTS_VOICE=en-IN-NeerjaNeural
   TTS_RATE=+22%
   ```

4. Start the server:

   ```bash
   python run.py
   ```

5. Open in browser: http://localhost:8000
That's it. The server hosts both the API and the frontend on port 8000.
## Features

### Chat Modes
- General Mode: Pure LLM responses using Groq AI. Uses your learning data and conversation history as context. No internet access.
- Realtime Mode: Searches the web via Tavily before answering. Smart query extraction converts messy conversational text into focused search queries. Uses advanced search depth with AI-synthesized answers.
### Text-to-Speech (TTS)

- Server-side TTS using `edge-tts` (Microsoft Edge's free cloud TTS, no API key needed).
- Audio is generated on the server and streamed inline with text chunks via SSE.
- Sentences are detected in real time as text streams in, converted to speech in background threads (`ThreadPoolExecutor`), and sent to the client as base64 MP3.
- The client plays audio segments sequentially in a queue, so speech starts as soon as the first sentence is ready, not after the full response.
- Works on all devices including iOS (uses a persistent `<audio>` element with AudioContext unlock).
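For reference, here is a minimal sketch of that pattern using only the `edge-tts` package; the function names and buffer handling are illustrative, not RADHA's actual code:

```python
import asyncio
import base64
import io
from concurrent.futures import ThreadPoolExecutor

import edge_tts


async def synth(text: str, voice: str = "en-IN-NeerjaNeural", rate: str = "+22%") -> bytes:
    # edge-tts streams MP3 frames; collect the audio chunks into one buffer.
    buf = io.BytesIO()
    async for chunk in edge_tts.Communicate(text, voice, rate=rate).stream():
        if chunk["type"] == "audio":
            buf.write(chunk["data"])
    return buf.getvalue()


def synth_b64(sentence: str) -> str:
    # Each worker thread runs its own event loop and returns base64 MP3,
    # ready to embed in an SSE "audio" event.
    return base64.b64encode(asyncio.run(synth(sentence))).decode("ascii")


tts_pool = ThreadPoolExecutor(max_workers=4)
future = tts_pool.submit(synth_b64, "Hello, how can I help?")
print(future.result()[:40], "...")
```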
### Voice Input
- Browser-native speech recognition (Web Speech API).
- Speak your question, and it auto-sends when you finish.
### Learning System

- Put `.txt` files in `database/learning_data/` with any personal information, preferences, or context.
- Past conversations are saved as JSON in `database/chats_data/`.
- At startup, all learning data and past chats are chunked, embedded with HuggingFace sentence-transformers, and stored in a FAISS vector index.
- For each question, only the most relevant chunks are retrieved (semantic search) and sent to the LLM. This keeps token usage bounded no matter how much data you add.
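A minimal sketch of this startup indexing flow, assuming LangChain's FAISS and HuggingFace integrations; the chunk sizes here are illustrative, not necessarily the values RADHA uses:

```python
from pathlib import Path

from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Local embedding model: no API key, runs on CPU.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)

chunks: list[str] = []
for path in Path("database/learning_data").glob("*.txt"):
    chunks.extend(splitter.split_text(path.read_text(encoding="utf-8")))

index = FAISS.from_texts(chunks, embeddings)
index.save_local("database/vector_store")
```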
### Session Persistence
- Conversations are saved to disk after each message and survive server restarts.
- General and Realtime modes share the same session, so context carries over between modes.
### Multi-Key API Fallback

- Configure multiple Groq API keys (`GROQ_API_KEY`, `GROQ_API_KEY_2`, `GROQ_API_KEY_3`, ...).
- Primary-first: every request tries the first key. If it fails (rate limit, timeout), the next key is tried automatically.
- Each key gets one retry for transient failures before falling back.
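In code, primary-first fallback is roughly the following sketch (the helper names are illustrative):

```python
import os


def load_groq_keys() -> list[str]:
    # GROQ_API_KEY first, then GROQ_API_KEY_2, _3, ... until a gap.
    keys = [os.environ["GROQ_API_KEY"]]
    i = 2
    while os.getenv(f"GROQ_API_KEY_{i}"):
        keys.append(os.environ[f"GROQ_API_KEY_{i}"])
        i += 1
    return keys


def call_with_fallback(make_request):
    last_err: Exception | None = None
    for key in load_groq_keys():
        for _ in range(2):  # one retry per key for transient failures
            try:
                return make_request(key)
            except Exception as err:  # rate limit, timeout, transient error
                last_err = err
    raise last_err
```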
### Frontend
- Dark glass-morphism UI with animated WebGL orb in the background.
- The orb animates when the AI is speaking (TTS playing) and stays subtle when idle.
- Responsive: works on desktop, tablets, and mobile (including iOS safe area handling).
- No build tools, no frameworks: vanilla HTML/CSS/JS.
## How It Works (Full Workflow)
This section explains the complete journey of a user's message from the moment they press Send to the moment they hear the AI speak.
### Step 1: User Sends a Message

The user types a question (or speaks it via voice input) and presses Send. The frontend (`script.js`) does the following:
- Captures the text from the textarea.
- Adds the user's message bubble to the chat UI.
- Shows a typing indicator (three bouncing dots).
- If TTS is enabled, unlocks the audio context (required on iOS for programmatic playback).
- Sends a `POST` request to the backend with `{ message, session_id, tts }`.

The endpoint depends on the mode:

- General: `POST /chat/stream`
- Realtime: `POST /chat/realtime/stream`
### Step 2: Backend Receives the Request (app/main.py)

FastAPI validates the request body using the `ChatRequest` Pydantic model (checks message length 1-32,000 chars). The endpoint handler:
- Gets or creates a session via `ChatService.get_or_create_session()`.
- Calls `ChatService.process_message_stream()` (general) or `process_realtime_message_stream()` (realtime), which returns a chunk iterator.
- Wraps the iterator in `_stream_generator()` and returns a `StreamingResponse` with `media_type="text/event-stream"`.
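The endpoint shape is roughly the following sketch; a stand-in chunk iterator replaces the real `ChatService` call:

```python
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel, Field

app = FastAPI()


class ChatRequest(BaseModel):
    message: str = Field(min_length=1, max_length=32_000)
    session_id: str | None = None
    tts: bool = False


@app.post("/chat/stream")
def chat_stream(req: ChatRequest):
    def event_stream():
        # Stand-in for the ChatService chunk iterator.
        for token in ("Hello", ", how", " can I help?"):
            yield f'data: {json.dumps({"chunk": token, "done": False})}\n\n'
        yield f'data: {json.dumps({"chunk": "", "done": True})}\n\n'

    return StreamingResponse(event_stream(), media_type="text/event-stream")
```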
### Step 3: Session Management (app/services/chat_service.py)

`ChatService` manages all conversation state:
- If no `session_id` is provided, generates a new UUID.
- If a `session_id` is provided, checks in-memory first, then tries loading from disk (`database/chats_data/chat_{id}.json`).
- Validates the session ID (no path traversal, max 255 chars).
- Adds the user's message to the session's message list.
- Formats conversation history into `(user, assistant)` pairs, capped at `MAX_CHAT_HISTORY_TURNS` (default 20) to keep the prompt within token limits.
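A condensed sketch of that lookup order (the exact validation rule shown is an assumption; RADHA's real checks live in `chat_service.py`):

```python
import json
import uuid
from pathlib import Path

CHATS_DIR = Path("database/chats_data")
_sessions: dict[str, list[dict]] = {}  # in-memory cache


def get_or_create_session(session_id: str | None) -> str:
    if not session_id:
        session_id = str(uuid.uuid4())
        _sessions[session_id] = []
        return session_id
    # Reject traversal attempts and oversized IDs before touching disk.
    if len(session_id) > 255 or any(s in session_id for s in ("..", "/", "\\")):
        raise ValueError("invalid session id")
    if session_id not in _sessions:
        path = CHATS_DIR / f"chat_{session_id}.json"
        _sessions[session_id] = json.loads(path.read_text()) if path.exists() else []
    return session_id
```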
### Step 4: Context Retrieval (app/services/vector_store.py)
Before generating a response, the system retrieves relevant context:
- The user's question is embedded into a vector using the HuggingFace sentence-transformers model (runs locally, no API key needed).
- FAISS performs a nearest-neighbor search against the vector store (which contains chunks from learning data `.txt` files and past conversations).
- The top 10 most similar chunks are returned.
- These chunks are escaped (curly braces doubled for LangChain) and added to the system message.
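Continuing the indexing sketch above, retrieval plus the brace-escaping step looks roughly like this:

```python
from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
index = FAISS.load_local(
    "database/vector_store", embeddings, allow_dangerous_deserialization=True
)


def retrieve_context(question: str, k: int = 10) -> str:
    docs = index.similarity_search(question, k=k)
    joined = "\n\n".join(doc.page_content for doc in docs)
    # Double the braces so LangChain's prompt templates treat them as literals.
    return joined.replace("{", "{{").replace("}", "}}")
```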
### Step 5a: General Mode (app/services/groq_service.py)

For general chat:

- `_build_prompt_and_messages()` assembles the system message:
  - Base personality prompt (from `config.py`)
  - Current date and time
  - Retrieved context chunks from the vector store
  - General mode addendum ("answer from your knowledge, no web search")
- The prompt is sent to Groq AI via LangChain's `ChatGroq` with streaming enabled.
- Tokens arrive one by one and are yielded as an iterator.
- If the first API key fails (rate limit, timeout), the system automatically tries the next key.
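A minimal sketch of this call path with `langchain-groq` (the system prompt here is a placeholder, not RADHA's real personality prompt):

```python
import os

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_groq import ChatGroq

llm = ChatGroq(
    model=os.getenv("GROQ_MODEL", "llama-3.3-70b-versatile"),
    api_key=os.environ["GROQ_API_KEY"],
)

messages = [
    SystemMessage(content="You are Radha. <date/time + retrieved context go here>"),
    HumanMessage(content="What is Python?"),
]

# Tokens arrive one by one; each chunk carries a piece of the answer.
for chunk in llm.stream(messages):
    print(chunk.content, end="", flush=True)
```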
### Step 5b: Realtime Mode (app/services/realtime_service.py)
For realtime chat, three additional steps happen before calling Groq:
1. Query Extraction: A fast LLM call (with `max_tokens=50`, `temperature=0`) converts the user's raw conversational text into a clean search query. Example: "tell me about that website I mentioned" becomes "Radha for Everyone website". It uses the last 3 conversation turns to resolve references like "that", "him", "it".
2. Tavily Web Search: The clean query is sent to Tavily's advanced search API:
   - `search_depth="advanced"` for thorough results
   - `include_answer=True` so Tavily's AI synthesizes a direct answer
   - Up to 7 results with relevance scores
3. Result Formatting: Search results are structured with clear headers:
   - AI-synthesized answer (marked as primary source)
   - Individual sources with title, content, URL, and relevance score
These results are injected into the system message before the Realtime mode addendum (which explicitly instructs the LLM to USE the search data).
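The Tavily call with the parameters above looks roughly like this sketch (result formatting is simplified):

```python
import os

from tavily import TavilyClient

client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
result = client.search(
    "Radha for Everyone website",  # the extracted query
    search_depth="advanced",
    include_answer=True,
    max_results=7,
)

print(result["answer"])  # AI-synthesized answer (primary source)
for source in result["results"]:
    print(source["title"], source["url"], source["score"])
```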
### Step 6: Streaming with Inline TTS (app/main.py - _stream_generator)

The `_stream_generator` function is the core of the streaming + TTS pipeline:
1. Text chunks are yielded immediately as SSE events (`data: {"chunk": "...", "done": false}`). The frontend displays them in real time; TTS never blocks text display.
2. If TTS is enabled, the generator also:
   a. Accumulates text in a buffer.
   b. Splits the buffer into sentences at punctuation boundaries (`. ! ? , ; :`).
   c. Merges short fragments to avoid choppy speech.
   d. Submits each sentence to a `ThreadPoolExecutor` (4 workers) for background TTS generation via `edge-tts`.
   e. Checks the front of the audio queue for completed TTS jobs and yields them as `data: {"audio": "<base64 MP3>"}` events, in order, without blocking.
3. When the LLM stream ends, any remaining buffered text is flushed and all pending TTS futures are awaited (with a 15-second timeout per sentence).
4. Final event: `data: {"chunk": "", "done": true, "session_id": "..."}`.
### Step 7: Frontend Receives the Stream (frontend/script.js)

The frontend reads the SSE stream with `fetch()` + `ReadableStream`:
- Text chunks (`data.chunk`): Appended to the message bubble in real time. A blinking cursor appears during streaming.
- Audio events (`data.audio`): Passed to `TTSPlayer.enqueue()`, which adds the base64 MP3 to a playback queue.
- Done event (`data.done`): Streaming is complete. The cursor is removed.
### Step 8: TTS Playback (frontend/script.js - TTSPlayer)

The `TTSPlayer` manages audio playback:

- `enqueue(base64Audio)` adds audio to the queue and starts `_playLoop()` if not already running.
- `_playLoop()` plays segments sequentially: converts base64 to a data URL, sets it as the `<audio>` element's source, plays it, and waits for `onended` before playing the next segment.
- When audio starts playing, the orb's `.speaking` class and WebGL animation are activated.
- When all segments finish (or the user mutes TTS), the orb returns to its idle state.
### Step 9: Session Save (app/services/chat_service.py)
After the stream completes:
- The full assistant response (accumulated from all chunks) is saved in the session.
- The session is written to `database/chats_data/chat_{id}.json`.
- During streaming, the session is also saved every 5 chunks for durability.
### Step 10: Next Startup
When the server restarts:
- All `.txt` files in `database/learning_data/` are loaded.
- All `.json` files in `database/chats_data/` (past conversations) are loaded.
- Everything is chunked, embedded, and indexed in the FAISS vector store.
- New conversations benefit from all previous context.
## Architecture

```
User (Browser)
      |
      |  HTTP POST (JSON) + SSE response stream
      v
+--------------------------------------------------+
| FastAPI Application (app/main.py)                |
| - CORS middleware                                |
| - Timing middleware (logs all requests)          |
| - _stream_generator (SSE + inline TTS)           |
+--------------------------------------------------+
         |                          |
         v                          v
+------------------+    +------------------------+
| ChatService      |    | TTS Thread Pool        |
| (chat_service)   |    | (4 workers, edge-tts)  |
| - Sessions       |    +------------------------+
| - History        |
| - Disk I/O       |
+------------------+
         |
         v
+------------------+    +------------------------+
| GroqService      |    | RealtimeGroqService    |
| (groq_service)   |    | (realtime_service)     |
| - General chat   |    | - Query extraction     |
| - Multi-key      |    | - Tavily web search    |
| - LangChain      |    | - Extends GroqService  |
+------------------+    +------------------------+
         |                          |
         v                          v
+--------------------------------------------------+
| VectorStoreService (vector_store.py)             |
| - FAISS index (learning data + past chats)       |
| - HuggingFace embeddings (local, no API key)     |
| - Semantic search: returns top-k chunks          |
+--------------------------------------------------+
         |
         v
+--------------------------------------------------+
| Groq Cloud API (LLM inference)                   |
| - llama-3.3-70b-versatile (or configured model)  |
| - Primary-first multi-key fallback               |
+--------------------------------------------------+
```
## Project Structure

```
RADHA/
├── frontend/                  # Web UI (vanilla HTML/CSS/JS, no build tools)
│   ├── index.html             # Single-page app structure
│   ├── style.css              # Dark glass-morphism theme, responsive
│   ├── script.js              # Chat logic, SSE streaming, TTS player, voice input
│   └── orb.js                 # WebGL animated orb renderer (GLSL shaders)
│
├── app/                       # Backend (FastAPI)
│   ├── __init__.py
│   ├── main.py                # FastAPI app, all endpoints, inline TTS, SSE streaming
│   ├── models.py              # Pydantic models (ChatRequest, ChatResponse, etc.)
│   ├── services/
│   │   ├── __init__.py
│   │   ├── chat_service.py        # Session management, message storage, disk persistence
│   │   ├── groq_service.py        # General chat: LangChain + Groq LLM + multi-key fallback
│   │   ├── realtime_service.py    # Realtime chat: query extraction + Tavily search + Groq
│   │   └── vector_store.py        # FAISS vector index, embeddings, semantic retrieval
│   └── utils/
│       ├── __init__.py
│       ├── retry.py           # Retry with exponential backoff (for API calls)
│       └── time_info.py       # Current date/time for the system prompt
│
├── database/                  # Auto-created on first run
│   ├── learning_data/         # Your .txt files (personal info, preferences, etc.)
│   ├── chats_data/            # Saved conversations as JSON
│   └── vector_store/          # FAISS index files
│
├── config.py                  # All settings: API keys, paths, system prompt, TTS config
├── run.py                     # Entry point: python run.py
├── requirements.txt           # Python dependencies
├── .env                       # Your API keys (not committed to git)
└── README.md                  # This file
```
## API Endpoints

### POST /chat

General chat (non-streaming). Returns the full response at once.

### POST /chat/stream

General chat with streaming. Returns Server-Sent Events.

### POST /chat/realtime

Realtime chat (non-streaming). Searches the web first, then responds.

### POST /chat/realtime/stream

Realtime chat with streaming. Web search + SSE streaming.
Request body (all chat endpoints):
```json
{
  "message": "What is Python?",
  "session_id": "optional-uuid",
  "tts": true
}
```

- `message` (required): 1-32,000 characters.
- `session_id` (optional): omit to create a new session; include to continue an existing one.
- `tts` (optional, default `false`): set to `true` to receive inline audio events in the stream.
SSE stream format:
data: {"session_id": "uuid-here", "chunk": "", "done": false}
data: {"chunk": "Hello", "done": false}
data: {"chunk": ", how", "done": false}
data: {"audio": "<base64 MP3>", "sentence": "Hello, how can I help?"}
data: {"chunk": "", "done": true, "session_id": "uuid-here"}
Non-streaming response:
```json
{
  "response": "Python is a high-level programming language...",
  "session_id": "uuid-here"
}
```
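A minimal Python client for the streaming endpoints, assuming the server is running locally; it uses only the `requests` package:

```python
import json

import requests

resp = requests.post(
    "http://localhost:8000/chat/stream",
    json={"message": "What is Python?", "tts": False},
    stream=True,
)
for line in resp.iter_lines(decode_unicode=True):
    if not line or not line.startswith("data: "):
        continue
    event = json.loads(line[len("data: "):])
    if event.get("chunk"):
        print(event["chunk"], end="", flush=True)
    if event.get("done"):
        print("\nsession_id:", event.get("session_id"))
        break
```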
### GET /chat/history/{session_id}
Returns all messages for a session.
### GET /health
Health check. Returns status of all services.
### POST /tts

Standalone TTS endpoint. Send `{"text": "Hello"}`, receive streamed MP3 audio.
### GET /
Redirects to /app/ (the frontend).
### GET /api
Returns list of available endpoints.
## Configuration

### Environment Variables (`.env`)
| Variable | Required | Default | Description |
|---|---|---|---|
| `GROQ_API_KEY` | Yes | - | Primary Groq API key |
| `GROQ_API_KEY_2`, `_3`, ... | No | - | Additional keys for fallback |
| `TAVILY_API_KEY` | No | - | Tavily search API key (for Realtime mode) |
| `GROQ_MODEL` | No | `llama-3.3-70b-versatile` | LLM model name |
| `ASSISTANT_NAME` | No | `Radha` | Assistant's name |
| `RADHA_USER_TITLE` | No | - | How to address the user (e.g. "Sir") |
| `TTS_VOICE` | No | `en-IN-NeerjaNeural` | Edge TTS voice (run `edge-tts --list-voices` to see all) |
| `TTS_RATE` | No | `+22%` | Speech speed adjustment |
### System Prompt

The assistant's personality is defined in `config.py`. Key sections:
- Role: conversational face of the system; does not claim to have completed actions unless the result is visible
- Answering Quality: instructed to be specific, use context/search results, never give vague answers
- Tone: warm, intelligent, concise, witty
- Formatting: no asterisks, no emojis, no markdown, plain text only
### Learning Data

Add `.txt` files to `database/learning_data/`:
- Files are loaded and indexed at startup.
- Only relevant chunks are sent to the LLM per question (not the full text).
- Restart the server after adding new files.
### Multiple Groq API Keys

You can use multiple Groq API keys for automatic fallback. Set `GROQ_API_KEY` (required) and optionally `GROQ_API_KEY_2`, `GROQ_API_KEY_3`, etc. in your `.env`:

```env
GROQ_API_KEY=first_key
GROQ_API_KEY_2=second_key
GROQ_API_KEY_3=third_key
```
Every request tries the first key first. If it fails (rate limit, timeout, or error), the next key is tried automatically. Each key has its own daily limit on Groq's free tier, so multiple keys give you more capacity.
## Technologies Used

### Backend
| Technology | Purpose |
|---|---|
| FastAPI | Web framework, async endpoints, SSE streaming |
| LangChain | LLM orchestration, prompt templates, message formatting |
| Groq AI | LLM inference (Llama 3.3 70B, extremely fast) |
| Tavily | AI-optimized web search with answer synthesis |
| FAISS | Vector similarity search for context retrieval |
| HuggingFace | Local embeddings (sentence-transformers/all-MiniLM-L6-v2) |
| edge-tts | Server-side text-to-speech (Microsoft Edge, free, no API key) |
| Pydantic | Request/response validation |
| Uvicorn | ASGI server |
### Frontend
| Technology | Purpose |
|---|---|
| Vanilla JS | Chat logic, SSE streaming, TTS playback queue |
| WebGL/GLSL | Animated orb (simplex noise, procedural lighting) |
| Web Speech API | Browser-native speech-to-text |
| CSS Glass-morphism | Dark translucent panels with backdrop blur |
| Poppins (Google Fonts) | Typography |
## Frontend Guide

### Modes
- General: Click "General" in the header. Uses the LLM's knowledge + your learning data. No internet.
- Realtime: Click "Realtime" in the header. Searches the web first, then answers with fresh information.
### TTS (Text-to-Speech)
- Click the speaker icon to enable/disable TTS.
- When enabled, the AI speaks its response as it streams in.
- Click again to mute mid-speech (stops immediately, orb returns to idle).
### Voice Input
- Click the microphone icon to start listening.
- Speak your question. It auto-sends when you finish.
- Click again to cancel.
### Orb Animation
- Idle: Subtle glow (35% opacity), slowly rotating.
- Speaking (TTS active): Full brightness, pulsing scale animation.
- The orb only animates when TTS audio is playing, not during text streaming.
### Quick Chips
On the welcome screen, click any chip ("What can you do?", "Open YouTube", etc.) to send a preset message.
## Troubleshooting

### Server won't start

- Ensure `GROQ_API_KEY` is set in `.env`.
- Run `pip install -r requirements.txt` to install all dependencies.
- Check that port 8000 is not in use.
"Offline" status in the UI
- The server is not running. Start it with
python run.py. - Check the terminal for error messages.
### Realtime mode gives generic answers

- Ensure `TAVILY_API_KEY` is set in `.env` and is valid.
- Check the server logs for `[TAVILY]` entries to see if search is working.
- The query extraction LLM call should appear as `[REALTIME] Query extraction:` in the logs.
### TTS not working
- Make sure TTS is enabled (speaker icon should be highlighted purple).
- On iOS: TTS requires a user interaction first (tap the speaker button before sending a message).
- Check server logs for `[TTS-INLINE]` errors.
### Vector store errors

- Delete `database/vector_store/` and restart; the index rebuilds automatically.
- Check that the `database/` directories exist and are writable.
### Template variable errors

- Likely caused by `{` or `}` in learning data files. The system escapes these automatically, but if you see errors, check your `.txt` files.
## Performance

The server logs `[TIMING]` entries for every operation:
| Log Entry | What It Measures |
|---|---|
| `session_get_or_create` | Session lookup (memory/disk/new) |
| `vector_db` | Vector store retrieval |
| `tavily_search` | Web search (Realtime only) |
| `groq_api` | Full Groq API call |
| `first_chunk` | Time to first streaming token |
| `groq_stream_total` | Total stream duration + chunk count |
| `save_session_json` | Session save to disk |
Typical latencies:
- General mode first token: 0.3-1s
- Realtime mode first token: 2-5s (includes query extraction + web search)
- TTS first audio: ~1s after first sentence completes
## Security Notes

- Session IDs are validated against path traversal (`..`, `/`, `\`).
- API keys are stored in `.env` (never in code).
- CORS allows all origins (`*`) since this is a single-user server.
- No authentication; add it if deploying for multiple users.
## Developer
R.A.D.H.A was developed by Aditya Yadav.
## License
MIT License
Made with ❤️ by **Aditya Yadav**
Start chatting: `python run.py`, then open http://localhost:8000.