diff --git a/Claude_Code/.dockerignore b/.dockerignore similarity index 100% rename from Claude_Code/.dockerignore rename to .dockerignore diff --git a/Claude_Code/.gitignore b/.gitignore similarity index 100% rename from Claude_Code/.gitignore rename to .gitignore diff --git a/Claude_Code/.python-version b/.python-version similarity index 100% rename from Claude_Code/.python-version rename to .python-version diff --git a/Claude_Code/AGENTS.md b/AGENTS.md similarity index 100% rename from Claude_Code/AGENTS.md rename to AGENTS.md diff --git a/Claude_Code/CLAUDE.md b/CLAUDE.md similarity index 100% rename from Claude_Code/CLAUDE.md rename to CLAUDE.md diff --git a/Claude_Code/.gitattributes b/Claude_Code/.gitattributes deleted file mode 100644 index a6344aac8c09253b3b630fb776ae94478aa0275b..0000000000000000000000000000000000000000 --- a/Claude_Code/.gitattributes +++ /dev/null @@ -1,35 +0,0 @@ -*.7z filter=lfs diff=lfs merge=lfs -text -*.arrow filter=lfs diff=lfs merge=lfs -text -*.bin filter=lfs diff=lfs merge=lfs -text -*.bz2 filter=lfs diff=lfs merge=lfs -text -*.ckpt filter=lfs diff=lfs merge=lfs -text -*.ftz filter=lfs diff=lfs merge=lfs -text -*.gz filter=lfs diff=lfs merge=lfs -text -*.h5 filter=lfs diff=lfs merge=lfs -text -*.joblib filter=lfs diff=lfs merge=lfs -text -*.lfs.* filter=lfs diff=lfs merge=lfs -text -*.mlmodel filter=lfs diff=lfs merge=lfs -text -*.model filter=lfs diff=lfs merge=lfs -text -*.msgpack filter=lfs diff=lfs merge=lfs -text -*.npy filter=lfs diff=lfs merge=lfs -text -*.npz filter=lfs diff=lfs merge=lfs -text -*.onnx filter=lfs diff=lfs merge=lfs -text -*.ot filter=lfs diff=lfs merge=lfs -text -*.parquet filter=lfs diff=lfs merge=lfs -text -*.pb filter=lfs diff=lfs merge=lfs -text -*.pickle filter=lfs diff=lfs merge=lfs -text -*.pkl filter=lfs diff=lfs merge=lfs -text -*.pt filter=lfs diff=lfs merge=lfs -text -*.pth filter=lfs diff=lfs merge=lfs -text -*.rar filter=lfs diff=lfs merge=lfs -text -*.safetensors filter=lfs diff=lfs 
merge=lfs -text -saved_model/**/* filter=lfs diff=lfs merge=lfs -text -*.tar.* filter=lfs diff=lfs merge=lfs -text -*.tar filter=lfs diff=lfs merge=lfs -text -*.tflite filter=lfs diff=lfs merge=lfs -text -*.tgz filter=lfs diff=lfs merge=lfs -text -*.wasm filter=lfs diff=lfs merge=lfs -text -*.xz filter=lfs diff=lfs merge=lfs -text -*.zip filter=lfs diff=lfs merge=lfs -text -*.zst filter=lfs diff=lfs merge=lfs -text -*tfevents* filter=lfs diff=lfs merge=lfs -text diff --git a/Claude_Code/README.md b/Claude_Code/README.md deleted file mode 100644 index 46eee19327fef01d83af53395aa143ab2e200b71..0000000000000000000000000000000000000000 --- a/Claude_Code/README.md +++ /dev/null @@ -1,588 +0,0 @@ ---- -title: Claude Code -emoji: 🤖 -colorFrom: indigo -colorTo: blue -sdk: docker -app_port: 7860 -pinned: false ---- - -
- -# 🤖 Free Claude Code - -### Use Claude Code CLI & VSCode for free. No Anthropic API key required. - -[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg?style=for-the-badge)](https://opensource.org/licenses/MIT) -[![Python 3.12](https://img.shields.io/badge/python-3.12-3776ab.svg?style=for-the-badge&logo=python&logoColor=white)](https://www.python.org/downloads/) -[![uv](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/uv/main/assets/badge/v0.json&style=for-the-badge)](https://github.com/astral-sh/uv) -[![Tested with Pytest](https://img.shields.io/badge/testing-Pytest-00c0ff.svg?style=for-the-badge)](https://github.com/Alishahryar1/free-claude-code/actions/workflows/tests.yml) -[![Type checking: Ty](https://img.shields.io/badge/type%20checking-ty-ffcc00.svg?style=for-the-badge)](https://pypi.org/project/ty/) -[![Code style: Ruff](https://img.shields.io/badge/code%20formatting-ruff-f5a623.svg?style=for-the-badge)](https://github.com/astral-sh/ruff) -[![Logging: Loguru](https://img.shields.io/badge/logging-loguru-4ecdc4.svg?style=for-the-badge)](https://github.com/Delgan/loguru) - -A lightweight proxy that routes Claude Code's Anthropic API calls to **NVIDIA NIM** (40 req/min free), **OpenRouter** (hundreds of models), **LM Studio** (fully local), or **llama.cpp** (local with Anthropic endpoints). - -[Quick Start](#quick-start) · [Providers](#providers) · [Discord Bot](#discord-bot) · [Configuration](#configuration) · [Development](#development) · [Contributing](#contributing) - ---- - -
- -
- Free Claude Code in action -

Claude Code running via NVIDIA NIM, completely free

-
- -## Features - -| Feature | Description | -| -------------------------- | ----------------------------------------------------------------------------------------------- | -| **Zero Cost** | 40 req/min free on NVIDIA NIM. Free models on OpenRouter. Fully local with LM Studio | -| **Drop-in Replacement** | Set 2 env vars. No modifications to Claude Code CLI or VSCode extension needed | -| **4 Providers** | NVIDIA NIM, OpenRouter (hundreds of models), LM Studio (local), llama.cpp (`llama-server`) | -| **Per-Model Mapping** | Route Opus / Sonnet / Haiku to different models and providers. Mix providers freely | -| **Thinking Token Support** | Parses `` tags and `reasoning_content` into native Claude thinking blocks | -| **Heuristic Tool Parser** | Models outputting tool calls as text are auto-parsed into structured tool use | -| **Request Optimization** | 5 categories of trivial API calls intercepted locally, saving quota and latency | -| **Smart Rate Limiting** | Proactive rolling-window throttle + reactive 429 exponential backoff + optional concurrency cap | -| **Discord / Telegram Bot** | Remote autonomous coding with tree-based threading, session persistence, and live progress | -| **Subagent Control** | Task tool interception forces `run_in_background=False`. No runaway subagents | -| **Extensible** | Clean `BaseProvider` and `MessagingPlatform` ABCs. Add new providers or platforms easily | - -## Quick Start - -### Prerequisites - -1. Get an API key (or use LM Studio / llama.cpp locally): - - **NVIDIA NIM**: [build.nvidia.com/settings/api-keys](https://build.nvidia.com/settings/api-keys) - - **OpenRouter**: [openrouter.ai/keys](https://openrouter.ai/keys) - - **LM Studio**: No API key needed. Run locally with [LM Studio](https://lmstudio.ai) - - **llama.cpp**: No API key needed. Run `llama-server` locally. -2. Install [Claude Code](https://github.com/anthropics/claude-code) -3. 
Install [uv](https://github.com/astral-sh/uv) (or `uv self update` if already installed) - -### Clone & Configure - -```bash -git clone https://github.com/Alishahryar1/free-claude-code.git -cd free-claude-code -cp .env.example .env -``` - -Choose your provider and edit `.env`: - -
-NVIDIA NIM (40 req/min free, recommended) - -```dotenv -NVIDIA_NIM_API_KEY="nvapi-your-key-here" - -MODEL_OPUS="nvidia_nim/z-ai/glm4.7" -MODEL_SONNET="nvidia_nim/moonshotai/kimi-k2-thinking" -MODEL_HAIKU="nvidia_nim/stepfun-ai/step-3.5-flash" -MODEL="nvidia_nim/z-ai/glm4.7" # fallback - -# Enable for thinking models (kimi, nemotron). Leave false for others (e.g. Mistral). -NIM_ENABLE_THINKING=true -``` - -
- -
-OpenRouter (hundreds of models) - -```dotenv -OPENROUTER_API_KEY="sk-or-your-key-here" - -MODEL_OPUS="open_router/deepseek/deepseek-r1-0528:free" -MODEL_SONNET="open_router/openai/gpt-oss-120b:free" -MODEL_HAIKU="open_router/stepfun/step-3.5-flash:free" -MODEL="open_router/stepfun/step-3.5-flash:free" # fallback -``` - -
- -
-LM Studio (fully local, no API key) - -```dotenv -MODEL_OPUS="lmstudio/unsloth/MiniMax-M2.5-GGUF" -MODEL_SONNET="lmstudio/unsloth/Qwen3.5-35B-A3B-GGUF" -MODEL_HAIKU="lmstudio/unsloth/GLM-4.7-Flash-GGUF" -MODEL="lmstudio/unsloth/GLM-4.7-Flash-GGUF" # fallback -``` - -
- -
-llama.cpp (fully local, no API key) - -```dotenv -LLAMACPP_BASE_URL="http://localhost:8080/v1" - -MODEL_OPUS="llamacpp/local-model" -MODEL_SONNET="llamacpp/local-model" -MODEL_HAIKU="llamacpp/local-model" -MODEL="llamacpp/local-model" -``` - -
- -
-Mix providers - -Each `MODEL_*` variable can use a different provider. `MODEL` is the fallback for unrecognized Claude models. - -```dotenv -NVIDIA_NIM_API_KEY="nvapi-your-key-here" -OPENROUTER_API_KEY="sk-or-your-key-here" - -MODEL_OPUS="nvidia_nim/moonshotai/kimi-k2.5" -MODEL_SONNET="open_router/deepseek/deepseek-r1-0528:free" -MODEL_HAIKU="lmstudio/unsloth/GLM-4.7-Flash-GGUF" -MODEL="nvidia_nim/z-ai/glm4.7" # fallback -``` - -
- -
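The per-model fallback described above (each `MODEL_*` resolving independently, with `MODEL` as the catch-all) can be pictured with a small sketch. This is illustrative only — the family-name matching below is an assumption, not the proxy's actual lookup code; only the `MODEL_*` variable names come from `.env`:

```python
import os

def model_for(requested: str) -> str:
    # Map a requested Claude model family (opus/sonnet/haiku) to its
    # configured backend; anything unrecognized falls back to MODEL.
    name = requested.lower()
    for family in ("opus", "sonnet", "haiku"):
        if family in name and os.environ.get(f"MODEL_{family.upper()}"):
            return os.environ[f"MODEL_{family.upper()}"]
    return os.environ["MODEL"]
```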
-Optional Authentication (restrict access to your proxy) - -Set `ANTHROPIC_AUTH_TOKEN` in `.env` to require clients to authenticate: - -```dotenv -ANTHROPIC_AUTH_TOKEN="your-secret-token-here" -``` - -**How it works:** -- If `ANTHROPIC_AUTH_TOKEN` is empty (default), no authentication is required (backward compatible) -- If set, clients must provide the same token via the `ANTHROPIC_AUTH_TOKEN` header -- For private Hugging Face Spaces, query auth is supported as `?psw=token`, `?psw:token`, or `?psw%3Atoken` -- The `claude-pick` script automatically reads the token from `.env` if configured - -**Example usage:** -```bash -# With authentication -ANTHROPIC_AUTH_TOKEN="your-secret-token-here" \ -ANTHROPIC_BASE_URL="http://localhost:8082" claude - -# Hugging Face private Space (query auth in URL) -ANTHROPIC_API_KEY="Jack@188" \ -ANTHROPIC_BASE_URL="https://.hf.space?psw:Jack%40188" claude - -# claude-pick automatically uses the configured token -claude-pick -``` - -Note: `HEAD /` returning `405 Method Not Allowed` means auth already passed; only `GET /` is implemented. - -Use this feature if: -- Running the proxy on a public network -- Sharing the server with others but restricting access -- Wanting an additional layer of security - -
- -### Run It - -**Terminal 1:** Start the proxy server: - -```bash -uv run uvicorn server:app --host 0.0.0.0 --port 8082 -``` - -**Terminal 2:** Run Claude Code: - -#### PowerShell -```powershell -$env:ANTHROPIC_BASE_URL="http://localhost:8082?psw:Jack%40188"; $env:ANTHROPIC_API_KEY="Jack@188"; claude -``` -#### Bash -```bash -export ANTHROPIC_BASE_URL="http://localhost:8082?psw:Jack%40188"; export ANTHROPIC_API_KEY="Jack@188"; claude -``` - -That's it! Claude Code now uses your configured provider for free. - -### One-Click Factory Reset (Space Admin) - -Open the admin page: - -- Local: `http://localhost:8082/admin/factory-reset?psw:Jack%40188` -- Space: `https://.hf.space/admin/factory-reset?psw:Jack%40188` - -Click **Factory Restart** to clear runtime cache + workspace data and restart the server. - -
-VSCode Extension Setup - -1. Start the proxy server (same as above). -2. Open Settings (`Ctrl + ,`) and search for `claude-code.environmentVariables`. -3. Click **Edit in settings.json** and add: - -```json -"claudeCode.environmentVariables": [ - { "name": "ANTHROPIC_BASE_URL", "value": "http://localhost:8082" }, - { "name": "ANTHROPIC_AUTH_TOKEN", "value": "freecc" } -] -``` - -4. Reload extensions. -5. **If you see the login screen**: Click **Anthropic Console**, then authorize. The extension will start working. You may be redirected to buy credits in the browser; ignore it — the extension already works. - -To switch back to Anthropic models, comment out the added block and reload extensions. - -
- -
-Multi-Model Support (Model Picker) - -`claude-pick` is an interactive model selector that lets you choose any model from your active provider each time you launch Claude, without editing `MODEL` in `.env`. - -https://github.com/user-attachments/assets/9a33c316-90f8-4418-9650-97e7d33ad645 - -**1. Install [fzf](https://github.com/junegunn/fzf)**: - -```bash -brew install fzf # macOS/Linux -``` - -**2. Add the alias to `~/.zshrc` or `~/.bashrc`:** - -```bash -alias claude-pick="/absolute/path/to/free-claude-code/claude-pick" -``` - -Then reload your shell (`source ~/.zshrc` or `source ~/.bashrc`) and run `claude-pick`. - -**Or use a fixed model alias** (no picker needed): - -```bash -alias claude-kimi='ANTHROPIC_BASE_URL="http://localhost:8082" ANTHROPIC_AUTH_TOKEN="freecc:moonshotai/kimi-k2.5" claude' -``` - -
- -### Install as a Package (no clone needed) - -```bash -uv tool install git+https://github.com/Alishahryar1/free-claude-code.git -fcc-init # creates ~/.config/free-claude-code/.env from the built-in template -``` - -Edit `~/.config/free-claude-code/.env` with your API keys and model names, then: - -```bash -free-claude-code # starts the server -``` - -> To update: `uv tool upgrade free-claude-code` - ---- - -## How It Works - -``` -┌─────────────────┐ ┌──────────────────────┐ ┌──────────────────┐ -│ Claude Code │───────>│ Free Claude Code │───────>│ LLM Provider │ -│ CLI / VSCode │<───────│ Proxy (:8082) │<───────│ NIM / OR / LMS │ -└─────────────────┘ └──────────────────────┘ └──────────────────┘ - Anthropic API OpenAI-compatible - format (SSE) format (SSE) -``` - -- **Transparent proxy**: Claude Code sends standard Anthropic API requests; the proxy forwards them to your configured provider -- **Per-model routing**: Opus / Sonnet / Haiku requests resolve to their model-specific backend, with `MODEL` as fallback -- **Request optimization**: 5 categories of trivial requests (quota probes, title generation, prefix detection, suggestions, filepath extraction) are intercepted and responded to locally without using API quota -- **Format translation**: Requests are translated from Anthropic format to the provider's OpenAI-compatible format and streamed back -- **Thinking tokens**: `` tags and `reasoning_content` fields are converted into native Claude thinking blocks - ---- - -## Providers - -| Provider | Cost | Rate Limit | Best For | -| -------------- | ------------ | ---------- | ------------------------------------ | -| **NVIDIA NIM** | Free | 40 req/min | Daily driver, generous free tier | -| **OpenRouter** | Free / Paid | Varies | Model variety, fallback options | -| **LM Studio** | Free (local) | Unlimited | Privacy, offline use, no rate limits | -| **llama.cpp** | Free (local) | Unlimited | Lightweight local inference engine | - -Models use a prefix format: 
`provider_prefix/model/name`. An invalid prefix causes an error. - -| Provider | `MODEL` prefix | API Key Variable | Default Base URL | -| ---------- | ----------------- | -------------------- | ----------------------------- | -| NVIDIA NIM | `nvidia_nim/...` | `NVIDIA_NIM_API_KEY` | `integrate.api.nvidia.com/v1` | -| OpenRouter | `open_router/...` | `OPENROUTER_API_KEY` | `openrouter.ai/api/v1` | -| LM Studio | `lmstudio/...` | (none) | `localhost:1234/v1` | -| llama.cpp | `llamacpp/...` | (none) | `localhost:8080/v1` | - -
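Splitting and validating the prefix is straightforward; a sketch of the documented behavior (an unknown prefix causes an error), not the proxy's actual routing code:

```python
PROVIDER_PREFIXES = {"nvidia_nim", "open_router", "lmstudio", "llamacpp"}

def split_model(model: str) -> tuple[str, str]:
    # "nvidia_nim/z-ai/glm4.7" -> ("nvidia_nim", "z-ai/glm4.7")
    prefix, _, name = model.partition("/")
    if prefix not in PROVIDER_PREFIXES or not name:
        raise ValueError(f"invalid provider prefix in {model!r}")
    return prefix, name
```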
-NVIDIA NIM models - -Popular models (full list in [`nvidia_nim_models.json`](nvidia_nim_models.json)): - -- `nvidia_nim/minimaxai/minimax-m2.5` -- `nvidia_nim/qwen/qwen3.5-397b-a17b` -- `nvidia_nim/z-ai/glm5` -- `nvidia_nim/moonshotai/kimi-k2.5` -- `nvidia_nim/stepfun-ai/step-3.5-flash` - -Browse: [build.nvidia.com](https://build.nvidia.com/explore/discover) · Update list: `curl "https://integrate.api.nvidia.com/v1/models" > nvidia_nim_models.json` - -
- -
-OpenRouter models - -Popular free models: - -- `open_router/arcee-ai/trinity-large-preview:free` -- `open_router/stepfun/step-3.5-flash:free` -- `open_router/deepseek/deepseek-r1-0528:free` -- `open_router/openai/gpt-oss-120b:free` - -Browse: [openrouter.ai/models](https://openrouter.ai/models) · [Free models](https://openrouter.ai/collections/free-models) - -
- -
-LM Studio models - -Run models locally with [LM Studio](https://lmstudio.ai). Load a model in the Chat or Developer tab, then set `MODEL` to its identifier. - -Examples with native tool-use support: - -- `LiquidAI/LFM2-24B-A2B-GGUF` -- `unsloth/MiniMax-M2.5-GGUF` -- `unsloth/GLM-4.7-Flash-GGUF` -- `unsloth/Qwen3.5-35B-A3B-GGUF` - -Browse: [model.lmstudio.ai](https://model.lmstudio.ai) - -
- -
-llama.cpp models - -Run models locally using `llama-server`. Ensure you have a tool-capable GGUF. Set `MODEL` to any name you like (e.g. `llamacpp/my-model`), as `llama-server` ignores the model name when run via `/v1/messages`. - -See the Unsloth docs for detailed instructions and capable models: -[https://unsloth.ai/docs/models/qwen3.5#qwen3.5-small-0.8b-2b-4b-9b](https://unsloth.ai/docs/models/qwen3.5#qwen3.5-small-0.8b-2b-4b-9b) - -
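The thinking-token handling mentioned under How It Works can be pictured with a tiny parser. The `<think>` tag name below is a hypothetical stand-in for illustration only — the tag the proxy actually recognizes, and its `reasoning_content` handling, live in the provider code:

```python
import re

# Hypothetical tag name, for illustration only.
_THINK = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_thinking(text: str) -> tuple[list[str], str]:
    """Separate reasoning spans from the visible reply."""
    thinking = _THINK.findall(text)
    visible = _THINK.sub("", text).strip()
    return thinking, visible
```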
- ---- - -## Discord Bot - -Control Claude Code remotely from Discord (or Telegram). Send tasks, watch live progress, and manage multiple concurrent sessions. - -**Capabilities:** - -- Tree-based message threading: reply to a message to fork the conversation -- Session persistence across server restarts -- Live streaming of thinking tokens, tool calls, and results -- Unlimited concurrent Claude CLI sessions (concurrency controlled by `PROVIDER_MAX_CONCURRENCY`) -- Voice notes: send voice messages; they are transcribed and processed as regular prompts -- Commands: `/stop` (cancel a task; reply to a message to stop only that task), `/clear` (reset all sessions, or reply to clear a branch), `/stats` - -### Setup - -1. **Create a Discord Bot**: Go to [Discord Developer Portal](https://discord.com/developers/applications), create an application, add a bot, and copy the token. Enable **Message Content Intent** under Bot settings. - -2. **Edit `.env`:** - -```dotenv -MESSAGING_PLATFORM="discord" -DISCORD_BOT_TOKEN="your_discord_bot_token" -ALLOWED_DISCORD_CHANNELS="123456789,987654321" -``` - -> Enable Developer Mode in Discord (Settings → Advanced), then right-click a channel and "Copy ID". Comma-separate multiple channels. If empty, no channels are allowed. - -3. **Configure the workspace** (where Claude will operate): - -```dotenv -CLAUDE_WORKSPACE="./agent_workspace" -ALLOWED_DIR="C:/Users/yourname/projects" -``` - -4. **Start the server:** - -```bash -uv run uvicorn server:app --host 0.0.0.0 --port 8082 -``` - -5. **Invite the bot** via OAuth2 URL Generator (scopes: `bot`, permissions: Read Messages, Send Messages, Manage Messages, Read Message History). - -### Telegram - -Set `MESSAGING_PLATFORM=telegram` and configure: - -```dotenv -TELEGRAM_BOT_TOKEN="123456789:ABCdefGHIjklMNOpqrSTUvwxYZ" -ALLOWED_TELEGRAM_USER_ID="your_telegram_user_id" -``` - -Get a token from [@BotFather](https://t.me/BotFather); find your user ID via [@userinfobot](https://t.me/userinfobot). 
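The `ALLOWED_DISCORD_CHANNELS` / `ALLOWED_TELEGRAM_USER_ID` settings amount to a simple allow-list. A sketch of the documented behavior — comma-separated IDs, and an empty value meaning nothing is allowed:

```python
def allowed_ids(raw: str) -> set[int]:
    # "123,456" -> {123, 456}; "" -> empty set (deny all).
    return {int(part) for part in raw.split(",") if part.strip()}

def is_allowed(sender_id: int, raw: str) -> bool:
    return sender_id in allowed_ids(raw)
```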
- -### Voice Notes - -Send voice messages on Discord or Telegram; they are transcribed and processed as regular prompts. - -| Backend | Description | API Key | -| --------------------------- | ------------------------------------------------------------------------------------------------------------- | -------------------- | -| **Local Whisper** (default) | [Hugging Face Whisper](https://huggingface.co/openai/whisper-large-v3-turbo) — free, offline, CUDA compatible | not required | -| **NVIDIA NIM** | Whisper/Parakeet models via gRPC | `NVIDIA_NIM_API_KEY` | - -**Install the voice extras:** - -```bash -# If you cloned the repo: -uv sync --extra voice_local # Local Whisper -uv sync --extra voice # NVIDIA NIM -uv sync --extra voice --extra voice_local # Both - -# If you installed as a package (no clone): -uv tool install "free-claude-code[voice_local] @ git+https://github.com/Alishahryar1/free-claude-code.git" -uv tool install "free-claude-code[voice] @ git+https://github.com/Alishahryar1/free-claude-code.git" -uv tool install "free-claude-code[voice,voice_local] @ git+https://github.com/Alishahryar1/free-claude-code.git" -``` - -Configure via `WHISPER_DEVICE` (`cpu` | `cuda` | `nvidia_nim`) and `WHISPER_MODEL`. See the [Configuration](#configuration) table for all voice variables and supported model values. 
- ---- - -## Configuration - -### Core - -| Variable | Description | Default | -| -------------------- | --------------------------------------------------------------------- | ------------------------------------------------- | -| `MODEL` | Fallback model (`provider/model/name` format; invalid prefix → error) | `nvidia_nim/stepfun-ai/step-3.5-flash` | -| `MODEL_OPUS` | Model for Claude Opus requests (falls back to `MODEL`) | `nvidia_nim/z-ai/glm4.7` | -| `MODEL_SONNET` | Model for Claude Sonnet requests (falls back to `MODEL`) | `open_router/arcee-ai/trinity-large-preview:free` | -| `MODEL_HAIKU` | Model for Claude Haiku requests (falls back to `MODEL`) | `open_router/stepfun/step-3.5-flash:free` | -| `NVIDIA_NIM_API_KEY` | NVIDIA API key | required for NIM | -| `NIM_ENABLE_THINKING` | Send `chat_template_kwargs` + `reasoning_budget` on NIM requests. Enable for thinking models (kimi, nemotron); leave `false` for others (e.g. Mistral) | `false` | -| `OPENROUTER_API_KEY` | OpenRouter API key | required for OpenRouter | -| `LM_STUDIO_BASE_URL` | LM Studio server URL | `http://localhost:1234/v1` | -| `LLAMACPP_BASE_URL` | llama.cpp server URL | `http://localhost:8080/v1` | - -### Rate Limiting & Timeouts - -| Variable | Description | Default | -| -------------------------- | ----------------------------------------- | ------- | -| `PROVIDER_RATE_LIMIT` | LLM API requests per window | `40` | -| `PROVIDER_RATE_WINDOW` | Rate limit window (seconds) | `60` | -| `PROVIDER_MAX_CONCURRENCY` | Max simultaneous open provider streams | `5` | -| `HTTP_READ_TIMEOUT` | Read timeout for provider requests (s) | `120` | -| `HTTP_WRITE_TIMEOUT` | Write timeout for provider requests (s) | `10` | -| `HTTP_CONNECT_TIMEOUT` | Connect timeout for provider requests (s) | `2` | - -### Messaging & Voice - -| Variable | Description | Default | -| -------------------------- | 
------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------- | -| `MESSAGING_PLATFORM` | `discord` or `telegram` | `discord` | -| `DISCORD_BOT_TOKEN` | Discord bot token | `""` | -| `ALLOWED_DISCORD_CHANNELS` | Comma-separated channel IDs (empty = none allowed) | `""` | -| `TELEGRAM_BOT_TOKEN` | Telegram bot token | `""` | -| `ALLOWED_TELEGRAM_USER_ID` | Allowed Telegram user ID | `""` | -| `CLAUDE_WORKSPACE` | Directory where the agent operates | `./agent_workspace` | -| `ALLOWED_DIR` | Allowed directories for the agent | `""` | -| `MESSAGING_RATE_LIMIT` | Messaging messages per window | `1` | -| `MESSAGING_RATE_WINDOW` | Messaging window (seconds) | `1` | -| `VOICE_NOTE_ENABLED` | Enable voice note handling | `true` | -| `WHISPER_DEVICE` | `cpu` \| `cuda` \| `nvidia_nim` | `cpu` | -| `WHISPER_MODEL` | Whisper model (local: `tiny`/`base`/`small`/`medium`/`large-v2`/`large-v3`/`large-v3-turbo`; NIM: `openai/whisper-large-v3`, `nvidia/parakeet-ctc-1.1b-asr`, etc.) | `base` | -| `HF_TOKEN` | Hugging Face token for faster downloads (local Whisper, optional) | — | - -
-Advanced: Request optimization flags - -These are enabled by default and intercept trivial Claude Code requests locally to save API quota. - -| Variable | Description | Default | -| --------------------------------- | ------------------------------ | ------- | -| `FAST_PREFIX_DETECTION` | Enable fast prefix detection | `true` | -| `ENABLE_NETWORK_PROBE_MOCK` | Mock network probe requests | `true` | -| `ENABLE_TITLE_GENERATION_SKIP` | Skip title generation requests | `true` | -| `ENABLE_SUGGESTION_MODE_SKIP` | Skip suggestion mode requests | `true` | -| `ENABLE_FILEPATH_EXTRACTION_MOCK` | Mock filepath extraction | `true` | - -
- -See [`.env.example`](.env.example) for all supported parameters. - ---- - -## Development - -### Project Structure - -``` -free-claude-code/ -├── server.py # Entry point -├── api/ # FastAPI routes, request detection, optimization handlers -├── providers/ # BaseProvider, OpenAICompatibleProvider, NIM, OpenRouter, LM Studio, llamacpp -│ └── common/ # Shared utils (SSE builder, message converter, parsers, error mapping) -├── messaging/ # MessagingPlatform ABC + Discord/Telegram bots, session management -├── config/ # Settings, NIM config, logging -├── cli/ # CLI session and process management -└── tests/ # Pytest test suite -``` - -### Commands - -```bash -uv run ruff format # Format code -uv run ruff check # Lint -uv run ty check # Type checking -uv run pytest # Run tests -``` - -### Extending - -**Adding an OpenAI-compatible provider** (Groq, Together AI, etc.) — extend `OpenAICompatibleProvider`: - -```python -from providers.openai_compat import OpenAICompatibleProvider -from providers.base import ProviderConfig - -class MyProvider(OpenAICompatibleProvider): - def __init__(self, config: ProviderConfig): - super().__init__(config, provider_name="MYPROVIDER", - base_url="https://api.example.com/v1", api_key=config.api_key) -``` - -**Adding a fully custom provider** — extend `BaseProvider` directly and implement `stream_response()`. - -**Adding a messaging platform** — extend `MessagingPlatform` in `messaging/` and implement `start()`, `stop()`, `send_message()`, `edit_message()`, and `on_message()`. - ---- - -## Contributing - -- Report bugs or suggest features via [Issues](https://github.com/Alishahryar1/free-claude-code/issues) -- Add new LLM providers (Groq, Together AI, etc.) -- Add new messaging platforms (Slack, etc.) 
-- Improve test coverage -- Not accepting Docker integration PRs for now - -```bash -git checkout -b my-feature -uv run ruff format && uv run ruff check && uv run ty check && uv run pytest -# Open a pull request -``` - ---- - -## License - -MIT License. See [LICENSE](LICENSE) for details. - -Built with [FastAPI](https://fastapi.tiangolo.com/), [OpenAI Python SDK](https://github.com/openai/openai-python), [discord.py](https://github.com/Rapptz/discord.py), and [python-telegram-bot](https://python-telegram-bot.org/). diff --git a/Claude_Code/Dockerfile b/Dockerfile similarity index 100% rename from Claude_Code/Dockerfile rename to Dockerfile diff --git a/README.md b/README.md index 4fb3a8c7a9fe4424e9696569684d560cb6a0e05c..46eee19327fef01d83af53395aa143ab2e200b71 100644 --- a/README.md +++ b/README.md @@ -1,11 +1,588 @@ --- title: Claude Code -emoji: 📚 -colorFrom: green -colorTo: purple +emoji: 🤖 +colorFrom: indigo +colorTo: blue sdk: docker +app_port: 7860 pinned: false -license: mit --- -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference +
+ +# 🤖 Free Claude Code + +### Use Claude Code CLI & VSCode for free. No Anthropic API key required. + +[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg?style=for-the-badge)](https://opensource.org/licenses/MIT) +[![Python 3.12](https://img.shields.io/badge/python-3.12-3776ab.svg?style=for-the-badge&logo=python&logoColor=white)](https://www.python.org/downloads/) +[![uv](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/uv/main/assets/badge/v0.json&style=for-the-badge)](https://github.com/astral-sh/uv) +[![Tested with Pytest](https://img.shields.io/badge/testing-Pytest-00c0ff.svg?style=for-the-badge)](https://github.com/Alishahryar1/free-claude-code/actions/workflows/tests.yml) +[![Type checking: Ty](https://img.shields.io/badge/type%20checking-ty-ffcc00.svg?style=for-the-badge)](https://pypi.org/project/ty/) +[![Code style: Ruff](https://img.shields.io/badge/code%20formatting-ruff-f5a623.svg?style=for-the-badge)](https://github.com/astral-sh/ruff) +[![Logging: Loguru](https://img.shields.io/badge/logging-loguru-4ecdc4.svg?style=for-the-badge)](https://github.com/Delgan/loguru) + +A lightweight proxy that routes Claude Code's Anthropic API calls to **NVIDIA NIM** (40 req/min free), **OpenRouter** (hundreds of models), **LM Studio** (fully local), or **llama.cpp** (local with Anthropic endpoints). + +[Quick Start](#quick-start) · [Providers](#providers) · [Discord Bot](#discord-bot) · [Configuration](#configuration) · [Development](#development) · [Contributing](#contributing) + +--- + +
+ +
+ Free Claude Code in action +

Claude Code running via NVIDIA NIM, completely free

+
+ +## Features + +| Feature | Description | +| -------------------------- | ----------------------------------------------------------------------------------------------- | +| **Zero Cost** | 40 req/min free on NVIDIA NIM. Free models on OpenRouter. Fully local with LM Studio | +| **Drop-in Replacement** | Set 2 env vars. No modifications to Claude Code CLI or VSCode extension needed | +| **4 Providers** | NVIDIA NIM, OpenRouter (hundreds of models), LM Studio (local), llama.cpp (`llama-server`) | +| **Per-Model Mapping** | Route Opus / Sonnet / Haiku to different models and providers. Mix providers freely | +| **Thinking Token Support** | Parses `` tags and `reasoning_content` into native Claude thinking blocks | +| **Heuristic Tool Parser** | Models outputting tool calls as text are auto-parsed into structured tool use | +| **Request Optimization** | 5 categories of trivial API calls intercepted locally, saving quota and latency | +| **Smart Rate Limiting** | Proactive rolling-window throttle + reactive 429 exponential backoff + optional concurrency cap | +| **Discord / Telegram Bot** | Remote autonomous coding with tree-based threading, session persistence, and live progress | +| **Subagent Control** | Task tool interception forces `run_in_background=False`. No runaway subagents | +| **Extensible** | Clean `BaseProvider` and `MessagingPlatform` ABCs. Add new providers or platforms easily | + +## Quick Start + +### Prerequisites + +1. Get an API key (or use LM Studio / llama.cpp locally): + - **NVIDIA NIM**: [build.nvidia.com/settings/api-keys](https://build.nvidia.com/settings/api-keys) + - **OpenRouter**: [openrouter.ai/keys](https://openrouter.ai/keys) + - **LM Studio**: No API key needed. Run locally with [LM Studio](https://lmstudio.ai) + - **llama.cpp**: No API key needed. Run `llama-server` locally. +2. Install [Claude Code](https://github.com/anthropics/claude-code) +3. 
Install [uv](https://github.com/astral-sh/uv) (or `uv self update` if already installed) + +### Clone & Configure + +```bash +git clone https://github.com/Alishahryar1/free-claude-code.git +cd free-claude-code +cp .env.example .env +``` + +Choose your provider and edit `.env`: + +
+NVIDIA NIM (40 req/min free, recommended) + +```dotenv +NVIDIA_NIM_API_KEY="nvapi-your-key-here" + +MODEL_OPUS="nvidia_nim/z-ai/glm4.7" +MODEL_SONNET="nvidia_nim/moonshotai/kimi-k2-thinking" +MODEL_HAIKU="nvidia_nim/stepfun-ai/step-3.5-flash" +MODEL="nvidia_nim/z-ai/glm4.7" # fallback + +# Enable for thinking models (kimi, nemotron). Leave false for others (e.g. Mistral). +NIM_ENABLE_THINKING=true +``` + +
+ +
+OpenRouter (hundreds of models) + +```dotenv +OPENROUTER_API_KEY="sk-or-your-key-here" + +MODEL_OPUS="open_router/deepseek/deepseek-r1-0528:free" +MODEL_SONNET="open_router/openai/gpt-oss-120b:free" +MODEL_HAIKU="open_router/stepfun/step-3.5-flash:free" +MODEL="open_router/stepfun/step-3.5-flash:free" # fallback +``` + +
+ +
+LM Studio (fully local, no API key) + +```dotenv +MODEL_OPUS="lmstudio/unsloth/MiniMax-M2.5-GGUF" +MODEL_SONNET="lmstudio/unsloth/Qwen3.5-35B-A3B-GGUF" +MODEL_HAIKU="lmstudio/unsloth/GLM-4.7-Flash-GGUF" +MODEL="lmstudio/unsloth/GLM-4.7-Flash-GGUF" # fallback +``` + +
+ +
+llama.cpp (fully local, no API key) + +```dotenv +LLAMACPP_BASE_URL="http://localhost:8080/v1" + +MODEL_OPUS="llamacpp/local-model" +MODEL_SONNET="llamacpp/local-model" +MODEL_HAIKU="llamacpp/local-model" +MODEL="llamacpp/local-model" +``` + +
+ +
+Mix providers + +Each `MODEL_*` variable can use a different provider. `MODEL` is the fallback for unrecognized Claude models. + +```dotenv +NVIDIA_NIM_API_KEY="nvapi-your-key-here" +OPENROUTER_API_KEY="sk-or-your-key-here" + +MODEL_OPUS="nvidia_nim/moonshotai/kimi-k2.5" +MODEL_SONNET="open_router/deepseek/deepseek-r1-0528:free" +MODEL_HAIKU="lmstudio/unsloth/GLM-4.7-Flash-GGUF" +MODEL="nvidia_nim/z-ai/glm4.7" # fallback +``` + +
+ +
+Optional Authentication (restrict access to your proxy) + +Set `ANTHROPIC_AUTH_TOKEN` in `.env` to require clients to authenticate: + +```dotenv +ANTHROPIC_AUTH_TOKEN="your-secret-token-here" +``` + +**How it works:** +- If `ANTHROPIC_AUTH_TOKEN` is empty (default), no authentication is required (backward compatible) +- If set, clients must provide the same token via the `ANTHROPIC_AUTH_TOKEN` header +- For private Hugging Face Spaces, query auth is supported as `?psw=token`, `?psw:token`, or `?psw%3Atoken` +- The `claude-pick` script automatically reads the token from `.env` if configured + +**Example usage:** +```bash +# With authentication +ANTHROPIC_AUTH_TOKEN="your-secret-token-here" \ +ANTHROPIC_BASE_URL="http://localhost:8082" claude + +# Hugging Face private Space (query auth in URL) +ANTHROPIC_API_KEY="Jack@188" \ +ANTHROPIC_BASE_URL="https://.hf.space?psw:Jack%40188" claude + +# claude-pick automatically uses the configured token +claude-pick +``` + +Note: `HEAD /` returning `405 Method Not Allowed` means auth already passed; only `GET /` is implemented. + +Use this feature if: +- Running the proxy on a public network +- Sharing the server with others but restricting access +- Wanting an additional layer of security + +
+ 
+### Run It 
+ 
+**Terminal 1:** Start the proxy server: 
+ 
+```bash 
+uv run uvicorn server:app --host 0.0.0.0 --port 8082 
+``` 
+ 
+**Terminal 2:** Run Claude Code: 
+ 
+#### PowerShell 
+```powershell 
+$env:ANTHROPIC_BASE_URL="http://localhost:8082?psw:Jack%40188"; $env:ANTHROPIC_API_KEY="Jack@188"; claude 
+``` 
+#### Bash 
+```bash 
+export ANTHROPIC_BASE_URL="http://localhost:8082?psw:Jack%40188"; export ANTHROPIC_API_KEY="Jack@188"; claude 
+``` 
+ 
+That's it! Claude Code now uses your configured provider for free. 
+ 
+### One-Click Factory Reset (Space Admin) 
+ 
+Open the admin page: 
+ 
+- Local: `http://localhost:8082/admin/factory-reset?psw:Jack%40188` 
+- Space: `https://.hf.space/admin/factory-reset?psw:Jack%40188` 
+ 
+Click **Factory Restart** to clear runtime cache + workspace data and restart the server. 
+ 
+VSCode Extension Setup 
+ 
+1. Start the proxy server (same as above). 
+2. Open Settings (`Ctrl + ,`) and search for `claudeCode.environmentVariables`. 
+3. Click **Edit in settings.json** and add: 
+ 
+```json 
+"claudeCode.environmentVariables": [ 
+  { "name": "ANTHROPIC_BASE_URL", "value": "http://localhost:8082" }, 
+  { "name": "ANTHROPIC_AUTH_TOKEN", "value": "freecc" } 
+] 
+``` 
+ 
+4. Reload extensions. 
+5. **If you see the login screen**: Click **Anthropic Console**, then authorize. The extension will start working. You may be redirected to buy credits in the browser; ignore it — the extension already works. 
+ 
+To switch back to Anthropic models, comment out the added block and reload extensions. 
+ 
+ +
+Multi-Model Support (Model Picker) + +`claude-pick` is an interactive model selector that lets you choose any model from your active provider each time you launch Claude, without editing `MODEL` in `.env`. + +https://github.com/user-attachments/assets/9a33c316-90f8-4418-9650-97e7d33ad645 + +**1. Install [fzf](https://github.com/junegunn/fzf)**: + +```bash +brew install fzf # macOS/Linux +``` + +**2. Add the alias to `~/.zshrc` or `~/.bashrc`:** + +```bash +alias claude-pick="/absolute/path/to/free-claude-code/claude-pick" +``` + +Then reload your shell (`source ~/.zshrc` or `source ~/.bashrc`) and run `claude-pick`. + +**Or use a fixed model alias** (no picker needed): + +```bash +alias claude-kimi='ANTHROPIC_BASE_URL="http://localhost:8082" ANTHROPIC_AUTH_TOKEN="freecc:moonshotai/kimi-k2.5" claude' +``` + +
+ 
+### Install as a Package (no clone needed) 
+ 
+```bash 
+uv tool install git+https://github.com/Alishahryar1/free-claude-code.git 
+fcc-init   # creates ~/.config/free-claude-code/.env from the built-in template 
+``` 
+ 
+Edit `~/.config/free-claude-code/.env` with your API keys and model names, then: 
+ 
+```bash 
+free-claude-code   # starts the server 
+``` 
+ 
+> To update: `uv tool upgrade free-claude-code` 
+ 
+--- 
+ 
+## How It Works 
+ 
+``` 
+┌─────────────────┐        ┌──────────────────────┐        ┌──────────────────┐ 
+│   Claude Code   │───────>│   Free Claude Code   │───────>│   LLM Provider   │ 
+│   CLI / VSCode  │<───────│    Proxy (:8082)     │<───────│  NIM / OR / LMS  │ 
+└─────────────────┘        └──────────────────────┘        └──────────────────┘ 
+     Anthropic API              OpenAI-compatible 
+     format (SSE)               format (SSE) 
+``` 
+ 
+- **Transparent proxy**: Claude Code sends standard Anthropic API requests; the proxy forwards them to your configured provider 
+- **Per-model routing**: Opus / Sonnet / Haiku requests resolve to their model-specific backend, with `MODEL` as fallback 
+- **Request optimization**: 5 categories of trivial requests (quota probes, title generation, prefix detection, suggestions, filepath extraction) are intercepted and responded to locally without using API quota 
+- **Format translation**: Requests are translated from Anthropic format to the provider's OpenAI-compatible format and streamed back 
+- **Thinking tokens**: `<think>` tags and `reasoning_content` fields are converted into native Claude thinking blocks 
+ 
+--- 
+ 
+## Providers 
+ 
+| Provider       | Cost         | Rate Limit | Best For                             | 
+| -------------- | ------------ | ---------- | ------------------------------------ | 
+| **NVIDIA NIM** | Free         | 40 req/min | Daily driver, generous free tier     | 
+| **OpenRouter** | Free / Paid  | Varies     | Model variety, fallback options      | 
+| **LM Studio**  | Free (local) | Unlimited  | Privacy, offline use, no rate limits | 
+| **llama.cpp**  | Free (local) | Unlimited  | Lightweight local inference engine   | 
+ 
+Models use a prefix format: 
`provider_prefix/model/name`. An invalid prefix causes an error. + +| Provider | `MODEL` prefix | API Key Variable | Default Base URL | +| ---------- | ----------------- | -------------------- | ----------------------------- | +| NVIDIA NIM | `nvidia_nim/...` | `NVIDIA_NIM_API_KEY` | `integrate.api.nvidia.com/v1` | +| OpenRouter | `open_router/...` | `OPENROUTER_API_KEY` | `openrouter.ai/api/v1` | +| LM Studio | `lmstudio/...` | (none) | `localhost:1234/v1` | +| llama.cpp | `llamacpp/...` | (none) | `localhost:8080/v1` | + +
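The prefix convention in the table above can be illustrated with a tiny parser. This is a sketch under the assumption that the provider is everything before the first `/`; the proxy's own parsing may differ.

```python
# Prefixes from the provider table above.
KNOWN_PREFIXES = {"nvidia_nim", "open_router", "lmstudio", "llamacpp"}

def parse_model(spec: str) -> tuple[str, str]:
    """Split 'provider_prefix/model/name' into (provider, model);
    an unknown prefix is an error, mirroring the behavior described above."""
    provider, _, model = spec.partition("/")
    if provider not in KNOWN_PREFIXES or not model:
        raise ValueError(f"invalid provider prefix in MODEL: {spec!r}")
    return provider, model
```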
+NVIDIA NIM models + +Popular models (full list in [`nvidia_nim_models.json`](nvidia_nim_models.json)): + +- `nvidia_nim/minimaxai/minimax-m2.5` +- `nvidia_nim/qwen/qwen3.5-397b-a17b` +- `nvidia_nim/z-ai/glm5` +- `nvidia_nim/moonshotai/kimi-k2.5` +- `nvidia_nim/stepfun-ai/step-3.5-flash` + +Browse: [build.nvidia.com](https://build.nvidia.com/explore/discover) · Update list: `curl "https://integrate.api.nvidia.com/v1/models" > nvidia_nim_models.json` + +
+ +
+OpenRouter models + +Popular free models: + +- `open_router/arcee-ai/trinity-large-preview:free` +- `open_router/stepfun/step-3.5-flash:free` +- `open_router/deepseek/deepseek-r1-0528:free` +- `open_router/openai/gpt-oss-120b:free` + +Browse: [openrouter.ai/models](https://openrouter.ai/models) · [Free models](https://openrouter.ai/collections/free-models) + +
+ +
+LM Studio models + +Run models locally with [LM Studio](https://lmstudio.ai). Load a model in the Chat or Developer tab, then set `MODEL` to its identifier. + +Examples with native tool-use support: + +- `LiquidAI/LFM2-24B-A2B-GGUF` +- `unsloth/MiniMax-M2.5-GGUF` +- `unsloth/GLM-4.7-Flash-GGUF` +- `unsloth/Qwen3.5-35B-A3B-GGUF` + +Browse: [model.lmstudio.ai](https://model.lmstudio.ai) + +
+ +
+llama.cpp models 
+ 
+Run models locally using `llama-server`. Ensure you have a tool-capable GGUF. Set `MODEL` to any name you like (e.g. `llamacpp/my-model`); `llama-server` ignores the model name when requests arrive via `/v1/messages`. 
+ 
+See the Unsloth docs for detailed instructions and capable models: 
+[https://unsloth.ai/docs/models/qwen3.5#qwen3.5-small-0.8b-2b-4b-9b](https://unsloth.ai/docs/models/qwen3.5#qwen3.5-small-0.8b-2b-4b-9b) 
+ 
+ +--- + +## Discord Bot + +Control Claude Code remotely from Discord (or Telegram). Send tasks, watch live progress, and manage multiple concurrent sessions. + +**Capabilities:** + +- Tree-based message threading: reply to a message to fork the conversation +- Session persistence across server restarts +- Live streaming of thinking tokens, tool calls, and results +- Unlimited concurrent Claude CLI sessions (concurrency controlled by `PROVIDER_MAX_CONCURRENCY`) +- Voice notes: send voice messages; they are transcribed and processed as regular prompts +- Commands: `/stop` (cancel a task; reply to a message to stop only that task), `/clear` (reset all sessions, or reply to clear a branch), `/stats` + +### Setup + +1. **Create a Discord Bot**: Go to [Discord Developer Portal](https://discord.com/developers/applications), create an application, add a bot, and copy the token. Enable **Message Content Intent** under Bot settings. + +2. **Edit `.env`:** + +```dotenv +MESSAGING_PLATFORM="discord" +DISCORD_BOT_TOKEN="your_discord_bot_token" +ALLOWED_DISCORD_CHANNELS="123456789,987654321" +``` + +> Enable Developer Mode in Discord (Settings → Advanced), then right-click a channel and "Copy ID". Comma-separate multiple channels. If empty, no channels are allowed. + +3. **Configure the workspace** (where Claude will operate): + +```dotenv +CLAUDE_WORKSPACE="./agent_workspace" +ALLOWED_DIR="C:/Users/yourname/projects" +``` + +4. **Start the server:** + +```bash +uv run uvicorn server:app --host 0.0.0.0 --port 8082 +``` + +5. **Invite the bot** via OAuth2 URL Generator (scopes: `bot`, permissions: Read Messages, Send Messages, Manage Messages, Read Message History). + +### Telegram + +Set `MESSAGING_PLATFORM=telegram` and configure: + +```dotenv +TELEGRAM_BOT_TOKEN="123456789:ABCdefGHIjklMNOpqrSTUvwxYZ" +ALLOWED_TELEGRAM_USER_ID="your_telegram_user_id" +``` + +Get a token from [@BotFather](https://t.me/BotFather); find your user ID via [@userinfobot](https://t.me/userinfobot). 
+ +### Voice Notes + +Send voice messages on Discord or Telegram; they are transcribed and processed as regular prompts. + +| Backend | Description | API Key | +| --------------------------- | ------------------------------------------------------------------------------------------------------------- | -------------------- | +| **Local Whisper** (default) | [Hugging Face Whisper](https://huggingface.co/openai/whisper-large-v3-turbo) — free, offline, CUDA compatible | not required | +| **NVIDIA NIM** | Whisper/Parakeet models via gRPC | `NVIDIA_NIM_API_KEY` | + +**Install the voice extras:** + +```bash +# If you cloned the repo: +uv sync --extra voice_local # Local Whisper +uv sync --extra voice # NVIDIA NIM +uv sync --extra voice --extra voice_local # Both + +# If you installed as a package (no clone): +uv tool install "free-claude-code[voice_local] @ git+https://github.com/Alishahryar1/free-claude-code.git" +uv tool install "free-claude-code[voice] @ git+https://github.com/Alishahryar1/free-claude-code.git" +uv tool install "free-claude-code[voice,voice_local] @ git+https://github.com/Alishahryar1/free-claude-code.git" +``` + +Configure via `WHISPER_DEVICE` (`cpu` | `cuda` | `nvidia_nim`) and `WHISPER_MODEL`. See the [Configuration](#configuration) table for all voice variables and supported model values. 
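The backend selection implied by `WHISPER_DEVICE` can be sketched as follows. This is a hypothetical helper; the real logic lives in `messaging/transcription.py` and may differ.

```python
import os

def pick_transcriber() -> str:
    """Sketch: 'cpu'/'cuda' run local Whisper; 'nvidia_nim' goes to NIM
    over gRPC and needs an API key, per the table above."""
    device = os.getenv("WHISPER_DEVICE", "cpu")
    if device == "nvidia_nim":
        if not os.getenv("NVIDIA_NIM_API_KEY"):
            raise RuntimeError("WHISPER_DEVICE=nvidia_nim requires NVIDIA_NIM_API_KEY")
        return "nim"
    if device in ("cpu", "cuda"):
        return f"local-whisper:{os.getenv('WHISPER_MODEL', 'base')}"
    raise ValueError(f"unsupported WHISPER_DEVICE: {device!r}")
```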
+ +--- + +## Configuration + +### Core + +| Variable | Description | Default | +| -------------------- | --------------------------------------------------------------------- | ------------------------------------------------- | +| `MODEL` | Fallback model (`provider/model/name` format; invalid prefix → error) | `nvidia_nim/stepfun-ai/step-3.5-flash` | +| `MODEL_OPUS` | Model for Claude Opus requests (falls back to `MODEL`) | `nvidia_nim/z-ai/glm4.7` | +| `MODEL_SONNET` | Model for Claude Sonnet requests (falls back to `MODEL`) | `open_router/arcee-ai/trinity-large-preview:free` | +| `MODEL_HAIKU` | Model for Claude Haiku requests (falls back to `MODEL`) | `open_router/stepfun/step-3.5-flash:free` | +| `NVIDIA_NIM_API_KEY` | NVIDIA API key | required for NIM | +| `NIM_ENABLE_THINKING` | Send `chat_template_kwargs` + `reasoning_budget` on NIM requests. Enable for thinking models (kimi, nemotron); leave `false` for others (e.g. Mistral) | `false` | +| `OPENROUTER_API_KEY` | OpenRouter API key | required for OpenRouter | +| `LM_STUDIO_BASE_URL` | LM Studio server URL | `http://localhost:1234/v1` | +| `LLAMACPP_BASE_URL` | llama.cpp server URL | `http://localhost:8080/v1` | + +### Rate Limiting & Timeouts + +| Variable | Description | Default | +| -------------------------- | ----------------------------------------- | ------- | +| `PROVIDER_RATE_LIMIT` | LLM API requests per window | `40` | +| `PROVIDER_RATE_WINDOW` | Rate limit window (seconds) | `60` | +| `PROVIDER_MAX_CONCURRENCY` | Max simultaneous open provider streams | `5` | +| `HTTP_READ_TIMEOUT` | Read timeout for provider requests (s) | `120` | +| `HTTP_WRITE_TIMEOUT` | Write timeout for provider requests (s) | `10` | +| `HTTP_CONNECT_TIMEOUT` | Connect timeout for provider requests (s) | `2` | + +### Messaging & Voice + +| Variable | Description | Default | +| -------------------------- | 
------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------- | +| `MESSAGING_PLATFORM` | `discord` or `telegram` | `discord` | +| `DISCORD_BOT_TOKEN` | Discord bot token | `""` | +| `ALLOWED_DISCORD_CHANNELS` | Comma-separated channel IDs (empty = none allowed) | `""` | +| `TELEGRAM_BOT_TOKEN` | Telegram bot token | `""` | +| `ALLOWED_TELEGRAM_USER_ID` | Allowed Telegram user ID | `""` | +| `CLAUDE_WORKSPACE` | Directory where the agent operates | `./agent_workspace` | +| `ALLOWED_DIR` | Allowed directories for the agent | `""` | +| `MESSAGING_RATE_LIMIT` | Messaging messages per window | `1` | +| `MESSAGING_RATE_WINDOW` | Messaging window (seconds) | `1` | +| `VOICE_NOTE_ENABLED` | Enable voice note handling | `true` | +| `WHISPER_DEVICE` | `cpu` \| `cuda` \| `nvidia_nim` | `cpu` | +| `WHISPER_MODEL` | Whisper model (local: `tiny`/`base`/`small`/`medium`/`large-v2`/`large-v3`/`large-v3-turbo`; NIM: `openai/whisper-large-v3`, `nvidia/parakeet-ctc-1.1b-asr`, etc.) | `base` | +| `HF_TOKEN` | Hugging Face token for faster downloads (local Whisper, optional) | — | + +
+Advanced: Request optimization flags + +These are enabled by default and intercept trivial Claude Code requests locally to save API quota. + +| Variable | Description | Default | +| --------------------------------- | ------------------------------ | ------- | +| `FAST_PREFIX_DETECTION` | Enable fast prefix detection | `true` | +| `ENABLE_NETWORK_PROBE_MOCK` | Mock network probe requests | `true` | +| `ENABLE_TITLE_GENERATION_SKIP` | Skip title generation requests | `true` | +| `ENABLE_SUGGESTION_MODE_SKIP` | Skip suggestion mode requests | `true` | +| `ENABLE_FILEPATH_EXTRACTION_MOCK` | Mock filepath extraction | `true` | + +
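As an illustration of what such interception might look like, here is a hypothetical detector for one of the categories above. The real heuristics live in `api/detection.py` and are certainly more involved; this only conveys the idea of answering a trivial request locally instead of spending quota.

```python
def is_quota_probe(payload: dict) -> bool:
    """Hypothetical sketch: treat a single tiny user message with a
    minimal max_tokens budget as a quota probe to be answered locally."""
    messages = payload.get("messages", [])
    if len(messages) != 1 or messages[0].get("role") != "user":
        return False
    content = messages[0].get("content", "")
    return (
        payload.get("max_tokens", 0) <= 1
        and isinstance(content, str)
        and len(content) < 16
    )
```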
+ +See [`.env.example`](.env.example) for all supported parameters. + +--- + +## Development + +### Project Structure + +``` +free-claude-code/ +├── server.py # Entry point +├── api/ # FastAPI routes, request detection, optimization handlers +├── providers/ # BaseProvider, OpenAICompatibleProvider, NIM, OpenRouter, LM Studio, llamacpp +│ └── common/ # Shared utils (SSE builder, message converter, parsers, error mapping) +├── messaging/ # MessagingPlatform ABC + Discord/Telegram bots, session management +├── config/ # Settings, NIM config, logging +├── cli/ # CLI session and process management +└── tests/ # Pytest test suite +``` + +### Commands + +```bash +uv run ruff format # Format code +uv run ruff check # Lint +uv run ty check # Type checking +uv run pytest # Run tests +``` + +### Extending + +**Adding an OpenAI-compatible provider** (Groq, Together AI, etc.) — extend `OpenAICompatibleProvider`: + +```python +from providers.openai_compat import OpenAICompatibleProvider +from providers.base import ProviderConfig + +class MyProvider(OpenAICompatibleProvider): + def __init__(self, config: ProviderConfig): + super().__init__(config, provider_name="MYPROVIDER", + base_url="https://api.example.com/v1", api_key=config.api_key) +``` + +**Adding a fully custom provider** — extend `BaseProvider` directly and implement `stream_response()`. + +**Adding a messaging platform** — extend `MessagingPlatform` in `messaging/` and implement `start()`, `stop()`, `send_message()`, `edit_message()`, and `on_message()`. + +--- + +## Contributing + +- Report bugs or suggest features via [Issues](https://github.com/Alishahryar1/free-claude-code/issues) +- Add new LLM providers (Groq, Together AI, etc.) +- Add new messaging platforms (Slack, etc.) 
+- Improve test coverage +- Not accepting Docker integration PRs for now + +```bash +git checkout -b my-feature +uv run ruff format && uv run ruff check && uv run ty check && uv run pytest +# Open a pull request +``` + +--- + +## License + +MIT License. See [LICENSE](LICENSE) for details. + +Built with [FastAPI](https://fastapi.tiangolo.com/), [OpenAI Python SDK](https://github.com/openai/openai-python), [discord.py](https://github.com/Rapptz/discord.py), and [python-telegram-bot](https://python-telegram-bot.org/). diff --git a/Claude_Code/api/__init__.py b/api/__init__.py similarity index 100% rename from Claude_Code/api/__init__.py rename to api/__init__.py diff --git a/Claude_Code/api/app.py b/api/app.py similarity index 100% rename from Claude_Code/api/app.py rename to api/app.py diff --git a/Claude_Code/api/command_utils.py b/api/command_utils.py similarity index 100% rename from Claude_Code/api/command_utils.py rename to api/command_utils.py diff --git a/Claude_Code/api/dependencies.py b/api/dependencies.py similarity index 100% rename from Claude_Code/api/dependencies.py rename to api/dependencies.py diff --git a/Claude_Code/api/detection.py b/api/detection.py similarity index 100% rename from Claude_Code/api/detection.py rename to api/detection.py diff --git a/Claude_Code/api/models/__init__.py b/api/models/__init__.py similarity index 100% rename from Claude_Code/api/models/__init__.py rename to api/models/__init__.py diff --git a/Claude_Code/api/models/anthropic.py b/api/models/anthropic.py similarity index 100% rename from Claude_Code/api/models/anthropic.py rename to api/models/anthropic.py diff --git a/Claude_Code/api/models/responses.py b/api/models/responses.py similarity index 100% rename from Claude_Code/api/models/responses.py rename to api/models/responses.py diff --git a/Claude_Code/api/optimization_handlers.py b/api/optimization_handlers.py similarity index 100% rename from Claude_Code/api/optimization_handlers.py rename to 
api/optimization_handlers.py diff --git a/Claude_Code/api/request_utils.py b/api/request_utils.py similarity index 100% rename from Claude_Code/api/request_utils.py rename to api/request_utils.py diff --git a/Claude_Code/api/routes.py b/api/routes.py similarity index 100% rename from Claude_Code/api/routes.py rename to api/routes.py diff --git a/Claude_Code/claude-pick b/claude-pick similarity index 100% rename from Claude_Code/claude-pick rename to claude-pick diff --git a/Claude_Code/cli/__init__.py b/cli/__init__.py similarity index 100% rename from Claude_Code/cli/__init__.py rename to cli/__init__.py diff --git a/Claude_Code/cli/entrypoints.py b/cli/entrypoints.py similarity index 100% rename from Claude_Code/cli/entrypoints.py rename to cli/entrypoints.py diff --git a/Claude_Code/cli/manager.py b/cli/manager.py similarity index 100% rename from Claude_Code/cli/manager.py rename to cli/manager.py diff --git a/Claude_Code/cli/process_registry.py b/cli/process_registry.py similarity index 100% rename from Claude_Code/cli/process_registry.py rename to cli/process_registry.py diff --git a/Claude_Code/cli/session.py b/cli/session.py similarity index 100% rename from Claude_Code/cli/session.py rename to cli/session.py diff --git a/Claude_Code/config/__init__.py b/config/__init__.py similarity index 100% rename from Claude_Code/config/__init__.py rename to config/__init__.py diff --git a/Claude_Code/config/env.example b/config/env.example similarity index 100% rename from Claude_Code/config/env.example rename to config/env.example diff --git a/Claude_Code/config/logging_config.py b/config/logging_config.py similarity index 100% rename from Claude_Code/config/logging_config.py rename to config/logging_config.py diff --git a/Claude_Code/config/nim.py b/config/nim.py similarity index 100% rename from Claude_Code/config/nim.py rename to config/nim.py diff --git a/Claude_Code/config/settings.py b/config/settings.py similarity index 100% rename from 
Claude_Code/config/settings.py rename to config/settings.py diff --git a/Claude_Code/messaging/__init__.py b/messaging/__init__.py similarity index 100% rename from Claude_Code/messaging/__init__.py rename to messaging/__init__.py diff --git a/Claude_Code/messaging/commands.py b/messaging/commands.py similarity index 100% rename from Claude_Code/messaging/commands.py rename to messaging/commands.py diff --git a/Claude_Code/messaging/event_parser.py b/messaging/event_parser.py similarity index 100% rename from Claude_Code/messaging/event_parser.py rename to messaging/event_parser.py diff --git a/Claude_Code/messaging/handler.py b/messaging/handler.py similarity index 100% rename from Claude_Code/messaging/handler.py rename to messaging/handler.py diff --git a/Claude_Code/messaging/limiter.py b/messaging/limiter.py similarity index 100% rename from Claude_Code/messaging/limiter.py rename to messaging/limiter.py diff --git a/Claude_Code/messaging/models.py b/messaging/models.py similarity index 100% rename from Claude_Code/messaging/models.py rename to messaging/models.py diff --git a/Claude_Code/messaging/platforms/__init__.py b/messaging/platforms/__init__.py similarity index 100% rename from Claude_Code/messaging/platforms/__init__.py rename to messaging/platforms/__init__.py diff --git a/Claude_Code/messaging/platforms/base.py b/messaging/platforms/base.py similarity index 100% rename from Claude_Code/messaging/platforms/base.py rename to messaging/platforms/base.py diff --git a/Claude_Code/messaging/platforms/discord.py b/messaging/platforms/discord.py similarity index 100% rename from Claude_Code/messaging/platforms/discord.py rename to messaging/platforms/discord.py diff --git a/Claude_Code/messaging/platforms/factory.py b/messaging/platforms/factory.py similarity index 100% rename from Claude_Code/messaging/platforms/factory.py rename to messaging/platforms/factory.py diff --git a/Claude_Code/messaging/platforms/telegram.py b/messaging/platforms/telegram.py 
similarity index 100% rename from Claude_Code/messaging/platforms/telegram.py rename to messaging/platforms/telegram.py diff --git a/Claude_Code/messaging/rendering/__init__.py b/messaging/rendering/__init__.py similarity index 100% rename from Claude_Code/messaging/rendering/__init__.py rename to messaging/rendering/__init__.py diff --git a/Claude_Code/messaging/rendering/discord_markdown.py b/messaging/rendering/discord_markdown.py similarity index 100% rename from Claude_Code/messaging/rendering/discord_markdown.py rename to messaging/rendering/discord_markdown.py diff --git a/Claude_Code/messaging/rendering/telegram_markdown.py b/messaging/rendering/telegram_markdown.py similarity index 100% rename from Claude_Code/messaging/rendering/telegram_markdown.py rename to messaging/rendering/telegram_markdown.py diff --git a/Claude_Code/messaging/session.py b/messaging/session.py similarity index 100% rename from Claude_Code/messaging/session.py rename to messaging/session.py diff --git a/Claude_Code/messaging/transcript.py b/messaging/transcript.py similarity index 100% rename from Claude_Code/messaging/transcript.py rename to messaging/transcript.py diff --git a/Claude_Code/messaging/transcription.py b/messaging/transcription.py similarity index 100% rename from Claude_Code/messaging/transcription.py rename to messaging/transcription.py diff --git a/Claude_Code/messaging/trees/__init__.py b/messaging/trees/__init__.py similarity index 100% rename from Claude_Code/messaging/trees/__init__.py rename to messaging/trees/__init__.py diff --git a/Claude_Code/messaging/trees/data.py b/messaging/trees/data.py similarity index 100% rename from Claude_Code/messaging/trees/data.py rename to messaging/trees/data.py diff --git a/Claude_Code/messaging/trees/processor.py b/messaging/trees/processor.py similarity index 100% rename from Claude_Code/messaging/trees/processor.py rename to messaging/trees/processor.py diff --git a/Claude_Code/messaging/trees/queue_manager.py 
b/messaging/trees/queue_manager.py similarity index 100% rename from Claude_Code/messaging/trees/queue_manager.py rename to messaging/trees/queue_manager.py diff --git a/Claude_Code/messaging/trees/repository.py b/messaging/trees/repository.py similarity index 100% rename from Claude_Code/messaging/trees/repository.py rename to messaging/trees/repository.py diff --git a/Claude_Code/nvidia_nim_models.json b/nvidia_nim_models.json similarity index 100% rename from Claude_Code/nvidia_nim_models.json rename to nvidia_nim_models.json diff --git a/Claude_Code/providers/__init__.py b/providers/__init__.py similarity index 100% rename from Claude_Code/providers/__init__.py rename to providers/__init__.py diff --git a/Claude_Code/providers/base.py b/providers/base.py similarity index 100% rename from Claude_Code/providers/base.py rename to providers/base.py diff --git a/Claude_Code/providers/common/__init__.py b/providers/common/__init__.py similarity index 100% rename from Claude_Code/providers/common/__init__.py rename to providers/common/__init__.py diff --git a/Claude_Code/providers/common/error_mapping.py b/providers/common/error_mapping.py similarity index 100% rename from Claude_Code/providers/common/error_mapping.py rename to providers/common/error_mapping.py diff --git a/Claude_Code/providers/common/heuristic_tool_parser.py b/providers/common/heuristic_tool_parser.py similarity index 100% rename from Claude_Code/providers/common/heuristic_tool_parser.py rename to providers/common/heuristic_tool_parser.py diff --git a/Claude_Code/providers/common/message_converter.py b/providers/common/message_converter.py similarity index 100% rename from Claude_Code/providers/common/message_converter.py rename to providers/common/message_converter.py diff --git a/Claude_Code/providers/common/sse_builder.py b/providers/common/sse_builder.py similarity index 100% rename from Claude_Code/providers/common/sse_builder.py rename to providers/common/sse_builder.py diff --git 
a/Claude_Code/providers/common/text.py b/providers/common/text.py similarity index 100% rename from Claude_Code/providers/common/text.py rename to providers/common/text.py diff --git a/Claude_Code/providers/common/think_parser.py b/providers/common/think_parser.py similarity index 100% rename from Claude_Code/providers/common/think_parser.py rename to providers/common/think_parser.py diff --git a/Claude_Code/providers/common/utils.py b/providers/common/utils.py similarity index 100% rename from Claude_Code/providers/common/utils.py rename to providers/common/utils.py diff --git a/Claude_Code/providers/exceptions.py b/providers/exceptions.py similarity index 100% rename from Claude_Code/providers/exceptions.py rename to providers/exceptions.py diff --git a/Claude_Code/providers/llamacpp/__init__.py b/providers/llamacpp/__init__.py similarity index 100% rename from Claude_Code/providers/llamacpp/__init__.py rename to providers/llamacpp/__init__.py diff --git a/Claude_Code/providers/llamacpp/client.py b/providers/llamacpp/client.py similarity index 100% rename from Claude_Code/providers/llamacpp/client.py rename to providers/llamacpp/client.py diff --git a/Claude_Code/providers/lmstudio/__init__.py b/providers/lmstudio/__init__.py similarity index 100% rename from Claude_Code/providers/lmstudio/__init__.py rename to providers/lmstudio/__init__.py diff --git a/Claude_Code/providers/lmstudio/client.py b/providers/lmstudio/client.py similarity index 100% rename from Claude_Code/providers/lmstudio/client.py rename to providers/lmstudio/client.py diff --git a/Claude_Code/providers/nvidia_nim/__init__.py b/providers/nvidia_nim/__init__.py similarity index 100% rename from Claude_Code/providers/nvidia_nim/__init__.py rename to providers/nvidia_nim/__init__.py diff --git a/Claude_Code/providers/nvidia_nim/client.py b/providers/nvidia_nim/client.py similarity index 100% rename from Claude_Code/providers/nvidia_nim/client.py rename to providers/nvidia_nim/client.py diff --git 
a/Claude_Code/providers/nvidia_nim/request.py b/providers/nvidia_nim/request.py similarity index 100% rename from Claude_Code/providers/nvidia_nim/request.py rename to providers/nvidia_nim/request.py diff --git a/Claude_Code/providers/open_router/__init__.py b/providers/open_router/__init__.py similarity index 100% rename from Claude_Code/providers/open_router/__init__.py rename to providers/open_router/__init__.py diff --git a/Claude_Code/providers/open_router/client.py b/providers/open_router/client.py similarity index 100% rename from Claude_Code/providers/open_router/client.py rename to providers/open_router/client.py diff --git a/Claude_Code/providers/open_router/request.py b/providers/open_router/request.py similarity index 100% rename from Claude_Code/providers/open_router/request.py rename to providers/open_router/request.py diff --git a/Claude_Code/providers/openai_compat.py b/providers/openai_compat.py similarity index 100% rename from Claude_Code/providers/openai_compat.py rename to providers/openai_compat.py diff --git a/Claude_Code/providers/rate_limit.py b/providers/rate_limit.py similarity index 100% rename from Claude_Code/providers/rate_limit.py rename to providers/rate_limit.py diff --git a/Claude_Code/pyproject.toml b/pyproject.toml similarity index 100% rename from Claude_Code/pyproject.toml rename to pyproject.toml diff --git a/Claude_Code/requirements.txt b/requirements.txt similarity index 100% rename from Claude_Code/requirements.txt rename to requirements.txt diff --git a/Claude_Code/server.py b/server.py similarity index 100% rename from Claude_Code/server.py rename to server.py diff --git a/Claude_Code/tests/api/test_api.py b/tests/api/test_api.py similarity index 100% rename from Claude_Code/tests/api/test_api.py rename to tests/api/test_api.py diff --git a/Claude_Code/tests/api/test_app_lifespan_and_errors.py b/tests/api/test_app_lifespan_and_errors.py similarity index 100% rename from 
Claude_Code/tests/api/test_app_lifespan_and_errors.py
rename to tests/api/test_app_lifespan_and_errors.py
diff --git a/Claude_Code/tests/api/test_auth.py b/tests/api/test_auth.py
similarity index 100%
rename from Claude_Code/tests/api/test_auth.py
rename to tests/api/test_auth.py
diff --git a/Claude_Code/tests/api/test_dependencies.py b/tests/api/test_dependencies.py
similarity index 100%
rename from Claude_Code/tests/api/test_dependencies.py
rename to tests/api/test_dependencies.py
diff --git a/Claude_Code/tests/api/test_detection.py b/tests/api/test_detection.py
similarity index 100%
rename from Claude_Code/tests/api/test_detection.py
rename to tests/api/test_detection.py
diff --git a/Claude_Code/tests/api/test_models_validators.py b/tests/api/test_models_validators.py
similarity index 100%
rename from Claude_Code/tests/api/test_models_validators.py
rename to tests/api/test_models_validators.py
diff --git a/Claude_Code/tests/api/test_optimization_handlers.py b/tests/api/test_optimization_handlers.py
similarity index 100%
rename from Claude_Code/tests/api/test_optimization_handlers.py
rename to tests/api/test_optimization_handlers.py
diff --git a/Claude_Code/tests/api/test_request_utils.py b/tests/api/test_request_utils.py
similarity index 100%
rename from Claude_Code/tests/api/test_request_utils.py
rename to tests/api/test_request_utils.py
diff --git a/Claude_Code/tests/api/test_request_utils_filepaths_and_suggestions.py b/tests/api/test_request_utils_filepaths_and_suggestions.py
similarity index 100%
rename from Claude_Code/tests/api/test_request_utils_filepaths_and_suggestions.py
rename to tests/api/test_request_utils_filepaths_and_suggestions.py
diff --git a/Claude_Code/tests/api/test_response_models.py b/tests/api/test_response_models.py
similarity index 100%
rename from Claude_Code/tests/api/test_response_models.py
rename to tests/api/test_response_models.py
diff --git a/Claude_Code/tests/api/test_routes_optimizations.py b/tests/api/test_routes_optimizations.py
similarity index 100%
rename from Claude_Code/tests/api/test_routes_optimizations.py
rename to tests/api/test_routes_optimizations.py
diff --git a/Claude_Code/tests/api/test_server_module.py b/tests/api/test_server_module.py
similarity index 100%
rename from Claude_Code/tests/api/test_server_module.py
rename to tests/api/test_server_module.py
diff --git a/Claude_Code/tests/cli/test_cli.py b/tests/cli/test_cli.py
similarity index 100%
rename from Claude_Code/tests/cli/test_cli.py
rename to tests/cli/test_cli.py
diff --git a/Claude_Code/tests/cli/test_cli_manager_edge_cases.py b/tests/cli/test_cli_manager_edge_cases.py
similarity index 100%
rename from Claude_Code/tests/cli/test_cli_manager_edge_cases.py
rename to tests/cli/test_cli_manager_edge_cases.py
diff --git a/Claude_Code/tests/cli/test_entrypoints.py b/tests/cli/test_entrypoints.py
similarity index 100%
rename from Claude_Code/tests/cli/test_entrypoints.py
rename to tests/cli/test_entrypoints.py
diff --git a/Claude_Code/tests/cli/test_process_registry.py b/tests/cli/test_process_registry.py
similarity index 100%
rename from Claude_Code/tests/cli/test_process_registry.py
rename to tests/cli/test_process_registry.py
diff --git a/Claude_Code/tests/config/test_config.py b/tests/config/test_config.py
similarity index 100%
rename from Claude_Code/tests/config/test_config.py
rename to tests/config/test_config.py
diff --git a/Claude_Code/tests/config/test_logging_config.py b/tests/config/test_logging_config.py
similarity index 100%
rename from Claude_Code/tests/config/test_logging_config.py
rename to tests/config/test_logging_config.py
diff --git a/Claude_Code/tests/conftest.py b/tests/conftest.py
similarity index 100%
rename from Claude_Code/tests/conftest.py
rename to tests/conftest.py
diff --git a/Claude_Code/tests/messaging/test_discord_markdown.py b/tests/messaging/test_discord_markdown.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_discord_markdown.py
rename to tests/messaging/test_discord_markdown.py
diff --git a/Claude_Code/tests/messaging/test_discord_platform.py b/tests/messaging/test_discord_platform.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_discord_platform.py
rename to tests/messaging/test_discord_platform.py
diff --git a/Claude_Code/tests/messaging/test_event_parser.py b/tests/messaging/test_event_parser.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_event_parser.py
rename to tests/messaging/test_event_parser.py
diff --git a/Claude_Code/tests/messaging/test_extract_text.py b/tests/messaging/test_extract_text.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_extract_text.py
rename to tests/messaging/test_extract_text.py
diff --git a/Claude_Code/tests/messaging/test_handler.py b/tests/messaging/test_handler.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_handler.py
rename to tests/messaging/test_handler.py
diff --git a/Claude_Code/tests/messaging/test_handler_context_isolation.py b/tests/messaging/test_handler_context_isolation.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_handler_context_isolation.py
rename to tests/messaging/test_handler_context_isolation.py
diff --git a/Claude_Code/tests/messaging/test_handler_format.py b/tests/messaging/test_handler_format.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_handler_format.py
rename to tests/messaging/test_handler_format.py
diff --git a/Claude_Code/tests/messaging/test_handler_integration.py b/tests/messaging/test_handler_integration.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_handler_integration.py
rename to tests/messaging/test_handler_integration.py
diff --git a/Claude_Code/tests/messaging/test_handler_markdown_and_status_edges.py b/tests/messaging/test_handler_markdown_and_status_edges.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_handler_markdown_and_status_edges.py
rename to tests/messaging/test_handler_markdown_and_status_edges.py
diff --git a/Claude_Code/tests/messaging/test_limiter.py b/tests/messaging/test_limiter.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_limiter.py
rename to tests/messaging/test_limiter.py
diff --git a/Claude_Code/tests/messaging/test_messaging.py b/tests/messaging/test_messaging.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_messaging.py
rename to tests/messaging/test_messaging.py
diff --git a/Claude_Code/tests/messaging/test_messaging_factory.py b/tests/messaging/test_messaging_factory.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_messaging_factory.py
rename to tests/messaging/test_messaging_factory.py
diff --git a/Claude_Code/tests/messaging/test_reliability.py b/tests/messaging/test_reliability.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_reliability.py
rename to tests/messaging/test_reliability.py
diff --git a/Claude_Code/tests/messaging/test_restart_reply_restore.py b/tests/messaging/test_restart_reply_restore.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_restart_reply_restore.py
rename to tests/messaging/test_restart_reply_restore.py
diff --git a/Claude_Code/tests/messaging/test_robust_formatting.py b/tests/messaging/test_robust_formatting.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_robust_formatting.py
rename to tests/messaging/test_robust_formatting.py
diff --git a/Claude_Code/tests/messaging/test_session_store_edge_cases.py b/tests/messaging/test_session_store_edge_cases.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_session_store_edge_cases.py
rename to tests/messaging/test_session_store_edge_cases.py
diff --git a/Claude_Code/tests/messaging/test_telegram.py b/tests/messaging/test_telegram.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_telegram.py
rename to tests/messaging/test_telegram.py
diff --git a/Claude_Code/tests/messaging/test_telegram_edge_cases.py b/tests/messaging/test_telegram_edge_cases.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_telegram_edge_cases.py
rename to tests/messaging/test_telegram_edge_cases.py
diff --git a/Claude_Code/tests/messaging/test_transcript.py b/tests/messaging/test_transcript.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_transcript.py
rename to tests/messaging/test_transcript.py
diff --git a/Claude_Code/tests/messaging/test_transcription.py b/tests/messaging/test_transcription.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_transcription.py
rename to tests/messaging/test_transcription.py
diff --git a/Claude_Code/tests/messaging/test_tree_concurrency.py b/tests/messaging/test_tree_concurrency.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_tree_concurrency.py
rename to tests/messaging/test_tree_concurrency.py
diff --git a/Claude_Code/tests/messaging/test_tree_processor.py b/tests/messaging/test_tree_processor.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_tree_processor.py
rename to tests/messaging/test_tree_processor.py
diff --git a/Claude_Code/tests/messaging/test_tree_queue.py b/tests/messaging/test_tree_queue.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_tree_queue.py
rename to tests/messaging/test_tree_queue.py
diff --git a/Claude_Code/tests/messaging/test_tree_repository.py b/tests/messaging/test_tree_repository.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_tree_repository.py
rename to tests/messaging/test_tree_repository.py
diff --git a/Claude_Code/tests/messaging/test_voice_handlers.py b/tests/messaging/test_voice_handlers.py
similarity index 100%
rename from Claude_Code/tests/messaging/test_voice_handlers.py
rename to tests/messaging/test_voice_handlers.py
diff --git a/Claude_Code/tests/providers/test_converter.py b/tests/providers/test_converter.py
similarity index 100%
rename from Claude_Code/tests/providers/test_converter.py
rename to tests/providers/test_converter.py
diff --git a/Claude_Code/tests/providers/test_error_mapping.py b/tests/providers/test_error_mapping.py
similarity index 100%
rename from Claude_Code/tests/providers/test_error_mapping.py
rename to tests/providers/test_error_mapping.py
diff --git a/Claude_Code/tests/providers/test_llamacpp.py b/tests/providers/test_llamacpp.py
similarity index 100%
rename from Claude_Code/tests/providers/test_llamacpp.py
rename to tests/providers/test_llamacpp.py
diff --git a/Claude_Code/tests/providers/test_lmstudio.py b/tests/providers/test_lmstudio.py
similarity index 100%
rename from Claude_Code/tests/providers/test_lmstudio.py
rename to tests/providers/test_lmstudio.py
diff --git a/Claude_Code/tests/providers/test_nvidia_nim.py b/tests/providers/test_nvidia_nim.py
similarity index 100%
rename from Claude_Code/tests/providers/test_nvidia_nim.py
rename to tests/providers/test_nvidia_nim.py
diff --git a/Claude_Code/tests/providers/test_nvidia_nim_request.py b/tests/providers/test_nvidia_nim_request.py
similarity index 100%
rename from Claude_Code/tests/providers/test_nvidia_nim_request.py
rename to tests/providers/test_nvidia_nim_request.py
diff --git a/Claude_Code/tests/providers/test_open_router.py b/tests/providers/test_open_router.py
similarity index 100%
rename from Claude_Code/tests/providers/test_open_router.py
rename to tests/providers/test_open_router.py
diff --git a/Claude_Code/tests/providers/test_parsers.py b/tests/providers/test_parsers.py
similarity index 100%
rename from Claude_Code/tests/providers/test_parsers.py
rename to tests/providers/test_parsers.py
diff --git a/Claude_Code/tests/providers/test_provider_rate_limit.py b/tests/providers/test_provider_rate_limit.py
similarity index 100%
rename from Claude_Code/tests/providers/test_provider_rate_limit.py
rename to tests/providers/test_provider_rate_limit.py
diff --git a/Claude_Code/tests/providers/test_sse_builder.py b/tests/providers/test_sse_builder.py
similarity index 100%
rename from Claude_Code/tests/providers/test_sse_builder.py
rename to tests/providers/test_sse_builder.py
diff --git a/Claude_Code/tests/providers/test_streaming_errors.py b/tests/providers/test_streaming_errors.py
similarity index 100%
rename from Claude_Code/tests/providers/test_streaming_errors.py
rename to tests/providers/test_streaming_errors.py
diff --git a/Claude_Code/tests/providers/test_subagent_interception.py b/tests/providers/test_subagent_interception.py
similarity index 100%
rename from Claude_Code/tests/providers/test_subagent_interception.py
rename to tests/providers/test_subagent_interception.py
diff --git a/Claude_Code/uv.lock b/uv.lock
similarity index 100%
rename from Claude_Code/uv.lock
rename to uv.lock