Commit bf177ff · Jainish1808 · Parent: 00a2010

Move project files to repository root for Hugging Face Space
- Claude_Code/.dockerignore → .dockerignore +0 -0
- Claude_Code/.gitignore → .gitignore +0 -0
- Claude_Code/.python-version → .python-version +0 -0
- Claude_Code/AGENTS.md → AGENTS.md +0 -0
- Claude_Code/CLAUDE.md → CLAUDE.md +0 -0
- Claude_Code/.gitattributes +0 -35
- Claude_Code/README.md +0 -588
- Claude_Code/Dockerfile → Dockerfile +0 -0
- README.md +582 -5
- {Claude_Code/api → api}/__init__.py +0 -0
- {Claude_Code/api → api}/app.py +0 -0
- {Claude_Code/api → api}/command_utils.py +0 -0
- {Claude_Code/api → api}/dependencies.py +0 -0
- {Claude_Code/api → api}/detection.py +0 -0
- {Claude_Code/api → api}/models/__init__.py +0 -0
- {Claude_Code/api → api}/models/anthropic.py +0 -0
- {Claude_Code/api → api}/models/responses.py +0 -0
- {Claude_Code/api → api}/optimization_handlers.py +0 -0
- {Claude_Code/api → api}/request_utils.py +0 -0
- {Claude_Code/api → api}/routes.py +0 -0
- Claude_Code/claude-pick → claude-pick +0 -0
- {Claude_Code/cli → cli}/__init__.py +0 -0
- {Claude_Code/cli → cli}/entrypoints.py +0 -0
- {Claude_Code/cli → cli}/manager.py +0 -0
- {Claude_Code/cli → cli}/process_registry.py +0 -0
- {Claude_Code/cli → cli}/session.py +0 -0
- {Claude_Code/config → config}/__init__.py +0 -0
- {Claude_Code/config → config}/env.example +0 -0
- {Claude_Code/config → config}/logging_config.py +0 -0
- {Claude_Code/config → config}/nim.py +0 -0
- {Claude_Code/config → config}/settings.py +0 -0
- {Claude_Code/messaging → messaging}/__init__.py +0 -0
- {Claude_Code/messaging → messaging}/commands.py +0 -0
- {Claude_Code/messaging → messaging}/event_parser.py +0 -0
- {Claude_Code/messaging → messaging}/handler.py +0 -0
- {Claude_Code/messaging → messaging}/limiter.py +0 -0
- {Claude_Code/messaging → messaging}/models.py +0 -0
- {Claude_Code/messaging → messaging}/platforms/__init__.py +0 -0
- {Claude_Code/messaging → messaging}/platforms/base.py +0 -0
- {Claude_Code/messaging → messaging}/platforms/discord.py +0 -0
- {Claude_Code/messaging → messaging}/platforms/factory.py +0 -0
- {Claude_Code/messaging → messaging}/platforms/telegram.py +0 -0
- {Claude_Code/messaging → messaging}/rendering/__init__.py +0 -0
- {Claude_Code/messaging → messaging}/rendering/discord_markdown.py +0 -0
- {Claude_Code/messaging → messaging}/rendering/telegram_markdown.py +0 -0
- {Claude_Code/messaging → messaging}/session.py +0 -0
- {Claude_Code/messaging → messaging}/transcript.py +0 -0
- {Claude_Code/messaging → messaging}/transcription.py +0 -0
- {Claude_Code/messaging → messaging}/trees/__init__.py +0 -0
- {Claude_Code/messaging → messaging}/trees/data.py +0 -0
Claude_Code/.gitattributes
DELETED

@@ -1,35 +0,0 @@
-*.7z filter=lfs diff=lfs merge=lfs -text
-*.arrow filter=lfs diff=lfs merge=lfs -text
-*.bin filter=lfs diff=lfs merge=lfs -text
-*.bz2 filter=lfs diff=lfs merge=lfs -text
-*.ckpt filter=lfs diff=lfs merge=lfs -text
-*.ftz filter=lfs diff=lfs merge=lfs -text
-*.gz filter=lfs diff=lfs merge=lfs -text
-*.h5 filter=lfs diff=lfs merge=lfs -text
-*.joblib filter=lfs diff=lfs merge=lfs -text
-*.lfs.* filter=lfs diff=lfs merge=lfs -text
-*.mlmodel filter=lfs diff=lfs merge=lfs -text
-*.model filter=lfs diff=lfs merge=lfs -text
-*.msgpack filter=lfs diff=lfs merge=lfs -text
-*.npy filter=lfs diff=lfs merge=lfs -text
-*.npz filter=lfs diff=lfs merge=lfs -text
-*.onnx filter=lfs diff=lfs merge=lfs -text
-*.ot filter=lfs diff=lfs merge=lfs -text
-*.parquet filter=lfs diff=lfs merge=lfs -text
-*.pb filter=lfs diff=lfs merge=lfs -text
-*.pickle filter=lfs diff=lfs merge=lfs -text
-*.pkl filter=lfs diff=lfs merge=lfs -text
-*.pt filter=lfs diff=lfs merge=lfs -text
-*.pth filter=lfs diff=lfs merge=lfs -text
-*.rar filter=lfs diff=lfs merge=lfs -text
-*.safetensors filter=lfs diff=lfs merge=lfs -text
-saved_model/**/* filter=lfs diff=lfs merge=lfs -text
-*.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tar filter=lfs diff=lfs merge=lfs -text
-*.tflite filter=lfs diff=lfs merge=lfs -text
-*.tgz filter=lfs diff=lfs merge=lfs -text
-*.wasm filter=lfs diff=lfs merge=lfs -text
-*.xz filter=lfs diff=lfs merge=lfs -text
-*.zip filter=lfs diff=lfs merge=lfs -text
-*.zst filter=lfs diff=lfs merge=lfs -text
-*tfevents* filter=lfs diff=lfs merge=lfs -text
Claude_Code/README.md
DELETED

@@ -1,588 +0,0 @@

---
title: Claude Code
emoji: 🤖
colorFrom: indigo
colorTo: blue
sdk: docker
app_port: 7860
pinned: false
---

<div align="center">

# 🤖 Free Claude Code

### Use Claude Code CLI & VSCode for free. No Anthropic API key required.

[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
[](https://github.com/astral-sh/uv)
[](https://github.com/Alishahryar1/free-claude-code/actions/workflows/tests.yml)
[](https://pypi.org/project/ty/)
[](https://github.com/astral-sh/ruff)
[](https://github.com/Delgan/loguru)

A lightweight proxy that routes Claude Code's Anthropic API calls to **NVIDIA NIM** (40 req/min free), **OpenRouter** (hundreds of models), **LM Studio** (fully local), or **llama.cpp** (local with Anthropic endpoints).

[Quick Start](#quick-start) · [Providers](#providers) · [Discord Bot](#discord-bot) · [Configuration](#configuration) · [Development](#development) · [Contributing](#contributing)

---

</div>

<div align="center">
<img src="pic.png" alt="Free Claude Code in action" width="700">
<p><em>Claude Code running via NVIDIA NIM, completely free</em></p>
</div>
## Features

| Feature                    | Description                                                                                      |
| -------------------------- | ------------------------------------------------------------------------------------------------ |
| **Zero Cost**              | 40 req/min free on NVIDIA NIM. Free models on OpenRouter. Fully local with LM Studio             |
| **Drop-in Replacement**    | Set 2 env vars. No modifications to Claude Code CLI or VSCode extension needed                   |
| **4 Providers**            | NVIDIA NIM, OpenRouter (hundreds of models), LM Studio (local), llama.cpp (`llama-server`)       |
| **Per-Model Mapping**      | Route Opus / Sonnet / Haiku to different models and providers. Mix providers freely              |
| **Thinking Token Support** | Parses `<think>` tags and `reasoning_content` into native Claude thinking blocks                 |
| **Heuristic Tool Parser**  | Models outputting tool calls as text are auto-parsed into structured tool use                    |
| **Request Optimization**   | 5 categories of trivial API calls intercepted locally, saving quota and latency                  |
| **Smart Rate Limiting**    | Proactive rolling-window throttle + reactive 429 exponential backoff + optional concurrency cap  |
| **Discord / Telegram Bot** | Remote autonomous coding with tree-based threading, session persistence, and live progress       |
| **Subagent Control**       | Task tool interception forces `run_in_background=False`. No runaway subagents                    |
| **Extensible**             | Clean `BaseProvider` and `MessagingPlatform` ABCs. Add new providers or platforms easily         |

## Quick Start

### Prerequisites

1. Get an API key (or use LM Studio / llama.cpp locally):
   - **NVIDIA NIM**: [build.nvidia.com/settings/api-keys](https://build.nvidia.com/settings/api-keys)
   - **OpenRouter**: [openrouter.ai/keys](https://openrouter.ai/keys)
   - **LM Studio**: No API key needed. Run locally with [LM Studio](https://lmstudio.ai)
   - **llama.cpp**: No API key needed. Run `llama-server` locally.
2. Install [Claude Code](https://github.com/anthropics/claude-code)
3. Install [uv](https://github.com/astral-sh/uv) (or run `uv self update` if it is already installed)

### Clone & Configure

```bash
git clone https://github.com/Alishahryar1/free-claude-code.git
cd free-claude-code
cp .env.example .env
```

Choose your provider and edit `.env`:

<details>
<summary><b>NVIDIA NIM</b> (40 req/min free, recommended)</summary>

```dotenv
NVIDIA_NIM_API_KEY="nvapi-your-key-here"

MODEL_OPUS="nvidia_nim/z-ai/glm4.7"
MODEL_SONNET="nvidia_nim/moonshotai/kimi-k2-thinking"
MODEL_HAIKU="nvidia_nim/stepfun-ai/step-3.5-flash"
MODEL="nvidia_nim/z-ai/glm4.7"  # fallback

# Enable for thinking models (kimi, nemotron). Leave false for others (e.g. Mistral).
NIM_ENABLE_THINKING=true
```

</details>

<details>
<summary><b>OpenRouter</b> (hundreds of models)</summary>

```dotenv
OPENROUTER_API_KEY="sk-or-your-key-here"

MODEL_OPUS="open_router/deepseek/deepseek-r1-0528:free"
MODEL_SONNET="open_router/openai/gpt-oss-120b:free"
MODEL_HAIKU="open_router/stepfun/step-3.5-flash:free"
MODEL="open_router/stepfun/step-3.5-flash:free"  # fallback
```

</details>

<details>
<summary><b>LM Studio</b> (fully local, no API key)</summary>

```dotenv
MODEL_OPUS="lmstudio/unsloth/MiniMax-M2.5-GGUF"
MODEL_SONNET="lmstudio/unsloth/Qwen3.5-35B-A3B-GGUF"
MODEL_HAIKU="lmstudio/unsloth/GLM-4.7-Flash-GGUF"
MODEL="lmstudio/unsloth/GLM-4.7-Flash-GGUF"  # fallback
```

</details>

<details>
<summary><b>llama.cpp</b> (fully local, no API key)</summary>

```dotenv
LLAMACPP_BASE_URL="http://localhost:8080/v1"

MODEL_OPUS="llamacpp/local-model"
MODEL_SONNET="llamacpp/local-model"
MODEL_HAIKU="llamacpp/local-model"
MODEL="llamacpp/local-model"
```

</details>

<details>
<summary><b>Mix providers</b></summary>

Each `MODEL_*` variable can use a different provider. `MODEL` is the fallback for unrecognized Claude models.

```dotenv
NVIDIA_NIM_API_KEY="nvapi-your-key-here"
OPENROUTER_API_KEY="sk-or-your-key-here"

MODEL_OPUS="nvidia_nim/moonshotai/kimi-k2.5"
MODEL_SONNET="open_router/deepseek/deepseek-r1-0528:free"
MODEL_HAIKU="lmstudio/unsloth/GLM-4.7-Flash-GGUF"
MODEL="nvidia_nim/z-ai/glm4.7"  # fallback
```

</details>

<details>
<summary><b>Optional Authentication</b> (restrict access to your proxy)</summary>

Set `ANTHROPIC_AUTH_TOKEN` in `.env` to require clients to authenticate:

```dotenv
ANTHROPIC_AUTH_TOKEN="your-secret-token-here"
```

**How it works:**

- If `ANTHROPIC_AUTH_TOKEN` is empty (default), no authentication is required (backward compatible)
- If set, clients must provide the same token via the `ANTHROPIC_AUTH_TOKEN` header
- For private Hugging Face Spaces, query auth is supported as `?psw=token`, `?psw:token`, or `?psw%3Atoken`
- The `claude-pick` script automatically reads the token from `.env` if configured

**Example usage:**

```bash
# With authentication
ANTHROPIC_AUTH_TOKEN="your-secret-token-here" \
ANTHROPIC_BASE_URL="http://localhost:8082" claude

# Hugging Face private Space (query auth in URL)
ANTHROPIC_API_KEY="Jack@188" \
ANTHROPIC_BASE_URL="https://<your-space>.hf.space?psw:Jack%40188" claude

# claude-pick automatically uses the configured token
claude-pick
```

Note: `HEAD /` returning `405 Method Not Allowed` means auth has already passed; only `GET /` is implemented.

Use this feature if:

- You run the proxy on a public network
- You share the server with others but want to restrict access
- You want an additional layer of security

</details>

### Run It

**Terminal 1:** Start the proxy server:

```bash
uv run uvicorn server:app --host 0.0.0.0 --port 8082
```

**Terminal 2:** Run Claude Code:

#### PowerShell

```powershell
$env:ANTHROPIC_BASE_URL="http://localhost:8082?psw:Jack%40188"; $env:ANTHROPIC_API_KEY="Jack@188"; claude
```

#### Bash

```bash
export ANTHROPIC_BASE_URL="http://localhost:8082?psw:Jack%40188"; export ANTHROPIC_API_KEY="Jack@188"; claude
```

That's it! Claude Code now uses your configured provider for free.

### One-Click Factory Reset (Space Admin)

Open the admin page:

- Local: `http://localhost:8082/admin/factory-reset?psw:Jack%40188`
- Space: `https://<your-space>.hf.space/admin/factory-reset?psw:Jack%40188`

Click **Factory Restart** to clear the runtime cache and workspace data and restart the server.

<details>
<summary><b>VSCode Extension Setup</b></summary>

1. Start the proxy server (same as above).
2. Open Settings (`Ctrl + ,`) and search for `claude-code.environmentVariables`.
3. Click **Edit in settings.json** and add:

```json
"claudeCode.environmentVariables": [
  { "name": "ANTHROPIC_BASE_URL", "value": "http://localhost:8082" },
  { "name": "ANTHROPIC_AUTH_TOKEN", "value": "freecc" }
]
```

4. Reload extensions.
5. **If you see the login screen**: Click **Anthropic Console**, then authorize. The extension will start working. You may be redirected to buy credits in the browser; ignore it, the extension already works.

To switch back to Anthropic models, comment out the added block and reload extensions.

</details>

<details>
<summary><b>Multi-Model Support (Model Picker)</b></summary>

`claude-pick` is an interactive model selector that lets you choose any model from your active provider each time you launch Claude, without editing `MODEL` in `.env`.

https://github.com/user-attachments/assets/9a33c316-90f8-4418-9650-97e7d33ad645

**1. Install [fzf](https://github.com/junegunn/fzf)**:

```bash
brew install fzf  # macOS/Linux
```

**2. Add the alias to `~/.zshrc` or `~/.bashrc`:**

```bash
alias claude-pick="/absolute/path/to/free-claude-code/claude-pick"
```

Then reload your shell (`source ~/.zshrc` or `source ~/.bashrc`) and run `claude-pick`.

**Or use a fixed model alias** (no picker needed):

```bash
alias claude-kimi='ANTHROPIC_BASE_URL="http://localhost:8082" ANTHROPIC_AUTH_TOKEN="freecc:moonshotai/kimi-k2.5" claude'
```

</details>

### Install as a Package (no clone needed)

```bash
uv tool install git+https://github.com/Alishahryar1/free-claude-code.git
fcc-init  # creates ~/.config/free-claude-code/.env from the built-in template
```

Edit `~/.config/free-claude-code/.env` with your API keys and model names, then:

```bash
free-claude-code  # starts the server
```

> To update: `uv tool upgrade free-claude-code`

---

## How It Works

```
┌─────────────────┐         ┌──────────────────────┐         ┌──────────────────┐
│   Claude Code   │────────>│   Free Claude Code   │────────>│   LLM Provider   │
│   CLI / VSCode  │<────────│    Proxy (:8082)     │<────────│   NIM / OR / LMS │
└─────────────────┘         └──────────────────────┘         └──────────────────┘
        Anthropic API                          OpenAI-compatible
        format (SSE)                           format (SSE)
```

- **Transparent proxy**: Claude Code sends standard Anthropic API requests; the proxy forwards them to your configured provider
- **Per-model routing**: Opus / Sonnet / Haiku requests resolve to their model-specific backend, with `MODEL` as the fallback
- **Request optimization**: 5 categories of trivial requests (quota probes, title generation, prefix detection, suggestions, filepath extraction) are intercepted and answered locally without using API quota
- **Format translation**: requests are translated from Anthropic format to the provider's OpenAI-compatible format and streamed back
- **Thinking tokens**: `<think>` tags and `reasoning_content` fields are converted into native Claude thinking blocks

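The thinking-token step in the last bullet can be sketched as splitting `<think>` spans out of the raw completion text. This is a minimal illustration only: `split_thinking` is a hypothetical name, and the proxy additionally handles a provider-supplied `reasoning_content` field.

```python
import re

# Non-greedy match so multiple <think> spans are handled independently.
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)


def split_thinking(text: str) -> tuple[str, str]:
    """Separate <think>...</think> spans from the visible answer.

    Returns (thinking, answer); either part may be empty.
    """
    thinking = "\n".join(m.strip() for m in THINK_RE.findall(text))
    answer = THINK_RE.sub("", text).strip()
    return thinking, answer
```

In the real streaming path the same separation has to happen incrementally across SSE chunks, which is harder than this whole-string sketch suggests.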
---

## Providers

| Provider       | Cost         | Rate Limit | Best For                             |
| -------------- | ------------ | ---------- | ------------------------------------ |
| **NVIDIA NIM** | Free         | 40 req/min | Daily driver, generous free tier     |
| **OpenRouter** | Free / Paid  | Varies     | Model variety, fallback options      |
| **LM Studio**  | Free (local) | Unlimited  | Privacy, offline use, no rate limits |
| **llama.cpp**  | Free (local) | Unlimited  | Lightweight local inference engine   |

Models use a prefix format: `provider_prefix/model/name`. An invalid prefix causes an error.

| Provider   | `MODEL` prefix    | API Key Variable     | Default Base URL              |
| ---------- | ----------------- | -------------------- | ----------------------------- |
| NVIDIA NIM | `nvidia_nim/...`  | `NVIDIA_NIM_API_KEY` | `integrate.api.nvidia.com/v1` |
| OpenRouter | `open_router/...` | `OPENROUTER_API_KEY` | `openrouter.ai/api/v1`        |
| LM Studio  | `lmstudio/...`    | (none)               | `localhost:1234/v1`           |
| llama.cpp  | `llamacpp/...`    | (none)               | `localhost:8080/v1`           |

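The prefix routing above amounts to splitting on the first `/` and validating the provider name. A minimal sketch, assuming the four prefixes from the table (`resolve_provider` is a hypothetical name, not the proxy's API):

```python
KNOWN_PREFIXES = {"nvidia_nim", "open_router", "lmstudio", "llamacpp"}


def resolve_provider(model: str) -> tuple[str, str]:
    """Split `provider_prefix/model/name` into (provider, model id).

    An unknown or missing prefix raises, mirroring the documented
    "invalid prefix causes an error" behavior.
    """
    prefix, sep, rest = model.partition("/")
    if not sep or prefix not in KNOWN_PREFIXES:
        raise ValueError(f"unknown provider prefix in {model!r}")
    return prefix, rest
```

Note that only the first `/` separates the provider; the remainder (e.g. `z-ai/glm4.7`) is passed to the provider verbatim.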
<details>
<summary><b>NVIDIA NIM models</b></summary>

Popular models (full list in [`nvidia_nim_models.json`](nvidia_nim_models.json)):

- `nvidia_nim/minimaxai/minimax-m2.5`
- `nvidia_nim/qwen/qwen3.5-397b-a17b`
- `nvidia_nim/z-ai/glm5`
- `nvidia_nim/moonshotai/kimi-k2.5`
- `nvidia_nim/stepfun-ai/step-3.5-flash`

Browse: [build.nvidia.com](https://build.nvidia.com/explore/discover) · Update the list: `curl "https://integrate.api.nvidia.com/v1/models" > nvidia_nim_models.json`

</details>

<details>
<summary><b>OpenRouter models</b></summary>

Popular free models:

- `open_router/arcee-ai/trinity-large-preview:free`
- `open_router/stepfun/step-3.5-flash:free`
- `open_router/deepseek/deepseek-r1-0528:free`
- `open_router/openai/gpt-oss-120b:free`

Browse: [openrouter.ai/models](https://openrouter.ai/models) · [Free models](https://openrouter.ai/collections/free-models)

</details>

<details>
<summary><b>LM Studio models</b></summary>

Run models locally with [LM Studio](https://lmstudio.ai). Load a model in the Chat or Developer tab, then set `MODEL` to its identifier.

Examples with native tool-use support:

- `LiquidAI/LFM2-24B-A2B-GGUF`
- `unsloth/MiniMax-M2.5-GGUF`
- `unsloth/GLM-4.7-Flash-GGUF`
- `unsloth/Qwen3.5-35B-A3B-GGUF`

Browse: [model.lmstudio.ai](https://model.lmstudio.ai)

</details>

<details>
<summary><b>llama.cpp models</b></summary>

Run models locally using `llama-server`. Ensure you have a tool-capable GGUF. Set `MODEL` to any name you like (e.g. `llamacpp/my-model`); `llama-server` ignores the model name when called via `/v1/messages`.

See the Unsloth docs for detailed instructions and capable models:
[https://unsloth.ai/docs/models/qwen3.5#qwen3.5-small-0.8b-2b-4b-9b](https://unsloth.ai/docs/models/qwen3.5#qwen3.5-small-0.8b-2b-4b-9b)

</details>

---

## Discord Bot

Control Claude Code remotely from Discord (or Telegram). Send tasks, watch live progress, and manage multiple concurrent sessions.

**Capabilities:**

- Tree-based message threading: reply to a message to fork the conversation
- Session persistence across server restarts
- Live streaming of thinking tokens, tool calls, and results
- Unlimited concurrent Claude CLI sessions (concurrency controlled by `PROVIDER_MAX_CONCURRENCY`)
- Voice notes: send voice messages; they are transcribed and processed as regular prompts
- Commands: `/stop` (cancel a task; reply to a message to stop only that task), `/clear` (reset all sessions, or reply to clear a branch), `/stats`

### Setup

1. **Create a Discord bot**: Go to the [Discord Developer Portal](https://discord.com/developers/applications), create an application, add a bot, and copy the token. Enable **Message Content Intent** under Bot settings.

2. **Edit `.env`:**

```dotenv
MESSAGING_PLATFORM="discord"
DISCORD_BOT_TOKEN="your_discord_bot_token"
ALLOWED_DISCORD_CHANNELS="123456789,987654321"
```

> Enable Developer Mode in Discord (Settings → Advanced), then right-click a channel and choose "Copy ID". Comma-separate multiple channels. If the list is empty, no channels are allowed.

3. **Configure the workspace** (where Claude will operate):

```dotenv
CLAUDE_WORKSPACE="./agent_workspace"
ALLOWED_DIR="C:/Users/yourname/projects"
```

4. **Start the server:**

```bash
uv run uvicorn server:app --host 0.0.0.0 --port 8082
```

5. **Invite the bot** via the OAuth2 URL Generator (scopes: `bot`; permissions: Read Messages, Send Messages, Manage Messages, Read Message History).

### Telegram

Set `MESSAGING_PLATFORM=telegram` and configure:

```dotenv
TELEGRAM_BOT_TOKEN="123456789:ABCdefGHIjklMNOpqrSTUvwxYZ"
ALLOWED_TELEGRAM_USER_ID="your_telegram_user_id"
```

Get a token from [@BotFather](https://t.me/BotFather); find your user ID via [@userinfobot](https://t.me/userinfobot).

### Voice Notes

Send voice messages on Discord or Telegram; they are transcribed and processed as regular prompts.

| Backend                     | Description                                                                                                   | API Key              |
| --------------------------- | ------------------------------------------------------------------------------------------------------------- | -------------------- |
| **Local Whisper** (default) | [Hugging Face Whisper](https://huggingface.co/openai/whisper-large-v3-turbo): free, offline, CUDA compatible   | not required         |
| **NVIDIA NIM**              | Whisper/Parakeet models via gRPC                                                                               | `NVIDIA_NIM_API_KEY` |

**Install the voice extras:**

```bash
# If you cloned the repo:
uv sync --extra voice_local               # Local Whisper
uv sync --extra voice                     # NVIDIA NIM
uv sync --extra voice --extra voice_local # Both

# If you installed as a package (no clone):
uv tool install "free-claude-code[voice_local] @ git+https://github.com/Alishahryar1/free-claude-code.git"
uv tool install "free-claude-code[voice] @ git+https://github.com/Alishahryar1/free-claude-code.git"
uv tool install "free-claude-code[voice,voice_local] @ git+https://github.com/Alishahryar1/free-claude-code.git"
```

Configure via `WHISPER_DEVICE` (`cpu` | `cuda` | `nvidia_nim`) and `WHISPER_MODEL`. See the [Configuration](#configuration) table for all voice variables and supported model values.

---

## Configuration

### Core

| Variable              | Description                                                            | Default                                           |
| --------------------- | ---------------------------------------------------------------------- | ------------------------------------------------- |
| `MODEL`               | Fallback model (`provider/model/name` format; invalid prefix → error)  | `nvidia_nim/stepfun-ai/step-3.5-flash`            |
| `MODEL_OPUS`          | Model for Claude Opus requests (falls back to `MODEL`)                 | `nvidia_nim/z-ai/glm4.7`                          |
| `MODEL_SONNET`        | Model for Claude Sonnet requests (falls back to `MODEL`)               | `open_router/arcee-ai/trinity-large-preview:free` |
| `MODEL_HAIKU`         | Model for Claude Haiku requests (falls back to `MODEL`)                | `open_router/stepfun/step-3.5-flash:free`         |
| `NVIDIA_NIM_API_KEY`  | NVIDIA API key                                                         | required for NIM                                  |
| `NIM_ENABLE_THINKING` | Send `chat_template_kwargs` + `reasoning_budget` on NIM requests. Enable for thinking models (kimi, nemotron); leave `false` for others (e.g. Mistral) | `false` |
| `OPENROUTER_API_KEY`  | OpenRouter API key                                                     | required for OpenRouter                           |
| `LM_STUDIO_BASE_URL`  | LM Studio server URL                                                   | `http://localhost:1234/v1`                        |
| `LLAMACPP_BASE_URL`   | llama.cpp server URL                                                   | `http://localhost:8080/v1`                        |

### Rate Limiting & Timeouts

| Variable                   | Description                               | Default |
| -------------------------- | ----------------------------------------- | ------- |
| `PROVIDER_RATE_LIMIT`      | LLM API requests per window               | `40`    |
| `PROVIDER_RATE_WINDOW`     | Rate limit window (seconds)               | `60`    |
| `PROVIDER_MAX_CONCURRENCY` | Max simultaneous open provider streams    | `5`     |
| `HTTP_READ_TIMEOUT`        | Read timeout for provider requests (s)    | `120`   |
| `HTTP_WRITE_TIMEOUT`       | Write timeout for provider requests (s)   | `10`    |
| `HTTP_CONNECT_TIMEOUT`     | Connect timeout for provider requests (s) | `2`     |

### Messaging & Voice

| Variable                   | Description                                                                                                                                                         | Default             |
| -------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------- |
| `MESSAGING_PLATFORM`       | `discord` or `telegram`                                                                                                                                               | `discord`           |
| `DISCORD_BOT_TOKEN`        | Discord bot token                                                                                                                                                     | `""`                |
| `ALLOWED_DISCORD_CHANNELS` | Comma-separated channel IDs (empty = none allowed)                                                                                                                    | `""`                |
| `TELEGRAM_BOT_TOKEN`       | Telegram bot token                                                                                                                                                    | `""`                |
| `ALLOWED_TELEGRAM_USER_ID` | Allowed Telegram user ID                                                                                                                                              | `""`                |
| `CLAUDE_WORKSPACE`         | Directory where the agent operates                                                                                                                                    | `./agent_workspace` |
| `ALLOWED_DIR`              | Allowed directories for the agent                                                                                                                                     | `""`                |
| `MESSAGING_RATE_LIMIT`     | Messaging messages per window                                                                                                                                         | `1`                 |
| `MESSAGING_RATE_WINDOW`    | Messaging window (seconds)                                                                                                                                            | `1`                 |
| `VOICE_NOTE_ENABLED`       | Enable voice note handling                                                                                                                                            | `true`              |
| `WHISPER_DEVICE`           | `cpu` \| `cuda` \| `nvidia_nim`                                                                                                                                       | `cpu`               |
| `WHISPER_MODEL`            | Whisper model (local: `tiny`/`base`/`small`/`medium`/`large-v2`/`large-v3`/`large-v3-turbo`; NIM: `openai/whisper-large-v3`, `nvidia/parakeet-ctc-1.1b-asr`, etc.)    | `base`              |
| `HF_TOKEN`                 | Hugging Face token for faster downloads (local Whisper, optional)                                                                                                     | (unset)             |

<details>
<summary><b>Advanced: Request optimization flags</b></summary>

These are enabled by default and intercept trivial Claude Code requests locally to save API quota.

| Variable                          | Description                    | Default |
| --------------------------------- | ------------------------------ | ------- |
| `FAST_PREFIX_DETECTION`           | Enable fast prefix detection   | `true`  |
| `ENABLE_NETWORK_PROBE_MOCK`       | Mock network probe requests    | `true`  |
| `ENABLE_TITLE_GENERATION_SKIP`    | Skip title generation requests | `true`  |
| `ENABLE_SUGGESTION_MODE_SKIP`     | Skip suggestion mode requests  | `true`  |
| `ENABLE_FILEPATH_EXTRACTION_MOCK` | Mock filepath extraction       | `true`  |

</details>

See [`.env.example`](.env.example) for all supported parameters.

---

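As an illustration of what a flag like `ENABLE_NETWORK_PROBE_MOCK` guards, here is a hypothetical heuristic for spotting a connectivity probe. The actual detection rules live in `api/detection.py` and differ in detail; the function name, thresholds, and probe strings below are all assumptions, not the proxy's real logic.

```python
def is_network_probe(payload: dict) -> bool:
    """Hypothetical heuristic: probes tend to be single tiny user
    messages with a very small max_tokens budget."""
    msgs = payload.get("messages", [])
    if len(msgs) != 1 or payload.get("max_tokens", 0) > 16:
        return False
    content = msgs[0].get("content", "")
    return isinstance(content, str) and content.strip().lower() in {"hi", "ping", "test"}
```

A request flagged this way would be answered locally with a canned response instead of spending provider quota.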
## Development

### Project Structure

```
free-claude-code/
├── server.py      # Entry point
├── api/           # FastAPI routes, request detection, optimization handlers
├── providers/     # BaseProvider, OpenAICompatibleProvider, NIM, OpenRouter, LM Studio, llamacpp
│   └── common/    # Shared utils (SSE builder, message converter, parsers, error mapping)
├── messaging/     # MessagingPlatform ABC + Discord/Telegram bots, session management
├── config/        # Settings, NIM config, logging
├── cli/           # CLI session and process management
└── tests/         # Pytest test suite
```

### Commands
|
| 540 |
-
|
| 541 |
-
```bash
|
| 542 |
-
uv run ruff format # Format code
|
| 543 |
-
uv run ruff check # Lint
|
| 544 |
-
uv run ty check # Type checking
|
| 545 |
-
uv run pytest # Run tests
|
| 546 |
-
```
|
| 547 |
-
|
| 548 |
-
### Extending
|
| 549 |
-
|
| 550 |
-
**Adding an OpenAI-compatible provider** (Groq, Together AI, etc.) β extend `OpenAICompatibleProvider`:
|
| 551 |
-
|
| 552 |
-
```python
|
| 553 |
-
from providers.openai_compat import OpenAICompatibleProvider
|
| 554 |
-
from providers.base import ProviderConfig
|
| 555 |
-
|
| 556 |
-
class MyProvider(OpenAICompatibleProvider):
|
| 557 |
-
def __init__(self, config: ProviderConfig):
|
| 558 |
-
super().__init__(config, provider_name="MYPROVIDER",
|
| 559 |
-
base_url="https://api.example.com/v1", api_key=config.api_key)
|
| 560 |
-
```
|
| 561 |
-
|
| 562 |
-
**Adding a fully custom provider** β extend `BaseProvider` directly and implement `stream_response()`.
|
| 563 |
-
|
| 564 |
-
**Adding a messaging platform** β extend `MessagingPlatform` in `messaging/` and implement `start()`, `stop()`, `send_message()`, `edit_message()`, and `on_message()`.
|
| 565 |
-
|
| 566 |
-
---

## Contributing

- Report bugs or suggest features via [Issues](https://github.com/Alishahryar1/free-claude-code/issues)
- Add new LLM providers (Groq, Together AI, etc.)
- Add new messaging platforms (Slack, etc.)
- Improve test coverage
- Docker integration PRs are not being accepted for now

```bash
git checkout -b my-feature
uv run ruff format && uv run ruff check && uv run ty check && uv run pytest
# Open a pull request
```

---

## License

MIT License. See [LICENSE](LICENSE) for details.

Built with [FastAPI](https://fastapi.tiangolo.com/), [OpenAI Python SDK](https://github.com/openai/openai-python), [discord.py](https://github.com/Rapptz/discord.py), and [python-telegram-bot](https://python-telegram-bot.org/).
Claude_Code/Dockerfile → Dockerfile (renamed, file without changes)

README.md changed:

```diff
@@ -1,11 +1,588 @@
 ---
 title: Claude Code
-emoji:
-colorFrom:
-colorTo:
 sdk: docker
 pinned: false
-license: mit
 ---
```
---
title: Claude Code
emoji: 🤖
colorFrom: indigo
colorTo: blue
sdk: docker
app_port: 7860
pinned: false
---

<div align="center">

# 🤖 Free Claude Code

### Use Claude Code CLI & VSCode for free. No Anthropic API key required.

[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
[](https://github.com/astral-sh/uv)
[](https://github.com/Alishahryar1/free-claude-code/actions/workflows/tests.yml)
[](https://pypi.org/project/ty/)
[](https://github.com/astral-sh/ruff)
[](https://github.com/Delgan/loguru)

A lightweight proxy that routes Claude Code's Anthropic API calls to **NVIDIA NIM** (40 req/min free), **OpenRouter** (hundreds of models), **LM Studio** (fully local), or **llama.cpp** (local with Anthropic endpoints).

[Quick Start](#quick-start) · [Providers](#providers) · [Discord Bot](#discord-bot) · [Configuration](#configuration) · [Development](#development) · [Contributing](#contributing)

---

</div>

<div align="center">
  <img src="pic.png" alt="Free Claude Code in action" width="700">
  <p><em>Claude Code running via NVIDIA NIM, completely free</em></p>
</div>

## Features

| Feature                    | Description                                                                                     |
| -------------------------- | ----------------------------------------------------------------------------------------------- |
| **Zero Cost**              | 40 req/min free on NVIDIA NIM. Free models on OpenRouter. Fully local with LM Studio            |
| **Drop-in Replacement**    | Set 2 env vars. No modifications to Claude Code CLI or VSCode extension needed                  |
| **4 Providers**            | NVIDIA NIM, OpenRouter (hundreds of models), LM Studio (local), llama.cpp (`llama-server`)      |
| **Per-Model Mapping**      | Route Opus / Sonnet / Haiku to different models and providers. Mix providers freely             |
| **Thinking Token Support** | Parses `<think>` tags and `reasoning_content` into native Claude thinking blocks                |
| **Heuristic Tool Parser**  | Models outputting tool calls as text are auto-parsed into structured tool use                   |
| **Request Optimization**   | 5 categories of trivial API calls intercepted locally, saving quota and latency                 |
| **Smart Rate Limiting**    | Proactive rolling-window throttle + reactive 429 exponential backoff + optional concurrency cap |
| **Discord / Telegram Bot** | Remote autonomous coding with tree-based threading, session persistence, and live progress      |
| **Subagent Control**       | Task tool interception forces `run_in_background=False`. No runaway subagents                   |
| **Extensible**             | Clean `BaseProvider` and `MessagingPlatform` ABCs. Add new providers or platforms easily        |
## Quick Start

### Prerequisites

1. Get an API key (or use LM Studio / llama.cpp locally):
   - **NVIDIA NIM**: [build.nvidia.com/settings/api-keys](https://build.nvidia.com/settings/api-keys)
   - **OpenRouter**: [openrouter.ai/keys](https://openrouter.ai/keys)
   - **LM Studio**: No API key needed. Run locally with [LM Studio](https://lmstudio.ai)
   - **llama.cpp**: No API key needed. Run `llama-server` locally.
2. Install [Claude Code](https://github.com/anthropics/claude-code)
3. Install [uv](https://github.com/astral-sh/uv) (or `uv self update` if already installed)

### Clone & Configure

```bash
git clone https://github.com/Alishahryar1/free-claude-code.git
cd free-claude-code
cp .env.example .env
```

Choose your provider and edit `.env`:

<details>
<summary><b>NVIDIA NIM</b> (40 req/min free, recommended)</summary>

```dotenv
NVIDIA_NIM_API_KEY="nvapi-your-key-here"

MODEL_OPUS="nvidia_nim/z-ai/glm4.7"
MODEL_SONNET="nvidia_nim/moonshotai/kimi-k2-thinking"
MODEL_HAIKU="nvidia_nim/stepfun-ai/step-3.5-flash"
MODEL="nvidia_nim/z-ai/glm4.7" # fallback

# Enable for thinking models (kimi, nemotron). Leave false for others (e.g. Mistral).
NIM_ENABLE_THINKING=true
```

</details>

<details>
<summary><b>OpenRouter</b> (hundreds of models)</summary>

```dotenv
OPENROUTER_API_KEY="sk-or-your-key-here"

MODEL_OPUS="open_router/deepseek/deepseek-r1-0528:free"
MODEL_SONNET="open_router/openai/gpt-oss-120b:free"
MODEL_HAIKU="open_router/stepfun/step-3.5-flash:free"
MODEL="open_router/stepfun/step-3.5-flash:free" # fallback
```

</details>

<details>
<summary><b>LM Studio</b> (fully local, no API key)</summary>

```dotenv
MODEL_OPUS="lmstudio/unsloth/MiniMax-M2.5-GGUF"
MODEL_SONNET="lmstudio/unsloth/Qwen3.5-35B-A3B-GGUF"
MODEL_HAIKU="lmstudio/unsloth/GLM-4.7-Flash-GGUF"
MODEL="lmstudio/unsloth/GLM-4.7-Flash-GGUF" # fallback
```

</details>

<details>
<summary><b>llama.cpp</b> (fully local, no API key)</summary>

```dotenv
LLAMACPP_BASE_URL="http://localhost:8080/v1"

MODEL_OPUS="llamacpp/local-model"
MODEL_SONNET="llamacpp/local-model"
MODEL_HAIKU="llamacpp/local-model"
MODEL="llamacpp/local-model"
```

</details>

<details>
<summary><b>Mix providers</b></summary>

Each `MODEL_*` variable can use a different provider. `MODEL` is the fallback for unrecognized Claude models.

```dotenv
NVIDIA_NIM_API_KEY="nvapi-your-key-here"
OPENROUTER_API_KEY="sk-or-your-key-here"

MODEL_OPUS="nvidia_nim/moonshotai/kimi-k2.5"
MODEL_SONNET="open_router/deepseek/deepseek-r1-0528:free"
MODEL_HAIKU="lmstudio/unsloth/GLM-4.7-Flash-GGUF"
MODEL="nvidia_nim/z-ai/glm4.7" # fallback
```

</details>

<details>
<summary><b>Optional Authentication</b> (restrict access to your proxy)</summary>

Set `ANTHROPIC_AUTH_TOKEN` in `.env` to require clients to authenticate:

```dotenv
ANTHROPIC_AUTH_TOKEN="your-secret-token-here"
```

**How it works:**

- If `ANTHROPIC_AUTH_TOKEN` is empty (default), no authentication is required (backward compatible)
- If set, clients must provide the same token via the `ANTHROPIC_AUTH_TOKEN` header
- For private Hugging Face Spaces, query auth is supported as `?psw=token`, `?psw:token`, or `?psw%3Atoken`
- The `claude-pick` script automatically reads the token from `.env` if configured

**Example usage:**

```bash
# With authentication
ANTHROPIC_AUTH_TOKEN="your-secret-token-here" \
ANTHROPIC_BASE_URL="http://localhost:8082" claude

# Hugging Face private Space (query auth in URL)
ANTHROPIC_API_KEY="Jack@188" \
ANTHROPIC_BASE_URL="https://<your-space>.hf.space?psw:Jack%40188" claude

# claude-pick automatically uses the configured token
claude-pick
```

Note: `HEAD /` returning `405 Method Not Allowed` means auth already passed; only `GET /` is implemented.

Use this feature if:

- Running the proxy on a public network
- Sharing the server with others but restricting access
- Wanting an additional layer of security

</details>
### Run It

**Terminal 1:** Start the proxy server:

```bash
uv run uvicorn server:app --host 0.0.0.0 --port 8082
```

**Terminal 2:** Run Claude Code:

#### PowerShell

```powershell
$env:ANTHROPIC_BASE_URL="http://localhost:8082?psw:Jack%40188"; $env:ANTHROPIC_API_KEY="Jack@188"; claude
```

#### Bash

```bash
export ANTHROPIC_BASE_URL="http://localhost:8082?psw:Jack%40188"; export ANTHROPIC_API_KEY="Jack@188"; claude
```

That's it! Claude Code now uses your configured provider for free.

### One-Click Factory Reset (Space Admin)

Open the admin page:

- Local: `http://localhost:8082/admin/factory-reset?psw:Jack%40188`
- Space: `https://<your-space>.hf.space/admin/factory-reset?psw:Jack%40188`

Click **Factory Restart** to clear runtime cache + workspace data and restart the server.

<details>
<summary><b>VSCode Extension Setup</b></summary>

1. Start the proxy server (same as above).
2. Open Settings (`Ctrl + ,`) and search for `claude-code.environmentVariables`.
3. Click **Edit in settings.json** and add:

```json
"claudeCode.environmentVariables": [
  { "name": "ANTHROPIC_BASE_URL", "value": "http://localhost:8082" },
  { "name": "ANTHROPIC_AUTH_TOKEN", "value": "freecc" }
]
```

4. Reload extensions.
5. **If you see the login screen**: Click **Anthropic Console**, then authorize. The extension will start working. You may be redirected to buy credits in the browser; ignore it — the extension already works.

To switch back to Anthropic models, comment out the added block and reload extensions.

</details>

<details>
<summary><b>Multi-Model Support (Model Picker)</b></summary>

`claude-pick` is an interactive model selector that lets you choose any model from your active provider each time you launch Claude, without editing `MODEL` in `.env`.

https://github.com/user-attachments/assets/9a33c316-90f8-4418-9650-97e7d33ad645

**1. Install [fzf](https://github.com/junegunn/fzf):**

```bash
brew install fzf # macOS/Linux
```

**2. Add the alias to `~/.zshrc` or `~/.bashrc`:**

```bash
alias claude-pick="/absolute/path/to/free-claude-code/claude-pick"
```

Then reload your shell (`source ~/.zshrc` or `source ~/.bashrc`) and run `claude-pick`.

**Or use a fixed model alias** (no picker needed):

```bash
alias claude-kimi='ANTHROPIC_BASE_URL="http://localhost:8082" ANTHROPIC_AUTH_TOKEN="freecc:moonshotai/kimi-k2.5" claude'
```

</details>

### Install as a Package (no clone needed)

```bash
uv tool install git+https://github.com/Alishahryar1/free-claude-code.git
fcc-init   # creates ~/.config/free-claude-code/.env from the built-in template
```

Edit `~/.config/free-claude-code/.env` with your API keys and model names, then:

```bash
free-claude-code   # starts the server
```

> To update: `uv tool upgrade free-claude-code`

---

## How It Works

```
┌─────────────────┐         ┌──────────────────────┐         ┌──────────────────┐
│   Claude Code   │────────>│   Free Claude Code   │────────>│   LLM Provider   │
│  CLI / VSCode   │<────────│    Proxy (:8082)     │<────────│  NIM / OR / LMS  │
└─────────────────┘         └──────────────────────┘         └──────────────────┘
     Anthropic API               OpenAI-compatible
     format (SSE)                format (SSE)
```

- **Transparent proxy**: Claude Code sends standard Anthropic API requests; the proxy forwards them to your configured provider
- **Per-model routing**: Opus / Sonnet / Haiku requests resolve to their model-specific backend, with `MODEL` as fallback
- **Request optimization**: 5 categories of trivial requests (quota probes, title generation, prefix detection, suggestions, filepath extraction) are intercepted and responded to locally without using API quota
- **Format translation**: Requests are translated from Anthropic format to the provider's OpenAI-compatible format and streamed back
- **Thinking tokens**: `<think>` tags and `reasoning_content` fields are converted into native Claude thinking blocks
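To illustrate the last point, separating `<think>` content from a completed model reply can be sketched like this (a simplified, non-streaming version; the real parser in `providers/common/` also handles tags split across streamed chunks):

```python
import re

# Matches one <think>...</think> block; DOTALL lets reasoning span multiple lines.
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)


def split_thinking(text: str) -> tuple[str, str]:
    """Return (thinking, visible) parts of a completed model reply."""
    thinking = "\n".join(m.strip() for m in THINK_RE.findall(text))
    visible = THINK_RE.sub("", text).strip()
    return thinking, visible
```

The `thinking` part becomes a Claude thinking block in the translated response; the `visible` part becomes regular assistant text.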
---

## Providers

| Provider       | Cost         | Rate Limit | Best For                             |
| -------------- | ------------ | ---------- | ------------------------------------ |
| **NVIDIA NIM** | Free         | 40 req/min | Daily driver, generous free tier     |
| **OpenRouter** | Free / Paid  | Varies     | Model variety, fallback options      |
| **LM Studio**  | Free (local) | Unlimited  | Privacy, offline use, no rate limits |
| **llama.cpp**  | Free (local) | Unlimited  | Lightweight local inference engine   |

Models use a prefix format: `provider_prefix/model/name`. An invalid prefix causes an error.

| Provider   | `MODEL` prefix    | API Key Variable     | Default Base URL              |
| ---------- | ----------------- | -------------------- | ----------------------------- |
| NVIDIA NIM | `nvidia_nim/...`  | `NVIDIA_NIM_API_KEY` | `integrate.api.nvidia.com/v1` |
| OpenRouter | `open_router/...` | `OPENROUTER_API_KEY` | `openrouter.ai/api/v1`        |
| LM Studio  | `lmstudio/...`    | (none)               | `localhost:1234/v1`           |
| llama.cpp  | `llamacpp/...`    | (none)               | `localhost:8080/v1`           |
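The prefix split above amounts to taking everything before the first `/` as the provider and the rest as the model id. A minimal sketch, assuming the four prefixes from the table (the actual resolution lives in the `config/` and `providers/` packages):

```python
KNOWN_PREFIXES = {"nvidia_nim", "open_router", "lmstudio", "llamacpp"}


def split_model(value: str) -> tuple[str, str]:
    """Split 'provider_prefix/model/name' into (provider, model)."""
    prefix, sep, model = value.partition("/")
    if not sep or prefix not in KNOWN_PREFIXES:
        raise ValueError(f"invalid provider prefix in MODEL value: {value!r}")
    return prefix, model
```

Note that only the first `/` is significant: `nvidia_nim/z-ai/glm4.7` yields provider `nvidia_nim` and model `z-ai/glm4.7`.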
<details>
<summary><b>NVIDIA NIM models</b></summary>

Popular models (full list in [`nvidia_nim_models.json`](nvidia_nim_models.json)):

- `nvidia_nim/minimaxai/minimax-m2.5`
- `nvidia_nim/qwen/qwen3.5-397b-a17b`
- `nvidia_nim/z-ai/glm5`
- `nvidia_nim/moonshotai/kimi-k2.5`
- `nvidia_nim/stepfun-ai/step-3.5-flash`

Browse: [build.nvidia.com](https://build.nvidia.com/explore/discover) · Update list: `curl "https://integrate.api.nvidia.com/v1/models" > nvidia_nim_models.json`

</details>

<details>
<summary><b>OpenRouter models</b></summary>

Popular free models:

- `open_router/arcee-ai/trinity-large-preview:free`
- `open_router/stepfun/step-3.5-flash:free`
- `open_router/deepseek/deepseek-r1-0528:free`
- `open_router/openai/gpt-oss-120b:free`

Browse: [openrouter.ai/models](https://openrouter.ai/models) · [Free models](https://openrouter.ai/collections/free-models)

</details>

<details>
<summary><b>LM Studio models</b></summary>

Run models locally with [LM Studio](https://lmstudio.ai). Load a model in the Chat or Developer tab, then set `MODEL` to its identifier.

Examples with native tool-use support:

- `LiquidAI/LFM2-24B-A2B-GGUF`
- `unsloth/MiniMax-M2.5-GGUF`
- `unsloth/GLM-4.7-Flash-GGUF`
- `unsloth/Qwen3.5-35B-A3B-GGUF`

Browse: [model.lmstudio.ai](https://model.lmstudio.ai)

</details>

<details>
<summary><b>llama.cpp models</b></summary>

Run models locally using `llama-server`. Ensure you have a tool-capable GGUF. Set `MODEL` to whatever arbitrary name you'd like (e.g. `llamacpp/my-model`), as `llama-server` ignores the model name when run via `/v1/messages`.

See the Unsloth docs for detailed instructions and capable models:
[https://unsloth.ai/docs/models/qwen3.5#qwen3.5-small-0.8b-2b-4b-9b](https://unsloth.ai/docs/models/qwen3.5#qwen3.5-small-0.8b-2b-4b-9b)

</details>

---

## Discord Bot

Control Claude Code remotely from Discord (or Telegram). Send tasks, watch live progress, and manage multiple concurrent sessions.

**Capabilities:**

- Tree-based message threading: reply to a message to fork the conversation
- Session persistence across server restarts
- Live streaming of thinking tokens, tool calls, and results
- Unlimited concurrent Claude CLI sessions (concurrency controlled by `PROVIDER_MAX_CONCURRENCY`)
- Voice notes: send voice messages; they are transcribed and processed as regular prompts
- Commands: `/stop` (cancel a task; reply to a message to stop only that task), `/clear` (reset all sessions, or reply to clear a branch), `/stats`

### Setup
|
| 393 |
+
|
| 394 |
+
1. **Create a Discord Bot**: Go to [Discord Developer Portal](https://discord.com/developers/applications), create an application, add a bot, and copy the token. Enable **Message Content Intent** under Bot settings.
|
| 395 |
+
|
| 396 |
+
2. **Edit `.env`:**
|
| 397 |
+
|
| 398 |
+
```dotenv
|
| 399 |
+
MESSAGING_PLATFORM="discord"
|
| 400 |
+
DISCORD_BOT_TOKEN="your_discord_bot_token"
|
| 401 |
+
ALLOWED_DISCORD_CHANNELS="123456789,987654321"
|
| 402 |
+
```
|
| 403 |
+
|
| 404 |
+
> Enable Developer Mode in Discord (Settings β Advanced), then right-click a channel and "Copy ID". Comma-separate multiple channels. If empty, no channels are allowed.
|
| 405 |
+
|
| 406 |
+
3. **Configure the workspace** (where Claude will operate):
|
| 407 |
+
|
| 408 |
+
```dotenv
|
| 409 |
+
CLAUDE_WORKSPACE="./agent_workspace"
|
| 410 |
+
ALLOWED_DIR="C:/Users/yourname/projects"
|
| 411 |
+
```
|
| 412 |
+
|
| 413 |
+
4. **Start the server:**
|
| 414 |
+
|
| 415 |
+
```bash
|
| 416 |
+
uv run uvicorn server:app --host 0.0.0.0 --port 8082
|
| 417 |
+
```
|
| 418 |
+
|
| 419 |
+
5. **Invite the bot** via OAuth2 URL Generator (scopes: `bot`, permissions: Read Messages, Send Messages, Manage Messages, Read Message History).
|
| 420 |
+
|
| 421 |
+
### Telegram
|
| 422 |
+
|
| 423 |
+
Set `MESSAGING_PLATFORM=telegram` and configure:
|
| 424 |
+
|
| 425 |
+
```dotenv
|
| 426 |
+
TELEGRAM_BOT_TOKEN="123456789:ABCdefGHIjklMNOpqrSTUvwxYZ"
|
| 427 |
+
ALLOWED_TELEGRAM_USER_ID="your_telegram_user_id"
|
| 428 |
+
```
|
| 429 |
+
|
| 430 |
+
Get a token from [@BotFather](https://t.me/BotFather); find your user ID via [@userinfobot](https://t.me/userinfobot).
|
| 431 |
+
|
| 432 |
+
### Voice Notes
|
| 433 |
+
|
| 434 |
+
Send voice messages on Discord or Telegram; they are transcribed and processed as regular prompts.
|
| 435 |
+
|
| 436 |
+
| Backend | Description | API Key |
|
| 437 |
+
| --------------------------- | ------------------------------------------------------------------------------------------------------------- | -------------------- |
|
| 438 |
+
| **Local Whisper** (default) | [Hugging Face Whisper](https://huggingface.co/openai/whisper-large-v3-turbo) β free, offline, CUDA compatible | not required |
|
| 439 |
+
| **NVIDIA NIM** | Whisper/Parakeet models via gRPC | `NVIDIA_NIM_API_KEY` |
|
| 440 |
+
|
| 441 |
+
**Install the voice extras:**
|
| 442 |
+
|
| 443 |
+
```bash
|
| 444 |
+
# If you cloned the repo:
|
| 445 |
+
uv sync --extra voice_local # Local Whisper
|
| 446 |
+
uv sync --extra voice # NVIDIA NIM
|
| 447 |
+
uv sync --extra voice --extra voice_local # Both
|
| 448 |
+
|
| 449 |
+
# If you installed as a package (no clone):
|
| 450 |
+
uv tool install "free-claude-code[voice_local] @ git+https://github.com/Alishahryar1/free-claude-code.git"
|
| 451 |
+
uv tool install "free-claude-code[voice] @ git+https://github.com/Alishahryar1/free-claude-code.git"
|
| 452 |
+
uv tool install "free-claude-code[voice,voice_local] @ git+https://github.com/Alishahryar1/free-claude-code.git"
|
| 453 |
+
```
|
| 454 |
+
|
| 455 |
+
Configure via `WHISPER_DEVICE` (`cpu` | `cuda` | `nvidia_nim`) and `WHISPER_MODEL`. See the [Configuration](#configuration) table for all voice variables and supported model values.

---

## Configuration

### Core

| Variable | Description | Default |
| -------- | ----------- | ------- |
| `MODEL` | Fallback model (`provider/model/name` format; invalid prefix → error) | `nvidia_nim/stepfun-ai/step-3.5-flash` |
| `MODEL_OPUS` | Model for Claude Opus requests (falls back to `MODEL`) | `nvidia_nim/z-ai/glm4.7` |
| `MODEL_SONNET` | Model for Claude Sonnet requests (falls back to `MODEL`) | `open_router/arcee-ai/trinity-large-preview:free` |
| `MODEL_HAIKU` | Model for Claude Haiku requests (falls back to `MODEL`) | `open_router/stepfun/step-3.5-flash:free` |
| `NVIDIA_NIM_API_KEY` | NVIDIA API key | required for NIM |
| `NIM_ENABLE_THINKING` | Send `chat_template_kwargs` + `reasoning_budget` on NIM requests. Enable for thinking models (kimi, nemotron); leave `false` for others (e.g. Mistral) | `false` |
| `OPENROUTER_API_KEY` | OpenRouter API key | required for OpenRouter |
| `LM_STUDIO_BASE_URL` | LM Studio server URL | `http://localhost:1234/v1` |
| `LLAMACPP_BASE_URL` | llama.cpp server URL | `http://localhost:8080/v1` |
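For instance, to route each Claude tier to a different provider (illustrative values, taken from the defaults above; placeholder API keys):

```shell
# .env – model routing (illustrative values)
MODEL=nvidia_nim/stepfun-ai/step-3.5-flash            # fallback for all tiers
MODEL_OPUS=nvidia_nim/z-ai/glm4.7
MODEL_SONNET=open_router/arcee-ai/trinity-large-preview:free
MODEL_HAIKU=open_router/stepfun/step-3.5-flash:free
NVIDIA_NIM_API_KEY=nvapi-...                          # required for nvidia_nim models
OPENROUTER_API_KEY=sk-or-...                          # required for open_router models
```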

### Rate Limiting & Timeouts

| Variable | Description | Default |
| -------- | ----------- | ------- |
| `PROVIDER_RATE_LIMIT` | LLM API requests per window | `40` |
| `PROVIDER_RATE_WINDOW` | Rate limit window (seconds) | `60` |
| `PROVIDER_MAX_CONCURRENCY` | Max simultaneous open provider streams | `5` |
| `HTTP_READ_TIMEOUT` | Read timeout for provider requests (s) | `120` |
| `HTTP_WRITE_TIMEOUT` | Write timeout for provider requests (s) | `10` |
| `HTTP_CONNECT_TIMEOUT` | Connect timeout for provider requests (s) | `2` |

### Messaging & Voice

| Variable | Description | Default |
| -------- | ----------- | ------- |
| `MESSAGING_PLATFORM` | `discord` or `telegram` | `discord` |
| `DISCORD_BOT_TOKEN` | Discord bot token | `""` |
| `ALLOWED_DISCORD_CHANNELS` | Comma-separated channel IDs (empty = none allowed) | `""` |
| `TELEGRAM_BOT_TOKEN` | Telegram bot token | `""` |
| `ALLOWED_TELEGRAM_USER_ID` | Allowed Telegram user ID | `""` |
| `CLAUDE_WORKSPACE` | Directory where the agent operates | `./agent_workspace` |
| `ALLOWED_DIR` | Allowed directories for the agent | `""` |
| `MESSAGING_RATE_LIMIT` | Messaging messages per window | `1` |
| `MESSAGING_RATE_WINDOW` | Messaging window (seconds) | `1` |
| `VOICE_NOTE_ENABLED` | Enable voice note handling | `true` |
| `WHISPER_DEVICE` | `cpu` \| `cuda` \| `nvidia_nim` | `cpu` |
| `WHISPER_MODEL` | Whisper model (local: `tiny`/`base`/`small`/`medium`/`large-v2`/`large-v3`/`large-v3-turbo`; NIM: `openai/whisper-large-v3`, `nvidia/parakeet-ctc-1.1b-asr`, etc.) | `base` |
| `HF_TOKEN` | Hugging Face token for faster downloads (local Whisper, optional) | – |
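A minimal Discord setup might look like this (token and channel IDs are placeholders):

```shell
# .env – Discord bot (placeholder values)
MESSAGING_PLATFORM=discord
DISCORD_BOT_TOKEN=your-bot-token
ALLOWED_DISCORD_CHANNELS=123456789012345678,234567890123456789
CLAUDE_WORKSPACE=./agent_workspace
```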

<details>
<summary><b>Advanced: Request optimization flags</b></summary>

These are enabled by default and intercept trivial Claude Code requests locally to save API quota.

| Variable | Description | Default |
| -------- | ----------- | ------- |
| `FAST_PREFIX_DETECTION` | Enable fast prefix detection | `true` |
| `ENABLE_NETWORK_PROBE_MOCK` | Mock network probe requests | `true` |
| `ENABLE_TITLE_GENERATION_SKIP` | Skip title generation requests | `true` |
| `ENABLE_SUGGESTION_MODE_SKIP` | Skip suggestion mode requests | `true` |
| `ENABLE_FILEPATH_EXTRACTION_MOCK` | Mock filepath extraction | `true` |

</details>

See [`.env.example`](.env.example) for all supported parameters.

---

## Development

### Project Structure

```
free-claude-code/
├── server.py      # Entry point
├── api/           # FastAPI routes, request detection, optimization handlers
├── providers/     # BaseProvider, OpenAICompatibleProvider, NIM, OpenRouter, LM Studio, llamacpp
│   └── common/    # Shared utils (SSE builder, message converter, parsers, error mapping)
├── messaging/     # MessagingPlatform ABC + Discord/Telegram bots, session management
├── config/        # Settings, NIM config, logging
├── cli/           # CLI session and process management
└── tests/         # Pytest test suite
```

### Commands

```bash
uv run ruff format   # Format code
uv run ruff check    # Lint
uv run ty check      # Type checking
uv run pytest        # Run tests
```

### Extending

**Adding an OpenAI-compatible provider** (Groq, Together AI, etc.) – extend `OpenAICompatibleProvider`:

```python
from providers.openai_compat import OpenAICompatibleProvider
from providers.base import ProviderConfig


class MyProvider(OpenAICompatibleProvider):
    def __init__(self, config: ProviderConfig):
        super().__init__(
            config,
            provider_name="MYPROVIDER",
            base_url="https://api.example.com/v1",
            api_key=config.api_key,
        )
```

**Adding a fully custom provider** – extend `BaseProvider` directly and implement `stream_response()`.
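For illustration, here is a standalone sketch of that shape. It deliberately omits the real `BaseProvider` base class so it runs on its own, and the `stream_response()` signature is an assumption; check `providers/base.py` for the actual interface:

```python
import asyncio
from typing import AsyncIterator


# Hypothetical sketch: in the real project this class would extend
# providers.base.BaseProvider. The method name comes from the docs above,
# but its exact signature is an assumption.
class MyCustomProvider:
    async def stream_response(self, request: dict) -> AsyncIterator[str]:
        # A real implementation would call the upstream API here and
        # translate its stream into Anthropic-style SSE events.
        for chunk in ("Hello", ", ", "world"):
            yield chunk


async def main() -> list[str]:
    provider = MyCustomProvider()
    return [chunk async for chunk in provider.stream_response({"messages": []})]


print(asyncio.run(main()))
```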

**Adding a messaging platform** – extend `MessagingPlatform` in `messaging/` and implement `start()`, `stop()`, `send_message()`, `edit_message()`, and `on_message()`.
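As a self-contained sketch of those five methods (in the repo you would subclass the `MessagingPlatform` ABC from `messaging/platforms/`; the signatures below are assumptions for illustration):

```python
import asyncio


# Hypothetical sketch of the five methods a platform must provide. In the
# real project the class extends the MessagingPlatform ABC and the exact
# signatures may differ.
class SlackPlatform:
    def __init__(self) -> None:
        self.sent: list[str] = []

    async def start(self) -> None:
        # Open the connection to the platform (websocket, polling, etc.).
        print("connecting...")

    async def stop(self) -> None:
        print("disconnecting")

    async def send_message(self, channel: str, text: str) -> int:
        self.sent.append(text)
        return len(self.sent) - 1  # message id, used for later edits

    async def edit_message(self, channel: str, message_id: int, text: str) -> None:
        self.sent[message_id] = text

    async def on_message(self, channel: str, text: str) -> None:
        # Forward the user's message into the agent session, then stream
        # the reply back via send_message/edit_message.
        mid = await self.send_message(channel, "thinking...")
        await self.edit_message(channel, mid, f"echo: {text}")


async def demo() -> list[str]:
    bot = SlackPlatform()
    await bot.start()
    await bot.on_message("#dev", "hello")
    await bot.stop()
    return bot.sent


print(asyncio.run(demo()))
```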

---

## Contributing

- Report bugs or suggest features via [Issues](https://github.com/Alishahryar1/free-claude-code/issues)
- Add new LLM providers (Groq, Together AI, etc.)
- Add new messaging platforms (Slack, etc.)
- Improve test coverage
- Not accepting Docker integration PRs for now

```bash
git checkout -b my-feature
uv run ruff format && uv run ruff check && uv run ty check && uv run pytest
# Open a pull request
```

---

## License

MIT License. See [LICENSE](LICENSE) for details.

Built with [FastAPI](https://fastapi.tiangolo.com/), [OpenAI Python SDK](https://github.com/openai/openai-python), [discord.py](https://github.com/Rapptz/discord.py), and [python-telegram-bot](https://python-telegram-bot.org/).