Update README.md
sdk: docker
app_port: 7860
pinned: true
license: mit
---

<!-- Badges -->
[](https://huggingface.co/spaces)
[](https://github.com/openclaw/openclaw)

**Your always-on AI assistant — free, no server needed.** HuggingClaw runs [OpenClaw](https://openclaw.ai) on HuggingFace Spaces, giving you a 24/7 AI chat assistant on Telegram. It works with *any* large language model (LLM) — Claude, ChatGPT, Gemini, etc. — and even supports custom models via [OpenRouter](https://openrouter.ai). Deploy in minutes on the free HF Spaces tier (2 vCPU, 16GB RAM, 50GB) with automatic workspace backup to a HuggingFace Dataset, so your chat history and settings persist across restarts.

## Table of Contents

- [✨ Features](#-features)
- [🎥 Video Tutorial](#-video-tutorial)
- [🚀 Quick Start](#-quick-start)
- [📱 Telegram Setup *(Optional)*](#-telegram-setup-optional)
- [💾 Workspace Backup *(Optional)*](#-workspace-backup-optional)
- [⚙️ Full Configuration Reference](#-full-configuration-reference)
- [🤖 LLM Providers](#-llm-providers)
- [💻 Local Development](#-local-development)
- [🔌 CLI Access](#-cli-access)
- [🏗️ Architecture](#-architecture)
- [🔄 Staying Alive](#-staying-alive)
- [🐛 Troubleshooting](#-troubleshooting)
- [🔗 Links](#-links)
- [🤝 Contributing](#-contributing)
- [📄 License](#-license)

## ✨ Features

- 🌐 **Any LLM:** Use Claude, OpenAI GPT, Google Gemini, Grok, DeepSeek, Qwen, and 40+ providers (set `LLM_API_KEY` and `LLM_MODEL` accordingly).
- ⚡ **Zero Config:** Duplicate this Space and set **just three** secrets (`LLM_API_KEY`, `LLM_MODEL`, `GATEWAY_TOKEN`) — no other setup needed.
- 🐳 **Fast Builds:** Uses a pre-built OpenClaw Docker image to deploy in minutes.
- 💾 **Workspace Backup:** Chats and settings sync to a private HF Dataset via `huggingface_hub` (with a Git fallback), preserving your data automatically.
- 🔄 **Always-On:** Built-in keep-alive pings prevent the HF Space from sleeping, so the assistant stays online.
- 👥 **Multi-User Telegram:** Configure one or more user IDs to control who can message the bot.
- 🔐 **Flexible Auth:** Secure the Control UI with either a gateway token or a password.
- 🆓 **100% HF-Native:** Runs entirely on HuggingFace's free infrastructure (2 vCPU, 16GB RAM).

## 🎥 Video Tutorial

Watch a quick walkthrough on YouTube: [Deploying HuggingClaw on HF Spaces](https://www.youtube.com/watch?v=S6pl7NmjX7g&t=73s).

## 🚀 Quick Start

### Step 1: Duplicate this Space

[](https://huggingface.co/spaces/somratpro/HuggingClaw?duplicate=true)

Click the button above to duplicate the template, and set the visibility to **Private** (recommended).

### Step 2: Add Your Secrets

Navigate to your new Space's **Settings**, scroll down to the **Variables and secrets** section, and add the following three under **Secrets**:

- `LLM_API_KEY` — Your provider API key (e.g., Anthropic, OpenAI, OpenRouter).
- `LLM_MODEL` — The model ID to use (e.g., `openai/gpt-4o` or `google/gemini-2.5-flash`).
- `GATEWAY_TOKEN` — A custom password or token to secure your Control UI. *(Any strong password works, or generate one with `openssl rand -hex 32`.)*

> [!TIP]
> HuggingClaw is completely flexible! You only need these three secrets to get started; the rest can be set later.
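A gateway token can be generated locally with `openssl` and pasted into the secret field. For example:

```shell
# Generate a GATEWAY_TOKEN: 32 random bytes, hex-encoded (64 characters)
token="$(openssl rand -hex 32)"
echo "$token"

# Optional sanity check on the shape
printf '%s' "$token" | grep -Eq '^[0-9a-f]{64}$' && echo "token looks good"
```

Any sufficiently random string works; the hex form is just easy to generate and paste.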

### Step 3: Deploy & Run

That's it! The Space will build the container and start up automatically. You can monitor the build process in the **Logs** tab.

## 📱 Telegram Setup *(Optional)*

To chat via Telegram:

1. Create a bot via [@BotFather](https://t.me/BotFather): send `/newbot`, follow the prompts, and copy the bot token.
2. Find your Telegram user ID with [@userinfobot](https://t.me/userinfobot).
3. Add these secrets in **Settings → Secrets**:
   - `TELEGRAM_BOT_TOKEN` — The token from @BotFather.
   - `TELEGRAM_USER_ID` — Your Telegram user ID (for a single user).
   - `TELEGRAM_USER_IDS` — Comma-separated user IDs (for team access, e.g. `123,456,789`).

After restarting, the bot should appear online on Telegram.
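The `TELEGRAM_USER_IDS` allowlist behaves like a simple comma-separated membership check. As an illustrative sketch (not HuggingClaw's actual code):

```shell
TELEGRAM_USER_IDS="123,456,789"

# Prints "allowed" when the given ID appears between commas in the list
is_allowed() {
  case ",${TELEGRAM_USER_IDS}," in
    *",$1,"*) echo "allowed" ;;
    *)        echo "denied"  ;;
  esac
}

is_allowed 456   # → allowed
is_allowed 999   # → denied
```

Wrapping the list in extra commas keeps partial matches (like `12` against `123`) from slipping through.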

## 💾 Workspace Backup *(Optional)*

For persistent chat history and configuration:

- Set `HF_USERNAME` to your HuggingFace username.
- Set `HF_TOKEN` to a HuggingFace token with write access.

Optionally set `BACKUP_DATASET_NAME` (default: `huggingclaw-backup`) to choose the HF Dataset name. On first run, HuggingClaw creates (or reuses) the private Dataset repo `<HF_USERNAME>/<BACKUP_DATASET_NAME>`, restores your workspace on startup, and syncs changes every 10 minutes. The workspace is also saved on graceful shutdown, so your data survives restarts.
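Concretely, the backup target is the two values joined with a slash. A quick sketch of how the default resolves (`alice` is a placeholder username):

```shell
unset BACKUP_DATASET_NAME                 # pretend the secret was not set
HF_USERNAME="alice"                       # placeholder username
BACKUP_DATASET_NAME="${BACKUP_DATASET_NAME:-huggingclaw-backup}"
REPO_ID="${HF_USERNAME}/${BACKUP_DATASET_NAME}"
echo "$REPO_ID"                           # → alice/huggingclaw-backup
```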

## ⚙️ Full Configuration Reference

See `.env.example` for the complete settings. Key environment variables:

### Core

| Variable | Description |
|----------|-------------|
| `LLM_API_KEY` | LLM provider API key (e.g. OpenAI, Anthropic) |
| `LLM_MODEL` | Model ID, prefixed with `<provider>/` (provider auto-detected from the prefix) |
| `GATEWAY_TOKEN` | Gateway token for Control UI access (required) |

### Background Services

| Variable | Default | Description |
|----------|---------|-------------|
| `KEEP_ALIVE_INTERVAL` | `300` | Self-ping interval in seconds (`0` to disable) |
| `SYNC_INTERVAL` | `600` | Workspace sync interval in seconds (to the HF Dataset) |

### Security

| Variable | Description |
|----------|-------------|
| `OPENCLAW_PASSWORD` | Optional: enable simple password auth instead of the token |
| `TRUSTED_PROXIES` | Comma-separated IPs of HF proxies (for reverse-proxy auth fixes) |
| `ALLOWED_ORIGINS` | Comma-separated allowed origins for the Control UI (e.g. `https://your-space.hf.space`) |

### Workspace Backup

| Variable | Default | Description |
|----------|---------|-------------|
| `HF_USERNAME` | — | Your HuggingFace username |
| `HF_TOKEN` | — | HF token with write access (for backups) |
| `BACKUP_DATASET_NAME` | `huggingclaw-backup` | Dataset name for the backup repo (auto-created) |
| `WORKSPACE_GIT_USER` | `openclaw@example.com` | Git commit email for workspace commits |
| `WORKSPACE_GIT_NAME` | `OpenClaw Bot` | Git commit name for workspace commits |

### Advanced

| Variable | Default | Description |
|----------|---------|-------------|
| `OPENCLAW_VERSION` | `latest` | Pin a specific OpenClaw version |
| `HEALTH_PORT` | `7861` | Internal health endpoint port |

## 🤖 LLM Providers

HuggingClaw supports **all providers** that OpenClaw supports. Set `LLM_MODEL=<provider>/<model>` and the provider is auto-detected from the prefix. For example:

| Provider | Prefix | Example Model | API Key Source |
|----------|--------|---------------|----------------|
| **Anthropic** | `anthropic/` | `anthropic/claude-sonnet-4-6` | [Anthropic Console](https://console.anthropic.com/) |
| **OpenAI** | `openai/` | `openai/gpt-5.4` | [OpenAI Platform](https://platform.openai.com/) |
| **Google** | `google/` | `google/gemini-2.5-flash` | [AI Studio](https://ai.google.dev/) |
| **DeepSeek** | `deepseek/` | `deepseek/deepseek-v3.2` | [DeepSeek](https://platform.deepseek.com) |
| **xAI (Grok)** | `xai/` | `xai/grok-4` | [xAI](https://console.x.ai) |
| **Mistral** | `mistral/` | `mistral/mistral-large-latest` | [Mistral Console](https://console.mistral.ai) |
| **Moonshot** | `moonshot/` | `moonshot/kimi-k2.5` | [Moonshot](https://platform.moonshot.cn) |
| **Cohere** | `cohere/` | `cohere/command-a` | [Cohere Dashboard](https://dashboard.cohere.com) |
| **Groq** | `groq/` | `groq/mixtral-8x7b-32768` | [Groq](https://console.groq.com) |
| **MiniMax** | `minimax/` | `minimax/minimax-m2.7` | [MiniMax](https://platform.minimax.io) |
| **NVIDIA** | `nvidia/` | `nvidia/nemotron-3-super-120b-a12b` | [NVIDIA API](https://api.nvidia.com) |
| **Z.ai (GLM)** | `zai/` | `zai/glm-5` | [Z.ai](https://z.ai) |
| **Volcengine** | `volcengine/` | `volcengine/doubao-seed-1-8-251228` | [Volcengine](https://www.volcengine.com) |
| **HuggingFace** | `huggingface/` | `huggingface/deepseek-ai/DeepSeek-R1` | [HF Tokens](https://huggingface.co/settings/tokens) |
| **OpenCode Zen** | `opencode/` | `opencode/claude-opus-4-6` | [OpenCode.ai](https://opencode.ai/auth) |
| **OpenCode Go** | `opencode-go/` | `opencode-go/kimi-k2.5` | [OpenCode.ai](https://opencode.ai/auth) |
| **Kilo Gateway** | `kilocode/` | `kilocode/anthropic/claude-opus-4.6` | [Kilo.ai](https://kilo.ai) |
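The auto-detection amounts to splitting `LLM_MODEL` at the first slash. An illustrative sketch, not HuggingClaw's actual code (note that models like `huggingface/deepseek-ai/DeepSeek-R1` keep their remaining slashes):

```shell
model="huggingface/deepseek-ai/DeepSeek-R1"
provider="${model%%/*}"   # everything before the first "/"
name="${model#*/}"        # everything after it (may itself contain "/")
echo "$provider"          # → huggingface
echo "$name"              # → deepseek-ai/DeepSeek-R1
```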

### OpenRouter — 200+ Models with One Key

Get an [OpenRouter](https://openrouter.ai) API key to use *all* of the providers above with a single key. For example:

```bash
LLM_API_KEY=sk-or-v1-xxxxxxxx
LLM_MODEL=openrouter/openai/gpt-5.4
```

Popular options include `openrouter/google/gemini-2.5-flash` and `openrouter/meta-llama/llama-3.3-70b-instruct`.

### Any Other Provider

You can also use any custom provider:

```bash
LLM_API_KEY=your_api_key
LLM_MODEL=provider/model-name
```

The provider prefix in `LLM_MODEL` tells HuggingClaw how to call it. See [OpenClaw Model Providers](https://docs.openclaw.ai/concepts/model-providers) for the full list.

## 💻 Local Development

```bash
git clone https://github.com/somratpro/huggingclaw.git
cd huggingclaw
cp .env.example .env
# Edit .env with your secret values
```

**With Docker:**

```bash
docker build -t huggingclaw .
docker run -p 7860:7860 --env-file .env huggingclaw
```

**Without Docker:**

```bash
npm install -g openclaw@latest
export $(cat .env | xargs)
bash start.sh
```
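Note that `export $(cat .env | xargs)` mangles values containing spaces or quotes; a more robust way to load the file is the shell's allexport mode:

```shell
# Export every variable assigned while sourcing .env (handles quoted values)
set -a
. ./.env
set +a
```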

## 🔌 CLI Access

After deploying, you can connect via the OpenClaw CLI (e.g., to onboard channels or run agents):

```bash
npm install -g openclaw@latest
openclaw channels login --gateway https://YOUR_SPACE_NAME.hf.space
# When prompted, enter your GATEWAY_TOKEN
```

## 🏗️ Architecture

```
HuggingClaw/
├── Dockerfile          # Multi-stage build using pre-built OpenClaw image
├── start.sh            # Config generator, validator, and orchestrator
├── keep-alive.sh       # Self-ping to prevent HF Space sleep
├── workspace-sync.py   # Syncs workspace to HF Datasets (with Git fallback)
├── health-server.js    # /health endpoint for uptime checks
├── dns-fix.js          # DNS-over-HTTPS fallback (for blocked domains)
├── .env.example        # Environment variable reference
└── README.md           # (this file)
```

**Startup sequence:**

1. Validate required secrets (fail fast with a clear error).
2. Check the HF token (warn if expired or missing).
3. Auto-create the backup dataset if missing.
4. Restore the workspace from the HF Dataset.
5. Generate `openclaw.json` from environment variables.
6. Print a startup summary.
7. Launch background tasks (keep-alive, auto-sync).
8. Launch the OpenClaw gateway (start listening).
9. On `SIGTERM`, save the workspace and exit cleanly.
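Step 1 can be pictured as a loop over the required names. This is an illustrative sketch, not the actual `start.sh`:

```shell
# Collect any required secrets that are unset or empty
missing=""
for var in LLM_API_KEY LLM_MODEL GATEWAY_TOKEN; do
  eval "val=\${$var:-}"
  [ -n "$val" ] || missing="$missing $var"
done

if [ -n "$missing" ]; then
  echo "Missing required secrets:$missing" >&2
  # a real startup script would `exit 1` here
fi
```

Failing fast like this turns a cryptic runtime error into an actionable message in the build logs.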

## 🔄 Staying Alive

HuggingClaw keeps the Space awake without external cron tools:

- **Self-ping:** It periodically sends HTTP requests to its own URL (by default every 5 minutes).
- **Health endpoint:** `/health` returns `200 OK` plus uptime info.
- **No external deps:** Fully managed within HF Spaces (no outside pingers or servers).
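The self-ping is roughly the following loop. This is an illustrative sketch, not the actual `keep-alive.sh`; `SPACE_HOST` is the hostname HF injects into the container:

```shell
INTERVAL="${KEEP_ALIVE_INTERVAL:-300}"   # seconds between pings; 0 disables
if [ "$INTERVAL" -eq 0 ]; then
  echo "keep-alive disabled"
else
  echo "pinging https://${SPACE_HOST}/health every ${INTERVAL}s"
  # while true; do curl -fsS "https://${SPACE_HOST}/health" >/dev/null; sleep "$INTERVAL"; done
fi
```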

## 🐛 Troubleshooting

- **Missing secrets:** Ensure `LLM_API_KEY`, `LLM_MODEL`, and `GATEWAY_TOKEN` are set in your Space **Settings → Secrets**.
- **Telegram bot issues:** Verify your `TELEGRAM_BOT_TOKEN`. Check the Space logs for lines like `📱 Enabling Telegram`.
- **Backup restore failing:** Make sure `HF_USERNAME` and `HF_TOKEN` are correct (the token needs write access to your Dataset).
- **Space keeps sleeping:** Check the logs for `Keep-alive` messages and make sure `KEEP_ALIVE_INTERVAL` isn't set to `0`.
- **Auth errors behind a proxy:** Add the logged IPs to `TRUSTED_PROXIES` (they appear in the logs as `remote=x.x.x.x`).
- **Control UI blocked (CORS):** Set `ALLOWED_ORIGINS=https://your-space-name.hf.space`.
- **Version mismatches:** Pin a specific OpenClaw version via the `OPENCLAW_VERSION` secret.

## 🔗 Links

- [OpenClaw Docs](https://docs.openclaw.ai)
- [OpenClaw GitHub](https://github.com/openclaw/openclaw)
- [HuggingFace Spaces Docs](https://huggingface.co/docs/hub/spaces)

## 🤝 Contributing

Contributions are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

## 📄 License

MIT — see [LICENSE](LICENSE) for details.

*Made with ❤️ by [@somratpro](https://github.com/somratpro) for the OpenClaw community.*