fix: update sync interval to 600 seconds and enhance environment variable handling in startup scripts

Files changed:
- README.md (+114 -52)
- hermes-sync.py (+1 -1)
- start.sh (+24 -3)
README.md
CHANGED

@@ -9,9 +9,9 @@ pinned: true
 license: mit
 secrets:
   - name: LLM_API_KEY
-    description: "Your LLM provider API key …
+    description: "Your LLM provider API key for direct providers such as OpenRouter, Anthropic, OpenAI, Google, DeepSeek, xAI, and others."
   - name: LLM_MODEL
-    description: "Model …
+    description: "Model or provider model ID, such as openrouter/anthropic/claude-sonnet-4 or openai/gpt-4o."
   - name: GATEWAY_TOKEN
     description: "Strong token to secure your dashboard and API (generate: openssl rand -hex 32)."
   - name: TELEGRAM_BOT_TOKEN

@@ -19,7 +19,7 @@ secrets:
   - name: TELEGRAM_ALLOWED_USERS
     description: "Comma-separated list of numeric user IDs allowed to use the bot."
   - name: HF_TOKEN
-    description: "Hugging Face token with write access. Used for automatic workspace backup."
+    description: "Hugging Face token with write access. Used for automatic workspace backup and HF providers."
   - name: CLOUDFLARE_WORKERS_TOKEN
     description: "Cloudflare API token for automatic Worker proxy and KeepAlive setup."
 ---

@@ -30,7 +30,7 @@ secrets:
 [](https://huggingface.co/spaces)
 [](https://github.com/NousResearch/hermes-agent)

-**Self-hosted Hermes AI agent gateway …
+**Self-hosted Hermes AI agent gateway for Hugging Face Spaces.** HuggingMes runs [Nous Research Hermes Agent](https://github.com/NousResearch/hermes-agent) on HuggingFace Spaces, giving you a 24/7 personal AI assistant with a management dashboard, persistent HF Dataset backup, and automatic connectivity fixes for blocked outbound traffic. HuggingMes directly wires the startup providers listed below, and it can also use Hermes providers configured through `hermes model` or `config.yaml`.

 ## Table of Contents

@@ -41,22 +41,22 @@ secrets:
 - [📱 Telegram Setup](#-telegram-setup)
 - [🌐 Cloudflare Proxy](#-cloudflare-proxy)
 - [💾 Backup & Persistence](#-backup--persistence)
-- [🔄 Staying Alive](#-staying-alive)
+- [🔄 Staying Alive](#-staying-alive-recommended-on-free-hf-spaces)
 - [🔐 Security & Advanced](#-security--advanced)
 - [💻 Local Development](#-local-development)
-- [🏗️ Architecture](#
+- [🏗️ Architecture](#-architecture)
 - [🐛 Troubleshooting](#-troubleshooting)
 - [🌟 More Projects](#-more-projects)

 ## ✨ Features

-- 🧠 **Hermes Core:** Runs …
-- 🔒 **Secure by Default:** …
-- 🌐 **Built-in Connectivity:** …
-- 📊 **…
-- 💾 **Persistent Backup:** …
-- ⏰ **…
+- 🧠 **Hermes Core:** Runs Hermes Agent for multi-turn chat, tools, memory, and agent workflows.
+- 🔒 **Secure by Default:** Protects the dashboard and API with a single gateway token.
+- 🌐 **Built-in Connectivity:** Adds Cloudflare Worker proxy support for Telegram and other blocked outbound traffic.
+- 📊 **Dashboard:** Real-time view of uptime, sync health, and agent status at `/`.
+- 💾 **Persistent Backup:** Syncs chats, config, and session data to a private HF Dataset.
+- ⏰ **Keep-Alive:** Can provision a cron-triggered Cloudflare Worker to keep the Space awake.
+- 🤖 **Broad Provider Support:** Supports Hermes' native providers, direct API-key providers, OAuth providers, and custom OpenAI-compatible endpoints.

 ## 🚀 Quick Start

@@ -66,68 +66,129 @@ secrets:

 ### Step 2: Add Your Secrets

+In your Space's **Settings → Variables and secrets**, add these under **Secrets**:

-- `LLM_API_KEY` …
-- `LLM_MODEL` …
-- `GATEWAY_TOKEN` …
+- `LLM_API_KEY` - Your provider API key for direct providers.
+- `LLM_MODEL` - The model ID to use, such as `openrouter/anthropic/claude-sonnet-4`, `openai/gpt-4o`, or `google/gemini-2.5-flash`.
+- `GATEWAY_TOKEN` - A strong token to secure the dashboard.
+- `TELEGRAM_BOT_TOKEN` - Telegram bot token from BotFather.
+- `TELEGRAM_ALLOWED_USERS` - Comma-separated numeric Telegram user IDs.
+- `HF_TOKEN` - Hugging Face token with write access for backups and HF providers.
+- `CLOUDFLARE_WORKERS_TOKEN` - Cloudflare token for outbound proxying and keep-alive automation.

-### Step 3: …
+### Step 3: Deploy & Run

+After the Space builds, open it and click **Open Hermes UI** to access the agent interface.

 ## 🔒 Access Control

-Hermes' built-in dashboard is …
+Hermes' built-in dashboard is wrapped by HuggingMes:

-- **Dashboard:** Opening `/app/` requires …
-- **API:** Routes under `/v1/*` …
+- **Dashboard:** Opening `/app/` requires `GATEWAY_TOKEN`.
+- **API:** Routes under `/v1/*` require `Authorization: Bearer <GATEWAY_TOKEN>`.

 ## 🤖 LLM Providers

-HuggingMes …
-| **OpenAI** | `openai/` | `openai/gpt-4o` |
-| **HuggingFace** | `huggingface/` | `huggingface/meta-llama/Llama-3.3-70B-Instruct` |
+HuggingMes supports Hermes providers in two different ways:
+
+- **Direct startup providers:** Set `LLM_API_KEY` and `LLM_MODEL`, and HuggingMes maps them during boot.
+- **Hermes-native providers:** Use `hermes model` after the Space starts, or edit `config.yaml` through the Hermes UI.
+- **Custom OpenAI-compatible endpoints:** Point Hermes at your own `/v1` endpoint.
+
+### Direct startup providers
+
+These are the providers that HuggingMes maps directly from `LLM_MODEL` and `LLM_API_KEY` at startup.
+
+| Provider | Prefix | Example `LLM_MODEL` | Key env |
+| :--- | :--- | :--- | :--- |
+| OpenRouter | `openrouter/` | `openrouter/anthropic/claude-sonnet-4` | `LLM_API_KEY` -> `OPENROUTER_API_KEY` |
+| Hugging Face Inference Providers | `huggingface/` or `hf/` | `huggingface/Qwen/Qwen3-235B-A22B-Thinking-2507` | `LLM_API_KEY` -> `HF_TOKEN` |
+| AI Gateway / Vercel AI Gateway | `ai-gateway/` or `vercel-ai-gateway/` | `ai-gateway/openai/gpt-4o` | `LLM_API_KEY` -> `AI_GATEWAY_API_KEY` |
+| Anthropic | `anthropic/` | `anthropic/claude-sonnet-4-6` | `LLM_API_KEY` -> `ANTHROPIC_API_KEY` |
+| OpenAI / OpenAI Codex | `openai/` or `openai-codex/` | `openai/gpt-4o` | `LLM_API_KEY` -> `OPENAI_API_KEY` |
+| Google Gemini | `google/` or `gemini/` | `google/gemini-2.5-flash` | `LLM_API_KEY` -> `GOOGLE_API_KEY` and `GEMINI_API_KEY` |
+| DeepSeek | `deepseek/` | `deepseek/deepseek-chat` | `LLM_API_KEY` -> `DEEPSEEK_API_KEY` |
+| Kimi / Moonshot | `kimi-coding/` or `moonshot/` | `kimi-coding/kimi-for-coding` | `LLM_API_KEY` -> `KIMI_API_KEY` |
+| Kimi / Moonshot (China) | `kimi-coding-cn/` | `kimi-coding-cn/kimi-k2.5` | `LLM_API_KEY` -> `KIMI_CN_API_KEY` |
+| Arcee AI | `arcee/` | `arcee/trinity-large-thinking` | `LLM_API_KEY` -> `ARCEEAI_API_KEY` |
+| GMI Cloud | `gmi/` | `gmi/zai-org/GLM-5.1-FP8` | `LLM_API_KEY` -> `GMI_API_KEY` |
+| MiniMax | `minimax/` | `minimax/MiniMax-M2.7` | `LLM_API_KEY` -> `MINIMAX_API_KEY` |
+| MiniMax (China) | `minimax-cn/` | `minimax-cn/MiniMax-M2.7` | `LLM_API_KEY` -> `MINIMAX_CN_API_KEY` |
+| Alibaba Cloud | `alibaba/` | `alibaba/qwen3.5-plus` | `LLM_API_KEY` -> `DASHSCOPE_API_KEY` |
+| Alibaba Coding Plan | `alibaba-coding-plan/` | `alibaba-coding-plan/qwen3-coder-plus` | `LLM_API_KEY` -> `DASHSCOPE_API_KEY` |
+| Xiaomi MiMo | `xiaomi/` | `xiaomi/mimo-v2-pro` | `LLM_API_KEY` -> `XIAOMI_API_KEY` |
+| Tencent TokenHub | `tencent-tokenhub/` | `tencent-tokenhub/hy3-preview` | `LLM_API_KEY` -> `TOKENHUB_API_KEY` |
+| Z.ai / GLM | `zai/`, `z-ai/`, `z.ai/`, or `glm/` | `zai/glm-5` | `LLM_API_KEY` -> `GLM_API_KEY` |
+| NVIDIA NIM | `nvidia/` | `nvidia/nemotron-3-super-120b-a12b` | `LLM_API_KEY` -> `NVIDIA_API_KEY` |
+| xAI / Grok | `xai/` or `grok/` | `xai/grok-4-1-fast-reasoning` | `LLM_API_KEY` -> `XAI_API_KEY` |
+| Kilo Code | `kilocode/` | `kilocode/<model-id>` | `LLM_API_KEY` -> `KILOCODE_API_KEY` |
+| OpenCode Zen | `opencode-zen/` | `opencode-zen/<model-id>` | `LLM_API_KEY` -> `OPENCODE_ZEN_API_KEY` |
+| OpenCode Go | `opencode-go/` | `opencode-go/<model-id>` | `LLM_API_KEY` -> `OPENCODE_GO_API_KEY` |
+
+### Hermes-native providers and OAuth flows
+
+These providers are supported by Hermes and can be used in HuggingMes once the agent config is set through `hermes model` or `config.yaml`. HuggingMes does not auto-map them from `LLM_MODEL` at boot unless Hermes itself handles that provider.
+
+| Provider | How to use | Notes |
+| :--- | :--- | :--- |
+| Nous Portal | `hermes model` | Subscription-based OAuth provider in Hermes |
+| OpenAI Codex | `hermes model` | ChatGPT OAuth / Codex models |
+| GitHub Copilot | `hermes model` | Uses `COPILOT_GITHUB_TOKEN`, `GH_TOKEN`, or `gh auth token` |
+| GitHub Copilot ACP | `hermes model` | Spawns the Copilot CLI backend |
+| Anthropic (OAuth / Claude Code) | `hermes model` | Also supports `ANTHROPIC_API_KEY` |
+| Google Gemini (OAuth) | `hermes model` | Browser OAuth flow, including free-tier Gemini OAuth |
+| Qwen Portal (OAuth) | `hermes model` | Alibaba Qwen portal OAuth login |
+| MiniMax (OAuth) | `hermes model` | Portal login for MiniMax models |
+| Hugging Face Inference Providers | `hermes model` | Unified HF provider routing with model suffixes like `:fastest` and `:cheapest` |
+| AWS Bedrock | `hermes model` or `config.yaml` | Uses AWS credentials chain, not an API key |
+| Ollama Cloud | `hermes model` | Managed Ollama catalog with `OLLAMA_API_KEY` |
+| Arcee AI | `hermes model` | First-class Hermes provider |
+| GMI Cloud | `hermes model` | First-class Hermes provider |
+| Alibaba Cloud / DashScope | `hermes model` | First-class Hermes provider for Qwen models |
+| Tencent TokenHub | `hermes model` | First-class Hermes provider |
+| Custom endpoint | `hermes model` or `config.yaml` | Any OpenAI-compatible endpoint |
+
+### Custom and self-hosted endpoints
+
+HuggingMes also works with any OpenAI-compatible server. Common examples include local Ollama, LM Studio, llama.cpp / llama-server, vLLM, SGLang, LocalAI, Jan, LiteLLM, ClawRouter, Together AI, Groq, Fireworks AI, Azure OpenAI, and similar services.
+
+Use either the Hermes model wizard or a direct `config.yaml` entry with a `base_url`, `model`, and optional API key. For local servers that do not require auth, leave the key empty.
+
+### Recommended provider choices
+
+- **Just want it to work:** OpenRouter or Hermes' Nous Portal.
+- **Want local models:** Ollama, LM Studio, llama.cpp, vLLM, or SGLang through a custom endpoint.
+- **Need cloud APIs:** OpenAI, Anthropic, Google Gemini, DeepSeek, xAI, Hugging Face, or any other direct provider above.
+- **Need routing or fallback:** Use a custom endpoint such as LiteLLM or ClawRouter.
+
+## 📱 Telegram Setup

 To use Hermes via Telegram:

 1. Add `TELEGRAM_BOT_TOKEN` from [@BotFather](https://t.me/BotFather).
-2. Add `TELEGRAM_ALLOWED_USERS` …
-3. Add `CLOUDFLARE_WORKERS_TOKEN` …
+2. Add `TELEGRAM_ALLOWED_USERS` if you want to restrict access.
+3. Add `CLOUDFLARE_WORKERS_TOKEN` if you need automatic outbound proxying for Telegram API traffic.

 ## 🌐 Cloudflare Proxy

-1. Add `CLOUDFLARE_WORKERS_TOKEN` as a Space secret.
-2. Restart the Space.
-
-HuggingMes will auto-provision a Worker proxy for Telegram and other restricted traffic, and set up a keep-awake cron.
+Hugging Face Spaces often block outbound calls to APIs used by Telegram and some provider backends. HuggingMes can provision a Cloudflare Worker proxy automatically when you add `CLOUDFLARE_WORKERS_TOKEN`.

-## 💾 Backup & Persistence
+## 💾 Backup & Persistence *(Optional)*

-Set `HF_TOKEN` with …
+Set `HF_TOKEN` with write access to enable backup. HuggingMes syncs workspace data to a private HF Dataset named `huggingmes-backup` every 600 seconds by default.

-## 🔄 Staying Alive
+## 🔄 Staying Alive *(Recommended on free HF Spaces)*

+With `CLOUDFLARE_WORKERS_TOKEN` set, HuggingMes can create a keep-alive worker that pings the Space's `/health` endpoint on a schedule so the free tier stays awake longer.

-## 🔐 Security & Advanced
+## 🔐 Security & Advanced *(Optional)*

 | Variable | Default | Description |
 | :--- | :--- | :--- |
 | `GATEWAY_TOKEN` | — | Token for dashboard and API auth |
-| `HF_TOKEN` | — | HF token with write access for backups |
-| `CLOUDFLARE_WORKERS_TOKEN` | — | Cloudflare API token for …
-| `SYNC_INTERVAL` | `…
+| `HF_TOKEN` | — | HF token with write access for backups and HF providers |
+| `CLOUDFLARE_WORKERS_TOKEN` | — | Cloudflare API token for proxying and keep-awake |
+| `SYNC_INTERVAL` | `600` | Backup frequency in seconds |
 | `CLOUDFLARE_KEEPALIVE_ENABLED` | `true` | Set `false` to disable keep-awake worker |
 | `TELEGRAM_MODE` | `webhook` | `webhook` or `polling` |

@@ -144,15 +205,16 @@ docker compose up --build
 - **Dashboard (`/`)**: Real-time management and monitoring.
 - **Hermes App (`/app/`)**: Secure proxied access to the Hermes UI.
 - **API (`/v1/*`)**: Proxied OpenAI-compatible agent API.
-- **Health Check (`/health`)**: Readiness probe for HF and …
+- **Health Check (`/health`)**: Readiness probe for HF and keep-alive.
 - **Sync Engine**: Python background task for HF Dataset persistence.

 ## 🐛 Troubleshooting

-- **Telegram bot not responding:** Ensure `CLOUDFLARE_WORKERS_TOKEN` is set …
-- **Authentication failed:** Clear …
-- **Data not persisting:** Ensure `HF_TOKEN` has …
+- **Telegram bot not responding:** Ensure `CLOUDFLARE_WORKERS_TOKEN` is set and check logs for the proxy setup step.
+- **Authentication failed:** Clear browser cookies or use an incognito window if `GATEWAY_TOKEN` changed.
+- **Data not persisting:** Ensure `HF_TOKEN` has write access.
+- **Provider not showing up:** If it is a Hermes-native provider, run `hermes model` and complete the provider-specific setup there. If it is a custom endpoint, verify the `base_url` exposes `/v1/models` or `/v1/chat/completions`.
+- **Space keeps sleeping:** Add `CLOUDFLARE_WORKERS_TOKEN` to enable automatic keep-awake monitoring.

 ## 🌟 More Projects
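The README's new provider tables rely on a prefix convention: the segment of `LLM_MODEL` before the first `/` selects which provider key variable receives `LLM_API_KEY`. As a reviewer sanity check, that convention can be sketched in shell; the `map_provider_env` function and its fallback branch are illustrative, not part of the repo (the real logic lives inline in start.sh's `case` block):

```shell
#!/bin/sh
# Illustrative sketch of the prefix -> key-env mapping described in the README diff.
map_provider_env() {
  model="$1"
  prefix="${model%%/*}"          # text before the first "/"
  case "$prefix" in
    openrouter)                echo "OPENROUTER_API_KEY" ;;
    huggingface|hf)            echo "HF_TOKEN" ;;
    openai|openai-codex)       echo "OPENAI_API_KEY" ;;
    kimi-coding-cn)            echo "KIMI_CN_API_KEY" ;;
    minimax-cn)                echo "MINIMAX_CN_API_KEY" ;;
    tencent-tokenhub|tokenhub) echo "TOKENHUB_API_KEY" ;;
    *)                         echo "unknown" ;;        # hypothetical fallback
  esac
}
map_provider_env "openrouter/anthropic/claude-sonnet-4"   # OPENROUTER_API_KEY
map_provider_env "minimax-cn/MiniMax-M2.7"                # MINIMAX_CN_API_KEY
```

This only demonstrates a few rows from the table; the full mapping is in the start.sh diff below the file sections.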
hermes-sync.py
CHANGED

@@ -23,7 +23,7 @@ logging.getLogger("huggingface_hub").setLevel(logging.ERROR)

 HERMES_HOME = Path(os.environ.get("HERMES_HOME", "/opt/data"))
 STATUS_FILE = Path("/tmp/huggingmes-sync-status.json")
-INTERVAL = int(os.environ.get("SYNC_INTERVAL", "…
+INTERVAL = int(os.environ.get("SYNC_INTERVAL", "600"))
 INITIAL_DELAY = int(os.environ.get("SYNC_START_DELAY", "10"))
 HF_TOKEN = os.environ.get("HF_TOKEN", "").strip()
 HF_USERNAME = os.environ.get("HF_USERNAME", "").strip()
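Both changed defaults follow the same env-with-fallback pattern: the Python side reads `SYNC_INTERVAL` via `os.environ.get("SYNC_INTERVAL", "600")`, and the shell side uses `${SYNC_INTERVAL:-600}`. A minimal sketch of the shell expansion's behavior:

```shell
#!/bin/sh
# ${VAR:-default} substitutes the default when VAR is unset OR empty,
# mirroring the new 600-second sync default.
unset SYNC_INTERVAL
echo "interval=${SYNC_INTERVAL:-600}"   # interval=600
SYNC_INTERVAL=120
echo "interval=${SYNC_INTERVAL:-600}"   # interval=120
```

Because `:-` (rather than `-`) is used, an explicitly empty `SYNC_INTERVAL=""` also falls back to 600, which keeps `int(...)` on the Python side from receiving an empty string.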
start.sh
CHANGED

@@ -14,7 +14,7 @@ PUBLIC_PORT="${PORT:-7861}"
 GATEWAY_API_PORT="${API_SERVER_PORT:-8642}"
 DASHBOARD_PORT="${DASHBOARD_PORT:-9119}"
 TELEGRAM_WEBHOOK_PORT="${TELEGRAM_WEBHOOK_PORT:-8765}"
-SYNC_INTERVAL="${SYNC_INTERVAL:-…
+SYNC_INTERVAL="${SYNC_INTERVAL:-600}"
 BACKUP_DATASET="${BACKUP_DATASET_NAME:-huggingmes-backup}"
 CF_PROXY_ENV_FILE="/tmp/huggingmes-cloudflare-proxy.env"

@@ -118,7 +118,7 @@ case "$MODEL_PREFIX" in
     [ "$PROVIDER_FOR_CONFIG" = "auto" ] && PROVIDER_FOR_CONFIG="openrouter"
     MODEL_FOR_CONFIG="${MODEL_INPUT#openrouter/}"
     ;;
-  huggingface)
+  huggingface|hf)
     [ -n "$LLM_API_KEY" ] && export HF_TOKEN="${HF_TOKEN:-$LLM_API_KEY}"
     [ "$PROVIDER_FOR_CONFIG" = "auto" ] && PROVIDER_FOR_CONFIG="huggingface"
     MODEL_FOR_CONFIG="${MODEL_INPUT#huggingface/}"

@@ -145,15 +145,36 @@ case "$MODEL_PREFIX" in
   kimi-coding|moonshot)
     [ -n "$LLM_API_KEY" ] && export KIMI_API_KEY="${KIMI_API_KEY:-$LLM_API_KEY}"
     ;;
+  kimi-coding-cn|moonshot-cn|kimi-cn)
+    [ -n "$LLM_API_KEY" ] && export KIMI_CN_API_KEY="${KIMI_CN_API_KEY:-$LLM_API_KEY}"
+    ;;
   minimax)
     [ -n "$LLM_API_KEY" ] && export MINIMAX_API_KEY="${MINIMAX_API_KEY:-$LLM_API_KEY}"
     ;;
+  minimax-cn)
+    [ -n "$LLM_API_KEY" ] && export MINIMAX_CN_API_KEY="${MINIMAX_CN_API_KEY:-$LLM_API_KEY}"
+    ;;
   xiaomi)
     [ -n "$LLM_API_KEY" ] && export XIAOMI_API_KEY="${XIAOMI_API_KEY:-$LLM_API_KEY}"
     ;;
   zai|z-ai|z.ai|glm)
     [ -n "$LLM_API_KEY" ] && export GLM_API_KEY="${GLM_API_KEY:-$LLM_API_KEY}"
     ;;
+  arcee|arcee-ai|arceeai)
+    [ -n "$LLM_API_KEY" ] && export ARCEEAI_API_KEY="${ARCEEAI_API_KEY:-$LLM_API_KEY}"
+    ;;
+  gmi|gmi-cloud|gmicloud)
+    [ -n "$LLM_API_KEY" ] && export GMI_API_KEY="${GMI_API_KEY:-$LLM_API_KEY}"
+    ;;
+  alibaba)
+    [ -n "$LLM_API_KEY" ] && export DASHSCOPE_API_KEY="${DASHSCOPE_API_KEY:-$LLM_API_KEY}"
+    ;;
+  alibaba-coding-plan|alibaba_coding)
+    [ -n "$LLM_API_KEY" ] && export DASHSCOPE_API_KEY="${DASHSCOPE_API_KEY:-$LLM_API_KEY}"
+    ;;
+  tencent-tokenhub|tencent|tokenhub|tencentmaas)
+    [ -n "$LLM_API_KEY" ] && export TOKENHUB_API_KEY="${TOKENHUB_API_KEY:-$LLM_API_KEY}"
+    ;;
   nvidia)
     [ -n "$LLM_API_KEY" ] && export NVIDIA_API_KEY="${NVIDIA_API_KEY:-$LLM_API_KEY}"
     ;;

@@ -261,7 +282,7 @@ else
   echo "Telegram : not configured"
 fi
 if [ -n "${HF_TOKEN:-}" ]; then
-  echo "Backup   : ${BACKUP_DATASET} (every ${SYNC_INTERVAL:-…
+  echo "Backup   : ${BACKUP_DATASET} (every ${SYNC_INTERVAL:-600}s)"
 else
   echo "Backup   : disabled"
 fi
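Every new provider branch in start.sh uses the same guarded export: act only when `LLM_API_KEY` is present, and never clobber a key the user already set, because `${VAR:-$LLM_API_KEY}` keeps an existing non-empty value. A small sketch with illustrative key values:

```shell
#!/bin/sh
# Same guarded-export pattern as the new case branches in start.sh.
LLM_API_KEY="sk-demo"             # illustrative value
DASHSCOPE_API_KEY="sk-existing"   # user already provided this key
[ -n "$LLM_API_KEY" ] && export DASHSCOPE_API_KEY="${DASHSCOPE_API_KEY:-$LLM_API_KEY}"
echo "$DASHSCOPE_API_KEY"         # sk-existing (not overwritten)

unset GMI_API_KEY                 # no provider key yet
[ -n "$LLM_API_KEY" ] && export GMI_API_KEY="${GMI_API_KEY:-$LLM_API_KEY}"
echo "$GMI_API_KEY"               # sk-demo (falls back to LLM_API_KEY)
```

This is why the two Alibaba branches can safely share `DASHSCOPE_API_KEY`: whichever branch runs, an explicitly configured key always wins over the generic `LLM_API_KEY`.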