somratpro committed
Commit 9277464 · 1 Parent(s): fdc1270

fix: update sync interval to 600 seconds and enhance environment variable handling in startup scripts

Files changed (3):
  1. README.md +114 -52
  2. hermes-sync.py +1 -1
  3. start.sh +24 -3
README.md CHANGED

@@ -9,9 +9,9 @@ pinned: true
 license: mit
 secrets:
 - name: LLM_API_KEY
-  description: "Your LLM provider API key (e.g. Anthropic, OpenAI, Google, OpenRouter)."
+  description: "Your LLM provider API key for direct providers such as OpenRouter, Anthropic, OpenAI, Google, DeepSeek, xAI, and others."
 - name: LLM_MODEL
-  description: "Model ID to use, e.g. google/gemini-2.0-flash or openai/gpt-4o."
+  description: "Model or provider model ID, such as openrouter/anthropic/claude-sonnet-4 or openai/gpt-4o."
 - name: GATEWAY_TOKEN
   description: "Strong token to secure your dashboard and API (generate: openssl rand -hex 32)."
 - name: TELEGRAM_BOT_TOKEN
@@ -19,7 +19,7 @@ secrets:
 - name: TELEGRAM_ALLOWED_USERS
   description: "Comma-separated list of numeric user IDs allowed to use the bot."
 - name: HF_TOKEN
-  description: "Hugging Face token with write access. Used for automatic workspace backup."
+  description: "Hugging Face token with write access. Used for automatic workspace backup and HF providers."
 - name: CLOUDFLARE_WORKERS_TOKEN
   description: "Cloudflare API token for automatic Worker proxy and KeepAlive setup."
 ---
@@ -30,7 +30,7 @@ secrets:
 [![HF Space](https://img.shields.io/badge/🤗%20HuggingFace-Space-blue?style=flat-square)](https://huggingface.co/spaces)
 [![Hermes](https://img.shields.io/badge/Hermes-Agent-indigo?style=flat-square)](https://github.com/NousResearch/hermes-agent)
 
-**Self-hosted Hermes AI agent gateway — free, no server needed.** HuggingMes runs [Nous Research Hermes Agent](https://github.com/NousResearch/hermes-agent) on HuggingFace Spaces, providing a 24/7 personal AI assistant. It includes a premium management dashboard, automatic persistent backup to HF Datasets, and built-in connectivity fixes to bypass platform restrictions. Deploy in minutes on the free HF Spaces tier with full data persistence.
+**Self-hosted Hermes AI agent gateway for Hugging Face Spaces.** HuggingMes runs [Nous Research Hermes Agent](https://github.com/NousResearch/hermes-agent) on HuggingFace Spaces, giving you a 24/7 personal AI assistant with a management dashboard, persistent HF Dataset backup, and automatic connectivity fixes for blocked outbound traffic. HuggingMes directly wires the startup providers listed below, and it can also use Hermes providers configured through `hermes model` or `config.yaml`.
 
 ## Table of Contents
 
@@ -41,22 +41,22 @@ secrets:
 - [📱 Telegram Setup](#-telegram-setup)
 - [🌐 Cloudflare Proxy](#-cloudflare-proxy)
 - [💾 Backup & Persistence](#-backup--persistence)
-- [💓 Staying Alive](#-staying-alive)
+- [💓 Staying Alive](#-staying-alive-recommended-on-free-hf-spaces)
 - [🔐 Security & Advanced](#-security--advanced)
 - [💻 Local Development](#-local-development)
-- [🏗️ Architecture](#️-architecture)
+- [🏗️ Architecture](#-architecture)
 - [🐛 Troubleshooting](#-troubleshooting)
 - [🌟 More Projects](#-more-projects)
 
 ## ✨ Features
 
-- 🧠 **Hermes Core:** Runs the powerful Hermes agent framework for multi-modal chat and tool use.
-- 🔐 **Secure by Default:** Adds a custom auth layer to protect the Hermes dashboard and API routes.
-- 🌐 **Built-in Connectivity:** Includes transparent outbound proxying via Cloudflare Workers for Telegram, Google APIs, and more.
-- 📊 **Premium Dashboard:** Beautiful Web UI at `/` for real-time monitoring of uptime, sync health, and agent status.
-- 💾 **Persistent Backup:** Automatically syncs agent state, chats, and config to a private HF Dataset.
-- ⏰ **Easy Keep-Alive:** Uses `CLOUDFLARE_WORKERS_TOKEN` to automatically set up a cron-triggered keep-awake worker at boot.
-- 🐳 **Optimized Infrastructure:** Minimal resource usage with clean startup logs and production-ready proxying.
+- 🧠 **Hermes Core:** Runs Hermes Agent for multi-turn chat, tools, memory, and agent workflows.
+- 🔐 **Secure by Default:** Protects the dashboard and API with a single gateway token.
+- 🌐 **Built-in Connectivity:** Adds Cloudflare Worker proxy support for Telegram and other blocked outbound traffic.
+- 📊 **Dashboard:** Real-time view of uptime, sync health, and agent status at `/`.
+- 💾 **Persistent Backup:** Syncs chats, config, and session data to a private HF Dataset.
+- ⏰ **Keep-Alive:** Can provision a cron-triggered Cloudflare Worker to keep the Space awake.
+- 🤖 **Broad Provider Support:** Supports Hermes' native providers, direct API-key providers, OAuth providers, and custom OpenAI-compatible endpoints.
 
 ## 🚀 Quick Start
 
@@ -66,68 +66,129 @@ secrets:
 
 ### Step 2: Add Your Secrets
 
-Navigate to your new Space's **Settings → Variables and secrets**, and add the following three under **Secrets**:
+In your Space's **Settings → Variables and secrets**, add these under **Secrets**:
 
-- `LLM_API_KEY` – Your provider API key (e.g., Anthropic, OpenAI, OpenRouter).
-- `LLM_MODEL` – The model ID string (e.g., `google/gemini-2.0-flash` or `openai/gpt-4o`).
-- `GATEWAY_TOKEN` – A custom password to secure your dashboard.
+- `LLM_API_KEY` - Your provider API key for direct providers.
+- `LLM_MODEL` - The model ID to use, such as `openrouter/anthropic/claude-sonnet-4`, `openai/gpt-4o`, or `google/gemini-2.5-flash`.
+- `GATEWAY_TOKEN` - A strong token to secure the dashboard.
+- `TELEGRAM_BOT_TOKEN` - Telegram bot token from BotFather.
+- `TELEGRAM_ALLOWED_USERS` - Comma-separated numeric Telegram user IDs.
+- `HF_TOKEN` - Hugging Face token with write access for backups and HF providers.
+- `CLOUDFLARE_WORKERS_TOKEN` - Cloudflare token for outbound proxying and keep-alive automation.
 
-### Step 3: Access Your Dashboard
+### Step 3: Deploy & Run
 
-Once the build is complete, visit your Space's public URL. You will see the HuggingMes management dashboard. Click **Open Hermes UI** and enter your `GATEWAY_TOKEN` to access the agent interface.
+After the Space builds, open it and click **Open Hermes UI** to access the agent interface.
 
 ## 🔐 Access Control
 
-Hermes' built-in dashboard is local-first. HuggingMes adds a secure wrapper:
+Hermes' built-in dashboard is wrapped by HuggingMes:
 
-- **Dashboard:** Opening `/app/` requires your `GATEWAY_TOKEN`.
-- **API:** Routes under `/v1/*` (OpenAI-compatible) require `Authorization: Bearer <GATEWAY_TOKEN>`.
+- **Dashboard:** Opening `/app/` requires `GATEWAY_TOKEN`.
+- **API:** Routes under `/v1/*` require `Authorization: Bearer <GATEWAY_TOKEN>`.
 
 ## 🤖 LLM Providers
 
-HuggingMes automatically maps your `LLM_MODEL` and `LLM_API_KEY` to the correct Hermes configuration.
+HuggingMes supports Hermes providers in two different ways:
 
-| Provider | Prefix | Example `LLM_MODEL` |
-| :--- | :--- | :--- |
-| **Google** | `google/` | `google/gemini-2.0-flash` |
-| **OpenRouter** | `openrouter/` | `openrouter/anthropic/claude-3.5-sonnet` |
-| **Anthropic** | `anthropic/` | `anthropic/claude-3-opus-latest` |
-| **OpenAI** | `openai/` | `openai/gpt-4o` |
-| **HuggingFace** | `huggingface/` | `huggingface/meta-llama/Llama-3.3-70B-Instruct` |
+- **Direct startup providers:** Set `LLM_API_KEY` and `LLM_MODEL`, and HuggingMes maps them during boot.
+- **Hermes-native providers:** Use `hermes model` after the Space starts, or edit `config.yaml` through the Hermes UI.
+- **Custom OpenAI-compatible endpoints:** Point Hermes at your own `/v1` endpoint.
+
+### Direct startup providers
+
+These are the providers that HuggingMes maps directly from `LLM_MODEL` and `LLM_API_KEY` at startup.
+
+| Provider | Prefix | Example `LLM_MODEL` | Key env |
+| :--- | :--- | :--- | :--- |
+| OpenRouter | `openrouter/` | `openrouter/anthropic/claude-sonnet-4` | `LLM_API_KEY` -> `OPENROUTER_API_KEY` |
+| Hugging Face Inference Providers | `huggingface/` or `hf/` | `huggingface/Qwen/Qwen3-235B-A22B-Thinking-2507` | `LLM_API_KEY` -> `HF_TOKEN` |
+| AI Gateway / Vercel AI Gateway | `ai-gateway/` or `vercel-ai-gateway/` | `ai-gateway/openai/gpt-4o` | `LLM_API_KEY` -> `AI_GATEWAY_API_KEY` |
+| Anthropic | `anthropic/` | `anthropic/claude-sonnet-4-6` | `LLM_API_KEY` -> `ANTHROPIC_API_KEY` |
+| OpenAI / OpenAI Codex | `openai/` or `openai-codex/` | `openai/gpt-4o` | `LLM_API_KEY` -> `OPENAI_API_KEY` |
+| Google Gemini | `google/` or `gemini/` | `google/gemini-2.5-flash` | `LLM_API_KEY` -> `GOOGLE_API_KEY` and `GEMINI_API_KEY` |
+| DeepSeek | `deepseek/` | `deepseek/deepseek-chat` | `LLM_API_KEY` -> `DEEPSEEK_API_KEY` |
+| Kimi / Moonshot | `kimi-coding/` or `moonshot/` | `kimi-coding/kimi-for-coding` | `LLM_API_KEY` -> `KIMI_API_KEY` |
+| Kimi / Moonshot (China) | `kimi-coding-cn/` | `kimi-coding-cn/kimi-k2.5` | `LLM_API_KEY` -> `KIMI_CN_API_KEY` |
+| Arcee AI | `arcee/` | `arcee/trinity-large-thinking` | `LLM_API_KEY` -> `ARCEEAI_API_KEY` |
+| GMI Cloud | `gmi/` | `gmi/zai-org/GLM-5.1-FP8` | `LLM_API_KEY` -> `GMI_API_KEY` |
+| MiniMax | `minimax/` | `minimax/MiniMax-M2.7` | `LLM_API_KEY` -> `MINIMAX_API_KEY` |
+| MiniMax (China) | `minimax-cn/` | `minimax-cn/MiniMax-M2.7` | `LLM_API_KEY` -> `MINIMAX_CN_API_KEY` |
+| Alibaba Cloud | `alibaba/` | `alibaba/qwen3.5-plus` | `LLM_API_KEY` -> `DASHSCOPE_API_KEY` |
+| Alibaba Coding Plan | `alibaba-coding-plan/` | `alibaba-coding-plan/qwen3-coder-plus` | `LLM_API_KEY` -> `DASHSCOPE_API_KEY` |
+| Xiaomi MiMo | `xiaomi/` | `xiaomi/mimo-v2-pro` | `LLM_API_KEY` -> `XIAOMI_API_KEY` |
+| Tencent TokenHub | `tencent-tokenhub/` | `tencent-tokenhub/hy3-preview` | `LLM_API_KEY` -> `TOKENHUB_API_KEY` |
+| Z.ai / GLM | `zai/`, `z-ai/`, `z.ai/`, or `glm/` | `zai/glm-5` | `LLM_API_KEY` -> `GLM_API_KEY` |
+| NVIDIA NIM | `nvidia/` | `nvidia/nemotron-3-super-120b-a12b` | `LLM_API_KEY` -> `NVIDIA_API_KEY` |
+| xAI / Grok | `xai/` or `grok/` | `xai/grok-4-1-fast-reasoning` | `LLM_API_KEY` -> `XAI_API_KEY` |
+| Kilo Code | `kilocode/` | `kilocode/<model-id>` | `LLM_API_KEY` -> `KILOCODE_API_KEY` |
+| OpenCode Zen | `opencode-zen/` | `opencode-zen/<model-id>` | `LLM_API_KEY` -> `OPENCODE_ZEN_API_KEY` |
+| OpenCode Go | `opencode-go/` | `opencode-go/<model-id>` | `LLM_API_KEY` -> `OPENCODE_GO_API_KEY` |
+
+### Hermes-native providers and OAuth flows
+
+These providers are supported by Hermes and can be used in HuggingMes once the agent config is set through `hermes model` or `config.yaml`. HuggingMes does not auto-map them from `LLM_MODEL` at boot unless Hermes itself handles that provider.
+
+| Provider | How to use | Notes |
+| :--- | :--- | :--- |
+| Nous Portal | `hermes model` | Subscription-based OAuth provider in Hermes |
+| OpenAI Codex | `hermes model` | ChatGPT OAuth / Codex models |
+| GitHub Copilot | `hermes model` | Uses `COPILOT_GITHUB_TOKEN`, `GH_TOKEN`, or `gh auth token` |
+| GitHub Copilot ACP | `hermes model` | Spawns the Copilot CLI backend |
+| Anthropic (OAuth / Claude Code) | `hermes model` | Also supports `ANTHROPIC_API_KEY` |
+| Google Gemini (OAuth) | `hermes model` | Browser OAuth flow, including free-tier Gemini OAuth |
+| Qwen Portal (OAuth) | `hermes model` | Alibaba Qwen portal OAuth login |
+| MiniMax (OAuth) | `hermes model` | Portal login for MiniMax models |
+| Hugging Face Inference Providers | `hermes model` | Unified HF provider routing with model suffixes like `:fastest` and `:cheapest` |
+| AWS Bedrock | `hermes model` or `config.yaml` | Uses AWS credentials chain, not an API key |
+| Ollama Cloud | `hermes model` | Managed Ollama catalog with `OLLAMA_API_KEY` |
+| Arcee AI | `hermes model` | First-class Hermes provider |
+| GMI Cloud | `hermes model` | First-class Hermes provider |
+| Alibaba Cloud / DashScope | `hermes model` | First-class Hermes provider for Qwen models |
+| Tencent TokenHub | `hermes model` | First-class Hermes provider |
+| Custom endpoint | `hermes model` or `config.yaml` | Any OpenAI-compatible endpoint |
+
+### Custom and self-hosted endpoints
+
+HuggingMes also works with any OpenAI-compatible server. Common examples include local Ollama, LM Studio, llama.cpp / llama-server, vLLM, SGLang, LocalAI, Jan, LiteLLM, ClawRouter, Together AI, Groq, Fireworks AI, Azure OpenAI, and similar services.
+
+Use either the Hermes model wizard or a direct `config.yaml` entry with a `base_url`, `model`, and optional API key. For local servers that do not require auth, leave the key empty.
+
+### Recommended provider choices
+
+- **Just want it to work:** OpenRouter or Hermes' Nous Portal.
+- **Want local models:** Ollama, LM Studio, llama.cpp, vLLM, or SGLang through a custom endpoint.
+- **Need cloud APIs:** OpenAI, Anthropic, Google Gemini, DeepSeek, xAI, Hugging Face, or any other direct provider above.
+- **Need routing or fallback:** Use a custom endpoint such as LiteLLM or ClawRouter.
 
-## 📱 Telegram Setup *(Optional)*
+## 📱 Telegram Setup
 
 To use Hermes via Telegram:
 
 1. Add `TELEGRAM_BOT_TOKEN` from [@BotFather](https://t.me/BotFather).
-2. Add `TELEGRAM_ALLOWED_USERS` (comma-separated numeric IDs) to restrict access.
-3. Add `CLOUDFLARE_WORKERS_TOKEN` to bypass HF network restrictions automatically.
+2. Add `TELEGRAM_ALLOWED_USERS` if you want to restrict access.
+3. Add `CLOUDFLARE_WORKERS_TOKEN` if you need automatic outbound proxying for Telegram API traffic.
 
 ## 🌐 Cloudflare Proxy
 
-HuggingFace Spaces often block outbound connections to external APIs. HuggingMes handles this automatically:
-
-1. Add `CLOUDFLARE_WORKERS_TOKEN` as a Space secret.
-2. Restart the Space.
-
-HuggingMes will auto-provision a Worker proxy for Telegram and other restricted traffic, and set up a keep-awake cron.
+Hugging Face Spaces often block outbound calls to APIs used by Telegram and some provider backends. HuggingMes can provision a Cloudflare Worker proxy automatically when you add `CLOUDFLARE_WORKERS_TOKEN`.
 
-## 💾 Backup & Persistence
+## 💾 Backup & Persistence *(Optional)*
 
-Set `HF_TOKEN` with **Write** access to enable backup. HuggingMes syncs all agent data to a private Dataset named `huggingmes-backup` every 180 seconds.
+Set `HF_TOKEN` with write access to enable backup. HuggingMes syncs workspace data to a private HF Dataset named `huggingmes-backup` every 600 seconds by default.
 
-## 💓 Staying Alive *(Recommended on Free HF Spaces)*
+## 💓 Staying Alive
 
-Your Space will automatically be kept awake by a background Cloudflare Worker when you configure the `CLOUDFLARE_WORKERS_TOKEN` secret. The worker uses a cron trigger to regularly ping your Space's `/health` endpoint. The dashboard displays the current keep-alive worker status.
+With `CLOUDFLARE_WORKERS_TOKEN` set, HuggingMes can create a keep-alive worker that pings the Space's `/health` endpoint on a schedule so the free tier stays awake longer.
 
-## 🔐 Security & Advanced
+## 🔐 Security & Advanced *(Optional)*
 
 | Variable | Default | Description |
 | :--- | :--- | :--- |
 | `GATEWAY_TOKEN` | — | Token for dashboard and API auth |
-| `HF_TOKEN` | — | HF token with write access for backups |
-| `CLOUDFLARE_WORKERS_TOKEN` | — | Cloudflare API token for proxy & keep-awake |
-| `SYNC_INTERVAL` | `180` | Backup frequency in seconds |
+| `HF_TOKEN` | — | HF token with write access for backups and HF providers |
+| `CLOUDFLARE_WORKERS_TOKEN` | — | Cloudflare API token for proxying and keep-awake |
+| `SYNC_INTERVAL` | `600` | Backup frequency in seconds |
 | `CLOUDFLARE_KEEPALIVE_ENABLED` | `true` | Set `false` to disable keep-awake worker |
 | `TELEGRAM_MODE` | `webhook` | `webhook` or `polling` |
 
@@ -144,15 +205,16 @@ docker compose up --build
 - **Dashboard (`/`)**: Real-time management and monitoring.
 - **Hermes App (`/app/`)**: Secure proxied access to the Hermes UI.
 - **API (`/v1/*`)**: Proxied OpenAI-compatible agent API.
-- **Health Check (`/health`)**: Readiness probe for HF and Keep-Alive.
+- **Health Check (`/health`)**: Readiness probe for HF and keep-alive.
 - **Sync Engine**: Python background task for HF Dataset persistence.
 
 ## 🐛 Troubleshooting
 
-- **Telegram bot not responding:** Ensure `CLOUDFLARE_WORKERS_TOKEN` is set. Check logs for "Setting up Cloudflare proxy".
-- **Authentication failed:** Clear your browser cookies or use an incognito window if your `GATEWAY_TOKEN` has changed.
-- **Data not persisting:** Ensure `HF_TOKEN` has **Write** permissions.
-- **Space keeps sleeping:** Add `CLOUDFLARE_WORKERS_TOKEN` as a Space secret to enable automatic keep-awake monitoring via Cloudflare Workers.
+- **Telegram bot not responding:** Ensure `CLOUDFLARE_WORKERS_TOKEN` is set and check logs for the proxy setup step.
+- **Authentication failed:** Clear browser cookies or use an incognito window if `GATEWAY_TOKEN` changed.
+- **Data not persisting:** Ensure `HF_TOKEN` has write access.
+- **Provider not showing up:** If it is a Hermes-native provider, run `hermes model` and complete the provider-specific setup there. If it is a custom endpoint, verify the `base_url` exposes `/v1/models` or `/v1/chat/completions`.
+- **Space keeps sleeping:** Add `CLOUDFLARE_WORKERS_TOKEN` to enable automatic keep-awake monitoring.
 
 ## 🌟 More Projects
 
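The README changes above describe the gateway's auth rule: requests to `/v1/*` must carry `Authorization: Bearer <GATEWAY_TOKEN>`. A minimal sketch of such a check is below — the function name and shape are illustrative assumptions, not HuggingMes's actual middleware:

```python
import hmac

def is_authorized(headers: dict, gateway_token: str) -> bool:
    """Hypothetical sketch of the Bearer-token check the README describes.

    Accepts only requests whose Authorization header is exactly
    "Bearer <GATEWAY_TOKEN>"; the real HuggingMes middleware may differ.
    """
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    supplied = auth[len("Bearer "):]
    # Constant-time comparison avoids leaking token bytes via timing.
    return hmac.compare_digest(supplied, gateway_token)

token = "s3cret"
print(is_authorized({"Authorization": "Bearer s3cret"}, token))  # True
print(is_authorized({"Authorization": "Bearer wrong"}, token))   # False
```

Using `hmac.compare_digest` rather than `==` is the standard choice for secret comparison, since it runs in time independent of where the strings first differ.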
hermes-sync.py CHANGED

@@ -23,7 +23,7 @@ logging.getLogger("huggingface_hub").setLevel(logging.ERROR)
 
 HERMES_HOME = Path(os.environ.get("HERMES_HOME", "/opt/data"))
 STATUS_FILE = Path("/tmp/huggingmes-sync-status.json")
-INTERVAL = int(os.environ.get("SYNC_INTERVAL", "180"))
+INTERVAL = int(os.environ.get("SYNC_INTERVAL", "600"))
 INITIAL_DELAY = int(os.environ.get("SYNC_START_DELAY", "10"))
 HF_TOKEN = os.environ.get("HF_TOKEN", "").strip()
 HF_USERNAME = os.environ.get("HF_USERNAME", "").strip()
start.sh CHANGED

@@ -14,7 +14,7 @@ PUBLIC_PORT="${PORT:-7861}"
 GATEWAY_API_PORT="${API_SERVER_PORT:-8642}"
 DASHBOARD_PORT="${DASHBOARD_PORT:-9119}"
 TELEGRAM_WEBHOOK_PORT="${TELEGRAM_WEBHOOK_PORT:-8765}"
-SYNC_INTERVAL="${SYNC_INTERVAL:-180}"
+SYNC_INTERVAL="${SYNC_INTERVAL:-600}"
 BACKUP_DATASET="${BACKUP_DATASET_NAME:-huggingmes-backup}"
 CF_PROXY_ENV_FILE="/tmp/huggingmes-cloudflare-proxy.env"
 
@@ -118,7 +118,7 @@ case "$MODEL_PREFIX" in
     [ "$PROVIDER_FOR_CONFIG" = "auto" ] && PROVIDER_FOR_CONFIG="openrouter"
     MODEL_FOR_CONFIG="${MODEL_INPUT#openrouter/}"
     ;;
-  huggingface)
+  huggingface|hf)
     [ -n "$LLM_API_KEY" ] && export HF_TOKEN="${HF_TOKEN:-$LLM_API_KEY}"
     [ "$PROVIDER_FOR_CONFIG" = "auto" ] && PROVIDER_FOR_CONFIG="huggingface"
     MODEL_FOR_CONFIG="${MODEL_INPUT#huggingface/}"
@@ -145,15 +145,36 @@ case "$MODEL_PREFIX" in
   kimi-coding|moonshot)
     [ -n "$LLM_API_KEY" ] && export KIMI_API_KEY="${KIMI_API_KEY:-$LLM_API_KEY}"
     ;;
+  kimi-coding-cn|moonshot-cn|kimi-cn)
+    [ -n "$LLM_API_KEY" ] && export KIMI_CN_API_KEY="${KIMI_CN_API_KEY:-$LLM_API_KEY}"
+    ;;
   minimax)
     [ -n "$LLM_API_KEY" ] && export MINIMAX_API_KEY="${MINIMAX_API_KEY:-$LLM_API_KEY}"
     ;;
+  minimax-cn)
+    [ -n "$LLM_API_KEY" ] && export MINIMAX_CN_API_KEY="${MINIMAX_CN_API_KEY:-$LLM_API_KEY}"
+    ;;
   xiaomi)
     [ -n "$LLM_API_KEY" ] && export XIAOMI_API_KEY="${XIAOMI_API_KEY:-$LLM_API_KEY}"
     ;;
   zai|z-ai|z.ai|glm)
     [ -n "$LLM_API_KEY" ] && export GLM_API_KEY="${GLM_API_KEY:-$LLM_API_KEY}"
     ;;
+  arcee|arcee-ai|arceeai)
+    [ -n "$LLM_API_KEY" ] && export ARCEEAI_API_KEY="${ARCEEAI_API_KEY:-$LLM_API_KEY}"
+    ;;
+  gmi|gmi-cloud|gmicloud)
+    [ -n "$LLM_API_KEY" ] && export GMI_API_KEY="${GMI_API_KEY:-$LLM_API_KEY}"
+    ;;
+  alibaba)
+    [ -n "$LLM_API_KEY" ] && export DASHSCOPE_API_KEY="${DASHSCOPE_API_KEY:-$LLM_API_KEY}"
+    ;;
+  alibaba-coding-plan|alibaba_coding)
+    [ -n "$LLM_API_KEY" ] && export DASHSCOPE_API_KEY="${DASHSCOPE_API_KEY:-$LLM_API_KEY}"
+    ;;
+  tencent-tokenhub|tencent|tokenhub|tencentmaas)
+    [ -n "$LLM_API_KEY" ] && export TOKENHUB_API_KEY="${TOKENHUB_API_KEY:-$LLM_API_KEY}"
+    ;;
   nvidia)
     [ -n "$LLM_API_KEY" ] && export NVIDIA_API_KEY="${NVIDIA_API_KEY:-$LLM_API_KEY}"
     ;;
@@ -261,7 +282,7 @@ else
   echo "Telegram : not configured"
 fi
 if [ -n "${HF_TOKEN:-}" ]; then
-  echo "Backup : ${BACKUP_DATASET} (every ${SYNC_INTERVAL:-180}s)"
+  echo "Backup : ${BACKUP_DATASET} (every ${SYNC_INTERVAL:-600}s)"
 else
   echo "Backup : disabled"
 fi