</picture>
</p>
# W&B MCP Server

### IDEs

[Install in Cursor](https://cursor.com/en/install-mcp?name=wandb&config=eyJ0cmFuc3BvcnQiOiJodHRwIiwidXJsIjoiaHR0cHM6Ly9tY3Aud2l0aHdhbmRiLmNvbS9tY3AiLCJoZWFkZXJzIjp7IkF1dGhvcml6YXRpb24iOiJCZWFyZXIge3tXQU5EQl9BUElfS0VZfX0iLCJBY2NlcHQiOiJhcHBsaWNhdGlvbi9qc29uLCB0ZXh0L2V2ZW50LXN0cmVhbSJ9fQ%3D%3D)
[VSCode](#vscode-hosted-server)
[Windsurf](#windsurf-ide-hosted-server)

### AI Coding Agents

[Claude Code](#claude-code-hosted)
[Gemini CLI](#gemini-hosted-server)
[GitHub Codex](#github-codex)

### AI Chat Clients

[ChatGPT](#chatgpt-hosted-server)
[Mistral LeChat](#mistral-lechat-hosted-server)
[Claude Desktop](#claude-desktop-hosted-server)
[Other Web Clients](#other-web-clients)

> **Quick Setup:** Click the button for your client above. For Cursor, it auto-installs with one click. For others, you'll be taken to the setup instructions. Just replace `YOUR_WANDB_API_KEY` with your actual API key from [wandb.ai/authorize](https://wandb.ai/authorize).
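For the curious: the Cursor install link simply embeds the server configuration as URL-encoded base64 in its `config` query parameter. A small standard-library sketch to inspect what the one-click install writes:

```python
import base64
import json
from urllib.parse import unquote

# The `config` query parameter from the Cursor install deeplink above.
ENCODED = "eyJ0cmFuc3BvcnQiOiJodHRwIiwidXJsIjoiaHR0cHM6Ly9tY3Aud2l0aHdhbmRiLmNvbS9tY3AiLCJoZWFkZXJzIjp7IkF1dGhvcml6YXRpb24iOiJCZWFyZXIge3tXQU5EQl9BUElfS0VZfX0iLCJBY2NlcHQiOiJhcHBsaWNhdGlvbi9qc29uLCB0ZXh0L2V2ZW50LXN0cmVhbSJ9fQ%3D%3D"

# %3D%3D is the URL-escaped base64 padding "=="
config = json.loads(base64.b64decode(unquote(ENCODED)))
print(config["url"])  # → https://mcp.withwandb.com/mcp
```

Cursor substitutes your real key for the `{{WANDB_API_KEY}}` placeholder in the decoded `Authorization` header.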
## Example Use Cases

<details>
<summary><b>📋 Available MCP Tools & Descriptions</b></summary>

### W&B Models Tools

**`query_wandb_tool`** - Execute GraphQL queries against W&B experiment tracking data (runs, sweeps, artifacts)
- Query experiment runs, metrics, and performance comparisons
- Access artifact management and model registry data
- Analyze hyperparameter optimization and sweeps
- Retrieve project dashboard and report data
- Supports pagination with `max_items` and `items_per_page` parameters
- Accepts custom GraphQL queries with variables

### Weave Tools (LLM/GenAI)

**`query_weave_traces_tool`** - Query LLM traces and evaluations with advanced filtering and pagination
- Retrieve execution traces and paths of LLM operations
- Access LLM inputs, outputs, and intermediate results
- Filter by display name, operation name, trace ID, status, time range, and latency
- Sort by various fields (started_at, latency, cost, etc.)
- Supports metadata-only queries to avoid context-window overflow
- Includes cost calculations and token-usage analysis
- Configurable data truncation and column selection

**`count_weave_traces_tool`** - Efficiently count traces without returning full data
- Get total trace counts and root trace counts
- Applies the same filtering options as the query tool
- Useful for understanding project scope before detailed queries
- Returns storage size information in bytes
- Much faster than full trace queries when you only need counts

### Support & Knowledge

**`query_wandb_support_bot`** - Get help from [wandbot](https://github.com/wandb/wandbot)
- RAG-powered technical support agent for W&B/Weave questions
- Provides code examples and debugging assistance
- Covers experiment tracking, Weave tracing, and model management
- Explains W&B features, best practices, and troubleshooting
- Works out of the box with no configuration needed

### Reporting & Documentation

**`create_wandb_report_tool`** - Create shareable W&B Reports with markdown and visualizations
- Generates reports with markdown text and HTML-rendered charts
- Supports multiple chart sections with proper organization
- Interactive visualizations with hover effects and SVG elements
- Permanent, shareable documentation for analysis findings
- Accepts both a single HTML string and a dictionary of multiple charts

### Discovery & Navigation

**`query_wandb_entity_projects`** - List available entities and projects
- Discover accessible W&B entities (teams/usernames) and their projects
- Get project metadata including descriptions, visibility, and tags
- Essential for understanding available data sources
- Helps with proper entity/project specification in queries
- Returns creation/update timestamps and project details

</details>
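Under the hood, MCP clients invoke these tools with JSON-RPC 2.0 `tools/call` requests. A minimal sketch of the request envelope a client sends over the `/mcp` endpoint (the tool arguments here are illustrative, not the tool's full parameter schema):

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP `tools/call` request body (JSON-RPC 2.0)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical arguments; consult each tool's schema for the real parameters.
body = make_tool_call(
    1, "count_weave_traces_tool",
    {"entity_name": "my-team", "project_name": "my-project"},
)
```

The server answers with a matching JSON-RPC response (streamed over SSE when using the hosted HTTP transport).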
### 1. 🔍 Analyze ML Experiments
```
"Show me the top 5 runs with the highest accuracy from my wandb-smle/hiring-agent-demo-public project and create a report comparing their hyperparameters"
```
The MCP server queries W&B runs, compares metrics, and generates a shareable report with visualizations.

### 2. 🐛 Debug LLM Applications
```
"Find all failed OpenAI chat traces in my weave project from the last 24 hours and analyze their error patterns"
```
The server retrieves Weave traces, filters by status, and provides detailed error analysis for debugging.

### 3. 📊 Evaluate Model Performance
```
"Compare the F1 scores across all evaluations in my RAG pipeline and identify which prompts performed best"
```
The server queries Weave evaluations, aggregates scores, and highlights top-performing configurations.

### 4. 🤖 Get Expert Help with W&B/Weave
```
"How do I implement custom metrics in Weave evaluations? Show me an example with async scorers"
```
The integrated [wandbot](https://github.com/wandb/wandbot) support agent provides detailed answers, code examples, and debugging assistance for any W&B or Weave-related questions.
-
## Installation & Deployment
|
| 131 |
-
|
| 132 |
-
This MCP server can be deployed in three ways. **We recommend starting with the hosted server** for the easiest setup experience.
|
| 133 |
-
|
| 134 |
-
### Option 1: Hosted Server (Recommended - No Installation Required)
|
| 135 |
-
|
| 136 |
-
Use our publicly hosted server on Hugging Face Spaces - **zero installation needed!**
|
| 137 |
-
|
| 138 |
-
**Server URL:** `https://mcp.withwandb.com/mcp`
|
| 139 |
-
|
| 140 |
-
> **ℹ️ Quick Setup:** Click the button for your client above, then use the configuration examples in the sections below. Just replace `YOUR_WANDB_API_KEY` with your actual API key from [wandb.ai/authorize](https://wandb.ai/authorize).
|
| 141 |
-
|
| 142 |
-
### Option 2: Local Development (STDIO)

Run the server locally with direct stdio communication - best for development and testing.

#### Running the Local Server

There are multiple ways to run the server locally:

**1. STDIO Mode (for MCP clients like Cursor/Claude Desktop):**
```bash
# Using the installed command
wandb_mcp_server --transport stdio

# Or using UV directly
uvx --from git+https://github.com/wandb/wandb-mcp-server wandb_mcp_server --transport stdio

# Or if cloned locally
uv run src/wandb_mcp_server/server.py --transport stdio
```

**2. HTTP Mode (for testing with HTTP clients):**
```bash
# Using the installed command (runs on port 8080 by default)
wandb_mcp_server --transport http --host localhost --port 8080

# Or if cloned locally
uv run src/wandb_mcp_server/server.py --transport http --host localhost --port 8080
```

**3. Using the FastAPI app (for deployment-like testing):**
```bash
# Runs the full FastAPI app with web interface on port 7860
uv run app.py

# Or with custom port
PORT=8000 uv run app.py
```

The FastAPI app includes:
- Landing page at `/`
- Health endpoint at `/health` (returns JSON status)
- MCP endpoint at `/mcp` (for MCP protocol communication)
> **⚠️ Important Note for OpenAI Client Users:**
> The OpenAI MCP implementation is server-side, meaning OpenAI's servers connect to your MCP server.
> This means **local servers (localhost) won't work with the OpenAI client** because OpenAI's servers
> cannot reach your local machine. Use one of these alternatives:
> - Use the hosted server at `https://mcp.withwandb.com/mcp`
> - Deploy your server to a public URL (e.g., using ngrok, Cloudflare Tunnel, or cloud hosting)
> - Use MCP clients with local support (Cursor, Claude Desktop, etc.) for local development
To expose a local server publicly (for example, for the OpenAI client), you can tunnel it with ngrok:

```bash
brew install ngrok/ngrok/ngrok
```

Start the local server:

```bash
# Using app.py (recommended for full features)
uv run app.py
```

Then open the tunnel:

```bash
# For app.py (port 7860)
ngrok http 7860
```

After running ngrok, you'll see output like:
```
Forwarding https://abc123.ngrok-free.app -> http://localhost:7860
```

Point your MCP client at the tunneled URL:
```python
{
    "server_url": "https://abc123.ngrok-free.app/mcp",  # Your ngrok URL + /mcp
    "authorization": os.getenv('WANDB_API_KEY'),
    # ... rest of configuration
}
```
### Option 3: Self-Hosted HTTP Server

Deploy your own HTTP server with API key authentication - great for team deployments or custom infrastructure.

---
## Hosted Server Setup (Recommended)

**No installation required!** Just configure your MCP client to connect to our hosted server.

### Get Your W&B API Key

Get your Weights & Biases API key at: [https://wandb.ai/authorize](https://wandb.ai/authorize)
<details>
<summary><b>Cursor (Hosted Server)</b></summary>

**Quick Setup:**
1. Open Cursor settings → MCP
2. Add the configuration below
3. Replace `YOUR_WANDB_API_KEY` with your key from [wandb.ai/authorize](https://wandb.ai/authorize)
4. Restart Cursor

```json
{
  "mcpServers": {
    "wandb": {
      "transport": "http",
      "url": "https://mcp.withwandb.com/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_WANDB_API_KEY",
        "Accept": "application/json, text/event-stream"
      }
    }
  }
}
```

✅ **That's it!** No installation, no dependencies, just configuration.
</details>
<details>
<summary><b id="windsurf-ide-hosted-server">Windsurf IDE (Hosted Server)</b></summary>

**Quick Setup:**
1. Open Windsurf settings → MCP
2. Add the configuration below
3. Replace `YOUR_WANDB_API_KEY` with your key from [wandb.ai/authorize](https://wandb.ai/authorize)
4. Restart Windsurf

```json
{
  "mcpServers": {
    "wandb": {
      "transport": "http",
      "url": "https://mcp.withwandb.com/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_WANDB_API_KEY",
        "Accept": "application/json, text/event-stream"
      }
    }
  }
}
```

</details>
<details>
<summary><b id="vscode-hosted-server">VSCode (Hosted Server)</b></summary>

**Quick Setup:**
1. Create a `.vscode/mcp.json` file in your project root
2. Add the configuration below
3. Replace `YOUR_WANDB_API_KEY` with your key from [wandb.ai/authorize](https://wandb.ai/authorize)
4. Restart VSCode or reload the window

```json
{
  "mcpServers": {
    "wandb": {
      "transport": "http",
      "url": "https://mcp.withwandb.com/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_WANDB_API_KEY",
        "Accept": "application/json, text/event-stream"
      }
    }
  }
}
```

</details>
<details>
<summary><b id="claude-code-hosted">Claude Code (Hosted Server)</b></summary>

**Quick Setup:**
1. Install Claude Code if you haven't already
2. Configure the MCP server with HTTP transport:
```bash
  --url https://mcp.withwandb.com/mcp \
  --header "Authorization: Bearer YOUR_WANDB_API_KEY" \
  --header "Accept: application/json, text/event-stream"
```
3. Replace `YOUR_WANDB_API_KEY` with your key from [wandb.ai/authorize](https://wandb.ai/authorize)

Alternatively, edit your Claude Code MCP config file:
```json
{
  "mcpServers": {
    "wandb": {
      "transport": "http",
      "url": "https://mcp.withwandb.com/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_WANDB_API_KEY",
        "Accept": "application/json, text/event-stream"
      }
    }
  }
}
```
</details>
<details>
<summary><b id="github-codex">GitHub Copilot</b></summary>

**Quick Setup:**

GitHub Copilot doesn't directly support MCP servers, but you can use the W&B API through code comments:

1. Install the W&B Python SDK in your project:
```bash
pip install wandb
```

2. Use Copilot to generate W&B code by adding descriptive comments in your source files.

**Note:** For direct MCP integration, consider using Cursor or VSCode with MCP extensions.
</details>
<details>
<summary><b id="gemini-hosted-server">Gemini CLI (Hosted Server)</b></summary>

**Quick Setup:**
1. Create a `gemini-extension.json` file in your project:

```json
{
  "name": "wandb-mcp-server",
  "version": "0.1.0",
  "mcpServers": {
    "wandb": {
      "httpUrl": "https://mcp.withwandb.com/mcp",
      "trust": true,
      "headers": {
        "Authorization": "Bearer $WANDB_API_KEY",
        "Accept": "application/json, text/event-stream"
      }
    }
  }
}
```

2. Set your `WANDB_API_KEY` environment variable

3. Install the extension:
```bash
gemini extensions install --path .
```

✅ **That's it!** No installation required.
</details>
<details>
<summary><b id="chatgpt-hosted-server">ChatGPT (Hosted Server)</b></summary>

**Quick Setup:**
1. In ChatGPT, create or edit a custom GPT
2. In the "Actions" section, click "Create new action"
3. Configure Authentication:
   - **Authentication Type**: API Key
   - **Auth Type**: Bearer
   - **API Key**: `YOUR_WANDB_API_KEY`
4. Add the OpenAPI schema:

```json
{
  "openapi": "3.1.0",
  "info": {
    "title": "W&B MCP Server",
    "version": "1.0.0",
    "description": "Access W&B experiment tracking and Weave traces"
  },
  "servers": [
    {
      "url": "https://mcp.withwandb.com"
    }
  ],
  "paths": {
    "/mcp": {
      "post": {
        "operationId": "callTool",
        "summary": "Execute W&B MCP tools",
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "required": ["tool", "params"],
                "properties": {
                  "tool": {
                    "type": "string",
                    "description": "The MCP tool to call"
                  },
                  "params": {
                    "type": "object",
                    "description": "Parameters for the tool"
                  }
                }
              }
            }
          }
        },
        "responses": {
          "200": {
            "description": "Successful response",
            "content": {
              "application/json": {
                "schema": {
                  "type": "object"
                }
              }
            }
          }
        }
      }
    }
  }
}
```
</details>
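Per the schema above, every action call posts a body with the required `tool` and `params` fields. A tiny sketch of composing a conforming payload (the tool name here is one of the tools listed earlier; the empty `params` is illustrative):

```python
import json

def action_request(tool: str, params: dict) -> dict:
    """Build a request body matching the OpenAPI schema above."""
    body = {"tool": tool, "params": params}
    # Both fields are required by the schema's `required` list.
    missing = {"tool", "params"} - body.keys()
    assert not missing, f"missing required fields: {missing}"
    return body

payload = action_request("query_wandb_entity_projects", {})
print(json.dumps(payload))
```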
<details>
<summary><b id="mistral-lechat-hosted-server">Mistral LeChat (Hosted Server)</b></summary>

**Quick Setup:**
1. Open Le Chat and add a new MCP connector
2. Configure with:
   - **Server URL**: `https://mcp.withwandb.com/mcp`
   - **Authentication**: Choose "API Key Authentication"
   - **Token**: Enter your W&B API key from [wandb.ai/authorize](https://wandb.ai/authorize)

</details>
<details>
<summary><b id="claude-desktop-hosted-server">Claude Desktop (Hosted Server)</b></summary>

**Quick Setup:**
1. [Download Claude Desktop](https://claude.ai/download) if you haven't already
2. Open Claude Desktop
3. Go to Settings → Features → Model Context Protocol
4. Add the configuration below
5. Replace `YOUR_WANDB_API_KEY` with your key from [wandb.ai/authorize](https://wandb.ai/authorize)
6. Restart Claude Desktop

```json
{
  "mcpServers": {
    "wandb": {
      "transport": "http",
      "url": "https://mcp.withwandb.com/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_WANDB_API_KEY",
        "Accept": "application/json, text/event-stream"
      }
    }
  }
}
```

</details>
<details>
<summary><b id="other-web-clients">Other Web Clients</b></summary>

**Quick Setup:**
1. Use our hosted public version: [HF Spaces](https://wandb-wandb-mcp-server.hf.space)
2. Configure your `WANDB_API_KEY` directly in the interface
3. Follow the instructions in the space to add it to your preferred client

This version lets you access your own projects with your API key, or work with all public projects otherwise.

✅ **That's it!** No installation required.
</details>
---
## Local Server Setup

If you prefer to run the MCP server locally or need custom configurations, follow these instructions.

### Prerequisites

#### 1. Install UV Package Manager

UV is required to run the MCP server. Install it using one of these methods:

**macOS/Linux:**
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

**macOS (Homebrew):**
```bash
brew install uv
```

**Windows:**
```powershell
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```

#### 2. Get Your W&B API Key

You'll need a Weights & Biases API key. Get yours at: [https://wandb.ai/authorize](https://wandb.ai/authorize)

Configure your API key using one of these methods (the first is recommended because it also sets the other default parameters):

1. **`.env` file** in your project (copy from `env.example`):
```bash
cp env.example .env
# Edit .env and add your API key
```

2. **`.netrc` file**:
```bash
uvx wandb login
```

3. **Environment variable**:
```bash
export WANDB_API_KEY=your-api-key
```

4. **Command-line argument**:
```bash
wandb_mcp_server --wandb-api-key your-api-key
```
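The precedence between these methods can be pictured as a small resolver. This is an illustrative sketch only (the helper name is ours, not part of the package, and the `.env`/`.netrc` lookups are omitted):

```python
import os

def resolve_wandb_api_key(cli_key=None):
    """Illustrative lookup order: CLI flag first, then environment variable."""
    if cli_key:  # passed via --wandb-api-key
        return cli_key
    return os.getenv("WANDB_API_KEY")  # set via export WANDB_API_KEY=...

os.environ["WANDB_API_KEY"] = "env-key"
print(resolve_wandb_api_key("cli-key"))  # the CLI flag wins over the env var
```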
#### 3. Environment Configuration (Optional)

The server includes [wandbot](https://github.com/wandb/wandbot) support for answering W&B/Weave questions. **wandbot works out of the box without any configuration!** It uses the default public endpoint automatically.

See `env.example` for optional configuration like custom wandbot instances or other advanced settings.

### MCP Client Setup for Local Server

Choose your MCP client from the options below for local server setup:
<details>
<summary><b>Cursor</b></summary>

**Quick Install (project-level):**
```bash
uvx --from git+https://github.com/wandb/wandb-mcp-server -- add_to_client --config_path .cursor/mcp.json && uvx wandb login
```

**Quick Install (global):**
```bash
uvx --from git+https://github.com/wandb/wandb-mcp-server -- add_to_client --config_path ~/.cursor/mcp.json && uvx wandb login
```

<details>
<summary>Manual Configuration</summary>

Add to `.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "wandb": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/wandb/wandb-mcp-server",
        "wandb_mcp_server"
      ],
      "env": {
        "WANDB_API_KEY": "your-api-key"
      }
    }
  }
}
```
</details>
</details>
<details>
<summary><b>Windsurf</b></summary>

**Quick Install:**
```bash
uvx --from git+https://github.com/wandb/wandb-mcp-server -- add_to_client --config_path ~/.codeium/windsurf/mcp_config.json && uvx wandb login
```

<details>
<summary>Manual Configuration</summary>

Add to `~/.codeium/windsurf/mcp_config.json`:

```json
{
  "mcpServers": {
    "wandb": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/wandb/wandb-mcp-server",
        "wandb_mcp_server"
      ],
      "env": {
        "WANDB_API_KEY": "your-api-key"
      }
    }
  }
}
```
</details>
</details>
<details>
<summary><b>Gemini</b></summary>

**Quick Install:**

```bash
# Use wandb login (opens browser)
uvx wandb login
```

Create `gemini-extension.json` in your project root (use `--path=path/to/folder-with-gemini-extension.json` to add a local folder):

```json
{
  "name": "wandb-mcp-server",
  "version": "0.1.0",
  "mcpServers": {
    "wandb": {
      "httpUrl": "https://mcp.withwandb.com/mcp",
      "trust": true,
      "headers": {
        "Authorization": "Bearer $WANDB_API_KEY",
        "Accept": "application/json, text/event-stream"
      }
    }
  }
}
```
</details>
<details>
<summary><b>🤖 Claude Desktop</b></summary>

**Quick Install:**
```bash
uvx --from git+https://github.com/wandb/wandb-mcp-server
```

<details>
<summary>Manual Configuration</summary>

Add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):

```json
{
  "mcpServers": {
    "wandb": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/wandb/wandb-mcp-server",
        "wandb_mcp_server"
      ],
      "env": {
        "WANDB_API_KEY": "your-api-key"
      }
    }
  }
}
```
</details>
</details>
<details>
<summary><b id="claude-code">💻 Claude Code</b></summary>

**Quick Install:**
```bash
claude mcp add wandb -- uvx --from git+https://github.com/wandb/wandb-mcp-server wandb_mcp_server
```
</details>
## Usage Tips

### Be Specific About Projects
Always specify the W&B entity and project name in your queries:

✅ **Good:** "Show traces from wandb-team/my-project"
❌ **Bad:** "Show my traces"

### Avoid Overly Broad Questions
Be specific to get better results:

✅ **Good:** "What eval had the highest F1 score in the last week?"
❌ **Bad:** "What's my best evaluation?"

### Verify Complete Data Retrieval
When analyzing performance across multiple runs, ask the LLM to confirm it retrieved all available data to ensure comprehensive analysis.
## Self-Hosting Guide

### Deploy to Hugging Face Spaces

Deploy your own instance of the W&B MCP Server on Hugging Face Spaces:

1. **Fork this repository** or clone it locally
2. **Create a new Space on Hugging Face:**
   - Go to [huggingface.co/spaces](https://huggingface.co/spaces)
   - Click "Create new Space"
   - Choose "Docker" as the SDK
   - Set visibility as needed
3. **Push the code to your Space:**
   ```bash
   git remote add hf-space https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
   git push hf-space main
   ```
4. **Your server will be available at:**
   ```
   https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space/mcp
   ```

The reference deployment on Hugging Face Spaces is available at `https://mcp.withwandb.com`.

### Run Local HTTP Server

Run the server locally with HTTP transport for development or testing:

```bash
pip install -r requirements.txt

# Run with authentication (recommended)
python app.py

# Or run without authentication (development only)
MCP_AUTH_DISABLED=true python app.py
```

The server will be available at `http://localhost:7860/mcp`.

**Authentication:** See [AUTH_README.md](AUTH_README.md) for details on Bearer token authentication.

### File Structure for Deployment

```
wandb-mcp-server/
├── app.py               # HF Spaces/HTTP server entry point
├── Dockerfile           # Container configuration for HF Spaces
├── requirements.txt     # Python dependencies for HTTP deployment
├── index.html           # Landing page for web interface
├── AUTH_README.md       # Authentication documentation
├── ARCHITECTURE.md      # Architecture & scalability guide
├── src/
│   └── wandb_mcp_server/
│       ├── server.py    # Core MCP server (STDIO & HTTP)
│       ├── auth.py      # Bearer token authentication
│       └── mcp_tools/   # Tool implementations
└── pyproject.toml       # Package configuration for local/pip install
```
### Enabling Weave Tracing for MCP Operations

Track all MCP tool calls using [Weave's MCP integration](https://weave-docs.wandb.ai/guides/integrations/mcp):

```bash
export WEAVE_DISABLED=false
export MCP_LOGS_WANDB_ENTITY=your-entity
export MCP_LOGS_WANDB_PROJECT=mcp-logs

# Optional: trace list operations
export MCP_TRACE_LIST_OPERATIONS=true
```
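If you launch the server from a Python wrapper script rather than a shell, the same variables can be set before start-up. A small sketch mirroring the exports above (the entity and project values are examples):

```python
import os

# Mirror the shell exports above; set before the server process starts.
tracing_env = {
    "WEAVE_DISABLED": "false",
    "MCP_LOGS_WANDB_ENTITY": "your-entity",
    "MCP_LOGS_WANDB_PROJECT": "mcp-logs",
    "MCP_TRACE_LIST_OPERATIONS": "true",  # optional
}
os.environ.update(tracing_env)
```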
### Logging Configuration

Control server logging with environment variables:

```bash
export MCP_SERVER_LOG_LEVEL=INFO  # DEBUG, INFO, WARNING, ERROR

# W&B/Weave output control
export WANDB_SILENT=False  # Show W&B output
export WEAVE_SILENT=False  # Show Weave output

# Debug mode
export WANDB_DEBUG=true  # Verbose W&B logging
```
### Transport Options

#### STDIO Transport (For Local Use)

For local development where the MCP client and server run on the same machine:
```bash
wandb_mcp_server --transport stdio
# Or with UV:
uvx --from git+https://github.com/wandb/wandb-mcp-server wandb_mcp_server
```
- Requires W&B API key in environment
- Direct communication via stdin/stdout
- Best for local IDE integrations (Cursor, Windsurf, etc.)

#### HTTP Transport (For Remote Access)

For remote access, web applications, or hosted deployments:
```bash
wandb_mcp_server --transport http --host 0.0.0.0 --port 8080
```
- Clients provide their W&B API key as a Bearer token
- Supports authentication middleware
- Uses Server-Sent Events (SSE) for streaming
- Ideal for hosted deployments and web clients

Or run directly from a local clone:
```bash
git clone https://github.com/wandb/wandb-mcp-server
cd wandb-mcp-server
wandb login
uv run src/wandb_mcp_server/server.py
```
|
| 953 |
|
| 954 |
-
-
-### Error: spawn uv ENOENT
-
-If `uv` cannot be found:
-
-1. Reinstall UV:
-```bash
-curl -LsSf https://astral.sh/uv/install.sh | sh
-```

-
-```bash
-sudo ln -s ~/.local/bin/uv /usr/local/bin/uv
-```

-

-### Authentication Issues
-
-Verify W&B authentication:
 ```bash
-
 ```

-
-```bash
-echo $WANDB_API_KEY
-```
-
-## Testing
-
-Run integration tests with LLM providers:

 ```bash
-#
-
-# Run a specific test file
-uv run pytest -s -n 10 tests/test_query_wandb_gql.py
-
-# Debug a single test
-pytest -s -n 1 "tests/test_query_weave_traces.py::test_query_weave_trace[sample_name]" -v --log-cli-level=DEBUG
-```

-
-## Contributing
-
-We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
-
-## License
-
-This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
-
-## System Architecture
-
-### Overview
-
-The W&B MCP Server is built with a modern, scalable architecture designed for both local development and cloud deployment:

-```
-┌─────────────────────────────────────────────┐
-│                 MCP Clients                 │
-│   (Cursor, Claude, ChatGPT, VSCode, etc.)   │
-└──────────────┬──────────────────────────────┘
-               │ HTTP/SSE with Bearer Auth
-               ▼
-┌─────────────────────────────────────────────┐
-│            FastAPI Application              │
-│ ┌────────────────────────────────────────┐ │
-│ │ Authentication Middleware              │ │
-│ │  - Bearer token validation             │ │
-│ │  - Per-request API key isolation       │ │
-│ │  - Thread-safe context management      │ │
-│ └────────────────────────────────────────┘ │
-│ ┌────────────────────────────────────────┐ │
-│ │ MCP Server (FastMCP)                   │ │
-│ │  - Tool registration & dispatch        │ │
-│ │  - Session management                  │ │
-│ │  - SSE streaming for responses         │ │
-│ └────────────────────────────────────────┘ │
-└──────────────┬──────────────────────────────┘
-               │
-               ▼
-┌─────────────────────────────────────────────┐
-│              W&B/Weave Tools                │
-│ ┌────────────────────────────────────────┐ │
-│ │ • query_wandb_tool (GraphQL)           │ │
-│ │ • query_weave_traces (LLM traces)      │ │
-│ │ • count_weave_traces (Analytics)       │ │
-│ │ • create_wandb_report (Reporting)      │ │
-│ │ • query_wandb_support_bot (Help)       │ │
-│ └────────────────────────────────────────┘ │
-└──────────────┬──────────────────────────────┘
-               │
-               ▼
-┌─────────────────────────────────────────────┐
-│             External Services               │
-│  • W&B API (api.wandb.ai)                   │
-│  • Weave API (trace.wandb.ai)               │
-│  • Wandbot (wandbot.wandb.ai)               │
-└─────────────────────────────────────────────┘
 ```

-###
-
-1.
-2.
-3.
-4.
-5.
-
-### Deployment Options
-
-The server can be deployed in multiple configurations:
-
-- **Local Development**: Single process with STDIO transport
-- **Single Instance**: FastAPI with Uvicorn for small deployments
-- **Async Concurrency**: Single worker with a high-performance async event loop
-- **Containerized**: Docker with configurable worker counts
-- **Cloud Platforms**: Hugging Face Spaces, AWS, GCP, etc.
-
-For detailed architecture and scalability information, see the [Architecture Guide](ARCHITECTURE.md).

-
-
-
-- **Remote Server (mcp.withwandb.com)**: 500+ concurrent connections @ ~35 req/s
-- **Local Server**: 1000 concurrent connections @ ~50 req/s
-- **100% Success Rate**: up to 500 clients (remote) or 1000 (local)
-- **Horizontal Scaling**: add workers to multiply capacity

-
-
-```bash
-python load_test.py --url https://mcp.withwandb.com --mode stress

-
-python load_test.py --url https://mcp.withwandb.com --clients 100 --requests 20
-```

-
-
-
 ```python
 from openai import OpenAI
@@ -1113,14 +434,14 @@ load_dotenv()
 client = OpenAI()

 resp = client.responses.create(
-    model="gpt-4o",  # Use gpt-4o for larger context window
     tools=[
         {
             "type": "mcp",
             "server_label": "wandb",
-            "server_description": "
-            "server_url": "https://mcp.withwandb.com/mcp",
-            "authorization": os.getenv('WANDB_API_KEY'),
             "require_approval": "never",
         },
     ],

@@ -1129,16 +450,10 @@ resp = client.responses.create(

 print(resp.output_text)
 ```

-
-- OpenAI's MCP implementation is server-side, so you must use a publicly accessible URL
-- The `authorization` field should contain your W&B API key directly (not in headers)
-- Use the `gpt-4o` model for a context window large enough to handle all W&B tools
-- The server operates in stateless mode - each request includes authentication
-
-## Support
-
-- [W&B Documentation](https://docs.wandb.ai)
-- [Weave Documentation](https://weave-docs.wandb.ai)
 - [GitHub Issues](https://github.com/wandb/wandb-mcp-server/issues)
-- [W&B Community

 </picture>
 </p>

+# W&B MCP Server

+Query and analyze your Weights & Biases data using natural language through the Model Context Protocol.

+<div align="center">
+  <a href="https://cursor.com/en/install-mcp?name=wandb&config=eyJ0cmFuc3BvcnQiOiJodHRwIiwidXJsIjoiaHR0cHM6Ly9tY3Aud2l0aHdhbmRiLmNvbS9tY3AiLCJoZWFkZXJzIjp7IkF1dGhvcml6YXRpb24iOiJCZWFyZXIge3tXQU5EQl9BUElfS0VZfX0iLCJBY2NlcHQiOiJhcHBsaWNhdGlvbi9qc29uLCB0ZXh0L2V2ZW50LXN0cmVhbSJ9fQ%3D%3D"><img src="https://cursor.com/deeplink/mcp-install-dark.svg" alt="Cursor" height="28"/></a>
+  <a href="#claude-desktop"><img src="https://img.shields.io/badge/Claude-6B5CE6?logo=anthropic&logoColor=white" alt="Claude" height="28"/></a>
+  <a href="#openai"><img src="https://img.shields.io/badge/OpenAI-412991?logo=openai&logoColor=white" alt="OpenAI" height="28"/></a>
+  <a href="#gemini-cli"><img src="https://img.shields.io/badge/Gemini-4285F4?logo=google&logoColor=white" alt="Gemini" height="28"/></a>
+  <a href="#mistral-lechat"><img src="https://img.shields.io/badge/LeChat-FF6B6B?logo=mistralai&logoColor=white" alt="LeChat" height="28"/></a>
+  <a href="#vscode"><img src="https://img.shields.io/badge/VSCode-007ACC?logo=visualstudiocode&logoColor=white" alt="VSCode" height="28"/></a>
+</div>

+---

+## What Can This Server Do?

+<details open>
+<summary><strong>Example Use Cases</strong> (click command to copy)</summary>

+<table>
+<tr>
+<td width="25%">

+**Analyze Experiments**
+```text
+Show me the top 5 runs
+by eval/accuracy in
+wandb-smle/hiring-agent-demo-public
 ```

+</td>
+<td width="25%">

+**Debug Traces**
+```text
+How did the latency of
+my hiring agent's predict traces
+evolve over the last months?
 ```

+</td>
+<td width="25%">

+**Create Reports**
+```text
+Generate a wandb report
+comparing the decisions made
+by the hiring agent last month
 ```

+</td>
+<td width="25%">

+**Get Help**
+```text
+How do I create a leaderboard
+in Weave? Ask the support bot.
 ```

+</td>
+</tr>
+</table>
+</details>

+<details>
+<summary><strong>Available Tools</strong> (6 tools)</summary>

+| Tool | Description | Example Query |
+|------|-------------|---------------|
+| **query_wandb_tool** | Query W&B runs, metrics, and experiments | *"Show me runs with loss < 0.1"* |
+| **query_weave_traces_tool** | Analyze LLM traces and evaluations | *"What's the average latency?"* |
+| **count_weave_traces_tool** | Count traces and get storage metrics | *"How many traces failed?"* |
+| **create_wandb_report_tool** | Create W&B reports programmatically | *"Create a performance report"* |
+| **query_wandb_entity_projects** | List projects for an entity | *"What projects exist?"* |
+| **query_wandb_support_bot** | Get help from W&B documentation | *"How do I use sweeps?"* |
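As a rough illustration of what `query_wandb_tool` accepts, the arguments below sketch a GraphQL query with variables. The argument names and the GraphQL fields are assumptions for illustration; the authoritative schema is the one the server advertises to your MCP client.

```python
# Hypothetical arguments for a query_wandb_tool call. Field names are
# illustrative; the real schema is defined by the MCP server itself.
query = """
query Runs($entity: String!, $project: String!) {
  project(entityName: $entity, name: $project) {
    runs(first: 5) {
      edges { node { name state summaryMetrics } }
    }
  }
}
"""

tool_args = {
    "query": query,
    "variables": {"entity": "wandb-smle", "project": "hiring-agent-demo-public"},
}
# An MCP client would send tool_args as the `arguments` of a tools/call request.
```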

+</details>

<details>
+<summary><strong>Usage Tips</strong> (best practices)</summary>

+**→ Provide your W&B project and entity name**
+LLMs are not mind readers; always tell the LLM which W&B entity and W&B project to use.

+**→ Avoid overly broad questions**
+A question such as "what is my best evaluation?" is probably too broad, and you'll get an answer faster by being more specific, e.g. "which eval had the highest F1 score?"

+**→ Ensure all data was retrieved**
+For broad questions such as "what are my best performing runs/evaluations?", ask the LLM to confirm that it retrieved all the available runs. The MCP tools are designed to fetch the correct amount of data, but LLMs sometimes only retrieve the latest runs or the last N runs.

</details>

+---


+## Quick Start

+We recommend using our **hosted server** at `https://mcp.withwandb.com` - no installation required.

+> 🔑 Get your API key from [wandb.ai/authorize](https://wandb.ai/authorize)

+### Cursor
<details>
+<summary>One-click installation</summary>

+1. Open Cursor Settings (`⌘,` or `Ctrl,`)
+2. Navigate to **Features** → **Model Context Protocol**
+3. Click **"Install from Registry"** or **"Add MCP Server"**
+4. Search for "wandb" or enter:
+   - **Name**: `wandb`
+   - **URL**: `https://mcp.withwandb.com/mcp`
+   - **API Key**: your W&B API key

+For local installation, see [Option 2](#option-2-local-development-stdio) below.
</details>

+### Claude Desktop
<details>
+<summary>Configuration setup</summary>

+Add to your Claude config file:

 ```bash
+# macOS
+open ~/Library/Application\ Support/Claude/claude_desktop_config.json

+# Windows
+notepad %APPDATA%\Claude\claude_desktop_config.json
+```

 ```json
 {
   "mcpServers": {
     "wandb": {
       "url": "https://mcp.withwandb.com/mcp",
+      "apiKey": "YOUR_WANDB_API_KEY"
     }
   }
 }
 ```
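If Claude doesn't pick up the server, a frequent culprit is a typo that makes the config invalid JSON. A quick sanity check is to parse the file and look for the `wandb` entry; the helper below is a hypothetical convenience, not part of the server:

```python
import json
import tempfile
from pathlib import Path

def has_wandb_server(config_path: str) -> bool:
    """True if the file parses as JSON and defines a 'wandb' entry under mcpServers."""
    cfg = json.loads(Path(config_path).read_text())
    return "wandb" in cfg.get("mcpServers", {})

# Demo against a temporary copy of the snippet above; point it at your
# real claude_desktop_config.json instead.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"mcpServers": {"wandb": {
        "url": "https://mcp.withwandb.com/mcp",
        "apiKey": "YOUR_WANDB_API_KEY",
    }}}, f)
ok = has_wandb_server(f.name)
```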

+Restart Claude Desktop to activate.

+For local installation, see [Option 2](#option-2-local-development-stdio) below.
</details>

+### OpenAI Response API
<details>
+<summary>Python client setup</summary>

 ```python
+from openai import OpenAI
+import os

+client = OpenAI()

+resp = client.responses.create(
+    model="gpt-4o",
+    tools=[{
+        "type": "mcp",
+        "server_label": "wandb",
+        "server_url": "https://mcp.withwandb.com/mcp",
+        "authorization": os.getenv('WANDB_API_KEY'),
+    }],
+    input="How many traces are in my project?"
+)
+print(resp.output_text)
 ```

+> **Note**: OpenAI's MCP implementation is server-side, so localhost URLs won't work. For local servers, see [Option 2](#option-2-local-development-stdio) with ngrok.
</details>

+### Gemini CLI
<details>
+<summary>One-command installation</summary>

+```bash
+# Set your API key
+export WANDB_API_KEY="your-api-key-here"

+# Install the extension
+gemini extensions install https://github.com/wandb/wandb-mcp-server
 ```

+The extension uses the configuration from `gemini-extension.json`, which points to the hosted server.

+For local installation, see [Option 2](#option-2-local-development-stdio) below.
</details>

+### Mistral LeChat
<details>
+<summary>Configuration setup</summary>

+In LeChat settings, add an MCP server:
+- **URL**: `https://mcp.withwandb.com/mcp`
+- **API Key**: your W&B API key

+For local installation, see [Option 2](#option-2-local-development-stdio) below.
</details>

+### VSCode
<details>
+<summary>Settings configuration</summary>

+```bash
+# Open settings
+code ~/.config/Code/User/settings.json
+```

 ```json
 {
+  "mcp.servers": {
     "wandb": {
       "url": "https://mcp.withwandb.com/mcp",
       "headers": {
+        "Authorization": "Bearer YOUR_WANDB_API_KEY"
       }
     }
   }
 }
 ```

+For local installation, see [Option 2](#option-2-local-development-stdio) below.
</details>

 ---

+## General Installation Guide

<details>
+<summary><strong>Option 1: Hosted Server (Recommended)</strong></summary>

+The hosted server provides a zero-configuration experience with enterprise-grade reliability. It is maintained by the W&B team, automatically updated with new features, and scales automatically - ideal for teams and production use cases where you want to focus on your ML work rather than on infrastructure.

+### Using the Public Server

+The easiest way is to use our hosted server at `https://mcp.withwandb.com`.

+**Benefits:**
+- ✅ Zero installation
+- ✅ Always up-to-date
+- ✅ Automatic scaling
+- ✅ No maintenance

+Simply use the configurations shown in [Quick Start](#quick-start).
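Under the hood, MCP clients talk to the hosted endpoint with JSON-RPC 2.0 over HTTP, passing the W&B API key as a Bearer token. A minimal sketch of what a `tools/list` request looks like, assuming the standard MCP Streamable HTTP transport (real clients handle session initialization for you):

```python
import json
import os

# Falls back to a placeholder so the sketch runs without credentials.
WANDB_API_KEY = os.getenv("WANDB_API_KEY", "YOUR_WANDB_API_KEY")

headers = {
    "Authorization": f"Bearer {WANDB_API_KEY}",
    "Content-Type": "application/json",
    # The hosted server can stream responses over SSE, so accept both:
    "Accept": "application/json, text/event-stream",
}

# JSON-RPC 2.0 request listing the server's tools.
payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}

# POST `body` with `headers` to https://mcp.withwandb.com/mcp
body = json.dumps(payload)
```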
</details>

<details>
+<summary><strong>Option 2: Local Development (STDIO)</strong></summary>

+Run the MCP server locally for development, testing, or when you need full control over your data. The local server runs directly on your machine, using STDIO transport for desktop clients or HTTP transport for web-based clients. Ideal for developers who want to customize the server or work in air-gapped environments.

+### Manual Configuration
+Add to your MCP client config:

 ```json
 {
   "mcpServers": {
     "wandb": {
       "command": "uvx",
       "args": [
         "--from",
         "git+https://github.com/wandb/wandb-mcp-server",
         "wandb_mcp_server"
       ],
       "env": {
+        "WANDB_API_KEY": "YOUR_API_KEY"
       }
     }
   }
 }
 ```

+### Prerequisites

+- Python 3.10+
+- [uv](https://docs.astral.sh/uv/) (recommended) or pip

 ```bash
+# Install uv (if not already installed)
+curl -LsSf https://astral.sh/uv/install.sh | sh
 ```

+### Installation

 ```bash
+# Using uv (recommended)
+uv pip install wandb-mcp-server

+# Or from GitHub
+pip install git+https://github.com/wandb/wandb-mcp-server
 ```

+### Client-Specific Installation Commands

+#### Cursor (project only)
+Enable the server for a specific project:
 ```bash
+uvx --from git+https://github.com/wandb/wandb-mcp-server add_to_client --config_path .cursor/mcp.json && uvx wandb login
 ```

+#### Cursor (global)
+Enable the server for all Cursor projects:
 ```bash
+uvx --from git+https://github.com/wandb/wandb-mcp-server add_to_client --config_path ~/.cursor/mcp.json && uvx wandb login
 ```

+#### Windsurf
 ```bash
+uvx --from git+https://github.com/wandb/wandb-mcp-server add_to_client --config_path ~/.codeium/windsurf/mcp_config.json && uvx wandb login
 ```

+#### Claude Code
 ```bash
+claude mcp add wandb -- uvx --from git+https://github.com/wandb/wandb-mcp-server wandb_mcp_server && uvx wandb login
 ```

+With an API key:
 ```bash
+claude mcp add wandb -e WANDB_API_KEY=your-api-key -- uvx --from git+https://github.com/wandb/wandb-mcp-server wandb_mcp_server
 ```

+#### Claude Desktop
 ```bash
+uvx --from git+https://github.com/wandb/wandb-mcp-server add_to_client --config_path "~/Library/Application Support/Claude/claude_desktop_config.json" && uvx wandb login
 ```

+### Testing with ngrok (for server-side clients)

+For clients like OpenAI and LeChat that require public URLs:

 ```bash
+# 1. Start the HTTP server
+uvx wandb-mcp-server --transport http --port 8080

+# 2. Expose it with ngrok
+ngrok http 8080

+# 3. Use the ngrok URL in your client configuration
 ```
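Once ngrok prints its forwarding address, the client entry has the same shape as the hosted one, just with the tunnel URL substituted in. A small sketch (the ngrok URL below is a placeholder, not a real tunnel):

```python
# Build a client config for a tunnelled local server.
# "https://example.ngrok-free.app" stands in for the URL ngrok prints.
NGROK_URL = "https://example.ngrok-free.app"

client_config = {
    "mcpServers": {
        "wandb": {
            "url": f"{NGROK_URL}/mcp",
            "headers": {"Authorization": "Bearer YOUR_WANDB_API_KEY"},
        }
    }
}
```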

+> **Note**: These utilities are inspired by the OpenMCP Server Registry's [add-to-client pattern](https://www.open-mcp.org/servers).
+</details>

<details>
+<summary><strong>Option 3: Self-Hosted HTTP Server</strong></summary>

+Deploy your own W&B MCP server for team-wide access or custom infrastructure requirements. This option gives you complete control over deployment, security, and scaling while remaining compatible with all MCP clients - useful for organizations that need on-premises deployment or want to integrate with existing infrastructure.

+### Using Docker

 ```bash
+docker run -p 7860:7860 \
+  -e WANDB_API_KEY=your-server-key \
+  ghcr.io/wandb/wandb-mcp-server
 ```

+### From Source

 ```bash
+# Clone the repository
+git clone https://github.com/wandb/wandb-mcp-server
+cd wandb-mcp-server

+# Install and run
+uv pip install -r requirements.txt
+uv run app.py
 ```

+### Deploy to Hugging Face Spaces

+1. Fork [wandb-mcp-server](https://github.com/wandb/wandb-mcp-server)
+2. Create a new Space on [Hugging Face](https://huggingface.co/spaces)
+3. Choose the "Docker" SDK
+4. Connect your fork
+5. Add `WANDB_API_KEY` as a secret (optional)

+Server URL: `https://YOUR-SPACE.hf.space/mcp`
+</details>

+---

+## More Information

+### Architecture & Performance

+The W&B MCP Server uses a **pure stateless architecture** for excellent performance:

+| Metric | Performance |
+|--------|-------------|
+| **Concurrent Connections** | 500+ (hosted) / 1000+ (local) |
+| **Throughput** | ~35 req/s (hosted) / ~50 req/s (local) |
+| **Success Rate** | 100% up to capacity |
+| **Scaling** | Horizontal (add workers) |
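Because each request carries its own authentication and touches no shared session state, capacity grows by fanning requests out across workers. A toy illustration of that stateless fan-out with simulated requests (no network involved):

```python
import asyncio

async def handle(request_id: int) -> dict:
    # Stand-in for one stateless MCP request: nothing is shared between
    # calls, so any worker can serve any request.
    await asyncio.sleep(0)
    return {"id": request_id, "status": "ok"}

async def main(n: int) -> list:
    # Fan out n independent requests concurrently.
    return await asyncio.gather(*(handle(i) for i in range(n)))

results = asyncio.run(main(100))
```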

+> 📖 See the [Architecture Guide](ARCHITECTURE.md) for technical details

+### Key Resources

+- **W&B Docs**: [docs.wandb.ai](https://docs.wandb.ai)
+- **Weave Docs**: [weave-docs.wandb.ai](https://weave-docs.wandb.ai)
+- **MCP Spec**: [modelcontextprotocol.io](https://modelcontextprotocol.io)
+- **GitHub**: [github.com/wandb/wandb-mcp-server](https://github.com/wandb/wandb-mcp-server)

+### Example Code

+<details>
+<summary>Complete OpenAI Example</summary>

 ```python
 from openai import OpenAI
 import os
 from dotenv import load_dotenv

 load_dotenv()

 client = OpenAI()

 resp = client.responses.create(
+    model="gpt-4o",  # Use gpt-4o for its larger context window
     tools=[
         {
             "type": "mcp",
             "server_label": "wandb",
+            "server_description": "Query W&B data",
+            "server_url": "https://mcp.withwandb.com/mcp",
+            "authorization": os.getenv('WANDB_API_KEY'),
             "require_approval": "never",
         },
     ],
     input="How many traces are in my project?"
 )

 print(resp.output_text)
 ```
+</details>

+### Support

 - [GitHub Issues](https://github.com/wandb/wandb-mcp-server/issues)
+- [W&B Community](https://community.wandb.ai)
+- [W&B Support](https://wandb.ai/support)