docs: sync from NAIL-Group canonical (V2 snapshot + scoring + V2Trace)
README.md CHANGED
@@ -51,21 +51,22 @@ Live results — pulled from [`leaderboard/results.csv`](https://huggingface.co/
 
 [](https://huggingface.co/spaces/NAIL-Group/clawbench-leaderboard)
 
-**
-
-| Rank | Model | Harness |
-|------|-------|---------|--------
-| 1 | `glm-5.1` | hermes |
-| 2 | `
-| 3 | `openrouter/owl-alpha` | hermes |
-| 4 | `deepseek
-| 5 | `glm-5.1` | openclaw |
-| 6 | `
-
+**V2 snapshot — refreshed 2026-05-11** (full scoring logic: [`docs/scoring.md`](https://github.com/reacher-z/ClawBench/blob/main/docs/scoring.md))
+
+| Rank | Model | Harness | Intercepted | Reward | Pass / Total |
+|------|-------|---------|-------------|--------|--------------|
+| 1 | `glm-5.1` | hermes | 48.5% | **18.5%** | 24 / 130 |
+| 2 | `deepseek-v4-pro` | hermes | 43.8% | **10.0%** | 13 / 130 |
+| 3 | `openrouter/owl-alpha` | hermes | 14.6% | **4.6%** | 6 / 130 |
+| 4 | `deepseek-v4-flash` | hermes | 3.1% | **1.5%** | 2 / 130 |
+| 5 | `glm-5.1` | openclaw | 0.0% | **0.0%** | 0 / 130 |
+| 6 | `poolside/laguna-m.1:free` | hermes | 0.0% | **0.0%** | 0 / 130 |
+
+**Reward** = fraction of runs that intercepted the final HTTP request AND where the LLM judge confirmed the payload fulfilled the instruction. **Intercepted** alone counts Stage 1 only. See [scoring.md](https://github.com/reacher-z/ClawBench/blob/main/docs/scoring.md) for the two-stage rubric. New runs (Claude Sonnet 4.6, Claude Opus 4.7, GPT 5.5) land daily — track them on the [live leaderboard Space](https://huggingface.co/spaces/NAIL-Group/clawbench-leaderboard).
 
 **Submit a result** → run [`clawbench-eval`](https://pypi.org/project/clawbench-eval/) on your model and open a PR to [`leaderboard/results.csv`](https://huggingface.co/datasets/NAIL-Group/ClawBench/blob/main/leaderboard/results.csv) — one row per (model × harness × corpus).
 
-> **Companion
+> **Companion datasets (raw traces):** [`NAIL-Group/ClawBenchV1Trace`](https://huggingface.co/datasets/NAIL-Group/ClawBenchV1Trace) (V1 runs) · [`NAIL-Group/ClawBenchV2Trace`](https://huggingface.co/datasets/NAIL-Group/ClawBenchV2Trace) (V2 runs, rolling) — `recording.mp4`, `requests.jsonl`, `actions.jsonl`, `agent-messages.jsonl`, `interception.json`, `run-meta.json` per model run.
 
 ## Dataset Structure
 
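As a quick consistency check (not part of the repo), the **Reward** percentages in the new table can be recomputed from the Pass / Total column. A minimal Python sketch, with the rows copied verbatim from the table above:

```python
# Sanity check: each Reward value in the V2 snapshot table should equal
# round(100 * passed / total, 1) derived from its Pass / Total column.
rows = [
    # (model, harness, reward_pct, passed, total) -- copied from the table
    ("glm-5.1",                  "hermes",   18.5, 24, 130),
    ("deepseek-v4-pro",          "hermes",   10.0, 13, 130),
    ("openrouter/owl-alpha",     "hermes",    4.6,  6, 130),
    ("deepseek-v4-flash",        "hermes",    1.5,  2, 130),
    ("glm-5.1",                  "openclaw",  0.0,  0, 130),
    ("poolside/laguna-m.1:free", "hermes",    0.0,  0, 130),
]

for model, harness, reward_pct, passed, total in rows:
    computed = round(100 * passed / total, 1)
    assert computed == reward_pct, (model, harness, computed, reward_pct)

print("reward column consistent with pass/total")
```

The Intercepted column is not derivable from these fields alone, since Stage 1 interceptions that fail the judge are not reported as a separate count.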