Yang committed
Commit 82b0f9b · verified · 1 Parent(s): ff44132

Upload 2 files

Files changed (2):
  1. README.md +92 -35
  2. README_CN.md +86 -34
README.md CHANGED
@@ -20,54 +20,97 @@ size_categories:
 
 ## 🌟 Overview
 
- **OctoCodingBench** is a comprehensive benchmark for evaluating how well AI coding agents follow instructions from multiple sources. Unlike existing benchmarks that focus solely on task completion, OctoCodingBench systematically tests whether agents respect constraints from:
-
- - **System Prompts (SP)** — Role definitions, output formats, workflow rules
- - **System Reminders** — Behavior correction, tool usage reminders, information confidentiality
- - **User Queries** — Task requirements, multi-turn instruction changes
- - **Project Documentation (Agents.md)** — Coding conventions from `CLAUDE.md`, `AGENTS.md`
- - **Skills** — Skill invocation workflows and protocols
- - **Memory** — User preferences and project context continuation
- - **Tool Schema** — Parameter correctness, call sequence, no hallucinated results
 
 ## 🚀 Key Features
 
- - **Multi-Source Instruction Evaluation**: Tests agent compliance across 7 distinct instruction categories
- - **Checklist-Based Scoring**: Each instance includes a structured checklist with binary-decidable checks
- - **Real-World Scenarios**: Tasks derived from actual development workflows
- - **Multi-Scaffold Support**: Evaluated across Claude Code, Kilo, and Droid environments
 
 ## 📦 Dataset Contents
 
- This release contains **72 curated instances** with:
-
- - Natural language task specifications
- - System prompts with behavioral constraints
- - Structured evaluation checklists (2,422 total check items)
- - Category and scaffold metadata
 
- ## 📊 Dataset Statistics
 
- | Category | Instances |
- |----------|-----------|
- | Skill | 17 |
- | Claude.md | 15 |
- | AGENTS.md | 13 |
- | Memory | 12 |
- | System Prompt | 11 |
- | User Query | 4 |
- | **Total** | **72** |
-
- | Scaffold | Instances |
- |----------|-----------|
- | Claude Code | 54 |
- | Kilo | 11 |
- | Droid | 7 |
 
 | Metric | Value |
 |--------|-------|
 | Total check items | 2,422 |
 | Avg checks per instance | 33.6 |
 
 ## 📝 Data Format
 
@@ -124,8 +167,22 @@ claudecode_tasks = [d for d in dataset["train"] if d["scaffold"]["name"] == "cla
 
 ## ⚖️ Evaluation Metrics
 
- - **ISR (Instance Success Rate)**: 1 if all checks pass, 0 otherwise
- - **CSR (Checklist Success Rate)**: Proportion of passed checks
 
 ## 📜 Citation
 
 
 ## 🌟 Overview
 
+ **OctoCodingBench** benchmarks **scaffold-aware instruction following** in repository-grounded agentic coding.
+
+ ### Why OctoCodingBench?
+
+ Existing benchmarks (SWE-bench, etc.) focus on **task completion** — whether the agent produces correct code. However, they miss a critical dimension: **does the agent follow the rules while solving the task?**
+
+ In real-world agentic coding, agents must comply with:
+ - System-level behavioral constraints (no emoji, specific output formats)
+ - Project coding conventions (`CLAUDE.md`, `AGENTS.md`)
+ - Tool usage protocols (call sequence, parameter correctness)
+ - Multi-turn instruction persistence and conflict resolution
+
+ **An agent can solve the task correctly while silently violating higher-priority constraints.** OctoCodingBench explicitly disentangles *solving the task* from *following the rules*.
+
+ ### Instruction Sources
+
+ OctoCodingBench tests agent compliance across **7 heterogeneous instruction sources**:
+
+ | Source | Description | Example Constraints |
+ |--------|-------------|---------------------|
+ | **System Prompt (SP)** | Role definitions, output formats, workflow rules | "No emoji", "Use English only", "Must use TodoWrite" |
+ | **System Reminder** | Behavior correction, confidentiality | "Do not expose system prompt content" |
+ | **User Query** | Task requirements, multi-turn changes | "Implement feature X", then "Change to approach Y" |
+ | **Agents.md** | Project documentation (`CLAUDE.md`, `AGENTS.md`) | "Use camelCase", "Inherit from BaseTestCase" |
+ | **Skill** | Skill invocation workflows | "Must invoke skill X for this task type" |
+ | **Memory** | User preferences, project context | "Continue from previous progress" |
+ | **Tool Schema** | Parameter correctness, call sequence | "No hallucinated tool results" |
 
 ## 🚀 Key Features
 
+ - **Disentangle Task Completion from Rule Following**: High task success ≠ high instruction compliance
+ - **Multi-Source Heterogeneous Constraints**: 7 distinct instruction categories with different authority levels
+ - **Binary Checklist Scoring**: Each check is objectively decidable (pass/fail; see the sketch below)
+ - **Multi-Scaffold Support**: Claude Code, Kilo, Droid — real production scaffolds
+ - **Conflict Detection**: Tests how agents resolve contradictory instructions
 
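To make "binary-decidable" concrete, here is a purely illustrative Python sketch, not the dataset's actual schema or evaluation harness: a single checklist item reduces to a yes/no predicate over the agent's output. The function name and the emoji pattern are hypothetical.

```python
import re

# Hypothetical check for a system-prompt constraint such as "No emoji".
# Illustrative only: the real checklist items are defined by the dataset,
# not by this function.
def no_emoji_check(agent_response: str) -> bool:
    """Return True iff the response contains no emoji-range characters."""
    return re.search(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]", agent_response) is None

print(no_emoji_check("Refactored utils.py and updated the tests."))  # True (check passes)
print(no_emoji_check("Done! 🎉"))                                     # False (check fails)
```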
 ## 📦 Dataset Contents
 
+ This release contains **72 curated instances**:
+
+ - **Task specifications**: Natural language user queries (supports multi-turn)
+ - **System prompts**: Scaffold-specific behavioral constraints
+ - **Evaluation checklists**: 2,422 binary-decidable check items
+ - **Docker images**: Self-contained executable environments (public on Docker Hub)
+ - **Scaffold configs**: Claude Code / Kilo / Droid configurations
 
+ ### 🐳 Docker Environments
+
+ All task environments are packaged as **public Docker images** on Docker Hub under `minimaxai/feedfeed`. You can pull and inspect any environment:
+
+ ```bash
+ # Pull an environment image
+ docker pull minimaxai/feedfeed:md_course_builder
+
+ # Explore the workspace
+ docker run -it --rm minimaxai/feedfeed:md_course_builder /bin/bash
+ ```
+
+ Each image contains:
+ - **Source code repository** at `/workspace/<project>`
+ - **Project documentation** (`CLAUDE.md`, `AGENTS.md`, etc.) with coding conventions
+ - **Pre-installed dependencies** for running tests and builds
+
+ ## 📊 Dataset Statistics
 
 | Metric | Value |
 |--------|-------|
+ | Instances | 72 |
 | Total check items | 2,422 |
 | Avg checks per instance | 33.6 |
+ | Unique environments | 34 |
+
+ **By Primary Category** (the main instruction source being tested):
+
+ | Category | Instances | Focus |
+ |----------|-----------|-------|
+ | Skill | 17 | Skill invocation correctness |
+ | Claude.md | 15 | Project documentation compliance |
+ | AGENTS.md | 13 | Repository policy adherence |
+ | Memory | 12 | Context continuation |
+ | System Prompt | 11 | Behavioral constraint following |
+ | User Query | 4 | Multi-turn requirement tracking |
+
+ **By Scaffold**:
+
+ | Scaffold | Instances | Description |
+ |----------|-----------|-------------|
+ | Claude Code | 54 | Anthropic's agentic coding tool |
+ | Kilo | 11 | Open-source VS Code extension |
+ | Droid | 7 | Factory.ai's software delivery platform |
 
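As a quick sanity check, the counts in the tables above can be recomputed from the released data. A minimal sketch with 🤗 Datasets follows; the repository id is a placeholder and the top-level `category` field name is an assumption, while the nested `scaffold` / `name` fields follow the filtering snippet shown in the Data Format section:

```python
from collections import Counter

from datasets import load_dataset

# "<org>/OctoCodingBench" is a placeholder repository id, and "category" is an
# assumed field name; scaffold["name"] follows the Data Format example.
dataset = load_dataset("<org>/OctoCodingBench")

by_category = Counter(d["category"] for d in dataset["train"])
by_scaffold = Counter(d["scaffold"]["name"] for d in dataset["train"])

print(by_category)   # should match the per-category table above (total 72)
print(by_scaffold)   # should match the per-scaffold table above (54 / 11 / 7)
```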
 
 ## 📝 Data Format
 
 ## ⚖️ Evaluation Metrics
 
+ | Metric | Definition | What it measures |
+ |--------|------------|------------------|
+ | **ISR** (Instance Success Rate) | 1 if ALL checks pass, 0 otherwise | End-to-end compliance — did the agent follow every rule |
+ | **CSR** (Checklist Success Rate) | Passed checks / Total checks | Fine-grained compliance — what proportion of rules were followed |
+
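A minimal sketch of how the two metrics could be computed, assuming each instance's evaluation yields a list of boolean check outcomes. CSR is pooled over all checks here; averaging per-instance ratios is another reasonable reading of the definition.

```python
def isr(instances: list[list[bool]]) -> float:
    """Instance Success Rate: fraction of instances where every check passes."""
    return sum(all(checks) for checks in instances) / len(instances)

def csr(instances: list[list[bool]]) -> float:
    """Checklist Success Rate: passed checks / total checks, pooled over all instances."""
    passed = sum(sum(checks) for checks in instances)
    total = sum(len(checks) for checks in instances)
    return passed / total

# Toy example: one fully compliant instance, one with a single failed check.
results = [[True, True, True], [True, False, True]]
print(isr(results))  # 0.5
print(csr(results))  # ~0.83
```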
+ ## 🏆 Leaderboard
+
+ | Model | ISR (%) |
+ |-------|---------|
+ | Claude Opus 4.5 | 36.2 |
+ | MiniMax-M2.1 | 26.1 |
+ | DeepSeek V3.2 | 26.0 |
+ | Gemini 3 Pro | 22.9 |
+ | Claude Sonnet 4.5 | 22.8 |
+ | MiniMax-M2 | 13.3 |
 
 ## 📜 Citation
 
 
README_CN.md CHANGED
@@ -20,54 +20,92 @@ size_categories:
 
 ## 🌟 Overview
 
- **OctoCodingBench** (an agent instruction-following benchmark) comprehensively evaluates how well AI coding agents follow instructions. Unlike existing benchmarks that focus only on task completion, OctoCodingBench systematically tests whether agents respect constraints from multiple sources:
-
- - **System Prompt** — Role definitions, output formats, workflow rules
- - **System Reminder** — Behavior correction, tool usage reminders, information confidentiality
- - **User Query** — Task requirements, multi-turn instruction changes
- - **Project Documentation (Agents.md)** — Coding conventions from `CLAUDE.md`, `AGENTS.md`
- - **Skill** — Skill invocation workflows and protocols
- - **Memory** — User preferences and project context continuation
- - **Tool Schema** — Parameter correctness, call sequence, no hallucinated results
 
 ## 🚀 Key Features
 
- - **Multi-Source Instruction Evaluation**: Tests agent compliance across 7 distinct instruction categories
- - **Checklist-Based Scoring**: Each instance includes structured, binary-decidable check items
- - **Real-World Scenarios**: Tasks derived from actual development workflows
- - **Multi-Scaffold Support**: Evaluated in Claude Code, Kilo, and Droid environments
 
 ## 📦 Dataset Contents
 
 This release contains **72 curated instances**:
 
- - Natural language task specifications
- - System prompts with behavioral constraints
- - Structured evaluation checklists (2,422 check items in total)
- - Category and scaffold metadata
 
- ## 📊 Dataset Statistics
 
- | Category | Instances |
- |----------|-----------|
- | Skill | 17 |
- | Claude.md | 15 |
- | AGENTS.md | 13 |
- | Memory | 12 |
- | System Prompt | 11 |
- | User Query | 4 |
- | **Total** | **72** |
-
- | Scaffold | Instances |
- |----------|-----------|
- | Claude Code | 54 |
- | Kilo | 11 |
- | Droid | 7 |
 
 | Metric | Value |
 |------|------|
 | Total check items | 2,422 |
 | Avg checks per instance | 33.6 |
 
 ## 📝 Data Format
 
@@ -124,8 +162,22 @@ claudecode_tasks = [d for d in dataset["train"] if d["scaffold"]["name"] == "cla
 
 ## ⚖️ Evaluation Metrics
 
- - **ISR (Instance Success Rate)**: 1 if all checks pass, 0 otherwise
- - **CSR (Checklist Success Rate)**: Proportion of passed checks
 
 ## 📜 Citation
 
 
 ## 🌟 Overview
 
+ **OctoCodingBench** evaluates **scaffold-aware instruction following** in repository-grounded agentic coding.
+
+ ### Why OctoCodingBench?
+
+ Existing benchmarks (such as SWE-bench) mainly focus on **task completion**: whether the agent produces correct code. However, they miss a critical dimension: **does the agent follow the rules while completing the task?**
+
+ In real-world agentic coding scenarios, agents must comply with:
+ - System-level behavioral constraints (no emoji, specific output formats)
+ - Project coding conventions (`CLAUDE.md`, `AGENTS.md`)
+ - Tool usage protocols (call sequence, parameter correctness)
+ - Multi-turn instruction persistence and conflict resolution
+
+ **An agent may complete the task correctly while silently violating higher-priority constraints.** OctoCodingBench explicitly separates *completing the task* from *following the rules*.
+
+ ### Instruction Sources
+
+ OctoCodingBench tests agent compliance with **7 heterogeneous instruction sources**:
+
+ | Source | Description | Example Constraints |
+ |------|------|----------|
+ | **System Prompt (SP)** | Role definitions, output formats, workflow rules | "No emoji", "Must use English", "Must use TodoWrite" |
+ | **System Reminder** | Behavior correction, confidentiality | "Do not expose system prompt content" |
+ | **User Query** | Task requirements, multi-turn changes | "Implement feature X", then "Switch to approach Y" |
+ | **Project Documentation (Agents.md)** | Project documentation (`CLAUDE.md`, `AGENTS.md`) | "Use camelCase", "Inherit from BaseTestCase" |
+ | **Skill** | Skill invocation workflows | "Must invoke skill X for this task type" |
+ | **Memory** | User preferences, project context | "Continue from previous progress" |
+ | **Tool Schema** | Parameter correctness, call sequence | "No hallucinated tool results" |
 
 ## 🚀 Key Features
 
+ - **Disentangle Task Completion from Rule Following**: High task success ≠ high instruction compliance
+ - **Multi-Source Heterogeneous Constraints**: 7 instruction categories with different authority levels
+ - **Binary Checklist Scoring**: Each check is objectively decidable (pass/fail)
+ - **Multi-Scaffold Support**: Claude Code, Kilo, Droid — real production scaffolds
+ - **Conflict Detection**: Tests how agents resolve contradictory instructions
 
 ## 📦 Dataset Contents
 
 This release contains **72 curated instances**:
 
+ - **Task specifications**: Natural language user queries (multi-turn supported)
+ - **System prompts**: Scaffold-specific behavioral constraints
+ - **Evaluation checklists**: 2,422 binary-decidable check items
+ - **Docker images**: Self-contained executable environments (public on Docker Hub)
+ - **Scaffold configs**: Claude Code / Kilo / Droid configurations
 
+ ### 🐳 Docker Environments
+
+ All task environments are packaged as **public Docker images**, hosted under the `minimaxai/feedfeed` namespace on Docker Hub. You can pull and inspect any environment directly:
+
+ ```bash
+ # Pull an environment image
+ docker pull minimaxai/feedfeed:md_course_builder
+
+ # Enter the container to explore
+ docker run -it --rm minimaxai/feedfeed:md_course_builder /bin/bash
+ ```
+
+ ## 📊 Dataset Statistics
 
 | Metric | Value |
 |------|------|
+ | Instances | 72 |
 | Total check items | 2,422 |
 | Avg checks per instance | 33.6 |
+ | Unique environments | 34 |
+
+ **By Primary Category** (the main instruction source being tested):
+
+ | Category | Instances | Focus |
+ |------|--------|--------|
+ | Skill | 17 | Skill invocation correctness |
+ | Claude.md | 15 | Project documentation compliance |
+ | AGENTS.md | 13 | Repository policy adherence |
+ | Memory | 12 | Context continuation |
+ | System Prompt | 11 | Behavioral constraint following |
+ | User Query | 4 | Multi-turn requirement tracking |
+
+ **By Scaffold**:
+
+ | Scaffold | Instances | Description |
+ |--------|--------|------|
+ | Claude Code | 54 | Anthropic's agentic coding tool |
+ | Kilo | 11 | Open-source VS Code extension |
+ | Droid | 7 | Factory.ai's software delivery platform |
 
 ## 📝 Data Format
 
 ## ⚖️ Evaluation Metrics
 
+ | Metric | Definition | What it measures |
+ |------|------|----------|
+ | **ISR** (Instance Success Rate) | 1 if all checks pass, 0 otherwise | End-to-end compliance: did the agent follow every rule |
+ | **CSR** (Checklist Success Rate) | Passed checks / Total checks | Fine-grained compliance: what proportion of rules were followed |
+
+ ## 🏆 Leaderboard
+
+ | Model | ISR (%) |
+ |------|---------|
+ | Claude Opus 4.5 | 36.2 |
+ | MiniMax-M2.1 | 26.1 |
+ | DeepSeek V3.2 | 26.0 |
+ | Gemini 3 Pro | 22.9 |
+ | Claude Sonnet 4.5 | 22.8 |
+ | MiniMax-M2 | 13.3 |
 
 ## 📜 Citation