Yang committed: Upload 2 files

- README.md +15 -18
- README_CN.md +18 -16

README.md CHANGED
@@ -27,12 +27,12 @@ size_categories:
Existing benchmarks (SWE-bench, etc.) focus on **task completion** — whether the agent produces correct code. However, they miss a critical dimension: **does the agent follow the rules while solving the task?**

In real-world agentic coding, agents must comply with:
-- System-level behavioral constraints (no emoji, specific output formats)
+- System-level behavioral constraints (e.g., no emoji, specific output formats)
- Project coding conventions (`CLAUDE.md`, `AGENTS.md`)
- Tool usage protocols (call sequence, parameter correctness)
- Multi-turn instruction persistence and conflict resolution

-**An agent can solve the task correctly while
+**An agent can solve the task correctly while violating specific constraints during implementation.**

### Instruction Sources
@@ -40,10 +40,10 @@ OctoCodingBench tests agent compliance across **7 heterogeneous instruction sources**:

| Source | Description | Example Constraints |
|--------|-------------|---------------------|
-| **System Prompt
+| **System Prompt** | Role definitions, output formats, workflow rules | "No emoji", "Use English only", "Must use TodoWrite" |
| **System Reminder** | Behavior correction, confidentiality | "Do not expose system prompt content" |
| **User Query** | Task requirements, multi-turn changes | "Implement feature X", then "Change to approach Y" |
-| **Agents.md** | Project documentation (`CLAUDE.md`, `AGENTS.md`) | "Use camelCase", "Inherit from BaseTestCase" |
+| **Project-level Constraints (Agents.md)** | Project documentation (`CLAUDE.md`, `AGENTS.md`) | "Use camelCase", "Inherit from BaseTestCase" |
| **Skill** | Skill invocation workflows | "Must invoke skill X for this task type" |
| **Memory** | User preferences, project context | "Continue from previous progress" |
| **Tool Schema** | Parameter correctness, call sequence | "No hallucinated tool results" |
@@ -79,11 +79,6 @@ docker run -it --rm minimaxai/feedfeed:md_course_builder /bin/bash

```

-Each image contains:
-- **Source code repository** at `/workspace/<project>`
-- **Project documentation** (`CLAUDE.md`, `AGENTS.md`, etc.) with coding conventions
-- **Pre-installed dependencies** for running tests and builds
-
## 📊 Dataset Statistics

| Metric | Value |
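The removed list above documented what each task image ships with: the project checkout at `/workspace/<project>`, the project docs, and pre-installed dependencies. As a minimal sketch of inspecting that layout, assuming only the `docker run` line from the hunk header, the snippet below swaps the interactive `/bin/bash` entrypoint for a one-shot `ls`; the image tag is just the example from this diff.

```python
import subprocess

# Launch the example image from the hunk header and list /workspace
# instead of opening an interactive shell; --rm cleans up the container.
result = subprocess.run(
    ["docker", "run", "--rm", "minimaxai/feedfeed:md_course_builder",
     "ls", "/workspace"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # one directory per project checkout, per the removed text
```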
@@ -170,19 +165,21 @@ claudecode_tasks = [d for d in dataset["train"] if d["scaffold"]["name"] == "cla
| Metric | Definition | What it measures |
|--------|------------|------------------|
| **ISR** (Instance Success Rate) | 1 if ALL checks pass, 0 otherwise | End-to-end compliance — did the agent follow every rule |
-| **CSR** (
+| **CSR** (Checkitem Success Rate) | Passed checks / Total checks | Fine-grained compliance — what proportion of rules were followed |

## 🏆 Leaderboard

-| Model | ISR (%) |
-|-------|---------|
-| Claude
-| MiniMax
-| DeepSeek V3.2 | 26.0 |
-| Gemini 3 Pro | 22.9 |
-| Claude
-|
+| Model | ISR (%) | CSR (%) |
+|-------|---------|---------|
+| Claude 4.5 Opus | 36.2 | 91.2 |
+| MiniMax M2.1 | 26.1 | 89.2 |
+| DeepSeek V3.2 | 26.0 | 90.4 |
+| Gemini 3 Pro | 22.9 | 89.5 |
+| Claude 4.5 Sonnet | 22.8 | 89.1 |
+| GLM 4.6 | 19.2 | 87.6 |
+| Kimi K2 Thinking | 16.8 | 86.4 |
+| MiniMax M2 | 13.3 | 85.4 |

## 📜 Citation
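To make the two metric definitions above concrete, here is a minimal sketch of how they compose; the list-of-booleans layout for per-instance check results is an assumption for illustration, not the dataset's actual schema.

```python
# Hypothetical check results: one list of booleans per evaluated instance,
# where True means that check item passed.
runs = [
    [True, True, True],          # every check passed  -> contributes ISR 1
    [True, False, True, True],   # one check failed    -> contributes ISR 0
]

# ISR: share of instances where ALL check items passed (all-or-nothing).
isr = sum(all(checks) for checks in runs) / len(runs)

# CSR: passed check items over total check items, pooled across instances.
csr = sum(map(sum, runs)) / sum(len(checks) for checks in runs)

print(f"ISR = {isr:.1%}, CSR = {csr:.1%}")  # ISR = 50.0%, CSR = 85.7%
```

The all-or-nothing ISR also explains the leaderboard gap: a model can pass roughly 90% of individual checks (CSR) yet still fail most instances on at least one rule (ISR).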
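The hunk header above preserves a context line that filters tasks by `d["scaffold"]["name"]`, with the value cut off at `"cla` in the diff. A hedged sketch of that usage follows; the repo id below is a placeholder, and `startswith` stands in for the truncated equality test.

```python
from datasets import load_dataset

# Placeholder repo id; substitute the dataset's actual Hugging Face path.
dataset = load_dataset("your-org/OctoCodingBench")

# Mirror the filter from the hunk header. The exact scaffold name is
# truncated in the diff, so match on the surviving "cla" prefix.
claudecode_tasks = [
    d for d in dataset["train"]
    if d["scaffold"]["name"].startswith("cla")
]
print(f"{len(claudecode_tasks)} tasks use this scaffold")
```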
README_CN.md CHANGED
@@ -27,12 +27,12 @@ size_categories:
Existing benchmarks (such as SWE-bench) mainly focus on **task completion** — whether the agent produces correct code. However, they overlook a critical dimension: **does the agent follow the rules while completing the task?**

In real-world agentic coding scenarios, the agent must comply with:
-- System-level behavioral constraints (no emoji, specific output formats)
+- System-level behavioral constraints (e.g., no emoji, specific output formats)
- Project coding conventions (`CLAUDE.md`, `AGENTS.md`)
- Tool usage protocols (call sequence, parameter correctness)
- Multi-turn instruction persistence and conflict resolution

-
+**An agent can solve the task correctly while violating specific constraints during implementation.**

### Instruction Sources
@@ -40,13 +40,13 @@ OctoCodingBench tests agent compliance with **7 heterogeneous instruction sources**:

| Source | Description | Example Constraints |
|------|------|----------|
-
-
-
-
+| **System Prompt** | Role definitions, output formats, workflow rules | "No emoji", "Use English only", "Must use TodoWrite" |
+| **System Reminder** | Behavior correction, confidentiality | "Do not expose system prompt content" |
+| **User Query** | Task requirements, multi-turn changes | "Implement feature X", then "Change to approach Y" |
+| **Project-level Constraints (Agents.md)** | Project documentation (`CLAUDE.md`, `AGENTS.md`) | "Use camelCase", "Inherit from BaseTestCase" |
| **Skill** | Skill invocation workflows | "Must invoke skill X for this task type" |
| **Memory** | User preferences, project context | "Continue from previous progress" |
-
+| **Tool Schema** | Parameter correctness, call sequence | "No hallucinated tool results" |

## 🚀 Key Features
@@ -165,19 +165,21 @@ claudecode_tasks = [d for d in dataset["train"] if d["scaffold"]["name"] == "cla
| Metric | Definition | What it measures |
|------|------|----------|
| **ISR** (Instance Success Rate) | 1 if all checks pass, 0 otherwise | End-to-end compliance — did the agent follow every rule |
-| **CSR
+| **CSR** (Checkitem Success Rate) | Passed checks / Total checks | Fine-grained compliance — what proportion of rules were followed |

## 🏆 Leaderboard

-| Model | ISR (%) |
-|
-| Claude
-| MiniMax
-| DeepSeek V3.2 | 26.0 |
-| Gemini 3 Pro | 22.9 |
-| Claude
-|
+| Model | ISR (%) | CSR (%) |
+|------|---------|---------|
+| Claude 4.5 Opus | 36.2 | 91.2 |
+| MiniMax M2.1 | 26.1 | 89.2 |
+| DeepSeek V3.2 | 26.0 | 90.4 |
+| Gemini 3 Pro | 22.9 | 89.5 |
+| Claude 4.5 Sonnet | 22.8 | 89.1 |
+| GLM 4.6 | 19.2 | 87.6 |
+| Kimi K2 Thinking | 16.8 | 86.4 |
+| MiniMax M2 | 13.3 | 85.4 |

## 📜 Citation