Update README.md
README.md CHANGED
@@ -57,7 +57,7 @@ We adopt a three-step workflow labeling process:
This process yields **172 enterprise-grade workflows—primarily multi-task composite workflows**, involving 1,710 spreadsheets and 27 million cells, capturing the intrinsic **messy, long-horizon, knowledge-intensive, and collaborative nature** of real-world finance & accounting work. In this release, we provide full annotations for the first 72 workflows, with the remaining 100 to be released in a subsequent update.
-<img src="figs/distribution_chart.
+<img src="figs/distribution_chart.png" width="1000" />
We conduct both human and automated evaluations of frontier AI systems, including GPT-5.1, Claude Sonnet 4.5, Gemini 3 Pro, Grok 4, and Qwen 3 Max. GPT-5.1 Pro spends 48 hours in total yet passes only 38.4% of workflows, while Claude Sonnet 4.5 passes just 25.0%, revealing a substantial performance gap for real-world enterprise scenarios.