Update README.md

**Languages** We note that the text data in this dataset consists mostly of commit messages and comments and is primarily in English. We do not, however, explicitly filter for any human languages.
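Since no explicit language filtering is applied, downstream users who need an English-only subset can filter it themselves. Below is a minimal sketch using the Hugging Face `datasets` library together with `langdetect`; the dataset identifier and the `message` column name are placeholders, so adjust them to the actual repository id and schema:

```python
# Minimal sketch: keep only samples whose commit message is detected as English.
# Assumes `pip install datasets langdetect`; the dataset id and the "message"
# column below are placeholders -- check the dataset card for the actual schema.
from datasets import load_dataset
from langdetect import detect

ds = load_dataset("your-org/your-dataset", split="train")  # placeholder id

def is_english(example):
    try:
        return detect(example["message"]) == "en"
    except Exception:  # langdetect raises on empty or undetectable text
        return False

english_only = ds.filter(is_english)
print(f"Kept {len(english_only)} of {len(ds)} samples")
```
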
# Cite Us

```bibtex
@inproceedings{lindenbauer-etal-2025-gitgoodbench,
    title = "{G}it{G}ood{B}ench: A Novel Benchmark For Evaluating Agentic Performance On Git",
    author = "Lindenbauer, Tobias and
      Bogomolov, Egor and
      Zharov, Yaroslav",
    editor = "Kamalloo, Ehsan and
      Gontier, Nicolas and
      Lu, Xing Han and
      Dziri, Nouha and
      Murty, Shikhar and
      Lacoste, Alexandre",
    booktitle = "Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.realm-1.19/",
    doi = "10.18653/v1/2025.realm-1.19",
    pages = "272--288",
    ISBN = "979-8-89176-264-0",
    abstract = "Benchmarks for Software Engineering (SE) AI agents, most notably SWE-bench, have catalyzed progress in programming capabilities of AI agents. However, they overlook critical developer workflows such as Version Control System (VCS) operations. To address this issue, we present GitGoodBench, a novel benchmark for evaluating AI agent performance on Version Control System (VCS) tasks. GitGoodBench covers three core Git scenarios extracted from permissive open-source Python, Java, and Kotlin repositories. Our benchmark provides three datasets: a comprehensive evaluation suite (900 samples), a rapid prototyping version (120 samples), and a training corpus (17,469 samples). We establish baseline performance on the prototyping version of our benchmark using GPT-4o equipped with custom tools, achieving a 21.11{\%} solve rate overall. We expect GitGoodBench to serve as a crucial stepping stone toward truly comprehensive SE agents that go beyond mere programming."
}
```