---
license: apache-2.0
tags:
  - git
  - code
size_categories:
  - n<1K
---

## Dataset Summary

GitGoodBench Lite is a subset of 120 samples for evaluating the performance of AI agents on git tasks (see Supported Tasks). The samples are evenly split across the programming languages Python, Java, and Kotlin and the two sample types, merge conflict resolution and file-commit chain. The dataset thus contains 20 samples per sample type and programming language.

All data in this dataset is collected from 100 unique, open-source GitHub repositories with permissive licenses that have >= 1000 stars, >= 5 branches, and >= 10 contributors, and are neither forks nor archived. We collected the initial list of repositories using SEART.

Evaluation is to be performed by exact-match (EM) of diffs for the merge conflict setting and by LLM-as-a-Judge for the file-commit chain setting. For further details see our paper.
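As an illustration, the exact-match check for the merge setting can be thought of as comparing the agent's resolved working tree against the ground-truth merge commit. The sketch below is a minimal, hypothetical version of such a check (it is not the official evaluation harness; see the paper for the exact protocol):

```python
import subprocess

def resolution_matches_ground_truth(repo_path: str, merge_commit_hash: str) -> bool:
    """Exact-match (EM) check: after the agent has resolved all conflicts,
    the working tree should be identical to the ground-truth merge commit.
    `git diff <commit>` prints nothing when the working tree matches it."""
    result = subprocess.run(
        ["git", "diff", merge_commit_hash],
        cwd=repo_path, capture_output=True, text=True, check=True,
    )
    return result.stdout.strip() == ""
```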

## Supported Tasks

GitGoodBench Lite contains two types of samples: `merge` and `file_commit_chain`. Note that the `file_commit_chain` sample type can be used for two scenario types: performing an interactive rebase to clean up the local tree, or iteratively generating commits from the staged, uncommitted changes.

### Merge

Merge scenarios contain one or more merge conflicts that occurred during a merge. All merge conflicts are guaranteed to be in a Python, Java, or Kotlin file. Our dataset only contains merges with exactly two parents (no octopus merges).

A merge scenario looks as follows:

```json
{
  "merge_commit_hash": "baa37f65fdff5b780a50d5b5c6bf8bc3ade43815",
  "parents": ["d758810c59a9134f437d60f73a82036749688ccb", "5dcd493c67ff863c69c1214f0892a80e4951087e"],
  "number_of_files_with_merge_conflict": 2,
  "total_number_of_merge_conflicts": 2,
  "files_in_merge_conflict": ["cogs/gpt_3_commands_and_converser.py", "models/openai_model.py"]
}
```

Here `merge_commit_hash` is the ground-truth merge commit, and `parents` are the two commits whose merge produced the conflict(s) in `files_in_merge_conflict`.
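To make the setup concrete, the following sketch replays such a merge locally so that an agent sees the historical conflict markers. It assumes the repository has already been cloned; the helper names are ours, not part of the dataset:

```python
import subprocess

def replay_merge(repo_path: str, scenario: dict) -> list[str]:
    """Check out the first parent and merge the second parent without
    committing, reproducing the conflicts the ground-truth merge resolved."""
    def git(*args: str) -> subprocess.CompletedProcess:
        return subprocess.run(["git", *args], cwd=repo_path,
                              capture_output=True, text=True)

    git("checkout", "--force", scenario["parents"][0])
    # A non-zero exit code is expected here: the merge stops on conflicts
    # and leaves conflict markers in the working tree.
    git("merge", "--no-commit", "--no-ff", scenario["parents"][1])
    # Unmerged (conflicted) paths; these should match `files_in_merge_conflict`.
    conflicted = git("diff", "--name-only", "--diff-filter=U").stdout.split()
    return conflicted
```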

### File-Commit Chain

File-commit chain scenarios consist of two commits, the oldest and the newest commit of the chain. `file` was modified in every commit between `oldest_commit` and `newest_commit` (inclusive); in total, the chain consists of `times_seen_consecutively` commits. These scenarios are intended to evaluate an agent's capacity to create meaningful, cohesive commits or to improve the local tree via rebasing. Samples of this `sample_type` therefore cover two scenario types.

File-commit chains are at least 3 commits long, the file the sample concerns is guaranteed to be of `programming_language` (this is not guaranteed for other files touched by the sample's commits), and no commit in the chain is a merge commit.

A `file_commit_chain` scenario looks as follows:

```json
{
  "file": "composer/models/huggingface.py",
  "branch": "origin/vincent-mlflow-logger-verbose",
  "times_seen_consecutively": 3,
  "purity": 0.68,
  "newest_commit": "c24b29f19c4c131a3ea7098dd8b8a5edde344819",
  "oldest_commit": "c1ff80900f46d4e36feb4b326689fe14fc41cbc6"
}
```

`purity` indicates the relative amount of changes in the chain that occurred solely in `file` and is a heuristic for the difficulty of the scenario: we expect noisier scenarios to be more difficult.
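For illustration, such a chain can be materialized locally. The sketch below assumes the repository has been cloned and the scenario's `branch` fetched; the helper name is ours, not part of the dataset:

```python
import subprocess

def chain_commits(repo_path: str, scenario: dict) -> list[str]:
    """Return the commits of the file-commit chain, oldest first."""
    rev_range = f"{scenario['oldest_commit']}~1..{scenario['newest_commit']}"
    result = subprocess.run(
        ["git", "log", "--format=%H", rev_range, "--", scenario["file"]],
        cwd=repo_path, capture_output=True, text=True, check=True,
    )
    commits = result.stdout.split()[::-1]  # git log lists newest first
    # Every commit in the range modifies `file`, so the chain length
    # should equal `times_seen_consecutively`.
    assert len(commits) == scenario["times_seen_consecutively"]
    return commits
```

Under this definition, a `purity` of 0.68 would mean that roughly two-thirds of the changes across these commits touch `file` itself, with the rest spread over other files.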

## Dataset Structure

The following table provides per-field details. Columns marked “Yes” under Is Metadata? are those that provide contextual or descriptive information but are not essential to the primary scenario logic.

| Field | Type | Description | Is Metadata? |
|---|---|---|---|
| `id` | string | A unique identifier for the dataset entry. | No |
| `name` | string | The repository name, in "owner/repository" format. | No |
| `default_branch` | string | The primary or default branch of the repository. | No |
| `license` | string | Repository license. | Yes |
| `stargazers` | integer | The number of stars on GitHub. | Yes |
| `created_at` | string | The repository creation date. | Yes |
| `topics` | string | A semicolon-delimited list of topics or tags associated with the repository. | Yes |
| `programming_language` | string | The programming language of the sample. Possible values: "java", "python", or "kotlin". | No |
| `scenario` | string | A JSON string describing scenario-specific data (e.g., merge-conflict details, parent commits). | No |
| `sample_type` | string | The type of sample. Possible values: "merge" or "file_commit_chain". | No |
| `project_size` | string | Estimated size based on lines of code. Possible values: "tiny", "small", "medium", "large", or "huge". | Yes |
| `difficulty` | string | The complexity level of the scenario. Possible values: "easy", "medium", or "hard". | Yes |

Note:

- Fields marked Is Metadata? = Yes provide contextual information (e.g., project stats, licensing) rather than forming the core logic of a scenario.
- Fields marked No represent the primary data for the scenario. Use them to inform or categorize the scenario type and project details.
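In practice, the non-metadata fields are consumed together with the parsed `scenario` JSON string. A minimal loading sketch follows; the Hub dataset id and split name are assumptions, so substitute the actual path of this dataset:

```python
import json
from datasets import load_dataset

# Hypothetical dataset id and split; adjust to this dataset's actual Hub path.
ds = load_dataset("JetBrains-Research/git_good_bench-lite", split="test")

for sample in ds:
    scenario = json.loads(sample["scenario"])  # `scenario` is stored as a JSON string
    if sample["sample_type"] == "merge":
        print(sample["name"], scenario["merge_commit_hash"],
              scenario["files_in_merge_conflict"])
    else:  # "file_commit_chain"
        print(sample["name"], scenario["file"],
              scenario["times_seen_consecutively"])
```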

## Dataset Statistics

We provide some statistics on the diversity of our dataset with respect to repositories and merge conflict resolution samples.

### Dataset Skew

Our dataset is skewed towards the top three repositories in particular; however, the skew flattens quickly.

### Distribution Statistics

- Total number of repositories (count): 100
- Average (mean) samples per repository: 1.2
- Standard deviation (std): 0.79
- Minimum (min): 1
- 25th percentile (25%): 1
- Median (50%): 1
- 75th percentile (75%): 1
- Maximum (max): 8
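These figures can be reproduced from the dataset itself; a brief sketch, reusing the `ds` object from the loading example above:

```python
# Assuming `ds` was loaded as in the sketch above:
df = ds.to_pandas()

samples_per_repo = df["name"].value_counts()
print(samples_per_repo.describe())  # count, mean, std, min, quartiles, max

# Top-10 repositories by share of total samples, cf. the table below.
print((samples_per_repo.head(10) / len(df) * 100).round(2))
```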

### Top-10 Repositories by Sample Count

| Repository | Percentage of Total Samples |
|---|---|
| oss-review-toolkit/ort | 6.67% |
| stripe/stripe-android | 2.50% |
| element-hq/element-android | 2.50% |
| apache/hive | 1.67% |
| coil-kt/coil | 1.67% |
| wikimedia/apps-android-wikipedia | 1.67% |
| facebookresearch/habitat-lab | 1.67% |
| liquibase/liquibase | 1.67% |
| google/guava | 1.67% |
| kotlin/kotlinx.coroutines | 1.67% |

### Difficulty Distribution for "merge" Scenarios

| Difficulty | Percentage |
|---|---|
| easy | 51.67% |
| medium | 21.67% |
| hard | 26.67% |

### Languages

The text data in this dataset consists mostly of commit messages and comments and is primarily in English. However, we do not explicitly filter for any human language.

## Cite Us

```bibtex
@inproceedings{lindenbauer-etal-2025-gitgoodbench,
    title = "{G}it{G}ood{B}ench: A Novel Benchmark For Evaluating Agentic Performance On Git",
    author = "Lindenbauer, Tobias  and
      Bogomolov, Egor  and
      Zharov, Yaroslav",
    editor = "Kamalloo, Ehsan  and
      Gontier, Nicolas  and
      Lu, Xing Han  and
      Dziri, Nouha  and
      Murty, Shikhar  and
      Lacoste, Alexandre",
    booktitle = "Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.realm-1.19/",
    doi = "10.18653/v1/2025.realm-1.19",
    pages = "272--288",
    ISBN = "979-8-89176-264-0",
    abstract = "Benchmarks for Software Engineering (SE) AI agents, most notably SWE-bench, have catalyzed progress in programming capabilities of AI agents. However, they overlook critical developer workflows such as Version Control System (VCS) operations. To address this issue, we present GitGoodBench, a novel benchmark for evaluating AI agent performance on Version Control System (VCS) tasks. GitGoodBench covers three core Git scenarios extracted from permissive open-source Python, Java, and Kotlin repositories. Our benchmark provides three datasets: a comprehensive evaluation suite (900 samples), a rapid prototyping version (120 samples), and a training corpus (17,469 samples). We establish baseline performance on the prototyping version of our benchmark using GPT-4o equipped with custom tools, achieving a 21.11{\%} solve rate overall. We expect GitGoodBench to serve as a crucial stepping stone toward truly comprehensive SE agents that go beyond mere programming."
}
```