|
|
--- |
|
|
license: mit |
|
|
task_categories: |
|
|
- text2text-generation |
|
|
- multiple-choice |
|
|
language: |
|
|
- en |
|
|
- code |
|
|
tags: |
|
|
- code |
|
|
- software engineering |
|
|
size_categories: |
|
|
- n<1K |
|
|
configs: |
|
|
- config_name: default |
|
|
data_files: |
|
|
- split: Benchmark |
|
|
path: CodeReviewQA.jsonl |
|
|
|
|
|
extra_gated_fields: |
|
|
First Name: text |
|
|
Last Name: text |
|
|
Affiliation: text |
|
|
Country: country |
|
|
geo: ip_location |
|
|
I agree to NOT directly train my model on the CodeReviewQA benchmark: checkbox |
|
|
|
|
|
--- |
|
|
<center><h1> CodeReviewQA: The Code Review Comprehension Assessment for Large Language Models </h1></center> |
|
|
|
|
|
<p align="center"> |
|
|
<a href="https://huggingface.co/datasets/Tomo-Melb/CodeReviewQA"><img src="https://img.shields.io/badge/%F0%9F%A4%97-Benchmark-%23FFCC4D"></a> |
|
|
<a href="https://arxiv.org/abs/2503.16167"><img src="https://img.shields.io/badge/arXiv-2503.16167-b31b1b?"></a> |
|
|
<a href="https://github.com/hongyi-tom/CodeReviewQA"><img src="https://img.shields.io/badge/GitHub-Repo-blue?logo=github"></a> |
|
|
<a href="https://github.com/hongyi-tom/CodeReviewQA/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-MIT-green"></a> |
|
|
</p> |
|
|
|
|
|
Automated code refinement aims to automate the developer's role in resolving an actionable code review comment provided by a reviewer.
|
|
This is a generative task: the LLM must revise a pre-review code submission according to the natural language code review comment and produce the intended post-review code revision.
|
|
CodeReviewQA further breaks this generative task down into three intermediate reasoning steps, each framed as a multiple-choice question answering (MCQA) problem, to provide early signals for model development.
|
|
|
|
|
<div align="center"> |
|
|
<img src="./graphics/MCQA_Example.jpeg" alt="MCQA Example" width="70%"/> |
|
|
</div> |
|
|
|
|
|
<center>(The image on the left was generated by <a href="https://openai.com/sora/">Sora</a>)</center> |
|
|
|
|
|
The benchmark features 900 manually curated, high-quality examples across nine programming languages (100 examples each). |
|
|
Each example represents a real interaction between a human reviewer and developer in a collaborative code review scenario. |
|
|
Unlike clear, instruction-style prompts, code review comments are often underspecified, ambiguous, and implicit.
|
|
Thus, this problem assesses LLMs' proficiency in understanding and following conversational instructions in human-oriented software development. |
|
|
For more details, please visit our paper linked below. |
|
|
|
|
|
<div align="center"> |
|
|
<img src="./graphics/pareto.jpeg" alt="Pareto Optimal Radar Chart" width="40%"/> |
|
|
</div> |
|
|
|
|
|
<center>(Our paper includes more comprehensive results from 72 state-of-the-art LLMs)</center> |
|
|
|
|
|
## Dataset Details |
|
|
|
|
|
- **Paper:** https://arxiv.org/abs/2503.16167 |
|
|
- **Point of Contact:** [email protected] |
|
|
- **Repository:** https://github.com/hongyi-tom/CodeReviewQA |
|
|
|
|
|
(The repository contains inference scripts used in our experiments) |
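
A minimal loading sketch with the 🤗 `datasets` library is shown below. It assumes you have accepted the access terms on the dataset page and are authenticated with the Hugging Face Hub (e.g. via `huggingface-cli login`); the split name `Benchmark` comes from the config above.

```python
# Minimal loading sketch (not the official evaluation script).
# Assumes the gated-access terms have been accepted and the environment is
# authenticated with the Hugging Face Hub (e.g. `huggingface-cli login`).
from datasets import load_dataset

ds = load_dataset("Tomo-Melb/CodeReviewQA", split="Benchmark")

print(len(ds))            # 900 examples, 100 per programming language
example = ds[0]
print(example["lang"])    # programming language of this example
print(example["review"])  # the reviewer's code review comment
```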
|
|
|
|
|
|
|
|
## Tasks |
|
|
|
|
|
**Original Problem** (Text-to-Text Generation) |
|
|
- **Automated Code Refinement (ACR):** Given a pre-review code submission and code review comment, generate the post-review code revision that is being requested. |
|
|
|
|
|
**Intermediate Reasoning Steps** (Multiple Choice Question Answering) |
|
|
- **Change Type Recognition (CTR):** Given a pre-review code submission and code review comment, infer the general code change type that is being requested. |
|
|
- **Change Localisation (CL):** Given a pre-review code submission and code review comment, locate the precise lines of code that need to be revised. |
|
|
- **Solution Identification (SI):** Given a pre-review code submission and code review comment, identify the exact code revision that is being requested. |
|
|
|
|
|
(Both Change Localisation and Solution Identification have easy (E) and hard (H) variants, where the hard variant represents an adversarial setup.)
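
As a rough illustration, the five MCQA settings can be wired to the dataset fields documented under Data Fields below. This mapping is a sketch based on that section, not the paper's official configuration; the exact prompt templates live in the GitHub repository.

```python
# Illustrative mapping from each MCQA setting to the fields that supply the
# correct answer and the distractor options (see the Data Fields section).
MCQA_SETTINGS = {
    "CTR":  {"correct": "type_correct",     "wrong": "type_wrong"},          # 3 options
    "CL-E": {"correct": "loc_correct",      "wrong": "loc_wrong_easy"},      # 4 options
    "CL-H": {"correct": "loc_correct",      "wrong": "loc_wrong_hard"},      # 4 options
    "SI-E": {"correct": "solution_correct", "wrong": "solution_wrong_easy"}, # 4 options
    "SI-H": {"correct": "solution_correct", "wrong": "solution_wrong_hard"}, # 4 options
}
```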
|
|
|
|
|
## Included Languages |
|
|
|
|
|
- **Natural Language:** English |
|
|
- **Programming Languages:** C, C++, C#, Go, Java, JavaScript, PHP, Python, Ruby
|
|
|
|
|
## Data Fields |
|
|
|
|
|
General |
|
|
- `old` (string): Pre-review code submission (hunk-level granularity)
|
|
- `new` (string): Post-review code revision (hunk-level granularity)
|
|
- `review` (string): Actionable natural language code review comment |
|
|
|
|
|
Change Type Recognition |
|
|
- `type_correct` (string): Ground truth change type |
|
|
- `type_wrong` (list): Two incorrect change types |
|
|
|
|
|
Change Localisation |
|
|
- `loc_correct` (list): Ground truth set of changed lines |
|
|
- `loc_wrong_easy` (list): Three incorrect sets of changed lines (low Jaccard similarity between answer sets)
|
|
- `loc_wrong_hard` (list): Three incorrect sets of changed lines (high Jaccard similarity between answer sets)
|
|
|
|
|
Solution Identification |
|
|
- `solution_correct` (string): Ground truth post-review code revision with line numbers
|
|
- `solution_wrong_easy` (list): Three incorrect post-review code revisions with line numbers (low cosine similarity with the ground truth)
|
|
- `solution_wrong_hard` (list): Three incorrect post-review code revisions with line numbers (high cosine similarity with the ground truth)
|
|
|
|
|
Additional Information |
|
|
- `lang` (string): Programming language used in the code submission/revision |
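
To give a concrete sense of how these fields compose into a question, below is a minimal sketch that formats one shuffled multiple-choice prompt from a benchmark example. The option lettering, shuffling, and wording are illustrative, not the exact templates used in the paper.

```python
import random

def build_mcqa_prompt(example, correct_field, wrong_field, seed=0):
    """Format one shuffled multiple-choice question from a benchmark example.

    `correct_field` / `wrong_field` are field names from this section,
    e.g. ("solution_correct", "solution_wrong_hard") for the SI hard setting.
    """
    options = [example[correct_field]] + list(example[wrong_field])
    random.Random(seed).shuffle(options)
    letters = "ABCD"[: len(options)]
    answer = letters[options.index(example[correct_field])]

    prompt = (
        "Pre-review code submission:\n" + example["old"] + "\n\n"
        "Code review comment:\n" + example["review"] + "\n\n"
        "Choose the correct option (answer with a single letter):\n"
        + "\n".join(f"{letter}. {option}" for letter, option in zip(letters, options))
    )
    return prompt, answer
```

For instance, `build_mcqa_prompt(example, "solution_correct", "solution_wrong_hard")` returns the Solution Identification (hard) question for one example together with its gold answer letter.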
|
|
|
|
|
## Authors |
|
|
|
|
|
- Hong Yi Lin, The University of Melbourne |
|
|
- Chunhua Liu, The University of Melbourne |
|
|
- Haoyu Gao, The University of Melbourne |
|
|
- Patanamon Thongtanunam, The University of Melbourne |
|
|
- Christoph Treude, Singapore Management University |
|
|
|
|
|
## Data Source |
|
|
|
|
|
The code review examples were mined from closed pull requests of open-source GitHub projects.
|
|
These examples were originally provided by the authors of the following paper. |
|
|
|
|
|
Guo, Q., Cao, J., Xie, X., Liu, S., Li, X., Chen, B. and Peng, X., 2024. Exploring the Potential of ChatGPT in Automated Code Refinement: An Empirical Study. In Proceedings of the 46th IEEE/ACM International Conference on Software Engineering (ICSE), pp. 1-13.
|
|
|
|
|
## Licensing Information |
|
|
|
|
|
The CodeReviewQA benchmark is licensed under the [MIT License](https://opensource.org/license/MIT). |
|
|
|
|
|
## Citation Information |
|
|
|
|
|
``` |
|
|
@article{lin2025codereviewqa, |
|
|
title={CodeReviewQA: The Code Review Comprehension Assessment for Large Language Models}, |
|
|
author={Lin, Hong Yi and Liu, Chunhua and Gao, Haoyu and Thongtanunam, Patanamon and Treude, Christoph}, |
|
|
journal={arXiv preprint arXiv:2503.16167}, |
|
|
year={2025} |
|
|
} |
|
|
``` |
|
|
|