3.8 Proceedings Paper

Using Pre-Trained Models to Boost Code Review Automation

Publisher

IEEE Computer Society
DOI: 10.1145/3510003.3510621

Keywords

Code Review; Empirical Study; Machine Learning on Code

Funding

  1. European Research Council (ERC) under the European Union [851720]
  2. NSF [CCF-1955853, CCF-2007246]

Abstract

Code review is a widely adopted practice in open source and industrial projects. This paper introduces a method for automating code review tasks using deep learning models and demonstrates that a pre-trained T5 model can outperform previous DL models. Furthermore, experiments were conducted on a larger, more realistic, and challenging dataset of code review activities.

Code review is a practice widely adopted in open source and industrial projects. Given the non-negligible cost of such a process, researchers started investigating the possibility of automating specific code review tasks. We recently proposed Deep Learning (DL) models targeting the automation of two tasks: the first model takes as input a code submitted for review and implements in it changes likely to be recommended by a reviewer; the second takes as input the submitted code and a reviewer comment posted in natural language and automatically implements the change required by the reviewer. While the preliminary results we achieved are encouraging, both models had been tested in rather simple code review scenarios, substantially simplifying the targeted problem. This was also due to the choices we made when designing both the technique and the experiments. In this paper, we build on top of that work by demonstrating that a pre-trained Text-To-Text Transfer Transformer (T5) model can outperform previous DL models for automating code review tasks. Also, we conducted our experiments on a larger and more realistic (and challenging) dataset of code review activities.
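
Both tasks described in the abstract are cast as text-to-text problems, which is why a pre-trained Text-To-Text Transfer Transformer (T5) is a natural fit. The sketch below is a minimal illustration of that framing using the Hugging Face transformers library; the t5-small checkpoint, the task prefixes, and the <sep> separator are placeholder assumptions for illustration only, not the authors' released models or data format.

# Minimal sketch (not the authors' artifact): frames the two code review tasks
# from the abstract as text-to-text generation with a pre-trained T5.
# Assumptions: the "t5-small" checkpoint, the task prefixes, and the "<sep>"
# separator are placeholders; a real system would fine-tune on code review data.
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL_NAME = "t5-small"  # placeholder checkpoint, not the paper's fine-tuned model
tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)


def revise_code(submitted_code: str) -> str:
    """Task 1 (code-to-code): predict the revision a reviewer would likely request."""
    inputs = tokenizer("code2code: " + submitted_code,
                       return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_length=512, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


def implement_comment(submitted_code: str, reviewer_comment: str) -> str:
    """Task 2 (code + comment to code): implement the change the reviewer asked for."""
    source = f"comment2code: {reviewer_comment} <sep> {submitted_code}"
    inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_length=512, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


if __name__ == "__main__":
    code = "public int sum(int a, int b) { return a - b; }"
    print(revise_code(code))
    print(implement_comment(code, "The method should add the operands, not subtract."))

Note that an off-the-shelf t5-small will not produce meaningful code edits; as the abstract explains, the model is pre-trained and then trained on code review data, so this snippet only shows the input/output shape of the two tasks.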
