4.6 Article

An Empirical Study on the Usage of Transformer Models for Code Completion

Journal

IEEE TRANSACTIONS ON SOFTWARE ENGINEERING
Volume 48, Issue 12, Pages 4818-4837

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TSE.2021.3128234

Keywords

Code completion; deep learning; empirical software engineering

Funding

  1. European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme [851720]
  2. NSF [CCF-1955853, CCF-2007246]

Abstract

Code completion aims at speeding up code writing by predicting the next code token(s) the developer is likely to write. Works in this field focused on improving the accuracy of the generated predictions, with substantial leaps forward made possible by deep learning (DL) models. However, code completion techniques are mostly evaluated in the scenario of predicting the next token to type, with few exceptions pushing the boundaries to the prediction of an entire code statement. Thus, little is known about the performance of state-of-the-art code completion approaches in more challenging scenarios in which, for example, an entire code block must be generated. We present a large-scale study exploring the capabilities of state-of-the-art Transformer-based models in supporting code completion at different granularity levels, including single tokens, one or multiple entire statements, up to entire code blocks (e.g., the iterated block of a for loop). We experimented with several variants of two recently proposed Transformer-based models, namely RoBERTa and the Text-To-Text Transfer Transformer (T5), for the task of code completion. The achieved results show that Transformer-based models, and in particular the T5, represent a viable solution for code completion, with perfect predictions ranging from ~29%, obtained when asking the model to guess entire blocks, up to ~69%, reached in the simpler scenario of few tokens masked from the same code statement.
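
To illustrate the masked-span completion setup described in the abstract, the sketch below shows how a T5-style encoder-decoder can be asked to fill a masked portion of a code statement. It is a minimal illustration only: it assumes the Hugging Face transformers library and uses the generic public t5-small checkpoint as a stand-in, not the code-trained models studied and released by the paper's authors.

    # Illustrative sketch only: a generic T5 checkpoint (t5-small) stands in for
    # the code-trained T5 models evaluated in the paper. T5 marks masked spans
    # with sentinel tokens (<extra_id_0>, <extra_id_1>, ...) and generates their
    # content as output.
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    model_name = "t5-small"  # placeholder; not the checkpoint used in the paper
    tokenizer = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name)

    # A Java statement with a few tokens masked, mirroring the simplest
    # "few tokens masked from the same statement" scenario.
    masked_code = "for (int i = 0; i < <extra_id_0>; i++) { sum += values[i]; }"

    inputs = tokenizer(masked_code, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=16)

    # The decoded output is the model's guess for the masked span; a model
    # trained on code would ideally produce something like "values.length".
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

The block-level scenario studied in the paper is analogous but masks a whole statement or block (e.g., the body of the for loop) rather than a few tokens, making the prediction task considerably harder.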
