Article

SeTransformer: A Transformer-Based Code Semantic Parser for Code Comment Generation

Journal

IEEE Transactions on Reliability
Volume 72, Issue 1, Pages 258-273

Publisher

IEEE (Institute of Electrical and Electronics Engineers), Inc.
DOI: 10.1109/TR.2022.3154773

Keywords

Codes; Transformers; Computational modeling; Training; Convolutional neural networks; Feature extraction; Convolution; Code comment generation; convolutional neural network (CNN); deep learning; program comprehension; Transformer

Abstract

Automated code comment generation technologies can help developers understand code intent, which can significantly reduce the cost of software maintenance and revision. The latest studies in this field mainly depend on deep neural networks, such as convolutional neural networks and recurrent neural networks. However, these methods may not generate high-quality, readable code comments because of the long-term dependence problem: the code blocks needed to summarize a method may lie far apart in the input sequence, so these methods forget feature information from earlier inputs during training. In this article, to solve the long-term dependence problem and to extract both text and structure information from program code, we propose a novel improved-Transformer-based comment generation method, named SeTransformer. Specifically, SeTransformer takes the code tokens and the abstract syntax tree (AST) of a program as inputs, and then leverages the self-attention mechanism to analyze the textual and structural features of the code simultaneously. Experimental results on a public corpus gathered from large-scale open-source projects show that our method significantly outperforms five state-of-the-art baselines (such as Hybrid-DeepCom and AST-attendgru). Furthermore, a questionnaire survey of developers shows that SeTransformer generates higher-quality comments than the other baselines.
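To make the dual-input idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of the general scheme: one self-attention encoder over the code-token sequence, a second over a linearized AST, with the two memories fused before a Transformer decoder generates the comment. This is not the authors' SeTransformer implementation; the class name DualInputCommentModel, the concatenation-based fusion, the layer sizes, and the omission of positional encodings are all simplifying assumptions for illustration.

# Illustrative sketch only, NOT the paper's actual SeTransformer.
# Positional encodings are omitted for brevity.
import torch
import torch.nn as nn

class DualInputCommentModel(nn.Module):
    """Encodes code tokens and a linearized AST with separate self-attention
    encoders, fuses the two memories, and decodes a natural-language comment."""

    def __init__(self, token_vocab, ast_vocab, comment_vocab,
                 d_model=256, nhead=8, num_layers=2):
        super().__init__()
        self.token_embed = nn.Embedding(token_vocab, d_model)
        self.ast_embed = nn.Embedding(ast_vocab, d_model)
        self.comment_embed = nn.Embedding(comment_vocab, d_model)
        self.token_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.ast_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.out = nn.Linear(d_model, comment_vocab)

    def forward(self, code_tokens, ast_nodes, comment_prefix):
        # Self-attention over the lexical (token) and structural (AST) views.
        tok_mem = self.token_encoder(self.token_embed(code_tokens))
        ast_mem = self.ast_encoder(self.ast_embed(ast_nodes))
        # Fuse the two views by concatenating along the sequence axis
        # (an assumption here); the decoder cross-attends to both jointly.
        memory = torch.cat([tok_mem, ast_mem], dim=1)
        tgt = self.comment_embed(comment_prefix)
        # Causal mask so each comment position only attends to its past.
        L = comment_prefix.size(1)
        causal = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        return self.out(self.decoder(tgt, memory, tgt_mask=causal))

# Toy usage with random ids: batch of 2 examples.
model = DualInputCommentModel(token_vocab=1000, ast_vocab=200, comment_vocab=500)
logits = model(torch.randint(0, 1000, (2, 16)),   # code token ids
               torch.randint(0, 200, (2, 24)),    # linearized AST node ids
               torch.randint(0, 500, (2, 8)))     # comment prefix ids
print(logits.shape)  # torch.Size([2, 8, 500])

The key point the sketch illustrates is that self-attention lets every code position attend to every other position in a single step, so summary-relevant code blocks that lie far apart no longer fade as they would through a recurrent network's sequential state.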
