Article

Source Code Authorship Attribution Using Hybrid Approach of Program Dependence Graph and Deep Learning Model

Journal

IEEE Access
Volume 7, Pages 141987-141999

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/ACCESS.2019.2943639

Keywords

Feature extraction; Encoding; Programming; Malware; Deep learning; Forensics; Code authorship attribution; Program dependence graph; Software forensics and security; Software plagiarism

Funding

  1. National Key Research and Development Program [2019QY1400, 2018YFB0804503]
  2. National Natural Science Foundation of China [U1836103]
  3. Technology Research and Development Program of Sichuan, China [18ZDYF3867, 2017GZDZX0002]

Abstract

Source Code Authorship Attribution (SCAA) is the task of identifying the real author of a piece of source code in a corpus. Although it poses a privacy threat to open-source programmers, it can be significantly helpful for developing forensics-based applications such as ghostwriting detection, copyright dispute settlement, and other code analysis tasks. Efficient feature extraction is the key challenge in classifying the real authors of specific source codes. In this paper, the Program Dependence Graph with Deep Learning (PDGDL) methodology is proposed to identify the authors of different programming source codes. First, the PDG is constructed to extract control and data dependencies from the source code. Second, a preprocessing technique is applied to convert PDG features into small instances with frequency details. Third, the Term Frequency-Inverse Document Frequency (TF-IDF) technique is used to weight the importance of each PDG feature in the source code. Fourth, the Synthetic Minority Over-sampling Technique (SMOTE) is applied to tackle the class imbalance problem. Finally, a deep learning algorithm is applied to extract each programmer's coding-style features and to attribute the real authors. The deep learning model is further fine-tuned with dropout layers, the learning rate, loss and activation functions, and dense layers for better accuracy. The proposed work is evaluated on data from 1,000 programmers collected from Google Code Jam (GCJ). The dataset covers three different programming languages, i.e., C++, Java, and C#. The results outperform existing techniques in terms of classification accuracy, precision, recall, and F-measure.
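
As a rough illustration of the pipeline summarized above, the sketch below strings together the TF-IDF weighting, SMOTE class balancing, and dense-network classification steps using common Python libraries (scikit-learn, imbalanced-learn, Keras). It is not the authors' implementation: the PDG extraction stage is assumed to have already produced one string of dependence tokens per program (pdg_docs), the labels are assumed to be integer-encoded author IDs, and all names, layer sizes, and hyperparameters are illustrative.

  # Minimal sketch of the TF-IDF -> SMOTE -> dense-network stage of the pipeline.
  # Assumes PDG extraction has already yielded one token string per program.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.model_selection import train_test_split
  from imblearn.over_sampling import SMOTE
  from tensorflow.keras.models import Sequential
  from tensorflow.keras.layers import Dense, Dropout

  def build_author_classifier(pdg_docs, author_labels, num_authors):
      # Weight each PDG feature by its importance across the corpus (TF-IDF).
      vectorizer = TfidfVectorizer()
      X = vectorizer.fit_transform(pdg_docs).toarray()

      # Balance under-represented authors with synthetic samples (SMOTE).
      X_bal, y_bal = SMOTE(random_state=42).fit_resample(X, author_labels)

      X_train, X_test, y_train, y_test = train_test_split(
          X_bal, y_bal, test_size=0.2, random_state=42)

      # Dense network with dropout, mirroring the fine-tuning knobs named in
      # the abstract (dropout layers, loss/activation functions, dense layers).
      model = Sequential([
          Dense(512, activation="relu", input_shape=(X.shape[1],)),
          Dropout(0.5),
          Dense(256, activation="relu"),
          Dropout(0.5),
          Dense(num_authors, activation="softmax"),
      ])
      # Integer author labels pair with sparse categorical cross-entropy.
      model.compile(optimizer="adam",
                    loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])
      model.fit(X_train, y_train, epochs=20, batch_size=64,
                validation_data=(X_test, y_test))
      return vectorizer, model

The split into TF-IDF vectorization, oversampling, and classification keeps each stage replaceable, so a different graph encoding or classifier could be swapped in without changing the rest of the sketch.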
