Article

Open Science in Software Engineering: A Study on Deep Learning-Based Vulnerability Detection

Journal

IEEE Transactions on Software Engineering
Volume 49, Issue 4, Pages 1983-2005

Publisher

IEEE Computer Society
DOI: 10.1109/TSE.2022.3207149

Keywords

Software; Testing; Codes; Training; Security; Deep learning; Open science; availability; executability; reproducibility; replicability; case study; vulnerability detection

Abstract

Open science is a practice that makes scientific research publicly accessible to anyone, and is therefore highly beneficial. Given these benefits, the software engineering (SE) community has been diligently advocating open science policies during peer review and publication processes. However, to date, there have been few studies that examine the status and issues of open science in SE from a systematic perspective. In this paper, we set out to start filling this gap. Given the great breadth of SE in general, we constrained our scope to a particular topic area as an example case. Recently, an increasing number of deep learning (DL) approaches have been explored in SE, including DL-based software vulnerability detection, a popular, fast-growing topic that addresses an important problem in software security. We exhaustively searched the literature in this area and identified 55 relevant works that propose a DL-based vulnerability detection approach. We then comprehensively investigated the four integral aspects of open science: availability, executability, reproducibility, and replicability. Among other findings, our study revealed that only a small percentage (25.5%) of the studied approaches provided publicly available tools. Some of these available tools lacked sufficient documentation and complete implementations, making them non-executable or non-reproducible. The use of balanced or artificially generated datasets led to significantly overestimated performance of the respective techniques, making most of them non-replicable. Based on our empirical results, we make actionable suggestions for improving the state of open science in each of the four aspects. We note that our results and recommendations on most of these aspects (availability, executability, and reproducibility) are not tied to the nature of the chosen topic (DL-based vulnerability detection) and hence are likely applicable to other SE topic areas. We also believe our results and recommendations on replicability to be applicable to other DL-based topics in SE, as they are not tied to the particular application of DL in detecting software vulnerabilities.
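To make the replicability finding concrete, the short Python sketch below (not from the paper) illustrates why evaluating on a balanced dataset can overstate a vulnerability detector's precision and F1. The per-class rates (TPR, TNR) and the 1:50 "realistic" class ratio are illustrative assumptions, not figures from the study: holding the detector's behavior fixed, the same model that looks strong on a 1:1 test set degrades sharply when vulnerable samples are as rare as they tend to be in real code bases.

# Illustrative sketch (hypothetical numbers): why a balanced test set can
# overstate a vulnerability detector's precision and F1.

def precision_recall_f1(tpr: float, tnr: float, n_vuln: int, n_clean: int):
    """Derive precision/recall/F1 from fixed per-class accuracies and a
    given class mix, using expected counts."""
    tp = tpr * n_vuln                 # vulnerable samples correctly flagged
    fn = (1 - tpr) * n_vuln           # vulnerable samples missed
    fp = (1 - tnr) * n_clean          # clean samples falsely flagged
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)           # equals tpr by construction
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

TPR, TNR = 0.85, 0.90                 # hypothetical detector behavior

# Balanced evaluation set: 1 vulnerable : 1 clean (common in papers).
p, r, f = precision_recall_f1(TPR, TNR, 1_000, 1_000)
print(f"balanced  1:1   precision={p:.2f} recall={r:.2f} F1={f:.2f}")

# Realistic evaluation set: vulnerable code is rare, assumed 1:50 here.
p, r, f = precision_recall_f1(TPR, TNR, 1_000, 50_000)
print(f"realistic 1:50  precision={p:.2f} recall={r:.2f} F1={f:.2f}")

Under these assumed rates, the script reports an F1 of roughly 0.87 on the balanced mix versus roughly 0.25 on the imbalanced one, mirroring the kind of performance overstatement the study attributes to balanced or artificially generated datasets.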
