4.5 Review

A systematic literature review of actionable alert identification techniques for automated static code analysis

Journal

INFORMATION AND SOFTWARE TECHNOLOGY
Volume 53, Issue 4, Pages 363-387

Publisher

ELSEVIER
DOI: 10.1016/j.infsof.2010.12.007

Keywords

Automated static analysis; Systematic literature review; Actionable alert identification; Unactionable alert mitigation; Warning prioritization; Actionable alert prediction

Funding

  1. IBM


Context: Automated static analysis (ASA) identifies potential source code anomalies early in the software development lifecycle that could lead to field failures. Excessive alert generation and a large proportion of unimportant or incorrect alerts (unactionable alerts) may cause developers to reject the use of ASA. Techniques that identify anomalies important enough for developers to fix (actionable alerts) may increase the usefulness of ASA in practice.

Objective: The goal of this work is to synthesize available research results to inform evidence-based selection of actionable alert identification techniques (AAIT).

Method: Relevant studies about AAITs were gathered via a systematic literature review.

Results: We selected 21 peer-reviewed studies of AAITs. The techniques use alert type selection; contextual information; data fusion; graph theory; machine learning; mathematical and statistical models; or dynamic detection to classify and prioritize actionable alerts. All of the AAITs are evaluated via an example with a variety of evaluation metrics.

Conclusion: The selected studies support, with varying strength, the premise that the effective use of ASA is improved by supplementing ASA with an AAIT. Seven of the 21 selected studies reported the precision of the proposed AAITs. The two studies with the highest precision built models using the subject program's history. Precision measures how well a technique identifies true actionable alerts out of all predicted actionable alerts; it does not measure the number of actionable alerts missed by an AAIT or how well an AAIT identifies unactionable alerts. Inconsistent use of evaluation metrics, subject programs, and ASAs in the selected studies precludes meta-analysis and prevents the current results from informing evidence-based selection of an AAIT. We propose building an actionable alert identification benchmark for the comparison and evaluation of AAITs from the literature on a standard set of subject programs, utilizing a common set of evaluation metrics. (C) 2010 Elsevier B.V. All rights reserved.
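To make the precision limitation noted in the conclusion concrete, the sketch below computes precision alongside recall for a hypothetical alert classifier. The alert labels and the six-alert example are illustrative assumptions, not data from any of the selected studies.

```python
# Minimal sketch (not from the reviewed studies): precision vs. recall
# for an actionable alert identification technique (AAIT).
#
# An AAIT predicts whether each static analysis alert is "actionable"
# (worth fixing) or "unactionable". Precision measures how many of the
# predicted-actionable alerts really are actionable; recall measures how
# many of the truly actionable alerts the technique found.

def precision_recall(predicted, actual):
    """predicted/actual: parallel lists of booleans (True = actionable)."""
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical classification of six alerts (labels are made up):
predicted = [True, True, True, False, False, False]
actual    = [True, True, False, True, True, False]

p, r = precision_recall(predicted, actual)
print(f"precision = {p:.2f}, recall = {r:.2f}")
# precision = 0.67: 2 of the 3 predicted-actionable alerts are truly actionable.
# recall    = 0.50: only 2 of the 4 truly actionable alerts were identified,
# so a high-precision AAIT can still miss many actionable alerts.
```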

Authors

  1. Sarah Heckman
  2. Laurie Williams

Reviews

Primary Rating

4.5
Not enough ratings

Secondary Ratings

Novelty: -
Significance: -
Scientific rigor: -
