Article

MS-QuAAF: A generic evaluation framework for monitoring software architecture quality

Journal

INFORMATION AND SOFTWARE TECHNOLOGY
Volume 140

Publisher

ELSEVIER
DOI: 10.1016/j.infsof.2021.106713

Keywords

Software architecture; Non-Functional requirement; Quality evaluation; Architecture Defects; Quality Metrics; Architecture Erosion; Continuous Evaluation


The paper addresses the lack of generic frameworks for evaluating software architecture across different development stages. MS-QuAAF is introduced as a quantitative assessment framework that evaluates architecture using generic metrics. It offers a set of evaluation services for assessing architecture at both the design and implementation stages, including a Responsibilities Satisfaction Tree for evaluating implemented architectures.
Context: In a highly competitive software market, architecture quality is one of the key differentiators between software systems. Many quantitative and qualitative evaluation frameworks have been proposed to measure architecture quality. However, qualitative evaluation lacks statistical significance, whereas quantitative methods are designed to evaluate specific quality attributes, such as modifiability and performance. Moreover, assessment usually covers a single development stage, either design or implementation.

Objective: This paper addresses the lack of generic frameworks that can assess a broad set of attributes and ensure continuous evaluation across the main development stages. Accordingly, it presents MS-QuAAF, a quantitative assessment framework for evaluating software architecture through a set of generic metrics.

Method: The quantitative evaluation checks architecture facets mapped to quality attributes against previously specified meta-models. The process starts by analyzing rule infringements and calculating architecture defects once the design stage is complete. Second, the responsibilities assigned to promote stakeholders' quality attributes are assessed quantitatively at the end of the implementation stage. Third, the final evaluation report is generated.

Results: We make three main contributions. First, the metrics proposed within the framework are generic, meaning the framework can assess any input quality attribute. Second, the framework offers a set of evaluation services capable of assessing the architecture at the two main development stages, design and implementation. Third, we propose a quantitative assessment tree within the framework, the Responsibilities Satisfaction Tree (RST), which uses NFR responsibility nodes to evaluate implemented architectures.

Conclusion: The conducted experiment showed that the framework can evaluate quality attributes from an architecture specification using the proposed metrics. Furthermore, these metrics helped enhance architecture quality during development by notifying architects of discovered anomalies.
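The abstract does not describe how the Responsibilities Satisfaction Tree aggregates scores, so the following is only a minimal illustrative sketch: it assumes leaf nodes carry a measured satisfaction score in [0, 1] for one responsibility, and that internal nodes roll scores up by weighted mean toward an NFR root. All names, weights, and the aggregation rule are assumptions, not the paper's published method.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RSTNode:
    """Node in a hypothetical Responsibilities Satisfaction Tree.

    Leaves hold a measured satisfaction score in [0, 1] for one
    responsibility; internal nodes aggregate their children by
    weighted mean (an assumed aggregation rule)."""
    name: str
    score: float = 0.0                     # meaningful only for leaves
    weight: float = 1.0                    # relative importance under the parent
    children: List["RSTNode"] = field(default_factory=list)

    def satisfaction(self) -> float:
        if not self.children:              # leaf: return the measured score
            return self.score
        total_weight = sum(c.weight for c in self.children)
        return sum(c.weight * c.satisfaction() for c in self.children) / total_weight

# Illustrative NFR node "modifiability" built from two assumed responsibilities
modifiability = RSTNode("modifiability", children=[
    RSTNode("isolate persistence layer", score=0.9, weight=2.0),
    RSTNode("expose plugin interface", score=0.6, weight=1.0),
])
print(round(modifiability.satisfaction(), 2))  # → 0.8
```

A score below a chosen threshold at any node could then trigger the kind of architect notification the conclusion describes, though the paper's actual alerting mechanism is not specified in the abstract.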
