Measuring Software Complexity Differences with the Analytical Hierarchy Process
Software systems can become overly complex during development, leading to increased error rates and failures. This praxis improves the measurement of complexity, the first step toward resolving this problem. Most software development technologies allow for ever more interactive, error-prone, and interwoven designs that can introduce problems and increase the complexity of a software baseline. Because software typically holds latent defects, increasing the complexity of the source code creates new risks that undermine the usability, understandability, and reliability of software products. Even when best practices are followed during the software engineering process, changes to the software baseline can unnecessarily increase the complexity of the source code.

Software metrics are critical to fault prediction models, which seek to improve the quality of software products by predicting the location of bugs in the source code. Measuring software complexity assists software developers by identifying areas of the source code baseline with potential issues in reliability, readability, understandability, and implementation. However, producing a set of software measurement data is not by itself sufficient for making a reliable decision. Interpreting the results of complexity measurements can be problematic, because it is difficult to develop an accurate understanding of the numerical values that characterize specific software attributes such as coupling, reliability, and overriding.

This praxis develops a technique for identifying the parts of a software baseline that experience the most complex changes between successive versions. This objective is accomplished using a software complexity measurement technique that combines multiple aspects of the source code to quantify the variations in complexity of a software baseline between succeeding revisions.
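The abstract does not spell out the aggregation formula, but the idea of combining multiple source-code aspects into a single per-file complexity change can be sketched as a weighted sum of metric deltas between two versions. The metric weights and sample values below are hypothetical placeholders, not figures from the praxis:

```python
# Sketch: aggregate per-file complexity change between two versions as a
# weighted sum of metric deltas. Weights and sample values are illustrative.
METRICS = ["NOM", "NOC", "CBO", "LCOM", "WMC", "NMO"]

def complexity_delta(old, new, weights):
    """Weighted aggregate change in complexity for one source file.

    old, new: dicts mapping metric name -> value in each version.
    weights:  dict mapping metric name -> relative importance (sums to 1).
    """
    return sum(weights[m] * (new.get(m, 0) - old.get(m, 0)) for m in METRICS)

# Hypothetical weights (e.g., as produced by a prioritization step) and
# metric values for one file in two successive versions.
weights = {"NOM": 0.14, "NOC": 0.05, "CBO": 0.08,
           "LCOM": 0.23, "WMC": 0.42, "NMO": 0.08}
v1 = {"NOM": 12, "NOC": 0, "CBO": 5, "LCOM": 3, "WMC": 20, "NMO": 1}
v2 = {"NOM": 15, "NOC": 1, "CBO": 8, "LCOM": 6, "WMC": 31, "NMO": 1}

delta = complexity_delta(v1, v2, weights)
```

Ranking all files in a baseline by this delta would surface the most complexly changed files, which is the stated objective of the technique.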
Building on prior successful efforts, this praxis develops a model for locating the portions of a software baseline that have undergone the most complex changes. The analytical hierarchy process (AHP) is used to combine six complexity metrics: number of methods (NOM), number of children (NOC), coupling between objects (CBO), lack of cohesion of methods (LCOM), weighted method complexity (WMC), and number of methods overridden (NMO). These combined metrics determine the most complexly changed files in four open-source projects from the Apache Commons family: IO, Net, Compress, and Discovery.

This praxis found that AHP can be reliably applied to the set of six metrics for the projects evaluated, producing consistency ratings within the ranges recommended by the AHP method. Aggregate complexity values were successfully calculated for each source code file in the projects, and the most complexly modified files were identified. Several threats to the validity of this empirical study's findings suggest that its conclusions should be treated as indications toward a pragmatic remedy to the problem of measuring software complexity between versions, rather than as a final, conclusive technique. To improve the reliability and generalizability of this study, similar empirical studies should be conducted using relatively large commercial systems developed in other programming languages, such as C++, and selected from different domains.
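The core AHP steps referenced here, deriving metric weights from a pairwise comparison matrix and checking the consistency ratio against the recommended threshold (CR < 0.10), can be sketched as follows. The pairwise judgments in the matrix are illustrative inventions, not the praxis's actual values; the weights use the common geometric-mean approximation of the principal eigenvector:

```python
import math

# Hypothetical pairwise comparison matrix on Saaty's 1-9 scale for the six
# metrics, in the order (NOM, NOC, CBO, LCOM, WMC, NMO). A[i][j] expresses
# how much more important metric i is than metric j; the matrix is reciprocal.
A = [
    [1,   3,   2,   1/2, 1/4, 2],
    [1/3, 1,   1/2, 1/4, 1/6, 1/2],
    [1/2, 2,   1,   1/3, 1/5, 1],
    [2,   4,   3,   1,   1/2, 3],
    [4,   6,   5,   2,   1,   5],
    [1/2, 2,   1,   1/3, 1/5, 1],
]

# Saaty's random consistency index for matrices of size n.
RANDOM_INDEX = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(matrix):
    """Priority weights via the geometric mean of each row (an
    approximation of the principal eigenvector), normalized to sum to 1."""
    gms = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

def consistency_ratio(matrix, weights):
    """CR = CI / RI, where CI = (lambda_max - n) / (n - 1)."""
    n = len(matrix)
    # Estimate lambda_max from the weighted row sums A·w.
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lam_max = sum(aw[i] / weights[i] for i in range(n)) / n
    ci = (lam_max - n) / (n - 1)
    return ci / RANDOM_INDEX[n]

w = ahp_weights(A)
cr = consistency_ratio(A, w)  # judgments are "consistent enough" if cr < 0.10
```

With consistent judgments, the resulting weight vector `w` can then be applied to each file's metric values to produce the aggregate complexity scores the abstract describes.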