Evaluation of a Suite of Metrics for Component Based Software Engineering (CBSE)

Introduction

Component-Based Software Engineering (CBSE) is a methodology that emphasizes the design and construction of computer-based systems using reusable software components. This principle embodies an element of "buy, don't build" that shifts the emphasis from programming software to composing software systems (Pressman, 2001). It is also an approach to software development that relies on reuse, and it emerged from the failure of object-oriented development to support effective reuse. The behavior and the stability of an application cannot be assessed unless it is tested comprehensively. The quality of an application is high when it yields the expected results, is stable and adaptable, and leads to reduced maintenance costs. If a change is introduced in a component that has been integrated into an application, the developer has to determine the impact of the change on the whole application in order to assess its stability. Consequently, there is certainly a need to measure quality and to assess a component's impact on the overall system.

Metrics are needed to measure several types of quality issues. Metrics are also needed to study the characteristics of a given software system under different scenarios (Ali & Ghafoor, 2001; Bertoa, Troya, & Vallecillo, 2003; Lorenz & Kidd, 1992). Most of the existing metrics are applicable to small programs or components (Kan, 2002), whereas the objective of CBSE metrics is to evaluate the behavior and reliability of a component when it is integrated into a large software system. Consequently, metrics that lack appropriate mathematical properties fail as quality metrics (Weyuker, 1998), while metrics with a sound theoretical basis become applicable to real-life organizations (Pfleeger & Fenton, 1998). Some of the metrics rely on parameters that could never be measured, or are too difficult to measure in practice. Moreover, since a component's internal structure may not be available, black-box testing is required, and a number of existing metrics may not be directly applicable.

A software component is a coherent package of software implementation that offers well-defined and published interfaces, is reusable, and can be independently developed and delivered; such components are put together to form an application. However, there are no good metrics available to validate their effectiveness when components are integrated together to form a complete system. Owing to the inherent differences between the development of component-based and non-component-based systems, traditional software metrics prove inappropriate for component-based systems. Component metrics alone are not sufficient for an integrated environment, because the stability and adaptability of each component must be measured when it is integrated with other components.

Narasimhan and Hendradjaya (2007) noted the lack of metrics that aid in reducing maintenance costs and defined metrics whose values are collected during the execution phase. Such metrics are useful for assessing the maintenance cost of individual components and of the application in which a component is integrated. This paper supports and critiques the ideas of Narasimhan and Hendradjaya in providing metrics for the integration of software components. A component, when executed in isolation, may yield the expected results, but its behavior and functionality, when integrated with other components to form a complete application, may yield unexpected results. Therefore, metrics are needed to assess the functionality of each component when integrated with other components, as well as the functionality of the application as a whole. The paper compares various metrics, surveys views on the traditional metrics, and observes that the metrics proposed by Narasimhan and Hendradjaya are useful in assessing the quality of components in an integrated application. Benchmark software programs have been used as inputs to instrumentation programs, and metric values have been collected. …
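The kind of integration metric discussed above can be illustrated with a minimal sketch. The metric shown here, component interaction density (the ratio of a component's actual interactions to the interactions available to it), follows the general form of Narasimhan and Hendradjaya's interaction-density measures, but the function name and the example values are assumptions introduced purely for illustration, not data from the study.

```python
# Minimal sketch of a component interaction density (CID) style metric:
# CID = (number of actual interactions) / (number of available interactions).
# All names and counts here are hypothetical illustrations.

def component_interaction_density(actual: int, available: int) -> float:
    """Return the ratio of used to available interactions for a component.

    A value near 1.0 suggests the component is densely coupled to the
    rest of the integrated application; a value near 0.0 suggests it is
    largely independent.
    """
    if available == 0:
        return 0.0  # a component with no available interactions
    return actual / available

# Example: a component that uses 3 of the 12 interfaces available to it.
print(component_interaction_density(3, 12))  # 0.25
```

A per-component value like this could then be aggregated over all components of an application to characterize the integration density of the system as a whole.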