\chapter{Computational Metrics}
\label{Computational Metrics}

This chapter presents two kinds of computational metrics: software quality metrics
and metrics related to computer performance.

\section{Software Quality Metrics}

Managing software production requires adequate metrics that shed light on the
development progress as well as on the quality of the produced assets. Useful
feedback can then help in adopting better design strategies and techniques for
future projects. Measurement is often meant to cover such matters as cost
estimation, test management, and administrative decision-making.
\cite{benlarbi:1998}

Many sets of OO metrics have been proposed in the last few years. Appendix
\ref{List of Object-Oriented  Metrics Currently Investigated in the Literature}
provides a complete list, compiled by \citet{benlarbi:1998}, of OO metrics that
are currently investigated or in use. Chapter \ref{Model} explains only those
metrics that have been chosen for the measurements in this work.

\subsection{Traditional Metrics and OO Software Systems}

\citet{rosenberg:1997} has found that there is considerable disagreement in the
field about software quality metrics for object-oriented systems. Some
researchers and practitioners contend traditional metrics are inappropriate for
object-oriented systems. There are valid reasons for applying traditional
metrics, however, if it can be done. The traditional metrics have been widely
used, they are well understood by researchers and practitioners, and their
relationships to software quality attributes have been validated.

Table \ref{StructuredObjectDifferences} shows the differences between OO
software design and structured software design concepts. For example, coupling is
an internal product feature that has been extensively studied since the advent of
structured design. However, due to the specific encapsulation and visibility
rules of the OO paradigm, coupling in OO systems differs from module coupling,
although the notion of interconnection between components holds. Since the
elementary level of encapsulation is the class, most of the basic OO metrics that
have been found useful and are being investigated are defined at the class level.
Another major difference is that, in traditional structured software design,
methods are separated from data. Therefore, measures such as Halstead's volume
and size cannot be directly applied to evaluate the volume and size of OO
software systems, because data are fully integrated with methods. \cite{benlarbi:1998}

\begin{table} [hbtp]
\begin{center}
\caption{Differences between Object-Orientation and Structured Design}
Source: \cite{benlarbi:1998}
\label{StructuredObjectDifferences}
\begin{scriptsize}
\begin{tabular}{|p{1.2in}|p{2in}|p{2in}|}
\hline
& \bf OO Features	& \bf Structured Design Features
\tabularnewline \hline
\textbf{Entities} &Classes, Agents or Objects (instances of class); Messages
(requests for action); Methods (set of operations) &Data types, and variables
(instances of data types); Functions and procedures calls (requests for
action); functions and procedures (set of operations)
\tabularnewline \hline
\textbf{Design rules} & Abstraction (class, hierarchy, clusters); Encapsulation
(Interface); Modularity (Problem Domain Classification) & Abstraction (Data
types, Procedures and Modules); Encapsulation (very little); Modularity (Functional grouping)
\tabularnewline \hline
\textbf{Design mechanisms} & Inheritance, association, aggregation, polymorphism
and message passing & Control flow, data flow, data structure
\tabularnewline \hline
\textbf{Coding}	&Method binding, overriding, reuse, computation as
simulation	&Function call, data management, computation as process-state
\tabularnewline \hline
\end{tabular}
\end{scriptsize}
\end{center}
\end{table}

In an object-oriented system, traditional metrics are generally applied to the
methods that comprise the operations of a class. A method is a component of an
object that operates on data in response to a message and is defined as part of
the declaration of a class. Methods reflect how a problem is broken into segments
and the capabilities other classes expect of a given class.
\cite{rosenberg:1997}


\subsubsection{Cyclomatic Complexity (CC)}
Cyclomatic complexity (McCabe) is used to evaluate the complexity of an algorithm in
a method. A method with a low cyclomatic complexity is generally better, although this
may mean that decisions are deferred through message passing, not that the method is
not complex. Cyclomatic complexity cannot be used to measure the complexity of a
class because of inheritance, but the cyclomatic complexity of individual methods can be
combined with other measures to evaluate the complexity of the class. In general, the
cyclomatic complexity of a method should be below ten, indicating that decisions are
deferred through message passing. This metric is used to evaluate the quality
attribute Complexity.
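As an illustration, McCabe's count can be approximated from a method's syntax tree by starting at 1 and adding 1 for each decision point. The sketch below does this for Python source; the \texttt{classify} method is a hypothetical example, and only a few common branch constructs are counted:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe count: 1 plus the number of decision points
    (if/for/while/except branches and extra and/or operands)."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            decisions += len(node.values) - 1  # each extra and/or operand
    return 1 + decisions

# hypothetical method under measurement
method = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "prime or unit"
"""
print(cyclomatic_complexity(method))  # 5: one plus four decision points
```

The value 5 is well below the threshold of ten mentioned above, so the method would pass this check.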

\subsubsection{Size}
Size of a method is used to evaluate the ease of understandability of the code
by developers and maintainers. Size can be measured in a variety of ways. These include
counting all physical lines of code, the number of statements, and the number of blank
lines. Thresholds for evaluating the size measures vary depending on the coding language
used and the complexity of the method. However, since size affects ease of
understanding, routines of large size will always pose a higher risk in the attributes of
Understandability, Reusability, and Maintainability.
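A minimal sketch of such line counting, assuming Python source and approximating statements as non-blank, non-comment lines (the sample method is hypothetical):

```python
def size_metrics(source: str) -> dict:
    """Count physical lines, blank lines, and statements for one method;
    statements are approximated as non-blank, non-comment lines."""
    lines = source.splitlines()
    blank = sum(1 for line in lines if not line.strip())
    comments = sum(1 for line in lines if line.strip().startswith("#"))
    return {
        "physical_lines": len(lines),
        "blank_lines": blank,
        "statements": len(lines) - blank - comments,
    }

# hypothetical method source
sample = "def double(x):\n\n    # double the input\n    return 2 * x"
print(size_metrics(sample))
```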

\subsubsection{Comment Percentage}
The line counts done to compute the Size metric can be expanded to include a count of
the number of comments, both on-line (with code) and stand-alone. The comment
percentage is calculated as the total number of comments divided by the total lines of
code less the number of blank lines. \citet{rosenberg:1997} has found that a comment
percentage of about 30\% is most effective. Since comments assist developers and
maintainers, this metric is used to evaluate the attributes of Understandability, Reusability, and
Maintainability.
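The calculation above can be sketched as follows. The sample method is hypothetical, and treating any line containing \texttt{\#} as commented is a simplification that would miscount \texttt{\#} inside string literals:

```python
def comment_percentage(source: str) -> float:
    """Total comments divided by total lines of code less blank lines.
    A line counts as commented if it contains '#' -- a simplification
    that ignores '#' appearing inside string literals."""
    lines = source.splitlines()
    blank = sum(1 for line in lines if not line.strip())
    comments = sum(1 for line in lines if "#" in line)
    return comments / (len(lines) - blank)

# hypothetical method: one stand-alone and one in-line comment
sample = (
    "def area(r):\n"
    "    # circle area\n"
    "    return 3.14159 * r * r  # pi * r^2\n"
)
print(comment_percentage(sample))  # 2 comments over 3 non-blank lines
```

Here the result is about 67\%, well above the roughly 30\% found most effective, suggesting the comments could be consolidated.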

\subsection{Object-Oriented Specific Metrics}

Many different metrics have been proposed for object-oriented systems. The
object-oriented metrics that were chosen by \citet{rosenberg:1997} measure principal structures that, if
improperly designed, negatively affect the design and code quality attributes.
The selected object-oriented metrics are primarily applied to the concepts of classes, coupling,
and inheritance. For some of the object-oriented metrics discussed here, multiple definitions are
given, since researchers and practitioners have not reached a common definition or counting
methodology. In some cases, the counting method for a metric is determined by the software
analysis package being used to collect the metrics.

A class is a template from which objects can be created. This set of objects shares a
common structure and a common behavior manifested by its set of methods. The three
class metrics described here measure the complexity of a class using the class's
methods, messages, and cohesion as criteria.

\subsubsection{Method}

A method is an operation upon an object and is defined in the class declaration.

The \textbf{Weighted Methods per Class (WMC)} is a count of the methods
implemented within a class or the sum of the complexities of the methods (method
complexity is measured by cyclomatic complexity). The second measurement is
difficult to implement since not all methods are accessible within the class
hierarchy due to inheritance. The number of methods and the complexity of the
methods involved is a predictor of how much time and effort is required to
develop and maintain the class. The larger the number of methods in a class, the
greater the potential impact on children since children inherit all of the
methods defined in a class. Classes with large numbers of methods are likely to
be more application specific, limiting the possibility of reuse. This metric
measures Understandability, Maintainability, and Reusability.
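Under the first counting method (the sum of per-method cyclomatic complexities), WMC can be sketched for Python classes as below. The \texttt{Stack} class is a hypothetical example, and only \texttt{if}/\texttt{for}/\texttt{while} nodes are counted as branches:

```python
import ast

def weighted_methods_per_class(class_source: str) -> int:
    """WMC as the sum of per-method complexities; each method starts
    at 1 and gains 1 per if/for/while branch node."""
    cls = ast.parse(class_source).body[0]
    wmc = 0
    for item in cls.body:
        if isinstance(item, ast.FunctionDef):
            branches = sum(isinstance(n, (ast.If, ast.For, ast.While))
                           for n in ast.walk(item))
            wmc += 1 + branches
    return wmc

# hypothetical class: push contributes 1, pop contributes 1 + 1
stack = """
class Stack:
    def push(self, x):
        self.items.append(x)

    def pop(self):
        if not self.items:
            raise IndexError("empty stack")
        return self.items.pop()
"""
print(weighted_methods_per_class(stack))  # 3
```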

\subsubsection{Message}
A message is a request that an object makes of another object to perform an operation. The
operation executed as a result of receiving a message is called a method. The next metric looks at
methods and messages within a class.

The \textbf{Response for a Class (RFC)} is the cardinality of the set of all
methods that can be invoked in response to a message to an object of the class or by some method in the class. This includes all
methods accessible within the class hierarchy. This metric looks at the combination of
the complexity of a class through the number of methods and the amount of
communication with other classes. The larger the number of methods that can be
invoked from a class through messages, the greater the complexity of this class. If a
large number of methods can be invoked in response to a message, the testing and
debugging of the class becomes complicated since it requires a greater level of
understanding on the part of the tester. A worst case value for possible responses will
assist in the appropriate allocation of testing time. This metric evaluates
Understandability, Maintainability, and Testability.
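A rough sketch of RFC for a single Python class, approximating the response set as the class's own methods plus the distinct method names invoked through attribute calls. The \texttt{Logger} class is a hypothetical example, and a full implementation would also resolve calls across the class hierarchy:

```python
import ast

def response_for_class(class_source: str) -> int:
    """RFC approximated as |M union R|: the class's own methods plus
    the distinct method names they invoke via attribute calls."""
    cls = ast.parse(class_source).body[0]
    own = {n.name for n in cls.body if isinstance(n, ast.FunctionDef)}
    called = set()
    for node in ast.walk(cls):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            called.add(node.func.attr)
    return len(own | called)

# hypothetical class: own methods {log, format}; calls {write, format, upper}
logger = """
class Logger:
    def log(self, msg):
        self.write(self.format(msg))

    def format(self, msg):
        return msg.upper()
"""
print(response_for_class(logger))  # |{log, format, write, upper}| = 4
```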

\subsubsection{Cohesion}
Cohesion is the degree to which methods within a class are related to one another and work
together to provide well-bounded behavior. Effective object-oriented designs maximize cohesion
since it promotes encapsulation. The third class metric investigates cohesion.

\textbf{Lack of Cohesion of Methods (LCOM)} measures the degree of similarity
of methods by data input variables or attributes (structural properties of
classes). Any measure of separateness of methods helps identify flaws in the
design of classes. There are at least two different ways of measuring cohesion:

\begin{itemize}
  \item Calculate for each data field in a class what percentage of the methods use that data
  field. Average the percentages, then subtract the result from 100\%. Lower
  percentages mean greater cohesion of data and methods in the class.
  \item Methods are more similar if they operate on the same
  attributes. Count the number of disjoint sets produced from the intersection of the sets of attributes used by the
  methods.
\end{itemize}

High cohesion indicates good class subdivision. Lack of cohesion or low cohesion increases
complexity, thereby increasing the likelihood of errors during the development process. Classes
with low cohesion could probably be subdivided into two or more subclasses with increased
cohesion. This metric evaluates Efficiency and Reusability.
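The first variant above can be sketched directly from a method-to-attribute usage map. The map below describes a hypothetical accessor class, not any measured system:

```python
def lcom_percentage(usage):
    """First LCOM variant: for each attribute, the percentage of
    methods that use it; LCOM = 100% minus the average percentage."""
    attributes = set().union(*usage.values())
    per_attr = [
        100.0 * sum(attr in used for used in usage.values()) / len(usage)
        for attr in attributes
    ]
    return 100.0 - sum(per_attr) / len(per_attr)

# hypothetical class: two methods share 'x', one touches only 'y'
usage = {
    "get_x": {"x"},
    "set_x": {"x"},
    "get_y": {"y"},
}
print(lcom_percentage(usage))  # about 50: x used by 2/3, y by 1/3 of methods
```

A result of roughly 50\% hints that the class mixes two weakly related responsibilities and might be split.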

\subsubsection{Coupling}
Coupling is a measure of the strength of association established by a connection
from one entity to another. Classes (objects) are coupled three ways:

\begin{itemize}
  \item When a message is passed between objects, the objects are said to be
  coupled.
  \item Classes are coupled when methods declared in one class use methods or
  attributes of the other classes.
  \item Inheritance introduces significant tight coupling between superclasses and their
  subclasses.
\end{itemize}

Since good object-oriented design requires a balance between coupling and
inheritance, coupling measures focus on non-inheritance coupling. The next object-oriented metric measures coupling
strength.

\textbf{Coupling Between Object Classes (CBO)} is a count of the number of
other classes to which a class is coupled. It is measured by counting the number of distinct non-inheritance related class hierarchies on
which a class depends. Excessive coupling is detrimental to modular design and prevents
reuse. The more independent a class is, the easier it is to reuse in another application. The
larger the number of couples, the higher the sensitivity to changes in other parts of the
design and therefore maintenance is more difficult. Strong coupling complicates a
system since a module is harder to understand, change or correct by itself if it is
interrelated with other modules. Complexity can be reduced by designing systems with
the weakest possible coupling between modules. This improves modularity and
promotes encapsulation. CBO evaluates Efficiency and Reusability.
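Given a per-class map of the other classes each class uses (inheritance already excluded), CBO reduces to counting distinct entries. The dependency map below is hypothetical:

```python
def coupling_between_objects(dependencies):
    """CBO per class: the number of distinct other classes each class
    depends on, with inheritance-related classes already excluded."""
    return {cls: len(deps - {cls}) for cls, deps in dependencies.items()}

# hypothetical non-inheritance dependency map
deps = {
    "Order":    {"Customer", "Invoice"},
    "Invoice":  {"Order"},
    "Customer": set(),
}
print(coupling_between_objects(deps))  # Order: 2, Invoice: 1, Customer: 0
```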

\subsubsection{Inheritance}
Another design abstraction in object-oriented systems is the use of inheritance. Inheritance is a
type of relationship among classes that enables programmers to reuse previously defined elements
including variables and operators. Inheritance decreases complexity by reducing the number of
operations and operators, but this abstraction of objects can make maintenance and design
difficult. The two metrics used to measure the amount of inheritance are the depth and breadth of
the inheritance hierarchy.

The \textbf{Depth of Inheritance Tree (DIT)} within the inheritance hierarchy is
the maximum length from the class node to the root of the tree and is measured by the number of ancestor classes. The
deeper a class is within the hierarchy, the greater the number of methods it is likely to
inherit, making it more complex to predict its behavior. Deeper trees constitute greater
design complexity, since more methods and classes are involved, but the greater the
potential for reuse of inherited methods. A support metric for DIT is the number of
methods inherited (NMI). This metric primarily evaluates Efficiency and Reuse but also
relates to Understandability and Testability.
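DIT can be sketched by walking a class's ancestor chain. The three-level widget hierarchy below is a hypothetical example, and Python's implicit \texttt{object} root is not counted:

```python
# hypothetical three-level widget hierarchy
class Component:
    pass

class Control(Component):
    pass

class Button(Control):
    pass

def depth_of_inheritance(cls) -> int:
    """DIT: number of ancestor classes between cls and the hierarchy
    root (the implicit 'object' base is not counted)."""
    depth = 0
    while cls.__bases__ and cls.__bases__[0] is not object:
        cls = cls.__bases__[0]
        depth += 1
    return depth

print(depth_of_inheritance(Button))     # 2: Control and Component
print(depth_of_inheritance(Component))  # 0: a root class
```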

\textbf{Number of Children (NOC)} is the number of immediate subclasses subordinate to a class in
the hierarchy. It is an indicator of the potential influence a class can have on the design
and on the system. The greater the number of children, the greater the likelihood of
improper abstraction of the parent, and it may indicate a misuse of subclassing. On the
other hand, the greater the number of children, the greater the reusability, since
inheritance is a form of reuse. If a class has a large number of children, it may require
more testing of the methods of that class, thus increasing the testing time. NOC, therefore, primarily evaluates
Efficiency, Reusability, and Testability.
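NOC counts only immediate subclasses, which the sketch below illustrates with a hypothetical shape hierarchy (\texttt{RoundedCircle} is a grandchild of \texttt{Shape} and therefore not counted for it):

```python
# hypothetical shape hierarchy
class Shape:
    pass

class Circle(Shape):
    pass

class Square(Shape):
    pass

class RoundedCircle(Circle):  # grandchild of Shape, so not counted for it
    pass

def number_of_children(cls) -> int:
    """NOC: immediate subclasses only."""
    return len(cls.__subclasses__())

print(number_of_children(Shape))   # 2: Circle and Square
print(number_of_children(Circle))  # 1: RoundedCircle
```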



\subsection{Metrics Evaluation Criteria}

\citet{benlarbi:1998} surveyed tools for applying metrics and defined criteria
for selecting the appropriate ones in the context of their work, namely the
choice of a metric-analysis tool to be incorporated into their company's
product. The criteria are the following:

\begin{enumerate}
  
  \item \textbf{Covered dimensions and targets:}
  What are the size and volume dimensions (attributes, characteristics) captured
  by the metrics?
  Do the metrics deal specifically with the mechanisms of object-oriented design
  or not?
  What is the abstraction scope or level covered by the metrics (function,
  class, module, sub-system, system)?
  
  \item \textbf{Completeness and precision of the metrics:}
  In which phase of the life cycle can the metrics be collected (requirements
  analysis, high-level design, low-level design, code)?
  Is there any theoretical or empirical evidence allowing us to establish
  that the design and code characteristics the OO metrics are built on in fact
  contribute to the size and volume dimensions they target?
  Does an OO metric addressing a given size or volume dimension
  cover the whole targeted domain or not?
  Does the measurement method (counts, scale, range of values) reflect how much
  the offered OO metrics contribute to the targeted size or volume dimension?
  
  
  \item \textbf{Flexibility and extensibility:} How open is the tool to
  manipulating the available OO metrics or to adding new basic OO metrics? Does
  the tool allow aggregating the available basic OO metrics in order to build
  dependability evaluation models? How flexible is the tool in constructing
  OO dependability models?
  
  \item \textbf{Other questions, adapted to the context of this
  work:} How difficult is it to set up and run the tool? How long does it take to
  set up and run the tool to perform the OO metrics extraction on a given
  project? How long does it take to understand and train a person on performing
  the OO metrics extraction? What visualization features does the tool have to
  allow displaying the metrics on design graphs or structural graphs? Given that
  many metrics will require the visualization of large and complex graphs, can
  the tool render these in a reasonable time?
  
\end{enumerate}

The technical report by \citet{xenos00objectoriented} surveys the literature
for traditional and object-oriented specific metrics. In the report, the
authors evaluate the related metrics using meta-metrics, and the result helps
quality controllers select the appropriate metric for a project.

Beyond traditional metrics such as LOC (Lines of Code) and FUP (Function
Points), \citet{xenos00objectoriented} analyze object-oriented metrics
subdivided into:

\begin{itemize}
  \item Class Metrics
  \item Method Metrics
  \item Coupling Metrics
  \item Inheritance Metrics
\end{itemize}

From each of these classes of metrics, at least one metric will be chosen,
according to an evaluation of their quality using meta-metrics. Some of the
meta-metrics defined by \citet{xenos00objectoriented} are listed below:
\begin{itemize}
  \item Measurement Scale
  \item Measurements Independence
  \item Automation
  \item Value of Implementation
  \item Monotonicity
  \item Simplicity
  \item Accuracy
\end{itemize}

Following an approach similar to the previous article, \citet{rosenberg:1997}
analyzes the appropriate metrics for measuring software quality in
object-oriented systems. In their article, they evaluate the metrics through
the following criteria:

\begin{itemize}
  \item Efficiency - Are the constructs efficiently designed?
  \item Complexity - Could the constructs be used more effectively to decrease the architectural
  complexity?
  \item Understandability - Does the design increase the psychological
  complexity?
  \item Reusability - Does the design quality support possible reuse?
  \item Testability/Maintainability - Does the structure support facilities for testing and changes?
\end{itemize}



\section{Metrics for Performance Analysis}

To measure the performance effects of Appman, we need some metrics.
\citet{foster:1995} presents in his book performance models for parallel
programs, specifying the following metrics, which are detailed in the
next subsections:

\begin{itemize}
  \item Execution time (Computation Time, Communication Time, Idle Time)
  \item Efficiency
  \item Speedup
\end{itemize}

These metrics will be used to measure the performance
of our case study. The bibliographical research by
\citet{fernandes:2003} demonstrates that these
metrics are consolidated and fit this kind of parallel
system well.

The next subsections present the performance metrics according to the
definitions of \citet{foster:1995}.

\subsection{Execution Time}

\subsubsection{Computation Time}

The computation time of an algorithm is the amount of time spent performing computation
rather than communicating or idling. If we have a sequential program that
performs the same computation as the parallel algorithm, we can determine the
computation time by timing that program. Otherwise, we may have to implement key kernels.

Computation time will normally depend on some measure of problem size, whether
that size is represented by a single parameter N or by a set of parameters. 
If the parallel algorithm replicates computation, then computation time
will also depend on the number of tasks or processors. In a heterogeneous
parallel computer (such as a local network of workstations), computation time can vary
according to the processor on which computation is performed.

Computation time will also depend on characteristics of processors and their
memory systems. For example, scaling problem size or number of processors can
change cache performance or the effectiveness of processor pipelining. As a
consequence, one cannot automatically assume that total computation time will
stay constant as the number of processors changes.

\subsubsection{Communication Time}

The  communication time of an algorithm is the amount of time that its tasks spend
sending and receiving messages. Two distinct types of communication can be
distinguished: interprocessor communication and intraprocessor communication. In
interprocessor  communication, two communicating tasks are located on different
processors. This will always be the case if an algorithm creates one task per
processor. In intraprocessor  communication, two communicating tasks are located
on the same processor. For simplicity, we assume that interprocessor and
intraprocessor communication costs are comparable. Perhaps surprisingly, this
assumption is not unreasonable in many multicomputers, unless intraprocessor
communication is highly optimized. This is because the cost of the
memory-to-memory copies and context switches performed in a typical
implementation of intraprocessor communication is often comparable to the cost of
an interprocessor communication. In other environments, such as
Ethernet-connected workstations, intraprocessor communication is much faster.

\subsubsection{Idle Time}

Both computation and communication times are specified explicitly in a parallel
algorithm; hence, it is generally straightforward to determine their contribution
to execution time. Idle time can be more difficult to determine, however,
since it often depends on the order in which operations are performed.

A processor may be idle due to lack of computation or lack of data. In the first
case, idle time may be avoided by using load-balancing techniques. In the second case, the processor is idle while the
computation and communication required to generate remote data are performed.

This idle time can sometimes be avoided by structuring a program so that
processors perform other computation or communication while waiting for remote
data. This technique is referred to as overlapping computation and communication,
since local computation is performed concurrently with remote communication and
computation. Such overlapping can be achieved in two ways. A simple
approach is to create multiple tasks on each processor. When one task blocks
waiting for remote data, execution may be able to switch to another task for
which data are already available. This approach has the advantage of simplicity
but is efficient only if the cost of scheduling a new task is less than the idle
time cost that is avoided. Alternatively, a single task can be structured so that
requests for remote data are interleaved explicitly with other computation.
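The first overlapping approach (multiple tasks per processor) can be sketched with threads: communication is started, local computation proceeds while it is in flight, and the remote result is consumed on arrival. The remote fetch below is simulated with a sleep, and all names are hypothetical:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_remote():
    """Stand-in for a blocking receive of remote data."""
    time.sleep(0.2)
    return [4, 5, 6]

def local_work():
    """Local computation that does not need the remote data."""
    return sum(x * x for x in range(1000))

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(fetch_remote)  # communication in flight
    local = local_work()                # overlapped local computation
    remote = future.result()            # block only if data not yet arrived
elapsed = time.perf_counter() - start

# elapsed stays near the 0.2 s communication latency rather than
# latency plus computation time, because the two proceed concurrently
print(local, remote)
```

As noted above, this pays off only when the cost of scheduling the extra task is smaller than the idle time it hides.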


\subsection{Efficiency and Speedup}

Execution time is not always the most convenient metric by which to evaluate
parallel algorithm performance. As execution time tends to vary with problem
size, execution times must be normalized when comparing algorithm performance at
different problem sizes. Efficiency (the fraction of time that processors
spend doing useful work) is a related metric that can sometimes provide a more
convenient measure of parallel algorithm quality. It characterizes the
effectiveness with which an algorithm uses the computational resources of a
parallel computer in a way that is independent of problem size. We define relative
efficiency as the ratio $T_1 / (P \, T_P)$, where $T_1$ is the execution time on one
processor and $T_P$ is the time on $P$ processors.

The related quantity, relative speedup, is the factor by which execution
time is reduced on $P$ processors:

\begin{center}
\begin{math}
E_{relative} = \frac{T_1}{P \, T_P}
\end{math}

\begin{math}
S_{relative} = P \, E_{relative} = \frac{T_1}{T_P}
\end{math}
\end{center}

The quantities defined by the equations above are called relative
efficiency and speedup because they are defined with respect to the parallel algorithm
executing on a single processor. They are useful when exploring the scalability
of an algorithm but do not constitute an absolute figure of merit. For example,
assume that we have a parallel algorithm that takes 10,000 seconds on 1 processor
and 20 seconds on 1000 processors. Another algorithm takes 1000 seconds on 1
processor and 5 seconds on 1000 processors. Clearly, the second algorithm is
superior for P in the range 1 to 1000. Yet it achieves a relative speedup of only
200, compared with 500 for the first algorithm.
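The arithmetic in this example can be checked directly. The sketch below computes relative speedup and efficiency for the two hypothetical algorithms on $P = 1000$ processors:

```python
def relative_speedup(t1: float, tp: float) -> float:
    """S_relative = T1 / TP."""
    return t1 / tp

def relative_efficiency(t1: float, tp: float, p: int) -> float:
    """E_relative = T1 / (P * TP)."""
    return t1 / (p * tp)

P = 1000
print(relative_speedup(10_000, 20))        # first algorithm: 500.0
print(relative_speedup(1_000, 5))          # second algorithm: 200.0
print(relative_efficiency(10_000, 20, P))  # first algorithm: 0.5
print(relative_efficiency(1_000, 5, P))    # second algorithm: 0.2
```

The second algorithm is faster in absolute terms at every $P$ in the range, despite its smaller relative speedup, which is exactly why relative figures are not an absolute figure of merit.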

When comparing two algorithms, it can be useful to have an algorithm-independent
metric other than execution time. Hence, we define absolute efficiency and
speedup, using as the baseline the uniprocessor time for the best-known
algorithm. In many cases, this ``best'' algorithm will be the best-known
uniprocessor (sequential) algorithm. In this text we frequently
use the terms efficiency and speedup without qualifying them as relative or
absolute. However, we will always be calculating relative efficiency and speedup.