
%\documentclass[10pt]{IEEEtran}
\documentclass[10pt, conference, compsocconf]{IEEEtran}\IEEEoverridecommandlockouts
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{amssymb}
% Add the compsocconf option for Computer Society conferences.
\newtheorem{defn}{Definition}
\newtheorem{exam}{Example}
\newtheorem{hypothesis}{Hypothesis}
\newtheorem{prop}{Proposition}

\usepackage{latex8}
\usepackage{epsfig, color}
\usepackage{verbatim, alltt, moreverb}
\usepackage{algorithmic}
\usepackage{algorithm}
\usepackage{lscape}


% *** GRAPHICS RELATED PACKAGES ***
%
\ifCLASSINFOpdf
  % \usepackage[pdftex]{graphicx}
  % declare the path(s) where your graphic files are
  % \graphicspath{{../pdf/}{../jpeg/}}
  % and their extensions so you won't have to specify these with
  % every instance of \includegraphics
  % \DeclareGraphicsExtensions{.pdf,.jpeg,.png}
\else
  % or other class option (dvipsone, dvipdf, if not using dvips). graphicx
  % will default to the driver specified in the system graphics.cfg if no
  % driver is specified.
  % \usepackage[dvips]{graphicx}
  % declare the path(s) where your graphic files are
  % \graphicspath{{../eps/}}
  % and their extensions so you won't have to specify these with
  % every instance of \includegraphics
  % \DeclareGraphicsExtensions{.eps}
\fi
% graphicx was written by David Carlisle and Sebastian Rahtz. It is
% required if you want graphics, photos, etc. graphicx.sty is already
% installed on most LaTeX systems. The latest version and documentation can
% be obtained at:
% http://www.ctan.org/tex-archive/macros/latex/required/graphics/
% Another good source of documentation is "Using Imported Graphics in
% LaTeX2e" by Keith Reckdahl which can be found as epslatex.ps or
% epslatex.pdf at: http://www.ctan.org/tex-archive/info/
%
% latex, and pdflatex in dvi mode, support graphics in encapsulated
% postscript (.eps) format. pdflatex in pdf mode supports graphics
% in .pdf, .jpeg, .png and .mps (metapost) formats. Users should ensure
% that all non-photo figures use a vector format (.eps, .pdf, .mps) and
% not a bitmapped formats (.jpeg, .png). IEEE frowns on bitmapped formats
% which can result in "jaggedy"/blurry rendering of lines and letters as
% well as large increases in file sizes.
%
% You can find documentation about the pdfTeX application at:
% http://www.tug.org/applications/pdftex





% *** MATH PACKAGES ***
%
%\usepackage[cmex10]{amsmath}
% A popular package from the American Mathematical Society that provides
% many useful and powerful commands for dealing with mathematics. If using
% it, be sure to load this package with the cmex10 option to ensure that
% only type 1 fonts will utilized at all point sizes. Without this option,
% it is possible that some math symbols, particularly those within
% footnotes, will be rendered in bitmap form which will result in a
% document that can not be IEEE Xplore compliant!
%
% Also, note that the amsmath package sets \interdisplaylinepenalty to 10000
% thus preventing page breaks from occurring within multiline equations. Use:
%\interdisplaylinepenalty=2500
% after loading amsmath to restore such page breaks as IEEEtran.cls normally
% does. amsmath.sty is already installed on most LaTeX systems. The latest
% version and documentation can be obtained at:
% http://www.ctan.org/tex-archive/macros/latex/required/amslatex/math/





% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor}


\begin{document}
%
% paper title
% can use linebreaks \\ within to get better formatting as desired
\title{How We Can Respond to a Code-level Change Proposal: Change Impact Analysis and Changeability Assessment*\thanks{*This work is supported partially by
National Natural Science Foundation of China under Grant No.
60773105 and No. 60973149, and partially by Doctoral Fund of
Ministry of Education of China under Grant No. 20100092110022.}}

% author names and affiliations
% use a multiple column layout for up to two different
% affiliations

\author{\IEEEauthorblockN{Xiaobing Sun, Bixin Li,  Wanzhi Wen,  Chuanqi Tao}
\IEEEauthorblockA{ Southeast University, School of
Computer Science and Engineering, Nanjing, China \\
\{sundomore, bx.li\}@seu.edu.cn} }
%\and \IEEEauthorblockN{Sai Zhang}
%\IEEEauthorblockA{Computer Science and Engineering Department \\
%University of Washington \\
%Washington, USA\\ szhang@cs.washington.edu} }


% make the title area
\maketitle


\begin{abstract}
Given a change proposal, what should we do before implementing it?
Since modifications made to software inevitably have unpredicted and
potential effects on the software, these ripple effects should be
estimated in advance, and a decision to accept or reject the change
proposal should be made. Our study targets the source code level,
where the change proposal is assumed to be composed of a set of
changed classes. Change impact analysis (CIA) is then performed to
estimate the ripple effects induced by these changed classes, and
the potentially impacted methods computed by our CIA technique are
ranked according to an impact factor metric, which corresponds to
the priority of these methods to be impacted. In addition, we
propose an impactness metric to assess the changeability of the
change proposal; this metric can guide maintainers in making a
decision on the change proposal. Case studies on two real-world
programs show the effectiveness of our impact analysis technique and
changeability assessment model.
\end{abstract}

\begin{IEEEkeywords}
Formal concept analysis; change impact analysis; changeability assessment; impactness
\end{IEEEkeywords}


% For peer review papers, you can put extra information on the cover
% page as needed:
% \ifCLASSOPTIONpeerreview
% \begin{center} \bfseries EDICS Category: 3-BBND \end{center}
% \fi
%
% For peerreview papers, this IEEEtran command inserts a page break and
% creates the second title. It will be ignored for other modes.
\IEEEpeerreviewmaketitle

\section{Introduction}

Software needs to be maintained and changed as it evolves. One of
the most critical issues in the maintenance process is the
predictive measurement to perform when a change proposal is
submitted~\cite{sch87,fio01,luc02}. To deal with the change
proposal, maintainers should answer the following questions before
change implementation:

(1) Which parts of the original system may be impacted by this change proposal?

(2) Should we accept or reject this change proposal?

To answer these two questions, some predictive measurement of the
change ripples should be conducted. There has been a large amount of
research on measurement and metrics for software
development~\cite{chi94,ola07,hab08}, but only a little on software
maintenance~\cite{ban03,sub03}. Accurate measurement is a
prerequisite for all engineering disciplines, and software
maintenance is no exception. Since modifications made to software
inevitably have unpredicted and potential effects that are hard to
estimate accurately, measurement for software maintenance is
difficult. Therefore, when a change proposal is given, software
maintenance includes two critical activities that respond to the
questions above: making a preliminary estimation of the ripple
effects induced by the modification, and determining whether to
accept, reject, or further evaluate the given change proposal.

In this paper, we focus on change impact analysis and changeability
assessment to perform the predictive measurement for a change
proposal. Software change impact analysis (CIA), often simply called
{\it impact analysis}, is an approach to identifying the potential
effects caused by changes made to software~\cite{boh96a}. CIA starts
with a set of changed elements in a software system, called the {\it
change set}, and attempts to determine a possibly larger set of
elements, called the {\it impact set}, that requires attention or
maintenance effort due to these changes~\cite{boh96a}. A critical
threat to change impact analysis is the accuracy of its impact set:
the impact set may contain false-positives (elements in the
estimated impact set that are not really impacted) and
false-negatives (really impacted elements that are not identified in
the impact set). Hence, accurately estimating the impact set is our
main concern in performing impact analysis. Changeability assessment
is an evaluation of the ease of implementing a change
proposal~\cite{Board_1990}; through it, we can make a decision on
the change proposal before change implementation. Currently, some
researchers rely on design complexity metrics or coupling
measurement to check the changeability of the original
system~\cite{ban03,Riaz09,Cha2000}. This work seldom considers the
concrete change proposal when predicting the changeability of the
system, yet the maintenance effort and cost of different change
proposals may differ. Therefore, the changeability assessment in our
study is closely tied to the individual change proposal.

The focus of this study is on change impact analysis and
changeability assessment for a code-level change proposal. As the
{\it class} is the basic element in an object-oriented programming
environment, and one of the key factors in the transformation from
the requirement or design domain to the code domain, we assume that
the change proposal is composed of a set of changed classes. We then
use {\it formal concept analysis} (FCA) to calculate a ranked impact
set from these changed classes. {\it FCA} is a mathematical
technique that studies the relation between entities and entity
properties to infer a hierarchy of concepts~\cite{FCA}. Based on the
hierarchical property of FCA, we define an {\it impact factor}
metric for each impacted method in the impact set; this metric
corresponds to the priority of the method to be impacted. Finally,
we propose an {\it impactness} metric to indicate the changeability
of the change proposal to be implemented. The main contributions of
this paper are as follows:

\begin{itemize}
\item Present a CIA technique, which starts with a class-level change set and computes a ranked list of potentially impacted methods.
\item Propose an {\it impactness} metric to evaluate the changeability of a change proposal.
\item Show the effectiveness of our CIA technique and changeability assessment model on two real-world open-source Java programs.
\end{itemize}

The paper is organized as follows: In the next section, we present
some preliminary knowledge of formal concept analysis. Section
\uppercase\expandafter{\romannumeral3} introduces our CIA technique
and changeability assessment model. We conduct some empirical
studies in Section \uppercase\expandafter{\romannumeral4} to show
the effectiveness of our approach. In Section
\uppercase\expandafter{\romannumeral5}, we introduce related work on
CIA techniques and changeability assessment approaches. Finally, we
present our conclusion and future work in Section
\uppercase\expandafter{\romannumeral6}.

\section{Basics for Formal Concept Analysis}

In this section, we provide some basic definitions for {\it Formal
Concept Analysis} (FCA). FCA studies the relation between entities
and entity properties to infer a hierarchy of concepts~\cite{FCA}.
The basic notions of the concept lattice are the {\it formal
context} and the {\it formal concept}, which are defined as follows:

\begin{defn}[Formal Context] A formal context is defined as a triple $\mathbb{K}=(\mathcal {O}, \mathcal {A}, \mathcal
{R})$, where  $\mathcal {R}$ is a binary relation  between a set of
formal objects $\mathcal {O}$ and  a set of formal attributes
$\mathcal {A}$. Thus $\mathcal {R}\subseteq \mathcal {O}\times
\mathcal {A}$.
\end{defn}

\begin{defn}[Formal Concept] A formal concept, which is simply called concept, is a maximal  collection of formal objects sharing common formal attributes.
It is defined as a pair $(O,A)$ with $O\subseteq\mathcal {O}$,
$A\subseteq\mathcal {A}$, $O=\tau(A)$ and $A=\sigma(O)$, where
$\tau(A)=\{o\in\mathcal {O}|\forall a\in A:(o,a)\in \mathcal
{R}\}\wedge \sigma(O)=\{a\in\mathcal {A}|\forall o\in O:(o,a)\in
\mathcal {R}\}$.
\end{defn}

$\tau(A)$ is often said to be the {\it extent} of the concept and
$\sigma(O)$ is said to be its {\it intent}. The relation between
concepts forms a partial order on the set of all concepts. We often
use the following definition to construct a concept
lattice~\cite{FCA}:

\begin{defn}[Subconcept] Given two concepts ($O_{1}, A_{1}$) and
($O_{2}, A_{2}$) of a formal context, ($O_{1}, A_{1}$) is called the
subconcept of ($O_{2}, A_{2}$), provided that $O_{1}\subseteq O_{2}$
(or $A_{1}\supseteq A_{2}$). We usually mark such a relation as:
$(O_{1}, A_{1})\leqslant (O_{2}, A_{2}) \Longleftrightarrow
O_{1}\subseteq O_{2} \Longleftrightarrow  A_{1}\supseteq A_{2}$
\end{defn}

The set of all concepts of a formal context forms a partial order
and constitutes a complete lattice~\cite{bir1940}, defined as:

\begin{defn}[Concept Lattice] The concept lattice $\mathcal {L}(Co)$ is a complete lattice.
$\mathcal {L}(Co)=\{(O, A)\in 2^{\mathcal {O}}\times 2^{\mathcal
{A}}|O=\tau(A)\wedge A=\sigma(O)\}$, where infimum   and supremum of
two concepts ($O_{1}, A_{1}$) and ($O_{2}, A_{2}$) are defined as:
$(O_{1}, A_{1})\wedge (O_{2}, A_{2})=(O_{1}\cap O_{2},
\sigma(O_{1}\cap O_{2}))$, and $(O_{1}, A_{1})\vee  (O_{2},
A_{2})=(\tau(A_{1}\cap A_{2}), A_{1}\cap A_{2})$, respectively.
\end{defn}
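To make Definition 4 concrete, the infimum and supremum of two concepts can be computed directly from the derivation operators $\sigma$ and $\tau$. The following Python sketch does so for a tiny illustrative context; the object, attribute, and concept names are our own assumptions, not drawn from the paper's example.

```python
# Infimum (meet) and supremum (join) of two formal concepts per Definition 4.
# The context below is a toy example of our own.
context = {
    "o1": {"a1", "a2"},
    "o2": {"a1", "a3"},
    "o3": {"a2", "a3"},
}
objects = set(context)
attributes = {"a1", "a2", "a3"}

def sigma(O):
    """Common attributes of the objects in O."""
    return set.intersection(*(context[o] for o in O)) if O else set(attributes)

def tau(A):
    """Objects possessing every attribute in A."""
    return {o for o in objects if A <= context[o]}

def meet(c1, c2):
    """(O1, A1) ^ (O2, A2) = (O1 & O2, sigma(O1 & O2))."""
    O = c1[0] & c2[0]
    return (O, sigma(O))

def join(c1, c2):
    """(O1, A1) v (O2, A2) = (tau(A1 & A2), A1 & A2)."""
    A = c1[1] & c2[1]
    return (tau(A), A)

c1 = ({"o1"}, {"a1", "a2"})   # the concept generated by o1
c2 = ({"o2"}, {"a1", "a3"})   # the concept generated by o2
# meet(c1, c2) is the bottom concept (no object has all three attributes),
# while join(c1, c2) groups o1 and o2 under their shared attribute a1.
```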

The complete information for each concept of $\mathcal {L}(Co)$ is
given by its extent and intent. However, if all concepts are labeled
with this complete information, the lattice becomes very hard to
read. Fortunately, there is a simple way to represent extents and
intents in a more compact and readable form. A lattice element is
labeled with $a\in\mathcal {A}$ ($o\in\mathcal {O}$) if it is the
most general (special) concept having $a$ ($o$) in its intent
(extent). The lattice element marked with $a$ is thus~\cite{FCA}:

$\mu(a)=\vee \{co\in\mathcal {L}(Co)|a\in int(co)\}$

In this formula, $int(co)$ represents the intent of the concept
$co$, and it indicates that all concepts smaller than $\mu(a)$ have
$a$ in their intents. Similarly, the lattice element marked with $o$
is:

$\gamma(o)=\wedge \{co\in\mathcal {L}(Co)|o\in ext(co)\}$

Here, $ext(co)$ represents the extent of the concept $co$; this
formula indicates that all concepts greater than $\gamma(o)$ have
$o$ in their extents. With this labeling approach, suprema in the
lattice denote that certain objects have common attributes, while
infima show that certain attributes are shared by common formal
objects. FCA has the following properties\footnote{Of course, other
properties of the concept lattice can also be exploited. These three
properties are enumerated because they will be used in this
article.}~\cite{FCA}; it can

\begin{itemize}
\item  identify maximal groupings of formal objects sharing maximal sets of
formal attributes,
\item  display a hierarchical classification of formal attributes, and
\item  show a compact and complete representation of dependencies between formal objects and formal
attributes.
\end{itemize}

A common process of applying formal concept analysis in practical
applications is shown in Figure 1. First, a formal context with
formal objects and formal attributes is provided. Then a concept
lattice is generated by applying a lattice construction algorithm to
the formal context. The concept lattice is composed of a set of
formal concepts with their extents and intents. Finally, we perform
applications based on the above properties of the generated concept
lattice.
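The lattice-construction step of this process can be sketched with a brute-force enumeration of all formal concepts: every object subset is closed via $A=\sigma(X)$, $O=\tau(A)$, yielding a concept $(O,A)$. The toy context and its names below are our own assumptions; practical tools use dedicated construction algorithms such as the bottom-up algorithm cited later in this paper.

```python
from itertools import combinations

# Brute-force enumeration of all formal concepts of a small context:
# a pair (O, A) is a concept when O = tau(A) and A = sigma(O).
# The toy context below is illustrative only.
context = {
    "C1": {"M1", "M2"},
    "C2": {"M1", "M3"},
    "C3": {"M2", "M3"},
}
objects = set(context)
attributes = set().union(*context.values())

def sigma(O):
    """Common attributes of the objects in O."""
    return set.intersection(*(context[o] for o in O)) if O else set(attributes)

def tau(A):
    """Objects possessing every attribute in A."""
    return {o for o in objects if A <= context[o]}

def concepts():
    """Close every object subset: A = sigma(X), O = tau(A) gives a concept."""
    found = set()
    for r in range(len(objects) + 1):
        for X in combinations(sorted(objects), r):
            A = sigma(set(X))
            O = tau(A)
            found.add((frozenset(O), frozenset(A)))
    return found

for O, A in sorted(concepts(), key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(O), "<->", sorted(A))
```

For this three-object context the enumeration yields eight concepts, including the top concept (all objects, no common attribute) and the bottom concept (no object, all attributes).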

\begin{figure}
\centering
\includegraphics[width=8cm]{fca.eps}
\caption{The process of applications of formal concept analysis}
\label{fca.eps}
\end{figure}

\section{Change Impact Analysis and Changeability Assessment}

Changeability is one of the critical features of software because changes are continuously made to cope with
new requirements, existing faults, change requests, etc. However, changes made to software
inevitably have unpredicted and undesirable ripple effects
on other parts of the software. These ripple effects may sometimes
be confined to a small scope, or may affect a large part of the
system. Therefore, prior to change implementation, maintainers must conduct a predictive measurement of the change ripples, and assess and make a decision on
the change proposal. Given a change proposal, we include two necessary activities before change implementation:

1)  Predict the change ripples induced by the change proposal.

2)  Assess the change proposal and decide whether to accept it.


In this section, we present how to effectively perform these two
activities. For the first activity, we use change impact analysis to
estimate the change ripples induced by the change proposal. For the
second activity, we define a metric that shows to what degree the
changes affect the whole program. The process of these two
activities is shown in Figure 2. We assume that the change proposal
has been mapped into source code changes, i.e., a set of changed
classes. Then, given the original system and the change set, we
employ formal concept analysis to perform impact analysis. This
procedure includes constructing an intermediate representation ({\it
lattice of class and method dependence, LoCMD}) for the original
system and computing the impact set. Finally, we assess the
changeability of the system based on the impact set produced by
impact analysis. According to the changeability results, maintainers
make a decision on the change proposal. In the following, we discuss
change impact analysis and changeability assessment on the
code-level change proposal in detail.

\begin{figure}
\centering
\includegraphics[width=6cm]{process.eps}
\caption{The process of change impact analysis and changeability assessment} \label{process.eps}
\end{figure}


\subsection{Formal Concept Analysis for CIA}

Change impact analysis is an important predictive measurement of the ripple effects induced by proposed changes.
In this section, we focus on using formal concept analysis to
provide an effective way to perform impact analysis; we give a brief
introduction here, and more details can be found in~\cite{sun11}.
The change set of our approach is composed of a set of classes
proposed to be changed. Impact analysis is then performed with the
support of formal concept analysis. In this paper, formal concept
analysis is instantiated as follows: formal objects are classes and
formal attributes are methods. The {\it dependence relation} between
a class and a member method is defined as follows:


\begin{defn}[Dependence between class and method]
Given a class $c$ and a method $m$ in a program, class $c$ depends
on method $m$ if and only if at least one of the following
conditions is satisfied:

1. $m$ belongs to $c$;

2. $m$ belongs to any superclass of $c$;

3. $c$ depends on another method $k$ calling $m$;

4. $c$ depends on another method $k$ called by $m$.
\end{defn}
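The four conditions above can be computed as a fixed point over the call graph: a class first depends on its owned and inherited methods, and this set is then closed under callers and callees. The sketch below builds the dependence relation for a hypothetical handful of classes and methods; all names and the call graph are illustrative assumptions, not the paper's example program.

```python
# Class-method dependence per the definition above: a class depends on its
# own and inherited methods (conditions 1-2), closed under callees and
# callers of methods it already depends on (conditions 3-4).
# All names below are hypothetical.
owns = {"C1": {"m1"}, "C2": {"m2"}, "C3": {"m4"}}        # condition 1
superclasses = {"C1": [], "C2": ["C1"], "C3": []}        # condition 2
calls = {"m1": {"m3"}, "m3": {"m2"}, "m2": set(), "m4": set()}

def depends_on(c):
    """All methods class c depends on."""
    deps = set(owns[c])                      # condition 1
    for s in superclasses[c]:
        deps |= owns[s]                      # condition 2
    changed = True
    while changed:                           # fixed point for conditions 3-4
        changed = False
        for k in list(deps):
            for m in calls.get(k, ()):       # condition 3: k calls m
                if m not in deps:
                    deps.add(m)
                    changed = True
            for m, callees in calls.items(): # condition 4: m calls k
                if k in callees and m not in deps:
                    deps.add(m)
                    changed = True
    return deps

# The relation {c: depends_on(c)} is exactly a formal context with classes
# as formal objects and methods as formal attributes.
context = {c: depends_on(c) for c in owns}
```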

With this instantiation of the formal context, the concept lattice
is generated by the bottom-up lattice construction
algorithm~\cite{FCA}. This lattice is called the {\it Lattice of
Class and Method Dependence} ({\it LoCMD}). From the definitions of
the concept lattice in the previous section, the following corollary
about the {\it LoCMD} ($\mathcal {L}(Co)$) can be obtained:

\newtheorem{corollary}{Corollary}
\begin{corollary}
Given {\it LoCMD}: $\mathcal {L}(Co)=(N, E)$. $C$ represents a set
of classes. For $c\in C$, $M(c)$ represents a set of methods that
class $c$ depends on. Then, we have

$n\in N\Leftrightarrow ext[n]=\{c\in C|\forall m\in int[n]: m\in
M(c)\}\wedge int[n]=\{m\in M|\forall c\in ext[n]: m\in M(c)\}$

$(n, m)\in E \Leftrightarrow int[n]\subset int[m]\wedge \nexists
k\in N: int[n]\subset int[k]\subset int[m]$
\end{corollary}

From this corollary, we know that the containment relationship
between concept intents forms a partial order on the set of all
formal concepts, which is shown as the {\it LoCMD}. Based on this
property, we see that the {\it LoCMD} structures methods into a
hierarchical order, and two assumptions are proposed: (1) as
upward-reachable methods are shared by an increasing number of
unchanged classes, they are expected to be less and less affected by
the changes; and (2) if upward methods are reachable from the nodes
labeled by an increasing number of changed classes, these methods
are more probably affected by these joint class changes. The first
assumption states that the probability of a method being impacted by
the changed classes decreases as we move upward from the lattice
nodes labeled by these changed classes. The second assumption states
that methods upward-reachable from an increasing number of changed
classes are more probably affected. These two assumptions have been
validated in~\cite{sun11}. According to these two assumptions, we
propose an {\it Impact Factor} ($IF_{j}$) metric for the lattice
node $j$, which is defined as:

\begin{displaymath}  IF_{j}=n+\frac{1}{\sum_{i=1}^{n} min (dist(j, i))+1} \end{displaymath}

In this formula, $n$ is the number of changed classes from which
lattice node $j$ is upward-reachable, and $min (dist(j,i))$ is the
least number of edges that must be traversed straight upward from
the node labeled by changed class $i$ to node $j$. Methods in the
impact set are ranked according to this {\it impact factor} metric.
The impact set is computed based on upward reachability from the
nodes labeled by the changed classes on the {\it LoCMD}, and each
method in the impact set is marked with its $IF$ value, showing its
probability of being affected. In the following, we give a simple
Java program to demonstrate the process of the CIA.
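As a sketch of how $IF_{j}$ might be computed on a LoCMD, the following uses breadth-first search over upward edges on a small hand-made lattice. The node names, edges, and class labeling are our own assumptions, not those of the paper's Figure 4.

```python
from collections import deque

# Impact factor IF_j = n + 1 / (sum of min upward distances + 1) on a toy
# lattice. Edges point upward (subconcept -> superconcept); all names and
# edges here are illustrative assumptions.
up_edges = {
    "n1": {"n3"},
    "n2": {"n3", "n4"},
    "n3": {"n5"},
    "n4": {"n5"},
    "n5": set(),
}
changed_label_nodes = ["n1", "n2"]  # nodes labeled by the changed classes

def min_dist(src, dst):
    """Least number of upward edges from src to dst (None if unreachable)."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nxt in up_edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

def impact_factor(j):
    dists = [min_dist(src, j) for src in changed_label_nodes]
    dists = [d for d in dists if d is not None]
    n = len(dists)  # changed classes from which j is upward-reachable
    return n + 1 / (sum(dists) + 1)

# n3 is one edge above both changed nodes: IF = 2 + 1/(1 + 1 + 1) = 2.33...,
# while n4 is reachable from n2 only: IF = 1 + 1/(1 + 1) = 1.5.
```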

\begin{figure}
\centering
\includegraphics[width=8cm]{example.eps}
\caption{A simple Java example program} \label{example.eps}
\end{figure}

\begin{table*}
\begin{center}
\caption{Formal context for the simple Java example}
\begin{tabular}{|l|llllllllll|}
\hline
    &  $M1$ & $M2$ & $M3$ & $M4$ & $M5$ & $M6$ & $M7$ & $M8$ & $M9$ & $M10$ \\\hline
  $C1$ & $\times$ & $\times$ &  & $\times$ & $\times$ &  &  &  & & \\
  $C2$ & $\times$ & $\times$ &$\times$  &$\times$  &   & $\times$ &  & & $\times$ & \\
  $C3$ & $\times$ &   &  &  & $\times$ &   & $\times$ & & & \\
  $C4$ &   &  & $\times$ &  &   & $\times$ &  &  & & \\
  $C5$ &  &  &  &  & $\times$  &  & $\times$ & $\times$ & $\times$ & \\
  $C6$ &   &  & $\times$ &  &   &  &  & $\times$ & $\times$& $\times$ \\\hline
\end{tabular}
\end{center}
\end{table*}

\begin{figure*}
\centering
\includegraphics[width=10cm]{ca.eps}
\caption{LoCMD for the simple Java example program ((a) shows the
graph representation for the {\it LoCMD}, and (b) describes the
detailed extents and intents for each concept)} \label{ca.eps}
\end{figure*}

Figure 3 gives a simple Java example. First, a formal context needs
to be provided. A formal context can be easily represented by a
relation table $\mathcal {T}$, in which rows are headed by classes
and columns by methods. A {\it cross} in row $o$ and column $a$
means that the formal object $o$ (corresponding to class $c$) has
formal attribute $a$ (corresponding to method $m$); in other words,
class $c$ depends on method $m$. Table
\uppercase\expandafter{\romannumeral1} shows the corresponding
relation table between classes and methods for the Java program in
Figure 3. Such a table forms the formal context to be analyzed. Then
we apply the FCA technique to the formal context in Table
\uppercase\expandafter{\romannumeral1} and generate a set of
concepts, composed of sets of classes (extents) sharing sets of
methods (intents), as shown in Figure 4(b). All these concepts are
shown as a {\it LoCMD}, denoted $\mathcal {L}(Co)$. On the {\it
LoCMD}, nodes are associated with these concepts, while edges
correspond to the containment relation between concept intents. Each
lattice node on the {\it LoCMD} in Figure 4(a) is marked with the
$\mu(a)$ ({\it I} Set) labeling and the $\gamma(o)$ ({\it E} Set)
labeling. CIA is then performed on this {\it LoCMD}.


Assume that $\{C1, C2, C6\}$ are proposed to be changed in the
simple Java example in Figure 3. According to our definition of the
{\it impact factor}, we can compute the impact factor of the
upward-reachable concepts labeled by the potentially impacted
methods, as shown in Table \uppercase\expandafter{\romannumeral2}.
Columns in this table represent the priority with which to check the
impacted methods ($Priority$), the lattice nodes reachable from the
concepts labeled by changed classes ($Node$), the potentially
impacted methods labeling these reachable concepts ($IM$), and their
impact factor values ($IF$), respectively. From Table
\uppercase\expandafter{\romannumeral2}, we see a ranked list of
potentially impacted methods with their {\it impact factor} values.
These impacted methods can be prioritized as presented in Column 1
of this table. The results show that methods $\{M2, M4\}$ have the
highest {\it impact factor} value, which illustrates that $\{M2,
M4\}$ are the most probably impacted methods and need to be checked
and modified first.

\begin{table}
\begin{center} \caption{Impact Factor
Computation for \{$C1,C2,C6$\} Changes}
\begin{tabular}{|c|l|l|c|}
\hline {\it Priority} & {\it Node} & {\it IM} & {\it IF}

\\\hline
 $1$ & $co5$ & $M2, M4$ & $2.3$  \\\hline
 $2$ & $co1$ & $M1$  & $2.2$  \\\hline
 $2$ & $co3$ & $M3$ & $2.2$ \\\hline
 $2$ & $co4$ & $M9$  & $2.2$ \\\hline
 $5$ & $co15$ & $M10$  & $2$   \\\hline
 $6$ & $co8$ & $M6$  & $1.5$ \\\hline
 $6$ & $co10$ & $M8$ & $1.5$  \\\hline
 $8$ & $co2$ & $M5$ & $1.3$  \\\hline
\end{tabular}
\end{center}
\end{table}


\subsection{Changeability Assessment}

Changeability assessment is an instructive measurement that assesses
to what degree a change proposal affects other parts of the original
system. The changeability result can help maintainers decide whether
to accept a change proposal, or determine which change schedule is
more suitable. If a change proposal affects a large part of the
system, the changeability of the system with respect to this change
proposal is poor, and we may reject the proposal, consider another
change schedule, or even redevelop a new system. If the change
proposal affects only a small part of the system, the changeability
is appropriate and we may accept the proposal. Changeability
assessment needs a metric to measure the ease with which a system
can absorb a change proposal. Changeability assessment in practical
maintenance activities may involve two situations: one is to
evaluate whether to implement a change proposal, and the other is to
compare different change proposals and decide which one produces
fewer ripple effects on the system. The changeability assessment in
this paper considers both aspects.

In this section, we propose a metric, {\it impactness}, to perform
the changeability assessment of a change proposal. {\it Impactness}
measures to what degree a change proposal affects the original
system. It is defined based on the impact set predicted by the CIA
process in the previous section, as follows:

\begin{displaymath}  Impactness= \frac{\sum_{i=1}^m w_{i}IF_{i}}{\sum_{j=1}^n w_{j}IF_{j}}\times 100\% \end{displaymath}


In this formula, $IF_{i}$ and $IF_{j}$ are the impact factor values
of methods in the impact set; $w_{i}$ and $w_{j}$ are nonnegative
weights for impacted methods with different impact factor values,
which are closely related to the precision of the impact set with
that $IF$ value. $m$ is the number of methods potentially impacted
by the proposed class changes, so the numerator represents the whole
impact of the proposed changes. The denominator shows the impact of
the changes when all the classes in the system are assumed to be
changed; there, $n$ is the number of methods potentially impacted
when all classes change. The {\it impactness} value ranges between
$0$ and $1$. When it approaches $0$, the proposed changed classes
have very little impact on the system; when it approaches $1$,
almost the whole system is affected by the proposed changes, so we
may reject the given change proposal and may need an alternative
change schedule. The smaller the {\it impactness} of the changes,
the better the changeability of the system. Our {\it impactness}
metric can easily handle the two situations of changeability
assessment described above. Given a single change proposal, the {\it
impactness} value can guide maintainers in deciding whether to
implement it. Given several alternative change proposals,
maintainers can select the one with the smallest {\it impactness}
value.


We again assume that $\{C1, C2, C6\}$ are proposed to be changed in
the simple Java example in Figure 3, and we assess the changeability
of this change proposal. Here we need to compute the {\it
impactness} value for changeability assessment. In the {\it
impactness} formula, $w_{i}$ and $w_{j}$ are closely related to the
precision of the impact set with the given $IF$ value; that is, if a
method has a high $IF$ value, $w_{i}$ will be high for this method.
As a method with a high $IF$ value is more probably affected, a
rough way to set $w_{i}$ is to use $IF_{i}$ in its place. The {\it
impactness} formula then becomes:

\begin{displaymath}  Impactness= \frac{\sum_{i=1}^m IF_{i}^{2}}{\sum_{j=1}^n IF_{j}^{2}}\times 100\% \end{displaymath}

First we compute the value of the numerator, which represents the impact of these three classes;
the result is $35.3$. Then we compute the value of the denominator, which represents the ripple effects when all six classes are changed;
the result is $62.6$. According to the {\it impactness} formula, the {\it impactness} of the proposed changes is thus $56\%$, which shows that changing these three classes
may affect about half of the system. We may reject this change proposal and consider another change schedule.
Therefore, maintainers can make their own decision with the help of the {\it impactness} result.
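The arithmetic above can be checked directly from the Table \uppercase\expandafter{\romannumeral2} values; the denominator $62.6$ is taken from the text as given, since the all-classes-changed impact set is not listed in the table.

```python
# Impactness for the {C1, C2, C6} change with w_i = IF_i, i.e. each term is
# IF_i squared. IF values come from Table II; the denominator 62.6 is the
# all-six-classes figure quoted in the text.
if_values = {
    "M2": 2.3, "M4": 2.3,
    "M1": 2.2, "M3": 2.2, "M9": 2.2,
    "M10": 2.0,
    "M6": 1.5, "M8": 1.5,
    "M5": 1.3,
}
numerator = sum(v ** 2 for v in if_values.values())   # about 35.3
denominator = 62.6
impactness = numerator / denominator                  # about 0.56, i.e. 56%
print(round(numerator, 1), f"{impactness:.0%}")
```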

\section{Case Study}

To evaluate the effectiveness of our CIA technique and changeability
assessment model, we conduct case studies to investigate the
following research questions:

RQ1: Does the {\it impact factor} metric used in our CIA technique have an effect on the accuracy of the impact results?

RQ2: Is {\it impactness} an effective indicator of the ease with which the system can absorb a change proposal?

\subsection{Subject Programs}

We use two Java subjects selected from open-source projects, as shown in
Table \uppercase\expandafter{\romannumeral3}, for our case studies.
The table shows the name of each project ($Name$), the number of
versions ($Version$), the number of classes ($Class$), the number of
methods ($Method$), and the lines of code ($LoC$). The numbers of
classes, methods, and lines of code are averaged across the different
versions.

The first subject program is {\it
Siena}\footnote{http://www.inf.usi.ch/carzaniga/siena} (Scalable
Internet Event Notification Architecture), an
Internet-scale event notification middleware for distributed
event-based applications deployed over wide-area networks. It is
responsible for selecting notifications that are of interest to
clients and delivering those notifications to the clients via
access points. The other subject is {\it
NanoXML}\footnote{http://nanoxml.sourceforge.net/orig}, a
small XML parser for Java. We extracted several versions from their CVS
repositories for our case studies.

\begin{table}
\begin{center}
\caption{Research subjects}

\begin{tabular}{|l|c|c|c|c|}

\hline
{\it Name} & {\it Version}  & {\it Class} & {\it Method} & {\it LoC}   \\

\hline $SIENA$
   &8
   &9
   &111
   &2107
    \\
\hline $NanoXML$
   &6
   &26
   &241
   &7631
    \\
\hline

\end{tabular}
\end{center}
\end{table}

\subsection{Methods and Measures}

We use {\it precision} and {\it recall}, two metrics widely used in information retrieval\footnote{http://en.wikipedia.org/wiki/Precision\_and\_recall},
to validate the {\it accuracy} of the CIA techniques.
These two metrics are defined as follows:

\begin{displaymath}  P=\frac{|Actual  \quad Set \quad \cap \quad Estimated
\quad Set|}{|Estimated \quad Set|}\times 100\%
\end{displaymath}
\begin{displaymath}  R=\frac{|Actual \quad Set \quad \cap \quad Estimated
\quad Set|}{ |Actual  \quad Set|}\times 100\%
\end{displaymath}

{\it Actual Set} is the set of methods actually
changed during version evolution, obtained by comparing the
methods of consecutive program versions. {\it
Estimated Set} is the set of methods potentially impacted by the
{\it change set} according to our CIA technique. The {\it change set} is
obtained by extracting the changed classes between
versions. {\it Precision} is an inverse measure of
false positives, while {\it recall} is an inverse measure of
false negatives. False positives are entities identified by the CIA
technique that are not actually impacted; false negatives are
actually impacted entities that are missing from the impact set.
These two metrics show, in an a posteriori way, how well the predicted
results match the actual results. We compute the
{\it precision} and {\it recall} of the impact set
over different {\it impact factor} ranges between consecutive versions to assess the accuracy of our CIA technique.
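As a concrete illustration, the two formulas can be computed directly over method sets. The method names below are hypothetical, not taken from the subject programs:

```python
def precision_recall(actual_set, estimated_set):
    """Precision = |Actual ∩ Estimated| / |Estimated| * 100%;
    Recall    = |Actual ∩ Estimated| / |Actual| * 100%."""
    tp = len(actual_set & estimated_set)  # true positives
    precision = tp / len(estimated_set) * 100.0
    recall = tp / len(actual_set) * 100.0
    return precision, recall

# hypothetical methods changed between two versions vs. methods our CIA estimates
actual = {"A.foo", "B.bar", "C.baz"}
estimated = {"A.foo", "B.bar", "D.qux", "E.quux"}
p, r = precision_recall(actual, estimated)  # p = 50.0, r ≈ 66.7
```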

In addition, to validate the effectiveness of the {\it impactness},
we propose the following formula to measure the degree to which actual class changes
affect the system, for comparison with the {\it impactness}:

\begin{displaymath}  ActualImpact\%=\frac{|Actual  \quad Set |}{|Method \quad Set|}\times 100\%
\end{displaymath}

In this formula, {\it Actual Set} is the set of methods actually
changed during version evolution, as defined for the {\it precision} and {\it recall} formulas.
{\it Method Set} includes all methods of each version of the system. The $ActualImpact\%$
thus shows to what degree changes occurred during the evolution of the software. For the {\it impactness} metric, we again use
$Impactness= \frac{\sum_{i=1}^m IF_{i}^{2}}{\sum_{j=1}^n IF_{j}^{2}}\times 100\% $ to indicate the changeability of the change proposal.
We then compare the $ActualImpact\%$
with the {\it impactness} to see whether our {\it impactness} metric reflects the actual impact in a real change environment.
This validation is also conducted in an a posteriori way.
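Under the same hypothetical naming, the a posteriori check amounts to verifying that the {\it impactness} estimate conservatively bounds the observed change ratio:

```python
def actual_impact(actual_set, method_set):
    """ActualImpact% = |Actual Set| / |Method Set| * 100%."""
    return len(actual_set) / len(method_set) * 100.0

# hypothetical data: 12 of 120 methods actually changed between two versions
observed = actual_impact(set(range(12)), set(range(120)))  # 10.0

# a conservative impactness estimate should not fall below the observed value
estimated_impactness = 25.0  # hypothetical value from the impactness formula
assert estimated_impactness >= observed
```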

\subsection{Results and Analysis}

In this section, we gather and analyze the results collected from
the case studies to answer {\it RQ1} and {\it RQ2}. Figure 5 shows the
number of changed classes for the $SIENA$ and $NanoXML$
programs during their version evolution. Each version has some changed classes;
for example, three classes are changed in $SIENA-V1$. We then employ our $IF$ and {\it impactness} formulas to compute the ripple effects
and the {\it impactness} of each change proposal, respectively.

\begin{figure}
\centering
\includegraphics[width=8cm]{change.eps}
\caption{Number of changed classes for $SIENA$ and $NanoXML$ programs
during their version evolution} \label{change.eps}
\end{figure}

\subsubsection{Study 1}

The {\it impact factor} metric is
expected to express the probability that its corresponding method is impacted. That is,
the precision of the impact set with high $IF$ values should be better than that with
low $IF$ values. We therefore first examine the precision of the impact set.
According to the {\it impact factor} formula and the results
obtained from the case studies, most of the $IF$ values (up to
$95\%$) belong to $\{1,1.5,2,2.2,2.3\}$, so we mainly
analyze the impacted methods with $IF$ values in these cases.

\begin{figure}
\centering
\includegraphics[width=8cm]{precision.eps}
\caption{Precision of the impact set of $SIENA$ and $NanoXML$
programs at different $IF$ values} \label{precision.eps}
\end{figure}

Figure 6 shows the
precision at different $IF$ values for the $SIENA$ and $NanoXML$
programs during their version evolution. From this figure, we see
that the precision of the impact set varies with the $IF$ value. In most cases, the precision with $IF=2.3$ is above
$40\%$ for both programs. When the $IF$ value
decreases to $2.2$, the precision decreases to
between $20\%$ and $40\%$; when $IF=2$, the precision values
are only between $10\%$ and $30\%$. As the $IF$ value decreases further, i.e., $IF\leq1.5$, all
the precision values are lower than $10\%$ and even approach $0$. This illustrates that most estimated
impacted methods with $IF<1.5$ are false positives.
Generally speaking, as the $IF$ value decreases, the precision also decreases.
From these results, we see that the $IF$
value has an effect on the precision of the impact set.

The analysis above concerns the precision of the impact set, and shows that
higher $IF$ values yield higher precision. We now check the {\it recall}
of the impact set, i.e., whether the $IF$ value also affects the {\it
recall} of the CIA technique. Figure 7 shows the recall in different
$IF$ ranges for the $SIENA$ and $NanoXML$ programs during their
version evolution. From this figure, we see that as the $IF$
range decreases, the recall of the impact set increases and finally
approaches $100\%$. Specifically, when $IF\geq 2.3$, most recall values
are below $20\%$; when $IF\geq 2.2$,
most recall values increase to $40\%$. As the $IF$ range decreases further, the recall values continue to increase,
and when $IF\geq1$, all recall values reach $100\%$. This trend is the inverse of the precision variation.
Therefore, the $IF$ value used to select the impact set also has an effect on
the recall of the impact results.

From the analysis above, we know that when the $IF$ value is high, the precision is also high. This
suggests that maintainers can preferentially check the impacted methods
with high $IF$ values. When
$IF\leq1.5$, however, the precision of the impact set approaches $0$, and an impact
set in this $IF$ range is of little use in practice.
Maintainers can therefore neglect methods with
$IF<1.5$ during inspection. Moreover, the impact set with $IF\geq 1.5$ keeps the recall
above $80\%$, which may well give maintainers enough confidence to track the impacts of
the proposed changes.
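The inspection strategy above (rank by $IF$ and discard methods below $1.5$) can be sketched as follows. The method-to-$IF$ mapping is hypothetical:

```python
def prune_and_rank(impact_set, threshold=1.5):
    """Keep methods whose IF value meets the threshold, ranked by descending IF."""
    kept = [(m, f) for m, f in impact_set.items() if f >= threshold]
    return sorted(kept, key=lambda pair: -pair[1])

# hypothetical impact set produced by the CIA technique
impacts = {"A.foo": 2.3, "B.bar": 1.0, "C.baz": 2.0, "D.qux": 1.5}
ranked = prune_and_rank(impacts)  # B.bar is dropped; A.foo is inspected first
```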

\begin{figure}
\centering
\includegraphics[width=8cm]{recall.eps}
\caption{Recall of the impact set of $SIENA$ and $NanoXML$ programs
in different $IF$ ranges} \label{recall.eps}
\end{figure}

\subsubsection{Study 2}

In this section, we investigate the second research question, i.e., whether
the {\it impactness} can be effectively used for changeability assessment.
As proposed above, we define the $ActualImpact\%$ metric to show how many actual method changes
are implemented during program evolution.

Figure 8 shows the {\it impactness} and $ActualImpact\%$ values of
different class changes for the $SIENA$ and $NanoXML$ programs during
their version evolution. From this figure, we see that the
{\it impactness} estimate for every change proposal is
higher than the $ActualImpact\%$ value; in other words, we
overestimate the impact induced by the change proposals. However,
Figure 8 also shows that the variation tendency of the {\it
impactness} values is in accord with that of the $ActualImpact\%$ values.
That is, given two or more change proposals, our
changeability assessment model can accurately indicate which change
proposal has less impact on the original system. In addition, in spite
of the overestimation, we can regard the {\it impactness} as a conservative evaluation
of the changeability of the system to absorb a change
proposal. For example, when the {\it impactness} value of a change
proposal is lower than $50\%$, the change proposal can be deemed to
have little impact on the original system.

One important reason for the overestimation is
the improper choice of weights for impacted methods with different impact factor values.
The weights are closely related to
the precision of the impact set at each $IF$ value, and here we simply use the $IF$ value as the weight.
However, when the $IF$ value is lower than two, the precision of the impact set in that $IF$ range is very low, mostly approaching
$0$. In such cases, we should not simply substitute the $IF$ value for the weight. Choosing more appropriate
weights for different $IF$ values will be an important part of our future work.
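One such direction, sketched below under assumed numbers, is to replace $w_{i}$ with the empirically observed precision for the method's $IF$ bucket rather than with $IF_{i}$ itself. The precision table here is hypothetical, not measured:

```python
# hypothetical precision observed per IF bucket (in the spirit of Figure 6)
EMPIRICAL_PRECISION = {2.3: 0.45, 2.2: 0.30, 2.0: 0.20, 1.5: 0.05, 1.0: 0.02}

def weight(if_value):
    """Map an IF value to the precision of the nearest bucket at or below it."""
    buckets = [b for b in EMPIRICAL_PRECISION if b <= if_value]
    return EMPIRICAL_PRECISION[max(buckets)] if buckets else 0.0

def impactness_weighted(proposal_ifs, all_ifs):
    """Impactness with empirical-precision weights instead of w_i = IF_i."""
    num = sum(weight(f) * f for f in proposal_ifs)
    den = sum(weight(f) * f for f in all_ifs)
    return num / den * 100.0
```

Because low-$IF$ methods receive near-zero weights, this variant would discount the many false positives that inflate the current estimate.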

\begin{figure}
\centering
\includegraphics[width=8cm]{impact.eps}
\caption{{\it Impactness} and $ActualImpact\%$ for $SIENA$ and $NanoXML$ programs
during their version evolution} \label{impact.eps}
\end{figure}


\subsection{Threats to Validity}


Like any empirical validation, ours has its limitations. In the
following, some threats to the validity of our case study are
discussed.

First, we apply our technique to only two small-to-medium-size
programs, so we cannot guarantee that the results of our case
studies generalize to larger or arbitrary
subjects. However, these subjects are selected from open-source
projects and are widely employed in experimental studies.

A second concern is the changes studied: we examine
class-level changes, including changes made within a class body and
deletion of a class. Deleting unused fields within a class is
also counted as a class change, but such changes do
not actually impact other entities in the original program. This can
therefore cause considerable imprecision when using our CIA technique.

Finally, we use the $IF$ value in place of the weight value,
which leads to overestimation of the {\it impactness}. Different weight values may lead to
different {\it impactness} results, and we may select the weights from a different perspective, e.g., from
the empirical precision results for different $IF$ values.

\section{Related Work}

In this section, we introduce some related work from two aspects: 1)
change impact analysis, and 2) changeability assessment.

\subsection{Change Impact Analysis}

Current research on change impact analysis ranges from
relying on static information~\cite{ton03,she08,pos09,pet09,kagd10}
to dynamic information~\cite{ors03,law03a,api05}. Some work also utilizes
static and dynamic information in
combination~\cite{ren04,sai08,cav09}.

Static CIA techniques take all possible behaviors and inputs into
account, and are often performed by analyzing the syntactic and
semantic dependences of the program~\cite{ton03,she08}. Early work
combined concept analysis with program slicing
to perform fine-grained {\it CIA} at the intraprocedural
level~\cite{ton03}. Other work tries to generate a
ranked list of impact results. Poshyvanyk et al. proposed novel
conceptual coupling measures to capture dependencies that cannot
be captured by structural coupling measures~\cite{pos09}; these
coupling measures are then used to produce a ranked list of classes
based on different types of dependences among classes. The
granularity they analyze is the class level, while ours reaches both
the class and method levels. Petrenko et al. presented a study on
{\it variable granularity}, i.e., classes, class members, and
code fragments, to improve the precision of impact
analysis~\cite{pet09}; their approach generates impacted
entities labeled with different priorities. Recently, there has been
increasing interest in {\it mining software repositories} ({\it
MSR}) techniques to support CIA~\cite{zim05,she08}. {\it
Evolutionary} dependences between program entities that cannot be
distilled by traditional program analysis can be mined
from these repositories, though these {\it CIA} techniques often generate
the impact set at a coarse class (file) level. Kagdi et al. utilized
both single- and multiple-version analysis for impact
analysis~\cite{kagd10}. They investigated two combinations,
disjunctive and conjunctive, to compute the impact set, and their
results show that the combined methods improve the accuracy of the
impact set compared to single- and multiple-version impact analysis alone.

Because the impact set computed by static CIA
techniques is imprecise, i.e., it contains too many
false positives~\cite{law03a}, dynamic CIA techniques
have been proposed that focus on a set of specific program executions.
Dynamic CIA techniques consider the inputs that occur in practical
use and rely on information collected during
execution to calculate the impact
set~\cite{ors03,law03a,api05}. Their impact set is more precise
and suitable for analysis. However, the cost of dynamic CIA
techniques is higher due to the overhead of collecting execution
information, and the impact sets they compute often miss some actually
impacted entities, i.e., contain false negatives.

Rather than performing static and dynamic CIA independently,
some work combines them~\cite{ren04,sai08,cav09}. However,
the purpose of this work differs from traditional
CIA techniques: it aims to find the test cases impacted by
changes~\cite{ren04,sai08}, and the failures or differences induced by
changes made to the software~\cite{ren04,sai08,cav09}.

\subsection{Changeability Assessment}

Changeability is an important software quality attribute to measure
software maintainability~\cite{Board_1990}. Research into software
changeability assessment includes proposing changeability predictors
based on measurable factors that have a bearing on the software
maintenance activity~\cite{Riaz09}.

Currently, most research on changeability assessment relies on
design property metrics as changeability indicators, e.g., cohesion, complexity, and coupling~\cite{Cha2000}.
In addition, Fluri proposed a changeability assessment model based on a taxonomy of change types and a classification of
these types by change significance level across consecutive versions of software entities~\cite{Fluri07}.
With this model, each source code entity is classified as having low, medium, or
high changeability, and maintainers select an appropriate modification strategy accordingly.

The changeability assessment approaches above mainly focus on the changeability of evolving software without
considering the system's ability to absorb an individual change proposal.
Chaumun et al. proposed a changeability assessment model that relies on computing the impact of class changes~\cite{cha99}.
They define a change impact model for each class change type by analyzing the types of dependence between classes, and
changeability is then predicted from the impact results of the class changes. In this paper, we also measure changeability based on
the impact results of class changes; however,
we propose an {\it impactness} metric to indicate the changeability of the software to absorb a change proposal.


\section{Conclusion and Future Work}

Given a change proposal, the predicted change ripples and the
changeability of the system to absorb the proposal should be measured
before the change is implemented. This paper
proposed a predictive measurement of the ripple effects and the
changeability of a change proposal expressed as a set of
changed classes. Ripple effects are estimated with a change impact
analysis technique: we compute the impact set from the set of changed
classes, and the impact set is a ranked list of potentially impacted
methods. We then define an {\it impactness} metric to indicate the
changeability of the original system to absorb the change proposal;
this metric represents to what degree the change proposal affects
the original system. Finally, we conducted case studies to
validate the effectiveness of the CIA technique and the
changeability assessment model.

Though we have shown the effectiveness of our CIA technique and the
changeability assessment model on some real programs, this does not
establish generality for other real environments, and we will
conduct experiments on larger and more complex programs
to evaluate the generality of our technique. In addition, we may further
investigate the selection of weights for the {\it impactness} formula to
assess more effectively and accurately the changeability of a system to absorb a change proposal. We would also like
to investigate which design metrics, e.g., {\it coupling between
object classes}, {\it lack of cohesion of methods}, and {\it depth of
inheritance tree}, can serve as good indicators of software
changeability in our changeability assessment model.


%\begin{thebibliography}{1}
\bibliographystyle{unsrt}
\bibliography{typeinst}

%\end{thebibliography}




% that's all folks
\end{document}
