\chapter{Refactoring}
\label{Refactoring}



According to \citet{fowler:2004}, refactoring is ``the process of changing a
software system in such a way that it does not alter the external behavior of the
code yet improves its internal structure. It is a disciplined way to clean up
code that minimizes the chances of introducing bugs. In essence when you refactor
you are improving the design of the code after it has been written''.

Refactoring helps to improve the design of software, which is the main goal of
this work; it also makes the code easier to understand and, as a consequence,
helps in finding bugs \cite{fowler:2004}.

\section{Refactoring Methods}

\subsection{Composing Methods}

A large part of the refactoring catalog of \citet{fowler:2004} is about composing
methods to package code properly.

The main methods used by the author are:
\begin{itemize}
  \item Extract Method;
  \item Inline Method;
  \item Inline Temp;
  \item Replace Temp with Query;
  \item Introduce Explaining Variable;
  \item Split Temporary Variable;
  \item Remove Assignments to Parameters;
  \item Replace Method with Method Object;
  \item Substitute Algorithm.
\end{itemize}

Most of the time the problems come from methods that are too
long. Long methods are troublesome because they often contain lots of
information, which gets buried by the complex logic that usually gets dragged in.
The key refactoring is \textbf{Extract Method}, which takes a clump of code and
turns it into its own method. \textbf{Inline Method} is essentially the opposite:
a method call is replaced by the body of the method. \citet{fowler:2004} needs
\textbf{Inline Method} when he has done multiple extractions and realizes that
some of the resulting methods are no longer pulling their weight, or when he
needs to reorganize the way he has broken down methods.

The biggest problem with \textbf{Extract Method} is dealing with local variables,
and temporary variables (temps) are one of the main sources of this issue. When he is working on a
method, \citet{fowler:2004} likes \textbf{Replace Temp with Query} to get rid of
any temporary variables that he can remove. If the temp is used for many things,
he uses \textbf{Split Temporary Variable} first to make the temp easier to
replace. Sometimes, however, the temporary variables are just too tangled to
replace. In that case he uses \textbf{Replace Method with Method Object}, which
allows him to break up even the most tangled method, at the cost of introducing
a new class for the job.

Parameters are less of a problem than temps, provided you don't assign to them.
If you do, you need \textbf{Remove Assignments to Parameters}. Once the method is
broken down, he can understand how it works much better. He may also find that the
algorithm can be improved to make it clearer. He then uses \textbf{Substitute
Algorithm} to introduce the clearer algorithm.
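
A minimal Python sketch (the class and the figures are hypothetical, not taken
from \citet{fowler:2004}) illustrates \textbf{Extract Method} and
\textbf{Replace Temp with Query} on a pricing routine:

```python
class Order:
    def __init__(self, quantity, item_price):
        self.quantity = quantity
        self.item_price = item_price

    # Before: one long method with a temporary variable.
    def price_before(self):
        base_price = self.quantity * self.item_price
        if base_price > 1000:
            return base_price * 0.95
        return base_price * 0.98

    # After: the temp becomes a query (Replace Temp with Query) and the
    # discount rule becomes its own method (Extract Method).
    def price(self):
        return self.base_price() * self.discount_factor()

    def base_price(self):
        return self.quantity * self.item_price

    def discount_factor(self):
        return 0.95 if self.base_price() > 1000 else 0.98
```

Both versions compute the same value; the refactored one exposes the
intermediate quantities as small, reusable methods.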

\subsection{Moving Features Between Objects}

One of the most fundamental decisions in object design is
deciding where to put responsibilities. Responsibility, in this context,
refers to how coherently the behavior of a method relates to its class.
To support moving features between objects, \citet{fowler:2004} 
offers the following methods:
\begin{itemize}
  \item Move Method;
  \item Move Field;
  \item Extract Class;
  \item Inline Class;
  \item Hide Delegate;
  \item Remove Middle Man;
  \item Introduce Foreign Method;
  \item Introduce Local Extension.
\end{itemize}  

\textbf{Move Method} and \textbf{Move Field} are used to move behavior around.
If both are needed, one can use \textbf{Move Field} first and then \textbf{Move Method}.
Often classes become swollen with too many responsibilities. In this case
\textbf{Extract Class} can be used to separate some of these
responsibilities. If a class is left with too little
responsibility, \textbf{Inline Class} can merge it into another class. If
another class is being used, it is often helpful to hide this fact with
\textbf{Hide Delegate}. Sometimes hiding the delegate class results in constantly
changing the owner's interface, in which case it is necessary to use \textbf{Remove
Middle Man}. The last two refactorings in this group, \textbf{Introduce Foreign
Method} and \textbf{Introduce Local Extension}, are special cases. They are
used when the source code of a class is not available and we want to move
responsibilities to this unchangeable class. If only one or two methods are
involved, we use \textbf{Introduce Foreign Method}; for more than that, we use
\textbf{Introduce Local Extension}.
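
As a hypothetical sketch (the classes are illustrative, not taken from
\citet{fowler:2004}), \textbf{Extract Class} can be pictured in Python as moving
telephone-number behavior out of a \texttt{Person} class:

```python
# The telephone-number data and behavior are extracted into their own class.
class TelephoneNumber:
    def __init__(self, area_code, number):
        self.area_code = area_code
        self.number = number

    def telephone_number(self):
        return f"({self.area_code}) {self.number}"


class Person:
    def __init__(self, name, area_code, number):
        self.name = name
        self._office_telephone = TelephoneNumber(area_code, number)

    # Person delegates to the extracted class; the delegate can later be
    # hidden or exposed with Hide Delegate / Remove Middle Man.
    def telephone_number(self):
        return self._office_telephone.telephone_number()
```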
  
\subsection{Organizing Data}  

\citet{fowler:2004} also presents several refactorings for data organization and manipulation:
\begin{itemize}
  \item Self Encapsulate Field;
  \item Replace Data Value with Object;
  \item Change Value to Reference;
  \item Change Reference to Value;
  \item Replace Array with Object;
  \item Duplicate Observed Data;
  \item Change Unidirectional Association to Bidirectional;
  \item Change Bidirectional Association to Unidirectional;
  \item Replace Magic Number with Symbolic Constant;
  \item Encapsulate Field;
  \item Encapsulate Collection;
  \item Replace Record with Data Class;
  \item Replace Type Code with Class;
  \item Replace Type Code with Subclasses;
  \item Replace Type Code with State/Strategy;
  \item Replace Subclass with Fields.
 \end{itemize}
  
Whether an object should access its own data directly or through accessors
has long been a matter of debate. Sometimes we do need the accessors,
and then we can introduce them with \textbf{Self Encapsulate Field}.
Direct access makes it simple to perform this refactoring when needed.
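
A small Python sketch (the classes are hypothetical) shows the idea: reading a
field through an accessor even inside its own class lets a subclass redefine
how the value is obtained:

```python
class IntRange:
    def __init__(self, low, high):
        self._low = low
        self._high = high

    # Self Encapsulate Field: the fields are read through properties,
    # even by the class's own methods.
    @property
    def low(self):
        return self._low

    @property
    def high(self):
        return self._high

    def includes(self, value):
        # Uses the accessors, not the raw fields, so a subclass can
        # override how the bounds are computed.
        return self.low <= value <= self.high


class CappedRange(IntRange):
    def __init__(self, low, high, cap):
        super().__init__(low, high)
        self._cap = cap

    @property
    def high(self):
        return min(self._cap, self._high)
```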

\textbf{Replace Data Value with Object} allows one to turn plain data
into articulate objects, and if these objects are instances that will
be needed in many parts of the program, \textbf{Change Value to
Reference} may be used to turn them into reference objects.

For arrays acting as data structures, it is possible to make them
clearer with \textbf{Replace Array with Object}. In all these cases the object
is the first step. The author indicates that the real advantage comes when
\textbf{Move Method} is used to add behavior to the new objects.

Magic numbers, or numbers with special meaning, have long been a problem.
We can use \textbf{Replace Magic Number with Symbolic
Constant} to get rid of magic numbers whenever we figure out what they are doing.
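
A minimal hypothetical sketch of \textbf{Replace Magic Number with Symbolic
Constant} in Python:

```python
# Before: return mass * 9.81 * height  -- 9.81 is a magic number.
GRAVITATIONAL_ACCELERATION = 9.81  # m/s^2, now named and documented


def potential_energy(mass, height):
    return mass * GRAVITATIONAL_ACCELERATION * height
```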

Links between objects can be one-way or two-way. Although one-way links 
are easier, it may be needed to use \textbf{Change Unidirectional Association to
Bidirectional} to support a new function. \textbf{Change Bidirectional 
Association to Unidirectional} removes unnecessary complexity when one no
longer needs the two-way link.

The author indicates that GUI classes often do business logic that they were not
supposed to do. To move the behavior into proper domain classes, one needs to have the
data in the domain class and support the GUI by using \textbf{Duplicate
Observed Data}. 

One of the key principles of object-oriented programming is encapsulation. If any
public data is exposed, one can create basic data manipulation methods for it
using \textbf{Encapsulate Field}. If that data is a collection, \textbf{Encapsulate
Collection} should be used instead, because collections have a special protocol. If it is an entire
record, one can use \textbf{Replace Record with Data Class}.

One form of data that requires particular treatment is the type code: a special
value that indicates something particular about a type of instance. If the codes
are for information and do not alter the behavior of the class, we can use
\textbf{Replace Type Code with Class}, which provides better type checking and
a platform for moving behavior later. If the behavior of a class is affected by a type code,
the use of \textbf{Replace Type Code with Subclasses} is recommended. As an alternative,
\textbf{Replace Type Code
with State/Strategy} can be used for flexibility.
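
A minimal Python sketch of \textbf{Replace Type Code with State/Strategy}
(the classes and the bonus rule are hypothetical, not from \citet{fowler:2004}):

```python
# The integer type code is replaced by strategy objects that carry the
# type-dependent behavior.
class EmployeeType:
    def pay_amount(self, monthly_salary):
        raise NotImplementedError


class Engineer(EmployeeType):
    def pay_amount(self, monthly_salary):
        return monthly_salary


class Manager(EmployeeType):
    def pay_amount(self, monthly_salary):
        return monthly_salary * 1.5  # hypothetical bonus rule


class Employee:
    def __init__(self, employee_type, monthly_salary):
        self._type = employee_type        # a strategy object, not a type code
        self._monthly_salary = monthly_salary

    def pay_amount(self):
        # No switch on a type code: the strategy answers for itself.
        return self._type.pay_amount(self._monthly_salary)
```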

\subsection{Simplifying Conditional Expressions}

Conditional logic may also be simplified in order to obtain better 
results on software quality metrics: 
\begin{itemize}
  \item Decompose Conditional;
  \item Consolidate Conditional Expression;
  \item Consolidate Duplicate Conditional Fragments;
  \item Remove Control Flag;
  \item Replace Nested Conditional with Guard Clauses;
  \item Replace Conditional with Polymorphism;
  \item Introduce Null Object;
  \item Introduce Assertion.
 \end{itemize} 

The core refactoring is \textbf{Decompose Conditional}, which entails breaking a conditional into pieces, separating the switching logic from the details of execution. 
We use \textbf{Consolidate Conditional Expression} when we have several tests,
all with the same effect, and \textbf{Consolidate Duplicate Conditional
Fragments} to remove duplication within the conditional code.
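
\textbf{Decompose Conditional} can be sketched in Python as follows (the
charging rules and constants are hypothetical):

```python
# Hypothetical seasonal-charging rules.
SUMMER_START, SUMMER_END = 6, 8   # months, inclusive
SUMMER_RATE, WINTER_RATE = 1.0, 2.0
WINTER_SERVICE_CHARGE = 5.0


def is_summer(month):
    return SUMMER_START <= month <= SUMMER_END


def summer_charge(quantity):
    return quantity * SUMMER_RATE


def winter_charge(quantity):
    return quantity * WINTER_RATE + WINTER_SERVICE_CHARGE


def charge(month, quantity):
    # The condition and both branches now read as sentences.
    if is_summer(month):
        return summer_charge(quantity)
    return winter_charge(quantity)
```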

\textbf{Replace Nested Conditional with Guard Clauses} is recommended to clarify special-case
conditionals, and \textbf{Remove Control Flag} to remove unwanted control flags.
According to the author, object-oriented programs often have less conditional behavior than procedural
programs because much of the conditional behavior is handled by polymorphism.
Polymorphism is better because conditional behavior is transparent to the caller,
and it is thus easier to extend the conditions. As a
result, object-oriented programs rarely have switch (case) statements. To support that,
\textbf{Replace Conditional with Polymorphism} may be used.
One of the most useful applications of polymorphism is
\textbf{Introduce Null Object}, which removes explicit checks for a null value.
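
A brief hypothetical Python sketch of \textbf{Introduce Null Object}:

```python
class Customer:
    def __init__(self, name):
        self._name = name

    def name(self):
        return self._name


class MissingCustomer(Customer):
    # The null object answers the same questions as a real customer,
    # with neutral defaults.
    def __init__(self):
        super().__init__("occupant")


def billing_name(customer):
    # No "if customer is None" check is needed: the null object
    # behaves like a customer.
    return customer.name()
```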
  
\subsection{Making Method Calls Simpler}  

Interfaces are one of the main characteristics of objects and,
according to \citet{fowler:2004}, designing good interfaces is a key skill
in developing good object-oriented software.

The main methods used by the author for making method calls simpler are:
\begin{itemize}
  \item Rename Method;
  \item Add Parameter;
  \item Remove Parameter;
  \item Separate Query from Modifier;
  \item Parameterize Method;
  \item Replace Parameter with Explicit Methods;
  \item Preserve Whole Object;
  \item Replace Parameter with Method;
  \item Introduce Parameter Object;
  \item Remove Setting Method;
  \item Hide Method;
  \item Replace Constructor with Factory Method;
  \item Encapsulate Downcast;
  \item Replace Error Code with Exception;
  \item Replace Exception with Test.
\end{itemize}
  
\citet{fowler:2004} considers naming a method well the simplest and most important
thing one can do to give it meaning, and \textbf{Rename Method} supports
renaming methods. Variables and classes should also be renamed when appropriate.
Due to the simplicity of these text replacements, the author does not suggest
any extra refactorings for them, but all well-known refactoring tools have
facilities for that.
Parameters are strongly related to interfaces; \textbf{Add Parameter} and
\textbf{Remove Parameter} are common refactorings.

As a solution to long parameter lists, \textbf{Preserve Whole Object} can be used to
reduce several values to a single object, which can be created using \textbf{Introduce Parameter Object}.
Parameters can also be eliminated when the data can be obtained from another object to which the
method already has access, using \textbf{Replace Parameter with Method}. When parameters are used
to determine conditional behavior, one can use \textbf{Replace Parameter with
Explicit Methods}. Similar methods may be combined by adding a parameter
with \textbf{Parameterize Method}.
To avoid problems when reducing parameter lists in concurrent programming, the
author usually replaces them with immutable objects; otherwise he recommends being cautious
about this group of refactorings, clearly separating methods that change state (modifiers) from
those that query state (queries). Whenever the two get combined, \textbf{Separate Query from Modifier}
may be used to split them.
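
\textbf{Separate Query from Modifier} can be sketched as follows (the account
class is hypothetical): before the refactoring a single method both changed
the balance and reported it; afterwards the query and the modifier are distinct:

```python
class Account:
    def __init__(self, balance):
        self._balance = balance

    # Query: no side effects, can be called freely.
    def balance(self):
        return self._balance

    # Modifier: changes state, returns nothing.
    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount
```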

According to \citet{fowler:2004}, ``good interfaces show only what they have to and no more''. Often one needs to make things visible for a while; \textbf{Hide Method} and \textbf{Remove Setting Method} may be used later to hide them again.

Constructors are another important feature of OO programming, but because they
force the caller to know the concrete class of the object being created, they
work against the flexibility of the OO paradigm.
This need to know can be removed with \textbf{Replace Constructor
with Factory Method}.
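
A hypothetical Python sketch of \textbf{Replace Constructor with Factory
Method}, in which the caller no longer names the concrete class:

```python
class Employee:
    def type_name(self):
        return "employee"


class Engineer(Employee):
    def type_name(self):
        return "engineer"


class Salesperson(Employee):
    def type_name(self):
        return "salesperson"


_EMPLOYEE_TYPES = {"engineer": Engineer, "salesperson": Salesperson}


def create_employee(type_name):
    # The factory hides which concrete class is instantiated;
    # unknown names fall back to the base class.
    return _EMPLOYEE_TYPES.get(type_name, Employee)()
```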

When aiming refactoring on casting, \citet{fowler:2004} suggests that, ''as much as 
possible try to avoid making the user of a class do downcasting if you
can contain it elsewhere by using \textbf{Encapsulate Downcast}''. 

Most OO programming languages have an exception-handling mechanism that makes error handling
easier but, according to the author, programmers who are not used to it often
use error codes to signal trouble. \textbf{Replace Error Code with Exception} is
recommended to adopt the exception features, while \textbf{Replace Exception with Test}
applies when the caller can reasonably test for the condition beforehand.
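
Both refactorings can be sketched in Python (the account class and the
helper are hypothetical):

```python
# Replace Error Code with Exception: withdraw used to return -1 on failure;
# now it raises a dedicated exception.
class InsufficientFundsError(Exception):
    pass


class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise InsufficientFundsError(amount)
        self.balance -= amount


def try_withdraw(account, amount):
    # Replace Exception with Test goes the other way: when the caller can
    # check the condition first, a plain test replaces the try/except.
    if amount <= account.balance:
        account.withdraw(amount)
        return True
    return False
```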
  
    
\subsection{Dealing with Generalization}

Generalization produces its own batch of refactorings, mostly dealing with moving methods
around a hierarchy of inheritance: 

\begin{itemize}
  \item Pull Up Field;
  \item Pull Up Method;
  \item Pull Up Constructor Body;
  \item Push Down Method;
  \item Push Down Field;
  \item Extract Subclass;
  \item Extract Superclass;
  \item Extract Interface;
  \item Collapse Hierarchy;
  \item Form Template Method;
  \item Replace Inheritance with Delegation;
  \item Replace Delegation with Inheritance.
\end{itemize}

\textbf{Pull Up Field} and \textbf{Pull Up
Method} both promote function up a hierarchy, and \textbf{Push Down Method} and
\textbf{Push Down Field} push function downward. \textbf{Pull Up Constructor Body} 
can be used to promote constructors. Rather than pushing down a constructor, 
it is often useful to use \textbf{Replace Constructor with Factory Method}. 
When methods have a similar outline but vary in details, \textbf{Form Template
Method} may be used to separate the differences from the similarities.
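
\textbf{Form Template Method} can be sketched in Python as follows (the report
classes are hypothetical): the common outline lives in the superclass, while
subclasses fill in the varying steps:

```python
class Statement:
    # The template method: the outline is fixed here.
    def render(self, customer, total):
        return self.header(customer) + self.footer(total)

    def header(self, customer):
        raise NotImplementedError

    def footer(self, total):
        raise NotImplementedError


class TextStatement(Statement):
    def header(self, customer):
        return f"Statement for {customer}\n"

    def footer(self, total):
        return f"Total: {total}\n"


class HtmlStatement(Statement):
    def header(self, customer):
        return f"<h1>Statement for {customer}</h1>"

    def footer(self, total):
        return f"<p>Total: {total}</p>"
```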

\textbf{Extract Subclass}, \textbf{Extract Superclass}, and
\textbf{Extract Interface} allow changing the hierarchy by creating new classes,
moving function around the hierarchy by forming new elements out of various
points. \textbf{Extract Interface} is used to
mark a small amount of function for the type system.

Unnecessary classes in the hierarchy can be removed using \textbf{Collapse Hierarchy}.

Delegation may be an alternative when inheritance is not the best way of
handling a situation. For these changes, the author offers the \textbf{Replace
Inheritance with Delegation} and \textbf{Replace Delegation with
Inheritance} methods.
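
\textbf{Replace Inheritance with Delegation} can be sketched in Python (the
stack class is hypothetical): instead of inheriting from \texttt{list} and
exposing its whole interface, the stack holds a list and delegates only what
it needs:

```python
class Stack:
    def __init__(self):
        self._items = []          # the former superclass becomes a field

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def __len__(self):
        return len(self._items)
```

Callers can no longer bypass the stack discipline with list operations such
as \texttt{insert}, which the inheritance version would have exposed.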


\section{Refactoring and Performance}

The effect refactoring has on the performance of a program is a common concern. 

Changes made to make the software easier to understand will often cause the program
to run more slowly, which is an important issue. Performance cannot be ignored in favor
of design purity or in hopes of faster hardware.
``Refactoring certainly will make software go more slowly, but it also makes the software
more amenable to performance tuning. The secret to fast software is to write tunable software first and then to tune it for sufficient speed.'' \cite{fowler:2004}

\subsection{How Object Orientation impacts Performance}   

This subsection demonstrates the impact of object orientation in software
performance according to \citet{booth:2006} whose focus is on HPC software design.

\subsubsection{Memory issues}
According to \citet{booth:2006}, the layout of data in the memory space of a program
can have a big impact on the program's performance, especially for HPC (High Performance
Computing) applications. Almost all OO languages will implement an instance of a
user defined type as a contiguous region of memory. This makes it very easy for
the compiler to allocate space for user defined types. Even though the
type is implemented as a contiguous region of memory it is always possible to
have member variables that reference dynamically allocated data.

Either way, when designing OO code, the choice of types tends to constrain
the memory layout of the application data \cite{booth:2006}. Some applications lead
naturally to OO modeling, while others have a more procedural character that makes
the use of arrays more natural.

The object oriented approach results in good data locality, leading to good cache use, 
because all of the data associated with an application element is packed into a
contiguous region of memory. On the other hand, if a major computational loop only uses a
subset of the component data then the unwanted components are still likely to be 
loaded into the cache because they share cache lines with the required data. 
This can waste a fraction of the available memory bandwidth and reduce the
performance of code sections where cache misses occur. It will also reduce the
size of the problem that can remain resident in the cache during the computational
loop \cite{booth:2006}.

This high degree of memory locality may also result in cache misses that are
very closely clustered in time, which on some architectures can result
in much poorer performance than a traditional array-based code \cite{booth:2006}.

\subsubsection{Code structure issues}

OO programming style also impacts the structure of the executable code. 
According to \citet{booth:2006}, most naive compilation strategies
for OO languages will convert each different operation on a type into a code
block roughly equivalent to a subroutine or function in a procedural language.
However, the OO programming style encourages the programmer to keep the definition
of each operation relatively simple and to build complexity out of multiple
levels of types and operations on those types. 
As a result, naive compilation strategies will generate large numbers of
subroutine calls. The consequences are a high cost for processing the subroutine calls,
the inhibition of many code optimisations that the compiler would otherwise perform, and
much greater demands on the capabilities of the compiler than conventional
procedural code. The compiler not only has to be very good at subroutine inlining
but also capable of performing additional optimisation steps after the inlining
has taken place.

The memory layout of a type can be tuned to some extent by changing the order of
the component data or by storing some of the data by reference. This makes it
possible to arrange that the data required and not required by the most critical
loops live on separate cache lines, and it may succeed depending on the nature
of the loop and the target cache architecture.

\subsubsection{Pointers}

Pointers and reference types are widely used in OO programming to store
the addresses of other objects and avoid having to copy the objects themselves.
\citet{booth:2006} affirms that the use of pointers and references can have a
detrimental effect on compiler optimisation, because in many cases the compiler
cannot determine at compilation time whether two pointers may reference the same
location or whether a subroutine call may have side effects on the region of
memory referenced by a pointer. In these cases the compiler produces additional
read and write instructions to ensure that the values stored in CPU registers
remain consistent with the values in memory. This is an important issue for
many OO languages that make heavy use of pointers and references.

Therefore, pointers and reference types should be used with parsimony if
there is a concern about software performance.

\subsubsection{High and low level types}

\citet{booth:2006} affirms that for a high-level type there is little performance
disadvantage to the use of OO. On the other hand, very low-level types like vectors
play an integral part in the most time-critical
parts of the code and have to function efficiently. In particular, objects of this
type may be created, destroyed, and copied within the code hotspots. For these
performance-critical low-level types only those language features that do not
impact performance can be used \cite{booth:2006}.

However, according to the author, these low-level types are usually very simple
and can even be replaced by explicit code using intrinsic types
without too much damage to the code.
Higher-level objects, in contrast, can have quite complex behaviour.
When such a class takes part in the code hotspots, it
may be necessary to modify the type interface, and do some damage to the
encapsulation of the class type, to improve the performance of the code
\cite{booth:2006}.

\section{Related Work}

\citet{stroggylos:2007} analyzed source code version control system logs of
popular open source software systems to detect changes marked as refactorings
and examine how the software metrics are affected by this process, in order to
evaluate whether refactoring is effectively used as a means to improve software
quality within the open source community.

The results indicate a significant change of certain metrics for the worse.
Specifically, it seems that refactoring caused a non-trivial increase in metrics
such as LCOM (Lack of Cohesion of Methods), Ca (Afferent Couplings), and RFC
(Response for Class), indicating that it caused classes to become less coherent
as more responsibilities are assigned to them. The same principles seem to apply
in procedural systems as well, in which case the effect is captured as an
increase in complexity metrics. Since it is a common conjecture that the
metrics used can actually indicate a system's quality, these results suggest
that either the refactoring process does not always improve the quality of a system
in a measurable way or that developers still have not managed to use refactoring
effectively as a means to improve software quality. To further validate these
results, more systems and even more revisions must be examined, because the
number examined so far is relatively small. The authors suggest that, by using a
refactoring detection technique to identify the refactorings performed each time,
one could also correlate each kind of refactoring to a specific trend in the
change of various metrics and thus deduce which ones are more beneficial to the
overall quality of a system.

Another good related work is that of \citet{demeyer:2004}, who analyze
how refactoring methods manipulate coupling/cohesion characteristics and how to
identify refactoring opportunities that improve these characteristics. They
also provide practical guidelines for the optimal usage of refactoring in a
software maintenance process.

In the end, they identified specific applications of Move Method, Replace
Method with Method Object, Replace Data Value with Object, and Extract Class to
be beneficial. However, they also found that the guidelines they created can
be insufficiently specific. This was the case for a specific application of
Extract Method, which was harmful to cohesion.

They demonstrated in the conclusion that by exploiting the results from
coupling/cohesion impact analysis, it is possible to achieve quality
improvements with restricted refactoring efforts and that this effort is
restricted to the analysis and resolution of a limited set of refactoring
opportunities which are known to improve the associated quality attributes.

The related work of \citet{stroggylos:2007} shows that refactoring may not
improve software quality and that developers are probably not viewing
refactoring as an opportunity to improve it. In the present work,
the refactoring methods applied must improve the metrics; if they do not,
the change is rolled back.

The work of \citet{demeyer:2004} gives a good reference for improving these
quality metrics because of the guidelines and profiles it describes. These
guidelines can help make the refactoring process more accurate.

