\documentclass[a4paper,10pt]{article}
\usepackage[utf8x]{inputenc}
\usepackage[cm]{fullpage}
\usepackage{graphicx}
\usepackage{listings}
\lstset{language=Java, numbers=left, stepnumber=1, frame=single,}

%opening
\title{Acme Telecom Project Report \\
475 Advanced Topics in Software Engineering}
\author{Eli Gutin \texttt{eg08}, \\ Michael Kwan \texttt{mk08}, \\ John Wong \texttt{jw808} }

\begin{document}

\maketitle

\section{Introduction to our approach}
In order to tackle the task of modifying the legacy code, we used a number of techniques presented in the lectures as well as some drawn from our own experience. Our first priority was to understand, at a high level, the purpose of the application, so we wrote a \texttt{main} method to run some of it. This involved logging some phone calls with \texttt{Thread.sleep} in between, then invoking the \texttt{createCustomerBills} method and observing what was printed to the console. The next step was to explore the code more deeply by running the executable under the debugger. In particular, this helped us to understand the entire ``workflow'' of the system, from the moment a call is recorded to how calls are later retrieved for the billing process.
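To illustrate, the exploratory \texttt{main} looked roughly like the sketch below. Since the legacy classes are not reproduced here, \texttt{BillingSystemStub} is a stand-in we define ourselves, and the \texttt{callInitiated}/\texttt{callCompleted} method names are illustrative rather than the exact legacy API.

\begin{lstlisting}[caption=Sketch of the exploratory main method]
import java.util.ArrayList;
import java.util.List;

// Stand-in for the legacy BillingSystem, so this sketch compiles alone.
class BillingSystemStub {
    private final List<String> events = new ArrayList<String>();

    void callInitiated(String caller, String callee) {
        events.add("start " + caller + " -> " + callee);
    }

    void callCompleted(String caller, String callee) {
        events.add("end   " + caller + " -> " + callee);
    }

    void createCustomerBills() {
        // The real system prints HTML bills; we just dump the raw events.
        for (String event : events) System.out.println(event);
    }

    int eventCount() { return events.size(); }
}

public class ExploreBilling {
    public static void main(String[] args) throws InterruptedException {
        BillingSystemStub billingSystem = new BillingSystemStub();
        billingSystem.callInitiated("447722113434", "447223432532");
        Thread.sleep(100); // pretend the call lasts a little while
        billingSystem.callCompleted("447722113434", "447223432532");
        billingSystem.createCustomerBills();
    }
}
\end{lstlisting}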

Upon noticing the complete lack of tests, we started the first main phase of our work. Using our preliminary understanding of the functionality, we introduced acceptance tests using the FIT framework, which we explain further in section 2. At the same time, some of us started writing unit tests, which we address in more detail in section 3. We were not able to write these fixtures without refactoring the code, which is covered in section 4. The second phase began with changing the tests so that they failed under the new requirement, namely that customers are charged the peak rate only for the portion of a call that falls within peak hours. We then proceeded to implement the change, which is discussed in section 5. Along the way we carried out some more general restructuring and improvement of the code, including the introduction of a DSL (domain-specific language), which is explained in section 6. Finally, in the last section we outline how we would plan to keep up with future changes and requirements.
\section{System Tests}
We assessed several potential options for writing acceptance tests for the system, including Cucumber and Fit. Ultimately, we opted to use a library called reFit,
mainly because it supports integration with the Spring framework, but also because of prior experience with Fit from the tutorial in week 6. Fit itself does not
innately mesh well with dependency injection, but reFit enhances the original library, providing special runners and loaders that allow Spring to inject beans from the
application context into fixtures.

From a higher level view, our aim in writing acceptance tests was to gain the ability to perform end-to-end tests on the system. Primarily, this safeguards against
breaking existing functionality but also gives us confidence in making future changes (for example, changes to the system to fit the new specification).

\begin{quote}
 \section*{Dependency Injection}

We used the Spring framework (\texttt{http://www.springsource.org/}) for our IoC container. It proved useful
because it separated the application configuration from the code and kept it in one place. This way our objects contained only business logic and became decoupled and easier to test. An example of this was the static reference to \texttt{HtmlPrinter} inside \texttt{BillingSystem}, which forced the latter to have intimate knowledge of a non-business technicality, namely the use of HTML. We describe this particular refactoring, among others, in more detail in section 4. Another benefit of the dependency injection approach becomes more apparent when you require different setups for different environments, i.e. development, QA or production, and you need to wire the components together in slightly different ways. We preferred constructor injection over setter injection for clarity and correctness: the constructor signature clearly specifies everything needed to instantiate a valid object. For configuring the container, we used XML instead of annotations because this enabled integration with reFit and the Fit framework.

In our particular case, the frontend to the application was the \texttt{BillingSystem} singleton bean which
was instantiated with \texttt{CallLogger}, \texttt{BillGenerator} , \texttt{CentralCustomerDatabase} and \texttt{CentralTariffDatabase} beans.
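As a minimal sketch of the pattern (with the collaborator types reduced to empty stand-in interfaces for brevity), constructor injection makes the requirements of a valid \texttt{BillingSystem} explicit in its signature:

\begin{lstlisting}[caption=Constructor injection sketch]
// Stand-ins for CallLogger, BillGenerator and the two central databases.
interface CallLog {}
interface Bills {}
interface CustomerDb {}
interface TariffDb {}

public class BillingSystemSketch {
    private final CallLog callLog;
    private final Bills bills;
    private final CustomerDb customers;
    private final TariffDb tariffs;

    // Everything needed to build a valid instance is visible here;
    // the container (or a test) supplies the collaborators.
    public BillingSystemSketch(CallLog callLog, Bills bills,
                               CustomerDb customers, TariffDb tariffs) {
        if (callLog == null || bills == null
                || customers == null || tariffs == null) {
            throw new IllegalArgumentException("all collaborators are required");
        }
        this.callLog = callLog;
        this.bills = bills;
        this.customers = customers;
        this.tariffs = tariffs;
    }
}
\end{lstlisting}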
\end{quote}

The acceptance tests are about verifying that the system meets the specification rather than testing the aesthetic output of the real system. This means we are not
necessarily interested in testing what is displayed by \texttt{BillGenerator} and \texttt{HtmlPrinter}. Instead, we are more interested in testing the underlying data sources
they draw from, and whether those sources hold the billing information we expect them to. Given this, we created fakes for \texttt{BillGenerator}
and \texttt{HtmlPrinter}, which extend \texttt{BillGenerator} and implement \texttt{Printer} respectively. When using reFit, we extend \texttt{RowFixture} to test result
tables from queries. Without going into too much detail about the implementation of a \texttt{RowFixture}: subclasses essentially need to override \texttt{getTargetClass},
which should return the class that table rows are mapped to, and \texttt{query}, which returns an array of objects of the type returned from
\texttt{getTargetClass}. This means our data is more easily testable if it can be put into rows of a table. The system initially prints out data in the following form:
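A sketch of such a fixture is shown below. To keep the snippet self-contained we declare a minimal stand-in for \texttt{fit.RowFixture}; the real fixture extends the class from the Fit library, and the public fields of the row class are matched against the column headings.

\begin{lstlisting}[caption=Sketch of a RowFixture subclass]
// Minimal stand-in for fit.RowFixture so the sketch compiles on its own.
abstract class RowFixture {
    public abstract Class<?> getTargetClass();
    public abstract Object[] query() throws Exception;
}

// One instance per expected row; Fit matches public fields to columns.
class BillRow {
    public String name;
    public String number;
    public String cost;

    BillRow(String name, String number, String cost) {
        this.name = name;
        this.number = number;
        this.cost = cost;
    }
}

public class TheBillsShowSketch extends RowFixture {
    @Override
    public Class<?> getTargetClass() {
        return BillRow.class; // rows of the table map onto BillRow
    }

    @Override
    public Object[] query() {
        // In the real fixture these rows come from the faked BillGenerator.
        return new Object[] {
            new BillRow("Fred Bloggs", "447711232343", "7.20")
        };
    }
}
\end{lstlisting}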

\begin{quote}
 \section*{Acme Telecom}

 \subsection*{Fred Bloggs/447711232343 - Price Plan: Standard}

 \begin{tabular}{ | l | l | l | l | }
  \hline
  \textbf{Time} & \textbf{Number} & \textbf{Duration} & \textbf{Cost} \\
  \hline
  1/1/70 1:00 AM & 447223432532 & 5:00 & 0.60 \\
  \hline
 \end{tabular}

\textbf{Total: 0.60}
\end{quote}

We changed our reFit expected rows to be of this format instead:

\begin{quote}
 \begin{tabular}{ | l | l | l | l | l | l | l | l | }
  \hline
  \multicolumn{8}{|l|}{fit.acmetelecom.TheBillsShow} \\
  \hline
  Name & Number & PricePlan & Time & Callee Number & Duration & Cost & Total \\
  \hline
  Fred Bloggs & 447711232343 & Standard & 2/12/11 5:00am & 447223432532 & 60:00 & 7.20 & 7.20 \\
  \hline
 \end{tabular}
\end{quote}

The new format compresses the same fields to a single row (although in this case, the input data is different). Multiple call records are placed on different rows with the
same \textit{total}.

\begin{quote}
 \section*{Hooking up reFit with Spring}

After getting Spring working with the \texttt{SpringJUnit4ClassRunner} custom runner, we needed to get it hooked up to our fit test suite. Our fit suite is runnable either via a JUnit wrapper or programmatically via instantiation of a \texttt{TreeRunner}. reFit allows integration with Spring in both cases, with an \texttt{@FitConfiguration} annotation for JUnit or by installing a non-standard \texttt{FixtureLoader} for the \texttt{TreeRunner}. The \texttt{TreeRunner} allows an interface to be passed in with a \texttt{beforeTest} method which is executed before each acceptance test. We placed our code to load and initialise the faked out \texttt{BillingSystem} in this method, fetching the beans via \texttt{BeanFactory.getBean}. Even though this allows us to easily configure the implementation of a given dependency through the XML, dependency injection through this method is a compromise and less than optimal. By using \texttt{ApplicationContext.getBean}, our code is less complex and more easily debuggable, but we rely explicitly and directly on Spring to provide the dependency and break \textit{Inversion of Control}. Ideally, we would use the `magic' of Spring to perform the wiring up through annotations without the runner knowing or caring how it gets the \texttt{BillingSystem} object, but this is a lot more complex in the context of our runner which needs to perform the initialisations multiple times whilst running a suite. A feasible design alternative would have been to split the initialisation to happen within a fixture.
\end{quote}

\pagebreak
An example of the final format of a typical fit acceptance test in our system is as follows:

\begin{samepage}
\begin{quote}
 \section*{BUSINESS CALL}

 \begin{tabular}{ | l | }
  \hline
  fit.acmetelecom.GivenBillingSystemInitialised \\
  \hline
 \end{tabular}

 \begin{tabular}{ | l | l | l | }
  \hline
  \multicolumn{3}{|l|}{fit.acmetelecom.GivenTheCustomers} \\
  \hline
  Name & Number & PricePlan \\
  \hline
  John Smith & 447722113434 & Business \\
  \hline
  Jane Dixon & 447223432532 & Leisure \\
  \hline
 \end{tabular}

 \begin{tabular}{ | l | l | l | l | l | }
  \hline
  \multicolumn{5}{|l|}{fit.acmetelecom.LogStartedCalls} \\
  \hline
  FromName & FromNumber & ToName & ToNumber & Started \\
  \hline
  John Smith & 447722113434 & Jane Dixon & 447223432532 & 2/12/11 6:40am \\
  \hline
 \end{tabular}

 \begin{tabular}{ | l | l | l | l | l | }
  \hline
  \multicolumn{5}{|l|}{fit.acmetelecom.LogEndedCalls} \\
  \hline
  FromName & FromNumber & ToName & ToNumber & Ended \\
  \hline
  John Smith & 447722113434 & Jane Dixon & 447223432532 & 2/12/11 7:30am \\
  \hline
 \end{tabular}

 \begin{tabular}{ | l | }
  \hline
  fit.acmetelecom.CreateCustomerBills \\
  \hline
 \end{tabular}

 \begin{tabular}{ | l | l | l | l | l | l | l | l | }
  \hline
  \multicolumn{8}{|l|}{fit.acmetelecom.TheBillsShow} \\
  \hline
  Name & Number & PricePlan & Time & Callee Number & Duration & Cost & Total \\
  \hline
  John Smith & 447722113434 & Business & 2/12/11 6:40am & 447223432532 & 50:00 & 9.00 & 9.00 \\
  \hline
 \end{tabular}
\end{quote}
\end{samepage}

The acceptance tests first needed to pass for the existing system, and then, in a TDD style, to fail for the as-yet unimplemented `overlapping' specification changes.
The next step was to determine the on-peak/off-peak prices for each tariff. To do this, we simply ran the acceptance tests for on-peak/off-peak times and observed the reports
(which of course failed). We found the price for 60 minutes empirically and then verified our findings by examining the bytecode in \texttt{Tariff.class}:

\begin{tabular}{ | l | l | l | }
 \hline
 \textbf{Peak Tariff} & \textbf{60 minutes} & \textbf{Bytecode Values} \\
 \hline
 \textbf{Standard} & 18.00 & 0.5 \\
 \hline
 \textbf{Business} & 10.80 & 0.3 \\
 \hline
 \textbf{Leisure} & 28.80 & 0.8 \\
 \hline
 \textbf{Off-Peak Tariff} & \textbf{60 minutes} & \textbf{Bytecode Values} \\
 \hline
 \textbf{Standard} & 7.20 & 0.2 \\
 \hline
 \textbf{Business} & 10.80 & 0.3 \\
 \hline
 \textbf{Leisure} & 3.60 & 0.1 \\
 \hline
\end{tabular}
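These figures are consistent with interpreting the bytecode values as per-second rates in pence (an assumption on our part): 60 minutes at the Standard peak rate gives $0.5 \times 3600 = 1800$ pence, i.e.\ 18.00. The same cross-check in code:

\begin{lstlisting}[caption=Cross-checking the tariff table]
import java.math.BigDecimal;
import java.math.RoundingMode;

public class TariffCheck {
    // Converts a per-second rate in pence into a cost in pounds.
    static BigDecimal costInPounds(BigDecimal penceRatePerSecond, long seconds) {
        return penceRatePerSecond
                .multiply(BigDecimal.valueOf(seconds))
                .divide(BigDecimal.valueOf(100), 2, RoundingMode.HALF_UP);
    }

    static String check(String penceRate, long seconds) {
        return costInPounds(new BigDecimal(penceRate), seconds).toPlainString();
    }

    public static void main(String[] args) {
        System.out.println(check("0.5", 3600)); // Standard peak, 60 minutes
        System.out.println(check("0.2", 3600)); // Standard off-peak, 60 minutes
    }
}
\end{lstlisting}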

After getting the tariff values, we tried to add acceptance tests for as many `user stories' as possible:

\includegraphics{Failing-Acceptance-Tests.png}

\section{Unit Tests}
We introduced unit tests in parallel with acceptance tests in order to gain as much coverage as possible early on. We considered this introduction of unit tests highly important even though it is not something that is immediately noticeable to the customer. That is to say, with FIT acceptance tests you have something tangible to show to and discuss with the client. While it is possible, through the use of a fluent API, to achieve something similar with unit tests, they are still typically less understandable to non-developers and concentrate on low-level implementation details. However, if we had acceptance tests alone then we would lack the benefits of unit tests, which let us zero in on one specific object and test its behaviour, and its behaviour alone, over many edge cases. Unit tests, and TDD in general, allow for rapid design and verification of code and give developers confidence in making changes. They are also more robust than acceptance tests in the sense that they will not all break due to one small bug restricted to one place.

\begin{quote}
\section*{The Setup}
The frameworks we used were JUnit and Mockito for the collaborator mocking. We required a mocking framework in order to be able to test each class in complete isolation. Our choice of Mockito specifically was due to our experience with it and its fluent API.

We placed the unit test classes in a separate source folder but in the same package as the classes under test.
This was a convention we decided to adopt because it ensured that everything was organised logically and production and test code were never mixed.
\end{quote}

We won't enumerate all the test cases we produced, since there was one for every object in the system. Instead, we will illustrate our approach with arguably one of the most difficult classes to test (at least initially): the \texttt{BillingSystem}. To start with, we wrote the basic outline of the fixture with a field for a \texttt{BillingSystem}. There was a method annotated with \texttt{@Before} and another with \texttt{@Test} which would, respectively, set up each test case with a new instance of a \texttt{BillingSystem} and then run and validate some methods on it.

Our intention in this case was to log the start and end of a call from a customer called \texttt{tim} to \texttt{bob}. This seemingly simple scenario presented a number of obstacles to testing. Firstly, every dependency of the \texttt{BillingSystem}, such as the \texttt{HtmlPrinter}, \texttt{CentralCustomerDatabase} and so on, was buried deep within it. We took immediate action by making these constructor arguments and moving their instantiation to a higher level (which ultimately would be the container or the test fixture's setup code). Back in the test code, we injected mocked instances of the delegates, which we also set up to return fixed values for testing. After running the \texttt{createCustomerBills} method, we verified that the \texttt{BillGenerator} delegate was being called with the correct parameters and the correct number of times (exactly once here).

The purpose of this first test was more of a sanity check to ensure that our test setup was working correctly.
Following on from that, we introduced more elaborate cases so that we could test for edge conditions and really make sure we didn't break anything in the future. One such edge case was to make sure that the \texttt{BillingSystem} would not confuse the order of calls, and that it would handle the case where a call had started but not yet completed when the \texttt{createCustomerBills} method was called.

\begin{lstlisting}[caption=Set up method in BillingSystemTest.java]
@Before
public void before() {
    MockitoAnnotations.initMocks(this);
    tim = aCustomerNamed("Tim").withPhoneNo("+4413345").withThePricePlan("Business");
    bob = aCustomerNamed("Bob").withPhoneNo("+4413645").withThePricePlan("Business");
    when(tariffDb.tarriffFor(tim)).thenReturn(tariff);
    when(customerDb.getCustomers()).thenReturn(Arrays.asList(tim, bob));
    billSys = new BillingSystem(callLogger, billGenerator, tariffDb, customerDb);
}
\end{lstlisting}
\begin{quote}
 \section*{Fruits of our labour}
We measured the code coverage from our combined acceptance and unit tests using the EclEmma plugin for Eclipse \texttt{http://www.eclemma.org/}. Currently this stands at roughly 97\% branch and line coverage.
% 97% code coverage
\end{quote}
\section{Code Refactoring}
We are going to list some of the major changes we made to the code to make it easier to develop.
% Feel free to add more! This is by no means an exhaustive list!
\begin{enumerate}
\item{We replaced every use of the singleton pattern via a \texttt{getInstance} method with a field that was initialized through constructor injection. We also replaced every occurrence of a delegate object being instantiated with the \texttt{new} keyword (e.g. \texttt{DaytimePeakPeriod} inside \texttt{BillingSystem}) with a constructor-injected dependency. The reason for this was to make the code easier to test and less coupled to specific implementations. The wiring of objects was handled by the DI framework, as discussed earlier.}
\item{We found that the \texttt{BillingSystem} was doing a lot and behaving as a `God' class. To make
it easier to manage and test, we decided to delegate some of the functions it performed to other objects.
To this end we wrote a \texttt{CallLogger} whose job was to retrieve all calls for a specific customer.

With the \texttt{CallLogger} we went even further. The method for getting all the calls made by a customer
iterated through a list of \texttt{CallEvent}s. For each one it would use \texttt{instanceof} to determine whether it was a start or an end event. Apart from making the code procedural (instead of object-oriented), this was not as efficient as it could be. To fix this ``code smell'' we introduced two \texttt{Map}s: one mapping a customer to the calls they started and another to the calls they finished. We then used method overloading to log either a \texttt{CallStart} or a \texttt{CallEnd} and add it to the first or second map respectively. Retrieval complexity is now also $O(1)$ instead of $O(n)$.}
\item{A major problem we found in the code was heavy use of primitives, especially strings.
In particular, the \texttt{CallEvent} constructor was taking customer telephone numbers as \texttt{String}s. This is problematic because one could, for example, pass in a \texttt{String} which does not correspond to any customer's number and thereby cause a bug. To prevent this from happening we modified the constructor to take \texttt{Customer} objects instead of strings.}
\item{Using JDepend, we found a cyclic dependency between \texttt{BillingSystem} and \texttt{BillGenerator}: the former called methods on the latter, while the latter knew about \texttt{LineItem} objects that were defined in the former. To fix this we extracted the \texttt{LineItem} class from inside the \texttt{BillingSystem}. We also felt it did not make sense for it to live there, as it should not be considered a part of the \texttt{BillingSystem}.}
\item{We noticed that the \texttt{BillingSystem} was calculating the total bill and sending it to the \texttt{BillGenerator} along with each individual item in the bill. With the expert principle in mind, we decided it would make far more sense and would be cleaner if the \texttt{BillingSystem} only sent the list of items and let the \texttt{BillGenerator} sum up the total for itself. This also helped to reduce the amount of work that the \texttt{BillingSystem} was doing.}
% added ant build to run all junit and acceptance tests
\end{enumerate}
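The two-\texttt{Map} \texttt{CallLogger} described in item 2 can be sketched as follows. For brevity the event classes are reduced to stand-ins keyed by a phone-number string; the real code keys on \texttt{Customer} objects.

\begin{lstlisting}[caption=Sketch of the two-map CallLogger]
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class CallStart {
    final String caller;
    CallStart(String caller) { this.caller = caller; }
}

class CallEnd {
    final String caller;
    CallEnd(String caller) { this.caller = caller; }
}

public class CallLoggerSketch {
    private final Map<String, List<CallStart>> starts =
            new HashMap<String, List<CallStart>>();
    private final Map<String, List<CallEnd>> ends =
            new HashMap<String, List<CallEnd>>();

    // Overloading routes each event type into its own map,
    // so no instanceof checks are needed on retrieval.
    public void log(CallStart event) {
        starts.computeIfAbsent(event.caller, k -> new ArrayList<>()).add(event);
    }

    public void log(CallEnd event) {
        ends.computeIfAbsent(event.caller, k -> new ArrayList<>()).add(event);
    }

    // O(1) map lookups instead of an O(n) scan over every event.
    public List<CallStart> startsFor(String customer) {
        return starts.getOrDefault(customer, Collections.<CallStart>emptyList());
    }

    public List<CallEnd> endsFor(String customer) {
        return ends.getOrDefault(customer, Collections.<CallEnd>emptyList());
    }
}
\end{lstlisting}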
\section{Changing the Code}
The deprecation of many of the methods in \texttt{java.util.Date} makes the class cumbersome to use, requiring heavy collaboration with \texttt{java.util.Calendar}. As recommended by the specification, \texttt{org.joda.time.DateTime} is far better suited to our task, allowing us to fetch useful information through straightforward field accessors. We were easily able to incorporate \texttt{asTime} as part of our DSL, which converts a \texttt{String} to a \texttt{DateTime} object. This method was imported statically and used extensively around the code, abstracting away the specific date library we used. As well as making it easy to extract different parts of a timestamp, Joda-Time allows us to easily stringify timestamps into the full date format required by the original specification (since we want to maintain existing behaviour).
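The \texttt{asTime} helper itself is a thin wrapper around a date-time parser. The sketch below shows the idea using \texttt{java.time} so that it stands alone (the project itself used Joda-Time); the pattern shown is an assumption that matches timestamps such as \texttt{"20/11/2011 4:05pm"}.

\begin{lstlisting}[caption=Sketch of the asTime helper]
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.util.Locale;

public class Times {
    private static final DateTimeFormatter FORMAT =
            new DateTimeFormatterBuilder()
                    .parseCaseInsensitive() // accept lowercase am/pm
                    .appendPattern("d/M/yyyy h:mma")
                    .toFormatter(Locale.UK);

    // Statically imported and used wherever a timestamp is needed.
    public static LocalDateTime asTime(String text) {
        return LocalDateTime.parse(text, FORMAT);
    }
}
\end{lstlisting}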

The main task for this project was to add new code to reflect the updated specification. We had already written acceptance tests in a TDD fashion for overlapping calls and we now needed to make them pass. We ended up modifying \texttt{createBillFor} within \texttt{BillingSystem} to take into account on peak and off peak settings:

\begin{lstlisting}[caption=Calculating cost of calls]
cost = new BigDecimal(
            DaytimePeakPeriod.getOffPeakSeconds(call.startTime(), call.endTime()))
        .multiply(tariff.offPeakRate())
        .add(new BigDecimal(
            DaytimePeakPeriod.getOnPeakSeconds(call.startTime(), call.endTime()))
        .multiply(tariff.peakRate()));
\end{lstlisting}

\texttt{getOnPeakSeconds} and \texttt{getOffPeakSeconds} became static methods of \texttt{DaytimePeakPeriod} which we defined to return the on-peak/off-peak seconds of a given call. We had confidence that the new code was correct because of the acceptance tests we wrote earlier. As well as continuing to pass for the tests for non-overlapping calls, the acceptance tests now also passed for overlapping calls, even ones that overlapped several days. \texttt{getOnPeakSeconds} and \texttt{getOffPeakSeconds} were written in a TDD manner with unit tests written first, before proceeding to fill out the method contents.
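A sketch of the on-peak/off-peak split follows. The 07:00--19:00 peak window is our assumption here (it is consistent with the billed examples above); the real boundaries live in \texttt{DaytimePeakPeriod}. A second-by-second walk is deliberately naive, but it keeps the logic obviously correct, including for calls spanning several days, and makes a good oracle for unit-testing a faster arithmetic implementation.

\begin{lstlisting}[caption=Sketch of the on-peak/off-peak split]
import java.time.Duration;
import java.time.LocalDateTime;

public class PeakSplit {
    // Counts the seconds of [start, end) that fall inside 07:00-19:00.
    public static long onPeakSeconds(LocalDateTime start, LocalDateTime end) {
        long peak = 0;
        for (LocalDateTime t = start; t.isBefore(end); t = t.plusSeconds(1)) {
            int hour = t.getHour();
            if (hour >= 7 && hour < 19) peak++;
        }
        return peak;
    }

    public static long offPeakSeconds(LocalDateTime start, LocalDateTime end) {
        return Duration.between(start, end).getSeconds()
                - onPeakSeconds(start, end);
    }

    // Convenience overloads taking ISO-8601 strings, e.g. "2011-12-02T06:40".
    public static long onPeakSeconds(String start, String end) {
        return onPeakSeconds(LocalDateTime.parse(start), LocalDateTime.parse(end));
    }

    public static long offPeakSeconds(String start, String end) {
        return offPeakSeconds(LocalDateTime.parse(start), LocalDateTime.parse(end));
    }
}
\end{lstlisting}

For example, the business call from 6:40am to 7:30am above splits into 1200 off-peak seconds and 1800 on-peak seconds.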

\section{Other Improvements}
There was one major non-functional improvement to the code: the domain-specific language that we created. The motivation behind it was that there were lots of places in the code where you needed to build \texttt{CallEvent}, \texttt{Call} or \texttt{Customer} objects. There you had to call \texttt{new} with an arbitrary order of constructor arguments, and this process generally felt unintuitive. There was nothing to guide you on how to build these objects except the source code itself. There were two possible solutions to this problem: the first was to write some \texttt{javadoc}; the second was to write code that could be read as if it were English. The second solution is more time-consuming, but we strongly felt that it would be completely appropriate and beneficial here, especially because it would create a language that significantly improves the readability of our code, would make it easier for other developers to start working on it in the future, and was integral to the problem domain (as it deals precisely with phone calls and customers).

We achieved this through two builder interfaces: a \texttt{CallEventBuilder} and a \texttt{CustomerBuilder}. The call event builder would perform two roles: to create a \texttt{CallEvent}, and to create the \texttt{Call} that was made from the events. To create a \texttt{CallEvent} it had a public static method \texttt{aCallFrom(Customer caller)}. This would return another builder which would store the customer provided and itself expose a method called \texttt{to(Customer callee)}. This in turn would return another builder that knows both the caller and the callee, and completes the chain by giving the user the choice of one of two methods: \texttt{startedAt(DateTime time)} or \texttt{thatEndedAt(DateTime time)}. The first one returns a \texttt{CallStart} and the second a \texttt{CallEnd}. As well as this, to create a \texttt{Call} there was a method \texttt{first(CallStart start)} which returned its own auxiliary builder with a \texttt{then(CallEnd end)} method that would return the completed \texttt{Call} object. The idea behind the \texttt{CustomerBuilder} was very similar. The listing below shows how our fluent API appears.

\begin{lstlisting}[caption=Example use of our DSL]
tim = aCustomerNamed("Tim").withPhoneNo("+4413345").withThePricePlan("Business");
bob = aCustomerNamed("Bob").withPhoneNo("+4413645").withThePricePlan("Business");
call = first( aCallFrom( tim ).to( bob ).startedAt( asTime("20/11/2011 4:05pm") ) )
  .then( aCallFrom( tim ).to( bob ).thatEndedAt( asTime("20/11/2011 4:06pm") ) );
\end{lstlisting}
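The \texttt{CustomerBuilder} half of the DSL can be sketched as below. Each step returns a narrower builder, so the chain can only be completed in the intended order; \texttt{withThePricePlan} finishes the chain and yields the \texttt{Customer}. The \texttt{Customer} class here is a minimal stand-in, and the intermediate builder names are our own invention for illustration.

\begin{lstlisting}[caption=Sketch of the CustomerBuilder]
class Customer {
    final String name;
    final String phoneNumber;
    final String pricePlan;

    Customer(String name, String phoneNumber, String pricePlan) {
        this.name = name;
        this.phoneNumber = phoneNumber;
        this.pricePlan = pricePlan;
    }
}

public class CustomerBuilder {
    // Statically imported, this reads as English:
    // aCustomerNamed("Tim").withPhoneNo("+4413345").withThePricePlan("Business")
    public static NeedsPhone aCustomerNamed(String name) {
        return new NeedsPhone(name);
    }

    public static class NeedsPhone {
        private final String name;
        NeedsPhone(String name) { this.name = name; }

        public NeedsPlan withPhoneNo(String phone) {
            return new NeedsPlan(name, phone);
        }
    }

    public static class NeedsPlan {
        private final String name;
        private final String phone;
        NeedsPlan(String name, String phone) {
            this.name = name;
            this.phone = phone;
        }

        public Customer withThePricePlan(String plan) {
            return new Customer(name, phone, plan);
        }
    }
}
\end{lstlisting}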

Another minor improvement we made to the code was to simply organise the source files better. We decided to group objects together that performed related tasks in a single package. This way it would 
become easier to introduce layering in the code and allow for reusability of these packages as the software grows larger.

\begin{quote}
\section*{Build Automation}
We decided to use build automation for this project, which amongst other advantages allows us to minimise `bad builds' and promptly identify cases where the build is broken or checked-in code fails tests. We opted to use the portable Apache Ant to manage our codebase and automate the build process, with our buildfiles compiling the code and invoking a comprehensive run of our suite of unit tests and acceptance tests. The build process is hence collated in a single, easily editable place.
\end{quote}

\begin{quote}
\section*{Dependency Management}
During our work on the codebase, we introduced many external libraries which we added to the project classpath and utilized. In the final state, we were making use of over 30 libraries (Spring contributes many sub-dependencies). To simplify and consolidate the many dependencies, we wanted to use a dependency manager such as Apache Ivy, a sub-project of Apache Ant. The idea is to have an artifact repository from which Ivy resolves and downloads resources, with the required resources specified in a single XML file listing the project dependencies. Unfortunately, we had some trouble setting up the repository and eventually turned our efforts to other additions to the project.
\end{quote}

\begin{quote}
\section*{Structure Analysis}
During our changes to the code, we attempted to eliminate circular dependencies (for example, the nested static \texttt{LineItem} class) as well as avoid creating new ones in order to encourage a well-structured system. We tried our best to create packages with high cohesion such that inter-modular interactions were minimal. We were able to confirm we were not producing a `ball of mud' through structure analysis tools such as Structure101.

\begin{center}
\includegraphics[scale=0.5]{Dependencies.png}
\end{center}
\end{quote}


\begin{quote}
\section*{Static Analysis}
We used FindBugs to perform static analysis on the bytecode of the project. It mainly highlighted potential problems we were able to dismiss after considering the context and it did not uncover any major issues. However, using the tool gave us more confidence about the code we had produced, decreasing the likelihood of deployment of obviously incorrect code.

\includegraphics[scale=0.7]{FindBugs.png}

All issues uncovered related to the usage of reFit and its associated `problems'. The naming of the fields is a minor issue, and the use of a static field is to share information between fixtures. Both styles were gleaned from the code given in the original tutorial in week 6.
\end{quote}

\section{Managing Future Changes and New Features}

Given that this project is a collaborative effort, one of the first things we did was to set up an SVN repository and check the code into it. The scale of the project and the limited number of developers (3) allowed us to keep track of each other's work purely through the SVN. If we wrote a particular part which we wanted code-reviewed, we communicated this to a teammate `manually', over the phone for example. This is acceptable and feasible for small projects, but if this project were to get larger and more complex, we would need to scale accordingly, taking advantage of professional code-review tools such as Crucible, which integrate with version control software to allow for pre- and post-commit code reviews.

The overlapping calls problem demonstrates a simple example of an issue which needed to be addressed. In a real system, we would have many such issues, and one good way of keeping track of what needs to be done, and by whom, is to use an issue tracking system such as JIRA (\texttt{http://www.atlassian.com/software/jira/overview}). This allows effective and efficient collaboration by creating fluid, independent workflows, starting with requirement capture/gathering, leading to the creation of a ticket and its eventual resolution and closure. Having used JIRA during our industrial placements, we recognise the pragmatic value of powerful features such as issue prioritisation and its visualisation and code-integration tools.

Our use of build automation also future-proofs the codebase for practices such as continuous integration, where our build file could be run headlessly for each check-in to the repository (triggered automation) or as a nightly task (scheduled automation). We researched systems such as Jenkins (\texttt{http://www.jenkins-ci.org}), a continuous integration tool designed primarily for Java, which supports SVN and Apache Ant. For these reasons, this tool is well suited to the project we worked on.

One problem with the original codebase was the lack of documentation. Ideally, we would write Javadocs which formally document the API of our Java source code. The generation process could then be linked in to the build process. This documentation would help other developers use our code as a library without having to root through the source itself.
\end{document}
