\documentclass[a4paper,10pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[cm]{fullpage}

%opening
\title{Course Diary - Entry \#2\\
475 Advanced Topics in Software Engineering}
\author{Eli Gutin \texttt{eg08}, \\ Michael Kwan \texttt{mk08}, \\ John Wong \texttt{jw808} }

\begin{document}

\maketitle

\section{Three key points from the lecture}

\begin{itemize} 
\item{The concept of test-driven development and the typical cycle (write a failing test first, write code to make it pass, refactor, repeat).
It was also emphasised that the tests serve as additional documentation: they tell you when you are finished, help detect bugs early and leave a regression suite behind you.}
\item{We were reminded of the \textit{Law of Demeter}: objects should ideally only send messages/commands to their immediate collaborators. A domain model was compared to a message-passing system.
Ultimately, this increases code cohesion and allows objects to be mocked out more easily, making tests simpler to write.}
\item{We were told how to use mock objects (as collaborators) to test a specific object. We were also shown how to set them up to return specific values and how to use them to verify that certain calls were made to them.}
\end{itemize}
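The last point, stubbing return values and verifying interactions, can be illustrated without any framework. The sketch below hand-rolls both roles; all the names (\texttt{Thermometer}, \texttt{Alarm}, \texttt{Monitor}) are invented for this example and do not come from the lecture:

```java
// A hand-rolled illustration of the two mock roles: stubbing a return
// value and verifying that an expected call was made on a collaborator.

interface Thermometer { int readCelsius(); }
interface Alarm { void trigger(); }

// The object under test: checks the thermometer and triggers the alarm.
class Monitor {
    private final Thermometer thermometer;
    private final Alarm alarm;
    Monitor(Thermometer t, Alarm a) { thermometer = t; alarm = a; }
    void check() { if (thermometer.readCelsius() > 100) alarm.trigger(); }
}

public class MockSketch {
    public static void main(String[] args) {
        // Stub: set up to return a specific value.
        Thermometer hot = () -> 120;
        // Mock: records whether trigger() was called so we can verify it.
        final boolean[] triggered = { false };
        Alarm alarm = () -> triggered[0] = true;

        new Monitor(hot, alarm).check();
        if (!triggered[0]) throw new AssertionError("alarm not triggered");
        System.out.println("behaviour verified: alarm triggered above 100C");
    }
}
```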

\section{Related Reading}
We read \textit{Test-Driven Development: By Example} by \textit{Kent Beck}, which reinforced what we had learnt in the lecture through a worked example.
It showed us the thought process a developer goes through: first writing a test case that fails, then writing code that makes it green, refactoring and starting over.
It also showed us which test cases to write first and how to treat them as a story about how the unit behaves.
The book starts with the author writing some very simple test cases and then adding more depending on what is required of the application.
He shows that you can informally write down a list of the functionality your software needs, then write a failing test for each item and make it pass as you work through the list.

We also read an article, \textit{Mocks Aren't Stubs} by \textit{Martin Fowler}, which describes the differences and common confusions between the two testing styles. This article clarified
our understanding of how behavioural verification differs from state verification. Last week, we were asserting the states of objects under test after performing
actions upon them. This week, we are verifying the behaviour of an object (the SUT, or system under test, as Martin calls it) by checking that the expected calls are made on its collaborators. The article made clear
to us the dichotomy between the classical TDD approach and the mockist TDD approach, sometimes called BDD (behaviour-driven development). Using the vocabulary from a book by Gerard Meszaros,
Fowler lists four kinds of test doubles, including two we had not formally known about before: dummies and fakes. A dummy is just something you create to fill a parameter list
(it does not actually do anything), while a fake has a real implementation that is usually not suitable for production (such as an in-memory database).
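The two unfamiliar kinds of test double can be shown in a few lines of Java. The names below (\texttt{UserStore}, \texttt{Logger}, \texttt{register}) are invented for this sketch; only the dummy/fake distinction itself comes from the article:

```java
import java.util.HashMap;
import java.util.Map;

interface UserStore { void save(String id, String name); String find(String id); }
interface Logger { void log(String msg); }

// Fake: a real, working implementation that would not be used in
// production -- here, an in-memory map standing in for a database.
class InMemoryUserStore implements UserStore {
    private final Map<String, String> rows = new HashMap<>();
    public void save(String id, String name) { rows.put(id, name); }
    public String find(String id) { return rows.get(id); }
}

public class FakeVsDummy {
    // The Logger parameter is required by the signature but irrelevant
    // to this test, so a do-nothing dummy fills the slot.
    static String register(UserStore store, Logger log, String id, String name) {
        store.save(id, name);
        return store.find(id);
    }

    public static void main(String[] args) {
        Logger dummy = msg -> { };                // dummy: only fills a parameter
        UserStore fake = new InMemoryUserStore(); // fake: real but test-only
        String result = register(fake, dummy, "42", "Ada");
        if (!"Ada".equals(result)) throw new AssertionError(result);
        System.out.println("fake store round-tripped: " + result);
    }
}
```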
\section{Similar Past Projects}
\subsection{Mike's experience}
During my Industrial Placement, we were encouraged to do test-driven development in conjunction with issue tracking through JIRA. At first, the mindset of testing first and writing code afterwards seemed almost
counter-intuitive, but as the projects got larger I found that the tests I was writing really helped drive the direction of my code in terms of decoupling and keeping cohesion high. One side effect
of writing tests first is that it ensures the final code is clean and well encapsulated as a unit by the time you get round to writing it. I had not encountered mocking before and had only used stubs in tests, but
in hindsight I can see situations where knowing such a technique would have let me test behaviours far less verbosely. In some situations the nature of the implementation meant it
would have been more appropriate to test behaviours rather than verify object states by asserting values (e.g. where I had code reacting to, or being triggered by, events from the environment).
\subsection{Eli's experience}
I first became aware of TDD and its wide use during my first internship. It was an eye-opening experience: I saw first hand how code that would otherwise have been unmanageable,
given its scale, was extremely modular and decoupled, with high test coverage. In that internship, I was told that my main objective, among others, was to write test-driven code.
I was working on a C\# UI at the time and initially found it puzzling how it was to be tested (using NUnit). My buddy explained that the main idea was to separate the models and views, and to this end I used a
common WPF pattern known as MVVM (Model-View-ViewModel). While the view was purely XAML (a declarative language), the other two components were written in an object-oriented style in the manner prescribed
by TDD. This made it much easier to ensure that the individual components, models and view models, were working correctly and that the design was good. It was also at that time that I learned of mocking frameworks, which taught
me how to test behaviours and interactions as well as the state of an object.
\subsection{John's experience} 
I worked in the testing department of the WMQ product during my Industrial Placement, where I used an internal language called Test Object Description Language (TODL) to test various behaviours of the product.
This language was designed solely for testing WMQ, providing declaration interfaces for mocks and dummies of the objects WMQ uses, such as queue managers, channels, SSL certificates and even client applications.
Unlike other unit-testing frameworks (e.g. JUnit), TODL uses only simple statements to describe a procedural test. It is nevertheless capable of carrying out tests ranging from basic instructions (e.g. creating queues and sending messages)
to complex remote procedure calls (e.g. running inter-process communication between remote machines to validate connectivity and speed while sending messages across). It was a nice experience to see a
testing language designed and oriented towards setting up dummies and mock objects in a specific scenario to examine the product's behaviour and capabilities. Having learnt this language, I understood how a test can
collaborate with mock objects and the efficiency they can bring to a test.

\section{Tutorial Exercise Description}
The aim of this exercise was to drive out the implementation of a class within an audio guide player application using unit tests. We also needed to
create mock objects for its collaborators to interact with the object under test.

The application was an audio guide player which receives real-time GPS updates of the user's current location and, depending on certain conditions,
plays a track for that location. We needed to write the code and tests for a class which coordinates the GPS updates (by implementing the \texttt{LocationAware} interface) to play certain tracks using the
\texttt{MediaPlayerControl}. This involved implementing a callback (via the \texttt{MediaPlayerListener} interface) which receives notifications when a track has finished playing. The class maps locations to tracks via
a DAO called \texttt{MediaLibrary}.

There were a number of expected behaviours our implementation needed to meet, and we were to aim for maximum test coverage. We also needed to work in a test-driven fashion: first writing a
failing test, getting it to pass (and, if a test is difficult to write, changing the design), then refactoring the code and repeating until all requirements were met.

\section{Main Challenges}
\begin{enumerate}
 \item{Learning to use JMock as this was a new framework for us. We had to understand what it did, how to use the API and familiarise ourselves with the constructs. The concepts of using mock objects and verifying interactions
 in testing were also new to us.}
 \item{The logic of the behaviour was slightly more complicated than we expected and we had to think carefully about what the tests would be asserting.}
 \item{Adapting to the style of test-driven development. Trying the approach of writing a failing test first and then coding.}
\end{enumerate}

\section{Approaches}

The first step after importing the project into Eclipse was to do what was shown in the lecture: set up a failing test for an initial location, asserting that the track would immediately be played.
Once this test failed we again followed the lecture demonstration, filling in the code in the \texttt{AudioGuide} class to make the test green. We did this to make sure that the basic case of a track playing at an initial location was working.

The next step was to test another situation: the track finishes playing after the location changes. For this we wrote a new test which first defines the expected interactions (on the mock objects), then
simulates the behaviour we are testing by calling the methods on \texttt{AudioGuide} in the appropriate order.
For example, after a new location was entered while a track was playing, we verified the order and number of calls made to the collaborators (two calls each to \texttt{mediaPlayer.play} and \texttt{mediaLibrary.trackForLocation}).
As expected, the first run of this test failed, so we added an instance variable to \texttt{AudioGuide}
which provided queuing in the case that a track was already playing: as soon as the current track finished, we would play the last queued track. We chose this approach because it was simple and the quickest way to get the test passing. The test then passed.

The third test was almost identical to the previous one, with a simple switch in the order of when the track finished (before visiting the new location). However, the test passed without further modifications 
because we had already intuitively added code to update the track queue in the \texttt{trackFinished/locationChanged} methods during the previous step.

The fourth test addressed the fourth requirement in the specification: when the media library cannot find a track associated with the current location, nothing should happen. We wrote the test by setting up the \texttt{MediaLibrary} mock to throw a
\texttt{TrackForLocationNotFoundException} (our own subclass of \texttt{Exception}) from \texttt{trackForLocation}.
To compile, we had to add a \texttt{try-catch} block to our code. In the \texttt{catch} block we simply returned, and the test went green. We handled it by just returning because that was the most direct way of achieving what was needed. If some further requirement arose later, for example logging this exception or notifying the user, we would modify the test and code to cater for that use case.

Finally, we added a fifth test case in which we called \texttt{locationChanged} twice for the same location, with \texttt{trackFinished} in between, and verified that the track was only played once. This ensured that the same track would not be repeated in a given run of the audio guide. This test failed, and we fixed it by adding a \texttt{HashSet} to \texttt{AudioGuide} to record which tracks had already been played. The test went green and we then refactored the code as per the TDD cycle, extracting local variables and methods. We felt this was a good opportunity to do so, since the tests would catch anything we might break in the process.
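Putting the steps above together, a minimal sketch of the logic we arrived at might look like the following. The type and method names (\texttt{MediaLibrary}, \texttt{trackForLocation}, \texttt{locationChanged}, \texttt{trackFinished}) come from the exercise, but the signatures (plain \texttt{String}s for locations and tracks) and the small \texttt{main} harness are our simplifications, not the actual starter code:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class TrackForLocationNotFoundException extends Exception { }

interface MediaLibrary {
    String trackForLocation(String location) throws TrackForLocationNotFoundException;
}
interface MediaPlayerControl { void play(String track); }

class AudioGuide {
    private final MediaLibrary library;
    private final MediaPlayerControl player;
    private final Set<String> played = new HashSet<>(); // tracks already played
    private String queuedTrack;                          // track queued while busy
    private boolean playing = false;

    AudioGuide(MediaLibrary library, MediaPlayerControl player) {
        this.library = library;
        this.player = player;
    }

    public void locationChanged(String location) { // LocationAware callback
        String track;
        try {
            track = library.trackForLocation(location);
        } catch (TrackForLocationNotFoundException e) {
            return; // no track for this location: do nothing (fourth requirement)
        }
        if (played.contains(track)) return; // never repeat a track (fifth test)
        if (playing) queuedTrack = track;   // busy: remember the latest track
        else startTrack(track);
    }

    public void trackFinished() { // MediaPlayerListener callback
        playing = false;
        if (queuedTrack != null) {
            String next = queuedTrack;
            queuedTrack = null;
            startTrack(next); // play the last queued track
        }
    }

    private void startTrack(String track) {
        played.add(track);
        playing = true;
        player.play(track);
    }
}

public class AudioGuideSketch {
    public static void main(String[] args) {
        List<String> playCalls = new ArrayList<>();
        MediaLibrary library = loc -> "track-" + loc; // stubbed lookup
        AudioGuide guide = new AudioGuide(library, playCalls::add);

        guide.locationChanged("museum");   // plays immediately
        guide.locationChanged("bridge");   // queued: a track is playing
        guide.trackFinished();             // queued track now plays
        guide.locationChanged("museum");   // ignored: already played

        if (!List.of("track-museum", "track-bridge").equals(playCalls))
            throw new AssertionError(playCalls);
        System.out.println("play calls: " + playCalls);
    }
}
```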

We first completed the exercise using \texttt{JMock} and then decided to look at other mocking frameworks. We ended up writing another suite of tests using the
\texttt{Mockito} framework, because one of us already had experience with it and we found it quite simple to pick up
(\texttt{http://mockito.googlecode.com/svn/branches/1.5/javadoc/org/mockito/Mockito.html}).
Mockito's style was different: instead of expressing all the expectations in one go, you typically set up the mock to return some values (or throw exceptions) at the start,
perform some operations on the object under test, and then verify that certain methods were called with the right parameters, in the right order and so on, after each operation (if you wanted to).
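That stub-first, verify-after flow can be imitated without the library itself (the real \texttt{Mockito} calls would be \texttt{when(...).thenReturn(...)} and \texttt{verify(...)}); this hand-rolled sketch, with invented names, just shows the shape of such a test:

```java
import java.util.ArrayList;
import java.util.List;

interface Greeter { String greet(String name); }

// Records its calls and returns a canned value, so it can be both
// stubbed up front and verified after the fact.
class RecordingGreeter implements Greeter {
    final List<String> calls = new ArrayList<>();
    private final String canned;
    RecordingGreeter(String canned) { this.canned = canned; }
    public String greet(String name) { calls.add(name); return canned; }
}

public class StubThenVerify {
    public static void main(String[] args) {
        // 1. set up the return value first (Mockito: when(...).thenReturn(...))
        RecordingGreeter greeter = new RecordingGreeter("hello");
        // 2. exercise the code that uses the mock
        String reply = greeter.greet("Ada");
        // 3. verify afterwards that the expected call was made (Mockito: verify(...))
        if (!greeter.calls.equals(List.of("Ada"))) throw new AssertionError(greeter.calls);
        if (!"hello".equals(reply)) throw new AssertionError(reply);
        System.out.println("verified after the fact: " + greeter.calls);
    }
}
```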
\section{Feedback on the lecture/exercise}
We thought the lecture succinctly described the different mindset a developer must adopt in TDD and made clear the cycle involved in this methodology. The benefits of the style were clearly listed and we felt the 
tutorial supported the content of the lectures really well. We liked the fact that the slides didn't have too much text in them and that the lecturer explained the concepts clearly. The demonstration also tied in well
with the flow of the background theory that was taught. Also, we found the initial code very helpful and easy to work with. The fact it included a basic test already meant we could get started quickly.

Our only complaint concerns some of the wording in the requirements of the tutorial exercise, which we found open to interpretation. We had to ask for clarification about the functionality to support
queuing of tracks: whether tracks for new locations should be queued one after another or whether only the last location visited should be queued. However, the exercise was basic enough to let us focus on the
concepts of TDD, and we were able to complete it in a timely fashion. Overall, we found the lecture and exercise very interesting, and we expect what we have learnt to be very applicable in future work.
\end{document}
