\documentclass[a4paper,10pt]{article}
\usepackage[utf8x]{inputenc}
\usepackage[cm]{fullpage}

%opening
\title{Course Diary - Entry \#1\\
475 Advanced Topics in Software Engineering}
\author{Eli Gutin \texttt{eg08}, \\ Michael Kwan \texttt{mk08}, \\ John Wong \texttt{jw808} }

\begin{document}

\maketitle

\section{Three key points from the lecture}

\begin{itemize} 
\item{We learned that in practice it is important to write code that is maintainable and amenable to change. Requirements change often, and under pressure on time and resources from the market, management and clients, it becomes ever harder to accommodate changes without introducing bugs. Sometimes you need to work with legacy code that must first be modified to get it into a state where it can be worked with effectively. It is widely agreed that changing existing code is harder than writing new code.}
\item{We learned that there are tools that help us to understand the structure of legacy code, for example NDepend and XDepend. These generate a dependency graph and a dependency structure matrix, which visually present how the modules and packages of the code depend on one another.}
\item{We learned how to write effective unit tests for legacy systems that lack them, both to safeguard against breaking existing functionality and to give confidence in making future changes. We explored how to stub out implementations of injected dependencies so that the stubs collect information that can be verified later. In the test we call production code (of the class under test), which then calls the stubbed-out delegates.}
\end{itemize}
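The stubbing pattern from the third point can be sketched as follows. The names here (\texttt{Notifier}, \texttt{Publisher}) are our own toy example, not code from the lecture; the point is that the stub collects information during the call, which the test then verifies.

```java
// Toy illustration of the stubbing pattern from the lecture; Notifier and
// Publisher are invented names, not code from the exercise.
public class StubSketch {

    // The injected dependency through which production code reports events.
    interface Notifier {
        void send(String message);
    }

    // Test stub: does no real work, only records the interaction so the
    // test can verify it afterwards.
    static class RecordingNotifier implements Notifier {
        boolean called = false;
        String lastMessage = null;

        @Override
        public void send(String message) {
            called = true;
            lastMessage = message;
        }
    }

    // Class under test: talks to the delegate instead of doing I/O itself.
    static class Publisher {
        private final Notifier notifier;

        Publisher(Notifier notifier) {
            this.notifier = notifier;
        }

        void publish(String item) {
            notifier.send("published: " + item);
        }
    }

    public static void main(String[] args) {
        RecordingNotifier stub = new RecordingNotifier();
        new Publisher(stub).publish("photo1");
        // The test calls production code, which calls the stub.
        System.out.println(stub.called + " / " + stub.lastMessage);
        // prints: true / published: photo1
    }
}
```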

\section{Related Reading}
The lectures were my first introduction to static analysis tools such as NDepend and XDepend.
After learning of their existence, I read up further on their use and did some experimentation. 
Previously when I had to explore and understand the code structure of a project, it was largely guesswork.
I would essentially jump around the code attempting to find snippets that looked similar to the functionality I was attempting to locate.
As I read on a website, this is analogous to `using a microscope when I needed a telescope'.
After testing out XDepend on an existing project I had been working on, I was able to appreciate to a further extent the value of these sorts of tools.
The generated dependency graph allowed me to quickly ascertain the `big picture' of the code and roughly how its different parts fitted together.

A good source that we found on working with legacy code was a piece by Michael Feathers, taken from his book \textit{Working Effectively with Legacy Code}:\\ \texttt{http://www.objectmentor.com/resources/articles/WorkingEffectivelyWithLegacyCode.pdf}.
It defined what legacy code is and explained how writing tests for new code (as in TDD) differs from writing coverage tests for legacy systems.
It also talked about the different situations in which legacy code is changed: one in which you are changing something that is widely used in production, whose features users have become accustomed to, versus
a project which has not yet been released, where you have greater freedom to refactor mercilessly.

In this article, Feathers introduced the concept of an \textit{inflection point} when it comes to modifying legacy code. An inflection point is a narrow interface to a set of classes, such that a change to any of those classes is only detectable at that point. Once an inflection point is found, it needs to be covered by writing tests for it. Feathers mentioned two ways of making this possible:
breaking external dependencies, which means extracting an interface from the original class; and
breaking internal dependencies, which means creating a subclass to override the code which creates the dependency (i.e.\ a call that constructs or invokes another object). Afterwards, we can make the changes and refactor.
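The second of these, subclassing to override the code that creates the dependency, can be sketched as follows. The class names are our own invented example, not Feathers's; the idea is that the creation point becomes a seam the test can override.

```java
// Toy sketch of breaking an internal dependency by subclassing; the names
// here are invented for illustration, not taken from Feathers's article.
public class SubclassOverrideSketch {

    // Concrete dependency that legacy code would create inline.
    static class Mailer {
        String send(String msg) { return "sent:" + msg; } // imagine real I/O here
    }

    static class OrderProcessor {
        // Seam: dependency creation isolated in an overridable factory method.
        protected Mailer buildMailer() {
            return new Mailer();
        }

        String process(String order) {
            return buildMailer().send(order);
        }
    }

    // Testing subclass overrides the creation point and substitutes a fake,
    // so the rest of process() can be exercised without real I/O.
    static class TestableOrderProcessor extends OrderProcessor {
        @Override
        protected Mailer buildMailer() {
            return new Mailer() {
                @Override
                String send(String msg) { return "faked:" + msg; }
            };
        }
    }

    public static void main(String[] args) {
        System.out.println(new TestableOrderProcessor().process("order42"));
        // prints: faked:order42
    }
}
```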

Another perspective we found was from an article by Elliotte Rusty Harold \\ \texttt{http://www.ibm.com/developerworks/java/library/j-legacytest/index.html}. There he stresses that when starting work on legacy code 
you need not aim for perfection straight away or indeed 100\% test coverage. He suggested starting off writing a test which isn't a \textit{unit} test in the strict sense but rather something which runs a smoke test on the
application's entry point. After that you can start introducing test fixtures. One way he suggests is testing by functionality: looking at how the application behaves given certain inputs, user interaction, and so on. This resonates with the
tutorial, where we wrote a test for the part of the code where a slideshow is started. He then suggests another method, testing by structure, that is, testing modules and packages individually, where you aim for greater coverage and to uncover more subtle bugs.
The message we took away from this was that you firstly write tests from the perspective of the user and then from the programmer's perspective.  
 
\section{Similar Past Projects}
\subsection{Mike's experience}
During my industrial placement I worked in an application infrastructure team where I developed libraries and frameworks for the use of the rest of the firm.
Despite the fact that the work I did was on new projects, I was well aware of how critical unit testing was: without it, I could have released buggy software on which hundreds or thousands of people depend.
Unit testing gave me the confidence to make changes and be able to push them out to production quickly.
I would simply push it to the central repository, which had nightly builds and tests run against it by a continuous integration server.

\subsection{Eli's experience}
I worked on a project where I had to change existing code. It wasn't yet deployed and was full of bugs, so I was able to refactor with impunity. As I went along I also added tests, both to check
that I understood the old code, by pinning down its existing behaviour, and to make sure that nothing broke with my changes. When I released it there were a few issues, but these were easier to resolve once
the test suite was there. This was a useful experience as it taught me how to write tests for someone else's code and not just my own. 

\subsection{John's experience} 
During my industrial placement, I worked as a functional verification tester on a software product (WebSphere MQ), and I encountered maintenance work on various features of the product while they were being changed or implemented. I therefore had the opportunity to experience the whole process of a software system changing and evolving, and to observe how the development managers handled it, including the risks involved.
I learnt that every component of the test materials, and indeed the product itself, needs to be documented with descriptions, explanations of its mechanisms, and any other notes the developer wishes to leave (such as recent changes). Keeping these as references helps other people overcome issues such as team turnover, the system's regulations and integrations, and the loss of knowledge held in the code.
I was also responsible for several changes and improvements to the existing test materials, for which I had to confront a lot of unfamiliar code. A section of this lecture reflected some of my own approaches, especially investing time in investigating all the callback functions in the code in order to understand the linkages and dependencies between the different objects.

\section{Tutorial Exercise Description}
For the tutorial we had to introduce tests to existing legacy code.
The program displayed a slideshow of images, and the aim of the exercise was first to test an existing piece of functionality and then to produce one extra feature that was also tested.
The class whose method was to be tested was \texttt{JPhotoFrame} which was the entry point for the application.
The code we needed to test was the code triggered when the user clicked the menu item to start the slideshow.

\section{Main Challenges}
The main challenge was that the view and the model for the slideshow menu item were too tightly coupled, that is the code to display the window and the code to check for a non-empty collection of photos were in the same method.
To make matters worse, this code was deeply nested inside a very large method containing lots of if-else statements.
It was also difficult to write a fast unit test requiring no user interaction, because windows kept popping up which the developer would have to dismiss manually while the test suite ran.
One underlying issue we had with writing the tests was that the constructor of \texttt{JPhotoFrame} called an initialisation method which generated windows.

\section{Approaches}
The first thing was to create a JUnit test class called \texttt{JPhotoFrameTest}.
In this class we had an instance of the \texttt{JPhotoFrame} which was instantiated inside a method annotated with \texttt{@Before}.
We then realised that the production code needed to be refactored and we did this by extracting the statements that were called after the desired menu-item was clicked (line 582 of \texttt{JPhotoFrame}) into a method called \texttt{startSlideshow}.
One of the problems with this approach was that the view was being displayed inside this method, so we went further and created a delegate that was responsible for creating a window and displaying it.
This was done by writing an interface called \texttt{PhotoShowStarter} and having two implementations; one with the original functionality and one which simply recorded that this method was called and also what parameters were given.

\texttt{startSlideshow} contained a check for whether the number of photos was 0, popping up an error message if so.
Since this is not something we wanted to test, and it is part of the view, we moved it back into the \texttt{actionPerformed} method.
Once we did this, we were able to proceed by instantiating a \texttt{JPhotoFrame} which used the stubbed out delegate.
The tests asserted that when there were no photos, the stub was not called and when there were, that it was called and with the correct parameters.
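A minimal sketch of this arrangement is given below. We reuse the exercise's names for the interface and the delegate idea, but the signatures and the \texttt{Frame} stand-in are our own simplification, not the real \texttt{JPhotoFrame} code.

```java
// Simplified reconstruction of the seam from the exercise: the interface and
// stub follow our PhotoShowStarter design, but signatures are our own guess.
public class SlideshowTestSketch {

    interface PhotoShowStarter {
        void start(int photoCount, int intervalMillis);
    }

    // Stub used in the tests: records the call instead of opening a window.
    static class RecordingStarter implements PhotoShowStarter {
        boolean called = false;
        int photoCount, intervalMillis;

        @Override
        public void start(int photoCount, int intervalMillis) {
            called = true;
            this.photoCount = photoCount;
            this.intervalMillis = intervalMillis;
        }
    }

    // Stand-in for the relevant slice of JPhotoFrame after refactoring.
    static class Frame {
        static final int INTERVAL_MILLIS = 2000;
        private final PhotoShowStarter starter;
        private final int photoCount;

        Frame(PhotoShowStarter starter, int photoCount) {
            this.starter = starter;
            this.photoCount = photoCount;
        }

        // The empty-collection guard stays in the action handler; the
        // delegate is only invoked when there is something to show.
        void onStartSlideshowClicked() {
            if (photoCount == 0) {
                return; // the real code pops up an error dialog here
            }
            starter.start(photoCount, INTERVAL_MILLIS);
        }
    }

    public static void main(String[] args) {
        RecordingStarter emptyStub = new RecordingStarter();
        new Frame(emptyStub, 0).onStartSlideshowClicked();
        System.out.println("empty collection -> called = " + emptyStub.called);

        RecordingStarter stub = new RecordingStarter();
        new Frame(stub, 3).onStartSlideshowClicked();
        System.out.println("3 photos -> called = " + stub.called
                + ", interval = " + stub.intervalMillis);
    }
}
```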

Adding the new behaviour was fairly trivial, involving adding a menu item to \texttt{JPhotoMenu},
and the corresponding handler for it in \texttt{JPhotoFrame} under \texttt{actionPerformed}. 
The functionality was identical to \texttt{startSlideshow} but with a shorter interval given to the \texttt{PhotoShowStarter}.
Similarly, the tests asserted the shorter interval was passed to the delegate.

To resolve the issue with \texttt{JPhotoFrame}'s initialisation generating windows, we used setter injection, so that when \texttt{JPhotoFrame} loads up we have greater control over the lifecycle of the object. In this case we instantiated an `empty' \texttt{JPhotoFrame}, on which we then set the \texttt{PhotoShowStarter} and the \texttt{PhotoCollection}. This way our test became very fast and could run without any action from the developer.
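The shape of this setter-injection fix can be sketched as follows. The setter names and the \texttt{PhotoCollection} stand-in are our simplification of the real exercise code, not its exact API.

```java
// Sketch of the setter-injection fix: construct an "empty" frame first, then
// inject collaborators, so construction itself opens no windows. Names are
// our own simplification of the exercise code.
public class SetterInjectionSketch {

    interface PhotoShowStarter {
        void start(int photoCount, int intervalMillis);
    }

    static class PhotoCollection {
        private final int size;
        PhotoCollection(int size) { this.size = size; }
        int size() { return size; }
    }

    static class Frame {
        private PhotoShowStarter starter;
        private PhotoCollection photos;

        // Constructor creates no windows; collaborators arrive via setters.
        Frame() {}

        void setPhotoShowStarter(PhotoShowStarter starter) { this.starter = starter; }
        void setPhotoCollection(PhotoCollection photos) { this.photos = photos; }

        void onStartSlideshowClicked() {
            if (photos != null && photos.size() > 0) {
                starter.start(photos.size(), 2000);
            }
        }
    }

    public static void main(String[] args) {
        Frame frame = new Frame(); // fast: no UI work during construction
        frame.setPhotoShowStarter((count, interval) ->
                System.out.println("would show " + count + " photos every " + interval + " ms"));
        frame.setPhotoCollection(new PhotoCollection(4));
        frame.onStartSlideshowClicked();
        // prints: would show 4 photos every 2000 ms
    }
}
```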

\section{Feedback on the lecture/exercise}
We felt that this was a very informative lecture that not only dealt with the concepts but also the pragmatic application of them in real world problems. It was useful to see live coding in action especially because we saw how powerful the use of shortcuts in an IDE was. We were also convinced that an IDE was really the best way to write code as it changes your mindset from one of text formatting to thinking about algorithms and code structure. The fact that there were two lecturers was effective because while one spoke, the other demonstrated on his laptop.

The exercises allowed us to practise writing unit tests for legacy code and to explore a legacy code-base we were originally unfamiliar with. However, the exercise specification could have been clearer for people without background knowledge of how to import the project and set up the environment. We felt some parts of the guidance were not technical and specific enough, which left us to work out our own approach to the tasks. As this was the first exercise, it could have been more introductory, to make it easier for newcomers to follow. The helpers assisted us efficiently when we were puzzled about the aim of a task, and additionally helped by clarifying what the specification required and suggesting various approaches.

We felt the tutorial complemented the lecture well, demonstrating the fundamental importance of testing as well as the boost to maintainability after refactoring, clearly evidenced by the ease by which we modified the program to include the additional behaviour.
\end{document}
