\documentclass[a4paper,10pt]{article}
\usepackage[utf8x]{inputenc}
\usepackage[cm]{fullpage}

%opening
\title{Course Diary - Entry \#7\\
475 Advanced Topics in Software Engineering}
\author{Eli Gutin \texttt{eg08}, \\ Michael Kwan \texttt{mk08}, \\ John Wong \texttt{jw808} }
\begin{document}

\maketitle

\section{Three key points from the lecture}

\begin{itemize} 
\item{Before you can release software to your users in production, it is imperative
that it is first promoted through various testing and quality assurance stages. This process can often
become a bottleneck, so it is advisable to automate it. It is also important not to leave the release to the last
minute, and to practise it often enough that the project is delivered on time.
The decision to start work on new features or do more analysis should
depend on the number of user stories backing up in QA. One such rule
is a \emph{WIP} (Work In Progress) limit, under which you may only start working on a new feature if there is enough free capacity.}
\item{
It is important to release in small chunks, and often, so that users see a continuous stream of progress. In successful agile teams it is the customers who decide when to release, which is only possible with
continuous integration and with tested builds being released frequently. In an ideal scenario,
both users and QA can initiate a release themselves whenever they feel comfortable.
Large modern software companies such as Facebook and \emph{IMVU} put a great deal of thought
and engineering into their release methodology. For example, IMVU has an automated system that constantly runs
thousands of tests and automatically releases when they pass.
}
\item{While agile teams focus on their main goals, it is good practice
to release often in order to obtain feedback from customers. This is particularly common
in startups: once a particular feature or group of features is released, the team
listens to how the market responds and uses that to tune both their business plan
and the product. A novel approach is that, rather than deciding what to build in
each iteration, the team decides what to learn; they then release the smallest piece
of functionality necessary to assess whether one of their hypotheses is supported. }
\end{itemize}
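The WIP limit mentioned above can be thought of as a bounded pool of QA slots: a developer may only pick up a new story while a slot is free. A minimal sketch in Java, using a \texttt{Semaphore} to model the slots (the limit value and method names are our own illustration, not anything prescribed in the lecture):

```java
import java.util.concurrent.Semaphore;

public class WipLimit {
    // Each permit represents one story allowed to be "in progress"/in QA.
    private final Semaphore qaSlots;

    WipLimit(int limit) {
        qaSlots = new Semaphore(limit);
    }

    // A developer may only start a new feature if a QA slot is free.
    boolean tryStartFeature() {
        return qaSlots.tryAcquire();
    }

    // QA signing a story off frees its slot for the next feature.
    void storySignedOff() {
        qaSlots.release();
    }
}
```

Once the limit is hit, \texttt{tryStartFeature()} returns \texttt{false} and the team's capacity is better spent clearing the QA backlog than starting new analysis work.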

\section{Related Reading}
We read a list of general tips for startups at
\texttt{http://www.paulgraham.com/startuplessons.html}. It was interesting to note that the first two
were entirely about release management. The essay spells out how critical it is to release early, but it also stresses not to compromise on correctness (by releasing bugs). The reasons
it gives go beyond assessing users' reactions and feedback: once something is live, it creates a great sense of urgency, which improves your productivity and your sense of what should be prioritised. A second point it makes is that a startup needs to release often to keep up with users' demands. We find this advice extreme in the sense that the release-early-and-often strategy also needs to be balanced with well-thought-out, planned goals. Google, for instance, is known to have become successful not just because of the way it handled release management and other practices, but because of its product and novel approaches too.
\section{Similar Past Projects}
\subsection{Mike's experience}
During my industrial placement, I worked on a small project whose releases I controlled using the blue-green release method. This was achieved by having users work
off a `latest-release' symlink: when I pushed out a new release, I simply switched the target of the symlink, so any new launches of my software would be
directed to the updated release. I also participated in larger projects that approximately followed the self-service release cycle described in the
lecture. It was interesting to learn about the various alternative release cycles.
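The symlink switch above can be sketched in a few lines of Java using \texttt{java.nio.file} (the directory names here are illustrative; the original project's layout is not shown):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkSwitch {
    // Repoint `latest-release` at a new release directory. New launches of
    // the software resolve the link afresh, so they pick up the new target;
    // already-running instances keep the old one.
    static void cutRelease(Path link, Path newRelease) throws Exception {
        Files.deleteIfExists(link);
        Files.createSymbolicLink(link, newRelease);
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("releases");
        Path v1 = Files.createDirectory(dir.resolve("v1"));
        Path v2 = Files.createDirectory(dir.resolve("v2"));
        Path link = dir.resolve("latest-release");

        cutRelease(link, v1); // initial release
        cutRelease(link, v2); // pushing a new release is just a link swap
        System.out.println(Files.readSymbolicLink(link).getFileName()); // v2
    }
}
```

Because the swap is a single filesystem operation, users never see a half-deployed release, which is the essential property of blue-green deployment.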
\subsection{Eli's experience}
In my industrial placement at a financial services company, there were strict and arguably bureaucratic controls on releasing. Every release to production required a form to be filled in and signed off by one's manager and by operations. This is in contrast to small start-up companies, where there is
more emphasis on releasing often and gauging the response of the customer. In large companies such as the one I worked at, it is more important to aim for stability and to keep the business running.
\subsection{John's experience} 
While working as a tester on one of IBM's product lines, WebSphere MQ, I explored their methodology for maintaining a continuous flow of new patches and versions.
I found that they followed a monthly routine of refactoring, testing, validating and, lastly, releasing to the public.
Every week within the month, managers from separate divisions gathered to reorganise tasks and reschedule the deployment of unreleased features.
At the end, they set new deadlines for the tasks and notified all developers and testers of the new schedule.

\section{Tutorial Exercise Goals}
The goal of the tutorial was to respond to HTTP requests from a market server. The market server inspects the replies and assigns marks based on their `correctness'.
Our aim was to compete against the other candidates and obtain as many points as possible while the requests were sent continuously and their contents changed frequently.
One key point of the game is that you continuously lose points if your server responds incorrectly or, even worse, not at all.
This exercise therefore required frequent, immediate changes in an iterative manner to gain the highest mark possible.

\section{Main Challenges}
Our main challenge was keeping up with the rate at which the questions were updated (a rapid change of requirements). In particular, when a
new question was a close variant of an existing one, overly generic filtering code would route the new question to the function handling
the existing one. In general, the questions were trivial, either requiring simple arithmetic/sequence mathematics or being quiz-style and answerable
with hardcoded values. Towards the end, however, some of the questions became more complex and required more thought (e.g. the Scrabble score question).
\section{Approaches}
Our approach in this tutorial was to identify the different questions by filtering their contents with \texttt{String.contains()} and then to dispatch each type of question to
a different function. Since many of the questions required integers to be parsed, we created a function returning a regex \texttt{Matcher} for a \texttt{String}
under the \texttt{".*?(\textbackslash \textbackslash d+)"} pattern. The construction of the actual \texttt{Pattern} object is an expensive operation, so we made it a
\texttt{private static final} field. Each answer function then either performed some calculation based on what \texttt{Matcher.group()} returned or simply returned
a \texttt{String}, which was written to the \texttt{PrintStream}.
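The dispatch-and-parse structure can be sketched as follows; the two handlers shown (summing and taking a maximum) are illustrative stand-ins, not our actual question set:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QuestionDispatcher {
    // Compiling a Pattern is expensive, so it is built once and shared.
    private static final Pattern NUMBER = Pattern.compile(".*?(\\d+)");

    // Dispatch on a distinctive substring of the question text.
    static String answer(String question) {
        if (question.contains("plus")) {
            long sum = 0;
            Matcher m = NUMBER.matcher(question);
            while (m.find()) sum += Long.parseLong(m.group(1));
            return Long.toString(sum);
        }
        if (question.contains("largest")) {
            long max = Long.MIN_VALUE;
            Matcher m = NUMBER.matcher(question);
            while (m.find()) max = Math.max(max, Long.parseLong(m.group(1)));
            return Long.toString(max);
        }
        return ""; // unrecognised question type
    }
}
```

The ordering of the \texttt{contains()} checks matters: as noted in the next section, a filter that is too generic will capture a close variant of a question intended for a different handler.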

Our team was 1st at one point (towards the beginning), but we eventually fell back to within the top 10. This was partly due to us pausing development to
write unit tests and tidy up the code structure. We made this decision because, as the task went on, we felt our code was degrading further and further:
code written earlier with the intention of later clean-up and refactoring was released, tested in `production', and never changed. The trade-off was that
the longer the system was down, the more points we lost. The nature of the game meant that it was optimal to cut a new release each time we wrote a new function (every few
minutes), i.e. continuous deployment. The pause was a strategic mistake that cost us valuable development
minutes, though it would be nice to think it was the right decision for the long-term maintainability of the code. The game should have been treated as a sprint to the end,
continuously grabbing the largest market share of points, and our game-changing mistake demonstrates how coding aggressively can be more advantageous than coding defensively during
the early stages of a startup.
\section{Feedback on the lecture/exercise}
We found this lecture well structured, as it broke down the process of deploying a piece of software into small but detailed procedures.
It covered practical methodologies such as the kanban board, blue-green releasing and split testing, which all help us to organise and prioritise work processes
in order to maintain an error-free yet continuous deployment.

We thought the tutorial exercise was rather enjoyable, as we found it interesting to race against other teams in rapidly extending and refactoring our answering methods.

\end{document}
