\documentclass[titlepage]{article}

\usepackage{tabularx}
\usepackage{fullpage}
\usepackage{graphicx}
\usepackage[pdfborder={0 0 0}]{hyperref}

%\graphicspath{{./gfx/}}

\title{Test Report Document\\Colony Management\\(Working Title)}
\author{CIS*3760 Software Engineering\\
\\
Winter 2012\\
\\
School of Computer Science\\
University of Guelph\\
Guelph, ON, Canada\\
%%\vfill
\\
Team members:\\
Chris Allen, 0703391\\
Douglas Griffith, 0506355\\
Jameson Reed, 0666220\\
\\
Clients:\\
Jeremy Simpson, Ph.D.\\
Melissa Allwood, BSc.H
}

\date{}

\begin{document}
\maketitle
\tableofcontents
\clearpage

\section{Revision History}
\subsection*{Revision 1.0}

Initial release, 2012-03-26

\clearpage

\section{Objectives}

This document describes the testing procedures we have created to test the
Colony Management application. Because this is primarily a user-driven
application, we have focused on automated system testing, with unit tests and 
integration tests as appropriate.
\\

Our unit and integration tests use QUnit, a JavaScript testing framework
developed and maintained by the jQuery Testing Team. It was chosen because it
complements the other technologies used in this project, jQuery and jQuery
Mobile. It allows us to write unit and integration tests in the classic xUnit
style, which increases our productivity. Because we use a white-box testing
strategy for our unit and integration tests, it is also convenient that it lets
us write all of our application code and testing code in the same language.
\\
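The xUnit style mentioned above structures tests as shared fixtures plus small
assertion methods. As a rough illustration of the pattern (shown here in
Python's \texttt{unittest} rather than QUnit, with hypothetical names not taken
from our actual suite):

```python
import unittest

class TagUtilityTest(unittest.TestCase):
    """Illustrative xUnit-style test case; the class, fixture, and
    method names are hypothetical, not from the project's QUnit suite."""

    def setUp(self):
        # Fixture: shared state rebuilt before every test method.
        self.tags = ["A1", "B2"]

    def test_tag_count(self):
        self.assertEqual(len(self.tags), 2)

    def test_tag_lookup(self):
        self.assertIn("A1", self.tags)

if __name__ == "__main__":
    unittest.main()
```

QUnit follows the same shape: \texttt{module()} plays the role of the fixture
setup, and each \texttt{test()} holds its assertions.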

Our system tests use Selenium with the ChromeDriver server, through its Python
language bindings. We are using a black-box testing strategy for our system
tests, so they do not have to be written in the same language as the application
code. Selenium is a browser automation framework that allows you to
simulate user interaction with a web-based application, which is perfect for
testing our user interface and our system as a whole. We considered alternative
automation frameworks, but Selenium seemed the best match for our needs, as it
supports the Google Chrome browser, which is built on the WebKit platform, the
same engine used by the default browsers on our primary target platforms, BBOS 7
and iOS 5.
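A Selenium-driven system test of this kind follows a common shape. The sketch
below is illustrative only (the URL, element id, and page layout are
assumptions, not taken from our test scripts), but it shows the simulated
interaction and the exit-status convention our tests report through:

```python
import sys
import time

def run_browser_steps():
    """Hypothetical Selenium session; requires chromedriver on PATH.
    The URL and element id below are placeholders, not real values."""
    from selenium import webdriver
    driver = webdriver.Chrome()
    try:
        driver.get("http://localhost:8000/")             # assumed dev-server URL
        driver.find_element_by_id("colony-add").click()  # hypothetical id
        time.sleep(1)                                    # simulated wait
    finally:
        driver.quit()

def timed(steps):
    """Run the steps, returning (elapsed seconds, exit status):
    0 on success, non-zero on any failure, as in our test scripts."""
    start = time.time()
    try:
        steps()
        status = 0
    except Exception:
        status = 1
    return time.time() - start, status

if __name__ == "__main__":
    elapsed, status = timed(run_browser_steps)
    print("elapsed: %.1fs" % elapsed)
    sys.exit(status)   # the exit status is the pass/fail signal
```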

\section{Document References}

The initial objectives and goals of the Colony Management application are
outlined in the Requirements Analysis Document (RAD). The RAD documented
the problems the client was facing in managing their mouse colony, and their
idea for a phone application that would provide a solution; it also outlined
the goals they had for the application and the set of features the
application would attempt to provide. Following the RAD, the System Design
Document (SDD) documented the way in which the application would provide the
features discussed in the RAD. The SDD covered the design decisions made,
along with the trade-offs and compromises needed to compensate for the
constraints placed on the project, and produced a structured design of all
the implementation-independent features of the application. Finally, the
Object Design Document (ODD) took the plans laid out in the SDD and mapped
them to the implementation's domain, describing the way in which the
available technologies would be leveraged to best meet the application's
goals within the implementation-imposed constraints.
\\

This Test Report Document (TRD) draws on the three design documents to
determine the criteria that best test the application, and to ensure that
the project's goals have been adequately met.

\section{Test Summary}

We will be primarily testing the functional requirements of the application (see
section 6.2 of the RAD).\\

Our functional tests cover the creation, modification, and deletion of various
data within the application, including: colonies, breeding pairs, litters, mice,
experiments, experiment groups, and interventions.\\

We also test some non-functional requirements (see section 6.3 of the RAD).\\

Our non-functional tests cover some user interface considerations, as well as
performance targets and error handling. Most of the other non-functional
requirements are either only testable on actual deployment devices, or are
subjective constraints (user interface ease of use, for example).

\section{Testing Strategy}

\subsection{Inspecting components}
Each part of the application was developed by at least two group members.
This ensured that the code being written did not stray too far from what was
envisioned for each component. After the application was completed, the code
was reviewed by every member, confirming that each part conformed to its
specification.

\subsection{Unit testing}
Unit testing was performed on the low-level, commonly used utility classes
of the application. Ensuring the reliability of these components allowed
us to efficiently perform top-down integration and system tests with the
confidence that the errors we were discovering were not a result of the
shared utility classes.

\subsection{Integration Testing}
The team chose a top-down approach to integration testing. Starting from the
top and working down to the lower components was faster to write tests for
while still ensuring the reliability of the components. We verified that the
components worked together by exercising their interactions and constantly
probing the data they exchanged to confirm it was passed correctly. Every
component was involved in the integration tests and shown to function
correctly within the system.

\subsection{System Testing}
System testing was used to ensure the functionality and usability of the
system as a whole. We leveraged system testing's strength at exercising the
user interface, in terms of both performance and workflow. Given the
user-oriented nature of the application, we judged system testing to be
exceptionally effective for these tasks.

\section{Test Cases}

\subsection{Colony}

This test ensures that the system is able to successfully perform the following
operations on colonies: creation, modification, and deletion.

\subsection*{Test specification}

This is a black-box test, so it does not call any methods specifically.\\

Success of this test shows that a user is able to:

\begin{itemize}
    \item Create a colony
    \item Edit a colony
    \item Delete a colony
\end{itemize}

\subsection*{Test description}

\begin{description}
    \item[Location] \quad\\
        \texttt{test/colony\_use\_case\_tests.py}
    \item[Means of Control] \quad\\
        Test is performed by the Selenium ChromeDriver.
    \item[Data] \quad
        \begin{description}
            \item[Input Data] \quad\\
                Valid Name, Gender, Comment.
            \item[Input Commands] \quad\\
                Simulated clicks, waits, and keyboard input.
            \item[Output Data] \quad\\
                Time elapsed.
            \item[System Messages] \quad\\
                Exit status.
        \end{description}
    \item[Procedure] \quad\\
        Invoke as: \texttt{python ./colony\_use\_case\_tests.py}
\end{description}

\subsection*{Test analysis report}

\begin{description}
    \item[Function] \quad\\
        Colony: create, edit, delete.
    \item[Performance] \quad\\
        User interface responsiveness.
    \item[Measurable data] \quad
        \begin{description}
            \item[Exit status] \quad\\
                An exit status of 0 (zero) means the test
                completed successfully. Any other value indicates failure.
            \item[Time elapsed] \quad\\
                The time elapsed for a test is a general
                indicator of how responsive the user interface is. It should
                not be compared between test cases, but an increase in the
                average time elapsed for one test case can indicate a
                regression in responsiveness.
        \end{description}
\end{description}

This test usually completes in under 10 seconds. Any longer than that should
be considered a regression.\\

This test helped us find a regression in \texttt{app/js/ListView.js}, where
it was improperly reporting the name of the object to be deleted in the
confirmation dialog.
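Because every test case reports through its exit status, a simple driver can
batch the cases and collect both signals. A minimal sketch (the helper below
is ours for illustration, not part of the test suite):

```python
import subprocess
import sys
import time

def run_case(script):
    """Invoke one test script the way the document describes
    (e.g. python ./colony_use_case_tests.py) and return
    (exit_status, elapsed_seconds). Status 0 means the case
    passed; any other value indicates failure."""
    start = time.time()
    status = subprocess.call([sys.executable, script])
    return status, time.time() - start
```

A nightly run can then compare each case's elapsed time against its usual
ceiling (under 10 seconds for this one) to spot responsiveness regressions.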

\subsection{Mouse}

This test ensures that the system is able to successfully perform the following
operations from the Mouse menu item: view a mouse, edit and save, and edit and
cancel.

\subsection*{Test specification}

This is a black-box test, so it does not call any methods specifically.\\

Success of this test shows that a user is able to:

\begin{itemize}
    \item View a mouse 
    \item Edit a mouse and save the changes 
    \item Edit a mouse and cancel the changes 
\end{itemize}

\subsection*{Test description}

\begin{description}
    \item[Location] \quad\\
        \texttt{test/mouse\_edit.py}
    \item[Means of Control] \quad\\
        Test is performed by the Selenium ChromeDriver.
    \item[Data] \quad
        \begin{description}
            \item[Input Data] \quad\\
                Valid Tag Name, Cage, Comment.
            \item[Input Commands] \quad\\
                Simulated clicks, waits, and keyboard input.
            \item[Output Data] \quad\\
                Time elapsed.
            \item[System Messages] \quad\\
                Exit status.
        \end{description}
    \item[Procedure] \quad\\
        Invoke as: \texttt{python ./mouse\_edit.py}
\end{description}

\subsection*{Test analysis report}

\begin{description}
    \item[Function] \quad\\
        Mouse: view, edit and save, edit and cancel.
    \item[Performance] \quad\\
        User interface responsiveness.
    \item[Measurable data] \quad
        \begin{description}
            \item[Exit status] \quad\\
                An exit status of 0 (zero) means the test
                completed successfully. Any other value indicates failure.
            \item[Time elapsed] \quad\\
                The time elapsed for a test is a general
                indicator of how responsive the user interface is. It should
                not be compared between test cases, but an increase in the
                average time elapsed for one test case can indicate a
                regression in responsiveness.
        \end{description}
\end{description}

This test usually completes in under 15 seconds. Any longer than that should
be considered a regression.\\


\subsection{Experiment}

This test ensures that the system is able to successfully perform the following
operations on experiments: creation, modification, and deletion.

\subsection*{Test specification}

This is a black-box test, so it does not call any methods specifically.\\

Success of this test shows that a user is able to:

\begin{itemize}
    \item Create an experiment
    \item Edit an experiment
    \item Delete an experiment
\end{itemize}

\subsection*{Test description}

\begin{description}
    \item[Location] \quad\\
        \texttt{test/experiment\_use\_case\_test.py}
    \item[Means of Control] \quad\\
        Test is performed by the Selenium ChromeDriver.
    \item[Data] \quad
        \begin{description}
            \item[Input Data] \quad\\
                Valid Name, Priority, Comment.
            \item[Input Commands] \quad\\
                Simulated clicks, waits, and keyboard input.
            \item[Output Data] \quad\\
                Time elapsed.
            \item[System Messages] \quad\\
                Exit status.
        \end{description}
    \item[Procedure] \quad\\
        Invoke as: \texttt{python ./experiment\_use\_case\_test.py}
\end{description}

\subsection*{Test analysis report}

\begin{description}
    \item[Function] \quad\\
        Experiment: create, edit, delete.
    \item[Performance] \quad\\
        User interface responsiveness.
    \item[Measurable data] \quad
        \begin{description}
            \item[Exit status] \quad\\
                An exit status of 0 (zero) means the test
                completed successfully. Any other value indicates failure.
            \item[Time elapsed] \quad\\
                The time elapsed for a test is a general
                indicator of how responsive the user interface is. It should
                not be compared between test cases, but an increase in the
                average time elapsed for one test case can indicate a
                regression in responsiveness.
        \end{description}
\end{description}

This test usually completes in under 15 seconds. Any longer than that should
be considered a regression.\\

This test helped us find a regression in \texttt{app/js/Controller.js}, where
it had ColonyMenu as the `Back' page from ExperimentDetail rather than the
correct page, ExperimentView.

\subsection{ExperimentGroup}

This test ensures that the system is able to successfully perform the following
operations on experiment groups: creation, modification, and deletion.

\subsection*{Test specification}

This is a black-box test, so it does not call any methods specifically.\\

Success of this test shows that a user is able to:

\begin{itemize}
    \item Create an experiment group
    \item Edit an experiment group
    \item Delete an experiment group
\end{itemize}

\subsection*{Test description}

\begin{description}
    \item[Location] \quad\\
        \texttt{test/experimentgroup\_use\_case\_test.py}
    \item[Means of Control] \quad\\
        Test is performed by the Selenium ChromeDriver.
    \item[Data] \quad
        \begin{description}
            \item[Input Data] \quad\\
                Valid Name, Lower Age Limit, Upper Age Limit, Gender, Genotype, Quantity, and Comment.
            \item[Input Commands] \quad\\
                Simulated clicks, waits, and keyboard input.
            \item[Output Data] \quad\\
                Time elapsed.
            \item[System Messages] \quad\\
                Exit status.
        \end{description}
    \item[Procedure] \quad\\
        Invoke as: \texttt{python ./experimentgroup\_use\_case\_test.py}
\end{description}

\subsection*{Test analysis report}

\begin{description}
    \item[Function] \quad\\
        ExperimentGroup: create, edit, delete.
    \item[Performance] \quad\\
        User interface responsiveness.
    \item[Measurable data] \quad
        \begin{description}
            \item[Exit status] \quad\\
                An exit status of 0 (zero) means the test
                completed successfully. Any other value indicates failure.
            \item[Time elapsed] \quad\\
                The time elapsed for a test is a general
                indicator of how responsive the user interface is. It should
                not be compared between test cases, but an increase in the
                average time elapsed for one test case can indicate a
                regression in responsiveness.
        \end{description}
\end{description}

This test usually completes in under 15 seconds. Any longer than that should
be considered a regression.

\subsection{Litter}

This test ensures that the system is able to successfully perform the following
operations on the litters: creation, modification, and deletion. 

\subsection*{Test specification}

This is a black-box test, so it does not call any methods specifically.\\

Success of this test shows that a user is able to:

\begin{itemize}
    \item Create a litter with a breeding pair
    \item Create a litter without a breeding pair
    \item Modify a litter
    \item Delete a litter
\end{itemize}

\subsection*{Test description}

\begin{description}
    \item[Location] \quad\\
        \texttt{test/litter\_use\_case\_tests.py}
    \item[Means of Control] \quad\\
        Test is performed by the Selenium ChromeDriver.
    \item[Data] \quad
        \begin{description}
            \item[Input Data] \quad\\
                Valid Date Of Birth, Size, Comment.
            \item[Input Commands] \quad\\
                Simulated clicks, waits, and keyboard input.
            \item[Output Data] \quad\\
                Time elapsed.
            \item[System Messages] \quad\\
                Exit status.
        \end{description}
    \item[Procedure] \quad\\
        Invoke as: \texttt{python ./litter\_use\_case\_tests.py}
\end{description}

\subsection*{Test analysis report}

\begin{description}
    \item[Function] \quad\\
        Litter: create with breeding pair, create without breeding pair, edit, delete.
    \item[Performance] \quad\\
        User interface responsiveness.
    \item[Measurable data] \quad
        \begin{description}
            \item[Exit status] \quad\\
                An exit status of 0 (zero) means the test
                completed successfully. Any other value indicates failure.
            \item[Time elapsed] \quad\\
                The time elapsed for a test is a general
                indicator of how responsive the user interface is. It should
                not be compared between test cases, but an increase in the
                average time elapsed for one test case can indicate a
                regression in responsiveness.
        \end{description}
\end{description}

This test usually completes in 20 seconds. Any longer than that should
be considered a regression.\\

This test helped us find visual issues in creating and modifying litters.
The first issue we came across was that the calendar date used to represent
the litter had an additional ``1'' appended to its name. For example, if
the litter was born on 2012-01-01, it would be displayed as 2012-01-011
instead. Another visual issue was that the editing windows displayed
``title'' instead of something relevant such as ``Edit Litter''.

\subsection{BreedingPair}

This test ensures that the system is able to successfully perform the following
operations on breeding pairs: creation, completion, and modification.

\subsection*{Test specification}

This is a black-box test, so it does not call any methods specifically.\\

Success of this test shows that a user is able to:

\begin{itemize}
    \item Create a new breeding pair with a mouse of their choosing 
    \item Add a second mouse to the pair in order to complete it
    \item Add a new mouse to an existing pair in order to quickly create a new pair  
\end{itemize}

\subsection*{Test description}

\begin{description}
    \item[Location] \quad\\
        \texttt{test/breedingpair\_use\_case\_test.py}
    \item[Means of Control] \quad\\
        Test is performed by the Selenium ChromeDriver.
    \item[Data] \quad
        \begin{description}
            \item[Input Data] \quad\\
                Valid Comment.
            \item[Input Commands] \quad\\
                Simulated clicks, waits, and keyboard input.
            \item[Output Data] \quad\\
                Time elapsed.
            \item[System Messages] \quad\\
                Exit status.
        \end{description}
    \item[Procedure] \quad\\
        Invoke as: \texttt{python ./breedingpair\_use\_case\_test.py}
\end{description}

\subsection*{Test analysis report}

\begin{description}
    \item[Function] \quad\\
        Breeding Pair: create, complete, modify.
    \item[Performance] \quad\\
        User interface responsiveness.
    \item[Measurable data] \quad
        \begin{description}
            \item[Exit status] \quad\\
                An exit status of 0 (zero) means the test
                completed successfully. Any other value indicates failure.
            \item[Time elapsed] \quad\\
                The time elapsed for a test is a general
                indicator of how responsive the user interface is. It should
                not be compared between test cases, but an increase in the
                average time elapsed for one test case can indicate a
                regression in responsiveness.
        \end{description}
\end{description}

This test usually completes in under 30 seconds. Any longer than that should
be considered a regression.\\

This test helped us find a regression in \texttt{app/js/Controller.js}, where
it had ColonyMenu as the `Back' page from BreedingPairDetail rather than the
correct page, BreedingPairList.

\subsection{Calendar}

This test ensures that the calendar renders events correctly, that events
appear on the right dates, and that week mode does not affect event rendering.

\subsection*{Test specification}

This is a black-box test, so it does not call any methods specifically.\\

Success of this test shows that a user is able to:

\begin{itemize}
    \item Render the correct number of events for each month/week
    \item Verify that events appear on their correct days
    \item Ensure events are correctly rendered in week mode
\end{itemize}

\subsection*{Test description}

\begin{description}
    \item[Location] \quad\\
        \texttt{test/calendar\_use\_case\_tests.py}
    \item[Means of Control] \quad\\
        Test is performed by the Selenium ChromeDriver.
    \item[Data] \quad
        \begin{description}
            \item[Input Data] \quad\\
                Clean Database and rescheduled events.
            \item[Input Commands] \quad\\
                Simulated clicks and waits.
            \item[Output Data] \quad\\
                Time elapsed.
            \item[System Messages] \quad\\
                Exit status.
        \end{description}
    \item[Procedure] \quad\\
        Invoke as: \texttt{python ./calendar\_use\_case\_tests.py}
\end{description}

\subsection*{Test analysis report}

\begin{description}
    \item[Function] \quad\\
        Calendar: show proper rendering of events, show correct events, ensure events render in week mode.
    \item[Performance] \quad\\
        User interface responsiveness.
    \item[Measurable data] \quad
        \begin{description}
            \item[Exit status] \quad\\
                An exit status of 0 (zero) means the test
                completed successfully. Any other value indicates failure.
            \item[Time elapsed] \quad\\
                The time elapsed for a test is a general
                indicator of how responsive the user interface is. It should
                not be compared between test cases, but an increase in the
                average time elapsed for one test case can indicate a
                regression in responsiveness.
        \end{description}
\end{description}

This test usually completes in 6.5 seconds. Any longer than that should
be considered a regression.\\

This test helped when it came to debugging the FullCalendar plugin. When a
user would back out of the calendar view and then re-enter it, all of the
events were duplicated. This test highlighted an easily overlooked problem.

\subsection{Genotype}

This test ensures that the system is able to successfully perform the following
operations on genotypes: creation and modification.

\subsection*{Test specification}

This is a black-box test, so it does not call any methods specifically.\\

Success of this test shows that a user is able to:

\begin{itemize}
    \item Create a genotype
    \item Edit a genotype
\end{itemize}

\subsection*{Test description}

\begin{description}
    \item[Location] \quad\\
        \texttt{test/genotype\_use\_case\_test.py}
    \item[Means of Control] \quad\\
        Test is performed by the Selenium ChromeDriver.
    \item[Data] \quad
        \begin{description}
            \item[Input Data] \quad\\
                Valid Name and Comment.
            \item[Input Commands] \quad\\
                Simulated clicks, waits, and keyboard input.
            \item[Output Data] \quad\\
                Time elapsed.
            \item[System Messages] \quad\\
                Exit status.
        \end{description}
    \item[Procedure] \quad\\
        Invoke as: \texttt{python ./genotype\_use\_case\_test.py}
\end{description}

\subsection*{Test analysis report}

\begin{description}
    \item[Function] \quad\\
        Genotype: create, edit.
    \item[Performance] \quad\\
        User interface responsiveness.
    \item[Measurable data] \quad
        \begin{description}
            \item[Exit status] \quad\\
                An exit status of 0 (zero) means the test
                completed successfully. Any other value indicates failure.
            \item[Time elapsed] \quad\\
                The time elapsed for a test is a general
                indicator of how responsive the user interface is. It should
                not be compared between test cases, but an increase in the
                average time elapsed for one test case can indicate a
                regression in responsiveness.
        \end{description}
\end{description}

This test usually completes in under 10 seconds. Any longer than that should
be considered a regression.

\subsection{Scheduling}

This test ensures that the system is able to properly schedule the mice from
the sample data to the appropriate experiment groups, and that the user is
able to access this functionality from the interface.

\subsection*{Test specification}

This is a black-box test, so it does not call any methods specifically.\\

Success of this test shows that the user is able to:

\begin{itemize}
    \item Reschedule a colony
\end{itemize}

\subsection*{Test description}

\begin{description}
    \item[Location] \quad\\
        \texttt{test/reschedule\_test.py}
    \item[Means of Control] \quad\\
        Test is performed by the Selenium ChromeDriver.
    \item[Data] \quad
        \begin{description}
            \item[Input Data] \quad\\
                Sample data from application.
            \item[Input Commands] \quad\\
                Simulated clicks and waits.
            \item[Output Data] \quad\\
                Time elapsed.
            \item[System Messages] \quad\\
                Exit status.
        \end{description}
    \item[Procedure] \quad\\
        Invoke as: \texttt{python ./reschedule\_test.py}
\end{description}

\subsection*{Test analysis report}

\begin{description}
    \item[Function] \quad\\
        Scheduling.
    \item[Performance] \quad\\
        User interface responsiveness.
    \item[Measurable data] \quad
        \begin{description}
            \item[Exit status] \quad\\
                An exit status of 0 (zero) means the test
                completed successfully. Any other value indicates failure.
            \item[Time elapsed] \quad\\
                The time elapsed for a test is a general
                indicator of how responsive the user interface is. It should
                not be compared between test cases, but an increase in the
                average time elapsed for one test case can indicate a
                regression in responsiveness.
        \end{description}
\end{description}

This test usually completes in under 15 seconds. Any longer than that should
be considered a regression.

\subsection{Intervention}

This test ensures that users are able to view and save changes to interventions
in the system.

\subsection*{Test specification}

This is a black-box test, so it does not call any methods specifically.\\

Success of this test shows that the user is able to:

\begin{itemize}
    \item Modify an intervention
\end{itemize}

\subsection*{Test description}

\begin{description}
    \item[Location] \quad\\
        \texttt{test/intervention\_test.py}
    \item[Means of Control] \quad\\
        Test is performed by the Selenium ChromeDriver.
    \item[Data] \quad
        \begin{description}
            \item[Input Data] \quad\\
                Sample data from application, valid StartDate, EndDate, active status, and Comment.
            \item[Input Commands] \quad\\
                Simulated clicks, waits, and keyboard input.
            \item[Output Data] \quad\\
                Time elapsed.
            \item[System Messages] \quad\\
                Exit status.
        \end{description}
    \item[Procedure] \quad\\
        Invoke as: \texttt{python ./intervention\_test.py}
\end{description}

\subsection*{Test analysis report}

\begin{description}
    \item[Function] \quad\\
        Intervention modification.
    \item[Performance] \quad\\
        User interface responsiveness.
    \item[Measurable data] \quad
        \begin{description}
            \item[Exit status] \quad\\
                An exit status of 0 (zero) means the test
                completed successfully. Any other value indicates failure.
            \item[Time elapsed] \quad\\
                The time elapsed for a test is a general
                indicator of how responsive the user interface is. It should
                not be compared between test cases, but an increase in the
                average time elapsed for one test case can indicate a
                regression in responsiveness.
        \end{description}
\end{description}

This test usually completes in under 10 seconds. Any longer than that should
be considered a regression.

\subsection{Mouse creation}

This test ensures that the system is able to successfully
add a mouse to the system through the litter functionality  

\subsection*{Test specification}

This is a black-box test, so it does not call any methods specifically.\\

Success of this test shows that a user is able to: 

\begin{itemize}
    \item Add a mouse to the system by using a litter 
\end{itemize}

\subsection*{Test description}

\begin{description}
    \item[Location] \quad\\
        \texttt{test/mouse\_creation\_use\_case\_test.py}
    \item[Means of Control] \quad\\
        Test is performed by the Selenium ChromeDriver.
    \item[Data] \quad
        \begin{description}
            \item[Input Data] \quad\\
                Valid Tag, Cage, Comment.
            \item[Input Commands] \quad\\
                Simulated clicks, waits, and keyboard input.
            \item[Output Data] \quad\\
                Time elapsed.
            \item[System Messages] \quad\\
                Exit status.
        \end{description}
    \item[Procedure] \quad\\
        Invoke as: \texttt{python ./mouse\_creation\_use\_case\_test.py}
\end{description}

\subsection*{Test analysis report}

\begin{description}
    \item[Function] \quad\\
        Mouse creation.
    \item[Performance] \quad\\
        User interface responsiveness.
    \item[Measurable data] \quad
        \begin{description}
            \item[Exit status] \quad\\
                An exit status of 0 (zero) means the test
                completed successfully. Any other value indicates failure.
            \item[Time elapsed] \quad\\
                The time elapsed for a test is a general
                indicator of how responsive the user interface is. It should
                not be compared between test cases, but an increase in the
                average time elapsed for one test case can indicate a
                regression in responsiveness.
        \end{description}
\end{description}

This test usually completes in under 10 seconds. Any longer than that should
be considered a regression.\\
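The exit status and elapsed time described above are reported by the test
scripts themselves, but the same data can be gathered by a small wrapper.
The following is a minimal Python sketch only; the wrapped command below is a
stand-in, not one of the actual test scripts:

```python
import subprocess
import sys
import time

def run_timed(argv, threshold=10.0):
    """Run a test script; report its exit status, elapsed time, and
    whether it exceeded the regression threshold (in seconds)."""
    start = time.time()
    status = subprocess.call(argv)
    elapsed = time.time() - start
    return status, elapsed, elapsed > threshold

# Stand-in command; a real invocation would be e.g.
# [sys.executable, "mouse_creation_use_case_test.py"]
status, elapsed, regressed = run_timed([sys.executable, "-c", "pass"])
print(status, regressed)  # prints "0 False"
```

A nonzero exit status marks a failed run, and an elapsed time over the
10-second threshold flags a possible responsiveness regression.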

\subsection{Mouse--Intervention link}

This test ensures that the functionality for linking between an intervention
and a mouse works correctly.

\subsection*{Test specification}

This is a black-box test, so it does not call any application methods directly.\\

Success of this test shows that a user is able to: 

\begin{itemize}
    \item Jump to a Mouse Detail view from an Intervention Detail view
    \item Jump to an Intervention Detail view from a Mouse Detail view
\end{itemize}

\subsection*{Test description}

\begin{description}
    \item[Location] \quad\\
        \texttt{test/mouse\_intervention\_connection\_use\_case\_test.py}
    \item[Means of Control] \quad\\
        Test is performed by the Selenium ChromeDriver.
    \item[Data] \quad
        \begin{description}
            \item[Input Commands] \quad\\
                Simulated clicks, waits, and keyboard input.
            \item[Output Data] \quad\\
                Time elapsed.
            \item[System Messages] \quad\\
                Exit status.
        \end{description}
    \item[Procedure] \quad\\
        Invoke as: \texttt{python ./mouse\_intervention\_connection\_use\_case\_test.py}
\end{description}

\subsection*{Test analysis report}

\begin{description}
    \item[Function] \quad\\
        Mouse Intervention link 
    \item[Performance] \quad\\
        User interface responsiveness.
    \item[Measurable data] \quad
        \begin{description}
            \item[Exit status] \quad\\
                An exit status of 0 (zero) means the test
                completed successfully. Any other value indicates failure.
            \item[Time elapsed] \quad\\
                The time elapsed for a test is a general
                indicator of how responsive the user interface is. It should
                not be compared between test cases, but an increase in the
                average time elapsed for one test case can indicate a
                regression in responsiveness.
        \end{description}
\end{description}

This test usually completes in under 10 seconds. Any longer than that should
be considered a regression.\\

\subsection{ListView addListItem}

This test verifies the basic functionality of the ListView UI creation utility class.

\subsection*{Test specification}

This is a white-box test; it exercises the functionality of a complex method.\\

Success of this test shows that a ListView object is able to:

\begin{itemize}
    \item Have items added to it 
    \item Correctly store the information provided with the items
    \item Return a list of its stored items 
\end{itemize}

\subsection*{Test description}

\begin{description}
    \item[Location] \quad\\
        \texttt{test\_testunits.html}
    \item[Means of Control] \quad\\
        Test is performed by QUnit.
    \item[Data] \quad
        \begin{description}
            \item[Input Data] \quad\\
                UniqueID, label.
            \item[Methods Tested] \quad\\
                \texttt{agr.ListView()},
                \texttt{ListView.addListItem()},
                \texttt{ListView.dump()}
            \item[Output Data] \quad\\
                Data retrieved from \texttt{ListView.dump()}
            \item[System Messages] \quad\\
                Assertion results
        \end{description}
    \item[Procedure] \quad\\
Open \texttt{test\_testunits.html} in the Chrome browser.
\end{description}

\subsection*{Test analysis report}

\begin{description}
    \item[Function] \quad\\
ListView: initialize, addListItem, dump.
    \item[Measurable data] \quad
        \begin{description}
            \item[Assertion Results] \quad\\
                Whether the items returned by the dump contain the correct UID and Label
        \end{description}
\end{description}
A passing status for this test indicates that the bottom-level functionality of
the ListView is functioning correctly.
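The contract this test asserts can be illustrated with a short sketch. The
real class is written in JavaScript (\texttt{agr.ListView}); the Python below
merely mirrors the documented add/dump behaviour, and its names are
illustrative:

```python
class ListViewSketch:
    """Python sketch of the documented ListView add/dump contract.
    The real class is JavaScript (agr.ListView); names are illustrative."""

    def __init__(self):
        self._items = []

    def add_list_item(self, uid, label):
        # Store the information provided with the item.
        self._items.append({"uid": uid, "label": label})

    def dump(self):
        # Return a list of the stored items.
        return list(self._items)

lv = ListViewSketch()
lv.add_list_item("m-001", "Mouse 001")
lv.add_list_item("m-002", "Mouse 002")
print([item["uid"] for item in lv.dump()])  # prints "['m-001', 'm-002']"
```

The QUnit assertions check exactly this: that items added with a UID and
label come back unchanged from the dump.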

\subsection{ListView append}

This test ensures that the HTML list items created by ListView have their properties set correctly.

\subsection*{Test specification}

This is a white-box test; it exercises the functionality of a complex method.\\

Success of this test shows that a ListView object is able to:

\begin{itemize}
    \item Generate an HTML list item for each item it contains
    \item Set the inner HTML of each item correctly
    \item Set the link correctly for an enabled item
    \item Set the onclick function correctly for an enabled item
    \item Ensure no onclick is set for a disabled item
    \item Ensure no link is set for a disabled item
\end{itemize}

\subsection*{Test description}

\begin{description}
    \item[Location] \quad\\
        \texttt{test\_testunits.html}
    \item[Means of Control] \quad\\
        Test is performed by QUnit.
    \item[Data] \quad
        \begin{description}
            \item[Input Data] \quad\\
                ListItemOne: valid uid, valid HTML, onclick function, default link\\
                ListItemTwo: valid uid, valid HTML, onclick function, default link, disabled
            \item[Methods Tested] \quad\\
                \texttt{agr.ListView()},
                \texttt{ListView.addListItem()},
                \texttt{ListView.render()},
                \texttt{ListView.append()}
            \item[Output Data] \quad\\
                State of the modified HTML list
            \item[System Messages] \quad\\
                Assertion results
        \end{description}
    \item[Procedure] \quad\\
Open \texttt{test\_testunits.html} in the Chrome browser.
\end{description}

\subsection*{Test analysis report}

\begin{description}
    \item[Function] \quad\\
        ListView: addListItem, render, append.
    \item[Measurable data] \quad
        \begin{description}
            \item[Assertion Results] \quad
                \begin{itemize}
                    \item Check that the render created two list items
                    \item Check that each contains the correct inner HTML
                    \item Check that the link is set for the enabled item
                    \item Check that the onclick is set for the enabled item
                    \item Check that the onclick is not set for the disabled item
                    \item Check that the link is not set for the disabled item
                \end{itemize}
        \end{description}
\end{description}
A passing status for this test indicates that the ListView class can be relied on
to correctly generate an HTML list in its standard mode.

This test helped to locate a bug where the link was still being set for disabled items.
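The enabled/disabled distinction these assertions cover can be sketched as a
simple rendering rule. This is an illustrative Python model only, not the
actual JavaScript implementation:

```python
def render_item(item):
    """Sketch of the rendering rule the test asserts: enabled items get
    a link and an onclick handler; disabled items get neither (the bug
    found by this test was a link leaking onto disabled items)."""
    rendered = {"html": item["html"]}
    if not item.get("disabled", False):
        rendered["link"] = item.get("link", "#")
        rendered["onclick"] = item.get("onclick")
    return rendered

enabled = render_item({"html": "<b>one</b>", "link": "#detail",
                       "onclick": lambda: None})
disabled = render_item({"html": "<b>two</b>", "link": "#detail",
                        "onclick": lambda: None, "disabled": True})
print("link" in enabled, "link" in disabled)  # prints "True False"
```

Before the bug fix, the sketch's \texttt{if} guard would effectively have been
missing for the link, so disabled items still received one.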

\subsection{ListView editModeAppend}

This test ensures that the HTML list items created by a ListView in edit
mode have their properties set correctly.

\subsection*{Test specification}

This is a white-box test; it exercises the functionality of a complex method.\\

Success of this test shows that a ListView object is able to:

\begin{itemize}
    \item Generate an edit mode HTML list item for each item it contains
    \item Properly set any available edit page links
    \item Properly set any edit mode click functions
    \item Create two clickables for each item, one being a delete button
\end{itemize}

\subsection*{Test description}

\begin{description}
    \item[Location] \quad\\
        \texttt{test\_testunits.html}
    \item[Means of Control] \quad\\
        Test is performed by QUnit.
    \item[Data] \quad
        \begin{description}
            \item[Input Data] \quad\\
                ListItemOne: valid uid, valid HTML, onclick function, onclick edit function, edit link, default link\\
                ListItemTwo: valid uid, valid HTML, onclick function, default link, disabled
            \item[Methods Tested] \quad\\
                \texttt{agr.ListView()},
                \texttt{ListView.addListItem()},
                \texttt{ListView.editOn()},
                \texttt{ListView.render()},
                \texttt{ListView.editModeAppend()}
            \item[Output Data] \quad\\
                State of the modified HTML list
            \item[System Messages] \quad\\
                Assertion results
        \end{description}
    \item[Procedure] \quad\\
Open \texttt{test\_testunits.html} in the Chrome browser.
\end{description}

\subsection*{Test analysis report}

\begin{description}
    \item[Function] \quad\\
        ListView: addListItem, render, editModeAppend.
    \item[Measurable data] \quad
        \begin{description}
            \item[Assertion Results] \quad
                \begin{itemize}
                    \item Check that the render created two list items
                    \item Check that the render created four clickables: two edits and two deletes
                    \item Check that the edit link is set for the enabled item
                    \item Check that the edit click is set for the enabled item
                    \item Check that the edit link is not set for the disabled item
                    \item Check that the delete button is available and usable on both items
                    \item Check that the inner HTML of each item's edit clickable is set properly
                \end{itemize}
        \end{description}
\end{description}
A passing status for this test indicates that the ListView class can be relied on
to correctly generate an HTML list in its edit mode.
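The edit-mode rule the assertions cover can likewise be sketched: every item
yields two clickables (an edit clickable and a delete button), the delete
button is always usable, and the edit link is only set for enabled items.
Again this is an illustrative Python model; the real logic lives in the
JavaScript \texttt{ListView.editModeAppend()}:

```python
def render_edit_item(item):
    """Sketch of the edit-mode rendering rule: two clickables per item;
    delete is always usable, the edit link only exists when enabled.
    Illustrative model only, not the actual JavaScript implementation."""
    edit = {"html": item["html"]}
    if not item.get("disabled", False):
        edit["link"] = item.get("edit_link", "#")
        edit["onclick"] = item.get("edit_onclick")
    delete = {"usable": True}  # the delete button is available on every item
    return [edit, delete]

enabled = render_edit_item({"html": "x", "edit_link": "#edit",
                            "edit_onclick": lambda: None})
disabled = render_edit_item({"html": "y", "disabled": True})
clickables = enabled + disabled
print(len(clickables), "link" in enabled[0], "link" in disabled[0])
# prints "4 True False"
```

Two items therefore produce four clickables, matching the "two edits and two
deletes" assertion in the report above.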


\section{Integration Testing}

Integration testing was performed throughout the development process as each module of the application became
ready for testing. To test only the integration of the targeted modules with each other, the unwanted
modules' source code and HTML files were simply not included in the test environment. The integration tests were
performed as manually driven top-down tests focusing on the boundaries of the included modules. These tests
frequently uncovered errors in the application's state-keeping mechanism, and were an excellent tool for
ensuring that the modules were correctly passing and receiving state from each other, especially when they had
been developed independently. The discovery of these boundary issues led to quick communication between the
developers of the modules, who identified how their interpretations of the design had differed and resolved
the issue.

\section{System Testing}

System testing was used extensively during the testing of the colony management application.
It was performed both manually and automatically throughout the development process, as one of the preferred
testing methods. The manual system tests were typically performed by the developers alongside their development
work, against a common set of test data based on an example set provided by the clients. The automated tests
were run using Selenium and the ChromeDriver server, against the same common set of test data used for the
manual tests. The system tests were a great resource for ensuring that all of the existing functionality of
the system continued to perform as expected as new features were added and the application grew more complex.

\section{Test Materials}

The tests have been successfully run on Windows 7 Professional 64-bit, Debian GNU/Linux, and Arch GNU/Linux. Instructions will be provided for Windows 7 and Debian GNU/Linux.\\

\subsection*{Windows 7}

\begin{enumerate}
    \item Download and install Google Chrome
        (\url{https://www.google.com/chrome})
    \item Download and install Python 2.7.2
        (\url{http://www.python.org/download/})
    \item Download and install setuptools 0.6c11 for Python 2.7
        (\url{http://pypi.python.org/pypi/setuptools})
    \item Add \texttt{C:\char`\\Python27\char`\\Scripts}
        to your system PATH.
        (\url{http://geekswithblogs.net/renso/archive/2009/10/21/how-to-set-the-windows-path-in-windows-7.aspx})
    \item Open a Command Prompt
        (\url{http://www.sevenforums.com/tutorials/947-command-prompt.html})
    \item Execute the following command: \texttt{easy\_install pip}
    \item Execute the following command: \texttt{pip install selenium}
    \item Download the appropriate ChromeDriver executable for your system
        (\url{http://code.google.com/p/chromedriver/downloads/list})
    \item Extract the executable and place it in the \texttt{test/} directory
        of the application.
    \item Run the tests from a Command Prompt like so:
        \texttt{python .\char`\\foo\_test.py}
\end{enumerate}

\subsection*{Debian GNU/Linux}

This should also work with derivative works, such as Ubuntu.

\begin{enumerate}
    \item Download and install Google Chrome
        (\url{https://www.google.com/chrome})
    \item Open a Terminal, or shell prompt.
    \item Execute the following command:
        \texttt{sudo apt-get install python-pip}
    \item Execute the following command:
        \texttt{sudo pip install selenium}
    \item Download the appropriate ChromeDriver executable for your system
        (\url{http://code.google.com/p/chromedriver/downloads/list})
    \item Extract the executable and place it in the \texttt{test/} directory
        of the application.
    \item Run the tests from a Terminal or shell prompt like so:
        \texttt{python ./foo\_test.py}
\end{enumerate}

\clearpage
\section{Glossary}

\begin{description}
\item[BBOS ---] BlackBerry Operating System, the operating system present on modern BlackBerry Phones.
\item[Breeding Pair ---] A pair of mice designated for breeding purposes. Kept aside from the rest of the colony.
\item[Colony ---] A colony is a group of mice which will be monitored and used to breed mice fitting desired criteria for experimental use.
\item[Control Group ---] Control groups are used by experiments in order to obtain the desired mix of mice traits. An example of this would be an experiment needing a group of ten females, and another group of eight males.
\item[Database ---] Collects and retains data used by [Application]; it remains on the phone until the application is deleted.
\item[DOB ---] Date of Birth.
\item[Duration ---] An amount of time after the researcher has started a mouse in the experiment before they have to check up on them at the finish.
\item[Experiment ---] A test or series thereof which the researchers would like to accomplish using the mice in their colony.
\item[Genotype ---] A classification of the mice used in creating control groups for experiments, mice genotypes are obtained by sending samples to a test lab.
\item[iOS ---] Apple's iPhone, iPad, and iPod operating system.
\item[Litter ---] A group of pups from a single birth. 
\item[Peer-To-Peer ---] Also known as P2P where devices make direct contact with one another instead of going through a medium like a file server.
\item[PhoneGap ---] A cross platform mobile development framework which allows developers to program in web languages and renders them into the native language of the device.
\item[Pup ---] A newborn mouse.
\item[Researcher ---] The person maintaining the colony and performing experiments.
\item[Research Lab ---] The location where the colonies are kept, and work with the mice is performed.
\item[Test Lab ---] Refers to the location mice samples are sent for genotype analysis, or the room in which the colonies are kept.
\end{description}

\end{document}
