% !TEX TS-program = pdflatex
% !TEX encoding = UTF-8 Unicode

% This is a simple template for a LaTeX document using the "article" class.
% See "book", "report", "letter" for other types of document.

\documentclass[11pt]{article} % use larger type; default would be 10pt

\usepackage[utf8]{inputenc} % set input encoding (not needed with XeLaTeX)
\usepackage{multirow}
\usepackage{slashbox}
\usepackage{float}
%%% Examples of Article customizations
% These packages are optional, depending whether you want the features they provide.
% See the LaTeX Companion or other references for full information.

%%% PAGE DIMENSIONS
\usepackage{geometry} % to change the page dimensions
\geometry{a4paper} % or letterpaper (US) or a5paper or....
% \geometry{margin=2in} % for example, change the margins to 2 inches all round
% \geometry{landscape} % set up the page for landscape
%   read geometry.pdf for detailed page layout information

\usepackage{graphicx} % support the \includegraphics command and options

% \usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent

%%% PACKAGES
\usepackage{booktabs} % for much better looking tables
\usepackage{array} % for better arrays (eg matrices) in maths
\usepackage{paralist} % very flexible & customisable lists (eg. enumerate/itemize, etc.)
\usepackage{verbatim} % adds environment for commenting out blocks of text & for better verbatim
\usepackage{subfig} % make it possible to include more than one captioned figure/table in a single float
% These packages are all incorporated in the memoir class to one degree or another...

%%% HEADERS & FOOTERS
\usepackage{fancyhdr} % This should be set AFTER setting up the page geometry
\pagestyle{fancy} % options: empty , plain , fancy
\renewcommand{\headrulewidth}{0pt} % customise the layout...
\lhead{}\chead{}\rhead{}
\lfoot{}\cfoot{\thepage}\rfoot{}

%%% SECTION TITLE APPEARANCE
\usepackage{sectsty}
\allsectionsfont{\sffamily\mdseries\upshape} % (See the fntguide.pdf for font help)
% (This matches ConTeXt defaults)

%%% ToC (table of contents) APPEARANCE
\usepackage[nottoc,notlof,notlot]{tocbibind} % Put the bibliography in the ToC
\usepackage[titles,subfigure]{tocloft} % Alter the style of the Table of Contents
\renewcommand{\cftsecfont}{\rmfamily\mdseries\upshape}
\renewcommand{\cftsecpagefont}{\rmfamily\mdseries\upshape} % No bold!

%%% END Article customizations

%%% The "real" document content comes below...

\title{Multiview and Multitask Machine Learning Project}
\author{Behrouz, Ksenia}
%\date{} % Activate to display a given date or no date (if empty),
         % otherwise the current date is printed 

\begin{document}
\maketitle

\section{Multiview}




\section{Multitask}

\subsection{Error Rate}

The tables below summarize the accuracy of each method. Each table has 7 rows and 9 columns: the columns indicate the tasks and the rows the number of training items used. All values are averages over 5 runs of the algorithm.

The first table shows the accuracy of ``Relevant Subtask Learning'' (RSL). The accuracy differs from task to task, and for most tasks the general trend is that too few or too many training items lower the accuracy, due to underfitting and overfitting respectively. The highest accuracy was achieved with task 2 as the task of interest and 10 training samples; of course, this does not prove that this setting is the best, since the training and test sets are split randomly.

\begin{table}[H]
\caption{RSL Accuracy} \label{tab:rsl}
\begin{center}
\begin{tabular}{|l||*{9}{c|}}\hline
\backslashbox{Training}{Task}
  &1&2&3&4&5&6&7&8&9 \\ \hline
2 &  0.8421 & 0.7895 & 0.8289 & 0.7368 & 0.7368 & 0.8684 & 0.9079 & 0.8289 & 0.7368\\ \hline
4 & 0.8472 & 0.7639 & 0.8333 & 0.7361 & 0.9167 & 0.9167 & 0.9167 & 0.7917 & 0.7917\\ \hline
6 & 0.8824 & 0.8676 & 0.8235 & 0.7059 & 0.8971 & 0.8971 & 0.9265 & 0.8676 & 0.8529\\ \hline
8 & 0.8750 & 0.9063 & 0.8594 & 0.7969 & 0.8750 & 0.9219 & 0.9063 & 0.9375 & 0.8281\\ \hline
10 & 0.8833 & 0.9667 & 0.8833 & 0.8000 & 0.8500 & 0.9500 & 0.9000 & 0.9000 & 0.8500\\ \hline
12 & 0.9107 & 0.9464 & 0.8036 & 0.7857 & 0.9286 & 0.9107 & 0.9286 & 0.8750 & 0.8393\\ \hline
14 & 0.9423 & 0.9231 & 0.8654 & 0.8846 & 0.9231 & 0.8462 & 0.9038 & 0.8846 & 0.8654\\ \hline
\end{tabular}

\end{center}
\end{table}
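The averaging protocol behind every cell of these tables can be sketched in a few lines. The raw accuracies below are randomly generated placeholders; only the shape (7 training sizes $\times$ 9 tasks $\times$ 5 runs) follows the setup described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw results: one accuracy per (training size, task, run).
# 7 training sizes (2..14), 9 tasks, 5 runs with random train/test splits.
raw = rng.uniform(0.6, 1.0, size=(7, 9, 5))

# Each table cell is the mean over the 5 runs.
table = raw.mean(axis=2)

print(table.shape)  # (7, 9)
```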

The second table shows the accuracy of the ``Pooling'' method, where all the data are pooled together and a single classifier is trained on the combined set. As with RSL, too few or too many training samples in most cases result in worse accuracy.
\begin{table}[H]
\caption{Pooling Accuracy} \label{tab:pooling}
\centering
\begin{tabular}{|l||*{9}{c|}}\hline
\backslashbox{Training}{Task}
  &1&2&3&4&5&6&7&8&9 \\ \hline
2&0.7632&0.7763&0.8026&0.6579&0.6711&0.8026&0.8289&0.7632&0.7368  \\ \hline
4&0.9028&0.7778&0.7778&0.7222&0.9306&0.8750&0.8611&0.8889&0.8194  \\ \hline
6&0.9265&0.8824&0.8235&0.7941&0.8676&0.9118&0.9118&0.9265&0.8971 \\ \hline
8&0.9531&0.9375&0.8125&0.7813&0.9219&0.7969&0.9219&0.9063&0.8125\\ \hline
10&0.8833&0.9000&0.7333&0.7333&0.8333&0.7167&0.9000&0.8667&0.8833\\ \hline
12&0.8929&0.9107&0.7321&0.8036&0.8750&0.8393&0.9286&0.8750&0.8036\\ \hline
14&0.8846&0.9038&0.8462&0.8654&0.8462&0.8846&0.9615&0.8269&0.8462\\ \hline
\end{tabular}
\end{table}
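The pooling step can be sketched as follows. This is a minimal illustration on synthetic two-class Gaussian data, with a nearest-centroid rule standing in for the classifier (the experiments' actual classifier and data are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: 9 tasks, each a small binary problem whose
# classes share roughly the same distributions across tasks.
tasks = []
for _ in range(9):
    X0 = rng.normal(-1.0, 1.0, size=(20, 2))   # class 0
    X1 = rng.normal(+1.0, 1.0, size=(20, 2))   # class 1
    tasks.append((np.vstack([X0, X1]), np.array([0] * 20 + [1] * 20)))

# Pooling: concatenate every task's data and fit one classifier on it.
X_pool = np.vstack([X for X, _ in tasks])
y_pool = np.concatenate([y for _, y in tasks])

# Nearest-centroid stands in for the classifier used in the report.
centroids = np.array([X_pool[y_pool == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Average accuracy of the single pooled classifier across the tasks.
acc = np.mean([np.mean(predict(X) == y) for X, y in tasks])
print(f"pooled accuracy: {acc:.3f}")
```

Pooling helps exactly when the tasks share structure, as in this synthetic setup where every task draws its classes from roughly the same distributions.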

The last table shows the accuracy of the ``Single Task Learning'' (STL) method, where each task is trained separately, disregarding all the other tasks. Using too few training samples results in extremely low accuracy; although few training samples also cause underfitting in the other two methods, their results are not as bad as those of single task learning.
\begin{table}[H]
\caption{STL Accuracy} \label{tab:stl}
\centering
\begin{tabular}{|l||*{9}{c|}}\hline
\backslashbox{Training}{Task}
  &1&2&3&4&5&6&7&8&9 \\ \hline
2&0.4474&0.6711&0.7632&0.6316&0.6579&0.8684&0.7895&0.7632&0.5263 \\ \hline
4&0.8194&0.7222&0.8333&0.6250&0.8472&0.7222&0.8472&0.7500&0.5972 \\ \hline
6&0.7353&0.8529&0.8235&0.7206&0.8676&0.8676&0.8382&0.7941&0.7794 \\ \hline
8&0.7656&0.9219&0.7969&0.7031&0.8594&0.8125&0.7656&0.7813&0.7969 \\ \hline
10&0.8667&0.8833&0.8000&0.7833&0.9000&0.8500&0.8333&0.8167&0.7667 \\ \hline
12&0.8036&0.8036&0.7857&0.7857&0.9643&0.9107&0.8036&0.7679&0.6964 \\ \hline
14&0.8846&0.9423&0.8462&0.7885&0.9038&0.9038&0.9038&0.7308&0.8269 \\ \hline
\end{tabular}
\end{table}
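The single-task setup, and the way tiny training sets hurt it, can be sketched in the same style. Again the data are synthetic Gaussians and nearest-centroid is an assumed stand-in classifier, not the one used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_task(n):
    # Hypothetical binary task: n points per class around -1 and +1.
    X0 = rng.normal(-1.0, 1.0, size=(n, 2))
    X1 = rng.normal(+1.0, 1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

def stl_accuracy(n_train, n_test=100):
    # Single-task learning: the classifier sees only this task's data.
    X_tr, y_tr = make_task(n_train)
    X_te, y_te = make_task(n_test)
    centroids = np.array([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(X_te[:, None, :] - centroids[None, :, :], axis=2)
    return np.mean(d.argmin(axis=1) == y_te)

# Averaged over repeats, very small training sets give noisy centroids
# and markedly lower accuracy than larger ones.
few = np.mean([stl_accuracy(1) for _ in range(50)])   # 2 training items
many = np.mean([stl_accuracy(7) for _ in range(50)])  # 14 training items
print(f"2 items: {few:.3f}, 14 items: {many:.3f}")
```

With no other tasks to borrow from, the per-task centroid estimates depend entirely on the few local samples, which is why the first row of the STL table is so much weaker than in RSL or pooling.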

In general, Relevant Subtask Learning performs best of the three methods, followed by pooling; STL has the worst accuracy.
RSL and pooling outperform STL because both exploit information that STL ignores: RSL is a multitask method that constructs a classifier based on all the tasks, while in the pooling method the presence of more data helps in constructing a better classifier.
\newpage
\subsection{Filters}
\begin{figure}[H]
  \centering
  \includegraphics[width=1.0\textwidth]{t7.png}
  \caption{Image filter.}
\end{figure}

\end{document}
