\documentclass[oneside]{homework} %%Change `twoside' to `oneside' if you are printing only on the one side of each sheet.

\studname{Ran~Yu}
\studmail{ry2239@columbia.edu}
\coursename{Machine Learning}
\hwNo{1}
\uni{ry2239}

\usepackage{graphicx}
\usepackage{subfigure}
\begin{document}
\maketitle 

\section*{Problem 1}
1. The fits obtained with various choices of the degree $d$ are shown below.\\
\begin{figure}[!h]
     \centering  
     \subfigure[d = 1]{ \includegraphics[width=3.5cm, height=2.8cm]{d=1.pdf}}
     \subfigure[d = 2]{ \includegraphics[width=3.5cm, height=2.8cm]{d=2.pdf}}
     \subfigure[d = 3]{ \includegraphics[width=3.5cm, height=2.8cm]{d=3.pdf}}\\
     \subfigure[d = 4]{ \includegraphics[width=3.5cm, height=2.8cm]{d=4.pdf}}
     \subfigure[d = 5]{ \includegraphics[width=3.5cm, height=2.8cm]{d=5.pdf}}
     \subfigure[d = 6]{ \includegraphics[width=3.5cm, height=2.8cm]{d=6.pdf}}\\
 \end{figure}
 From the figures above, we find that the fits for $d=3$ and $d=4$ appear most reasonable.\\
 The fitted weight vectors $w$ are listed below:
 \begin{itemize}
 \item $d=1$: $w=(-0.4041,\ 0.4520)$
 \item $d=2$: $w=(2.2234,\ -2.6953,\ 0.8167)$
 \item $d=3$: $w=(14.9130,\ -19.6041,\ 5.4884,\ 0.1951)$
 \item $d=4$: $w=(10.0144,\ -4.6324,\ -7.6331,\ 3.0879,\ 0.2876)$
 \item $d=5$: $w=(15.8769,\ -27.6626,\ 26.8444,\ -18.6180,\ 4.5314,\ 0.2491)$
 \item $d=6$: $w=(248.4969,\ -692.1768,\ 734.0544,\ -356.6934,\ 71.7547,\ -4.0027,\ 0.4270)$
 \end{itemize}
 From the plots and weights we can see that as the degree increases, the magnitudes of the weights grow, which makes the model increasingly unstable.\\
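The fits above can be reproduced with an ordinary least-squares polynomial fit. A minimal sketch in Python/NumPy (the arrays \texttt{x} and \texttt{y} are assumed to hold the given data points):

```python
import numpy as np

def fit_polynomial(x, y, d):
    """Least-squares fit of a degree-d polynomial; returns coefficients
    ordered from the highest power of x down to the constant term."""
    X = np.vander(x, d + 1)              # design matrix: columns x^d, ..., x, 1
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def training_error(x, y, w):
    """Mean squared error of the fitted polynomial on (x, y)."""
    return np.mean((np.polyval(w, x) - y) ** 2)
```

For a degree much higher than the data supports, the normal equations become ill-conditioned, which is one way to understand the exploding weights seen at $d=6$.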
 2. Here we use cross-validation to select the best degree. We try two types of cross-validation: one is `pick-one', and the other is `m-fold'.\\
\emph{`Pick-one'}
 \begin{figure}[!h] 
     \centering   
     \includegraphics[width=9cm, height=7.5cm]{pick-one.pdf} 
     \caption{\label{lb}Pick-one Model} 
\end{figure}
We randomly choose 9 of the 10 data points as training data and hold out the remaining point as test data. Fitting polynomials of different degrees $d$, we obtain Figure~\ref{lb}.\\
The red line shows $R_{\mathrm{test}}$ and the blue line shows $R_{\mathrm{train}}$. From Figure~\ref{lb} we see that $d=4$ gives the smallest $R_{\mathrm{test}}$, namely $0.0189$, so we select $d=4$ as the best degree.\\
\emph{`M-fold'}\\
We divide the data into $m$ parts; each part in turn is used as test data while the remaining parts are used for training. We repeat this for every part and take the average error as the result.
 \begin{figure}[!ht] 
     \centering   
     \includegraphics[width=9cm, height=7.5cm]{m-fold.pdf} 
     \caption{M-fold Model} 
\end{figure}
As Figure 2 shows, the m-fold method gives almost the same result as the pick-one method: the error is smallest when the degree is around $4$.\\
However, because the given data set is small, the training and testing errors are not stable across runs; a more reliable result would require further study of the data.\\
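The m-fold procedure can be sketched as follows (a hypothetical Python/NumPy implementation; setting $m$ equal to the number of data points reduces it to the pick-one scheme):

```python
import numpy as np

def m_fold_cv_error(x, y, d, m, seed=0):
    """Average test MSE of a degree-d polynomial fit over m folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))            # shuffle, then split into m folds
    errors = []
    for test_idx in np.array_split(idx, m):
        train_idx = np.setdiff1d(idx, test_idx)
        w = np.polyfit(x[train_idx], y[train_idx], d)
        pred = np.polyval(w, x[test_idx])
        errors.append(np.mean((pred - y[test_idx]) ** 2))
    return np.mean(errors)

def best_degree(x, y, degrees, m):
    """Degree with the smallest cross-validation error."""
    return min(degrees, key=lambda d: m_fold_cv_error(x, y, d, m))
```

The random shuffle before splitting matters with so few points: without it, a single unlucky fold can dominate the average error.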
\section*{Problem 2}
Setting the RBF's $\sigma$ parameter to $1.0$ and fitting the data, we obtain the training and testing errors below:
 \begin{itemize}
 \item $err = 0.023129269038290$
\item $errT =  0.037144229626606$
\end{itemize}
\begin{small}*The training and testing errors above are from one of several experimental runs.\end{small}
\\
\\The resulting fit is shown in Figure 3.
\begin{figure}[!ht] 
     \centering   
     \includegraphics[width=9cm, height=7.5cm]{q21.pdf} 
     \caption{The Fit Graph for Dataset2} 
\end{figure}
We then experiment with the $\sigma$ parameter and find that when $\sigma$ is too small or too large, the testing error becomes large: a very small $\sigma$ overfits the data, while a very large $\sigma$ underfits it, and either case increases the error. Figure 4 illustrates this.
\begin{figure}[!ht] 
     \centering   
     \includegraphics[width=9cm, height=7.5cm]{q2.pdf} 
     \caption{The Sigma-Error Graph for Dataset2} 
\end{figure}
We can see that the error is smallest when $\sigma$ is around $2^{-3}$.\\
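A least-squares RBF fit of this kind can be sketched as follows (a hypothetical Python/NumPy version, assuming one Gaussian basis function centered at each training point):

```python
import numpy as np

def rbf_design(x, centers, sigma):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-(x_i - c_j)^2 / (2 sigma^2))."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * sigma ** 2))

def fit_rbf(x_train, y_train, sigma):
    """Least-squares RBF weights, with centers at the training points."""
    Phi = rbf_design(x_train, x_train, sigma)
    w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)
    return w

def predict_rbf(x, x_train, w, sigma):
    """Evaluate the fitted RBF expansion at new inputs x."""
    return rbf_design(x, x_train, sigma) @ w
```

Sweeping \texttt{sigma} over powers of two and recording the held-out error reproduces the U-shaped curve of Figure 4: tiny $\sigma$ lets each basis function memorize one point, huge $\sigma$ flattens them all into near-constant functions.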
\section*{Problem 3}
Here we use the gradient descent algorithm with step size $h = 0.1$; the results are shown in the table below.\\
\\
\begin{center}
\begin{tabular}{|p{5cm}|p{5cm}|}
\hline
Iteration number & $211$ \\
\hline
Classification error & $0.015$\\
\hline
Perceptron error & $8.215\times 10^{-5}$\\
\hline
\end{tabular}
\end{center}
\begin{small}*The result above is from one of several experimental runs.\end{small}\\
\\The resulting classification is shown in Figure 5.\\
\begin{figure}[!ht] 
     \centering   
     \includegraphics[width=9cm, height=9cm]{prob31.pdf} 
     \caption{Gradient Descent algorithm} 
\end{figure}
I then experiment with the value of $h$ and find that increasing $h$ decreases the iteration number. I also changed the loop condition of the gradient descent algorithm from $R(\theta^t) \leq R(\theta^{t-1}) - \varepsilon$ to $|R(\theta^t) - R(\theta^{t-1})| \leq \varepsilon$; this avoids cases in which $R(\theta^{t})$ rises suddenly after only a few iterations and the algorithm exits the loop prematurely.\\
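The modified stopping rule can be sketched as follows. This is a hypothetical Python/NumPy version that uses the perceptron criterion $R(\theta)=\frac{1}{n}\sum_i \max(0,\,-y_i\,\theta^\top x_i)$ as the risk; the exact risk and data format used in the assignment may differ:

```python
import numpy as np

def perceptron_risk(theta, X, y):
    """Perceptron criterion: average of max(0, -y_i * theta.x_i)."""
    return np.mean(np.maximum(0.0, -y * (X @ theta)))

def gradient_descent(X, y, h=0.1, eps=1e-6, max_iter=10000, seed=0):
    """Gradient descent that stops when |R(theta^t) - R(theta^{t-1})| <= eps."""
    rng = np.random.default_rng(seed)
    theta = 0.01 * rng.standard_normal(X.shape[1])   # small random start
    r_prev = perceptron_risk(theta, X, y)
    for t in range(1, max_iter + 1):
        mis = y * (X @ theta) <= 0                   # misclassified points
        grad = -(y[mis, None] * X[mis]).sum(axis=0) / len(y)
        theta = theta - h * grad
        r = perceptron_risk(theta, X, y)
        if abs(r - r_prev) <= eps:                   # modified loop condition
            break
        r_prev = r
    return theta, t, r
```

The absolute-difference test stops on any plateau, whether the risk last moved up or down, which is exactly what prevents the premature exit described above.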
The relation between iteration number and step size $h$ is shown in Figure 6.\\
\begin{figure}[!ht] 
     \centering   
     \includegraphics[width=9cm, height=9cm]{prob3.pdf} 
     \caption{Iteration Number--Step Size Graph} 
\end{figure}
\section*{Problem 4}
\begin{figure}[!ht] 
     \centering   
     \includegraphics[width=9cm, height=7.5cm]{prob411.pdf} 
     \caption{Example of Ambiguous Region for Approach1} 
\end{figure}
1. Consider the first approach, in which $y_k(x)>0$ for inputs $x$ in class $C_k$ and $y_k(x)<0$ for inputs not in class $C_k$. For $c=3$ we find the situation shown in Figure 7, where some regions are ambiguous.\\

Suppose we have three classes $i, j, k$. The line $y_i$ separates $Region~i$ from $Region~j$ and $Region~k$, since $y_i(x)>0$ for $x$ in $Region~i$. The line $y_j$ then separates $Region~j$ from $Region~k$, so the three classes can be separated by two lines. However, some regions remain ambiguous under this scheme. For example, a point in $Region~*$ would be labeled class $i$ because $y_i(x)>0$, and simultaneously labeled class $j$ because $y_j(x)>0$, so its class cannot be determined.\\
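The ambiguity can be made concrete with a small sketch (hypothetical linear discriminants, chosen only for illustration), in which two one-versus-rest discriminants are both positive at the same point and the point therefore receives two labels:

```python
import numpy as np

def positive_classes(x, discriminants):
    """Classes k whose one-vs-rest discriminant satisfies y_k(x) > 0.
    Each discriminant is a pair (w, b) with y_k(x) = w.x + b.
    More than one entry means x lies in an ambiguous region."""
    return [k for k, (w, b) in discriminants.items() if np.dot(w, x) + b > 0]
```

For instance, with $y_i(x) = x_1$ and $y_j(x) = x_2$, the point $(1, 1)$ satisfies both $y_i>0$ and $y_j>0$ and so cannot be assigned a unique class.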
2. The other approach, in which a discriminant function $y_{j,k}(x)$ is trained for each possible pair of classes $C_j$ and $C_k$, also leads to ambiguous regions.
\begin{figure}[!ht] 
     \centering   
     \includegraphics[width=9cm, height=7.5cm]{prob42.jpg} 
     \caption{Example of Ambiguous Region for Approach2} 
\end{figure}
As Figure 8 shows, with three classes $i, j, k$, a pattern in $Region~i$, for example, is classified as class $i$ by both $y_{i,j}$ and $y_{i,k}$, while $y_{k,j}$ classifies it as class $k$ or class $j$. Introducing a majority vote, we can label the pattern as class $i$, since two of the three functions agree on class $i$. In this way all patterns in $Region~i$, $Region~j$, and $Region~k$ are handled successfully.
For patterns in $Region~*$, however, the vote is tied: $y_{i,j}$ classifies the pattern as class $j$, $y_{i,k}$ as class $i$, and $y_{k,j}$ as class $k$. No decision can be made for such patterns, so $Region~*$ is the ambiguous region of the second approach.
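The three-way tie in $Region~*$ can be illustrated with a small voting sketch (the linear discriminants below are made up, chosen so that each class receives exactly one vote at the test point):

```python
import numpy as np

def one_vs_one_vote(x, pairwise):
    """Majority vote over pairwise discriminants y_{a,b}; y_{a,b}(x) > 0
    votes for class a, otherwise for class b. Returns the winning class,
    or None when the vote is tied (the ambiguous region)."""
    votes = {}
    for (a, b), (w, b0) in pairwise.items():
        winner = a if np.dot(w, x) + b0 > 0 else b
        votes[winner] = votes.get(winner, 0) + 1
    top = max(votes.values())
    leaders = [c for c, v in votes.items() if v == top]
    return leaders[0] if len(leaders) == 1 else None
```

Returning \texttt{None} on a tie makes the ambiguity explicit; any fixed tie-breaking rule would simply assign the ambiguous region to one class arbitrarily.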

\end{document}