\documentclass[twoside,a4paper]{article}
\usepackage{geometry}
\geometry{margin=1.5cm, vmargin={0pt,1cm}}
\setlength{\topmargin}{-1cm}
\setlength{\paperheight}{29.7cm}
\setlength{\textheight}{25.3cm}

% useful packages.
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{enumerate}
\usepackage{graphicx}
\usepackage{multicol}
\usepackage{fancyhdr}
\usepackage{layout}

% some common commands
\newcommand{\dif}{\mathrm{d}}
\newcommand{\avg}[1]{\left\langle #1 \right\rangle}
\newcommand{\difFrac}[2]{\frac{\dif #1}{\dif #2}}
\newcommand{\pdfFrac}[2]{\frac{\partial #1}{\partial #2}}
\newcommand{\OFL}{\mathrm{OFL}}
\newcommand{\UFL}{\mathrm{UFL}}
\newcommand{\fl}{\mathrm{fl}}
\newcommand{\op}{\odot}
\newcommand{\Eabs}{E_{\mathrm{abs}}}
\newcommand{\Erel}{E_{\mathrm{rel}}}

\begin{document}

\pagestyle{fancy}
\fancyhead{}
\lhead{Haolong Li (3180105433)}
\chead{DMAA homework \#2}
\rhead{\today}


\section*{3.2. Use the kd-tree created in example 3.2 to find the nearest neighbor of the point $x = (3,4.5)^{T}$.}

Solution:

	Using algorithm 3.3, we first descend to the leaf node whose region contains $x$: $(4,7)$, at distance $\sqrt{7.25}\approx 2.69$. Backtracking to its parent $(5,4)$ (distance $\sqrt{4.25}\approx 2.06$) updates the current nearest point, and the parent's other child $(2,3)$ (distance $\sqrt{3.25}\approx 1.80$) is nearer still.
	Backtracking further to the root $(7,2)$, the circle of radius $1.80$ centered at $x$ does not cross the splitting plane $x^{(1)}=7$, so the right subtree is pruned. Hence $(2,3)$ is the nearest neighbor of $x$.
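The search above can be sketched in code. This is a minimal illustration with my own helper names (\texttt{build}, \texttt{nearest}); it assumes median-split construction, which reproduces the tree of example 3.2 from the six data points.

```python
import math

# The six points of example 3.2 (names and structure below are my own).
POINTS = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]

class Node:
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build(points, depth=0):
    """Build a kd-tree by splitting on the median, cycling the axis."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return Node(points[mid], axis,
                build(points[:mid], depth + 1),
                build(points[mid + 1:], depth + 1))

def nearest(node, target, best=None):
    """Nearest-neighbor search in the spirit of algorithm 3.3."""
    if node is None:
        return best
    if best is None or math.dist(node.point, target) < math.dist(best, target):
        best = node.point
    # Descend first into the side that contains the target.
    diff = target[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, target, best)
    # Visit the far side only if the splitting plane cuts the best circle.
    if abs(diff) < math.dist(best, target):
        best = nearest(far, target, best)
    return best

tree = build(POINTS)
print(nearest(tree, (3, 4.5)))  # -> (2, 3)
```

Running it confirms the answer $(2,3)$; the root's right subtree is never visited, matching the pruning step in the solution.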

\section*{3.3. Referring to algorithm 3.3, write an algorithm that outputs $x$'s $K$ nearest neighbor points.}

Solution:

	Algorithm 3.4 (K-nearest neighbor search in a kd-tree)
	
	Input: a constructed kd-tree; the target point $x$; an array of capacity $K$ used to store the answer (a bounded max-heap keyed by distance is convenient here, since step (b) repeatedly removes the farthest stored point);
	
	Output: $x$'s $K$ nearest neighbor points.
	
	(1) Find the leaf node whose region contains the target point $x$, in the same way as algorithm 3.3.
	
	(2) Insert that node's point into the array.
	
	(3) Recursively backtrack upward; at each node do the following:
	
		\quad \quad	(a) If the array holds fewer than $K$ points, insert the node's point and go to (c); otherwise go to (b);
			
		\quad\quad 	(b) If the node's point is nearer to $x$ than the farthest point in the array, replace the farthest point with it (otherwise do nothing);
		
		\quad \quad	(c) Check whether the other child's region intersects the ball centered at $x$ whose radius is the distance to the farthest stored point (if the array is not yet full, always search the other child); if so, search that subtree, as in step (3)(b) of algorithm 3.3;
		
		\quad \quad	(d) When the backtracking reaches the root node, the search is over; the array then holds the $K$ nearest neighbor points.
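The steps above can be sketched as follows. This is a self-contained illustration with my own helper names (\texttt{build}, \texttt{knn}); steps (a)--(c) of the algorithm are marked in comments, and the candidate set is kept in a max-heap via negated distances so the farthest candidate sits at the top.

```python
import heapq
import math

POINTS = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]  # example 3.2

def build(points, depth=0):
    """kd-tree as nested tuples (point, axis, left, right), median split."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1))

def knn(node, target, k, heap=None):
    """K-nearest search; heap entries are (-distance, point), so heap[0]
    is always the farthest of the (at most k) current candidates."""
    if heap is None:
        heap = []
    if node is None:
        return heap
    point, axis, left, right = node
    d = math.dist(point, target)
    if len(heap) < k:
        heapq.heappush(heap, (-d, point))     # step (a): array not yet full
    elif d < -heap[0][0]:
        heapq.heapreplace(heap, (-d, point))  # step (b): replace the farthest
    diff = target[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    knn(near, target, k, heap)
    # step (c): cross the splitting plane only if a closer candidate may exist
    if len(heap) < k or abs(diff) < -heap[0][0]:
        knn(far, target, k, heap)
    return heap

result = {p for _, p in knn(build(POINTS), (3, 4.5), 2)}
print(result)  # the two nearest points, {(2, 3), (5, 4)}
```

With $K=2$ and the query point of exercise 3.2 this returns $(2,3)$ and $(5,4)$, consistent with the single-neighbor answer.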
			 
			 
\section*{4.1. Use the maximum likelihood estimation method to derive the prior probability estimate (4.8) and formula (4.9) in the na{\"i}ve Bayes algorithm.}

Solution:
	Write $p = P(Y=c_{k})$ and let $m = \sum_{i=1}^{N} I(y_{i}=c_{k})$ count the samples with label $c_{k}$. Since the samples are drawn i.i.d., the likelihood of the observed labels, viewed as a function of $p$, is
	\begin{equation}
	L(p) = p^{m}(1-p)^{N-m}. \label{1}
	\end{equation}
	Setting the derivative of $\log L(p) = m\log p + (N-m)\log(1-p)$ to zero gives
	\begin{equation}
	\frac{m}{p} - \frac{N-m}{1-p} = 0 \quad\Longrightarrow\quad \hat{p} = \frac{m}{N} = \frac{\sum_{i=1}^{N}I(y_{i}=c_{k})}{N}, \label{2} 
	\end{equation}
	which is formula (4.8).
	
	For formula (4.9), restrict attention to the $m$ samples with $y_{i}=c_{k}$ and write $q = P(X^{(j)}=a_{jl}\mid Y=c_{k})$. Applying the same argument to these $m$ i.i.d. samples yields
	\begin{equation}
	\hat{q} = \frac{\sum_{i=1}^{N} I(x_{i}^{(j)}=a_{jl},\, y_{i}=c_{k})}{\sum_{i=1}^{N} I(y_{i}=c_{k})}, \label{3}
	\end{equation}
	which is formula (4.9).	\hfill $\square$
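As a numeric sanity check of the derivation (my own addition, with made-up counts): for i.i.d.\ labels the log-likelihood of $p = P(Y=c_k)$ is $m\log p + (N-m)\log(1-p)$, and the maximizer over a fine grid should coincide with $\hat{p} = m/N$.

```python
import math

def log_lik(p, m, N):
    """Log-likelihood of label probability p given m hits out of N samples."""
    return m * math.log(p) + (N - m) * math.log(1 - p)

N, m = 20, 7                # hypothetical data: 7 of 20 samples have label c_k
p_mle = m / N               # the closed-form MLE derived above
grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=lambda p: log_lik(p, m, N))
print(p_mle, best)  # -> 0.35 0.35
```

The grid maximizer agrees with $m/N$, as the concavity of the log-likelihood guarantees.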
	
\end{document}

%%% Local Variables: 
%%% mode: latex
%%% TeX-master: t
%%% End: 
