\section{ML}
\begin{description}
\item[Question 1]
	\begin{enumerate}
		\item Generally we never test the same attribute twice along one path in a standard decision tree. Why not? (1.5 points)
		\item Can you devise a situation where we may have to test on same attribute twice along one path? Give an example. (1.5 points)
		\item Suppose we generate a training set from a decision tree and then apply decision-tree learning to that training set. Is it the case that the learning algorithm will eventually return the correct tree as the training set size goes to infinity? Why or why not? (2 points) 
	\end{enumerate}
      This question is inspired by exercises 18.4 and 18.5.
\item[Question 2] Two statisticians go to the doctor and are both given the same prognosis: A 40\% chance that the problem is the deadly disease A, and a 60\% chance of the fatal disease B. Fortunately, there are anti-A and anti-B drugs that are inexpensive, 100\% effective, and free of side-effects. The statisticians have the choice of taking one drug, both, or neither.
	\begin{enumerate}
		\item What will the first statistician (an avid Bayesian) do? How about the second statistician, who always uses the maximum likelihood hypothesis? (2.5 points)
		\item The doctor does some research and discovers that disease B actually comes in two versions, dextro-B and levo-B, which are equally likely and equally treatable by the anti-B drug. Now that there are three hypotheses, what will the two statisticians do? (2.5 points) 
	\end{enumerate}
      This question is inspired by exercise 20.4.
\item[Question 3]
	\begin{enumerate}
		\item Suppose that a training set contains only a single example, repeated 100 times. In 80 of the 100 cases, the single output value is 1; in the other 20, it is 0. What will a back-propagation network predict for this example, assuming that it has been trained and reaches a global optimum? (Hint: to find the global optimum, differentiate the error function and set to zero.) (2.5 points)
		\item Construct by hand a neural network that computes the XOR function of three inputs. Make sure to specify what sort of units you are using along with their weights. (2.5 points) 
	\end{enumerate}
      This question is inspired by exercises 20.11 and 20.19.
\end{description}

\subsection{Answer 1}
\begin{enumerate}
	\item It is unnecessary to test the same attribute twice along one path, because the outcome of the second test is already known: every example that reaches it has the same value for that attribute, so the test yields no information.
	\item If the tests in the tree are Boolean but the attribute itself is not, for example a continuous attribute, then we may need to test it more than once along one path: first whether it is greater than one threshold, and later whether it is less than another, and so on.
	\item Yes, in the limit: with enough training data the algorithm will eventually return a tree that gives the same answers as the ``correct tree'' on every input. The returned tree will probably not be structurally identical to the correct one, however, because many different decision trees can represent the same Boolean function.
\end{enumerate}	
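As a concrete illustration of the second point, here is a minimal Python sketch (the attribute and the thresholds are invented for the example): a single Boolean threshold test cannot express an interval over a continuous attribute, so the same attribute must be tested twice along one path.

```python
# Toy decision "tree" over one continuous attribute x (a temperature):
# classifying the interval 18 <= x <= 26 as "comfortable" requires two
# threshold tests on the SAME attribute along the same path.
def comfortable(x: float) -> bool:
    if x >= 18:          # first test on x
        if x <= 26:      # second test on the same attribute x
            return True
        return False     # too hot
    return False         # too cold
```

Neither test alone separates the positive examples; only the conjunction of the two threshold tests does.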

\subsection{Answer 2}
\begin{enumerate}
	\item The Bayesian statistician takes both drugs, since that is the only choice that guarantees survival; taking just one drug leaves a 40\% or 60\% chance of having the untreated disease. The maximum-likelihood statistician commits to the single most likely hypothesis, disease B (60\%), and therefore takes only the anti-B drug.
	\item The Bayesian statistician still takes both drugs: the posterior is now 0.4 for A, 0.3 for dextro-B and 0.3 for levo-B, and only taking both drugs guarantees a cure. The maximum-likelihood statistician, however, now takes the anti-A drug, because A (0.4) has become the single most likely hypothesis, even though the total probability of some form of B is still 0.6.
\end{enumerate}
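The two decision rules can be sketched in a few lines of Python (the probabilities come from the exercise; the function and variable names are my own):

```python
# Which diseases each drug cures, per the exercise.
CURES = {"anti-A": {"A"}, "anti-B": {"B", "dextro-B", "levo-B"}}

def p_survive(posterior, drugs):
    """Total posterior probability of the diseases the chosen drugs treat."""
    treated = set().union(*(CURES[d] for d in drugs))
    return sum(p for disease, p in posterior.items() if disease in treated)

two_hyp = {"A": 0.4, "B": 0.6}
three_hyp = {"A": 0.4, "dextro-B": 0.3, "levo-B": 0.3}

# Bayesian reasoning: only taking both drugs makes survival certain,
# in both the two- and three-hypothesis scenarios.
assert abs(p_survive(two_hyp, ["anti-A", "anti-B"]) - 1.0) < 1e-9
assert abs(p_survive(three_hyp, ["anti-A", "anti-B"]) - 1.0) < 1e-9

# Maximum likelihood: commit to the single most probable hypothesis.
assert max(two_hyp, key=two_hyp.get) == "B"      # -> take only anti-B
assert max(three_hyp, key=three_hyp.get) == "A"  # -> take only anti-A
```

The point of the exercise shows up in the last line: splitting hypothesis B into two sub-hypotheses changes the maximum-likelihood choice while leaving the Bayesian choice untouched.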
\subsection{Answer 3}
\begin{enumerate}
	\item The network will output 0.8 for this example, the observed proportion of 1s in the training set.\\ Training adjusts the weights so that the error is minimal, and with a single repeated example the error depends only on the network's output \(a\) for that example:
	\[ E = \frac{1}{2} \sum_i \left( y_i - a \right)^2 = \frac{1}{2}\left( 80 \left( 1- a \right)^2 + 20 \left( 0 - a\right)^2 \right) = 50a^2 - 80a + 40\]
	That gives us the derivative of the error with respect to the output \(a\):
	\[ \frac{\partial E}{\partial a} = 100 a - 80\]
	Setting the derivative to zero gives \(a=0.8\).
	\item We can easily construct a truth table for the XOR (parity) of three inputs and then build a network from its DNF representation.\\
	\begin{tabular}{c c c | c | c }
	\(x_1\) & \(x_2\) & \(x_3\) & \(x_1 \oplus x_2 \oplus x_3\) & DNF \\
	\hline
	0 & 0 & 0 & 0 & \\
	0 & 0 & 1 & 1 & \(\neg x_1 \land \neg x_2 \land x_3\)\\
	0 & 1 & 0 & 1 & \(\neg x_1 \land x_2 \land \neg x_3\)\\
	0 & 1 & 1 & 0 & \\
	1 & 0 & 0 & 1 & \(x_1 \land \neg x_2 \land \neg x_3\)\\
	1 & 0 & 1 & 0 & \\
	1 & 1 & 0 & 0 & \\
	1 & 1 & 1 & 1 & \(x_1 \land x_2 \land x_3\)
	\end{tabular}\\
	So the network represents the logic function \((\neg x_1 \land \neg x_2 \land x_3) \lor (\neg x_1 \land x_2 \land \neg x_3) \lor (x_1 \land \neg x_2 \land \neg x_3) \lor (x_1 \land x_2 \land x_3)\). We use threshold units with inputs and outputs encoded as \(-1\) (false) and \(+1\) (true); the number inside each unit is its threshold, the edge labels are weights, and each hidden unit detects one true row of the truth table.
\begin{displaymath}
	\entrymodifiers={++[o][F-]}
    \xymatrix@C=50pt@R=40pt@L=0pt{
	x_1 \ar@{-}[r]^<(.15){+1} \ar@{-}[dr]^<(.15){-1} \ar@{-}[ddr]^<(.15){-1} \ar@{-}[dddr]^<(.15){+1} & +2 \ar@{-}[dr]^<(.15){+1} \\
	x_2 \ar@{-}[r]^<(.15){+1} \ar@{-}[dr]^<(.15){-1} \ar@{-}[ur]^<(.15){-1} \ar@{-}[ddr]^<(.15){+1} & +2 \ar@{-}[r]^<(.15){+1}& -2 \ar@{-}[r] &\\
	x_3 \ar@{-}[r]^<(.15){+1} \ar@{-}[ur]^<(.15){-1} \ar@{-}[uur]^<(.15){-1} \ar@{-}[dr]^<(.15){+1} & +2 \ar@{-}[ur]^<(.15){+1} \\
	*{} & +2 \ar@{-}[uur]^<(.15){+1}
	}
\end{displaymath}
\end{enumerate}
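Both answers can be checked mechanically. The Python sketch below (function and variable names are my own; the network weights are one possible assignment, assuming threshold units over \(\{-1,+1\}\)) finds the minimum of the quadratic error numerically and verifies that a four-hidden-unit threshold network computes the three-input parity function.

```python
from itertools import product

# Part (a): with the single repeated example, training reduces to
# minimising E(a) = 50a^2 - 80a + 40 over the network's output a.
def E(a):
    return 50 * a ** 2 - 80 * a + 40

a_star = min((i / 1000 for i in range(1001)), key=E)  # numeric argmin
assert abs(a_star - 0.8) < 1e-6

# Part (b): a hand-built network of threshold units for three-input
# XOR (parity). Inputs and outputs are encoded as -1 (false) and
# +1 (true); each hidden unit detects one true row of the truth table.
def step(s, threshold):
    return 1 if s >= threshold else -1

HIDDEN = [((+1, -1, -1), 2),   # x1 and not x2 and not x3
          ((-1, +1, -1), 2),   # not x1 and x2 and not x3
          ((-1, -1, +1), 2),   # not x1 and not x2 and x3
          ((+1, +1, +1), 2)]   # x1 and x2 and x3

def xor3(x):
    h = [step(sum(w * xi for w, xi in zip(ws, x)), t) for ws, t in HIDDEN]
    return step(sum(h), -2)    # output unit: fires if any hidden unit fires

# Check all eight inputs against the parity function.
for bits in product([0, 1], repeat=3):
    signed = tuple(2 * b - 1 for b in bits)
    assert xor3(signed) == (1 if sum(bits) % 2 == 1 else -1)
```

With this encoding at most one hidden unit fires for any input, so the output unit's threshold of \(-2\) implements an OR over the hidden units.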
