\documentclass[paper=a4, fontsize=11pt]{scrartcl} % A4 paper and 11pt font size

\usepackage[T1]{fontenc} % Use 8-bit encoding that has 256 glyphs
\usepackage{fourier} % Use the Adobe Utopia font for the document - comment this line to return to the LaTeX default
\usepackage[english]{babel} % English language/hyphenation
\usepackage{amsmath,amsfonts,amsthm} % Math packages
\usepackage[UTF8]{ctex}
\usepackage{graphicx}
\usepackage{float}
\usepackage[top=2cm, bottom=2cm, left=2cm, right=2cm]{geometry}
\usepackage[linesnumbered,boxed,lined,ruled]{algorithm2e}
\usepackage{algorithmicx}
\usepackage{algpseudocode}

\usepackage{xcolor}
\usepackage[framed,numbered,autolinebreaks,useliterate]{mcode}
\usepackage{listings}

\usepackage{sectsty} % Allows customizing section commands
\allsectionsfont{\centering \normalfont\scshape} % Make all sections centered, the default font and small caps

\usepackage{fancyhdr} % Custom headers and footers
\pagestyle{fancyplain} % Makes all pages in the document conform to the custom headers and footers
\fancyhead{} % No page header - if you want one, create it in the same way as the footers below
\fancyfoot[L]{} % Empty left footer
\fancyfoot[C]{} % Empty center footer
\fancyfoot[R]{\thepage} % Page numbering for right footer
\renewcommand{\headrulewidth}{0pt} % Remove header underlines
\renewcommand{\footrulewidth}{0pt} % Remove footer underlines
\setlength{\headheight}{13.6pt} % Customize the height of the header

\numberwithin{equation}{section} % Number equations within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)
\numberwithin{figure}{section} % Number figures within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)
\numberwithin{table}{section} % Number tables within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)

\setlength\parindent{0pt} % Removes all indentation from paragraphs - comment this line for an assignment with lots of text

%----------------------------------------------------------------------------------------
%	TITLE SECTION
%----------------------------------------------------------------------------------------

\newcommand{\horrule}[1]{\rule{\linewidth}{#1}} % Create horizontal rule command with 1 argument of height

\title{
\normalfont \normalsize
\textsc{University of Chinese Academy of Sciences \\ School of Computer and Control Engineering} \\ [25pt] % Your university, school and/or department name(s)
\horrule{0.5pt} \\[0.4cm] % Thin top horizontal rule
\huge Pattern Recognition: Assignment 1 \\ % The assignment title
\horrule{2pt} \\[0.5cm] % Thick bottom horizontal rule
}

\author{黎吉国 \quad 201618013229046} % Your name and student ID

\date{\normalsize\today} % Today's date or a custom date

\begin{document}

\maketitle % Print the title
\newpage
\section{Problem 1}
For a $c$-class classification problem, assume the class priors are $P(\omega_i),\ i=1,\ldots,c$, and the class-conditional densities are $p(x|\omega_i),\ i=1,\ldots,c$ (where $x$ denotes the feature vector).
Let $\lambda_{ij}$ denote the loss of assigning a pattern of class $j$ to class $i$.\\
(1) Write down the decision rules for Bayes minimum-risk decision and minimum-error-rate decision.\\
(2) Introduce a reject option (denoted as class $c+1$) and assume the decision loss is
\begin{equation*}
\lambda_{ij}=
\begin{cases}
  0,\quad i=j\\
  \lambda_r,\quad i=c+1\\
  \lambda_s,\quad \text{otherwise}
\end{cases}
\end{equation*}
Write down the minimum-loss decision rule (including both the classification rule and the reject rule).\\
\textbf{Solution:}\\
(1) For minimum-risk decision, let $\alpha_i$ denote the action of assigning $x$ to $\omega_i$. The conditional risk of each action is
\[R(\alpha_i)=\sum_{j=1}^{c}\lambda_{ij}P(\omega_j|x)=\sum_{j=1}^{c}\lambda_{ij}\frac{p(x|\omega_j)P(\omega_j)}{p(x)}\]
and the corresponding decision rule is
\[\text{if }R(\alpha_i)<R(\alpha_j)\ \text{for all }j\ne i,\ \text{then }x\in \omega_i\]
For minimum-error-rate decision (the special case of 0--1 loss, $\lambda_{ij}=1-\delta_{ij}$), the rule reduces to
\[\text{if }P(\omega_i|x)=\max_{j=1,\ldots,c}P(\omega_j|x),\ \text{then }x\in \omega_i\]
(2) After introducing the reject option,
\begin{equation*}
R(\alpha_i)=
\begin{cases}
  \lambda_s(1-P(\omega_i|x))\qquad i=1,2,\ldots,c\\
  \lambda_r\qquad i=c+1,\text{reject}
\end{cases}
\end{equation*}
and the decision rule changes only slightly:
\[\text{if }R(\alpha_i)<R(\alpha_j),\ j=1,2,\ldots,c+1,\ j\ne i,\ \text{then }x\in \omega_i\]
Explicitly, the classification rule is
\[\text{if }P(\omega_i|x)=\max_{j=1,\ldots,c}P(\omega_j|x)\ \text{and}\ P(\omega_i|x)>1-\lambda_r/\lambda_s,\ \text{then }x\in \omega_i \]
and the reject rule is
\[\text{if }\max_{j=1,\ldots,c}P(\omega_j|x)\le 1-\lambda_r/\lambda_s,\ \text{then reject} \]
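The reject rule above can be checked with a small numerical sketch (Python/NumPy stands in for the MATLAB used later; the posterior values and loss settings are made up for illustration):

```python
import numpy as np

def decide_with_reject(posteriors, lam_r, lam_s):
    """Decide the class with the largest posterior if it exceeds
    1 - lam_r/lam_s; otherwise reject (return -1)."""
    i = int(np.argmax(posteriors))
    if posteriors[i] > 1.0 - lam_r / lam_s:
        return i
    return -1  # reject

# Hypothetical posteriors for a 3-class problem.
post = np.array([0.5, 0.3, 0.2])
print(decide_with_reject(post, lam_r=0.1, lam_s=1.0))  # threshold 0.9 -> -1 (reject)
print(decide_with_reject(post, lam_r=0.6, lam_s=1.0))  # threshold 0.4 -> 0
```

Note that as $\lambda_r/\lambda_s \to 1$ the threshold drops to 0 and nothing is ever rejected, matching the intuition that rejection only pays when it is cheap relative to misclassification.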
\newpage
\section{Problem 2}
The feature vector of a pattern is $x\in \mathbb{R}^d$. For a $c$-class classification problem, assume the class priors are equal and every class-conditional density is Gaussian.\\
(1) Write down the mathematical form of the class-conditional density.\\
(2) Write down the minimum-error-rate discriminant functions in the following two cases: (a) the class covariance matrices differ; (b) all class covariance matrices are equal.\\
(3) In the Gaussian-based quadratic discriminant function, the discriminant becomes incomputable when the covariance matrix is singular. Give two ways to overcome covariance singularity.\\
\textbf{解:}\\
(1) The $d$-dimensional Gaussian class-conditional density is
\begin{equation*}
p(x|\omega_i)=\frac{1}{ (2\pi)^{d/2}|\Sigma_i|^{1/2}}\exp\left(-\frac{1}{2}(x-\mu_i)^T\Sigma_i^{-1}(x-\mu_i)\right)
\end{equation*}
(2) For convenience, take the discriminant function $g_i(x)=\ln p(x|\omega_i)+\ln P(\omega_i)$, which gives
\[g_i(x)=-\frac{1}{2}(x-\mu_i)^T\Sigma_i^{-1}(x-\mu_i)-\frac{d}{2}\ln 2\pi-\frac{1}{2}\ln |\Sigma_i|+\ln P(\omega_i)\]
(a) When the covariance matrices differ, dropping the constant term yields
\[g_i(x)=-\frac{1}{2}(x-\mu_i)^T\Sigma_i^{-1}(x-\mu_i)-\frac{1}{2}\ln |\Sigma_i|+\ln P(\omega_i)\]
(b) When all covariance matrices equal a common $\Sigma$, the terms shared by all classes can also be dropped, yielding
\[g_i(x)=-\frac{1}{2}(x-\mu_i)^T\Sigma^{-1}(x-\mu_i)+\ln P(\omega_i)\]
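As a sanity check on case (b), the following sketch (Python/NumPy, with made-up means and a made-up shared covariance) verifies that $g_1(x)-g_2(x)$ is linear in $x$, so the decision boundary is a hyperplane:

```python
import numpy as np

def g(x, mu, Sigma_inv, log_prior):
    # Discriminant for a shared covariance: -1/2 (x-mu)^T Sigma^{-1} (x-mu) + ln P
    d = x - mu
    return -0.5 * d @ Sigma_inv @ d + log_prior

# Toy 2-class problem with a shared covariance (arbitrary numbers).
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
Si = np.linalg.inv(Sigma)
mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
logp = np.log(0.5)  # equal priors

# Expanding g1 - g2 gives the affine form w^T x + b:
w = Si @ (mu1 - mu2)
b = -0.5 * (mu1 @ Si @ mu1 - mu2 @ Si @ mu2)

for x in [np.array([0.5, -1.0]), np.array([3.0, 2.0])]:
    diff = g(x, mu1, Si, logp) - g(x, mu2, Si, logp)
    assert np.isclose(diff, w @ x + b)  # the boundary g1 = g2 is linear in x
```

The quadratic terms $x^T\Sigma^{-1}x$ cancel between the two classes precisely because $\Sigma$ is shared; with distinct $\Sigma_i$ they survive and the boundary becomes quadratic.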
(3) The covariance matrix becomes singular when there are too few samples, when features are highly correlated, or when some feature dimensions carry no information at all. Two common remedies are:\\
(a) Dimensionality reduction, so that fewer samples suffice to estimate a nonsingular covariance matrix and redundant features are removed; PCA and Fisher discriminant analysis are the classic algorithms.\\
(b) Parameter sharing, i.e. letting several (or all) classes share one covariance matrix.
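The singularity itself, and one closely related fix (shrinking the sample covariance toward a scaled identity, a regularized variant of the parameter-sharing idea), can be illustrated with a short sketch (Python/NumPy; the shrinkage weight is hand-picked, not tuned):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fewer samples (n=5) than dimensions (d=10): the sample covariance is singular.
X = rng.standard_normal((5, 10))
S = np.cov(X, rowvar=False)
print(np.linalg.matrix_rank(S))  # rank < 10, so S is not invertible

# Shrink toward a scaled identity; gamma would normally be chosen by
# cross-validation rather than fixed at 0.1 as here.
gamma = 0.1
S_reg = (1 - gamma) * S + gamma * (np.trace(S) / 10) * np.eye(10)
S_inv = np.linalg.inv(S_reg)  # now well defined, so the QDF is computable
```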

\newpage
\section{Problem 3}
Consider the following decision rule for a two-category one-dimensional problem:\\
 Decide $\omega_1$ if $x>\theta$; otherwise decide $\omega_2$.\\
 (a) Show that the probability of error for this rule is given by
 \[P(error)=P(\omega_1)\int_{-\infty}^{\theta}p(x|\omega_1)\,dx + P(\omega_2)\int_{\theta}^{\infty}p(x|\omega_2)\,dx\]
 (b) By differentiating, show that a necessary condition to minimize $P(error)$ is that $\theta$ satisfy
 \[p(\theta|\omega_1)P(\omega_1)=p(\theta|\omega_2)P(\omega_2)\]
 (c) Does this equation define $\theta$ uniquely?\\
 (d) Give an example where a value of $\theta$ satisfying the equation actually maximizes the probability of error.\\
 \textbf{Solution:}\\
 (a) The rule errs in two disjoint cases: the true class is $\omega_1$ but $x\le\theta$ (we decide $\omega_2$), or the true class is $\omega_2$ but $x>\theta$ (we decide $\omega_1$). Weighting each case by its prior gives
 \[P(error)=P(\omega_1)\int_{-\infty}^{\theta}p(x|\omega_1)\,dx + P(\omega_2)\int_{\theta}^{\infty}p(x|\omega_2)\,dx\]
 (b) Differentiating with respect to $\theta$ and setting the derivative to zero,
 \[\frac{dP(error)}{d\theta}=P(\omega_1)p(\theta|\omega_1)-P(\omega_2)p(\theta|\omega_2)=0,\]
 which gives $p(\theta|\omega_1)P(\omega_1)=p(\theta|\omega_2)P(\omega_2)$.\\
 (c) No, $\theta$ is not unique: the two weighted densities may cross at several points, and every crossing satisfies the equation.\\
 (d) The following is an example.\\
 \begin{figure}[H]
 \centering
 \includegraphics[width=6in,height=3in]{Guass2.jpg}
 \caption{Two one-dimensional Gaussian pdfs}
 \label{fig:graph}
 \end{figure}
 In Figure~\ref{fig:graph}, points A and B both satisfy the equation in (b), but the $\theta$ at A minimizes $P(error)$, while the $\theta$ at B maximizes it.
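The maximizing case can also be made concrete numerically. In the sketch below (Python), we take $p(x|\omega_1)=N(0,1)$, $p(x|\omega_2)=N(2,1)$ with equal priors; since the rule decides $\omega_1$ for large $x$ but $\omega_1$ has the smaller mean, the rule "points the wrong way", and the crossing point $\theta=1$ (which satisfies the equation in (b)) maximizes $P(error)$:

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# P(error) for "decide w1 if x > theta" with p(x|w1)=N(0,1), p(x|w2)=N(2,1),
# equal priors: 0.5 * P(x <= theta | w1) + 0.5 * P(x > theta | w2).
def p_error(theta):
    return 0.5 * Phi(theta) + 0.5 * (1.0 - Phi(theta - 2.0))

# theta = 1 satisfies p(theta|w1) = p(theta|w2), yet it is a maximum:
assert p_error(1.0) > p_error(0.5)
assert p_error(1.0) > p_error(1.5)
print(round(p_error(1.0), 4))  # 0.8413
```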

\newpage
\section{Problem 4}
Let $x$ and $m$ be two random variables, and suppose that, given $m$, the conditional density of $x$ is
\[p(x|m)=(2\pi)^{-\frac{1}{2}}\sigma^{-1}\exp\{-\frac{1}{2}(x-m)^2/\sigma^2\}\]
Further suppose that the marginal distribution of $m$ is normal with mean $m_0$ and variance $\sigma_m^2$. Give $p(m|x)$ directly.\\
\textbf{Solution:}\\
Basic idea: $p(x,m)=p(x|m)p(m)=p(m|x)p(x)$, with $p(x)=\int_{-\infty}^{+\infty}p(x,m)dm$.\\
From the assumptions, the probability density of $m$ is
\[p(m)=\frac{1}{\sqrt{2\pi}{\sigma_m}}\exp\left\{-\frac{(m-m_0)^2}{2\sigma_m^2}\right\}\]
so the joint density is
\[p(x,m)=p(x|m)p(m)=\frac{1}{2\pi\sigma\sigma_m}\exp\left\{-\frac{1}{2}\left( \frac{(x-m)^2}{\sigma^2} + \frac{(m-m_0)^2}{\sigma_m^2} \right)\right\}\]
Completing the square in $m$,
\[\frac{(x-m)^2}{\sigma^2} + \frac{(m-m_0)^2}{\sigma_m^2}
=\frac{\sigma^2+\sigma_m^2}{\sigma^2\sigma_m^2}\left(m-\frac{x\sigma_m^2+m_0\sigma^2}{\sigma^2+\sigma_m^2}\right)^2+\frac{(x-m_0)^2}{\sigma^2+\sigma_m^2}\]
so that
\begin{align*}
p(x)&=\int_{-\infty}^{+\infty}p(x,m)dm\\
&=\frac{1}{2\pi\sigma\sigma_m}\exp\left\{-\frac{(x-m_0)^2}{2(\sigma^2+\sigma_m^2)}\right\}
\int_{-\infty}^{+\infty}\exp\left\{-\frac{\sigma^2+\sigma_m^2}{2\sigma^2\sigma_m^2}\left(m-\frac{x\sigma_m^2+m_0\sigma^2}{\sigma^2+\sigma_m^2}\right)^2\right\}dm\\
&=\frac{1}{2\pi\sigma\sigma_m}\exp\left\{-\frac{(x-m_0)^2}{2(\sigma^2+\sigma_m^2)}\right\}\sqrt{\frac{2\pi\sigma^2\sigma_m^2}{\sigma^2+\sigma_m^2}}\\
&=\frac{1}{\sqrt{2\pi(\sigma^2+\sigma_m^2)}}\exp\left\{-\frac{(x-m_0)^2}{2(\sigma^2+\sigma_m^2)}\right\},
\end{align*}
i.e.\ $x\sim N(m_0,\sigma^2+\sigma_m^2)$. Dividing,
\begin{align*}
p(m|x)&=\frac{p(x,m)}{p(x)}\\
&=\sqrt{\frac{\sigma^2+\sigma_m^2}{2\pi\sigma^2\sigma_m^2}}\exp\left\{-\frac{\sigma^2+\sigma_m^2}{2\sigma^2\sigma_m^2}\left(m-\frac{x\sigma_m^2+m_0\sigma^2}{\sigma^2+\sigma_m^2}\right)^2\right\},
\end{align*}
i.e.\ $m|x$ is normal with mean $\frac{x\sigma_m^2+m_0\sigma^2}{\sigma^2+\sigma_m^2}$ and variance $\frac{\sigma^2\sigma_m^2}{\sigma^2+\sigma_m^2}$.
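The closed-form posterior mean and variance can be verified by numerical integration (a Python/NumPy sketch; the parameter values below are arbitrary test values):

```python
import numpy as np

# Arbitrary test values for sigma, sigma_m, m0 and the observation x.
sigma, sigma_m, m0, x = 1.0, 2.0, 0.5, 3.0

# Evaluate p(x|m) p(m) (up to constants) on a fine grid and normalize.
m = np.linspace(-20.0, 20.0, 200001)
joint = (np.exp(-0.5 * (x - m) ** 2 / sigma**2)
         * np.exp(-0.5 * (m - m0) ** 2 / sigma_m**2))
post = joint / joint.sum()            # discrete approximation of p(m|x)

mean_num = (m * post).sum()
var_num = ((m - mean_num) ** 2 * post).sum()

# Closed-form posterior parameters derived above.
mean_cf = (x * sigma_m**2 + m0 * sigma**2) / (sigma**2 + sigma_m**2)
var_cf = sigma**2 * sigma_m**2 / (sigma**2 + sigma_m**2)
print(mean_cf, var_cf)  # 2.5 0.8
assert np.isclose(mean_num, mean_cf)
assert np.isclose(var_num, var_cf)
```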

\newpage
\section{Problem 5}
Classify the MNIST data set with LDF and QDF classifiers respectively, and analyze and discuss the results.

\textbf{1. Problem statement:}\\
\textbf{Input:} training data with labels\\
\textbf{Output:} a classifier that recognizes the test data as accurately as possible.\\
\textbf{2. Analysis:} Using the raw image pixels directly as the feature vector introduces a great deal of redundancy; many features carry no information at all, e.g. the blank margins of the images. A dimensionality-reduction step is therefore needed to remove the redundancy. The classic choice is PCA, so we first apply PCA and then fit Gaussian models in the reduced space.\\
\textbf{3. Algorithm:}\\
LDF classifier: $\Sigma_i=\Sigma$; here we assume the same prior probability for each class.
\[g_i(x)=\omega_i^T x + \omega_{i0}\]
\[\omega_i=\Sigma^{-1}\mu_i\]
\[\omega_{i0}=-\frac{1}{2}\mu_i^T \Sigma^{-1} \mu_i\]
QDF classifier: $\Sigma_i \ne \Sigma_j$ for $i\ne j$, again with the same prior probability for each class.
\[g_i(x)=x^T W_i x + \omega_i^T x + \omega_{i0}\]
\[W_i=-\frac{1}{2}\Sigma_i^{-1}\]
\[\omega_i=\Sigma_i^{-1}\mu_i\]
\[\omega_{i0}=-\frac{1}{2}\mu_i^T\Sigma_i^{-1}\mu_i-\frac{1}{2}\ln |\Sigma_i|\]

\textbf{4. LDF pseudocode:} (the QDF algorithm is similar)\\
\begin{algorithm}[H]
\caption{LDF\_Classifier\_train}
\KwData{data\_fold\_path}
\KwResult{parameters of the classifier: $\omega,\omega_0,P$}
\Begin{
  \tcp{read training images and labels}
  $images \leftarrow$ read images from $data\_fold\_path$;\\
  $labels \leftarrow$ read labels from $data\_fold\_path$;\\
  \tcp{use PCA to reduce the dimensionality}
  $[eig\_vector,eig\_value] \leftarrow \mathrm{PCA}(images)$;\\
  \tcp{keep the first $k$ dimensions; $eig\_value$ is sorted in descending order}
  $[eig\_vector\_k,eig\_value\_k]\leftarrow$ the first $k$ columns;\\
  $images\_k\leftarrow eig\_vector\_k^T\, images$;\\
  $images\_k\_mean \leftarrow \mathrm{mean}(images\_k^T)$;\\
  $images\_k\_cov\leftarrow \mathrm{cov}(images\_k^T)$;\\
  $\omega \leftarrow images\_k\_cov^{-1}\,images\_k\_mean$;\\
  \For{$i\leftarrow 1$ \KwTo $10$}
  {
    $\omega_0(i)\leftarrow-\frac{1}{2}images\_k\_mean(:,i)^T\, \omega(:,i)$
  }
  $P\leftarrow eig\_vector\_k$
}
\end{algorithm}\\
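The training steps above can be sketched compactly as follows (Python/NumPy stands in for the actual MATLAB implementation; synthetic Gaussian data replaces MNIST, and the file-reading step is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for MNIST: 3 well-separated Gaussian classes in 50 dims.
n_cls, d, n = 3, 50, 200
means = 2.0 * rng.standard_normal((n_cls, d))
X = np.vstack([mu + rng.standard_normal((n, d)) for mu in means])
y = np.repeat(np.arange(n_cls), n)

# PCA: project onto the k leading eigenvectors of the data covariance.
k = 10
eig_val, eig_vec = np.linalg.eigh(np.cov(X, rowvar=False))
P = eig_vec[:, np.argsort(eig_val)[::-1][:k]]    # d x k projection matrix
Z = X @ P

# LDF: shared covariance, per-class means, equal priors.
mu_k = np.vstack([Z[y == c].mean(axis=0) for c in range(n_cls)])
S_inv = np.linalg.inv(np.cov(Z, rowvar=False))
W = S_inv @ mu_k.T                               # k x n_cls weight matrix
w0 = -0.5 * np.sum((mu_k @ S_inv) * mu_k, axis=1)

pred = np.argmax(Z @ W + w0, axis=1)
print((pred == y).mean())                        # training accuracy
```

On this easy synthetic data the training accuracy is essentially perfect; the MNIST numbers in the tables below come from the full MATLAB pipeline, not from this sketch.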

\textbf{5. Performance analysis:}\\
The main factor affecting performance is the choice of $k$ in PCA (set here via $rate$, the fraction of variance retained). The recognition accuracies for different choices are listed in the tables below:\\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$rate$&0.80&0.81&0.82&0.83&0.84&0.85&0.86\\
\hline
$k$&44&46&49&52&56&59&64\\
\hline
$acc_{LDF}$&0.8404&0.8403&0.8382&0.8397&0.8366&0.8360&0.8356\\
\hline
$acc_{QDF}$&0.9561&0.9567&\textbf{0.9580}&0.9562&0.9549&0.9551&0.9552\\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
$rate$&0.87&0.88&0.89&0.90&0.91&0.92&0.93&0.94\\
\hline
$k$&68&74&80&87&96&106&119&134\\
\hline
$acc_{LDF}$&0.8367&0.8333&0.8326&0.8324&0.8309&0.8310&0.8286&0.8225\\
\hline
$acc_{QDF}$&0.9532&0.9521&0.9492&0.9480&0.9453&0.9425&0.9388&0.9325 \\
\hline
\end{tabular}
\end{center}

The tables show that QDF outperforms LDF overall. Moreover, increasing $k$ (the feature dimensionality) does not necessarily improve performance: LDF degrades monotonically, while QDF first improves and then degrades, peaking at a feature dimensionality of about 49.
Furthermore, when the dimensionality is high (roughly above 150), the covariance matrices become nearly singular, the numerical error grows, and good performance can no longer be obtained.

\textbf{6. MATLAB test-run example:} the printed output is the series of accuracies.
\lstset{language=Matlab}% the listing language is MATLAB
\lstset{breaklines}% automatically wrap long code lines
\lstset{extendedchars=false}% keep CJK headings and headers rendering correctly when a listing crosses pages
\begin{lstlisting}[frame=single]
>> path='../data/'

path =

../data/

>> [rate,k,accuracy]=test(path,0);
Columns 1 through 8

   0.8404    0.8403    0.8382    0.8397    0.8366    0.8360    0.8356    0.8367
   0.9561    0.9567    0.9580    0.9562    0.9549    0.9551    0.9552    0.9532

 Columns 9 through 16

   0.8333    0.8326    0.8324    0.8309    0.8310    0.8286    0.8225    0.8228
   0.9521    0.9492    0.9480    0.9453    0.9425    0.9388    0.9325    0.1135

\end{lstlisting}
\end{document}
