{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Bioinformatics Algorithm\n",
    "\n",
    "## Process Assessment 4: Hidden Markov Model\n",
    "\n",
    "> Last Modified Time 2019-11-04 15:18 By ZeFengZhu\n",
    "\n",
    "### Problem Description\n",
    "\n",
    "Make a report about HMM and the relevant knowledge in the class.\n",
    "\n",
    "### Basic Concepts\n",
    "\n",
    "#### Finite State Machines\n",
    "\n",
    "> Finite State Machines. Brilliant.org. Retrieved 15:50, November 4, 2019, from https://brilliant.org/wiki/finite-state-machines/\n",
    "\n",
    "#### Random Walks\n",
    "\n",
    "> Random Walks. Brilliant.org. Retrieved 15:52, November 4, 2019, from https://brilliant.org/wiki/the-random-walk/\n",
    "\n",
    "#### HMM\n",
    "\n",
    "> https://ww2.mathworks.cn/help/stats/hidden-markov-models-hmm.html\n",
    "\n",
    "> https://www.cnblogs.com/Determined22/p/6750327.html\n",
    "\n",
    "> http://www.randomservices.org/random/markov/General.html\n",
    "\n",
    "> Markov Chains. Brilliant.org. Retrieved 15:45, November 4, 2019, from https://brilliant.org/wiki/markov-chains/\n",
    "\n",
    "> A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition, _LAWRENCE R. RABINER, FELLOW, IEEE_ \n",
    "\n",
    "> A Revealing Introduction to Hidden Markov Models, _Mark Stamp, Department of Computer Science, San Jose State University_\n",
    "\n",
    "> An Introduction to Conditional Random Fields for Relational Learning, _Charles Sutton, Andrew McCallum, Department of Computer Science, University of Massachusetts, USA_\n",
    "\n",
    "* Stochastic Processes"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### HMM Overview\n",
    "\n",
    "![fig of HMM](../../docs/figs/hmm.png)\n",
    "\n",
    "> In the figure, shaded nodes are the observed variables; white circles are the hidden (state) variables\n",
    "\n",
    "1. Dynamic model (time + mixture); the model is written as $\\lambda$:\n",
    "2. $\\lambda = (\\pi, A, B)$\n",
    "    * $\\pi \\rightarrow$ initial state probability distribution\n",
    "    * $A \\rightarrow$ state-transition matrix\n",
    "    * $B \\rightarrow$ emission matrix\n",
    "3. Observation sequence $O \\, \\{o_1, o_2, ..., o_t, o_{t+1},...\\}$\n",
    "    * Observed variable $o$\n",
    "    * Value set (observation symbols) $V = \\{v_{1}, v_{2}, ..., v_{M}\\}$\n",
    "4. State sequence $I \\,\\,\\, \\{i_1, i_2, ..., i_t, i_{t+1},...\\}$\n",
    "    * State variable $i$\n",
    "    * Value set (state set) $Q = \\{q_{1}, q_{2}, ..., q_{N}\\}$\n",
    "5. $A = [a_{ij}] ,\\,\\, a_{ij}=P(i_{t+1}=q_j|i_t=q_i)$\n",
    "6. $B = [b_{j}(k)], \\,\\, b_{j}(k) = P(o_t=v_k|i_t=q_j)$\n",
    "\n",
    "#### Two Assumptions\n",
    "1. Homogeneous Markov assumption (memorylessness)\n",
    "$$P(i_{t+1}|i_t,i_{t-1},...,i_1,\\,\\,o_t,o_{t-1},...,o_1)=P(i_{t+1}|i_t)$$\n",
    "    * The state at time $t+1$ depends only on the state at time $t$, not on any earlier state or on any observation\n",
    "\n",
    "2. Observation independence assumption\n",
    "$$P(o_t|i_t,i_{t-1},...,i_1,\\,\\,o_{t-1},o_{t-2},...,o_1)=P(o_t|i_t)$$\n",
    "    * The observation at time $t$ depends only on the state at time $t$\n",
    "\n",
    "#### Three Problems\n",
    "1. Evaluation\n",
    "    * Given $\\lambda$, compute the probability of the observation sequence $O=o_1o_2...o_t$\n",
    "    * $P(O|\\lambda)$\n",
    "    * Forward/Backward Algorithm\n",
    "2. Learning\n",
    "    * Parameter estimation: how to determine $\\lambda$\n",
    "    * $\\lambda = \\arg\\max_{\\lambda} P(O|\\lambda)$\n",
    "    * EM algorithm\n",
    "3. Decoding\n",
    "    * Given the observation sequence $O$, find the state sequence $I=i_1i_2...i_t$ that maximizes $P(I|O)$\n",
    "    * $I=\\arg\\max_{I} P(I|O)$\n",
    "    * Prediction: $P(i_{t+1}|o_1o_2...o_t)$\n",
    "    * Filtering: $P(i_t|o_1o_2...o_t)$\n",
    "    "
   ]
  },
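  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the $\\lambda = (\\pi, A, B)$ notation above concrete, here is a minimal sketch of a toy HMM in NumPy; the 2 states, 3 observation symbols, and all probability values are invented for illustration:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "pi = np.array([0.6, 0.4])          # initial state distribution\n",
    "A = np.array([[0.7, 0.3],          # A[i, j] = P(i_{t+1}=q_j | i_t=q_i)\n",
    "              [0.4, 0.6]])\n",
    "B = np.array([[0.5, 0.4, 0.1],     # B[j, k] = P(o_t=v_k | i_t=q_j)\n",
    "              [0.1, 0.3, 0.6]])\n",
    "\n",
    "# Every row of A and B is a probability distribution, so rows sum to 1.\n",
    "assert np.isclose(pi.sum(), 1.0)\n",
    "assert np.allclose(A.sum(axis=1), 1.0)\n",
    "assert np.allclose(B.sum(axis=1), 1.0)\n",
    "```"
   ]
  },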
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### HMM Evaluation Problem\n",
    "\n",
    "> Given $\\lambda$, compute $P(O|\\lambda)$\n",
    "\n",
    "$P(O|\\lambda)=\\sum_{I}P(I,O|\\lambda)=\\sum_{I}P(O|I,\\lambda)\\,P(I|\\lambda)$\n",
    "\n",
    "#### Derivation\n",
    "\n",
    "1. $P(i_t|i_1,i_2,...,i_{t-1}\\,,\\lambda)=P(i_t|i_{t-1})=a_{i_{t-1},\\,i_t}$\n",
    "2. Expand the joint distribution recursively\n",
    "\n",
    "Since:\n",
    "\n",
    "$$P(I|\\lambda)=P(i_1,i_2,...,i_t|\\lambda)=P(i_t|i_1,i_2,...,i_{t-1}\\,,\\lambda)\\,P(i_1,i_2,...,i_{t-1}|\\lambda)=\\pi_{i_1}\\,\\prod_{t=2}^{T}a_{i_{t-1},\\,i_t}$$\n",
    "\n",
    "$$P(O|I,\\lambda)=\\prod_{t=1}^{T}b_{i_t}(o_t)$$\n",
    "\n",
    "Therefore:\n",
    "\n",
    "$$P(O|\\lambda)=\\sum_{I}\\pi_{i_1}\\prod_{t=2}^{T}a_{i_{t-1},\\,i_t}\\prod_{t=1}^{T}b_{i_t}(o_t)$$\n",
    "\n",
    "This sum runs over all $N^T$ state sequences, so direct evaluation is exponential ($O(TN^T)$) and needs to be optimized."
   ]
  },
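  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The exponential cost can be seen by evaluating the sum directly. A sketch (the function name `evaluate_brute_force` and the toy parameters are assumptions, not from the text):\n",
    "\n",
    "```python\n",
    "import itertools\n",
    "\n",
    "import numpy as np\n",
    "\n",
    "def evaluate_brute_force(pi, A, B, O):\n",
    "    # P(O|lambda) = sum over all N^T state sequences I of\n",
    "    # pi_{i_1} * prod_{t=2}^{T} a_{i_{t-1}, i_t} * prod_{t=1}^{T} b_{i_t}(o_t)\n",
    "    N, T = len(pi), len(O)\n",
    "    total = 0.0\n",
    "    for I in itertools.product(range(N), repeat=T):\n",
    "        p = pi[I[0]] * B[I[0], O[0]]\n",
    "        for t in range(1, T):\n",
    "            p *= A[I[t - 1], I[t]] * B[I[t], O[t]]\n",
    "        total += p\n",
    "    return total\n",
    "\n",
    "# Toy parameters, invented for illustration.\n",
    "pi = np.array([0.6, 0.4])\n",
    "A = np.array([[0.7, 0.3], [0.4, 0.6]])\n",
    "B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])\n",
    "print(evaluate_brute_force(pi, A, B, [0, 1, 2]))\n",
    "```\n",
    "\n",
    "The outer loop visits $N^T$ sequences and does $O(T)$ work for each, which is the blow-up the forward algorithm avoids."
   ]
  },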
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Forward Algorithm\n",
    "\n",
    "> Derived using the two assumptions\n",
    "\n",
    "1. Define $\\alpha_{t}(i)$ as the joint probability of the observations up to time $t$ and the state at time $t$\n",
    "$$\\alpha_{t}(i)=P(o_1,...,o_t,\\,i_t=q_i|\\lambda) \\Rightarrow \\alpha_{T}(i)=P(O,\\,i_T=q_i|\\lambda)$$\n",
    "2. Recursion: $\\alpha_{t+1}(j)=b_{j}(o_{t+1})\\sum_{i=1}^{N}a_{ij}\\,\\alpha_{t}(i)$\n",
    "3. $P(O|\\lambda)=\\sum_{i=1}^{N} P(O,i_T=q_i|\\lambda)=\\sum_{i=1}^{N}\\alpha_{T}(i)$\n",
    "\n",
    "\n",
    "#### Backward Algorithm\n",
    "\n",
    "> Derived using the two assumptions; a variant of the forward algorithm that starts from the last observed symbol and moves backward through the trellis until reaching time 1\n",
    "\n",
    "1. Define $\\beta_{t}(i)$ as the probability of the observations from time $t+1$ to $T$, conditioned on the state at time $t$\n",
    "$$\\beta_{t}(i)=P(o_{t+1},...,o_T\\,|\\,i_t=q_i,\\lambda)$$\n",
    "2. Recursion: $\\beta_{t}(i)=\\sum_{j=1}^{N}a_{ij}\\,b_{j}(o_{t+1})\\,\\beta_{t+1}(j)$\n",
    "3. $P(O|\\lambda)=\\sum_{i=1}^{N} P(O,i_1=q_i|\\lambda)=\\sum_{i=1}^{N}\\pi_{i}\\,b_{i}(o_1)\\,\\beta_{1}(i)$"
   ]
  },
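  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two recursions above can be sketched in vectorized NumPy. Both terminations must give the same $P(O|\\lambda)$; the function names and toy parameters are assumptions for this sketch:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def forward(pi, A, B, O):\n",
    "    # alpha[t, i] = P(o_1, ..., o_{t+1}, i_{t+1} = q_i | lambda)  (0-based t)\n",
    "    T, N = len(O), len(pi)\n",
    "    alpha = np.zeros((T, N))\n",
    "    alpha[0] = pi * B[:, O[0]]\n",
    "    for t in range(1, T):\n",
    "        alpha[t] = B[:, O[t]] * (alpha[t - 1] @ A)\n",
    "    return alpha\n",
    "\n",
    "def backward(pi, A, B, O):\n",
    "    # beta[t, i] = P(o_{t+2}, ..., o_T | i_{t+1} = q_i, lambda)  (0-based t)\n",
    "    T, N = len(O), len(pi)\n",
    "    beta = np.zeros((T, N))\n",
    "    beta[-1] = 1.0\n",
    "    for t in range(T - 2, -1, -1):\n",
    "        beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])\n",
    "    return beta\n",
    "\n",
    "# Toy parameters, invented for illustration.\n",
    "pi = np.array([0.6, 0.4])\n",
    "A = np.array([[0.7, 0.3], [0.4, 0.6]])\n",
    "B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])\n",
    "O = [0, 1, 2]\n",
    "\n",
    "p_forward = forward(pi, A, B, O)[-1].sum()\n",
    "p_backward = (pi * B[:, O[0]] * backward(pi, A, B, O)[0]).sum()\n",
    "print(p_forward, p_backward)\n",
    "```\n",
    "\n",
    "Each recursion step is an $O(N^2)$ matrix-vector product, so the total cost is $O(TN^2)$ instead of exponential in $T$."
   ]
  },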
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
