{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<h1><center> Exact inference for graphical models</center></h1>\n",
    "# 1. Basic\n",
    "Inference means computing $p(Y|E=e)$ in a graphical model. By the definition of conditional probability,\n",
    "$$\n",
    "p(Y|E=e)=\\frac{P(Y,E=e)}{p(E=e)}\n",
    "$$\n",
    "Let $W=\\mathcal{X}-Y-E$ be the irrelevant random variables. Then\n",
    "\\begin{align}\n",
    "p(Y=y,E=e) &=\\int p(Y=y,E=e,W=w)dw \\\\\n",
    "p(E=e)&=\\int p(Y=y,E=e)dy \\\\\n",
    "\\end{align}\n",
    "We can see that inference can be performed using only the marginalization rule and the conditioning rule. Unfortunately, the problem of inference in graphical models is NP-hard, so it probably requires exponential time in the worst case.\n",
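    "\n",
    "As a concrete illustration, here is a minimal sketch of this recipe. The joint table below is a made-up example over three binary variables $Y$, $E$, $W$; we compute $p(Y|E=e)$ by marginalizing out $W$ and then normalizing:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Hypothetical joint p(Y, E, W) over three binary variables,\n",
    "# stored as a 2x2x2 array indexed as joint[y, e, w].\n",
    "joint = np.array([[[0.10, 0.05], [0.08, 0.07]],\n",
    "                  [[0.20, 0.15], [0.25, 0.10]]])\n",
    "\n",
    "e = 1  # observed evidence E = 1\n",
    "\n",
    "# Marginalize out the irrelevant variable W: p(Y, E=e)\n",
    "p_y_e = joint[:, e, :].sum(axis=1)\n",
    "\n",
    "# Condition: p(Y | E=e) = p(Y, E=e) / p(E=e)\n",
    "p_y_given_e = p_y_e / p_y_e.sum()\n",
    "print(p_y_given_e)  # [0.3 0.7]\n",
    "```\n",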
    "# 2. Variable Elimination\n",
    "## 2.1 The basic ideas\n",
    "We begin our discussion of inference by discussing the principles underlying exact inference in graphical models. Taking a simple network as example \n",
    "$$\n",
    "X_1 \\rightarrow X_2 \\ldots \\rightarrow X_n\n",
    "$$\n",
    "We assume that each variable has $k$ possible values. The marginal distribution $p(X_{i+1})$ can be computed as\n",
    "$$\n",
    "p(X_{i+1})=\\sum_{x_i}p(x_i)p(X_{i+1}|x_i)\n",
    "$$\n",
    "where $p(x_i)$ was computed in the previous step. The cost of each step is $O(k^2)$; hence, the total cost is $O(nk^2)$. By comparison, the naive method, which generates the entire joint distribution and sums it out, costs $O(k^n)$.\n",
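    "\n",
    "The forward recursion above can be sketched in a few lines of numpy; the chain length, number of states, and CPTs below are made-up illustrations:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "n, k = 10, 4  # chain length and number of states (illustrative values)\n",
    "\n",
    "# p(X_1) and the transition CPTs p(X_{i+1} | X_i), each row normalized\n",
    "p1 = rng.random(k)\n",
    "p1 /= p1.sum()\n",
    "cpts = rng.random((n - 1, k, k))\n",
    "cpts /= cpts.sum(axis=2, keepdims=True)  # cpts[i, a, b] = p(X_{i+1}=b | X_i=a)\n",
    "\n",
    "# Forward pass: each step is a single O(k^2) vector-matrix product\n",
    "p = p1\n",
    "for i in range(n - 1):\n",
    "    p = p @ cpts[i]  # p(X_{i+1}) = sum_{x_i} p(x_i) p(X_{i+1} | x_i)\n",
    "\n",
    "print(p)  # the marginal p(X_n); still sums to 1\n",
    "```\n",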
    "\n",
    "The reason for the reduction in complexity is that the naive method performs a great deal of repeated computation. To compute $p(X_{i+1}=m)$ for each $m=1,2,\\ldots,k$, it recomputes $p(X_i)$ $k$ times. Written somewhat more compactly,\n",
    "\\begin{align}\n",
    "p(X_{i+1})  &=\\sum_{X_1,X_2,\\ldots,X_i}p(X_1,X_2,\\ldots,X_{i+1})  \\\\\n",
    "            &=\\sum_{X_1,X_2,\\ldots,X_i}p(X_1)p(X_2|X_1)\\ldots p(X_{i+1}|X_i)  \\\\\n",
    "            &=\\sum_{X_i} \\left[  \\sum_{X_1,X_2,\\ldots,X_{i-1}} p(X_1)p(X_2|X_1)\\ldots p(X_{i}|X_{i-1})\\right]\n",
    "            p(X_{i+1}|X_i) \\\\\n",
    "\\end{align}\n",
    "We can calculate \n",
    "$$\n",
    "p(X_i)=\\left[  \\sum_{X_1,X_2,\\ldots,X_{i-1}} p(X_1)p(X_2|X_1)\\ldots p(X_{i}|X_{i-1})\\right]\n",
    "$$\n",
    "just once for all possible values of $X_{i+1}$, which reduces the complexity from exponential to linear in $n$.\n",
    "\n",
    "To summarize, the two ideas that help us address the exponential blowup of the joint distribution are:\n",
    "* Because of the local Markov property, some subexpressions in the joint depend only on a small number of variables.\n",
    "* By computing these expressions once and caching the results, we can avoid generating them exponentially many times.\n",
    "\n",
    "## 2.2 Basic Elimination\n",
    "We will assume that we are given a graphical model as a product of factors\n",
    "$$\n",
    "p(\\vec{x})=\\frac{1}{Z}\\prod_{c \\in C}\\phi_c(\\vec{x}_c)\n",
    "$$\n",
    "where $\\vec{x}_c$ denotes the variables in factor $c$. We can view variable elimination as a sequence of operations on factors.\n",
    "\n",
    "**Factor marginalization** Let $\\vec{X}$ be a set of variables and $Y \\not \\in \\vec{X}$ a variable. Let $\\phi(\\vec{X},Y)$ be a factor. We define the factor marginalization of $Y$ in $\\phi$, denoted $\\sum_Y \\phi$ to be a new factor $\\psi$ over $\\vec{X}$ such that\n",
    "$$\n",
    "\\psi(\\vec{X})=\\sum_Y \\phi(\\vec{X},Y)\n",
    "$$\n",
    "We have a simple rule allowing us to exchange summation and product: if $X \\not \\in Scope[\\phi_1]$, where $Scope[\\phi_1]$ denotes the set of variables in factor $\\phi_1$, then\n",
    "$$\n",
    "\\sum_X(\\phi_1 \\cdot \\phi_2)=\\phi_1 \\cdot \\sum_X\\phi_2\n",
    "$$\n",
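    "\n",
    "Both operations can be sketched directly with numpy; the small `Factor` helper below (a name of our own choosing, not from any particular library) stores an ordered scope of variable names plus a value table with one axis per variable:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "class Factor:\n",
    "    '''A factor: an ordered scope of variable names plus a value table.'''\n",
    "    def __init__(self, scope, table):\n",
    "        self.scope = list(scope)\n",
    "        self.table = np.asarray(table, dtype=float)\n",
    "\n",
    "    def product(self, other):\n",
    "        '''Multiply two factors, broadcasting over the union of scopes.'''\n",
    "        scope = self.scope + [v for v in other.scope if v not in self.scope]\n",
    "        return Factor(scope, self._expand(scope) * other._expand(scope))\n",
    "\n",
    "    def marginalize(self, var):\n",
    "        '''Sum out one variable, as in factor marginalization.'''\n",
    "        i = self.scope.index(var)\n",
    "        return Factor(self.scope[:i] + self.scope[i+1:],\n",
    "                      self.table.sum(axis=i))\n",
    "\n",
    "    def _expand(self, scope):\n",
    "        '''Append singleton axes and reorder them to match `scope`.'''\n",
    "        extra = [v for v in scope if v not in self.scope]\n",
    "        t = self.table.reshape(self.table.shape + (1,) * len(extra))\n",
    "        axes = [(self.scope + extra).index(v) for v in scope]\n",
    "        return np.transpose(t, axes)\n",
    "\n",
    "# Example: tau(B) = sum_A phi_A(A) * phi_B(A, B), with made-up tables\n",
    "phi_A = Factor(['A'], [0.6, 0.4])\n",
    "phi_B = Factor(['A', 'B'], [[0.7, 0.3], [0.2, 0.8]])  # rows: phi_B(A=a, .)\n",
    "tau = phi_A.product(phi_B).marginalize('A')\n",
    "print(tau.scope, tau.table)  # ['B'] [0.5 0.5]\n",
    "```\n",
    "With these two operations in hand, variable elimination reduces to bookkeeping over a set of such factor objects.\n",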
    "One simple algorithm using factor marginalization and the exchange rule above is **sum-product variable elimination**. The basic idea is that we sum out variables one at a time. To sum out a variable, we multiply all the factors that mention it, generating a product factor; we then sum the variable out of this combined factor, producing a new factor that we add back to the set of factors still to be dealt with.\n",
    "***\n",
    "```\n",
    "//Algorithm Sum-product variable elimination algorithm\n",
    "Procedure Sum_Product_VE(\n",
    "    Phi,           // Set of factors\n",
    "    Z,             // Set of variables to be eliminated\n",
    "    <,             // Ordering on Z\n",
    "    )\n",
    "    \n",
    "    Let z1,z2,...,zk be an ordering of Z such that\n",
    "        zi<zj if and only if i<j\n",
    "    \n",
    "    for i=1,...,k\n",
    "        Phi=Sum_Product_Eliminate_Var(Phi,zi)\n",
    "    return product(Phi)     // product of all factors\n",
    "\n",
    "Procedure Sum_Product_Eliminate_Var(\n",
    "    Phi,           // Set of factors\n",
    "    z,             // Variable to be eliminated\n",
    "    )\n",
    "    \n",
    "    Phi_in={p in Phi where z in scope[p]}  // factors that mention z\n",
    "    Phi_rest=Phi-Phi_in\n",
    "    psi=product(Phi_in)                    // the product factor\n",
    "    tau=sum_z(psi)                         // eliminate z from the product factor\n",
    "    return Phi_rest+{tau}\n",
    "```\n",
    "\n",
    "***\n",
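    "\n",
    "The pseudocode above can be sketched concretely in numpy. To keep the sketch self-contained we use a toy chain $A \\rightarrow B \\rightarrow C$ with made-up binary CPTs, and store every factor as an array over all three variables (singleton axes for variables outside its scope), so that factor products are plain broadcasting:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Toy chain A -> B -> C with binary variables. Each factor is stored as an\n",
    "# array shaped over ALL variables (axis 0 = A, 1 = B, 2 = C), with singleton\n",
    "# axes for variables outside its scope, so factor products are broadcasting.\n",
    "AX = {'A': 0, 'B': 1, 'C': 2}\n",
    "\n",
    "def as_factor(var_order, table):\n",
    "    '''Reshape a table over `var_order` (in global axis order) to 3 axes.'''\n",
    "    shape = [1, 1, 1]\n",
    "    for v in var_order:\n",
    "        shape[AX[v]] = 2\n",
    "    return np.asarray(table, dtype=float).reshape(shape)\n",
    "\n",
    "def sum_product_ve(factors, elim_order):\n",
    "    '''Sum-product variable elimination, following the pseudocode above.'''\n",
    "    factors = list(factors)\n",
    "    for z in elim_order:\n",
    "        ax = AX[z]\n",
    "        used = [f for f in factors if f.shape[ax] > 1]   # Phi_in: mentions z\n",
    "        rest = [f for f in factors if f.shape[ax] == 1]  # Phi_rest\n",
    "        psi = used[0]\n",
    "        for f in used[1:]:\n",
    "            psi = psi * f                                # the product factor\n",
    "        tau = psi.sum(axis=ax, keepdims=True)            # eliminate z\n",
    "        factors = rest + [tau]\n",
    "    out = factors[0]\n",
    "    for f in factors[1:]:\n",
    "        out = out * f\n",
    "    return np.squeeze(out)\n",
    "\n",
    "p_A  = as_factor(['A'], [0.5, 0.5])\n",
    "p_BA = as_factor(['A', 'B'], [[0.9, 0.1], [0.2, 0.8]])  # p(B | A)\n",
    "p_CB = as_factor(['B', 'C'], [[0.7, 0.3], [0.4, 0.6]])  # p(C | B)\n",
    "\n",
    "p_C = sum_product_ve([p_A, p_BA, p_CB], elim_order=['A', 'B'])\n",
    "print(p_C)  # [0.565 0.435]\n",
    "```\n",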
    "\n",
    "<img src=\"imgs/1.png\" alt=\"drawing\" width=\"250\"/>\n",
    "The joint distribution of the above graphical model is\n",
    "\\begin{align}\n",
    "p(A,B,C,D,E,F,G,H)&=p(A)\\,p(B|A)\\,p(C)\\,p(D|B,C)\\,p(E|C)\\,p(F|D)\\,p(G|E,F)\\,p(H|D,G) \\\\\n",
    "                  &=\\phi_A(A)\\,\\phi_B(B,A)\\,\\phi_C(C)\\,\\phi_D(D,B,C)\\,\\phi_E(E,C)\\,\\phi_F(F,D)\\,\\phi_G(G,E,F)\\,\n",
    "                  \\phi_H(H,D,G)\n",
    "\\end{align}\n",
    "We will now apply the VE algorithm to compute $p(G)$ with the elimination ordering $A,B,C,D,E,F,H$:\n",
    "1. Eliminating A\n",
    "\\begin{align}\n",
    "\\psi_1(A,B) &=\\phi_A(A)\\,\\phi_B(B,A) \\\\\n",
    "\\tau_1(B)&=\\sum_A \\psi_1(A,B)\n",
    "\\end{align}\n",
    "2. Eliminating B\n",
    "\\begin{align}\n",
    "\\psi_2(D,B,C) &=\\phi_D(D,B,C) \\cdot \\tau_1(B) \\\\\n",
    "\\tau_2(D,C)  &=\\sum_B \\psi_2(D,B,C)\n",
    "\\end{align}\n",
    "3. Eliminating C\n",
    "\\begin{align}\n",
    "\\psi_3(D,E,C) &=\\phi_C(C)\\cdot \\phi_E(E,C) \\cdot \\tau_2(D,C) \\\\\n",
    "\\tau_3(D,E) &=\\sum_C \\psi_3(D,E,C)\n",
    "\\end{align}\n",
    "4. Eliminating D\n",
    "\\begin{align}\n",
    "\\psi_4(H,D,G,F,E) &=\\phi_H(H,D,G) \\cdot \\phi_F(F,D) \\cdot \\tau_3(D,E) \\\\\n",
    "\\tau_4(H,G,F,E)   &=\\sum_D \\psi_4(H,D,G,F,E) \\\\\n",
    "\\end{align}\n",
    "5. Eliminating E\n",
    "\\begin{align}\n",
    "\\psi_5(H,G,E,F) &=\\tau_4(H,G,F,E) \\cdot \\phi_G(G,E,F)\\\\\n",
    "\\tau_5(H,G,F) &=\\sum_E \\psi_5(H,G,E,F)\\\\\n",
    "\\end{align}\n",
    "6. Eliminating F and H\n",
    "$$\n",
    "\\tau_6(G)=\\sum_{F,H} \\tau_5(H,G,F)\n",
    "$$\n",
    "\n",
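    "As a sanity check, the six steps above can be replayed numerically. The sketch below uses random binary CPTs for this network (the numbers are arbitrary), eliminates $A,B,C,D,E,F,H$ in order, and compares the result against brute-force summation of the full joint:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(1)\n",
    "AX = dict(zip('ABCDEFGH', range(8)))  # one global array axis per variable\n",
    "\n",
    "def cpt(child, parents):\n",
    "    '''Random table p(child | parents), laid out over all eight axes.'''\n",
    "    shape = [1] * 8\n",
    "    for v in [child] + parents:\n",
    "        shape[AX[v]] = 2\n",
    "    t = rng.random(shape)\n",
    "    return t / t.sum(axis=AX[child], keepdims=True)\n",
    "\n",
    "phis = [cpt('A', []), cpt('B', ['A']), cpt('C', []), cpt('D', ['B', 'C']),\n",
    "        cpt('E', ['C']), cpt('F', ['D']), cpt('G', ['E', 'F']),\n",
    "        cpt('H', ['D', 'G'])]\n",
    "\n",
    "# Variable elimination with the ordering A, B, C, D, E, F, H\n",
    "factors = list(phis)\n",
    "for z in 'ABCDEFH':\n",
    "    ax = AX[z]\n",
    "    used = [f for f in factors if f.shape[ax] > 1]\n",
    "    rest = [f for f in factors if f.shape[ax] == 1]\n",
    "    psi = used[0]\n",
    "    for f in used[1:]:\n",
    "        psi = psi * f                                   # psi_i\n",
    "    factors = rest + [psi.sum(axis=ax, keepdims=True)]  # tau_i\n",
    "p_G = factors[0]\n",
    "for f in factors[1:]:\n",
    "    p_G = p_G * f\n",
    "p_G = np.squeeze(p_G)\n",
    "\n",
    "# Brute force: build the full joint, then sum out everything except G\n",
    "joint = np.ones([1] * 8)\n",
    "for f in phis:\n",
    "    joint = joint * f\n",
    "brute = joint.sum(axis=tuple(a for a in range(8) if a != AX['G']))\n",
    "print(np.allclose(p_G, brute))  # True\n",
    "```\n",
    "\n",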
    "When dealing with evidence, that is, computing $p(Y|E=e)$, we do not need to eliminate the variables in $E$. We only eliminate the irrelevant random variables $W=\\mathcal{X}-Y-E$ and leave the evidence variables fixed at their observed values.\n",
    "\n",
    "The complexity of the variable elimination algorithm depends on the graph structure and on the elimination ordering: it grows exponentially with the size of the largest intermediate factor generated during elimination.\n",
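    "\n",
    "The effect of the ordering can be seen with a scope-only simulation, which tracks just the variable sets of the factors. For a star graph (a hub X0 with one pairwise factor per leaf X1..X5, an illustrative choice), eliminating the hub first creates an intermediate factor over all six variables, while eliminating the leaves first never creates a factor over more than two variables:\n",
    "```python\n",
    "# Scope-only simulation of variable elimination: we track only the variable\n",
    "# sets of the factors, not their numeric tables.\n",
    "def max_intermediate_scope(factors, order):\n",
    "    factors = [set(f) for f in factors]\n",
    "    worst = 0\n",
    "    for z in order:\n",
    "        used = [f for f in factors if z in f]\n",
    "        rest = [f for f in factors if z not in f]\n",
    "        psi = set().union(*used)        # scope of the product factor\n",
    "        worst = max(worst, len(psi))\n",
    "        factors = rest + [psi - {z}]    # tau: z summed out\n",
    "    return worst\n",
    "\n",
    "# Star graph: hub 'X0' with one pairwise factor per leaf X1..X5\n",
    "star = [{'X0', f'X{i}'} for i in range(1, 6)]\n",
    "\n",
    "print(max_intermediate_scope(star, ['X0', 'X1', 'X2', 'X3', 'X4', 'X5']))  # 6\n",
    "print(max_intermediate_scope(star, ['X1', 'X2', 'X3', 'X4', 'X5', 'X0']))  # 2\n",
    "```\n",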
    "# 3. Clique Trees\n",
    "## 3.1 Basic idea\n",
    "The main disadvantage of the variable elimination algorithm (apart from its exponential dependence on treewidth) is that it is inefficient when we want to compute multiple queries conditioned on the same evidence, because every query recomputes intermediate factors over the entire graph. In this section, we present another view of variable elimination in which we can cache the intermediate results. We view elimination as a \"message passing\" process. We define a new object called a *cluster*, which is a collection of variables. Each cluster is a computational data structure that takes messages $\\tau_j$ generated by other clusters $\\psi_j$ and generates a message $\\tau_i$ that is used by another cluster $\\psi_l$.\n",
    "\n",
    "A **cluster graph** is a data structure that provides a graphical flowchart of the variable elimination process. A cluster graph $U$ for a set of factors $\\Phi$ over $X$ is an undirected graph, each of whose nodes $i$ is associated with a subset $C_i \\subseteq X$. Each edge between a pair of clusters $C_i$ and $C_j$ is associated with a sepset $S_{i,j} \\subseteq C_i \\cap C_j$.\n",
    "\n",
    "An execution of variable elimination defines a cluster graph: We have a cluster for each factor $\\psi_i$ used in the computation, which is associated with the set of variables $C_i = Scope[\\psi_i]$. We\n",
    "draw an edge between two clusters $C_i$ and $C_j$ if the message $\\tau_i$ , produced by eliminating a\n",
    "variable in $\\psi_i$ , is used in the computation of $\\tau_j$.\n",
    "<img src=\"imgs/2.png\" alt=\"drawing\" width=\"350\"/>\n",
    "\n",
    "Because the variable elimination algorithm uses each intermediate factor $\\tau_i$ at most once, the cluster graph induced by an execution of variable elimination is necessarily a tree, called a **clique tree**. We note that although a cluster graph is defined to be an undirected graph, an execution of variable elimination does define a direction for the edges, induced by the flow of messages between the clusters. The directed graph induced by the messages is a directed tree, with all messages flowing toward a single cluster where the final result is computed. This cluster is called the root of the directed tree.\n",
    "## 3.2 Message Passing: Sum Product\n",
    "One way of using a clique tree is simply as guidance for the operations of variable elimination. The factors are computed in the cliques, and messages are sent along the edges. Each clique takes the incoming messages (factors), multiplies them, sums out one or more variables, and sends an outgoing message to another clique. For now, we assume that we are given a valid clique tree.\n",
    "\n",
    "We can now specify a general variable elimination algorithm that can be implemented via **message passing** in a clique tree.\n",
    "\n",
    "Let $\\mathcal{T}$ be a clique tree with cliques $C_1,C_2,\\ldots,C_k$. We begin by multiplying the factors assigned to each clique, resulting in our initial potentials. We then use the clique-tree data structure to pass messages between neighboring cliques, sending all messages toward the root clique. Recall that each factor $\\phi \\in \\Phi$ is assigned to some clique $\\alpha(\\phi)$; we define the initial potential of $C_j$ to be\n",
    "$$\n",
    "\\phi_j(C_j)=\\prod_{\\phi:\\alpha(\\phi)=j}\\phi\n",
    "$$\n",
    "Let $C_r$ be the selected root clique. We now perform sum-product variable elimination over the cliques, starting from the leaves of the clique tree and moving toward the root. The message from $C_i$ to its parent clique $C_j$ is computed using the following **sum-product message passing** rule:\n",
    "$$\n",
    "\\tau_{i \\rightarrow j}=\\sum_{C_i-(C_i\\cap C_j)}\\left(\\phi_i \\cdot \\prod_{k \\in Nb_i-\\{j\\}}\\tau_{k\\rightarrow i}\\right)\n",
    "$$\n",
    "where $Nb_i$ is the set of neighbors of clique $i$.\n",
    "\n",
    "This message passing process proceeds up the tree, culminating at the root clique. When the root clique has received all messages, it multiplies them with its own initial potential. The resulting factor is called the **beliefs** of the root, and it is proportional to the marginal over the root clique's variables.\n",
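    "\n",
    "Here is a minimal numeric sketch of this process (made-up CPTs) for a two-clique tree over the chain model $p(A)p(B|A)p(C|B)$: clique $C_1=\\{A,B\\}$ sends a single message to the root $C_2=\\{B,C\\}$, whose beliefs then equal the marginal $p(B,C)$:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Model p(A) p(B|A) p(C|B); every array lives in a global (A, B, C) layout\n",
    "# with singleton axes for variables outside the factor's scope.\n",
    "p_A  = np.array([0.5, 0.5]).reshape(2, 1, 1)\n",
    "p_BA = np.array([[0.9, 0.1], [0.2, 0.8]]).reshape(2, 2, 1)  # p(B | A)\n",
    "p_CB = np.array([[0.7, 0.3], [0.4, 0.6]]).reshape(1, 2, 2)  # p(C | B)\n",
    "\n",
    "# Clique tree C1 = {A,B} -- C2 = {B,C}, sepset {B}; C2 is the root.\n",
    "# Initial potentials: alpha assigns p_A and p_BA to C1, and p_CB to C2.\n",
    "phi_1 = p_A * p_BA\n",
    "phi_2 = p_CB\n",
    "\n",
    "# Message C1 -> C2: sum out the variables of C1 outside the sepset (here, A)\n",
    "tau_12 = phi_1.sum(axis=0, keepdims=True)\n",
    "\n",
    "# Beliefs at the root: its initial potential times all incoming messages\n",
    "beliefs_2 = phi_2 * tau_12  # proportional to p(B, C)\n",
    "\n",
    "# Check against brute-force marginalization of the full joint\n",
    "joint = p_A * p_BA * p_CB\n",
    "print(np.allclose(np.squeeze(beliefs_2), joint.sum(axis=0)))  # True\n",
    "```\n",
    "Because the assigned factors here are normalized CPTs, the root beliefs are exactly $p(B,C)$; in general they are unnormalized and must be divided by the partition function.\n",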
    "## 3.3 Constructing a Clique Tree\n",
    "So far, we have assumed that a clique tree is given to us. Now, we need ways to construct one. There are two basic approaches: the first is based on variable elimination, and the second on direct graph manipulation.\n",
    "### 3.3.1 Clique trees from variable elimination\n",
    "The first approach is based on variable elimination. As we have discussed above, the execution of a variable elimination algorithm can be associated with a clique tree.\n",
    "### 3.3.2 Clique trees from Chordal Graphs\n",
    "There are several algorithms that construct a clique tree from a graphical model; for details, refer to the Wikipedia article on the [junction tree algorithm](https://en.wikipedia.org/wiki/Junction_tree_algorithm)."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python [default]",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
