{
 "metadata": {
  "signature": "sha256:4bdd8478f28a388ed16f9cfa5b23c9f99cae4463b19ea3fe21c495c385e450c3"
 },
 "nbformat": 3,
 "nbformat_minor": 0,
 "worksheets": [
  {
   "cells": [
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
       "Linear classification\n",
       "======================================================================\n",
       "\n",
       "Fisher's Linear Discriminant\n",
       "----------------------------\n",
       "\n",
       "The first example implements Fisher's linear discriminant for a\n",
       "two-class problem; the algorithm is described in detail in \"Pattern\n",
       "Recognition and Machine Learning\" by Christopher M. Bishop (Section\n",
       "4.1, p. 186). The idea is to reduce the dimensionality of the input\n",
       "vector x by projecting it onto a 1D space with y = w.T x, where w.T is\n",
       "a row vector of weights, and to choose the weight vector w that\n",
       "maximizes the separation between the projected classes. The program\n",
       "below uses the famous Iris data set: 150 instances with 4 attributes\n",
       "(a 4D feature space) and a target vector with three labels,\n",
       "\"Iris-setosa\", \"Iris-virginica\" and \"Iris-versicolor\". Although there\n",
       "are three classes, here we treat \"Iris-setosa\" as class 1 and all\n",
       "remaining instances as class 2. The Iris data set is available at\n",
       "<http://archive.ics.uci.edu/ml/datasets/Iris/> or here (comma-separated\n",
       "format): [bezdekIris.data.txt](files/attachments/LinearClassification/bezdekIris.data.txt)\n",
       " "
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
       "import numpy as np\n",
       "import matplotlib.pyplot as plt\n",
       "\n",
       "def read_data():\n",
       "    # read the comma-separated Iris data and split it into two classes\n",
       "    with open(\"Iris.txt\", \"r\") as f:\n",
       "        lines = [line.strip() for line in f if line.strip()]\n",
       "\n",
       "    lines = [line.split(\",\") for line in lines]\n",
       "\n",
       "    class1 = np.array([line[:4] for line in lines if line[-1] == \"Iris-setosa\"], dtype=float)\n",
       "    class2 = np.array([line[:4] for line in lines if line[-1] != \"Iris-setosa\"], dtype=float)\n",
       "\n",
       "    return class1, class2\n",
       "\n",
       "def main():\n",
       "    class1, class2 = read_data()\n",
       "\n",
       "    mean1 = np.mean(class1, axis=0)\n",
       "    mean2 = np.mean(class2, axis=0)\n",
       "\n",
       "    # within-class scatter matrix\n",
       "    Sw = np.dot((class1 - mean1).T, class1 - mean1) + np.dot((class2 - mean2).T, class2 - mean2)\n",
       "\n",
       "    # weight vector that maximizes class separation: w ~ Sw^-1 (m2 - m1)\n",
       "    w = np.dot(np.linalg.inv(Sw), mean2 - mean1)\n",
       "\n",
       "    print(\"vector of max weights\", w)\n",
       "    # projection of both classes onto the 1D space\n",
       "    plt.plot(np.dot(class1, w), [0] * class1.shape[0], \"bo\", label=\"Iris-setosa\")\n",
       "    plt.plot(np.dot(class2, w), [0] * class2.shape[0], \"go\", label=\"Iris-versicolor and Iris-virginica\")\n",
       "    plt.legend()\n",
       "\n",
       "    plt.show()\n",
       "\n",
       "main()"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
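     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "The projection above can also be turned into a simple classifier. The\n",
       "following cell is only a sketch (it assumes the `read_data` helper and\n",
       "the `Iris.txt` file from the previous cell): threshold the projected\n",
       "value y = w.T x at the midpoint between the two projected class means\n",
       "and count how many points fall on the correct side."
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
       "import numpy as np\n",
       "\n",
       "class1, class2 = read_data()\n",
       "\n",
       "# recompute w ~ Sw^-1 (m2 - m1), as in the cell above\n",
       "mean1, mean2 = class1.mean(axis=0), class2.mean(axis=0)\n",
       "Sw = np.dot((class1 - mean1).T, class1 - mean1) + np.dot((class2 - mean2).T, class2 - mean2)\n",
       "w = np.dot(np.linalg.inv(Sw), mean2 - mean1)\n",
       "\n",
       "# decision threshold: midpoint of the two projected class means\n",
       "# (mean2 projects above mean1 because (m2-m1).T Sw^-1 (m2-m1) > 0)\n",
       "c = 0.5 * (np.dot(mean1, w) + np.dot(mean2, w))\n",
       "\n",
       "# class 1 projects below the threshold, class 2 above it\n",
       "correct = np.sum(np.dot(class1, w) < c) + np.sum(np.dot(class2, w) >= c)\n",
       "print(\"correctly classified:\", correct, \"of\", len(class1) + len(class2))"
      ],
      "language": "python",
      "metadata": {},
      "outputs": []
     },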
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "![](files/attachments/LinearClassification/Fisher_disrciminant.JPG)\n",
      "\n",
      "Probabilistic Generative Model\n",
      "------------------------------\n",
      "\n",
       "This program implements a probabilistic generative model for the\n",
       "K-class problem, also described in \"Pattern Recognition and Machine\n",
       "Learning\" by Christopher M. Bishop (Section 4.2, p. 196). We learn the\n",
       "class-conditional densities (likelihoods) p(x|Ck) and the prior\n",
       "probabilities p(Ck) for each class k, and then compute the posterior\n",
       "probabilities p(Ck|x) with Bayes' rule. Here each p(x|Ck) is assumed\n",
       "to be a 4D Gaussian with mean vector uk and covariance matrix Sk\n",
       "estimated from class k, and p(Ck) = 1/3 for every class. The program\n",
       "computes the posteriors (the pc variables below) and assigns each\n",
       "point to the class with the largest one, i.e. p(Ck|x) = 1 and\n",
       "p(Cj|x) = 0 for all j != k."
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
       "import numpy as np\n",
       "import matplotlib.pyplot as plt\n",
       "import math\n",
       "\n",
       "def read_data():\n",
       "    with open(\"Iris.txt\", \"r\") as f:\n",
       "        lines = [line.strip() for line in f if line.strip()]\n",
       "\n",
       "    lines = [line.split(\",\") for line in lines]\n",
       "\n",
       "    data = np.array([line[:4] for line in lines], dtype=float)\n",
       "\n",
       "    class1 = np.array([line[:4] for line in lines if line[-1] == \"Iris-setosa\"], dtype=float)\n",
       "    class2 = np.array([line[:4] for line in lines if line[-1] == \"Iris-virginica\"], dtype=float)\n",
       "    class3 = np.array([line[:4] for line in lines if line[-1] == \"Iris-versicolor\"], dtype=float)\n",
       "\n",
       "    # target vector encoded with the 1-of-K scheme:\n",
       "    # column 0 = setosa, column 1 = versicolor, column 2 = virginica\n",
       "    labels = [line[-1] for line in lines]\n",
       "    t = np.zeros(shape=(len(data), 3))\n",
       "    for i, label in enumerate(labels):\n",
       "        if label == \"Iris-setosa\": t[i][0] = 1\n",
       "        elif label == \"Iris-versicolor\": t[i][1] = 1\n",
       "        elif label == \"Iris-virginica\": t[i][2] = 1\n",
       "\n",
       "    return class1, class2, class3, data, t\n",
       "\n",
       "def gaussian(x, mean, cov):\n",
       "    # 4D Gaussian density: (2*pi)^(-d/2) |S|^(-1/2) exp(-(x-u).T S^-1 (x-u) / 2), d = 4\n",
       "    xm = np.reshape(x - mean, (-1, 1))\n",
       "    px = 1 / math.pow(2.0 * math.pi, 2) / math.sqrt(np.linalg.det(cov)) * math.exp(-np.dot(np.dot(xm.T, np.linalg.inv(cov)), xm) / 2)\n",
       "    return px\n",
       "\n",
       "def main():\n",
       "    class1, class2, class3, data, t = read_data()\n",
       "\n",
       "    count = np.zeros(shape=(len(data), 1))\n",
       "    t_assigned = np.zeros(shape=(len(data), 3))\n",
       "\n",
       "    # compute the mean of each class\n",
       "    mean1 = class1.mean(axis=0)\n",
       "    mean2 = class2.mean(axis=0)\n",
       "    mean3 = class3.mean(axis=0)\n",
       "    # compute the covariance matrices (rows are observations, columns are variables)\n",
       "    cov1 = np.cov(class1, rowvar=0)\n",
       "    cov2 = np.cov(class2, rowvar=0)\n",
       "    cov3 = np.cov(class3, rowvar=0)\n",
       "\n",
       "    for i in range(len(data)):\n",
       "        # likelihoods p(x|Ck) weighted by the priors p(Ck) = 1/3\n",
       "        px1 = (1 / 3.0) * gaussian(data[i], mean1, cov1)  # setosa\n",
       "        px2 = (1 / 3.0) * gaussian(data[i], mean2, cov2)  # virginica\n",
       "        px3 = (1 / 3.0) * gaussian(data[i], mean3, cov3)  # versicolor\n",
       "        # posterior probabilities p(Ck|x) by Bayes' rule\n",
       "        s = px1 + px2 + px3\n",
       "        pc1, pc2, pc3 = px1 / s, px2 / s, px3 / s\n",
       "        # assign each point to the class with the largest posterior\n",
       "        if pc1 > pc2 and pc1 > pc3: t_assigned[i][0] = 1    # setosa\n",
       "        elif pc3 > pc1 and pc3 > pc2: t_assigned[i][1] = 1  # versicolor\n",
       "        elif pc2 > pc1 and pc2 > pc3: t_assigned[i][2] = 1  # virginica\n",
       "        # count the misclassified points\n",
       "        if not np.array_equal(t[i], t_assigned[i]): count[i] = 1\n",
       "\n",
       "    # plot the assigned class (level 1, 2 or 3) of every point against its index\n",
       "    assigned = t_assigned.argmax(axis=1)\n",
       "    idx = np.arange(len(data))\n",
       "    plt.plot(idx[assigned == 0], np.ones(np.sum(assigned == 0)), \"bo\", label=\"Iris-setosa\")\n",
       "    plt.plot(idx[assigned == 1], 2 * np.ones(np.sum(assigned == 1)), \"go\", label=\"Iris-versicolor\")\n",
       "    plt.plot(idx[assigned == 2], 3 * np.ones(np.sum(assigned == 2)), \"ro\", label=\"Iris-virginica\")\n",
       "    plt.legend()\n",
       "    plt.show()\n",
       "\n",
       "    print(\"number of misclassifications\", int(count.sum()), \"assigned labels to data points\", t_assigned, \"target data\", t)\n",
       "\n",
       "main()"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
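     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "As a quick sanity check of the hand-written `gaussian` density, the\n",
       "sketch below (assuming SciPy is available) compares the closed-form\n",
       "expression with `scipy.stats.multivariate_normal` on hypothetical 4D\n",
       "parameters chosen only for illustration; the two values should agree."
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
       "import numpy as np\n",
       "from scipy.stats import multivariate_normal\n",
       "\n",
       "# hypothetical parameters, for illustration only\n",
       "mean = np.array([5.0, 3.4, 1.5, 0.2])\n",
       "cov = np.eye(4) * 0.1\n",
       "x = np.array([5.1, 3.5, 1.4, 0.2])\n",
       "\n",
       "# closed-form 4D Gaussian density, as used in gaussian() above\n",
       "xm = (x - mean).reshape(-1, 1)\n",
       "px = 1 / (2 * np.pi) ** 2 / np.sqrt(np.linalg.det(cov)) * np.exp(-np.dot(np.dot(xm.T, np.linalg.inv(cov)), xm) / 2)\n",
       "\n",
       "print(px.item(), multivariate_normal.pdf(x, mean=mean, cov=cov))"
      ],
      "language": "python",
      "metadata": {},
      "outputs": []
     },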
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
       "This program resulted in 3 misclassifications out of the 150 instances.\n",
      "\n",
      "![](files/attachments/LinearClassification/Probabilistic_model.PNG)"
     ]
    }
   ],
   "metadata": {}
  }
 ]
}