{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "<h1><center>Outline</center></h1>\n",
     "# $\\S$ Basic\n",
     "* 01-[Introduction to machine learning](https://github.com/thu-xd/Notes_on_ML/blob/master/01-Introduction/Introduction.ipynb)\n",
    "* 02-[Basic probability](https://github.com/thu-xd/Notes_on_ML/blob/master/02-Basic_probability/Probability.ipynb)\n",
     "* 03-[Basic information theory](https://github.com/thu-xd/Notes_on_ML/blob/master/03-Basic_information_theory/Information_theory.ipynb)\n",
    "\n",
     "# $\\S$ Supervised learning\n",
    "## Generative Models\n",
     "Generative models are based on Bayes' rule:\n",
    "$$\n",
    "p(y=c|\\vec{x},\\vec{\\theta})=\\frac{p(y=c|\\vec{\\theta})p(\\vec{x}|y=c,\\vec{\\theta})}{p(\\vec{x}|\\vec{\\theta})}\n",
    "$$\n",
     "The basic task of a generative model is to model the class-conditional distribution $p(\\vec{x}|y=c,\\vec{\\theta})$, which is a multivariate distribution over $\\vec{x}$ for each class $c$.\n",
    "* 04-[Naive bayes](https://github.com/thu-xd/Notes_on_ML/blob/master/04-Naive_Bayes/Naive%20Bayes.ipynb)\n",
    "* 05-[Gaussian discriminant analysis](https://github.com/thu-xd/Notes_on_ML/blob/master/05-Gaussian_discriminant_analysis/Gaussian_discriminant_analysis.ipynb)\n",
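     "\n",
     "As a minimal sketch of the rule above (assuming 1-D Gaussian class-conditionals; the priors, means, and variances below are illustrative, not from the notes):\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "def gaussian_pdf(x, mean, var):\n",
     "    # 1-D Gaussian density, used here as p(x | y=c)\n",
     "    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)\n",
     "\n",
     "def posterior(x, priors, means, variances):\n",
     "    # p(y=c | x) via Bayes' rule, one entry per class c\n",
     "    likelihoods = np.array([gaussian_pdf(x, m, v)\n",
     "                            for m, v in zip(means, variances)])\n",
     "    joint = np.asarray(priors) * likelihoods  # p(y=c) p(x | y=c)\n",
     "    return joint / joint.sum()                # normalize by p(x)\n",
     "\n",
     "# Two classes with equal priors; x = 0.9 lies closer to class 1's mean.\n",
     "p = posterior(0.9, priors=[0.5, 0.5], means=[0.0, 1.0], variances=[1.0, 1.0])\n",
     "```\n",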
    "\n",
     "## Discriminative Models\n",
     "Discriminative models differ from generative models in that they directly fit a model of the form $p(y|\\vec{x})$.\n",
    "* 06-[Linear_regression](https://github.com/thu-xd/Notes_on_ML/blob/master/06-Linear_regression/Linear_regression.ipynb)\n",
    "* 07-[Logistic regression](https://github.com/thu-xd/Notes_on_ML/blob/master/07-Logistic_regression/Logistic_regression.ipynb)\n",
    "* 08-[Generalized linear models](https://github.com/thu-xd/Notes_on_ML/blob/master/08-Generalized_linear_models/Generalized_linear_models.ipynb)\n",
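     "\n",
     "An illustrative sketch of the discriminative form: logistic regression models $p(y=1|\\vec{x})$ directly, with no model of $p(\\vec{x}|y)$ (the weights below are made up, not fit to data):\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "def sigmoid(z):\n",
     "    return 1.0 / (1.0 + np.exp(-z))\n",
     "\n",
     "def predict_proba(x, w, b):\n",
     "    # p(y=1 | x) = sigmoid(w . x + b)\n",
     "    return sigmoid(np.dot(w, x) + b)\n",
     "\n",
     "p1 = predict_proba(np.array([1.0, 2.0]), w=np.array([0.5, -0.25]), b=0.0)\n",
     "```\n",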
    "\n",
     "## Tree-based Models\n",
    "* 09-[Decision tree](https://github.com/thu-xd/Notes_on_ML/blob/master/09-Decision_tree/Decision%20Tree.ipynb)\n",
    "\n",
     "## Kernel-based Models\n",
     "Kernel-based models are used mainly for two purposes: (1) in situations where we do not have a fixed-length feature vector, and (2) in situations where we want to perform non-linear transformations on the raw features in an implicit form.\n",
    "* 10-[Kernel](https://github.com/thu-xd/Notes_on_ML/blob/master/10-Kernel/Kernels.ipynb)\n",
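     "\n",
     "A minimal sketch of the implicit-transformation idea, using an RBF kernel (the `gamma` value is arbitrary):\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "def rbf_kernel(x1, x2, gamma=1.0):\n",
     "    # Similarity of two inputs without ever computing the\n",
     "    # (infinite-dimensional) RBF feature map explicitly\n",
     "    diff = np.asarray(x1) - np.asarray(x2)\n",
     "    return np.exp(-gamma * np.dot(diff, diff))\n",
     "\n",
     "k_same = rbf_kernel([0.0, 0.0], [0.0, 0.0])  # identical inputs\n",
     "k_far = rbf_kernel([0.0, 0.0], [3.0, 4.0])   # distant inputs\n",
     "```\n",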
    "\n",
    "## Adaptive basis function models\n",
     "In the previous chapters, we discussed many models that take effective features as inputs, such as linear regression and logistic regression. However, effective features are not always at hand. Most of the time, we need to apply some transformation to the raw features to extract effective non-linear features, which is tricky and requires much domain knowledge. Kernel-based models provide a powerful way to create non-linear features from the raw features in an implicit way. Although this can work well, it relies on having a good kernel function to measure the similarity between input vectors, and coming up with a good kernel is often quite difficult.\n",
    "\n",
     "An alternative approach is to learn useful features $\\phi(\\vec{x})$ directly from the input data, and then use these features as inputs to a generalized linear model:\n",
    "$$\n",
    "g\\left[\\mu(\\vec{x})\\right]=w_0+\\sum_{m=1}^Mw_m\\phi_m(\\vec{x})\n",
    "$$\n",
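     "A minimal numeric sketch of this form with an identity link $g$ and fixed RBF basis functions (in adaptive-basis-function models the $\\phi_m$ themselves are learned; the centers and weights below are purely illustrative):\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "def phi(x, centers, width=1.0):\n",
     "    # M fixed RBF basis functions phi_m(x), one per center\n",
     "    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))\n",
     "\n",
     "def mu(x, w0, w, centers):\n",
     "    # mu(x) = w0 + sum_m w_m * phi_m(x), identity link g\n",
     "    return w0 + np.dot(w, phi(x, centers))\n",
     "\n",
     "centers = np.array([-1.0, 0.0, 1.0])\n",
     "w = np.array([0.2, 0.5, 0.3])\n",
     "y = mu(0.0, w0=0.1, w=w, centers=centers)\n",
     "```\n",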
    "* 11-[Additive Models](https://github.com/thu-xd/Notes_on_ML/blob/master/11-Adaptive_basis_function_models/Additive%20models.ipynb)\n",
    "* 12-[Boosting](https://github.com/thu-xd/Notes_on_ML/blob/master/12-Boosting/Boosting.ipynb)\n",
    "\n",
    "# $\\S$ Unsupervised learning\n",
    "Unsupervised learning is about estimating the joint probability distribution of the inputs $\\{\\vec{x}_i\\}_{i=1}^N$.\n",
    "## Directed graphical models\n",
    "* 13-[Directed graphical models](https://github.com/thu-xd/Notes_on_ML/blob/master/13-Directed_graphical_models/Directed_graphical_models.ipynb)\n",
    "\n",
     "#### Real-valued data\n",
    "* 14-[Mixture models](https://github.com/thu-xd/Notes_on_ML/blob/master/14-Mixture_models/Mixture_models.ipynb)\n",
    "* 15-[Latent linear models](https://github.com/thu-xd/Notes_on_ML/blob/master/15-Latent_linear_models/Latent_linear_models.ipynb)\n",
    "\n",
    "#### Sequential data\n",
    "* 16-[Markov models](https://github.com/thu-xd/Notes_on_ML/blob/master/16-Markov_models/Markov%20models.ipynb)\n",
    "* 17-[State space models](https://github.com/thu-xd/Notes_on_ML/blob/master/17-State_space_models/State%20space%20models.ipynb)\n",
    "\n",
    "## Undirected graphical models\n",
    "* 18-[Markov Random Fields](https://github.com/thu-xd/Notes_on_ML/blob/master/18-Markov_random_fields/Markov_random_fields.ipynb)\n",
    "\n",
    "## Inference in generalized graphical models\n",
    "* 19-[Exact inference](https://github.com/thu-xd/Notes_on_ML/blob/master/19-Exact_inference_for_graphical_models/Exact_inference_for_graphical_models.ipynb)\n",
    "* 20-Variational inference (On the way)\n",
    "* 21-[Monte Carlo sampling](https://github.com/thu-xd/Notes_on_ML/blob/master/21-Monte_Carlo_inference/Monte_Carlo_Approximation.ipynb)\n",
    "* 22-[Markov chain Monte Carlo sampling](https://github.com/thu-xd/Notes_on_ML/blob/master/22-Markov_chain_Monte_Carlo_inference/Markov_chain_Monte_Carlo.ipynb)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python [default]",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
