{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Bagging and Boosting"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If the different trees are uncorrelated with each other, their residuals are also uncorrelated, so on average the errors cancel out (tend to zero) under bagging"
   ]
  },
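  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal numpy sketch (illustrative, not from the lecture): if each tree's prediction error is independent zero-mean noise, averaging the trees (bagging) shrinks the combined error toward zero."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "true_value = 10.0\n",
    "n_trees = 1000\n",
    "\n",
    "# each tree's prediction = truth + independent (uncorrelated) noise\n",
    "predictions = true_value + rng.normal(0, 2.0, size=n_trees)\n",
    "\n",
    "# bagged prediction: average over all trees\n",
    "bagged = predictions.mean()\n",
    "\n",
    "print(abs(predictions[0] - true_value), abs(bagged - true_value))"
   ]
  },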
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Checklist for RF to make it work\n",
    "* data in numerical format -> i.e. handle (encode) categorical variables\n",
    "* handle missing values\n"
   ]
  },
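  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Both checklist items in a small pandas sketch (the column names are made up for illustration): encode a categorical column as integer codes, and fill missing numeric values with the column median."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "df = pd.DataFrame({'color': ['red', 'blue', None, 'red'],\n",
    "                   'size': [1.0, None, 3.0, 4.0]})\n",
    "\n",
    "# categorical -> integer codes (missing values become -1)\n",
    "df['color'] = df['color'].astype('category').cat.codes\n",
    "\n",
    "# missing numeric values -> fill with the column median\n",
    "df['size'] = df['size'].fillna(df['size'].median())\n",
    "\n",
    "print(df)"
   ]
  },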
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Information gain\n",
    "* Keep checking how the validation score improves with each successive split of the tree"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Hyperparameters:\n",
    "* **max_depth** : by default a tree grows to depth ~ log2(n), where n is the number of rows in the data. {! think of each node splitting into 2 subnodes until the tree ends with 1 row per node, i.e. n leaf nodes}. If we choose min_samples_leaf = 2, the max depth becomes ~ log2(n) - 1\n",
    "* **max_features** : 0.5 means 50% of all features, and a different random subset is drawn for each split\n",
    "* **min_samples_leaf** : minimum number of rows/observations allowed in each leaf node\n",
    "\n",
    "#### Why subsample?\n",
    "##### What do we need for a good RF:\n",
    "1. Each tree should be accurate --> a minus for subsampling\n",
    "2. Low correlation between trees --> a plus for subsampling\n",
    "\n",
    "Therefore, by choosing the right hyperparameters and subsample size, we reduce the correlation among the trees of the forest so that on average they perform well!"
   ]
  },
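  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A hedged scikit-learn sketch showing these hyperparameters together (the parameter names follow `RandomForestRegressor`; the dataset and values are illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import make_regression\n",
    "from sklearn.ensemble import RandomForestRegressor\n",
    "\n",
    "X, y = make_regression(n_samples=200, n_features=10, random_state=0)\n",
    "\n",
    "model = RandomForestRegressor(\n",
    "    n_estimators=50,\n",
    "    max_features=0.5,    # each split considers a random 50% of the features\n",
    "    min_samples_leaf=3,  # at least 3 rows in every leaf\n",
    "    max_depth=None,      # grow until the min_samples_leaf limit stops it\n",
    "    random_state=0,\n",
    ")\n",
    "model.fit(X, y)\n",
    "print(model.score(X, y))"
   ]
  },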
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* Look at the 5 scores printed by the RF `print_score` function. They can help identify a feature that has high feature importance but is decreasing the validation score"
   ]
  },
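  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`print_score` here refers to a helper from the fast.ai ML course; a minimal re-implementation sketch, assuming it prints [train RMSE, valid RMSE, train R^2, valid R^2, OOB R^2 if available]:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def rmse(pred, actual):\n",
    "    return np.sqrt(((pred - actual) ** 2).mean())\n",
    "\n",
    "def print_score(model, X_train, y_train, X_valid, y_valid):\n",
    "    # [train RMSE, valid RMSE, train R^2, valid R^2, (OOB R^2 if enabled)]\n",
    "    scores = [rmse(model.predict(X_train), y_train),\n",
    "              rmse(model.predict(X_valid), y_valid),\n",
    "              model.score(X_train, y_train),\n",
    "              model.score(X_valid, y_valid)]\n",
    "    if hasattr(model, 'oob_score_'):\n",
    "        scores.append(model.oob_score_)\n",
    "    print(scores)"
   ]
  },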
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Interpretation of RF\n",
    "* Look at the std dev of the predictions across the different trees\n",
    "\n",
    "#### Important features\n",
    "(can use some threshold cut-off)\n",
    "* Removing unimportant features will not necessarily improve accuracy, but getting rid of features that are not important is a way to simplify the model and make the remaining importances more reliable\n",
    "\n",
    "**Why can't we take only 1 feature per tree and make a forest?**\n",
    "-- We would not capture interactions then. E.g. if the probability of a claim depends on how old the car is, we need both year claimed and year sold; taking only 1 feature per tree cannot capture that.\n",
    "\n",
    "**Why do we need one-hot encoding?** -- Rule of thumb: one-hot encode columns of cardinality <= x (x ~ 7)\n",
    "-- Say feature C1 has 5 levels {VL, L, M, H, VH} and we are only interested in the VL level. Using C1 as a single column spreads it over many nodes and dilutes its importance, but with 5 different 0/1 columns, only the VL column comes out as important.\n",
    "\n",
    "Doing this can reveal that a particular level of some feature turns out to be important.\n",
    "\n",
    "**Rank correlation (spearmanr)** -- The (Pearson) correlation coefficient only captures linear relationships, so we can use rank correlation to check whether two variables are related monotonically, linear or not. The idea is to first convert the values to ranks, then compute the correlation coefficient on the ranks\n",
    "\n",
    "#### Plots for interpretation and insights:\n",
    "* univariate plots\n",
    "* partial dependence plots (pdp)\n"
   ]
  },
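  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "An illustrative pandas sketch of the one-hot point (the level names follow the example above): expanding C1 into one 0/1 column per level lets a tree pick out VL in a single split."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "df = pd.DataFrame({'C1': ['VL', 'L', 'M', 'H', 'VH', 'VL']})\n",
    "\n",
    "# one indicator column per level; a tree can now split directly on C1_VL\n",
    "dummies = pd.get_dummies(df['C1'], prefix='C1')\n",
    "print(dummies.columns.tolist())"
   ]
  },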
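  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick scipy sketch of rank correlation: y is a monotonic but non-linear function of x, so Spearman's rho is 1 even though Pearson's r is not."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from scipy.stats import pearsonr, spearmanr\n",
    "\n",
    "x = np.arange(1, 101, dtype=float)\n",
    "y = x ** 3  # monotonic but non-linear in x\n",
    "\n",
    "rho, _ = spearmanr(x, y)  # correlation of the ranks\n",
    "r, _ = pearsonr(x, y)     # plain linear correlation\n",
    "print(rho, r)"
   ]
  },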
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Questions:\n",
    "1. "
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
