{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Random forest from scratch"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. Take average prediction from all the trees  \n",
    "  \n",
    "2. indexes to keep track which row goes to right, which to left hand side of tree . \n",
    "  \n",
    "3. prediction in each node = mean of dependent variables that are in that node (branch) of that tree. \n",
    "  \n",
    "5. __repr__ to change default print of method to helpful formatted stuff (e.g. where that method is present) .\n",
    "  \n",
    "6. **@ notation** -> decorator. Think of **flask** from data acquisition class. here using **@property** decorator i.e. we don't need to put any parenthesis anyore in function (mostly with no arguments)\n",
    "  \n",
    "7. Score similar to minimizing RMSE is minimizing group standard deviatons. think of cat and dog example to think intuitively.  \n",
    "  \n",
    "4. How to find which variable to split on? --> minimize weighted group std deviations for each split.\n",
    "  \n",
    "8. What is computaition complexity of **find_better_split** ? n square. (n loops x check each lhs(i) n times --> n squared) --> changed to order n in next section\n",
    "  \n",
    "9. ** %prun ** similar to ** %time ** : gives internal processes time\n",
    "  \n",
    "10. **alpha** in scatter plot helps if dots are sitting on top of each other  \n",
    "  \n"
   ]
  },
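  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Points 4, 7 and 8 above can be sketched in code. This is an illustrative reconstruction, not the course's exact code (the names `split_score` and `find_better_split` follow the lecture):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def split_score(y_lhs, y_rhs):\n",
    "    # Weighted average of the group standard deviations:\n",
    "    # the lower the score, the more homogeneous the two groups.\n",
    "    n = len(y_lhs) + len(y_rhs)\n",
    "    return (y_lhs.std() * len(y_lhs) + y_rhs.std() * len(y_rhs)) / n\n",
    "\n",
    "def find_better_split(x, y):\n",
    "    # Naive O(n^2) search: n candidate thresholds, and computing\n",
    "    # the two stds for each threshold is itself O(n).\n",
    "    best_score, best_split = float('inf'), None\n",
    "    for i in range(len(x)):\n",
    "        lhs, rhs = x <= x[i], x > x[i]\n",
    "        if rhs.sum() == 0:\n",
    "            continue  # splitting at the max puts all rows on one side\n",
    "        score = split_score(y[lhs], y[rhs])\n",
    "        if score < best_score:\n",
    "            best_score, best_split = score, x[i]\n",
    "    return best_split, best_score\n",
    "\n",
    "x = np.array([1, 2, 3, 10, 11, 12])\n",
    "y = np.array([1.0, 1.0, 1.0, 10.0, 10.0, 10.0])\n",
    "print(find_better_split(x, y))  # the best threshold separates the two groups\n",
    "```"
   ]
  },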
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Explore Cython  \n",
    "\n",
    "* Make stuff faster and easy to edit python codes. How/why?  \n",
    "* similar imports --> **cimport numpy as np**   "
   ]
  },
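  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sketch of what a Cython cell can look like (assumes `%load_ext Cython` has been run first; `mean_cy` is an illustrative name, not from the lecture):\n",
    "\n",
    "```python\n",
    "%%cython\n",
    "cimport numpy as np  # compile-time access to the numpy C API\n",
    "import numpy as np\n",
    "\n",
    "def mean_cy(np.ndarray[np.float64_t, ndim=1] a):\n",
    "    cdef double total = 0.0\n",
    "    cdef int i\n",
    "    for i in range(a.shape[0]):\n",
    "        total += a[i]  # typed loop compiles down to plain C\n",
    "    return total / a.shape[0]\n",
    "```\n",
    "\n",
    "Because the types are declared, the loop runs as compiled C instead of going through the Python interpreter on every iteration."
   ]
  },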
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**-- thought cloud --  **\n",
    "  \n",
    "* Algorithms go obsolete. No point of using SVM today  \n",
    "* Magic number for T to normal distribution = 22. This we should have both in validation/train set.  \n",
    "* Downside of tree based algos --> They don't extrapolate. Linear algos but they are not very accurate. Neural nets are best. \n",
    "* Size of validation set? --> first answer how much accuracy we want . For e.g. for fraud detection, even 0.2% change in accuracy matter, maybe not for differentiating cat and dog.\n",
    "    * A way to think about it --> even with 0.2% differece in accuracy, we could get 50% of change in accuracy from 0.4% . \n",
    "*  set_rf_sample also does with replacement  "
   ]
  },
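  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The validation-set-size point can be made concrete with the binomial standard error (a sketch; `accuracy_std_error` is a hypothetical helper, and each validation prediction is assumed to be an independent Bernoulli trial):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def accuracy_std_error(p, n):\n",
    "    # Standard error of an observed accuracy p measured on a\n",
    "    # validation set of n independent examples.\n",
    "    return math.sqrt(p * (1 - p) / n)\n",
    "\n",
    "# To trust a 0.2% (0.002) difference in accuracy near 99.6%,\n",
    "# the standard error should be well below 0.002:\n",
    "for n in (1000, 10000, 100000):\n",
    "    print(n, round(accuracy_std_error(0.996, n), 5))\n",
    "```"
   ]
  },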
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Questions --  \n",
    "\n",
    "* 22 number or 22% ?\n",
    "* How is cython faster? (what does c++ has to make it faster)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### HW --\n",
    "\n",
    "* Write code (from scratch) for removing redundant features, partial dependence and tree interpretor. \n",
    "* Add gist and nb extension on jupyter notebook"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
