{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "eeb6f225-f12b-4734-b4d8-bbad158489b4",
    "_uuid": "14822e2720fab974cf6d25aa5d70ad8ffd15c376"
   },
   "source": [
    "# Table of Contents\n",
    "\n",
    "1. &nbsp; [Introduction](#1.-Introduction)\n",
    "2. &nbsp; [Preamble](#2.-Preamble)\n",
    "3. &nbsp; [Helpers](#3.-Helpers)\n",
    "4. &nbsp; [Leaderboard](#4.-Leaderboard)\n",
    "5. &nbsp; [Feature Engineering](#5.-Feature-Engineering)\n",
    "6. &nbsp; [Pipeline Preprocessing](#6.-Pipeline-Preprocessing)\n",
    "7. &nbsp; [Holdout + CV](#7.-Holdout-+-CV)\n",
    "8. &nbsp; [Final Words](#8.-Final-Words)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "ee2d9d9a-6251-40e0-bb3a-f2ade0b0a2e2",
    "_uuid": "bfae23d91a42610cc3f78232becb0277cc7e243c"
   },
   "source": [
    "# 1. Introduction\n",
    "\n",
    "This notebook is an XGBoost starter for the Titanic dataset, featuring no missing data imputation and no data binning.\n",
    "\n",
    "No EDA since there's plenty of awesome EDA for this dataset.\n",
    "\n",
    "Questions and feedback are welcome!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "2227856b-6764-4623-945c-842db512d171",
    "_uuid": "9da4de755a1c71324c008c79062ec81e0e3e0357"
   },
   "source": [
    "## Credit\n",
    "\n",
    "Moral of the story:\n",
    "> Generally, grouping passengers is a good way to improve your score. Try searching for groups.\n",
    "\n",
    "-- Konstantin\n",
    "\n",
    "I learned a lot from various kernels and discussions.  I want to especially credit:\n",
    "\n",
    "- [How am I doing with my score](https://www.kaggle.com/pliptor/how-am-i-doing-with-my-score) by [Oscar Takeshita](https://www.kaggle.com/pliptor)\n",
    "- [Titanic [0.82] - [0.83]](https://www.kaggle.com/konstantinmasich/titanic-0-82-0-83) by [Konstantin](https://www.kaggle.com/konstantinmasich)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "dd57ca25-fc8d-47a4-bb66-1e07b6c0ac62",
    "_uuid": "0c2eee01f648171a55fbdae70e0750e7a585da75"
   },
   "source": [
    "I also recommend checking out:\n",
    "\n",
    "### sklearn pipelines + pandas\n",
    "- [Deploying Machine Learning using sklearn pipelines](https://www.youtube.com/watch?v=URdnFlZnlaE) (YouTube) by Kevin Goetsch\n",
    "- [Mind the Gap! Bridging the pandas – scikit learn dtype divide](https://www.youtube.com/watch?v=KLPtEBokqQ0) (YouTube) by Tom Augspurger\n",
    "- Kevin Goetsch's github repo: https://github.com/Kgoetsch/sklearn_pipeline_enhancements\n",
    "- Julie Michelman's github repo: https://github.com/jem1031/pandas-pipelines-custom-transformers\n",
    "\n",
    "### XGBoost\n",
    "- [Walkthrough](https://www.youtube.com/watch?v=ufHo8vbk6g4) (YouTube) by [Tong He](https://www.kaggle.com/hetong007)\n",
    "- [Open Source Tools and Data Science Competitions](https://www.youtube.com/watch?v=7YnVZrabTA8) (YouTube) by [Owen Zhang](https://www.kaggle.com/owenzhang1)\n",
    "- [Parameters](https://github.com/dmlc/xgboost/blob/master/doc/parameter.md) (github)\n",
    "- [Python API](http://xgboost.readthedocs.io/en/latest/python/python_api.html) (readthedocs)\n",
    "\n",
    "### Titanic\n",
    "- https://www.encyclopedia-titanica.org/\n",
    "- [Titanic Cutaway Diagram](https://commons.wikimedia.org/wiki/File:Titanic_cutaway_diagram.png) (Wikimedia)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "94113272-d401-46f2-be37-724588e481e5",
    "_uuid": "dd3b89587a14f2a86a89157a422b3dea9286cc8c"
   },
   "source": [
    "## License\n",
    "\n",
    "My work is licensed under CC0:\n",
    "\n",
    "- Overview: https://creativecommons.org/publicdomain/zero/1.0/\n",
    "- Legal code: https://creativecommons.org/publicdomain/zero/1.0/legalcode.txt\n",
    "\n",
    "All other rights remain with their respective owners."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "39a26201-84ae-4f1e-8bb8-f0e96fd491da",
    "_uuid": "c0971a6b7c7a9d92ad313f95b529ac204e5e5c9a",
    "collapsed": true
   },
   "source": [
    "# 2. Preamble\n",
    "\n",
    "The usual suspects."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "b302bd8f-396c-4453-bef7-72ba6bfece9a",
    "_uuid": "a82a6859aaac5b08af2dd1df85c5b49d8adaf3bc",
    "collapsed": true
   },
   "source": [
    "## 2.1 Jupyter Magic"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "_cell_guid": "b7f30342-3763-4262-8590-72e5bf843594",
    "_kg_hide-input": true,
    "_kg_hide-output": false,
    "_uuid": "b794e78774e52872fb21a639bbb5f2772a77a601",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "%load_ext autoreload\n",
    "%autoreload 2\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "df7a307c-26f6-4115-9af6-8ef809d9a1e8",
    "_uuid": "3df7e6a9a10bcceb722154d1eb4c1767307824dc",
    "collapsed": true
   },
   "source": [
    "## 2.2 Imports"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "_cell_guid": "5a1ca71a-58b7-41db-992a-0f6d88a7794d",
    "_kg_hide-input": true,
    "_uuid": "999022daaa90bab4b63dd64b1e2098f968ab60b9",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from functools import partial\n",
    "\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import seaborn as sns\n",
    "import xgboost as xgb\n",
    "from matplotlib import pyplot as plt\n",
    "from sklearn.pipeline import make_pipeline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "801e4f71-ea31-40cb-b3bc-78cbe674547b",
    "_uuid": "ce64801b45bb695046c67e2916a7b91865fd7535",
    "collapsed": true
   },
   "source": [
    "## 2.3 Library Settings"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "_cell_guid": "42face5d-8de8-48f6-9751-07729a9d13df",
    "_kg_hide-input": true,
    "_uuid": "c6f6a159337669ecb7ab507ab0e293bcf487219b",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "plt.rcParams['figure.figsize'] = (13,4)\n",
    "sns.set(\n",
    "    style='whitegrid',\n",
    "    color_codes=True,\n",
    "    font_scale=1.5)\n",
    "np.set_printoptions(\n",
    "    suppress=True,\n",
    "    linewidth=200)\n",
    "pd.set_option(\n",
    "    'display.max_rows', 1000,\n",
    "    'display.max_columns', None,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "1aee4046-a576-4f8b-bd94-70cee8febe8a",
    "_uuid": "d9908c7094c39be39cfc2b94a7b8e9a8a4dc3e81",
    "collapsed": true
   },
   "source": [
    "## 2.4 Globals"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "_cell_guid": "534d7263-e5ff-485f-9344-3ace5a8954a7",
    "_uuid": "4df87052a36ff0ba9ab9630d7d0c4b68696e2910",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "SEED = 0\n",
    "SEED_LIST = 2 ** np.array([2, 3, 5, 7, 11, 13, 17, 19, 23, 29])\n",
    "VAL_SIZE = 0.3"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "671df15a-25c8-4a44-a515-a7903072950e",
    "_uuid": "01e2ab8a4aac8bef04f7ae2ea6499da89f3bb78f"
   },
   "source": [
    "## 2.5 File Paths"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "_cell_guid": "dfe95984-0fca-4110-8ab9-880b9cd4bd86",
    "_kg_hide-input": true,
    "_uuid": "93f6e2efc3a4e450837cf71567bb04b3b23d53d2",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "train_csv       = '../input/titanic/train.csv'\n",
    "test_csv        = '../input/titanic/test.csv'\n",
    "submit_csv      = '../input/titanic/gender_submission.csv'\n",
    "leaderboard_csv = '../input/titanic-public-leaderboard/titanic-publicleaderboard.csv'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "b8a3a9a3-8fee-4b78-84d7-a9c7615d6386",
    "_uuid": "879d2228af33e8c2b9b9931ee8e2e50e66324272",
    "collapsed": true
   },
   "source": [
    "# 3. Helpers\n",
    "\n",
    "The true carry."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "939a4bb4-10e1-4331-b8f3-b01a92d279f3",
    "_uuid": "6d548cfddfb55613378df5ea41747942dec27755",
    "collapsed": true
   },
   "source": [
    "## 3.1 XGBoost"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "4c081848-227e-46c6-af0b-21d7f1b97451",
    "_uuid": "d6ae8f806800c9647e701c99e950bb0e631eb5e1"
   },
   "source": [
    "### Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "_cell_guid": "90939e80-3b16-412b-b33a-78521f0a7170",
    "_kg_hide-input": true,
    "_uuid": "1250d21e1d441d14674b64171101d4d8890ac094",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from sklearn.model_selection import RepeatedStratifiedKFold\n",
    "\n",
    "def cv(params, n=100, n_cv=5, k=5):\n",
    "    cv_results = xgb.cv(\n",
    "        params,\n",
    "        dfull,\n",
    "        num_boost_round=n,\n",
    "        folds=RepeatedStratifiedKFold(n_splits=k, n_repeats=n_cv, random_state=SEED),\n",
    "        seed=SEED,\n",
    "    )\n",
    "    plot_cv(cv_results)\n",
    "    return cv_results\n",
    "\n",
    "def holdout(params, n=100, early_stopping_rounds=None):\n",
    "    evals = {}\n",
    "    m = xgb.train(\n",
    "        params,\n",
    "        dtrain,\n",
    "        num_boost_round=n,\n",
    "        evals=[(dtrain, 'train'), (dval, 'val')],\n",
    "        evals_result=evals,\n",
    "        early_stopping_rounds=early_stopping_rounds,\n",
    "        verbose_eval=None,\n",
    "    )\n",
    "    plot_evals(evals)\n",
    "    return evals\n",
    "\n",
    "def train(params, n):\n",
    "    return xgb.train(\n",
    "        params,\n",
    "        dfull,\n",
    "        num_boost_round=n,\n",
    "        verbose_eval=None,\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "8be5f0dd-23ce-4916-8ccc-918552d3d56c",
    "_uuid": "6f2395995cdbad914a7442eb73c44ed756179e98"
   },
   "source": [
    "### Plotting"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "_cell_guid": "d5c56ac5-5e42-4577-b84b-9ad87d11f072",
    "_kg_hide-input": true,
    "_uuid": "814a245b0b61bc7a76e66f59a782d9717325f0f1",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def roll(ls, w=5):\n",
    "    return pd.Series(ls).rolling(window=w).mean()\n",
    "\n",
    "def plot(a, b, c, d):\n",
    "    plt.subplot(1, 2, 1)\n",
    "    plt.plot(a), plt.plot(b)\n",
    "    plt.ylim(0, 0.7)\n",
    "\n",
    "    plt.subplot(1, 2, 2)\n",
    "    plt.plot(c), plt.plot(d)\n",
    "    plt.ylim(0, 0.2)\n",
    "\n",
    "def plot_cv(cv_dict, start=0, stop=None):\n",
    "    keys = [\n",
    "        'train-logloss-mean',\n",
    "        'test-logloss-mean',\n",
    "        'train-error-mean',\n",
    "        'test-error-mean'\n",
    "    ]\n",
    "    plot(*[roll(cv_dict[k][start:stop]) for k in keys])\n",
    "\n",
    "def plot_evals(evals, start=0, stop=None):\n",
    "    eval_list = [\n",
    "        roll(evals[a][b][start:stop])\n",
    "        for b in ['logloss', 'error']\n",
    "        for a in ['train', 'val']\n",
    "    ]\n",
    "    plot(*eval_list)\n",
    "\n",
    "def plot_cv_error(cv_results, start=0, stop=None):\n",
    "    plt.plot(cv_results[['train-error-mean', 'test-error-mean']][start:stop])\n",
    "\n",
    "def plot_holdout_error(h, start=0, stop=None):\n",
    "    plt.plot(\n",
    "        pd.DataFrame(\n",
    "            [h['train']['error'], h['val']['error']],\n",
    "            index=['train', 'val'])\n",
    "        .T\n",
    "        [start:stop]\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "c2b8870f-5f22-47f8-a4ca-9b732b8c2770",
    "_uuid": "7bf0cdb0335c736c67f9687ee376967b18bc2501"
   },
   "source": [
    "### Submit"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "_cell_guid": "6bd609cd-184c-4a98-833f-77a76f63bc86",
    "_kg_hide-input": true,
    "_uuid": "154286dead77083ed8342187da47b238438d0540",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def ensemble(params, n):\n",
    "    def d(x): return dict(params, seed=x)\n",
    "    return (\n",
    "        np.vstack(train(d(x), n).predict(dtest) for x in SEED_LIST)\n",
    "        .T\n",
    "        .mean(axis=1)\n",
    "    )\n",
    "\n",
    "def submit(y_hat, name):\n",
    "    df = pd.read_csv(submit_csv).assign(Survived=y_hat)\n",
    "    timestamp = datetime.datetime.now().strftime('%d-%m-%Y_%H-%M')\n",
    "    path = f'./{timestamp}_{name}.csv'\n",
    "    df.to_csv(path, index=False)\n",
    "\n",
    "def threshold(y_hat, pr=0.5):\n",
    "    return (y_hat > pr) * 1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "55e79f64-9605-4a45-96ff-a227bcd5f08b",
    "_uuid": "b664e354f2a960781f14fce6d2eaa700a18e0841",
    "collapsed": true
   },
   "source": [
    "## 3.2 Scripts"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "_cell_guid": "22e4b436-bc24-44ef-bc10-3972d1c1d2cb",
    "_kg_hide-input": true,
    "_uuid": "9d09015181232a7f33e505346ace46ea88b16299",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import datetime\n",
    "\n",
    "def dtype_info(X):\n",
    "    return pd.concat([\n",
    "        X.dtypes.rename('dtypes'),\n",
    "        traintest.min().astype('object').rename('min'),\n",
    "        traintest.max().astype('object').rename('max'),],\n",
    "        axis=1\n",
    "    )\n",
    "\n",
    "def find(col, s, df):\n",
    "    if isinstance(s, str):\n",
    "        pass\n",
    "    else:\n",
    "        s = '|'.join([f'{x}' for x in s])\n",
    "    return df[(\n",
    "        df\n",
    "        [col]\n",
    "        .str.lower()\n",
    "        .str.contains(s)\n",
    "    )]\n",
    "\n",
    "def na(X):\n",
    "    count = X.isna().sum()\n",
    "    if len(X.shape) < 2:\n",
    "        return count\n",
    "    else:\n",
    "        return count[lambda x: x > 0]\n",
    "\n",
    "def perc(x):\n",
    "    return np.round(x * 100, 2)\n",
    "\n",
    "def vc(df):\n",
    "    return df.value_counts(dropna=False).sort_index()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "d306efb3-4f1f-4b8b-9302-5af88e61ebe4",
    "_uuid": "4360488a07bc078b41fbe60b1593ad585f7a2d53",
    "collapsed": true
   },
   "source": [
    "## 3.3 seq"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "_cell_guid": "dfc80966-f66e-487e-b654-b70f8004436c",
    "_kg_hide-input": true,
    "_uuid": "15dfcce7f21538394f6e90d1179e0fb06755cd6a",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import math\n",
    "from typing import Union\n",
    "\n",
    "Numeric = Union[int, float, np.number]\n",
    "\n",
    "def seq(\n",
    "        start: Numeric,\n",
    "        stop: Numeric,\n",
    "        step: Numeric = None) \\\n",
    "        -> np.ndarray:\n",
    "    \"\"\"Inclusive sequence.\"\"\"\n",
    "\n",
    "    if step is None:\n",
    "        if start < stop:\n",
    "            step = 1\n",
    "        else:\n",
    "            step = -1\n",
    "\n",
    "    if is_int(start) and is_int(step):\n",
    "        dtype = 'int'\n",
    "    else:\n",
    "        dtype = None\n",
    "\n",
    "    d = max(n_dec(step), n_dec(start))\n",
    "    n_step = math.floor(round(round(stop - start, d + 1) / step, d + 1)) + 1\n",
    "    delta = np.arange(n_step) * step\n",
    "    return np.round(start + delta, decimals=d).astype(dtype)\n",
    "\n",
    "def is_int(\n",
    "        x: Numeric) \\\n",
    "        -> bool:\n",
    "    \"\"\"Whether `x` is int.\"\"\"\n",
    "    return isinstance(x, (int, np.integer))\n",
    "\n",
    "def n_dec(\n",
    "        x: Numeric) \\\n",
    "        -> int:\n",
    "    \"\"\"No of decimal places, using `str` conversion.\"\"\"\n",
    "    if x == 0:\n",
    "        return 0\n",
    "    _, _, dec = str(x).partition('.')\n",
    "    return len(dec)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "1a9c755c-2379-4c7e-a4b7-64d3fca457ef",
    "_uuid": "4d06bc1bf4a3cfae0cea5d330f65f3f86de93a50",
    "collapsed": true
   },
   "source": [
    "## 3.4 Misc"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "_cell_guid": "c5fae222-cbf6-4e07-a532-fafddbe35642",
    "_kg_hide-input": true,
    "_uuid": "29a7f21430e6d5b2d7f5d6d5aa4fc962c8662191",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def bin_interp(X, bins, interp=None):\n",
    "    \"\"\"Interpolate bin values.\"\"\"\n",
    "\n",
    "    idx = X.apply(lambda x: bin_val(x, bins))\n",
    "\n",
    "    if interp == 'median':\n",
    "        v = X.groupby(idx).median()\n",
    "    elif interp == 'mean':\n",
    "        v = X.groupby(idx).mean()\n",
    "    elif interp == 'min':\n",
    "        v = X.groupby(idx).min()\n",
    "    elif interp == 'max':\n",
    "        v = X.groupby(idx).max()\n",
    "    else:\n",
    "        return seq(0, len(bins))\n",
    "\n",
    "    v = list(v)\n",
    "    bin_vals = [v[0]] + v + [v[-1]]\n",
    "\n",
    "    return bin_vals\n",
    "\n",
    "def bin_val(x, bins, vals=None):\n",
    "    \"\"\"Map `x` to bin value.\"\"\"\n",
    "\n",
    "    if vals is None:\n",
    "        vals = seq(0, len(bins))\n",
    "\n",
    "    assert len(vals) == len(bins) + 1, 'len(vals) must equal len(bins) + 1'\n",
    "\n",
    "    if np.isnan(x):\n",
    "        return np.nan\n",
    "    elif x < bins[0]:\n",
    "        index = 0\n",
    "    elif x == bins[0]:\n",
    "        index = 1\n",
    "    elif x == bins[-1]:\n",
    "        index = -2\n",
    "    elif x > bins[-1]:\n",
    "        index = -1\n",
    "    else:\n",
    "        index = np.searchsorted(bins, x, side='right')\n",
    "\n",
    "    return vals[index]\n",
    "\n",
    "def count(col, traintest):\n",
    "    \"\"\"Map value counts.\"\"\"\n",
    "\n",
    "    def f(x):\n",
    "        if pd.notna(x) and x in vc.index:\n",
    "            return vc.loc[x]\n",
    "        else:\n",
    "            return np.nan\n",
    "\n",
    "    vc = traintest.value_counts()\n",
    "\n",
    "    return (\n",
    "        col\n",
    "        .apply(lambda x: f(x))\n",
    "        .rename(traintest.name + '_count')\n",
    "    )\n",
    "\n",
    "def eq_attr(one, attr, *rest):\n",
    "    return all(all(getattr(one, attr) == getattr(x, attr)) for x in rest)\n",
    "\n",
    "def match(X, col, with_df):\n",
    "    \"\"\"Yes/no inner join.\"\"\"\n",
    "\n",
    "    return (\n",
    "        X[col]\n",
    "        .isin(with_df[col])\n",
    "        .astype(np.uint8)\n",
    "        .rename(with_df.index.name)\n",
    "    )\n",
    "\n",
    "def reorder(df, order=None):\n",
    "    \"\"\"Sort `df` columns by dtype and name.\"\"\"\n",
    "\n",
    "    def sort(df):\n",
    "        return df.dtypes.reset_index().sort_values([0, 'index'])['index']\n",
    "    if order is None:\n",
    "        order = [np.floating, np.integer, 'category', 'object']\n",
    "    names = [sort(df.select_dtypes(s)) for s in order]\n",
    "    return df[[x for ls in names for x in ls]]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "483b8435-bfce-4e5f-be33-d374a04c1858",
    "_uuid": "1235e4305a1665e421ac754ba9a7b958ff1f2518",
    "collapsed": true
   },
   "source": [
    "## 3.5 Preprocessing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "_cell_guid": "c83f38f9-68e0-454d-9783-4cff807f1a08",
    "_kg_hide-input": true,
    "_uuid": "95f43d647a86a35f926b3c9df694ea858f2e7ec9",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "def load(csv):\n",
    "    ycol = 'target'\n",
    "\n",
    "    col_names = {\n",
    "        'Survived': ycol,\n",
    "        'Pclass': 'ticket_class',\n",
    "        'Name': 'name',\n",
    "        'Sex': 'sex',\n",
    "        'Age': 'age',\n",
    "        'SibSp': 'n_sib_sp',\n",
    "        'Parch': 'n_par_ch',\n",
    "        'Ticket': 'ticket',\n",
    "        'Fare': 'fare',\n",
    "        'Cabin': 'cabin',\n",
    "        'Embarked': 'port',\n",
    "    }\n",
    "\n",
    "    exclude = [\n",
    "        'PassengerId'\n",
    "    ]\n",
    "\n",
    "    dtype = {\n",
    "        'Pclass': np.uint8,\n",
    "        'Age': np.float32,\n",
    "        'SibSp': np.uint8,\n",
    "        'Parch': np.uint8,\n",
    "        'Fare': np.float32,\n",
    "    }\n",
    "\n",
    "    df = reorder(\n",
    "        pd.read_csv(\n",
    "            csv,\n",
    "            dtype=dtype,\n",
    "            usecols=lambda x: x not in exclude,\n",
    "        )\n",
    "        .rename(columns=col_names)\n",
    "    )\n",
    "\n",
    "    if ycol in df.columns:\n",
    "        return df.drop(columns=ycol), df[ycol]\n",
    "    else:\n",
    "        return df\n",
    "\n",
    "def load_titanic():\n",
    "    X, y = load(train_csv)\n",
    "    test = load(test_csv)\n",
    "    traintest = pd.concat([X, test])\n",
    "    return X, y, test, traintest\n",
    "\n",
    "def preprocess(pip):\n",
    "    full_X, full_y, todo_test, todo_traintest = load_titanic()\n",
    "\n",
    "    todo_X, todo_val_X, y, val_y \\\n",
    "        = train_test_split(\n",
    "            full_X,\n",
    "            full_y,\n",
    "            test_size=VAL_SIZE,\n",
    "            stratify=full_y,\n",
    "            random_state=SEED\n",
    "        )\n",
    "\n",
    "    tr_y = full_y\n",
    "    tr_X = pip.fit_transform(full_X, full_y)\n",
    "    traintest = pip.transform(todo_traintest)\n",
    "\n",
    "    X = pip.fit_transform(todo_X, y)\n",
    "    val_X = pip.transform(todo_val_X)\n",
    "    test = pip.transform(todo_test)\n",
    "\n",
    "    return (\n",
    "        reorder(X), y,\n",
    "        reorder(val_X), val_y,\n",
    "        reorder(tr_X), tr_y,\n",
    "        reorder(test), reorder(traintest)\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "28cc09fd-a138-47bf-9ca1-2169b4a38349",
    "_uuid": "f8721a2f3702878d13c2e0d3fafcc52ef4547cf1",
    "collapsed": true
   },
   "source": [
    "## 3.6 Transformers"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "_cell_guid": "ac3d86af-f1af-4c1d-aa84-ed9a896fc092",
    "_kg_hide-input": true,
    "_uuid": "23e35e4bd9a03139b4de77b86475705468a70097",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from sklearn.base import TransformerMixin\n",
    "\n",
    "\n",
    "class Apply(TransformerMixin):\n",
    "    def __init__(self, fn):\n",
    "        self.fn = fn\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        return X.apply(self.fn)\n",
    "\n",
    "\n",
    "class AsType(TransformerMixin):\n",
    "    def __init__(self, t):\n",
    "        self.t = t\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        if self.t == 'category':\n",
    "            self.dtype = pd.Categorical(X.unique())\n",
    "        else:\n",
    "            self.dtype = self.t\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        return X.astype(self.dtype)\n",
    "\n",
    "\n",
    "class ColMap(TransformerMixin):\n",
    "    def __init__(self, trf):\n",
    "        self.trf = trf\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        self.trf_list = [self.trf().fit(col) for _, col in X.iteritems()]\n",
    "        return self\n",
    "    \n",
    "    def transform(self, X):\n",
    "        cols = [t.transform(X.iloc[:, i]) for i, t in enumerate(self.trf_list)]\n",
    "        return pd.concat(cols, axis=1)\n",
    "\n",
    "\n",
    "class ColProduct(TransformerMixin):\n",
    "    def __init__(self, trf):\n",
    "        pass\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        return X.product(axis=1)\n",
    "\n",
    "\n",
    "class ColQuot(TransformerMixin):\n",
    "    def __init__(self):\n",
    "        pass\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        return X.iloc[:, 0] / X.iloc[:, 1]\n",
    "\n",
    "\n",
    "class ColSum(TransformerMixin):\n",
    "    def __init__(self):\n",
    "        pass\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        return X.sum(axis=1)\n",
    "\n",
    "\n",
    "class Cut(TransformerMixin):\n",
    "    def __init__(self, bins, interp=None):\n",
    "        self.bins = bins\n",
    "        self.interp = interp\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        self.name = X.name\n",
    "        self.vals = bin_interp(X, self.bins, self.interp)\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        n = len(self.vals) - 2\n",
    "        return (\n",
    "            X\n",
    "            .apply(lambda x: bin_val(x, self.bins, self.vals))\n",
    "            .rename(f'{self.name}_cut{n}')\n",
    "        )\n",
    "\n",
    "\n",
    "class DataFrameUnion(TransformerMixin):\n",
    "    def __init__(self, trf_list):\n",
    "        self.trf_list = trf_list\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        for t in self.trf_list:\n",
    "            t.fit(X, y)\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        return pd.concat([t.transform(X) for t in self.trf_list], axis=1)\n",
    "\n",
    "\n",
    "class FillNA(TransformerMixin):\n",
    "    def __init__(self, val):\n",
    "        self.val = val\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        return X.fillna(self.val)\n",
    "\n",
    "\n",
    "class GetDummies(TransformerMixin):\n",
    "    def __init__(self, drop_first=False):\n",
    "        self.drop = drop_first\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        self.name = X.name\n",
    "        self.cat = pd.Categorical(X.unique())\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        return pd.get_dummies(X.astype(self.cat), prefix=self.name, drop_first=self.drop)\n",
    "\n",
    "\n",
    "class Identity(TransformerMixin):\n",
    "    def __init__(self):\n",
    "        pass\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        return X\n",
    "\n",
    "\n",
    "class Map(TransformerMixin):\n",
    "    def __init__(self, d):\n",
    "        self.d = d\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        return X.map(self.d)\n",
    "\n",
    "\n",
    "class MeanEncode(TransformerMixin):\n",
    "    def __init__(self, y):\n",
    "        self.y = y\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        m = self.y.groupby(X).mean()\n",
    "        keys = m.sort_values().index.values\n",
    "        vals = m.index.values\n",
    "        self.encode = {k: v for (k, v) in zip(keys, vals)}\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        return X.replace(self.encode)\n",
    "\n",
    "\n",
    "class NADummies(TransformerMixin):\n",
    "    def __init__(self):\n",
    "        pass\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        return X.isna().astype(np.uint8).rename(X.name, + '_na')\n",
    "\n",
    "\n",
    "class PdFunction(TransformerMixin):\n",
    "    def __init__(self, fn):\n",
    "        self.fn = fn\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        return self.fn(X)\n",
    "\n",
    "\n",
    "class QCut(TransformerMixin):\n",
    "    def __init__(self, q, interp=None):\n",
    "        self.q = q\n",
    "        self.interp = interp\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        _, self.bins = pd.qcut(X, self.q, retbins=True)\n",
    "        self.bin_vals = bin_interp(X, self.bins, self.interp)\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        return (\n",
    "            X\n",
    "            .apply(lambda x: bin_val(x, self.bins, self.bin_vals))\n",
    "            .rename(f'{X.name}_qcut{self.q}')\n",
    "        )\n",
    "\n",
    "\n",
    "class Rename(TransformerMixin):\n",
    "    def __init__(self, name):\n",
    "        self.name = name\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        return X.rename(self.name)\n",
    "\n",
    "\n",
    "class SelectColumns(TransformerMixin):\n",
    "    def __init__(self, include=None, exclude=None):\n",
    "        self.include = include\n",
    "        self.exclude = exclude\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        if self.include:\n",
    "            X = X[self.include]\n",
    "        if self.exclude:\n",
    "            return X.drop(columns=self.exclude)\n",
    "        return X\n",
    "\n",
    "\n",
    "class SelectDtypes(TransformerMixin):\n",
    "    def __init__(self, include=None, exclude=None):\n",
    "        self.include = include\n",
    "        self.exclude = exclude\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        return X.select_dtypes(include=self.include, exclude=self.exclude)\n",
    "\n",
    "\n",
    "class StandardScaler(TransformerMixin):\n",
    "    def __init__(self):\n",
    "        pass\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        self.mean = X.mean()\n",
    "        self.std = X.std(ddof=0)\n",
    "        return self\n",
    "\n",
    "    def transform(self, X):\n",
    "        return (X - self.mean) / self.std"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "7c132b01-dbe8-4ead-be59-08ddbf0d16b6",
    "_uuid": "71d44fb72d59949aba415240e8a99d9f6319eb99"
   },
   "source": [
    "## 3.7 Leaderboard"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "_cell_guid": "065ea643-b7e2-4876-8c2d-1a9263eb0fb8",
    "_kg_hide-input": true,
    "_uuid": "d6dbb5a59acb45a7139eeb28ddb1eaf296839830",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def read_leaderboard():\n",
    "    return (\n",
    "        pd\n",
    "        .read_csv(leaderboard_csv)\n",
    "        .groupby('TeamId')\n",
    "        .Score.max()\n",
    "    )\n",
    "\n",
    "def leaderboard_info():\n",
    "    df = read_leaderboard()\n",
    "\n",
    "    n = len(df)\n",
    "    m = len(pd.read_csv(leaderboard_csv))\n",
    "    print(f'{n} Teams, {m} submissions')\n",
    "\n",
    "    mean = perc(df.mean())\n",
    "    print(f'Mean: {mean}')\n",
    "\n",
    "    std = perc(df.std())\n",
    "    print(f'Stdev: {std}')\n",
    "\n",
    "def leaderboard_percentiles(p=None):\n",
    "    df = read_leaderboard()\n",
    "\n",
    "    if p is None:\n",
    "        p = seq(90, 10, step=-10)\n",
    "\n",
    "    return pd.DataFrame({\n",
    "        'Percentile': p,\n",
    "        'Score': perc(np.percentile(df, p)),\n",
    "    })\n",
    "\n",
    "def plot_leaderboard(x=None):\n",
    "    df = read_leaderboard()\n",
    "    \n",
    "    if x is None:\n",
    "        x = seq(0, 100, step=0.1)\n",
    "    y = np.percentile(df, q=x)\n",
    "\n",
    "    plt.title('Leaderboard')\n",
    "    plt.ylabel('Score (% Accuracy)')\n",
    "    plt.xlabel('Percentile (%)')\n",
    "    \n",
    "    plt.plot(x, y*100)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "225ee49f-da9d-4cc3-8156-835bd315df18",
    "_uuid": "072e92e0b590a3d0f312f639bc2b0744acc777e7"
   },
   "source": [
    "# 4. Leaderboard\n",
    "\n",
    "Raw leaderboard data from 10 May 2018."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "23b6d84b-56da-42b1-8157-b86a07cc139f",
    "_uuid": "938fd5f528c2a7c32a7feaaec2f75da2c9d371e5"
   },
   "source": [
    "A quick overview of the public leaderboard to get a feel for the competition."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "_cell_guid": "a1a56814-1458-41fa-a2d1-494923e05fc5",
    "_kg_hide-input": false,
    "_uuid": "346e0848766ca392c02e9697fc952f04ddfb1f32",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "leaderboard_info()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "1acf6517-0598-46db-8b7f-b23d0058c373",
    "_uuid": "b476d262b4772e8d7a257a1f18b3372503a9f348"
   },
   "source": [
    "The raw data has multiple scores per team, while the public leaderboard shows best submits only.  We'll be looking at best submits."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "_cell_guid": "eabbc318-0cd3-4a37-bd3d-d7b077e23c66",
    "_kg_hide-input": false,
    "_uuid": "7f2cddc944054e858bacb17748131273a963f334",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "leaderboard_percentiles()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "eec993e3-1538-4a40-b4de-afc202dbbefa",
    "_uuid": "334663c0acd1152757d6746f29dc5a79e0b99273"
   },
   "source": [
    "### Key Takeaways\n",
    "\n",
    "- The gender baseline (76.55% acc) sits at the 30th percentile.\n",
    "- This is a very small dataset, and the test set is especially small.\n",
    "- The public leaderboard is calculated from 50% of 418 rows: that's 209 predictions.\n",
    "- So, the difference between 30th percentile and 90th percentile is 8 people.\n",
    "- The leaderboard metric is accuracy, but we'll be minimizing log loss (since xgboost requires gradient + hessian).\n",
    "- Accuracy is a very chunky metric\n",
    "    - The minimum resolution of the public leaderboard is roughly 0.48% acc (1 person of 209).\n",
    "    - Unlike log loss, (Bayesian) confidence isn't taken into account."
   ]
  },
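  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The chunkiness numbers above can be sanity-checked with a little arithmetic (a sketch; the 3.8% gap below is a hypothetical accuracy difference, not a value read off the leaderboard):\n",
    "\n",
    "```python\n",
    "n_scored = 418 // 2        # public leaderboard scores 50% of the 418 test rows\n",
    "resolution = 1 / n_scored  # minimum score step: one flipped prediction\n",
    "\n",
    "def people(acc_diff):\n",
    "    # accuracy difference -> approximate number of passengers\n",
    "    return round(acc_diff * n_scored)\n",
    "\n",
    "resolution     # ~0.0048, i.e. ~0.48% accuracy per person\n",
    "people(0.038)  # a hypothetical 3.8% gap is about 8 people\n",
    "```"
   ]
  },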
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "a197749f-3772-496d-b360-453ce3e0373c",
    "_uuid": "5013f304426cf471c6e1d5705a7be94e7e48f5bc"
   },
   "source": [
    "Next, let's take a quick look at the full distribution of scores."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "_cell_guid": "96cb89bb-7a47-4983-b553-eb694af4eb35",
    "_uuid": "cdb39159bd02af10898a2d05211915ad58fc549c",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "plot_leaderboard()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "d4e29db3-8395-4812-8d89-fac3d8dc641f",
    "_uuid": "071945045d50d4d0201a73a7aafade80df910f56"
   },
   "source": [
    "- Submitting floating point predictions instead of `int` will score 0.\n",
    "- There's a big jump near the top.  Scores around 100% acc are probably using at least some hand labeling.\n",
    "- Most scores are around 78% +/- 4 people."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "b9758422-b26b-4255-aded-5df0a19e4524",
    "_uuid": "8dfc9d6044d56ad8830713c3f89c8c8ee840764c",
    "collapsed": true
   },
   "source": [
    "# 5. Feature Engineering\n",
    "\n",
    "Lead into gold."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "92b5c258-c555-4a0a-956e-5e2f7050cddf",
    "_uuid": "2fcfe4760273fe0742088410ace17f6e12aebbe7"
   },
   "source": [
    "## 5.1 Glossary\n",
    "Features have been renamed as follows:\n",
    "```\n",
    "Survived  ->  target\n",
    "Pclass    ->  ticket_class\n",
    "Name      ->  name\n",
    "Sex       ->  sex\n",
    "Age       ->  age\n",
    "SibSp     ->  n_sib_sp\n",
    "Parch     ->  n_par_ch\n",
    "Ticket    ->  ticket\n",
    "Fare      ->  fare\n",
    "Cabin     ->  cabin\n",
    "Embarked  ->  port\n",
    "```"
   ]
  },
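  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch, the table above translates to a plain `DataFrame.rename` (the notebook's actual loading helper isn't shown in this section):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "RENAMES = {\n",
    "    'Survived': 'target', 'Pclass': 'ticket_class', 'Name': 'name',\n",
    "    'Sex': 'sex', 'Age': 'age', 'SibSp': 'n_sib_sp', 'Parch': 'n_par_ch',\n",
    "    'Ticket': 'ticket', 'Fare': 'fare', 'Cabin': 'cabin', 'Embarked': 'port',\n",
    "}\n",
    "\n",
    "def rename_columns(df):\n",
    "    # snake_case names double as attribute access, e.g. X.n_sib_sp\n",
    "    return df.rename(columns=RENAMES)\n",
    "```"
   ]
  },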
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "6cebea90-2228-40cf-ba45-33c08058a599",
    "_uuid": "cb7d4e52bee9b281958fa7ce04a06220862f9138"
   },
   "source": [
    "## 5.2 Features\n",
    "\n",
    "In order of importance (using `seed=0`), excluding complementary dummies (the full feature importance plot is in &sect; 7.4):\n",
    "\n",
    "### surv (84)\n",
    "- At least 1 person survived, with the same ticket or surname.\n",
    "- Restricted to groups that appear in both `train` and `test`.\n",
    "- Combination of `tk_surv` and `sn_surv`.\n",
    "- `tk_surv` (max): at least 1 person with the same ticket survived.\n",
    "- `sn_surv` (max): at least 1 person with the same surname survived.\n",
    "- `ticket` and `surname` groups don't completely overlap -> `na` values.\n",
    "- 6 levels:\n",
    "```\n",
    "4  ->   1   1  ->  both tk_surv and sn_surv\n",
    "3  ->   1  na  ->  at least 1\n",
    "2  ->   1   0  ->  exactly 1\n",
    "1  ->   0  na  ->  maybe 1\n",
    "0  ->   0   0  ->  exactly 0\n",
    "na ->  na  na  ->  unknown\n",
    "```\n",
    "- Credit to all the top kernels.  This is probably *the* key feature, and perhaps the only important feature.\n",
    "\n",
    "### cabin_encode_h (60)\n",
    "- Horizontal major, cabin encoding (vertical slices); \"major\" as in \"row major\" vs \"col major\" matrices.\n",
    "- `cabin_encode_h = cabin_no_encode + deck_encode / 10`\n",
    "- `cabin_no_encode` is a hand labeled feature representing how close/far the cabin is from the from/back of the ship (explained below).\n",
    "- `deck_encode` is a simple label encoding of deck A to G and T (explained below).\n",
    "- Counterpart to `cabin_encode_v`.\n",
    "\n",
    "### fare_quot (58)\n",
    "- `fare_quot = fare / ticket_count`\n",
    "\n",
    "### ticket_count (47)\n",
    "- Number of people with the same ticket, across `train` + `test`.\n",
    "\n",
    "### title_mr (42)\n",
    "- Extracted from name.\n",
    "- Includes rare titles such as `capt`, `col`, `don`.\n",
    "\n",
    "### fare (41)\n",
    "- As is.\n",
    "\n",
    "### age_tc3_sex1 (32)\n",
    "- 3rd class, female `age`\n",
    "- Uses `age_mask`: filter `age` by `ticket_class` and `sex`; 0 or `na` otherwise.\n",
    "\n",
    "### age (32)\n",
    "- As is.\n",
    "\n",
    "### tk_age_mean (26)\n",
    "- Average age of people with same ticket, across `train` + `test`.\n",
    "\n",
    "### ticket_class_3 (25)\n",
    "- `ticket_class` dummy\n",
    "\n",
    "### sex (25)\n",
    "- Label encoding:\n",
    "    - `female -> 1`\n",
    "    - `male   -> 0`\n",
    "\n",
    "### tk_n_sib_sp_mean (25)\n",
    "- Average `n_sib_sp` of people with the same ticket, across `train` + `test`.\n",
    "\n",
    "### cabin_no_encode (22)\n",
    "- Horizontal encoding of cabin number: how close/far from the front/back of the ship.\n",
    "- Hand labeled feature using deckplans at Encyclopedia Titanica.\n",
    "```\n",
    "          /----------------\\\n",
    "Back   | V  IV  III  II  I >   Front\n",
    "          \\----------------/\n",
    "```\n",
    "- Diagram of the Titanic collision: https://commons.wikimedia.org/wiki/File:Titanic_porting_around_English.svg\n",
    "- Part of `cabin_encode_v` and `cabin_encode_h`.\n",
    "\n",
    "### tk_sex (21)\n",
    "- Mean `sex` of people with the same ticket, across `train` and `test`.\n",
    "\n",
    "### cabin_encode_v (21)\n",
    "- Deck major cabin encoding (horizontal slices); \"major\" as in \"row major\" vs \"col major\" matrices.\n",
    "- `cabin_encode_v = deck_encode + cabin_no_encode / 10`\n",
    "- Counterpart to `cabin_encode_h`.\n",
    "\n",
    "### n_fam (16)\n",
    "- `n_fam = n_par_ch + n_sib_sp`\n",
    "\n",
    "### tc3_sex1 (16)\n",
    "- Dummy: 3rd class, female.\n",
    "- `tc_sex` are dummies, indicating `ticket_class` and `sex`\n",
    "- No missing values, unlike `age_mask` features.\n",
    "\n",
    "### tk_n_par_ch_mean (13)\n",
    "- Mean `n_par_ch` of people with the same ticket, across `train` and `test`.\n",
    "\n",
    "### port (11)\n",
    "- `Embarked` -> rename to `port` -> label encode:\n",
    "```\n",
    "S -> 1\n",
    "Q -> 2\n",
    "C -> 3\n",
    "```\n",
    "\n",
    "### title_master (8)\n",
    "- Extracted from `name`.\n",
    "\n",
    "### deck_encode (6)\n",
    "- Label encoding of `deck`, which is extracted from `cabin`.\n",
    "```\n",
    "T -> 8  (the top)\n",
    "A -> 7\n",
    "B -> 6\n",
    "C -> 5\n",
    "D -> 4\n",
    "E -> 3\n",
    "F -> 2\n",
    "G -> 1  (the bottom)\n",
    "```\n",
    "\n",
    "### n_fam_2 (2)\n",
    "- Polynomial feature: `n_fam_2 = n_sib_sp * n_par_ch`\n",
    "- The idea is to treat `n_sib_sp` as a horizontal feature, and `n_par_ch` as a vertical feature, producing a sort of area feature.\n",
    "\n",
    "### title_mrs (1)\n",
    "- Extracted from `name`.\n",
    "- Includes: `mme`, `the` (`Countess`), `dona`, `lady`.\n",
    "\n",
    "### n_sib_sp (1)\n",
    "- As is.\n",
    "\n",
    "### title_miss (1)\n",
    "- Extracted from `name`.\n",
    "- Includes: `ms`, `mlle`."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "77999361-a5d4-4799-a1dc-d6db18c535e0",
    "_uuid": "b3c68efe3b6fb85ac63b97655c2f63320b8e76ab"
   },
   "source": [
    "## 5.3 Unused Features & Ideas\n",
    "\n",
    "Excluded features aren't necessarily flawed; implementation matters.\n",
    "\n",
    "### Included, but not used by XGBoost (seed = 0)\n",
    "- `age_tc1_sex1`\n",
    "- `n_par_ch`\n",
    "- `ticket_class_1`\n",
    "- `ticket_class_2`\n",
    "\n",
    "### Excluded\n",
    "- `ticket_no`: `uint` extracted from `ticket`.\n",
    "    - Various binning strategies including hand labeling.\n",
    "    - The deck plans suggest that ticket number is *not* correlated with cabin position.\n",
    "    - Can be used to augment ticket/surname groups: eg, extended family members have nearby ticket numbers.\n",
    "- `ticket_prefix`: `str` extracted from `ticket`.\n",
    "    - Some tickets have a prefix such as `PC` or `STON/O2`.\n",
    "    - Some, such as `STON`, seem to correspond to port of embark (Southampton).\n",
    "- `age_cut` + `fare_cut`:\n",
    "    - Binning by hand or by quantile (`qcut`).\n",
    "- `sn_surv` + `tk_surv` (alone):\n",
    "    - Variations such as `mean` and `min`.\n",
    "    - Only a combined `max` is included.\n",
    "- `tk_`: ticket group `min`, `max`, `count` for features such as `n_sib_sp`.  Only `mean` is included.\n",
    "- `mother`, `father`, `child`:\n",
    "    - Family position, and variations such as `tk_child` (ticket has child)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "f3c1aba2-2acd-4917-9008-900e298a6426",
    "_uuid": "27383d0c16b0e3136124e14fb5275fefd0f8c14f"
   },
   "source": [
    "## 5.4 Functions\n",
    "\n",
    "Implementation details."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "fdccb35f-615f-439d-8532-4ff16795db85",
    "_uuid": "21509d25c9dc8d3edf3eaaf5b07654f1c50d0566"
   },
   "source": [
    "Derived from:\n",
    "\n",
    "1. `n_sib_sp` + `n_par_ch`\n",
    "1. `cabin`\n",
    "1. `name`\n",
    "1. `sex`\n",
    "1. `ticket`\n",
    "1. interaction: multi column features"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "408e03fa-36d7-4d60-a90a-641d6e2b6437",
    "_uuid": "95e43e18a82b8b8ed8f9f5bd0927317e533d3719",
    "collapsed": true
   },
   "source": [
    "### SibSp + ParCh"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "_cell_guid": "bbd96dc4-8523-49be-81cb-5e2a6e7ddd74",
    "_kg_hide-input": true,
    "_uuid": "e94c005d65f472a68e57d66ef1830908a03de890",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def n_fam(X):\n",
    "    return (\n",
    "        (X.n_sib_sp + X.n_par_ch)\n",
    "        .astype(np.uint8)\n",
    "        .rename('n_fam')\n",
    "    )\n",
    "\n",
    "def n_fam_2(X):\n",
    "    return (\n",
    "        ((X.n_sib_sp+1) * (X.n_par_ch+1))\n",
    "        .astype(np.uint8)\n",
    "        .rename('n_fam_2')\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "a6768c85-5f05-43f5-ac15-d671c07a360b",
    "_uuid": "5cc9bcbc2968ec470cc770145ac579c8b9f401af",
    "collapsed": true
   },
   "source": [
    "### Cabin"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "_cell_guid": "794e6c22-97d0-498f-ad7f-16e42e707a8b",
    "_kg_hide-input": true,
    "_uuid": "de68978d7c1fc8dd5ad0fbd691a278371fb69444",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def cabin_encode_v(X):\n",
    "    return (\n",
    "        (deck_encode(X) + cabin_no_encode(X) / 10)\n",
    "        .astype(np.float32)\n",
    "        .rename('cabin_encode_v')\n",
    "    )\n",
    "\n",
    "def cabin_encode_h(X):\n",
    "    return (\n",
    "        (cabin_no_encode(X) + deck_encode(X) / 10)\n",
    "        .astype(np.float32)\n",
    "        .rename('cabin_encode_h')\n",
    "    )\n",
    "\n",
    "def cabin_no(X):\n",
    "    return (\n",
    "        X\n",
    "        .cabin\n",
    "        .str.extract(r'(\\d+)', expand=False)\n",
    "        .astype(np.float32)\n",
    "        .rename('cabin_no')\n",
    "    )\n",
    "\n",
    "def cabin_no_encode(X):\n",
    "    def encode(x):\n",
    "        if x.deck == 'T':\n",
    "            return 2\n",
    "        elif np.isnan(x.cabin_no):\n",
    "            return np.nan\n",
    "        elif x.deck == 'A':\n",
    "            if x.cabin_no >= 35:\n",
    "                return 4\n",
    "            else:\n",
    "                return 2\n",
    "        elif x.deck == 'B':\n",
    "            if x.cabin_no >= 51:\n",
    "                return 3\n",
    "            else:\n",
    "                return 2\n",
    "        elif x.deck == 'C':\n",
    "            if x.cabin_no % 2 == 0:\n",
    "                if 92 <= x.cabin_no <= 102 or 142 <= x.cabin_no <= 148:\n",
    "                    return 4\n",
    "                elif 62 <= x.cabin_no <= 90 or 104 <= x.cabin_no <= 140:\n",
    "                    return 3\n",
    "                else:\n",
    "                    return 2\n",
    "            else:\n",
    "                if 85 <= x.cabin_no <= 93 or 123 <= x.cabin_no <= 127:\n",
    "                    return 4\n",
    "                elif 55 <= x.cabin_no <= 83 or 95 <= x.cabin_no <= 121:\n",
    "                    return 3\n",
    "                else:\n",
    "                    return 2\n",
    "        elif x.deck == 'D':\n",
    "            if x.cabin_no >= 51:\n",
    "                return 5\n",
    "            else:\n",
    "                return 2\n",
    "        elif x.deck == 'E':\n",
    "            if x.cabin_no >= 91:\n",
    "                return 5\n",
    "            elif x.cabin_no >= 70:\n",
    "                return 4\n",
    "            elif x.cabin_no >= 26:\n",
    "                return 3\n",
    "            else:\n",
    "                return 2\n",
    "        elif x.deck == 'F':\n",
    "            if x.cabin_no >= 46:\n",
    "                return 1\n",
    "            elif x.cabin_no >= 20:\n",
    "                return 5\n",
    "            else:\n",
    "                return 4\n",
    "        elif x.deck == 'G':\n",
    "            return 5\n",
    "    \n",
    "    df = pd.concat([X.cabin, deck(X), cabin_no(X)], axis=1)\n",
    "    return (\n",
    "        df\n",
    "        .apply(encode, axis=1)\n",
    "        .astype(np.float32)\n",
    "        .rename('cabin_no_encode')\n",
    "    )\n",
    "\n",
    "def deck(X):\n",
    "    return (\n",
    "        X\n",
    "        .cabin\n",
    "        .str.extract(r'([A-Z])', expand=False)\n",
    "        .rename('deck')\n",
    "    )\n",
    "\n",
    "def deck_encode(X):\n",
    "    return (\n",
    "        deck(X)\n",
    "        .map({\n",
    "            'T': 8,\n",
    "            'A': 7,\n",
    "            'B': 6,\n",
    "            'C': 5,\n",
    "            'D': 4,\n",
    "            'E': 3,\n",
    "            'F': 2,\n",
    "            'G': 1,\n",
    "        })\n",
    "        .astype(np.float32)\n",
    "        .rename('deck_encode')\n",
    "    )\n",
    "\n",
    "def starboard(X):\n",
    "    return (\n",
    "        (np.round(cabin_no(X)) % 2 == 0)\n",
    "        .astype(np.uint8)\n",
    "        .rename('starboard')\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "379ba459-3cfe-440d-a59e-ed026c695c43",
    "_uuid": "dc6f4dc27c314e7d58522c6c8340e85e5758596a",
    "collapsed": true
   },
   "source": [
    "### Name"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "_cell_guid": "6cada04c-0161-4974-ac49-cba5361aedcd",
    "_kg_hide-input": true,
    "_uuid": "d36e94af96917e182c192919c98f1b2af47bb772",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def surname(X):\n",
    "    return (\n",
    "        X\n",
    "        .name\n",
    "        .str.lower()\n",
    "        .str.extract(r'([a-z]+),', expand=False)\n",
    "    )\n",
    "\n",
    "def title(X):\n",
    "    return (\n",
    "        X\n",
    "        .name\n",
    "        .str.lower()\n",
    "        .str.extract(r', (\\w+)', expand=False)\n",
    "        .rename('title')\n",
    "    )\n",
    "\n",
    "def title_fill(X):\n",
    "    def rare(row):\n",
    "        if row.title in ['miss', 'mrs', 'master', 'mr']:\n",
    "            return row.title\n",
    "        elif row.title in d:\n",
    "            return d[row.title]\n",
    "        elif row.sex == 'male':\n",
    "            return 'mr'\n",
    "        elif row.sex == 'female':\n",
    "            return 'mrs'\n",
    "        else:\n",
    "            raise ValueError('row.sex is missing / not in [`male`, `female`]')\n",
    "\n",
    "    miss = ['ms', 'mlle']\n",
    "    mrs = ['mme', 'dona', 'lady', 'the']\n",
    "    mr = [\n",
    "        'capt',\n",
    "        'col',\n",
    "        'don',\n",
    "        'jonkheer',\n",
    "        'major',\n",
    "        'rev',\n",
    "        'sir',\n",
    "    ]\n",
    "\n",
    "    d = {\n",
    "        **{k: 'mr' for k in mr},\n",
    "        **{k: 'mrs' for k in mrs},\n",
    "        **{k: 'miss' for k in miss}\n",
    "    }\n",
    "\n",
    "    return (\n",
    "        X\n",
    "        .assign(title=title)\n",
    "        .apply(rare, axis=1)\n",
    "        .rename('title')\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "4c759738-4d6a-4cfb-bd3e-c269567dbf41",
    "_uuid": "2abd82431b5429786c099fcffc48e17953b1c552",
    "collapsed": true
   },
   "source": [
    "### Sex"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "_cell_guid": "aa43db83-94cd-4c38-bf8d-b5f9edd7a14a",
    "_kg_hide-input": true,
    "_uuid": "5a8591293b189720d10e93e24a69ed71339cfa5a",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def sex(X):\n",
    "    return (\n",
    "        X\n",
    "        .sex\n",
    "        .map({'female': 1, 'male': 0})\n",
    "        .astype(np.uint8)\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "622021e4-a967-4fbc-85b6-40117d91460c",
    "_uuid": "63c0c67fdffbb3000cfde515c9f996aabd59cdfc",
    "collapsed": true
   },
   "source": [
    "### Ticket"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "_cell_guid": "5899dc60-aa31-4b45-a4a2-e7be3a8ee608",
    "_kg_hide-input": true,
    "_uuid": "8544afbc66768e8e0f0b313a15747f0c958b1ef0",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def ticket_count(X):\n",
    "    _, _, _, traintest = load_titanic()\n",
    "    return count(X.ticket, traintest.ticket).astype(np.uint8)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "6f3ea57e-095e-491c-bb53-e0f13f563875",
    "_uuid": "ff61fb46b4d2bf04a8224acfcc9c4663c288504c",
    "collapsed": true
   },
   "source": [
    "### Interaction"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {
    "_cell_guid": "af53032e-0285-4f2a-a8aa-919d864c066b",
    "_kg_hide-input": true,
    "_uuid": "f50dbfce5dec86ae11a12ca91547db99967b7e76",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def age_mask(X, tc, sx):\n",
    "    nm = f'age_tc{tc}_sex{sx}'\n",
    "    return (X.age * (X.ticket_class == tc) * (sex(X) == sx)).rename(nm)\n",
    "\n",
    "def fare_quot(X):\n",
    "    return (\n",
    "        (X.fare / ticket_count(X))\n",
    "        .astype(np.float32)\n",
    "        .rename('fare_quot')\n",
    "    )\n",
    "\n",
    "def tc_sex(X, tc, sx):\n",
    "    return (\n",
    "        ((X.ticket_class == tc) & (sex(X) == sx))\n",
    "        .astype(np.uint8)\n",
    "        .rename(f'tc{tc}_sex{sx}')\n",
    "    )\n",
    "\n",
    "def tk_fn(X, col, fn='mean'):\n",
    "    _, _, _, traintest = load_titanic()\n",
    "    vc = getattr(traintest[col].groupby(traintest.ticket), fn)()\n",
    "    return (\n",
    "        X\n",
    "        .ticket\n",
    "        .apply(lambda x: vc.loc[x])\n",
    "        .astype(np.float32)\n",
    "        .rename(f'tk_{col}_{fn}')\n",
    "    )\n",
    "\n",
    "def tk_sex(X):\n",
    "    _, _, _, traintest = load_titanic()\n",
    "    vc = sex(traintest).groupby(traintest.ticket).mean()\n",
    "    return (\n",
    "        X\n",
    "        .ticket\n",
    "        .apply(lambda x: vc.loc[x])\n",
    "        .astype(np.float32)\n",
    "        .rename('tk_sex')\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "34c3c0cc-804d-4f8d-b6fc-7a28f03af041",
    "_uuid": "609154493bd6fdea1f487cf6b0f260334f3a48c3"
   },
   "source": [
    "Finally, the all important `surv` group of functions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "_cell_guid": "5e74e180-5531-4c93-8ed3-57ac994474fd",
    "_kg_hide-input": true,
    "_uuid": "d6bdece44d5483bbc3c88046100e608e3f69e256",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def surv(X):\n",
    "    def encode(x):\n",
    "        a = x.tk_surv_max\n",
    "        b = x.sn_surv_max\n",
    "        if a == 1 and b == 1:\n",
    "            return 4\n",
    "        elif a == 1 or b == 1:\n",
    "            if a == 0 or b == 0:\n",
    "                return 2\n",
    "            else:\n",
    "                return 3\n",
    "        elif a == 0 or b == 0:\n",
    "            if a == 0 and b == 0:\n",
    "                return 0\n",
    "            else:\n",
    "                return 1\n",
    "        else:\n",
    "            return np.nan\n",
    "    return (\n",
    "        pd.concat([tk_surv(X), sn_surv(X)], axis=1)\n",
    "        .apply(encode, axis=1)\n",
    "        .astype(np.float32)\n",
    "        .rename('surv')\n",
    "    )\n",
    "\n",
    "def sn_surv(X, fn='max'):\n",
    "    tr, y, te, _ = load_titanic()\n",
    "    v = getattr(y.groupby(surname(tr)), fn)()[lambda x: x.index.isin(surname(te))]\n",
    "    return (\n",
    "        surname(X)\n",
    "        .map(v)\n",
    "        .astype(np.float32)\n",
    "        .rename(f'sn_surv_{fn}')\n",
    "    )\n",
    "\n",
    "def tk_surv(X, fn='max'):\n",
    "    tr, y, te, _ = load_titanic()\n",
    "    v = getattr(y.groupby(tr.ticket), fn)()[lambda x: x.index.isin(te.ticket)]\n",
    "    return (\n",
    "        X\n",
    "        .ticket\n",
    "        .map(v)\n",
    "        .astype(np.float32)\n",
    "        .rename(f'tk_surv_{fn}')\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "3593edec-6aee-4085-b85a-032d5005a4c4",
    "_uuid": "39b18dd8ebade0c3dce466faedbcbbc8b386a0bf",
    "collapsed": true
   },
   "source": [
    "# 6. Pipeline Preprocessing"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "0644a29f-9081-4362-81b9-6275e155e244",
    "_uuid": "13025cec4189a84725ad783b4e6a3d2539f983f7"
   },
   "source": [
    "<figure>\n",
    "  <img src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/1/1e/2010_mavericks_competition.jpg/640px-2010_mavericks_competition.jpg\">\n",
    "  <figcaption style=\"text-align: center;\">\n",
    "      Andrew Davis at Mavericks. Photograph by Shalom Jacobovitz.\n",
    "      <br>via\n",
    "       <a href=\"https://commons.wikimedia.org/wiki/File:2010_mavericks_competition.jpg\">Wikimedia</a>\n",
    "       (<a href=\"https://creativecommons.org/licenses/by-sa/2.0\">CC BY-SA 2.0</a>)\n",
    "  </figcaption>\n",
    "</figure>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "a5458d1b-433d-4a8f-ad8a-d15652c57536",
    "_uuid": "85bcc345b02a68c9a1cf1ce72930e9d4af4150fc"
   },
   "source": [
    "## 6.1 The Pipeline\n",
    "\n",
    "Credit to Kevin Goetsch, Julie Michelman, and Tom Augspurger."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "4ee7f519-54fa-4713-8bed-dee746ea6034",
    "_kg_hide-input": true,
    "_uuid": "4ba1ea0c14f8a4f18719e6ad0baed737a5183e50",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "X_pipeline = DataFrameUnion([\n",
    "    # age\n",
    "    SelectColumns('age'),\n",
    "\n",
    "    # fare\n",
    "    SelectColumns('fare'),\n",
    "\n",
    "    # n_par_ch + n_sib_sp\n",
    "    SelectColumns('n_par_ch'),\n",
    "    SelectColumns('n_sib_sp'),\n",
    "    PdFunction(n_fam),\n",
    "    PdFunction(n_fam_2),\n",
    "\n",
    "    # ticket_class\n",
    "    make_pipeline(\n",
    "        SelectColumns('ticket_class'),\n",
    "        GetDummies(),\n",
    "    ),\n",
    "\n",
    "    # cabin\n",
    "    PdFunction(cabin_encode_v),\n",
    "    PdFunction(cabin_encode_h),\n",
    "    PdFunction(cabin_no_encode),\n",
    "    PdFunction(deck_encode),\n",
    "\n",
    "    # name -> title -> dummies\n",
    "    make_pipeline(\n",
    "        PdFunction(title_fill),\n",
    "        GetDummies(),\n",
    "    ),\n",
    "\n",
    "    # port -> 1/2/3\n",
    "    make_pipeline(\n",
    "        SelectColumns('port'),\n",
    "        Map({'S': 1, 'Q': 2, 'C': 3}),\n",
    "        AsType(np.float32)\n",
    "    ),\n",
    "\n",
    "    # sex -> 0/1\n",
    "    PdFunction(sex),\n",
    "\n",
    "    # ticket -> count\n",
    "    PdFunction(ticket_count),\n",
    "\n",
    "    #\n",
    "    # interaction #\n",
    "    \n",
    "    # fare / ticket_count -> fare_quot\n",
    "    PdFunction(fare_quot),\n",
    "\n",
    "    # age by sex/ticket_class\n",
    "    PdFunction(partial(age_mask, tc=1, sx=1)),\n",
    "    PdFunction(partial(age_mask, tc=2, sx=1)),\n",
    "    PdFunction(partial(age_mask, tc=3, sx=1)),\n",
    "    PdFunction(partial(age_mask, tc=1, sx=0)),\n",
    "    PdFunction(partial(age_mask, tc=2, sx=0)),\n",
    "    PdFunction(partial(age_mask, tc=3, sx=0)),\n",
    "\n",
    "    # 0/1 by sex/ticket_class\n",
    "    PdFunction(partial(tc_sex, tc=1, sx=1)),\n",
    "    PdFunction(partial(tc_sex, tc=2, sx=1)),\n",
    "    PdFunction(partial(tc_sex, tc=3, sx=1)),\n",
    "    PdFunction(partial(tc_sex, tc=1, sx=0)),\n",
    "    PdFunction(partial(tc_sex, tc=2, sx=0)),\n",
    "    PdFunction(partial(tc_sex, tc=3, sx=0)),\n",
    "\n",
    "    # ticket grouping\n",
    "    PdFunction(surv),\n",
    "    PdFunction(tk_sex),\n",
    "    PdFunction(partial(tk_fn, col='age')),\n",
    "    PdFunction(partial(tk_fn, col='n_par_ch')),\n",
    "    PdFunction(partial(tk_fn, col='n_sib_sp')),\n",
    "])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "4fcaff31-49b3-4ba8-8dd0-1ef056831550",
    "_uuid": "2aa538a5b8889fd32defb787a6b58efc776889bc"
   },
   "source": [
    "## 6.2 Execute\n",
    "\n",
    "- Split `train`:\n",
    "    - `train/val`: validation set `val_X, val_y`, using `VAL_SIZE`\n",
    "    - `train/train`: proper train set `X, y`\n",
    "- Full `train`: `tr_X, tr_y`\n",
    "- Combined `train/test`: `traintest`\n",
    "- `test`: as is"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "371e3f06-1457-4028-b281-fb74fb5a5019",
    "_uuid": "73b932fffa4e4c7485b6c6a311453225d771f4a1",
    "collapsed": true,
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "X, y, val_X, val_y, tr_X, tr_y, test, traintest = preprocess(X_pipeline)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "4c3d726a-5cec-48d0-9775-51b499c1beb1",
    "_uuid": "450e81ce8f979e0545fd633f1ccedaa6a27732f4"
   },
   "source": [
    "## 6.3 Diagnostics"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "ca42e010-03e4-47f3-9127-9f2e8f348097",
    "_uuid": "4cfe312fb96c38242d1c2b59da0b15f6f0f65b3c"
   },
   "source": [
    "### Shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "86dcf042-6a06-4ef6-ae07-1a208fa2d507",
    "_uuid": "3b89f7ac4ed1e50918991b5956bfcccf6ec6c929",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "X.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "1821f240-ccf3-4719-affd-d3b2952179b7",
    "_uuid": "5e6729e67372ae318bc222255dec6cd7a6869b95"
   },
   "source": [
    "### Dtypes\n",
    "Check for:\n",
    "- Overflow\n",
    "- Column names\n",
    "- Floating point error\n",
    "- Anything that looks funny"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "dad49a79-748c-47c5-a1c0-281144f750e3",
    "_kg_hide-output": true,
    "_uuid": "85822294d02554926d80ac050cf7f5ec6d9b9213",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "dtype_info(X)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "340295da-9eac-4722-9c74-ac05bf132bc4",
    "_uuid": "02a786e2ed546e49b01932959ce513921c3c4ed0"
   },
   "source": [
    "Use function `vc` (value counts) to check individual columns."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "368d9b6d-f176-478a-a797-c6923e6d5d67",
    "_uuid": "0aaa5b27a75a78ee4ec25235cec7e125de0c5a98"
   },
   "source": [
    "### Train/Val/Test Parity\n",
    "\n",
    "Check that each dataframe has the same dtypes and same columns, in the same order."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "e1d7beac-d4e1-4179-8418-13c22e560733",
    "_uuid": "3ef7395034a3c709a6dd085ca939c2bc7b422921",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "eq_attr(X, 'columns', val_X, tr_X, test, traintest) \\\n",
    "    and eq_attr(X, 'dtypes', val_X, tr_X, test, traintest)"
   ]
  },
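  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(`eq_attr` is one of the helpers from section 3.  As a rough sketch of its likely shape, my guess rather than the actual helper, pandas `Index.equals` / `Series.equals` compare both content and order:)\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "def eq_attr_sketch(df, attr, *others):\n",
    "    # True when every frame shares `attr` ('columns' or 'dtypes')\n",
    "    # with `df`, in the same order.\n",
    "    ref = getattr(df, attr)\n",
    "    return all(getattr(o, attr).equals(ref) for o in others)\n",
    "\n",
    "a = pd.DataFrame({'x': [1], 'y': [2.0]})\n",
    "b = pd.DataFrame({'x': [3], 'y': [4.0]})\n",
    "c = pd.DataFrame({'y': [4.0], 'x': [3]})  # same columns, different order\n",
    "\n",
    "eq_attr_sketch(a, 'columns', b)  # True\n",
    "eq_attr_sketch(a, 'columns', c)  # False: order matters\n",
    "```"
   ]
  },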
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "53602fe3-1a5a-44ba-9b2a-9028c4a98351",
    "_uuid": "0f3edcf0a0fbc6875c9080e5b393a6730b6d9885"
   },
   "source": [
    "### DMatrix\n",
    "XGBoost's custom data format."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "96650c52-4869-4ff6-93e8-ee35ae8a5476",
    "_uuid": "d7f48fe2aaae357f02345cb7a4c0099ca2a1e425",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "dtrain = xgb.DMatrix(X, y)\n",
    "dval = xgb.DMatrix(val_X, val_y)\n",
    "dfull = xgb.DMatrix(tr_X, tr_y)\n",
    "dtest = xgb.DMatrix(test)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "2f4b3804-a98e-4abd-a061-ab7463b224fc",
    "_uuid": "dd44f21c2ca861ed22c1b2011c8f804e09e093d5",
    "collapsed": true
   },
   "source": [
    "# 7. Holdout + CV\n",
    "\n",
    "Stirring in an ad hoc fashion."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "6a91e236-329e-40d0-a178-8ef8cc155e02",
    "_uuid": "71f7894eeead968d32dca6e3b281234426d6922b"
   },
   "source": [
    "## 7.1 Parameters\n",
    "\n",
    "Done by hand.  Here's a rough outline of what I tried (values after `<-` were tried and rejected):\n",
    "```\n",
    "eta:           0.1 -> 0.01 -> 0.005\n",
    "gamma:              0 -> 1 -> 2   <- 3/5/10/20\n",
    "max_depth:          3 -> 4 -> 5   <- 6/7/8/16/32\n",
    "min_child_weight:        1 -> 1.6\n",
    "subsample:               1 -> 0.9 <- 0.7/0.5/0.3\n",
    "colsample_bytree:        1 -> 0.5 <- 0.9/0.3\n",
    "lambda:  0 -> 1 -> 2 -> 32 -> 16\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "2ded6edb-52cb-4606-b651-ad8e813d0042",
    "_uuid": "ecbf3ff81371a72e9db77b1341a15d6009623875"
   },
   "source": [
    "The narrative:\n",
    "\n",
    "- `eta`: `0.01 -> 0.005`\n",
    "    - My previous best model had fewer than 50 trees (`n=35`), so I halved the learning rate (`0.5x`), doubled the trees (`2x`), and gained `+1` correct prediction on the public leaderboard.\n",
    "    - Otherwise, most of my training was done at `eta=0.01`.\n",
    "    - `eta=0.1` seems to train too quickly -> overfit too quickly.\n",
    "- `gamma`: `1 -> 2`\n",
    "    - Most of my training was done at `gamma=1`.\n",
    "    - Used to combat overfitting.\n",
    "- `max_depth`: `3 -> 4 -> 5`\n",
    "    - Using such a high `max_depth` is probably suboptimal, given the overwhelming concern of overfitting.\n",
    "    - Other `xgboost` kernels have had success with `max_depth=3`.\n",
    "    - I suspect a model built on a small subset of these features performs better; I think Konstantin's kernel is a pretty good indication of this.\n",
    "    - My intuition was that a single tree requires 2 splits to isolate a single level of a label-encoded column, such as `deck_encode`.  And an interaction across 5 or so columns doesn't seem unreasonable.\n",
    "- `min_child_weight`: `1 -> 1.6`\n",
    "    - Using Owen Zhang's rule of thumb: `mcw = 3/sqrt(event_rate)` -> 1.6\n",
    "    - I didn't really deviate from `1.6`.\n",
    "- `subsample`: `1 -> 0.9`\n",
    "    - Owen Zhang recommends just `1`, but I thought a small amount of subsampling (`0.9`) might help with overfitting.\n",
    "- `colsample_bytree`: `1 -> 0.5`\n",
    "    - Again following Owen Zhang; I did not deviate from `0.5` very much.\n",
    "    - `colsample_bytree=1` seems to cause `surv` to overfit; the model will refuse to use other (apparently) suboptimal columns.\n",
    "- `lambda`: `1 -> 16`\n",
    "    - I wanted a lot of regularization, and `gamma` seemed too heavy-handed.\n",
    "    - `lambda` seems to slow, but not stop, overfitting.\n",
    "    - I used powers of 2 (`1, 2, 4, 16, 32, 64`) and values halfway between (`3, 10, 24, 48`).\n",
    "- I also tried adjusting:\n",
    "    - `scale_pos_weight`: `0.5` to `3.0` in steps of `0.1`\n",
    "    - `base_score`: `0.5 -> 0.4, 0.45, 0.49, 0.51, 0.55, 0.6`"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "82e43689-5205-4578-af4f-91c1a34d4e10",
    "_uuid": "7985cf2f8e88a17d8d9e46b152b19d1bc4fd30c3"
   },
   "source": [
    "## Prototyping Examples\n",
    "\n",
    "Here's a quick look at some parameter combinations:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "d260336e-70c1-4305-862f-a3614f4ae91e",
    "_uuid": "6780d14588e9dcf4c326bf93dd0bf8df3c46a3cf"
   },
   "source": [
    "### Defaults\n",
    "Different implementations have different defaults."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "2c5980e0-8851-42d0-a2d8-742f9983ad51",
    "_kg_hide-input": true,
    "_kg_hide-output": false,
    "_uuid": "8ba1d61e5408ea72383a8a31140966bbf672e8bc",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "_params = {\n",
    "    'eta': 0.1,\n",
    "    'gamma': 0,\n",
    "    'max_depth': 3,\n",
    "    'min_child_weight': 1,\n",
    "    'subsample': 1,\n",
    "    'colsample_bytree': 1,\n",
    "    'lambda': 0,\n",
    "    'eval_metric': ['error', 'logloss'],\n",
    "    'objective': 'binary:logistic',\n",
    "    'silent': 1,\n",
    "    'seed': SEED,\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "e1c06acc-6ea5-4600-976b-b02239b156a9",
    "_kg_hide-input": false,
    "_kg_hide-output": true,
    "_uuid": "39cebd1ceb5d42eedf03e49a86bc7bec999def54",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "_h = holdout(_params, n=200)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "e9f56ddb-0c0e-472a-8d9a-db636492af05",
    "_kg_hide-input": false,
    "_kg_hide-output": true,
    "_uuid": "45d921c41bc01093ba1525b0da6c7259481ab7c3",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "_cv = cv(_params, n=200)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "0f7df470-9e90-4f8a-8fe6-20998dd163ce",
    "_uuid": "2ff8acd322c6e4708e36ad45ddbbb2705ba72fe0"
   },
   "source": [
    "### Zoom In"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "b616e782-00b8-49cc-a31c-5ea13825bbf1",
    "_kg_hide-input": true,
    "_kg_hide-output": false,
    "_uuid": "c5ed48c6ffb264d3a7bc644c708e4a20e7f7ead1",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "_params = {\n",
    "    'eta': 0.025,\n",
    "    'gamma': 0,\n",
    "    'max_depth': 3,\n",
    "    'min_child_weight': 1,\n",
    "    'subsample': 1,\n",
    "    'colsample_bytree': 1,\n",
    "    'lambda': 0,\n",
    "    'eval_metric': ['error', 'logloss'],\n",
    "    'objective': 'binary:logistic',\n",
    "    'silent': 1,\n",
    "    'seed': SEED,\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "f94803d5-921e-4434-83d1-80bf6ab6a157",
    "_kg_hide-input": false,
    "_kg_hide-output": true,
    "_uuid": "379cdd7e197d377821c81c06b7f96fe8c45a6d0e",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "_h = holdout(_params, n=200)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "f2e1b72e-7f8b-4017-9085-24b03f7569ce",
    "_kg_hide-input": false,
    "_kg_hide-output": true,
    "_uuid": "e8765a386942f505901617b523422bc570b8525b",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "_cv = cv(_params, n=200)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "667d0b5e-22a7-492d-99f3-61a072c92ab9",
    "_uuid": "5ac7e22db48d64a732827c3b597080631d58bb33"
   },
   "source": [
    "### Add some regularization"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "4541a234-8ffc-47a9-b2cd-7ccfee1fadd7",
    "_kg_hide-input": true,
    "_kg_hide-output": false,
    "_uuid": "ca1319fab94731725b7d349417a154b2e8c16df6",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "_params = {\n",
    "    'eta': 0.025,\n",
    "    'gamma': 1,\n",
    "    'max_depth': 3,\n",
    "    'min_child_weight': 1.6,\n",
    "    'subsample': 1,\n",
    "    'colsample_bytree': 0.5,\n",
    "    'lambda': 1,\n",
    "    'eval_metric': ['error', 'logloss'],\n",
    "    'objective': 'binary:logistic',\n",
    "    'silent': 1,\n",
    "    'seed': SEED,\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "d75f1bbf-5dea-486a-985e-c45c81db97c5",
    "_kg_hide-input": false,
    "_kg_hide-output": true,
    "_uuid": "5c75b603247df62c7ec753cea611ce9edc66dd7e",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "_h = holdout(_params, n=200)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "148aec68-f5e1-4753-b837-32af5cf5c375",
    "_kg_hide-input": false,
    "_kg_hide-output": true,
    "_uuid": "0c5e4f8f6db9ad2fb4ae2b5adbefc21325874da9",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "_cv = cv(_params, n=200)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "e69a3443-949e-4ede-9b22-0c8e9481339d",
    "_uuid": "1829531e79b046274b16ee3ae1229f73e9ed5357"
   },
   "source": [
    "- At this point, I would look for a number of trees `n` with low error on both the holdout and the CV.\n",
    "- For a long time, I wanted a model with low log loss as well, but I never managed to achieve it.\n",
    "- I eventually spent most of my training on small variations of `eta=0.01`, `gamma=1`, `max_depth=5`, `mcw=1.6`, `subsample=0.9`, `colsample=0.5`, `lambda=16`.\n",
    "- At various points, I removed features that were unused or almost unused (`f score=1`) by my then-best models.  I tried not to do too much feature selection."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "a67229d2-296b-4882-b4e1-71a13f0f9a2d",
    "_uuid": "2f0834bb10af22e4ec46dafe84f43db2f8385689"
   },
   "source": [
    "## The Final Model\n",
    "\n",
    "Without further ado."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "c3b10331-cd22-4b9b-a52e-ed2ad1fe484e",
    "_uuid": "ec25346c40c49708d29e61ebb433c03bf01e845e",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "params = {\n",
    "    'eta': 0.005,\n",
    "    'gamma': 2,\n",
    "    'max_depth': 5,\n",
    "    'min_child_weight': 1.6,\n",
    "    'subsample': 0.9,\n",
    "    'colsample_bytree': 0.5,\n",
    "    'lambda': 16,\n",
    "    'eval_metric': ['error', 'logloss'],\n",
    "    'objective': 'binary:logistic',\n",
    "    'silent': 1,\n",
    "    'seed': SEED,\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "8c461e95-2a6b-455e-b727-c441f0a7ce87",
    "_uuid": "45921ed82a264349843cb81c3008136f49e6e267",
    "collapsed": true
   },
   "source": [
    "## 7.2 Holdout\n",
    "\n",
    "Train on `dtrain` and measure log loss and error on `dval`."
   ]
  },
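  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Under the hood, a holdout run like this presumably boils down to `xgb.train` with an `evals` watchlist.  A self-contained sketch on synthetic data (the data and names here are made up for illustration; this is not the notebook's `holdout` helper):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import xgboost as xgb\n",
    "\n",
    "rng = np.random.RandomState(0)\n",
    "dtr = xgb.DMatrix(rng.rand(80, 3), rng.randint(0, 2, 80))\n",
    "dva = xgb.DMatrix(rng.rand(20, 3), rng.randint(0, 2, 20))\n",
    "\n",
    "params = {'eta': 0.1, 'max_depth': 3,\n",
    "          'objective': 'binary:logistic',\n",
    "          'eval_metric': ['error', 'logloss']}\n",
    "\n",
    "res = {}  # filled with one value per metric per boosting round\n",
    "bst = xgb.train(params, dtr, num_boost_round=20,\n",
    "                evals=[(dtr, 'train'), (dva, 'val')],\n",
    "                evals_result=res, verbose_eval=False)\n",
    "len(res['val']['logloss'])  # 20\n",
    "```"
   ]
  },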
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "9f5478e7-ad9c-4871-8360-f8a573828e43",
    "_uuid": "c91cf398fba4046e371c968650d8dfdd996de67d",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "%%time\n",
    "h = holdout(params, n=200)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "2ac7b25b-db2d-4d91-b023-c488501eedba",
    "_uuid": "27f1fe9ba7cb6331ab1ec1dc0c1969c00548c44f",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "plot_holdout_error(h, 0, 200)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "9896d86f-b485-461f-8595-0113a4296367",
    "_uuid": "f6c3ec12f4dde325a96aa0bba5cc1ecfd2651341",
    "collapsed": true
   },
   "source": [
    "## 7.3 CV\n",
    "\n",
    "Train on `dfull` with `5x5` repeated stratified *k*-fold cross validation.\n",
    "\n",
    "I also used `StratifiedShuffleSplit` at various test sizes (0.1 to 0.95), as well as 10-fold stratified *k*-fold."
   ]
  },
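  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `5x5` scheme itself is available off the shelf in scikit-learn as `RepeatedStratifiedKFold`.  A minimal sketch on toy data (illustrative only; not the notebook's `cv` helper):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.model_selection import RepeatedStratifiedKFold\n",
    "\n",
    "X = np.arange(40).reshape(-1, 1)\n",
    "y = np.array([0, 1] * 20)\n",
    "\n",
    "# 5 folds x 5 repeats; each repeat reshuffles before splitting\n",
    "rskf = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)\n",
    "splits = list(rskf.split(X, y))\n",
    "len(splits)  # 25 train/test index pairs\n",
    "```"
   ]
  },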
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "68f5f3ed-6964-4da3-9d43-840967081956",
    "_uuid": "23b745d3639d0e8229cc2a0d56f6fc4727d36b8e",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "%%time\n",
    "cv_results = cv(params, n=200)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "9d1c8343-2de3-45b6-a25f-4ea7b3df1058",
    "_uuid": "075af5884fab315ea43372785e1216c9a9db1798",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "plot_cv_error(cv_results, 0, 200)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "25c31c2c-4049-4988-ab5e-3fb49510f2c4",
    "_uuid": "b835767f5ac480ae66a4ae2ca76d5230bb027d5b"
   },
   "source": [
    "Candidates for early stopping include: `n = 65, 96, 105`.\n",
    "- 96 and 105 predict the same values.\n",
    "- 65 is 1 off on the public leaderboard.\n",
    "\n",
    "Chasing the leaderboard, my choice is `n=96` trees."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "9d6a90c5-a2ac-419e-a31d-0612de080b3f",
    "_uuid": "8d78b180bf4359119f63944ab3cd0073b485ea02",
    "collapsed": true
   },
   "source": [
    "## 7.4 Feature Importance\n",
    "\n",
    "A quick look at feature importance for a single seed.  The final model averages predictions across several random seeds."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "ee082111-0af0-47a8-88db-fdc9c88d7268",
    "_uuid": "4afccb4f7d6d81bf75e5b90096efb784bd572fc4",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "z = train(params, n=96)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "a3e625fe-14ec-4a68-a4c7-4bb8088bb420",
    "_uuid": "df8ca90bed361f2d3dc64e9aafb6abd3097c3d8f"
   },
   "source": [
    "Unused columns:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "18a66842-a54a-4e53-aa09-491829b377db",
    "_uuid": "78aaf09be0c0287b366939240f0c4a18d0c1173f",
    "collapsed": true,
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "X.columns[~X.columns.isin(z.get_fscore().keys())]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "4ce2c4a9-fc64-47aa-95ed-a7849c1e39a4",
    "_uuid": "6f31e3a5d64f6041450387d0bfca1384d5c0d706"
   },
   "source": [
    "Built-in plotting:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "a4fcad11-5725-4fb6-923c-0521bc025cff",
    "_uuid": "48d9f208dca5c163337ce03b79a24417aed0fe3d",
    "collapsed": true,
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "_, ax = plt.subplots(1, 1, figsize=(13, 16))\n",
    "xgb.plot_importance(z, ax=ax, importance_type='weight');"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "a4fcad11-5725-4fb6-923c-0521bc025cff",
    "_uuid": "48d9f208dca5c163337ce03b79a24417aed0fe3d",
    "collapsed": true,
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "_, ax = plt.subplots(1, 1, figsize=(13, 16))\n",
    "xgb.plot_importance(z, ax=ax, importance_type='gain');"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "a4fcad11-5725-4fb6-923c-0521bc025cff",
    "_uuid": "48d9f208dca5c163337ce03b79a24417aed0fe3d",
    "collapsed": true,
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "_, ax = plt.subplots(1, 1, figsize=(13, 16))\n",
    "xgb.plot_importance(z, ax=ax, importance_type='cover');"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "9a2724c2-9b62-4aac-a65a-d7f3f3a549a5",
    "_uuid": "3f69ccb810bfe9e5b15f371e0660ad5857848cdb",
    "collapsed": true
   },
   "source": [
    "## 7.5 Trees\n",
    "\n",
    "We can look at individual trees.\n",
    "\n",
    "Kaggle's notebook display width is a bit narrow; use browser zoom-in for a more readable view."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "26ee0574-2517-4935-9ddf-b750262ae69a",
    "_uuid": "95b74a89a1ee9a28947379d105d4d06432a88942",
    "collapsed": true
   },
   "source": [
    "### First 5 trees"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "f0034103-d485-4a3f-9ba9-d63b7904058d",
    "_kg_hide-output": true,
    "_uuid": "928f7f63e6432f05f5afb08ed1add02c63cad545",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "xgb.to_graphviz(z, rankdir='LR', num_trees=0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "90a3e185-ee1a-4e5c-9363-19a1b21e827a",
    "_kg_hide-output": true,
    "_uuid": "b46b47173700aee0ac1cc56126aacd4f2e6d69e2",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "xgb.to_graphviz(z, rankdir='LR', num_trees=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "65724d1d-d377-4e48-b926-71a1de378b69",
    "_kg_hide-output": true,
    "_uuid": "0ceb6a379044eb1182cbf2845049475cafa6d4e0",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "xgb.to_graphviz(z, rankdir='LR', num_trees=2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "bb9e461f-7452-4d6c-96a7-054c20314f8d",
    "_kg_hide-output": true,
    "_uuid": "1b21a634c58bb139025463baa1171a927b9718d9",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "xgb.to_graphviz(z, rankdir='LR', num_trees=3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "c39acbeb-5d1c-451f-a77e-42f252a0648e",
    "_kg_hide-output": true,
    "_uuid": "b7bff533648a5efff2a0505316f356bdb40ac36c",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "xgb.to_graphviz(z, rankdir='LR', num_trees=4)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "3dff157f-9621-4c40-a1dc-4304fbb9cfb3",
    "_uuid": "a2373646269ed374fcd3b060cba7d43864b9102d",
    "collapsed": true
   },
   "source": [
    "### Last 5 trees"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "3c6dbe9e-5c50-44b0-80e9-effda356f821",
    "_kg_hide-output": true,
    "_uuid": "07399dfdcec5b4bcf0b0f408ea22457047e0495e",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "xgb.to_graphviz(z, rankdir='LR', num_trees=z.best_ntree_limit-1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "e37ced53-4236-40ee-8423-01edc676ce8a",
    "_kg_hide-output": true,
    "_uuid": "ea12d49f9b50fb87fcb0d35732e5a7901aad305d",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "xgb.to_graphviz(z, rankdir='LR', num_trees=z.best_ntree_limit-2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "24a722fb-480c-405e-93cc-e0e18de95e74",
    "_kg_hide-output": true,
    "_uuid": "f33b8dbfe546c85a88ad0d955bbf68bc7df828ca",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "xgb.to_graphviz(z, rankdir='LR', num_trees=z.best_ntree_limit-3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "dc8342e4-f457-4aa7-82a4-15d244b3f8d2",
    "_kg_hide-output": true,
    "_uuid": "a2a6f6391d9aef60715aeaf53747582563ffcfac",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "xgb.to_graphviz(z, rankdir='LR', num_trees=z.best_ntree_limit-4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "d4f00b9d-6864-47cd-bdf9-ae211853c406",
    "_kg_hide-output": true,
    "_uuid": "ee05365047bd0f179aacc02438eebc6bc83102a6",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "xgb.to_graphviz(z, rankdir='LR', num_trees=z.best_ntree_limit-5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "51c76fe0-cc2a-493c-b1e4-9302482d0f97",
    "_uuid": "275513527d37d24e9eff8896f48dcd275d8cf795",
    "collapsed": true
   },
   "source": [
    "## 7.6 Ensemble\n",
    "\n",
    "- `ensemble` trains on several seeds and averages their probabilities (arithmetic mean).\n",
    "    - It uses `SEED_LIST`, which does not include `SEED`.\n",
    "    - Both `subsample` and `colsample_bytree` are random, so the seed matters.\n",
    "    - Summing log odds is an alternative to the arithmetic mean (see [here](https://arbital.com/p/bayes_log_odds/)).\n",
    "- `threshold` converts probabilities to `0/1` (`int`) for submission, using a strict greater-than: `p > pr -> 1, else 0`."
   ]
  },
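  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sketch of the averaging and thresholding steps on toy probabilities (illustrative only; the actual `ensemble`/`threshold` helpers are defined in section 3):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "probs = np.array([[0.6, 0.4, 0.5],   # seed 1\n",
    "                  [0.8, 0.2, 0.5]])  # seed 2\n",
    "\n",
    "p = probs.mean(axis=0)  # arithmetic mean: [0.7, 0.3, 0.5]\n",
    "\n",
    "def threshold(p, pr=0.5):\n",
    "    # strict greater-than: p > pr -> 1, else 0\n",
    "    return (np.asarray(p) > pr).astype(int)\n",
    "\n",
    "threshold(p)  # [1, 0, 0]; note 0.5 is not > 0.5\n",
    "\n",
    "# log-odds alternative: average in logit space instead\n",
    "logit = np.log(probs / (1 - probs))\n",
    "p_alt = 1 / (1 + np.exp(-logit.mean(axis=0)))\n",
    "```"
   ]
  },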
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "b282ba13-2cc3-415a-b039-7bab593e7361",
    "_uuid": "4e4090d167f19d81264ce699a023686cce51cf87",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "%%time\n",
    "p = ensemble(params, n=96)\n",
    "y_hat = threshold(p)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "9cd5641b-bdb5-48ec-b397-13497846cad6",
    "_uuid": "d79edadf2f5a161fc61e709e21ea94ea0dbb786b"
   },
   "source": [
    "One sanity check is the predicted number of survivors.  Probing the leaderboard with all zeros scores 0.62679, i.e. there are 78/209 survivors on the public LB.  Extrapolating that survival rate to the whole `test` set (`x2`) predicts 156/418 survivors.  (Note: we can't compare against the 78 directly, since we can't tell which `test` rows are in the public half.)  This also matches the survivor count according to Wikipedia (excluding crew and `train` passengers): https://en.wikipedia.org/wiki/Sinking_of_the_RMS_Titanic#Casualties_and_survivors"
   ]
  },
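  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The arithmetic behind that estimate, spelled out:\n",
    "\n",
    "```python\n",
    "# All-zeros scores 0.62679 on the 209-passenger public half,\n",
    "# and all-zeros accuracy = non-survivors / 209:\n",
    "survivors_public = 209 - round(0.62679 * 209)  # 78\n",
    "rate = survivors_public / 209                  # ~0.373\n",
    "expected_survivors = round(rate * 418)         # 156 for the full test set\n",
    "```"
   ]
  },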
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "11b413ff-6952-4e40-a14e-52eacc852307",
    "_uuid": "b51b408d27c19061b8ef647ecf310254df423b9b",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "sum(y_hat)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "00741d8f-c692-4f87-a366-0c780b58874d",
    "_uuid": "620889c340683f00bf19fcb8edb4666b108f013b"
   },
   "source": [
    "- The few top kernels I checked are all biased toward 0.  This is probably an artifact of optimizing accuracy on an imbalanced (and fairly noisy) dataset, as opposed to F1 or log loss.\n",
    "\n",
    "- Thresholding to increase or decrease the number of predicted survivors sometimes helps; whether it can be done in a principled and robust manner is a different matter.\n",
    "\n",
    "- For reference, Konstantin's 0.83253 kernel predicts 134 survivors."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "a996f805-1afb-4e22-880d-1572ebe7c6b8",
    "_uuid": "9a50a399e006336f4df6e43df43e7ca83278785a",
    "collapsed": true
   },
   "source": [
    "## 7.7 Submit"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "06e6e418-700e-4f8f-aa57-80bcf5ed951b",
    "_uuid": "99f82251335b6ef37c88cc2110da5a864df5fe52"
   },
   "source": [
    "### Public LB: 0.82775"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "e7a25e33-f44a-47f7-aec1-bf494bd8168a",
    "_uuid": "c86cd3e43552c123be8df4a0448757cb073116bf",
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "submit(y_hat, 'xgb')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "71ddcb95-b957-484c-90b5-a6415c20f004",
    "_uuid": "39ccb5b52ad9c2b8e1ecdf18fdf1dd049cbcf6c2",
    "collapsed": true
   },
   "source": [
    "# 8. Final Words"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "9d54736b-d42d-481e-9400-8b85d4f3e59c",
    "_uuid": "26fdbd2447358001763d6f78fd70eb1ba7906993",
    "collapsed": true
   },
   "source": [
    "This is an interesting dataset with a lot of noise, but not so much noise that it's easy to luck into a good score, in my opinion.  I had a hard time crossing 0.78, then 0.79, 0.80, etc., even though each step is only a difference of 2 people.  My models were surprisingly stable in terms of peak score; I'm not sure whether that's a testament to XGBoost, an artifact of my approach, or just plain luck/false pattern matching.\n",
    "\n",
    "I actually spent a lot of time trying to build out a robust and principled cross validation workflow. [Version 55](https://www.kaggle.com/numbersareuseful/titanic-starter-with-xgboost-173-209-top-2-lb?scriptVersionId=3452087) is an example of my attempt, using `RandomizedSearchCV`.  It was a complete failure.  Ultimately, I restarted from scratch, simplified my workflow, and changed tactics: focus on feature engineering + learn from other top kernels + switch to hand tuning.\n",
    "\n",
    "At the end of the day, I'm not actually sure whether my model is underfitting or overfitting, and I don't have much confidence in it because the parameters were hand-tuned in an ad hoc fashion.  I think parameter search is probably a necessary ingredient for robust and interpretable models, and there's a lot of room to build a better model, or at least to better justify (for or against) the parameters I'm using.  Model justification (i.e., confidence) is just as important as model performance, because generalization is the holy grail.\n",
    "\n",
    "But that's all I've got.  Good luck!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_cell_guid": "4032e05f-f7e9-4c88-89bf-1fe55694a82d",
    "_uuid": "59dfbf458e2e4b58c378fe5080f121a89eafc358",
    "collapsed": true
   },
   "source": [
    "**Questions, comments, criticism, tips & tricks all welcome!**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "_cell_guid": "f02da8c8-1782-4ead-8e70-aa7ae119be42",
    "_uuid": "ace88d6a9d18e1a8a137735a3ceed2df28c3c76e",
    "collapsed": true
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "anaconda-cloud": {},
  "kernelspec": {
   "display_name": "Python [default]",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
