{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Chapter 4: Sample Weights\n",
    "___\n",
    "\n",
    "## Exercises\n",
    "\n",
    "\n",
     "**4.1** In Chapter 3, we denoted by `t1` a pandas Series of timestamps at which the first barrier was touched, indexed by the timestamp of each observation. This was an output of the `getEvents` function.\n",
    "- **(a)** Compute a t1 series on dollar bars derived from E-mini S&P 500 futures tick data.\n",
     "- **(b)** Apply the function `mpNumCoEvents` to compute the number of overlapping outcomes at each point in time.\n",
    "- **(c)** Plot the time series of the number of concurrent labels on the primary axis, and the time series of exponentially weighted moving standard deviation of returns on the secondary axis.\n",
     "- **(d)** Produce a scatterplot of the number of concurrent labels (x-axis) against the exponentially weighted moving standard deviation of returns (y-axis). Do you observe a relationship?\n",
    "\n",
     "**4.2** Using the function `mpSampleTW`, compute the average uniqueness of each label. What is the first-order serial correlation, AR(1), of this time series? Is it statistically significant? Why?\n",
    "\n",
     "**4.3** Fit a random forest to a financial dataset where $I^{-1} \\sum_{i=1}^{I} \\bar{u}_i \\ll 1$.\n",
    "- **(a)** What is the mean out-of-bag accuracy?\n",
    "- **(b)** What is the mean accuracy of k-fold cross-validation (without shuffling) on the same dataset?\n",
     "- **(c)** Why is the out-of-bag accuracy so much higher than the cross-validation accuracy? Which estimate is less biased, and what is the source of this bias?\n",
    "\n",
    "**4.4** Modify the code in Section 4.7 to apply an exponential time-decay factor.\n",
    "\n",
     "**4.5** Consider that you have applied meta-labels to events determined by a trend-following model. Suppose that two-thirds of the labels are 0 and one-third are 1.\n",
    "- **(a)** What happens if you fit a classifier without balancing class weights?\n",
     "- **(b)** A label of 1 means a true positive, and a label of 0 means a false positive. By applying balanced class weights, we force the classifier to pay more attention to the true positives and less attention to the false positives. Why does that make sense?\n",
    "- **(c)** What is the distribution of the predicted labels, before and after applying balanced class weights?\n",
    "\n",
    "**4.6** Update the draw probabilities for the final draw in Section 4.5.3.\n",
    "\n",
    "**4.7** In Section 4.5.3, suppose that number 2 is picked again in the second draw. What would be the updated probabilities for the third draw?"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
