{
  "cells": [
    {
      "cell_type": "markdown",
      "id": "vKQIxRkReytl",
      "metadata": {
        "id": "vKQIxRkReytl"
      },
      "source": [
        "Copyright 2023 Google LLC\n",
        "\n",
        "Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "you may not use this file except in compliance with the License.\n",
        "You may obtain a copy of the License at\n",
        "\n",
        "     https://www.apache.org/licenses/LICENSE-2.0\n",
        "\n",
        "Unless required by applicable law or agreed to in writing, software\n",
        "distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "See the License for the specific language governing permissions and\n",
        "limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "zrs1XRdJfDtO",
      "metadata": {
        "id": "zrs1XRdJfDtO"
      },
      "source": [
        "# [WIP] Exploring ML model-environment interactions"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "fkr3DaqFfqrr",
      "metadata": {
        "id": "fkr3DaqFfqrr"
      },
      "source": [
        "NOTE: This colab is a work in progress for an upcoming paper."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "e4f3b135",
      "metadata": {
        "id": "e4f3b135"
      },
      "source": [
        "In this notebook, we adopt a **causal framework** to explore the impact of **model specification** choices on **algorithmic fairness over time**. Model specification refers to a series of choices that one makes when developing a predictive model, including: \n",
        "\n",
        "1. **Variable operationalization**:\n",
        "    - Given a problem or predictive task stated in natural language, how can we map semantic concepts to features with known types that we can extract, collect, or approximate? \n",
        "    - Note that these mappings may be set-valued; for this reason, steps (1) and (2) are closely intertwined. \n",
        "2. **Variable selection**: \n",
        "    - Inclusion/exclusion of independent and dependent variable(s); identification of proxy variables.\n",
        "3. **Functional form selection**:\n",
        "    - What parametric or distributional assumptions can we make about how the dependent variable is related to each of the independent variable(s), given what we know or hypothesize about the data-generating process? \n",
        "    \n",
        "As a motivating example, we consider the algorithmic decision-making task outlined in [Obermeyer et al. 2019](https://www.ftc.gov/system/files/documents/public_events/1548288/privacycon-2020-ziad_obermeyer.pdf). In this paper, the authors review a machine learning pipeline which has been deployed by multiple health systems to select a subset of patients to participate in care management programs, which have been empirically demonstrated to improve clinical outcomes and reduce costs. As the authors state in the paper's introduction, the underlying question when considering such constrained resource allocation tasks is how to select the subset of patients that will *derive greatest marginal benefit from participation*, relative to non-participation, subject to the satisfaction of budget constraints. However, this type of intervention effect can be difficult to estimate for a variety of reasons in the absence of randomized control trial data. For this reason, the model's developers begin their model specification task by assuming that **need for care** is a suitable proxy for **marginal benefit associated with program participation**. \n",
        "\n",
        "- $f: \\texttt{marginal_benefit_of_program} \\rightarrow \\texttt{need_for_care}$\n",
        "\n",
        "They proceed to identify three ways in which $\\texttt{need_for_care}_t$ might be operationalized, represented by the set-valued function $g$ below:\n",
        "\n",
        "- $g: \\texttt{need_for_care} \\rightarrow \\{\\texttt{cost}, \\ \\texttt{avoidable_cost}, \\ \\texttt{num_active_chronic_conditions}\\}$\n",
        "\n",
        "When it comes to each predictive model's **feature space**, the developers seek to select variables that reflect/represent each patient's $\\texttt{sociodemographic_characteristics }$ and $\\texttt{ medical_history}_{0:t-1}$. These operationalization mappings are formalized below:\n",
        "\n",
        "- $h: \\texttt{socdemo_characteristics} \\rightarrow \\{ \\texttt{gender, age_bucket, insurance_type ...}\\} \\setminus \\{\\texttt{race}\\}$\n",
        "\n",
        "Note here that $\\texttt{gender}$ is assumed to be both binary and time-invariant, and that age at timestep $t-1$ is used when mapping patients to discretized $\\texttt{age_bucket}$s. Additionally, note that the model developers make the deliberate decision to exclude $\\texttt{race}$ presumably in an effort to ensure \"fairness through unawareness\"---i.e., exclusion (during training) of the sensitive attribute(s) believed to be associated with disparate treatment.\n",
        "\n",
        "- $j: \\texttt{medical history} \\rightarrow \\{\\texttt{diagnoses, procedure codes, medications, costs_incurred}\\}$.\n",
        "\n",
        "Note here that the operationalization of $\\texttt{medical history}$ is informed by the availability of longitudinal claims data containing these features. While real-world claims records will typically contain multiple years of observations for each patient, in the synthetic dataset that the authors make publically available, we only observe two timesteps---i.e., features observed at time $t-1$ are used to predict the outcome variable(s) of interest at time $t$. Additionally, we note that many other operationalization choices may be possible, provided that corresponding datasets exist (e.g., medication adherence, exercise, vital signs, patient-reported symptoms, etc. might be available through sensor data and/or mobile apps).  \n",
        " \n",
        "$\\texttt{LASSO}  (h \\cup j)_{t-1} \\rightarrow (g\\circ f)_t$\n",
        "$\n",
        "\n",
        "\n",
        "LASSO (or, more generally, regularization) as a form of variable selection given a high-dimensional and potentially sparse feature space."
      ]
    },
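    {
      "cell_type": "markdown",
      "id": "opmap-sketch-md",
      "metadata": {
        "id": "opmap-sketch-md"
      },
      "source": [
        "As a minimal, illustrative sketch (ours, not taken from the paper or its code), the proxy mapping $f$ and the set-valued operationalization $g$ above can be represented as plain Python dictionaries and composed; all names below are hypothetical."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "opmap-sketch-code",
      "metadata": {
        "id": "opmap-sketch-code"
      },
      "outputs": [],
      "source": [
        "# Illustrative sketch of the specification mappings described above (names are ours)\n",
        "# f: target concept -> measurable proxy\n",
        "PROXY_MAP = {'marginal_benefit_of_program': 'need_for_care'}\n",
        "\n",
        "# g: proxy -> candidate (set-valued) operationalizations of the outcome\n",
        "OPERATIONALIZATIONS = {\n",
        "    'need_for_care': {'cost', 'avoidable_cost', 'num_active_chronic_conditions'},\n",
        "}\n",
        "\n",
        "def candidate_labels(concept: str) -> set:\n",
        "    \"\"\"Compose g with f: map a target concept to its candidate outcome variables.\"\"\"\n",
        "    return OPERATIONALIZATIONS[PROXY_MAP[concept]]\n",
        "\n",
        "candidate_labels('marginal_benefit_of_program')"
      ]
    },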
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "5ad0e5dc",
      "metadata": {
        "id": "5ad0e5dc"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "import networkx as nx\n",
        "import numpy as np\n",
        "import pandas as pd\n",
        "from typing import Optional\n",
        "from itertools import product,chain\n",
        "import causalnex\n",
        "from math import ceil\n",
        "from sklearn.preprocessing import LabelEncoder\n",
        "import warnings\n",
        "warnings.simplefilter(action='ignore', category=FutureWarning)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "f96ee046",
      "metadata": {
        "id": "f96ee046"
      },
      "outputs": [],
      "source": [
        "DATA_DIR = os.path.join(\"..\", \"data\")\n",
        "df = pd.read_csv(os.path.join(DATA_DIR, \"data_new.csv\"))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "3d0af3be",
      "metadata": {
        "id": "3d0af3be"
      },
      "outputs": [],
      "source": [
        "# Select columns corresponding to predictive model features \n",
        "# - Taken from https://gitlab.com/labsysmed/dissecting-bias/-/blob/master/code/model/features.py\n",
        "# - Slightly modified syntax but preserves original functionality\n",
        "\n",
        "def get_dem_features(df: pd.DataFrame, prefix: str = 'dem_', excl_race:bool=False) -\u003e [str]:\n",
        "    \"\"\"Select sociodemographic features; \n",
        "        use excl_race flag to determine whether to keep or exclude race\"\"\"\n",
        "    if excl_race:\n",
        "        return [c for c in df.columns if c[:len(prefix)] == prefix and 'race' not in c]\n",
        "    else:\n",
        "        return [c for c in df.columns if c[:len(prefix)] == prefix]\n",
        "\n",
        "def get_comorbidity_features(df: pd.DataFrame) -\u003e [str]:\n",
        "    \"\"\"Select features related to patients' comorbidities at time t-1\"\"\"\n",
        "    comorbidity_sum = 'gagne_sum_tm1'\n",
        "    suffix_elixhauser = '_elixhauser_tm1'\n",
        "    suffix_romano = '_romano_tm1'\n",
        "    \n",
        "    return [c for c in df.columns if c == comorbidity_sum or \n",
        "             suffix_elixhauser in c or suffix_romano in c]\n",
        "\n",
        "def get_cost_features(df: pd.DataFrame, prefix='cost_') -\u003e [str]:\n",
        "    \"\"\"Select features related to patients' incurred costs at time t-1;\n",
        "        exclude features related to costs at time t\"\"\"\n",
        "    return [c for c in df.columns if prefix == c[:len(prefix)] \n",
        "            and c not in ['cost_t', 'cost_avoidable_t']]\n",
        "\n",
        "def get_lab_features(df: pd.DataFrame) -\u003e [str]:\n",
        "    \"\"\"Select features related to patients' lab results at time t-1\"\"\"\n",
        "    suffix_labs_counts = '_tests_tm1'\n",
        "    suffix_labs_low = '-low_tm1'\n",
        "    suffix_labs_high = '-high_tm1'\n",
        "    suffix_labs_normal = '-normal_tm1'\n",
        "    \n",
        "    return [c for c in df.columns if np.any([suffix_labs_counts in c, suffix_labs_low in c, \n",
        "                                         suffix_labs_high in c, suffix_labs_normal in c])]\n",
        "\n",
        "def get_med_features(df: pd.DataFrame, prefix='lasix_') -\u003e [str]:\n",
        "    \"\"\"Select features related to patients' medications at time t-1.\n",
        "        Note that the prefix they use returns only lasix meds, which are diuretics\"\"\"\n",
        "    return [c for c in df.columns if c[:len(prefix)]==prefix]\n",
        "\n",
        "def get_all_features(df: pd.DataFrame, verbose:bool=True) -\u003e [str]:\n",
        "    \"\"\"Return a list of features representing the union over all available feature selection functions\"\"\"\n",
        "    return list(chain(*[f(df) for f in [get_dem_features, get_comorbidity_features, \n",
        "                                          get_cost_features,get_lab_features, get_med_features]]))\n",
        "    \n",
        "\n"
      ]
    },
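    {
      "cell_type": "markdown",
      "id": "featsel-toy-md",
      "metadata": {
        "id": "featsel-toy-md"
      },
      "source": [
        "A small self-contained sanity check (ours, not from the original repo) of the prefix-based selection pattern used above, applied to a toy column set that mimics the dataset's naming scheme:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "featsel-toy-code",
      "metadata": {
        "id": "featsel-toy-code"
      },
      "outputs": [],
      "source": [
        "# Toy columns mimicking the synthetic dataset's naming conventions (illustrative)\n",
        "toy_df = pd.DataFrame(columns=['dem_female', 'dem_race_black', 'cost_t',\n",
        "                               'cost_avoidable_t', 'cost_emergency_tm1', 'gagne_sum_tm1'])\n",
        "\n",
        "# mirrors get_dem_features(toy_df, excl_race=True)\n",
        "dem_no_race = [c for c in toy_df.columns if c[:len('dem_')] == 'dem_' and 'race' not in c]\n",
        "# mirrors get_cost_features(toy_df): keep t-1 costs, drop time-t outcomes\n",
        "costs_tm1 = [c for c in toy_df.columns\n",
        "             if c[:len('cost_')] == 'cost_' and c not in ['cost_t', 'cost_avoidable_t']]\n",
        "\n",
        "print(dem_no_race)  # ['dem_female']\n",
        "print(costs_tm1)    # ['cost_emergency_tm1']"
      ]
    },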
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "d812bd69",
      "metadata": {
        "id": "d812bd69"
      },
      "outputs": [],
      "source": [
        "def round_to_nearest_hundred(x) -\u003e int:\n",
        "    return int(ceil(x / 100.0)) * 100\n",
        "\n",
        "def one_hot_encode_mean_score(df: pd.DataFrame, var: str, low_if_lt: float, high_if_gt: float) -\u003e (pd.DataFrame, [str]):\n",
        "    var_name = var.replace(\"_t\", \"\")\n",
        "    if var_name[:3] == 'ldl':\n",
        "        var_name = var_name.replace(\"_mean\", \"-mean\")\n",
        "    \n",
        "    df_t = pd.DataFrame()\n",
        "    df_t['{}-low'.format(var_name)] = df[var].apply(lambda x: x \u003c low_if_lt)\n",
        "    df_t['{}-normal'.format(var_name)] = df[var].apply(lambda x: low_if_lt \u003c= x \u003c= high_if_gt)\n",
        "    df_t['{}-high'.format(var_name)] = df[var].apply(lambda x: x \u003e high_if_gt)\n",
        "\n",
        "    assert np.max(np.sum(df_t, axis=1)) == 1\n",
        "    \n",
        "    df_tm1 = df[[\"{}-{}_tm1\".format(x[0], x[1]) for x in  list(product([var_name], [\"low\", \"normal\", \"high\"]))]]\n",
        "    df_tm1.columns = df_t.columns\n",
        "\n",
        "    return pd.concat([df_tm1, df_t]), df_t.columns\n",
        "\n",
        "def discretize_mean_score(df: pd.DataFrame, var:str, low_if_lt: float, high_if_gt: float) -\u003e (pd.DataFrame, [str]):\n",
        "    var_name = var.replace(\"_t\", \"\")\n",
        "    if var_name[:3] == 'ldl':\n",
        "        var_name = var_name.replace(\"_mean\", \"-mean\")\n",
        "    \n",
        "    df_t = pd.DataFrame()\n",
        "    df_t[var_name] = df[var].apply(lambda x: \"low\" if x \u003c low_if_lt else \"normal\" if low_if_lt \u003c= x \u003c= high_if_gt else \"high\" if x \u003e high_if_gt else \"unobs\")\n",
        "    \n",
        "    df_tm1 = df[[\"{}-{}_tm1\".format(x[0], x[1]) for x in  list(product([var_name], [\"low\", \"normal\", \"high\"]))]]\n",
        "    df_tm1.loc[:,var_name] = df_tm1.apply(lambda row: \"low\" if row[\"{}-low_tm1\".format(var_name)] == 1 \n",
        "                                          else \"normal\" if  row[\"{}-normal_tm1\".format(var_name)] == 1 \n",
        "                                          else \"high\" if row[\"{}-high_tm1\".format(var_name)] == 1\n",
        "                                          else \"unobs\",axis=1)\n",
        "    \n",
        "    merged_df = pd.concat([df_tm1[[var_name]], df_t])\n",
        "    merged_df.loc[:,var_name] = merged_df[var_name].astype(\"category\")\n",
        "    \n",
        "    return merged_df, merged_df.columns\n",
        "\n",
        "\n",
        "def get_time_series_for_outcome_vars(df: pd.DataFrame, var:str) -\u003e (pd.DataFrame, [str]):\n",
        "    \n",
        "    ### Construct time-series for each of the outcome variables that they consider in the paper: (see https://gitlab.com/labsysmed/dissecting-bias/-/blob/master/data/data_dictionary.md)\n",
        "\n",
        "    # risk_score_t: Commercial algorithmic risk score prediction for cost in year t, formed using data from year t-1. risk_score_tm1 is NOT computable because we do not have (input) data for year t-2.\n",
        "    \n",
        "    # program_enrolled_t: Indicator for whether patient-year was enrolled in program. FOr models, this is a function of the model and the percentile cutoff. In their data, this is the observed enrollment (ie, based on original alg. see pg. 6 of paper)\n",
        "    #    program_enrolled_tm1 is NOT computable because we don't have (input) data for year t-2. Note: we might choose to default to 0?\n",
        "    # TODO\n",
        "    if var in ['risk_score_t', 'program_enrolled_t']:\n",
        "        pass \n",
        "    \n",
        "    # gagne_sum_t: Total number of active chronic illnesses. gagne_sum_tm1 exists in the data and does not need to be computed.\n",
        "    elif var == 'gagne_sum_t':\n",
        "        return pd.concat([df['gagne_sum_tm1'], df[var]]), ['gagne_sum']\n",
        "    \n",
        "    # Cost_t: Total medical expenditures, rounded to the nearest 100. cost_tm1 IS computable by summing over all costs incurred in year t-1 and rounding to the nearest 100.\n",
        "    elif var == 'cost_t':\n",
        "        cost_features = get_cost_features(df, prefix='cost_')\n",
        "        return pd.concat([df[cost_features].sum(axis=1).apply(lambda x: round_to_nearest_hundred(x)), df[var]]), ['cost']\n",
        "    \n",
        "    # Cost_avoidable_t: Total avoidable (emergency + inpatient) medical expenditures, rounded to nearest 100. \n",
        "    #     cost_avoidable_tm1 IS computable by sum('cost_emergency_tm1', 'cost_ip_medical_tm1', 'cost_ip_surgical_tm1') for year t-1 and rounding to the nearest 100.\n",
        "    elif var == 'cost_avoidable_t':\n",
        "        return pd.concat([df[['cost_emergency_tm1', 'cost_ip_medical_tm1', 'cost_ip_surgical_tm1']].sum(axis=1).apply(lambda x: round_to_nearest_hundred(x)), df[var]]), ['cost_avoidable']\n",
        "    \n",
        "    # Mean systolic blood pressure in year t. We don't have `bps_mean_tm1` but we do have an indicator variable `hypertension_elixhauser_tm1` for hypertension at t-1. \n",
        "    #    Per CDC guidance (https://www.cdc.gov/bloodpressure/about.htm), a person is considered to have high blood pressure (hypertension) w/ systolic blood pressure \u003e= 130 mm Hg.\n",
        "    #    So, we can binarize and concatenate.\n",
        "    elif var == 'bps_mean_t':\n",
        "        return pd.concat([df['hypertension_elixhauser_tm1'], df[var].apply(lambda x: x \u003e= 130.0)]), ['hypertension_elixhauser']\n",
        "    \n",
        "    # Mean HbA1C in year t. We don't have `ghba1c_mean_tm1`, \n",
        "    #    Let x = `mean GHbA1c test result`; then: low := x \u003c4; normal := 4 \u003c= x \u003c= 5.7; high := x \u003e 5.7\n",
        "    elif var == 'ghba1c_mean_t':\n",
        "        return discretize_mean_score(df=df, var=var, low_if_lt=4, high_if_gt=5.7)\n",
        "    \n",
        "    # Mean hematocrit test result in year t. We don't have `hct_mean_tm1`, \n",
        "    #    Let x = `mean hct test result`; then: low := x \u003c35.5; normal := 35.5 \u003c= x \u003c= 48.6; high := x \u003e 48.6\n",
        "    elif var == 'hct_mean_t': \n",
        "        return discretize_mean_score(df=df, var=var, low_if_lt=35.5, high_if_gt=48.6)\n",
        "    \n",
        "    # Mean creatinine test result in year t. We don't have `cre_mean_tm1`, \n",
        "    #    Let x = `mean cre test result`; then: low := x \u003c0.84; normal := 0.84 \u003c= x \u003c= 1.21; high := x \u003e 1.21\n",
        "    elif var == 'cre_mean_t':\n",
        "        return discretize_mean_score(df=df, var=var, low_if_lt=0.84, high_if_gt=1.21)\n",
        "    \n",
        "    # Mean LDL (low-density lipoprotein cholesterol) test result in year t. We don't have `ldl_mean_tm1`, \n",
        "    #    Let x = `mean LDL test result`; then: low := x \u003c50; normal := 50 \u003c= x \u003c= 99; high := x \u003e 99\n",
        "    elif var in ['ldl_mean_t']:\n",
        "        return discretize_mean_score(df=df, var=var, low_if_lt=50, high_if_gt=99)\n",
        "    \n",
        "    else:\n",
        "        raise ValueError(\"Variable `{}` is not currently supported. Supported variables include: gagne_sum_t, cost_t, cost_avoidable_t, bps_mean_t, ghba1c_mean_t, hct_mean_t, cre_mean_t, ldl_mean_t.\".format(var))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "c94fef46",
      "metadata": {
        "id": "c94fef46"
      },
      "outputs": [],
      "source": [
        "def create_long_df_obermeyer(wide_df: pd.DataFrame, t_start:int=0,outcome_vars: [str] = ['gagne_sum_t', 'cost_t', 'cost_avoidable_t', 'bps_mean_t', 'ghba1c_mean_t', 'hct_mean_t', 'cre_mean_t', 'ldl_mean_t'] ):\n",
        "    \n",
        "    df = pd.DataFrame()\n",
        "    df['idx'] = wide_df.index\n",
        "    df['race'] = wide_df['race'].astype(\"category\")\n",
        "    df['gender'] = wide_df['dem_female'].apply(lambda x: 'f' if x == 1 else 'm').astype(\"category\")\n",
        "    df['age_bucket'] = wide_df.apply(lambda x: '18-24' if x['dem_age_band_18-24_tm1']==1\n",
        "                                     else '25-34' if x['dem_age_band_25-34_tm1']==1\n",
        "                                     else '35-44' if x['dem_age_band_35-44_tm1']==1\n",
        "                                     else '45-54' if x['dem_age_band_45-54_tm1']==1\n",
        "                                     else '55-64' if x['dem_age_band_55-64_tm1']==1\n",
        "                                     else '65-74' if x['dem_age_band_65-74_tm1']==1\n",
        "                                     else 'geq_75' if x['dem_age_band_75+_tm1']==1\n",
        "                                     else 'missing',axis=1).astype(\"category\")\n",
        "    df.loc[:,'timestep'] = t_start\n",
        "    \n",
        "    df_t = df.copy()\n",
        "    df_t.loc[:,'timestep'] = t_start + 1\n",
        "    long_df = pd.concat([df,df_t], axis=0)\n",
        "    \n",
        "    cols_df = pd.DataFrame()\n",
        "    colnames = []\n",
        "    \n",
        "    for v in outcome_vars:\n",
        "        time_series_col, cols = get_time_series_for_outcome_vars(df=wide_df, var=v)\n",
        "        cols_df = pd.concat([cols_df, time_series_col], axis=1)\n",
        "        colnames.extend(cols)\n",
        "\n",
        "    cols_df.columns = colnames\n",
        "    long_df = pd.concat([long_df, cols_df],axis=1)\n",
        "    return long_df"
      ]
    },
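    {
      "cell_type": "markdown",
      "id": "widelong-toy-md",
      "metadata": {
        "id": "widelong-toy-md"
      },
      "source": [
        "To make the wide-to-long construction above concrete, here is a tiny self-contained example (ours, with a hypothetical two-timestep variable `x`): static columns are duplicated once per timestep, per-timestep columns are stacked in the same order, and the two pieces are aligned positionally after resetting their duplicated indices."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "widelong-toy-code",
      "metadata": {
        "id": "widelong-toy-code"
      },
      "outputs": [],
      "source": [
        "# Toy wide frame: one static column ('race') and one variable observed at t-1 and t\n",
        "wide = pd.DataFrame({'x_tm1': [1, 2], 'x_t': [3, 4], 'race': ['a', 'b']})\n",
        "\n",
        "static = wide[['race']].copy()\n",
        "static['timestep'] = 0\n",
        "static_t = static.copy()\n",
        "static_t['timestep'] = 1\n",
        "long_toy = pd.concat([static, static_t], axis=0)\n",
        "\n",
        "# stack the per-timestep observations in the same (t-1, then t) order\n",
        "x_long = pd.concat([wide['x_tm1'], wide['x_t']])\n",
        "\n",
        "# positional alignment: reset the duplicated indices before the horizontal concat\n",
        "long_toy = pd.concat([long_toy.reset_index(drop=True),\n",
        "                      x_long.reset_index(drop=True).rename('x')], axis=1)\n",
        "long_toy"
      ]
    },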
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "75edc0db",
      "metadata": {
        "id": "75edc0db"
      },
      "outputs": [],
      "source": [
        "# For reproducibility, imported from https://gitlab.com/labsysmed/dissecting-bias/-/blob/master/code/model/util.py\n",
        "\n",
        "\"\"\"\n",
        "Utility functions.\n",
        "\"\"\"\n",
        "import pandas as pd\n",
        "import numpy as np\n",
        "import os\n",
        "import git\n",
        "\n",
        "\n",
        "def convert_to_log(df, col_name):\n",
        "    \"\"\"Convert column to log space.\n",
        "\n",
        "    Defining log as log(x + EPSILON) to avoid division by zero.\n",
        "\n",
        "    Parameters\n",
        "    ----------\n",
        "    df : pd.DataFrame\n",
        "        Data dataframe.\n",
        "    col_name : str\n",
        "        Name of column in df to convert to log.\n",
        "\n",
        "    Returns\n",
        "    -------\n",
        "    np.ndarray\n",
        "        Values of column in log space\n",
        "\n",
        "    \"\"\"\n",
        "    # This is to avoid division by zero while doing np.log10\n",
        "    EPSILON = 1\n",
        "    return np.log10(df[col_name].values + EPSILON)\n",
        "\n",
        "\n",
        "def convert_to_percentile(df, col_name):\n",
        "    \"\"\"Convert column to percentile.\n",
        "\n",
        "    Parameters\n",
        "    ----------\n",
        "    df : pd.DataFrame\n",
        "        Data dataframe.\n",
        "    col_name : str\n",
        "        Name of column in df to convert to percentile.\n",
        "\n",
        "    Returns\n",
        "    -------\n",
        "    pd.Series\n",
        "        Column converted to percentile from 1 to 100\n",
        "\n",
        "    \"\"\"\n",
        "    return pd.qcut(df[col_name].rank(method='first'), 100,\n",
        "                   labels=range(1, 101))\n",
        "\n",
        "\n",
        "def get_git_dir():\n",
        "    \"\"\"Get directory where git repo is saved.\n",
        "\n",
        "    Returns\n",
        "    -------\n",
        "    str\n",
        "        Full path of git repo home.\n",
        "\n",
        "    \"\"\"\n",
        "    repo = git.Repo('.', search_parent_directories=True)\n",
        "    return repo.working_tree_dir\n",
        "\n",
        "\n",
        "def create_dir(*args):\n",
        "    \"\"\"Create directory if it does not exist.\n",
        "\n",
        "    Parameters\n",
        "    ----------\n",
        "    *args : type\n",
        "        Description of parameter `*args`.\n",
        "\n",
        "    Returns\n",
        "    -------\n",
        "    str\n",
        "        Full path of directory.\n",
        "\n",
        "    \"\"\"\n",
        "    fullpath = os.path.join(*args)\n",
        "\n",
        "    # if path does not exist, create it\n",
        "    if not os.path.exists(fullpath):\n",
        "        os.makedirs(fullpath)\n",
        "\n",
        "    return fullpath\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "6f9e5038",
      "metadata": {
        "id": "6f9e5038"
      },
      "outputs": [],
      "source": [
        "# For reproducibility, imported from https://gitlab.com/labsysmed/dissecting-bias/-/blob/master/code/model/model.py\n",
        "\n",
        "\"\"\"\n",
        "Functions for training model.\n",
        "\"\"\"\n",
        "import pandas as pd\n",
        "import numpy as np\n",
        "import os\n",
        "import matplotlib.pyplot as plt\n",
        "\n",
        "\n",
        "def split_by_id(df, id_field='ptid', frac_train=.6):\n",
        "    \"\"\"Split the df by id_field into train/holdout deterministically.\n",
        "\n",
        "    Parameters\n",
        "    ----------\n",
        "    df : pd.DataFrame\n",
        "        Data dataframe.\n",
        "    id_field : str\n",
        "        Split df by this column (e.g. 'ptid').\n",
        "    frac_train : float\n",
        "        Fraction assigned to train. (1 - frac_train) assigned to holdout.\n",
        "\n",
        "    Returns\n",
        "    -------\n",
        "    pd.DataFrame\n",
        "        Data dataframe with additional column 'split' indication train/holdout\n",
        "\n",
        "    \"\"\"\n",
        "    ptid = np.sort(df[id_field].unique())\n",
        "    print(\"Splitting {:,} unique {}\".format(len(ptid), id_field))\n",
        "\n",
        "    # deterministic split\n",
        "    rs = np.random.RandomState(0)\n",
        "    perm_idx = rs.permutation(len(ptid))\n",
        "    num_train = int(frac_train*len(ptid))\n",
        "\n",
        "    # obtain train/holdout\n",
        "    train_idx = perm_idx[:num_train]\n",
        "    holdout_idx  = perm_idx[num_train:]\n",
        "    ptid_train = ptid[train_idx]\n",
        "    ptid_holdout  = ptid[holdout_idx]\n",
        "    print(\" ...splitting by patient: {:,} train, {:,} holdout \".format(\n",
        "      len(ptid_train), len(holdout_idx)))\n",
        "\n",
        "    # make dictionaries\n",
        "    train_dict = {p: \"train\" for p in ptid_train}\n",
        "    holdout_dict  = {p: \"holdout\"  for p in ptid_holdout}\n",
        "    split_dict = {**train_dict, **holdout_dict}\n",
        "\n",
        "    # add train/holdout split to each\n",
        "    split = []\n",
        "    for e in df[id_field]:\n",
        "        split.append(split_dict[e])\n",
        "    df['split'] = split\n",
        "\n",
        "    return df\n",
        "\n",
        "\n",
        "def get_split_predictions(df, split):\n",
        "    \"\"\"Get predictions for split (train/holdout).\n",
        "\n",
        "    Parameters\n",
        "    ----------\n",
        "    df : pd.DataFrame\n",
        "        Data dataframe.\n",
        "    split : str\n",
        "        Name of split (e.g. 'holdout')\n",
        "\n",
        "    Returns\n",
        "    -------\n",
        "    pd.DataFrame\n",
        "        Subset of df with value split.\n",
        "\n",
        "    \"\"\"\n",
        "    pred_split_df = df[df['split'] == split]\n",
        "    pred_split_df = pred_split_df.drop(columns=['split'])\n",
        "    return pred_split_df\n",
        "\n",
        "\n",
        "def build_formulas(y_col, outcomes):\n",
        "    \"\"\"Build regression formulas for each outcome (y) ~ y_col predictor (x).\n",
        "\n",
        "    Parameters\n",
        "    ----------\n",
        "    y_col : str\n",
        "        Algorithm training label.\n",
        "    outcomes : list\n",
        "        All outcomes of interest.\n",
        "\n",
        "    Returns\n",
        "    -------\n",
        "    list\n",
        "        List of all regression formulas.\n",
        "\n",
        "    \"\"\"\n",
        "    if 'risk_score' in y_col:\n",
        "        predictors = ['risk_score_t']\n",
        "    else:\n",
        "        predictors = ['{}_hat'.format(y_col)]\n",
        "\n",
        "    # build all y ~ x formulas\n",
        "    all_formulas = []\n",
        "    for y in outcomes:\n",
        "        for x in predictors:\n",
        "            formula = '{} ~ {}'.format(y, x)\n",
        "            all_formulas.append(formula)\n",
        "    return all_formulas\n",
        "\n",
        "\n",
        "def get_r2_df(df, formulas):\n",
        "    \"\"\"Short summary.\n",
        "\n",
        "    Parameters\n",
        "    ----------\n",
        "    df : pd.DataFrame\n",
        "        Holdout dataframe.\n",
        "    formulas : list\n",
        "        List of regression formulas.\n",
        "\n",
        "    Returns\n",
        "    -------\n",
        "    pd.DataFrame\n",
        "        DataFrame of formula (y ~ x), holdout_r2, holdout_obs.\n",
        "\n",
        "    \"\"\"\n",
        "    import statsmodels.formula.api as smf\n",
        "    r2_list = []\n",
        "\n",
        "    # run all OLS regressions\n",
        "    for formula in formulas:\n",
        "        model = smf.ols(formula, data=df)\n",
        "        results = model.fit()\n",
        "        r2_dict = {'formula (y ~ x)': formula,\n",
        "                   'holdout_r2': results.rsquared,\n",
        "                   'holdout_obs': results.nobs}\n",
        "        r2_list.append(r2_dict)\n",
        "    return pd.DataFrame(r2_list)\n",
        "\n",
        "\n",
        "def train_lasso(train_df, holdout_df,\n",
        "                x_column_names,\n",
        "                y_col,\n",
        "                outcomes,\n",
        "                n_folds=10,\n",
        "                include_race=False,\n",
        "                plot=False,\n",
        "                output_dir=None):\n",
        "    \"\"\"Train LASSO model and get predictions for holdout.\n",
        "\n",
        "    Parameters\n",
        "    ----------\n",
        "    train_df : pd.DataFrame\n",
        "        Train dataframe.\n",
        "    holdout_df : pd.DataFrame\n",
        "        Holdout dataframe.\n",
        "    x_column_names : list\n",
        "        List of column names to use as features.\n",
        "    y_col : str\n",
        "        Name of y column (label) to predict.\n",
        "    outcomes : list\n",
        "        All labels (Y) to predict.\n",
        "    n_folds : int\n",
        "        Number of folds for cross validation.\n",
        "    include_race : bool\n",
        "        Whether to include the race variable as a feature (X).\n",
        "    plot : bool\n",
        "        Whether to save the mean square error (MSE) plots.\n",
        "    output_dir : str\n",
        "        Path where to save results.\n",
        "\n",
        "    Returns\n",
        "    -------\n",
        "    r2_df : pd.DataFrame\n",
        "        DataFrame of formula (y ~ x), holdout_r2, holdout_obs.\n",
        "    pred_df : pd.DataFrame\n",
        "        DataFrame of all predictions (train and holdout).\n",
        "    lasso_coef_df : pd.DataFrame\n",
        "        DataFrame of lasso coefficients.\n",
        "\n",
        "    \"\"\"\n",
        "    if not include_race:\n",
        "        # remove the race variable\n",
        "        x_cols = [x for x in x_column_names if x != 'race']\n",
        "    else:\n",
        "        # include the race variable\n",
        "        if 'race' not in x_column_names:\n",
        "            x_cols = x_column_names + ['race']\n",
        "        else:\n",
        "            x_cols = x_column_names\n",
        "\n",
        "    # split X and y\n",
        "    train_X = train_df[x_cols]\n",
        "    train_y = train_df[y_col]\n",
        "\n",
        "    # define cross validation (CV) generator\n",
        "    # separate at the patient level\n",
        "    from sklearn.model_selection import GroupKFold\n",
        "    group_kfold = GroupKFold(n_splits=n_folds)\n",
        "    # for the synthetic data, we split at the observation level ('index')\n",
        "    group_kfold_generator = group_kfold.split(train_X, train_y,\n",
        "                                              groups=train_df['index'])\n",
        "    # train lasso cv model\n",
        "    from sklearn.linear_model import LassoCV\n",
        "    n_alphas = 100\n",
        "    lasso_cv = LassoCV(\n",
        "                       n_alphas=n_alphas,\n",
        "                       cv=group_kfold_generator,\n",
        "                       random_state=0,\n",
        "                       max_iter=10000,\n",
        "                       fit_intercept=True,\n",
        "                       normalize=True)\n",
        "    lasso_cv.fit(train_X, train_y)\n",
        "    alpha = lasso_cv.alpha_\n",
        "    train_r2 = lasso_cv.score(train_X, train_y)\n",
        "    train_nobs = len(train_X)\n",
        "\n",
        "    # plot\n",
        "    if plot:\n",
        "        plt.figure()\n",
        "        alphas = lasso_cv.alphas_\n",
        "\n",
        "        for i in range(n_folds):\n",
        "            plt.plot(alphas, lasso_cv.mse_path_[:, i], ':', label='fold {}'.format(i))\n",
        "        plt.plot(alphas, lasso_cv.mse_path_.mean(axis=-1), 'k',\n",
        "                 label='Average across the folds', linewidth=2)\n",
        "        plt.axvline(lasso_cv.alpha_, linestyle='--', color='k',\n",
        "                    label='alpha: CV estimate')\n",
        "\n",
        "        plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))\n",
        "\n",
        "        plt.xlabel(r'$\\alpha$')\n",
        "        plt.ylabel('MSE')\n",
        "        plt.title('Mean square error (MSE) on each fold predicting {}'.format(y_col))\n",
        "        plt.xscale('log')\n",
        "\n",
        "        if include_race:\n",
        "            filename = 'model_lasso_{}_race.png'.format(y_col)\n",
        "        else:\n",
        "            filename = 'model_lasso_{}.png'.format(y_col)\n",
        "        output_dir = create_dir(output_dir)\n",
        "        output_filepath = os.path.join(output_dir, filename)\n",
        "        plt.savefig(output_filepath, bbox_inches='tight', dpi=500)\n",
        "\n",
        "    # lasso coefficients\n",
        "    coef_col_name = '{}_race_coef'.format(y_col) if include_race else '{}_coef'.format(y_col)\n",
        "    lasso_coef_df = pd.DataFrame({'{}_coef'.format(y_col): lasso_cv.coef_}, index=train_X.columns)\n",
        "\n",
        "    # number of lasso features\n",
        "    original_features = len(x_cols)\n",
        "    n_features = len(lasso_coef_df)\n",
        "\n",
        "    def predictions_df(x_vals, y_col, split):\n",
        "        \"\"\"Short summary.\n",
        "\n",
        "        Parameters\n",
        "        ----------\n",
        "        x_vals : pd.DataFrame\n",
        "            DataFrame of all X values.\n",
        "        y_col : str\n",
        "            Name of y column (label) to predict.\n",
        "        split : str\n",
        "            Name of split (e.g. 'holdout').\n",
        "\n",
        "        Returns\n",
        "        -------\n",
        "        pd.DataFrame\n",
        "            DataFrame with 'y_hat' (prediction), 'y_hat_percentile', 'split'\n",
        "\n",
        "        \"\"\"\n",
        "        y_hat = lasso_cv.predict(x_vals)\n",
        "        y_hat_col = '{}_hat'.format(y_col)\n",
        "        y_hat_df = pd.DataFrame(y_hat, columns=[y_hat_col])\n",
        "        y_hat_percentile = convert_to_percentile(y_hat_df, y_hat_col)\n",
        "\n",
        "        # include column for y_hat percentile\n",
        "        y_hat_percentile_df = pd.DataFrame(y_hat_percentile)\n",
        "        y_hat_percentile_df.columns = ['{}_hat_percentile'.format(y_col)]\n",
        "\n",
        "        pred_df = pd.concat([y_hat_df, y_hat_percentile_df], axis=1)\n",
        "        pred_df['split'] = split\n",
        "\n",
        "        return pred_df\n",
        "\n",
        "    # predict in train\n",
        "    train_df_pred = predictions_df(train_X, y_col, 'train')\n",
        "\n",
        "    # predict in holdout\n",
        "    holdout_X = holdout_df[x_cols]\n",
        "    holdout_df_pred = predictions_df(holdout_X, y_col, 'holdout')\n",
        "\n",
        "    # predictions\n",
        "    pred_df = pd.concat([train_df_pred, holdout_df_pred])\n",
        "\n",
        "    # r2\n",
        "    holdout_Y_pred = pd.concat([holdout_df[outcomes], holdout_df_pred], axis=1)\n",
        "    formulas = build_formulas(y_col, outcomes)\n",
        "    r2_df = get_r2_df(holdout_Y_pred, formulas)\n",
        "\n",
        "    return r2_df, pred_df, lasso_coef_df\n"
      ]
    },
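    {
      "cell_type": "code",
      "execution_count": null,
      "id": "groupkfold-lasso-sketch",
      "metadata": {
        "id": "groupkfold-lasso-sketch"
      },
      "outputs": [],
      "source": [
        "# Illustrative sketch (not part of the original pipeline): how GroupKFold keeps\n",
        "# all rows of the same group in a single fold while LassoCV tunes alpha, as in\n",
        "# train_lasso above. Data, coefficients, and group labels here are synthetic.\n",
        "import numpy as np\n",
        "from sklearn.linear_model import LassoCV\n",
        "from sklearn.model_selection import GroupKFold\n",
        "\n",
        "rng = np.random.default_rng(0)\n",
        "toy_X = rng.normal(size=(40, 3))\n",
        "toy_y = toy_X @ np.array([1.0, 0.0, -2.0]) + rng.normal(scale=0.1, size=40)\n",
        "toy_groups = np.repeat(np.arange(10), 4)  # 10 'patients', 4 observations each\n",
        "\n",
        "toy_cv = GroupKFold(n_splits=5).split(toy_X, toy_y, groups=toy_groups)\n",
        "toy_lasso = LassoCV(n_alphas=20, cv=toy_cv, random_state=0).fit(toy_X, toy_y)\n",
        "print(toy_lasso.alpha_, toy_lasso.coef_)"
      ]
    },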
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "62460666",
      "metadata": {
        "id": "62460666"
      },
      "outputs": [],
      "source": [
        "#### For reproducibility, imported from https://gitlab.com/labsysmed/dissecting-bias/-/blob/master/code/model/main.py\n",
        "\n",
        "\"\"\"\n",
        "Main script to train lasso model and save predictions.\n",
        "\"\"\"\n",
        "import pandas as pd\n",
        "import numpy as np\n",
        "import os\n",
        "\n",
        "\n",
        "\n",
        "def load_data_df():\n",
        "    \"\"\"Load data dataframe.\n",
        "\n",
        "    Returns\n",
        "    -------\n",
        "    pd.DataFrame\n",
        "        DataFrame to use for analysis.\n",
        "\n",
        "    \"\"\"\n",
        "    # define filepath\n",
        "    #git_dir = get_git_dir()\n",
        "    #data_fp = os.path.join(git_dir, 'data', 'data_new.csv')\n",
        "    data_fp = os.path.join(DATA_DIR, \"data_new.csv\")\n",
        "\n",
        "    # load df\n",
        "    data_df = pd.read_csv(data_fp)\n",
        "\n",
        "    # because we removed patient\n",
        "    data_df = data_df.reset_index()\n",
        "    return data_df\n",
        "\n",
        "\n",
        "def get_Y_x_df(df, verbose):\n",
        "    \"\"\"Get dataframe with relevant x and Y columns.\n",
        "\n",
        "    Parameters\n",
        "    ----------\n",
        "    df : pd.DataFrame\n",
        "        Data dataframe.\n",
        "    verbose : bool\n",
        "        Print statistics of features.\n",
        "\n",
        "    Returns\n",
        "    -------\n",
        "    all_Y_x_df : pd.DataFrame\n",
        "        Dataframe with x (features) and y (labels) columns\n",
        "    x_column_names : list\n",
        "        List of all x column names (features).\n",
        "    Y_predictors : list\n",
        "        All labels (Y) to predict.\n",
        "\n",
        "    \"\"\"\n",
        "    # cohort columns\n",
        "    cohort_cols = ['index']\n",
        "\n",
        "    # features (x)\n",
        "    x_column_names = get_all_features(df, verbose)\n",
        "\n",
        "    # include log columns\n",
        "    df['log_cost_t'] = convert_to_log(df, 'cost_t')\n",
        "    df['log_cost_avoidable_t'] = convert_to_log(df, 'cost_avoidable_t')\n",
        "\n",
        "    # labels (Y) to predict\n",
        "    Y_predictors = ['log_cost_t', 'gagne_sum_t', 'log_cost_avoidable_t']\n",
        "\n",
        "    # redefine 'race' variable as indicator\n",
        "    df['dem_race_black'] = np.where(df['race'] == 'black', 1, 0)\n",
        "\n",
        "    # additional metrics used for table 2 and table 3\n",
        "    table_metrics = ['dem_race_black', 'risk_score_t', 'program_enrolled_t',\n",
        "                     'cost_t', 'cost_avoidable_t']\n",
        "\n",
        "    # combine all features together -- this forms the Y_x df\n",
        "    all_Y_x_df = df[cohort_cols + x_column_names + Y_predictors + table_metrics].copy()\n",
        "\n",
        "    return all_Y_x_df, x_column_names, Y_predictors\n",
        "\n",
        "\n",
        "def main():\n",
        "    # load data\n",
        "    data_df = load_data_df()\n",
        "\n",
        "    # subset to relevant columns\n",
        "    all_Y_x_df, x_column_names, Y_predictors = get_Y_x_df(data_df, verbose=True)\n",
        "\n",
        "    # assign to 2/3 train, 1/3 holdout\n",
        "    all_Y_x_df = split_by_id(all_Y_x_df, id_field='index',\n",
        "                                   frac_train=.67)\n",
        "\n",
        "    # define train, holdout\n",
        "    # reset_index for pd.concat() along column\n",
        "    train_df = all_Y_x_df[all_Y_x_df['split'] == 'train'].reset_index(drop=True)\n",
        "    holdout_df = all_Y_x_df[all_Y_x_df['split'] == 'holdout'].reset_index(drop=True)\n",
        "\n",
        "    # define output dir to save results\n",
        "    #git_dir = util.get_git_dir()\n",
        "    OUTPUT_DIR = create_dir(os.path.join(DATA_DIR, 'results'))\n",
        "\n",
        "    # define parameters\n",
        "    include_race = False\n",
        "    n_folds = 10\n",
        "    save_plot = False\n",
        "    save_r2 = True\n",
        "\n",
        "    # train model with Y = 'log_cost_t'\n",
        "    log_cost_r2_df, \\\n",
        "    pred_log_cost_df, \\\n",
        "    log_cost_lasso_coef_df = train_lasso(train_df,\n",
        "                                               holdout_df,\n",
        "                                               x_column_names,\n",
        "                                               y_col='log_cost_t',\n",
        "                                               outcomes=Y_predictors,\n",
        "                                               n_folds=n_folds,\n",
        "                                               include_race=include_race,\n",
        "                                               plot=save_plot,\n",
        "                                               output_dir=OUTPUT_DIR)\n",
        "\n",
        "    # train model with Y = 'gagne_sum_t'\n",
        "    gagne_sum_t_r2_df, \\\n",
        "    pred_gagne_sum_t_df, \\\n",
        "    gagne_sum_t_lasso_coef_df = train_lasso(train_df,\n",
        "                                                  holdout_df,\n",
        "                                                  x_column_names,\n",
        "                                                  y_col='gagne_sum_t',\n",
        "                                                  outcomes=Y_predictors,\n",
        "                                                  n_folds=n_folds,\n",
        "                                                  include_race=include_race,\n",
        "                                                  plot=save_plot,\n",
        "                                                  output_dir=OUTPUT_DIR)\n",
        "\n",
        "    # train model with Y = 'log_cost_avoidable_t'\n",
        "    log_cost_avoidable_r2_df, \\\n",
        "    pred_log_cost_avoidable_df, \\\n",
        "    log_cost_avoidable_lasso_coef_df = train_lasso(train_df,\n",
        "                                                         holdout_df,\n",
        "                                                         x_column_names,\n",
        "                                                         y_col='log_cost_avoidable_t',\n",
        "                                                         outcomes=Y_predictors,\n",
        "                                                         n_folds=n_folds,\n",
        "                                                         include_race=include_race,\n",
        "                                                         plot=save_plot,\n",
        "                                                         output_dir=OUTPUT_DIR)\n",
        "\n",
        "    if save_r2:\n",
        "        formulas = build_formulas('risk_score_t', outcomes=Y_predictors)\n",
        "        risk_score_r2_df = get_r2_df(holdout_df, formulas)\n",
        "\n",
        "        r2_df = pd.concat([risk_score_r2_df,\n",
        "                           log_cost_r2_df,\n",
        "                           gagne_sum_t_r2_df,\n",
        "                           log_cost_avoidable_r2_df])\n",
        "\n",
        "        # save r2 file CSV\n",
        "        if include_race:\n",
        "            filename = 'model_r2_race.csv'\n",
        "        else:\n",
        "            filename = 'model_r2.csv'\n",
        "        output_filepath = os.path.join(OUTPUT_DIR, filename)\n",
        "        print('...writing to {}'.format(output_filepath))\n",
        "        r2_df.to_csv(output_filepath, index=False)\n",
        "\n",
        "    def get_split_predictions(df, split):\n",
        "        pred_split_df = df[df['split'] == split]\n",
        "        pred_split_df = pred_split_df.drop(columns=['split'])\n",
        "        return pred_split_df\n",
        "\n",
        "    # get holdout predictions\n",
        "    holdout_log_cost_df = get_split_predictions(pred_log_cost_df,\n",
        "                                                split='holdout')\n",
        "    holdout_gagne_sum_t_df = get_split_predictions(pred_gagne_sum_t_df,\n",
        "                                                   split='holdout')\n",
        "    holdout_log_cost_avoidable_df = get_split_predictions(pred_log_cost_avoidable_df,\n",
        "                                                          split='holdout')\n",
        "\n",
        "    holdout_pred_df = pd.concat([holdout_df, holdout_log_cost_df,\n",
        "                                 holdout_gagne_sum_t_df,\n",
        "                                 holdout_log_cost_avoidable_df], axis=1)\n",
        "    \n",
        "    print(holdout_pred_df.columns, \"log_cost_t\" in holdout_pred_df.columns)\n",
        "\n",
        "    holdout_pred_df_subset = holdout_pred_df[['index', 'split', 'dem_race_black',\n",
        "                                              'gagne_sum_t',\n",
        "                                              'cost_t', 'log_cost_t', 'cost_avoidable_t', 'log_cost_avoidable_t',\n",
        "                                              'program_enrolled_t',\n",
        "                                              'risk_score_t', #ytrue\n",
        "                                              'log_cost_t_hat', 'log_cost_t_hat_percentile', #yhat_a\n",
        "                                              'gagne_sum_t_hat', 'gagne_sum_t_hat_percentile', #yhat_b\n",
        "                                              'log_cost_avoidable_t_hat', 'log_cost_avoidable_t_hat_percentile']].copy()  #yhat_c\n",
        "\n",
        "    # add risk_score_percentile column\n",
        "    holdout_pred_df_subset['risk_score_t_percentile'] = \\\n",
        "        convert_to_percentile(holdout_pred_df_subset, 'risk_score_t')\n",
        "\n",
        "    # save to CSV\n",
        "    if include_race:\n",
        "        filename = 'model_lasso_predictors_race.csv'\n",
        "    else:\n",
        "        filename = 'model_lasso_predictors.csv'\n",
        "    output_filepath = os.path.join(OUTPUT_DIR, filename)\n",
        "    print('...HOLDOUT PREDICTIONS saved to {}'.format(output_filepath))\n",
        "    holdout_pred_df_subset.to_csv(output_filepath, index=False)\n",
        "    #print(holdout_pred_df_subset.head())\n",
        "    return holdout_pred_df_subset\n"
      ]
    },
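    {
      "cell_type": "code",
      "execution_count": null,
      "id": "percentile-sketch",
      "metadata": {
        "id": "percentile-sketch"
      },
      "outputs": [],
      "source": [
        "# Hedged sketch: convert_to_percentile is a helper defined elsewhere in this\n",
        "# notebook. A minimal, hypothetical stand-in consistent with how it is used in\n",
        "# main() above (ranking a column into 0-100 percentiles) could look like this;\n",
        "# the real helper may differ in tie-handling and rounding.\n",
        "import pandas as pd\n",
        "\n",
        "def convert_to_percentile_sketch(df, col):\n",
        "    # percentile rank of each value in `col`, scaled to 0-100\n",
        "    return (df[col].rank(pct=True) * 100).astype(int)\n",
        "\n",
        "demo_df = pd.DataFrame({'risk_score_t': [0.1, 0.5, 0.9, 0.2]})\n",
        "print(convert_to_percentile_sketch(demo_df, 'risk_score_t').tolist())  # [25, 75, 100, 50]"
      ]
    },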
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "8dfc5cbd",
      "metadata": {
        "id": "8dfc5cbd"
      },
      "outputs": [],
      "source": [
        "def build_model_long_df(long_df: pd.DataFrame, hdf: pd.DataFrame, t_start:int=0, ref_threshold:float=0.55, enroll_threshold=0.97, logged_dvs: [str] = ['cost_t','cost_avoidable_t']):\n",
        "    \n",
        "    mdf = pd.DataFrame()\n",
        "    holdout_ldf = pd.merge(hdf[['index']], long_df[long_df.timestep==t_start], left_on='index', right_on='idx', how='inner')\n",
        "    \n",
        "    for model in ['lasso_log_cost_t', 'lasso_log_cost_avoidable_t', 'lasso_gagne_sum_t']:\n",
        "\n",
        "        dv = model.split(\"lasso_\")[1]\n",
        "        log_dv = dv.replace(\"log_\",\"\") in logged_dvs\n",
        "        \n",
        "        temp = pd.DataFrame()\n",
        "        temp['idx'] = hdf['index'].copy()\n",
        "        temp['split'] = hdf['split'].copy()\n",
        "        temp['model_name'] = model\n",
        "        temp['ref_threshold'] = ref_threshold # todo: make these options from lists/maybe model-specific \n",
        "        temp['enroll_threshold'] = enroll_threshold\n",
        "        temp['dv'] = model if model == \"status_quo\" else dv\n",
        "        temp['timestep']= t_start\n",
        "        temp['ytrue'] = convert_to_log(holdout_ldf[holdout_ldf['timestep'] == t_start], dv.replace(\"log_\", \"\").replace(\"_t\", \"\")).copy() if log_dv  else holdout_ldf[holdout_ldf['timestep'] == t_start][dv.replace(\"_t\", \"\")].copy() \n",
        "        #temp['log_ytrue'] = convert_to_log(holdout_ldf[holdout_ldf['timestep'] == t_start], dv.replace(\"log_\", \"\").replace(\"_t\", \"\")).copy() if log_dv  else np.nan\n",
        "        temp['yhat'] = 0 #no yhat at t=0\n",
        "        #temp['log_yhat'] = np.nan #no yhat at t=0\n",
        "        temp['log_dv_flag'] = log_dv\n",
        "        temp['yhat_percentile'] = 0 #no yhat at t=0\n",
        "        temp['decision'] = \"none\" # no decision at t= 0 \n",
        "        temp['program_enrolled'] = \"none\"\n",
        "        temp['sq_vs_decision'] = \"none\" # no decision at t=0\n",
        "\n",
        "        temp_t1 = temp.copy()\n",
        "        temp_t1.loc[:,'timestep'] += 1\n",
        "        temp_t1['ytrue'] = hdf[dv].copy()\n",
        "        #temp_t1['log_ytrue'] = np.nan if dv not in logged_dvs else hdf[\"{}\".format(dv)]\n",
        "        temp_t1['yhat'] = hdf[\"{}_hat\".format(dv)]\n",
        "        #temp_t1['log_yhat'] = np.nan if  dv.replace(\"log_\",\"\") not in logged_dvs else hdf[\"{}_hat\".format(dv)]\n",
        "        temp_t1['log_dv_flag'] = log_dv\n",
        "        temp_t1['yhat_percentile'] = hdf[\"{}_hat_percentile\".format(dv)].astype(float) /100\n",
        "        temp_t1['decision'] = temp_t1['yhat_percentile'].apply(lambda x: \"none\" if x \u003c ref_threshold\n",
        "                                                               else \"referred\" if ref_threshold \u003c= x \u003c enroll_threshold \n",
        "                                                               else \"enrolled\") # nans?\n",
        "        \n",
        "        temp_t1['program_enrolled'] = hdf['program_enrolled_t'].apply(lambda x: \"enrolled\" if x == 1 else \"not enrolled\")\n",
        "        \n",
        "        temp_t1['sq_vs_decision'] = hdf['program_enrolled_t'].apply(lambda x: \"enrolled\" if x == 1 else \"none/ref\") + temp_t1['decision'].apply(lambda x: \"_{}\".format(x))\n",
        "        \n",
        "        long_df = pd.concat([temp, temp_t1])\n",
        "        mdf = mdf.append(long_df)\n",
        "        \n",
        "    long_df_for_graph = pd.merge(ldf, mdf, on=['idx', 'timestep'], how='inner')        \n",
        "    return mdf, long_df_for_graph\n",
        "    "
      ]
    },
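    {
      "cell_type": "code",
      "execution_count": null,
      "id": "decision-rule-sketch",
      "metadata": {
        "id": "decision-rule-sketch"
      },
      "outputs": [],
      "source": [
        "# Illustrative sketch of the referral decision rule in build_model_long_df:\n",
        "# yhat_percentile < ref_threshold -> 'none', [ref, enroll) -> 'referred',\n",
        "# >= enroll_threshold -> 'enrolled'. A vectorized pd.cut equivalent of the\n",
        "# per-row lambda, applied to synthetic percentiles:\n",
        "import pandas as pd\n",
        "\n",
        "toy_percentiles = pd.Series([0.10, 0.55, 0.60, 0.97, 0.99])\n",
        "toy_decision = pd.cut(toy_percentiles,\n",
        "                      bins=[-float('inf'), 0.55, 0.97, float('inf')],\n",
        "                      right=False,\n",
        "                      labels=['none', 'referred', 'enrolled'])\n",
        "print(toy_decision.tolist())  # ['none', 'referred', 'referred', 'enrolled', 'enrolled']"
      ]
    },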
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "105aba54",
      "metadata": {
        "id": "105aba54",
        "outputId": "0460ad3c-b9f2-4be9-b5a3-3837e843f830"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "/tmp/ipykernel_18034/16629552.py:30: SettingWithCopyWarning: \n",
            "A value is trying to be set on a copy of a slice from a DataFrame.\n",
            "Try using .loc[row_indexer,col_indexer] = value instead\n",
            "\n",
            "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n",
            "  df_tm1.loc[:,var_name] = df_tm1.apply(lambda row: \"low\" if row[\"{}-low_tm1\".format(var_name)] == 1\n",
            "/tmp/ipykernel_18034/16629552.py:30: SettingWithCopyWarning: \n",
            "A value is trying to be set on a copy of a slice from a DataFrame.\n",
            "Try using .loc[row_indexer,col_indexer] = value instead\n",
            "\n",
            "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n",
            "  df_tm1.loc[:,var_name] = df_tm1.apply(lambda row: \"low\" if row[\"{}-low_tm1\".format(var_name)] == 1\n",
            "/tmp/ipykernel_18034/16629552.py:30: SettingWithCopyWarning: \n",
            "A value is trying to be set on a copy of a slice from a DataFrame.\n",
            "Try using .loc[row_indexer,col_indexer] = value instead\n",
            "\n",
            "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n",
            "  df_tm1.loc[:,var_name] = df_tm1.apply(lambda row: \"low\" if row[\"{}-low_tm1\".format(var_name)] == 1\n",
            "/tmp/ipykernel_18034/16629552.py:30: SettingWithCopyWarning: \n",
            "A value is trying to be set on a copy of a slice from a DataFrame.\n",
            "Try using .loc[row_indexer,col_indexer] = value instead\n",
            "\n",
            "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n",
            "  df_tm1.loc[:,var_name] = df_tm1.apply(lambda row: \"low\" if row[\"{}-low_tm1\".format(var_name)] == 1\n"
          ]
        }
      ],
      "source": [
        "ldf = create_long_df_obermeyer(wide_df = df.copy())"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "6bb57418",
      "metadata": {
        "id": "6bb57418",
        "outputId": "42498d24-bb75-4d82-98c8-23c4c56a4581"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Splitting 48,784 unique index\n",
            " ...splitting by patient: 32,685 train, 16,099 holdout \n",
            "...writing to ../data/results/model_r2.csv\n",
            "Index(['index', 'dem_female', 'dem_age_band_18-24_tm1',\n",
            "       'dem_age_band_25-34_tm1', 'dem_age_band_35-44_tm1',\n",
            "       'dem_age_band_45-54_tm1', 'dem_age_band_55-64_tm1',\n",
            "       'dem_age_band_65-74_tm1', 'dem_age_band_75+_tm1',\n",
            "       'alcohol_elixhauser_tm1',\n",
            "       ...\n",
            "       'program_enrolled_t', 'cost_t', 'cost_avoidable_t', 'split',\n",
            "       'log_cost_t_hat', 'log_cost_t_hat_percentile', 'gagne_sum_t_hat',\n",
            "       'gagne_sum_t_hat_percentile', 'log_cost_avoidable_t_hat',\n",
            "       'log_cost_avoidable_t_hat_percentile'],\n",
            "      dtype='object', length=165) True\n",
            "...HOLDOUT PREDICTIONS saved to ../data/results/model_lasso_predictors.csv\n"
          ]
        }
      ],
      "source": [
        "hdf = main()\n",
        "_, long_df_for_graph = build_model_long_df(long_df=ldf, hdf=hdf)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "001c101b",
      "metadata": {
        "id": "001c101b"
      },
      "outputs": [],
      "source": [
        "struct_data = long_df_for_graph.copy()\n",
        "struct_data = struct_data.drop(['idx'],axis=1)\n",
        "non_numeric_columns = list(struct_data.select_dtypes(exclude=[np.number]).columns)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "85a5e549",
      "metadata": {
        "id": "85a5e549",
        "outputId": "0b079640-aeb5-4b34-b1d4-252090f485d7"
      },
      "outputs": [
        {
          "data": {
            "text/html": [
              "\u003cdiv\u003e\n",
              "\u003cstyle scoped\u003e\n",
              "    .dataframe tbody tr th:only-of-type {\n",
              "        vertical-align: middle;\n",
              "    }\n",
              "\n",
              "    .dataframe tbody tr th {\n",
              "        vertical-align: top;\n",
              "    }\n",
              "\n",
              "    .dataframe thead th {\n",
              "        text-align: right;\n",
              "    }\n",
              "\u003c/style\u003e\n",
              "\u003ctable border=\"1\" class=\"dataframe\"\u003e\n",
              "  \u003cthead\u003e\n",
              "    \u003ctr style=\"text-align: right;\"\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003erace\u003c/th\u003e\n",
              "      \u003cth\u003egender\u003c/th\u003e\n",
              "      \u003cth\u003eage_bucket\u003c/th\u003e\n",
              "      \u003cth\u003egagne_sum\u003c/th\u003e\n",
              "      \u003cth\u003ecost\u003c/th\u003e\n",
              "      \u003cth\u003ecost_avoidable\u003c/th\u003e\n",
              "      \u003cth\u003ehypertension_elixhauser\u003c/th\u003e\n",
              "      \u003cth\u003eghba1c_mean\u003c/th\u003e\n",
              "      \u003cth\u003ehct_mean\u003c/th\u003e\n",
              "      \u003cth\u003ecre_mean\u003c/th\u003e\n",
              "      \u003cth\u003e...\u003c/th\u003e\n",
              "      \u003cth\u003eref_threshold\u003c/th\u003e\n",
              "      \u003cth\u003eenroll_threshold\u003c/th\u003e\n",
              "      \u003cth\u003edv\u003c/th\u003e\n",
              "      \u003cth\u003eytrue\u003c/th\u003e\n",
              "      \u003cth\u003eyhat\u003c/th\u003e\n",
              "      \u003cth\u003elog_dv_flag\u003c/th\u003e\n",
              "      \u003cth\u003eyhat_percentile\u003c/th\u003e\n",
              "      \u003cth\u003edecision\u003c/th\u003e\n",
              "      \u003cth\u003eprogram_enrolled\u003c/th\u003e\n",
              "      \u003cth\u003esq_vs_decision\u003c/th\u003e\n",
              "    \u003c/tr\u003e\n",
              "    \u003ctr\u003e\n",
              "      \u003cth\u003etimestep\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "      \u003cth\u003e\u003c/th\u003e\n",
              "    \u003c/tr\u003e\n",
              "  \u003c/thead\u003e\n",
              "  \u003ctbody\u003e\n",
              "    \u003ctr\u003e\n",
              "      \u003cth\u003e0\u003c/th\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e0.0\u003c/td\u003e\n",
              "      \u003ctd\u003e0.0\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e0.55\u003c/td\u003e\n",
              "      \u003ctd\u003e0.97\u003c/td\u003e\n",
              "      \u003ctd\u003e2\u003c/td\u003e\n",
              "      \u003ctd\u003e0.000000\u003c/td\u003e\n",
              "      \u003ctd\u003e0.000000\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0.00\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "    \u003c/tr\u003e\n",
              "    \u003ctr\u003e\n",
              "      \u003cth\u003e0\u003c/th\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e0.0\u003c/td\u003e\n",
              "      \u003ctd\u003e0.0\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e0.55\u003c/td\u003e\n",
              "      \u003ctd\u003e0.97\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0.000000\u003c/td\u003e\n",
              "      \u003ctd\u003e0.000000\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0.00\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "    \u003c/tr\u003e\n",
              "    \u003ctr\u003e\n",
              "      \u003cth\u003e0\u003c/th\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e0.0\u003c/td\u003e\n",
              "      \u003ctd\u003e0.0\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e0.55\u003c/td\u003e\n",
              "      \u003ctd\u003e0.97\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e0.000000\u003c/td\u003e\n",
              "      \u003ctd\u003e0.000000\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e0.00\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "    \u003c/tr\u003e\n",
              "    \u003ctr\u003e\n",
              "      \u003cth\u003e0\u003c/th\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e5\u003c/td\u003e\n",
              "      \u003ctd\u003e2\u003c/td\u003e\n",
              "      \u003ctd\u003e15300.0\u003c/td\u003e\n",
              "      \u003ctd\u003e9300.0\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e2\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e0.55\u003c/td\u003e\n",
              "      \u003ctd\u003e0.97\u003c/td\u003e\n",
              "      \u003ctd\u003e2\u003c/td\u003e\n",
              "      \u003ctd\u003e4.184720\u003c/td\u003e\n",
              "      \u003ctd\u003e0.000000\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0.00\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "    \u003c/tr\u003e\n",
              "    \u003ctr\u003e\n",
              "      \u003cth\u003e0\u003c/th\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e5\u003c/td\u003e\n",
              "      \u003ctd\u003e2\u003c/td\u003e\n",
              "      \u003ctd\u003e15300.0\u003c/td\u003e\n",
              "      \u003ctd\u003e9300.0\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e2\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e0.55\u003c/td\u003e\n",
              "      \u003ctd\u003e0.97\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e3.968530\u003c/td\u003e\n",
              "      \u003ctd\u003e0.000000\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0.00\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "    \u003c/tr\u003e\n",
              "    \u003ctr\u003e\n",
              "      \u003cth\u003e...\u003c/th\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "    \u003c/tr\u003e\n",
              "    \u003ctr\u003e\n",
              "      \u003cth\u003e1\u003c/th\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e4\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e24200.0\u003c/td\u003e\n",
              "      \u003ctd\u003e0.0\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e0.55\u003c/td\u003e\n",
              "      \u003ctd\u003e0.97\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0.000000\u003c/td\u003e\n",
              "      \u003ctd\u003e0.928810\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0.70\u003c/td\u003e\n",
              "      \u003ctd\u003e2\u003c/td\u003e\n",
              "      \u003ctd\u003e2\u003c/td\u003e\n",
              "      \u003ctd\u003e6\u003c/td\u003e\n",
              "    \u003c/tr\u003e\n",
              "    \u003ctr\u003e\n",
              "      \u003cth\u003e1\u003c/th\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e4\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e24200.0\u003c/td\u003e\n",
              "      \u003ctd\u003e0.0\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e0.55\u003c/td\u003e\n",
              "      \u003ctd\u003e0.97\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e3.000000\u003c/td\u003e\n",
              "      \u003ctd\u003e1.814359\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e0.74\u003c/td\u003e\n",
              "      \u003ctd\u003e2\u003c/td\u003e\n",
              "      \u003ctd\u003e2\u003c/td\u003e\n",
              "      \u003ctd\u003e6\u003c/td\u003e\n",
              "    \u003c/tr\u003e\n",
              "    \u003ctr\u003e\n",
              "      \u003cth\u003e1\u003c/th\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e1700.0\u003c/td\u003e\n",
              "      \u003ctd\u003e0.0\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e0.55\u003c/td\u003e\n",
              "      \u003ctd\u003e0.97\u003c/td\u003e\n",
              "      \u003ctd\u003e2\u003c/td\u003e\n",
              "      \u003ctd\u003e3.230704\u003c/td\u003e\n",
              "      \u003ctd\u003e2.822738\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0.03\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e2\u003c/td\u003e\n",
              "      \u003ctd\u003e5\u003c/td\u003e\n",
              "    \u003c/tr\u003e\n",
              "    \u003ctr\u003e\n",
              "      \u003cth\u003e1\u003c/th\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e1700.0\u003c/td\u003e\n",
              "      \u003ctd\u003e0.0\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e0.55\u003c/td\u003e\n",
              "      \u003ctd\u003e0.97\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0.000000\u003c/td\u003e\n",
              "      \u003ctd\u003e0.499221\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0.21\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e2\u003c/td\u003e\n",
              "      \u003ctd\u003e5\u003c/td\u003e\n",
              "    \u003c/tr\u003e\n",
              "    \u003ctr\u003e\n",
              "      \u003cth\u003e1\u003c/th\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e1700.0\u003c/td\u003e\n",
              "      \u003ctd\u003e0.0\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e3\u003c/td\u003e\n",
              "      \u003ctd\u003e...\u003c/td\u003e\n",
              "      \u003ctd\u003e0.55\u003c/td\u003e\n",
              "      \u003ctd\u003e0.97\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e0.000000\u003c/td\u003e\n",
              "      \u003ctd\u003e0.113619\u003c/td\u003e\n",
              "      \u003ctd\u003e0\u003c/td\u003e\n",
              "      \u003ctd\u003e0.12\u003c/td\u003e\n",
              "      \u003ctd\u003e1\u003c/td\u003e\n",
              "      \u003ctd\u003e2\u003c/td\u003e\n",
              "      \u003ctd\u003e5\u003c/td\u003e\n",
              "    \u003c/tr\u003e\n",
              "  \u003c/tbody\u003e\n",
              "\u003c/table\u003e\n",
              "\u003cp\u003e96594 rows × 23 columns\u003c/p\u003e\n",
              "\u003c/div\u003e"
            ],
            "text/plain": [
              "          race  gender  age_bucket  gagne_sum     cost  cost_avoidable  \\\n",
              "timestep                                                                 \n",
              "0            1       0           1          0      0.0             0.0   \n",
              "0            1       0           1          0      0.0             0.0   \n",
              "0            1       0           1          0      0.0             0.0   \n",
              "0            1       0           5          2  15300.0          9300.0   \n",
              "0            1       0           5          2  15300.0          9300.0   \n",
              "...        ...     ...         ...        ...      ...             ...   \n",
              "1            1       0           4          3  24200.0             0.0   \n",
              "1            1       0           4          3  24200.0             0.0   \n",
              "1            1       1           1          0   1700.0             0.0   \n",
              "1            1       1           1          0   1700.0             0.0   \n",
              "1            1       1           1          0   1700.0             0.0   \n",
              "\n",
              "          hypertension_elixhauser  ghba1c_mean  hct_mean  cre_mean  ...  \\\n",
              "timestep                                                            ...   \n",
              "0                               0            3         3         3  ...   \n",
              "0                               0            3         3         3  ...   \n",
              "0                               0            3         3         3  ...   \n",
              "0                               1            0         1         2  ...   \n",
              "0                               1            0         1         2  ...   \n",
              "...                           ...          ...       ...       ...  ...   \n",
              "1                               0            3         3         3  ...   \n",
              "1                               0            3         3         3  ...   \n",
              "1                               0            3         3         3  ...   \n",
              "1                               0            3         3         3  ...   \n",
              "1                               0            3         3         3  ...   \n",
              "\n",
              "          ref_threshold  enroll_threshold  dv     ytrue      yhat  \\\n",
              "timestep                                                            \n",
              "0                  0.55              0.97   2  0.000000  0.000000   \n",
              "0                  0.55              0.97   1  0.000000  0.000000   \n",
              "0                  0.55              0.97   0  0.000000  0.000000   \n",
              "0                  0.55              0.97   2  4.184720  0.000000   \n",
              "0                  0.55              0.97   1  3.968530  0.000000   \n",
              "...                 ...               ...  ..       ...       ...   \n",
              "1                  0.55              0.97   1  0.000000  0.928810   \n",
              "1                  0.55              0.97   0  3.000000  1.814359   \n",
              "1                  0.55              0.97   2  3.230704  2.822738   \n",
              "1                  0.55              0.97   1  0.000000  0.499221   \n",
              "1                  0.55              0.97   0  0.000000  0.113619   \n",
              "\n",
              "          log_dv_flag  yhat_percentile  decision  program_enrolled  \\\n",
              "timestep                                                             \n",
              "0                   1             0.00         1                 1   \n",
              "0                   1             0.00         1                 1   \n",
              "0                   0             0.00         1                 1   \n",
              "0                   1             0.00         1                 1   \n",
              "0                   1             0.00         1                 1   \n",
              "...               ...              ...       ...               ...   \n",
              "1                   1             0.70         2                 2   \n",
              "1                   0             0.74         2                 2   \n",
              "1                   1             0.03         1                 2   \n",
              "1                   1             0.21         1                 2   \n",
              "1                   0             0.12         1                 2   \n",
              "\n",
              "          sq_vs_decision  \n",
              "timestep                  \n",
              "0                      3  \n",
              "0                      3  \n",
              "0                      3  \n",
              "0                      3  \n",
              "0                      3  \n",
              "...                  ...  \n",
              "1                      6  \n",
              "1                      6  \n",
              "1                      5  \n",
              "1                      5  \n",
              "1                      5  \n",
              "\n",
              "[96594 rows x 23 columns]"
            ]
          },
          "execution_count": 19,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "\n",
        "\n",
        "le = LabelEncoder()\n",
        "\n",
        "for col in non_numeric_columns:\n",
        "    struct_data[col] = le.fit_transform(struct_data[col])\n",
        "\n",
        "struct_data.head(5)\n",
        "struct_data.set_index('timestep')"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "9c10ac03",
      "metadata": {
        "id": "9c10ac03",
        "outputId": "13a4b7e9-dda0-4e5f-b91e-dd9abc43a548"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "/usr/local/google/home/cherlihy/anaconda3/envs/dissecting_bias/lib/python3.8/site-packages/tqdm/auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
            "  from .autonotebook import tqdm as notebook_tqdm\n"
          ]
        }
      ],
      "source": [
        "from causalnex.structure.notears import from_pandas\n",
        "from causalnex.structure.dynotears import from_pandas_dynamic\n",
        "\n",
        "sm = from_pandas_dynamic(struct_data, p=1)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "70226973",
      "metadata": {
        "id": "70226973"
      },
      "outputs": [],
      "source": [
        "from causalnex.plots import plot_structure, NODE_STYLE, EDGE_STYLE\n",
        "from IPython.display import Image\n",
        "\n",
        "sm.remove_edges_below_threshold(0.8)\n",
        "\n",
        "viz = plot_structure(\n",
        "    sm,\n",
        "    graph_attributes={\"scale\": \"0.5\"},\n",
        "    all_node_attributes=NODE_STYLE.WEAK,\n",
        "    all_edge_attributes=EDGE_STYLE.WEAK,\n",
        "    prog='fdp',\n",
        ")\n",
        "Image(viz.draw(format='png'))"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "display_name": "dissecting bias",
      "language": "python",
      "name": "dissecting_bias"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.8.15"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 5
}
