{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "ec17da31",
   "metadata": {},
   "source": [
    "\n",
    "# Getting Started\n",
     "Here, we will walk through using StageNet end-to-end with PyHealth's utility modules:\n",
    "\n",
    "1. Loading the data\n",
    "2. Task Processing (with padding to ensure compatibility)\n",
    "3. ML Model Initialization \n",
    "4. Model training\n",
    "5. Holdout Inference on Sets of Codes Not in Vocabulary\n",
    "6. Interpretability Example with DeepLift\n",
    "\n",
    "## Installation\n",
    "\n",
     "Install the latest PyHealth alpha release, which includes the modernized StageNet:\n",
    "\n",
    "```bash\n",
    "pip install pyhealth==2.0a10\n",
    "```\n",
    "\n",
    "## Loading Data\n",
    "\n",
    "Load the PyHealth dataset for mortality prediction.\n",
    "\n",
    "PyHealth datasets use a `config.yaml` file to define:\n",
    "- Input tables (.csv, .tsv, etc.)\n",
    "- Features to extract\n",
    "- Aggregation methods\n",
    "\n",
    "The result is a single dataframe where each row represents one patient and their features.\n",
    "\n",
     "For more details on PyHealth datasets, see [this resource](https://colab.research.google.com/drive/1voSx7wEfzXfEf2sIfW6b-8p1KqMyuWxK#scrollTo=NSrb2PGFqUgS)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "fd30b75b",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/johnwu3/miniconda3/envs/medical_coding_demo/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
      "  from .autonotebook import tqdm as notebook_tqdm\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Memory usage Starting MIMIC4Dataset init: 797.9 MB\n",
      "Initializing MIMIC4EHRDataset with tables: ['patients', 'admissions', 'diagnoses_icd', 'procedures_icd', 'labevents'] (dev mode: True)\n",
      "Using default EHR config: /home/johnwu3/projects/PyHealth_Branch_Testing/PyHealth/pyhealth/datasets/configs/mimic4_ehr.yaml\n",
      "Memory usage Before initializing mimic4_ehr: 797.9 MB\n",
      "Duplicate table names in tables list. Removing duplicates.\n",
      "Initializing mimic4_ehr dataset from /srv/local/data/physionet.org/files/mimiciv/2.2/ (dev mode: False)\n",
      "Scanning table: diagnoses_icd from /srv/local/data/physionet.org/files/mimiciv/2.2/hosp/diagnoses_icd.csv.gz\n",
      "Joining with table: /srv/local/data/physionet.org/files/mimiciv/2.2/hosp/admissions.csv.gz\n",
      "Original path does not exist. Using alternative: /srv/local/data/physionet.org/files/mimiciv/2.2/hosp/admissions.csv\n",
      "Scanning table: admissions from /srv/local/data/physionet.org/files/mimiciv/2.2/hosp/admissions.csv.gz\n",
      "Original path does not exist. Using alternative: /srv/local/data/physionet.org/files/mimiciv/2.2/hosp/admissions.csv\n",
      "Scanning table: procedures_icd from /srv/local/data/physionet.org/files/mimiciv/2.2/hosp/procedures_icd.csv.gz\n",
      "Joining with table: /srv/local/data/physionet.org/files/mimiciv/2.2/hosp/admissions.csv.gz\n",
      "Original path does not exist. Using alternative: /srv/local/data/physionet.org/files/mimiciv/2.2/hosp/admissions.csv\n",
      "Scanning table: labevents from /srv/local/data/physionet.org/files/mimiciv/2.2/hosp/labevents.csv.gz\n",
      "Joining with table: /srv/local/data/physionet.org/files/mimiciv/2.2/hosp/d_labitems.csv.gz\n",
      "Scanning table: patients from /srv/local/data/physionet.org/files/mimiciv/2.2/hosp/patients.csv.gz\n",
      "Scanning table: icustays from /srv/local/data/physionet.org/files/mimiciv/2.2/icu/icustays.csv.gz\n",
      "Memory usage After initializing mimic4_ehr: 798.9 MB\n",
      "Memory usage After EHR dataset initialization: 798.9 MB\n",
      "Memory usage Before combining data: 798.9 MB\n",
      "Combining data from ehr dataset\n",
      "Creating combined dataframe\n",
      "Memory usage After combining data: 798.9 MB\n",
      "Memory usage Completed MIMIC4Dataset init: 798.9 MB\n"
     ]
    }
   ],
   "source": [
    "\"\"\"\n",
    "Example of using StageNet for mortality prediction on MIMIC-IV.\n",
    "\n",
    "This example demonstrates:\n",
    "1. Loading MIMIC-IV data\n",
    "2. Applying the MortalityPredictionStageNetMIMIC4 task\n",
    "3. Creating a SampleDataset with StageNet processors\n",
    "4. Training a StageNet model\n",
    "\"\"\"\n",
    "\n",
    "from pyhealth.datasets import (\n",
    "    MIMIC4Dataset,\n",
    "    get_dataloader,\n",
    "    split_by_patient,\n",
    ")\n",
    "from pyhealth.models import StageNet\n",
    "from pyhealth.tasks import MortalityPredictionStageNetMIMIC4\n",
    "from pyhealth.trainer import Trainer\n",
    "\n",
    "# STEP 1: Load MIMIC-IV base dataset\n",
    "base_dataset = MIMIC4Dataset(\n",
    "    ehr_root=\"/srv/local/data/physionet.org/files/mimiciv/2.2/\",\n",
    "    ehr_tables=[\n",
    "        \"patients\",\n",
    "        \"admissions\",\n",
    "        \"diagnoses_icd\",\n",
    "        \"procedures_icd\",\n",
    "        \"labevents\",\n",
    "    ],\n",
    "    dev=True\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "68cf0f1c",
   "metadata": {},
   "source": [
    "## Input and Output Schemas\n",
    "Input and output schemas map feature keys (e.g., \"labs\", \"icd_codes\") to StageNet processors. Each processor converts features into tuple objects used for training and inference.\n",
    "\n",
    "**Required format:** Each feature processed in our task call must follow this structure:\n",
    "```python\n",
    "\"feature\": (my_times_list, my_values_list)\n",
    "```\n",
     "We offer two StageNet processors: one for categorical features and one for numerical features. Our goal is to represent each feature as a pre-defined (time, values) tuple that we can later pass to StageNet for processing.\n",
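     "\n",
     "For instance, a single patient's features might be packaged like this (illustrative values only; the code strings and numbers are made up for this sketch):\n",
     "```python\n",
     "# Categorical feature: nested ICD codes per visit, with inter-visit intervals in hours\n",
     "icd_codes = ([0.0, 24.0, 72.0], [[\"E11\", \"I10\"], [\"I10\"], [\"N17\"]])\n",
     "\n",
     "# Numeric feature: lab-value vectors per timestamp (None = missing measurement)\n",
     "labs = ([0.0, 2.0], [[138.0, 4.1], [None, 4.3]])\n",
     "```\n",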
    "\n",
    "\n",
    "## What are these processors?\n",
    "\n",
     "Processors take raw data variables and turn them into tensor format. Here, we define a set of custom processors so we can leverage StageNet's ability to take in a time series of time intervals and feature sets.\n",
    "\n",
    "## StageNetProcessor - For Categories (Labels)\n",
    "\n",
    "**What it handles:** Text labels like diagnosis codes, medication names, or lab test types.\n",
    "\n",
    "**What it does:**\n",
    "- Takes lists of codes (like `[\"diabetes\", \"hypertension\"]`)\n",
     "- Converts each code into a unique integer index (like `{\"diabetes\": 1, \"hypertension\": 2}`)\n",
    "- Keeps track of when things happened (timestamps)\n",
    "- Can handle nested lists (like multiple codes per visit)\n",
    "\n",
    "**Example:** If a patient had 3 doctor visits with different diagnoses, this processor remembers what diagnosis happened at each visit and when.\n",
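     "\n",
     "Conceptually, the vocabulary mapping works like this (a simplified sketch, not the real processor class):\n",
     "```python\n",
     "vocab = {\"<pad>\": 0}  # index 0 reserved for padding\n",
     "for code in [\"diabetes\", \"hypertension\", \"diabetes\"]:\n",
     "    if code not in vocab:\n",
     "        vocab[code] = len(vocab)  # assign the next free index\n",
     "vocab  # {'<pad>': 0, 'diabetes': 1, 'hypertension': 2}\n",
     "```\n",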
    "\n",
    "## StageNetTensorProcessor - For Numbers (Measurements)\n",
    "\n",
    "**What it handles:** Actual measurements like blood pressure, temperature, or lab values.\n",
    "\n",
    "**What it does:**\n",
    "- Takes lists of numbers (like `[98.6, 99.1, 98.8]` for temperatures)\n",
    "- Fills in missing measurements using the last known value (forward-fill)\n",
    "- Keeps track of when measurements were taken\n",
    "- Can handle multiple measurements at once (like blood pressure AND heart rate)\n",
    "\n",
    "**Example:** If a patient's heart rate was measured as `[72, None, 68]`, it fills in the missing value as `[72, 72, 68]` (copying the last known value).\n",
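     "\n",
     "A minimal sketch of this forward-fill rule (plain Python, not the actual processor code):\n",
     "```python\n",
     "def forward_fill(values, default=0.0):\n",
     "    # Replace each missing (None) entry with the last observed value;\n",
     "    # fall back to `default` when nothing has been observed yet.\n",
     "    filled, last = [], default\n",
     "    for v in values:\n",
     "        if v is not None:\n",
     "            last = v\n",
     "        filled.append(last)\n",
     "    return filled\n",
     "\n",
     "forward_fill([72, None, 68])  # [72, 72, 68]\n",
     "```\n",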
    "\n",
    "## How Time Processing Works\n",
    "\n",
    "Both processors handle time information in a flexible way:\n",
    "\n",
    "**Input formats accepted:**\n",
    "- Simple list: `[0.0, 1.5, 3.0]` - time intervals in hours/days\n",
    "- Nested list: `[[0.0], [1.5], [3.0]]` - automatically flattened\n",
    "- No time: `None` - when timing doesn't matter\n",
    "\n",
    "**What the time means:**\n",
    "- Times represent intervals or delays between events\n",
    "- For example: `[0.0, 2.5, 1.0]` could mean \"first event at start, second event 2.5 hours later, third event 1 hour after that\"\n",
    "- Times are converted to float tensors so the model can learn temporal patterns\n",
    "\n",
    "**Example:**\n",
    "```python\n",
    "# Patient temperature readings\n",
    "data = {\n",
    "    \"value\": [98.6, 99.1, 98.8],  # temperatures in °F\n",
     "    \"time\": [0.0, 2.0, 1.0]        # hours since the previous reading\n",
    "}\n",
    "```\n",
    "\n",
     "The processor keeps the times and values paired together, so the model knows that 99.1°F was recorded 2 hours after the first reading.\n",
    "\n",
     "Because these processors already ship with PyHealth, we register our demo versions under names with an \"Ex\" suffix to avoid collisions. This is mainly to showcase what's happening under the hood."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "ba3e055d",
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import Any, Dict, List, Optional, Tuple\n",
    "\n",
    "import torch\n",
    "\n",
    "from pyhealth.processors import register_processor\n",
    "from pyhealth.processors.base_processor import FeatureProcessor\n",
    "\n",
    "@register_processor(\"stagenet_ex\")\n",
    "class StageNetProcessor(FeatureProcessor):\n",
    "    \"\"\"\n",
    "    Feature processor for StageNet CODE inputs with coupled value/time data.\n",
    "\n",
    "    This processor handles categorical code sequences (flat or nested).\n",
    "    For numeric features, use StageNetTensorProcessor instead.\n",
    "\n",
    "    Input Format (tuple):\n",
    "        (time, values) where:\n",
    "        - time: List of scalars [0.0, 2.0, 1.3] or None\n",
    "        - values: [\"code1\", \"code2\"] or [[\"A\", \"B\"], [\"C\"]]\n",
    "\n",
    "    The processor automatically detects:\n",
    "    - List of strings -> flat code sequences\n",
    "    - List of lists of strings -> nested code sequences\n",
    "\n",
    "    Args:\n",
    "        padding: Additional padding to add on top of the observed maximum nested\n",
    "            sequence length. The actual padding length will be observed_max + padding.\n",
    "            This ensures the processor can handle sequences longer than those in the\n",
    "            training data. Default: 0 (no extra padding). Only applies to nested sequences.\n",
    "\n",
    "    Returns:\n",
    "        Tuple of (time_tensor, value_tensor) where time_tensor can be None\n",
    "\n",
    "    Examples:\n",
    "        >>> # Case 1: Code sequence with time\n",
    "        >>> processor = StageNetProcessor()\n",
    "        >>> data = ([0.0, 1.5, 2.3], [\"code1\", \"code2\", \"code3\"])\n",
    "        >>> time, values = processor.process(data)\n",
    "        >>> values.shape  # (3,) - sequence of code indices\n",
    "        >>> time.shape    # (3,) - time intervals\n",
    "\n",
    "        >>> # Case 2: Nested codes with time (with custom padding for extra capacity)\n",
    "        >>> processor = StageNetProcessor(padding=20)\n",
    "        >>> data = ([0.0, 1.5], [[\"A\", \"B\"], [\"C\"]])\n",
    "        >>> time, values = processor.process(data)\n",
    "        >>> values.shape  # (2, observed_max + 20) - padded nested sequences\n",
    "        >>> time.shape    # (2,)\n",
    "\n",
    "        >>> # Case 3: Codes without time\n",
    "        >>> data = (None, [\"code1\", \"code2\"])\n",
    "        >>> time, values = processor.process(data)\n",
    "        >>> values.shape  # (2,)\n",
    "        >>> time          # None\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, padding: int = 0):\n",
    "        # <unk> will be set to len(vocab) after fit\n",
    "        self.code_vocab: Dict[Any, int] = {\"<unk>\": None, \"<pad>\": 0}\n",
    "        self._next_index = 1\n",
    "        self._is_nested = None  # Will be determined during fit\n",
    "        # Max inner sequence length for nested codes\n",
    "        self._max_nested_len = None\n",
    "        self._padding = padding  # Additional padding beyond observed max\n",
    "\n",
    "    def fit(self, samples: List[Dict], key: str) -> None:\n",
    "        \"\"\"Build vocabulary and determine input structure.\n",
    "\n",
    "        Args:\n",
    "            samples: List of sample dictionaries\n",
    "            key: The key in samples that contains tuple (time, values)\n",
    "        \"\"\"\n",
    "        # Examine first non-None sample to determine structure\n",
    "        for sample in samples:\n",
    "            if key in sample and sample[key] is not None:\n",
    "                # Unpack tuple: (time, values)\n",
    "                time_data, value_data = sample[key]\n",
    "\n",
    "                # Determine nesting level for codes\n",
    "                if isinstance(value_data, list) and len(value_data) > 0:\n",
    "                    first_elem = value_data[0]\n",
    "\n",
    "                    if isinstance(first_elem, str):\n",
    "                        # Case 1: [\"code1\", \"code2\", ...]\n",
    "                        self._is_nested = False\n",
    "                    elif isinstance(first_elem, list):\n",
    "                        if len(first_elem) > 0 and isinstance(first_elem[0], str):\n",
    "                            # Case 2: [[\"A\", \"B\"], [\"C\"], ...]\n",
    "                            self._is_nested = True\n",
    "                break\n",
    "\n",
    "        # Build vocabulary for codes and find max nested length\n",
    "        max_inner_len = 0\n",
    "        for sample in samples:\n",
    "            if key in sample and sample[key] is not None:\n",
    "                # Unpack tuple: (time, values)\n",
    "                time_data, value_data = sample[key]\n",
    "\n",
    "                if self._is_nested:\n",
    "                    # Nested codes\n",
    "                    for inner_list in value_data:\n",
    "                        # Track max inner length\n",
    "                        max_inner_len = max(max_inner_len, len(inner_list))\n",
    "                        for code in inner_list:\n",
    "                            if code is not None and code not in self.code_vocab:\n",
    "                                self.code_vocab[code] = self._next_index\n",
    "                                self._next_index += 1\n",
    "                else:\n",
    "                    # Flat codes\n",
    "                    for code in value_data:\n",
    "                        if code is not None and code not in self.code_vocab:\n",
    "                            self.code_vocab[code] = self._next_index\n",
    "                            self._next_index += 1\n",
    "\n",
    "        # Store max nested length: add user-specified padding to observed maximum\n",
    "        # This ensures the processor can handle sequences longer than those in training data\n",
    "        if self._is_nested:\n",
    "            observed_max = max(1, max_inner_len)\n",
    "            self._max_nested_len = observed_max + self._padding\n",
    "\n",
    "        # Set <unk> token to the next available index\n",
    "        # Since <unk> is already in the vocab dict, we use _next_index\n",
    "        self.code_vocab[\"<unk>\"] = self._next_index\n",
    "\n",
    "    def process(\n",
    "        self, value: Tuple[Optional[List], List]\n",
    "    ) -> Tuple[Optional[torch.Tensor], torch.Tensor]:\n",
    "        \"\"\"Process tuple format data into tensors.\n",
    "\n",
    "        Args:\n",
    "            value: Tuple of (time, values) where values are codes\n",
    "\n",
    "        Returns:\n",
    "            Tuple of (time_tensor, value_tensor), time can be None\n",
    "        \"\"\"\n",
    "        # Unpack tuple: (time, values)\n",
    "        time_data, value_data = value\n",
    "\n",
    "        # Encode codes to indices\n",
    "        if self._is_nested:\n",
    "            # Nested codes: [[\"A\", \"B\"], [\"C\"]]\n",
    "            value_tensor = self._encode_nested_codes(value_data)\n",
    "        else:\n",
    "            # Flat codes: [\"code1\", \"code2\"]\n",
    "            value_tensor = self._encode_codes(value_data)\n",
    "\n",
    "        # Process time if present\n",
    "        time_tensor = None\n",
    "        if time_data is not None and len(time_data) > 0:\n",
    "            # Handle both [0.0, 1.5] and [[0.0], [1.5]] formats\n",
    "            if isinstance(time_data[0], list):\n",
    "                # Flatten [[0.0], [1.5]] -> [0.0, 1.5]\n",
    "                time_data = [t[0] if isinstance(t, list) else t for t in time_data]\n",
    "            time_tensor = torch.tensor(time_data, dtype=torch.float)\n",
    "\n",
    "        return (time_tensor, value_tensor)\n",
    "\n",
    "    def _encode_codes(self, codes: List[str]) -> torch.Tensor:\n",
    "        \"\"\"Encode flat code list to indices.\"\"\"\n",
    "        # Handle empty code list - return single padding token\n",
    "        if len(codes) == 0:\n",
    "            return torch.tensor([self.code_vocab[\"<pad>\"]], dtype=torch.long)\n",
    "\n",
    "        indices = []\n",
    "        for code in codes:\n",
    "            if code is None or code not in self.code_vocab:\n",
    "                indices.append(self.code_vocab[\"<unk>\"])\n",
    "            else:\n",
    "                indices.append(self.code_vocab[code])\n",
    "        return torch.tensor(indices, dtype=torch.long)\n",
    "\n",
    "    def _encode_nested_codes(self, nested_codes: List[List[str]]) -> torch.Tensor:\n",
    "        \"\"\"Encode nested code lists to padded 2D tensor.\n",
    "\n",
    "        Pads all inner sequences to self._max_nested_len (global max).\n",
    "        \"\"\"\n",
    "        # Handle empty nested codes (no visits/events)\n",
    "        # Return single padding token with shape (1, max_len)\n",
    "        if len(nested_codes) == 0:\n",
    "            pad_token = self.code_vocab[\"<pad>\"]\n",
    "            return torch.tensor([[pad_token] * self._max_nested_len], dtype=torch.long)\n",
    "\n",
    "        encoded_sequences = []\n",
    "        # Use global max length determined during fit\n",
    "        max_len = self._max_nested_len\n",
    "\n",
    "        for inner_codes in nested_codes:\n",
    "            indices = []\n",
    "            for code in inner_codes:\n",
    "                if code is None or code not in self.code_vocab:\n",
    "                    indices.append(self.code_vocab[\"<unk>\"])\n",
    "                else:\n",
    "                    indices.append(self.code_vocab[code])\n",
    "            # Pad to GLOBAL max_len\n",
    "            while len(indices) < max_len:\n",
    "                indices.append(self.code_vocab[\"<pad>\"])\n",
    "            encoded_sequences.append(indices)\n",
    "\n",
    "        return torch.tensor(encoded_sequences, dtype=torch.long)\n",
    "\n",
    "    def size(self) -> int:\n",
    "        \"\"\"Return vocabulary size.\"\"\"\n",
    "        return len(self.code_vocab)\n",
    "\n",
    "    def __repr__(self):\n",
    "        if self._is_nested:\n",
    "            return (\n",
    "                f\"StageNetProcessor(is_nested={self._is_nested}, \"\n",
    "                f\"vocab_size={len(self.code_vocab)}, \"\n",
    "                f\"max_nested_len={self._max_nested_len}, \"\n",
    "                f\"padding={self._padding})\"\n",
    "            )\n",
    "        else:\n",
    "            return (\n",
    "                f\"StageNetProcessor(is_nested={self._is_nested}, \"\n",
    "                f\"vocab_size={len(self.code_vocab)}, \"\n",
    "                f\"padding={self._padding})\"\n",
    "            )\n",
    "\n",
    "\n",
    "@register_processor(\"stagenet_tensor_ex\")\n",
    "class StageNetTensorProcessor(FeatureProcessor):\n",
    "    \"\"\"\n",
    "    Feature processor for StageNet NUMERIC inputs with coupled value/time data.\n",
    "\n",
    "    This processor handles numeric feature sequences (flat or nested) and applies\n",
    "    forward-fill imputation to handle missing values (NaN/None).\n",
    "    For categorical codes, use StageNetProcessor instead.\n",
    "\n",
     "    Input Format (tuple):\n",
     "        (time, values) where:\n",
     "        - time: List of scalars [0.0, 1.5] or None\n",
     "        - values: [1.0, 2.0] or [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]] (nested numerics)\n",
    "\n",
    "    The processor automatically detects:\n",
    "    - List of numbers -> flat numeric sequences\n",
    "    - List of lists of numbers -> nested numeric sequences (feature vectors)\n",
    "\n",
    "    Imputation Strategy:\n",
    "    - Forward-fill: Missing values (NaN/None) are filled with the last observed\n",
    "      value for that feature dimension. If no prior value exists, 0.0 is used.\n",
    "    - Applied per feature dimension independently\n",
    "\n",
    "    Returns:\n",
    "        Tuple of (time_tensor, value_tensor) where time_tensor can be None\n",
    "\n",
    "    Examples:\n",
    "        >>> # Case 1: Feature vectors with missing values\n",
    "        >>> processor = StageNetTensorProcessor()\n",
     "        >>> data = (\n",
     "        ...     [0.0, 1.5, 3.0],\n",
     "        ...     [[1.0, None, 3.0], [None, 5.0, 6.0], [7.0, 8.0, None]],\n",
     "        ... )\n",
    "        >>> time, values = processor.process(data)\n",
    "        >>> values  # [[1.0, 0.0, 3.0], [1.0, 5.0, 6.0], [7.0, 8.0, 6.0]]\n",
    "        >>> values.dtype  # torch.float32\n",
    "        >>> time.shape    # (3,)\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self):\n",
    "        self._size = None  # Feature dimension (set during fit)\n",
    "        self._is_nested = None\n",
    "\n",
    "    def fit(self, samples: List[Dict], key: str) -> None:\n",
    "        \"\"\"Determine input structure.\n",
    "\n",
    "        Args:\n",
    "            samples: List of sample dictionaries\n",
    "            key: The key in samples that contains tuple (time, values)\n",
    "        \"\"\"\n",
    "        # Examine first non-None sample to determine structure\n",
    "        for sample in samples:\n",
    "            if key in sample and sample[key] is not None:\n",
    "                # Unpack tuple: (time, values)\n",
    "                time_data, value_data = sample[key]\n",
    "\n",
    "                # Determine nesting level for numerics\n",
    "                if isinstance(value_data, list) and len(value_data) > 0:\n",
    "                    first_elem = value_data[0]\n",
    "\n",
    "                    if isinstance(first_elem, (int, float)):\n",
    "                        # Flat numeric: [1.5, 2.0, ...]\n",
    "                        self._is_nested = False\n",
    "                        self._size = 1\n",
    "                    elif isinstance(first_elem, list):\n",
    "                        if len(first_elem) > 0:\n",
    "                            if isinstance(first_elem[0], (int, float)):\n",
    "                                # Nested numerics: [[1.0, 2.0], [3.0, 4.0]]\n",
    "                                self._is_nested = True\n",
    "                                self._size = len(first_elem)\n",
    "                break\n",
    "\n",
    "    def process(\n",
    "        self, value: Tuple[Optional[List], List]\n",
    "    ) -> Tuple[Optional[torch.Tensor], torch.Tensor]:\n",
    "        \"\"\"Process tuple format numeric data into tensors.\n",
    "\n",
    "        Applies forward-fill imputation to handle NaN/None values.\n",
    "        For each feature dimension, missing values are filled with the\n",
    "        last observed value (or 0.0 if no prior value exists).\n",
    "\n",
    "        Args:\n",
    "            value: Tuple of (time, values) where values are numerics\n",
    "\n",
    "        Returns:\n",
    "            Tuple of (time_tensor, value_tensor), time can be None\n",
    "        \"\"\"\n",
    "        # Unpack tuple: (time, values)\n",
    "        time_data, value_data = value\n",
    "\n",
    "        # Convert to numpy for easier imputation handling\n",
    "        import numpy as np\n",
    "\n",
    "        value_array = np.array(value_data, dtype=float)\n",
    "\n",
    "        # Apply forward-fill imputation\n",
    "        if value_array.ndim == 1:\n",
    "            # Flat numeric: [1.5, 2.0, nan, 3.0, ...]\n",
    "            last_value = 0.0\n",
    "            for i in range(len(value_array)):\n",
    "                if not np.isnan(value_array[i]):\n",
    "                    last_value = value_array[i]\n",
    "                else:\n",
    "                    value_array[i] = last_value\n",
    "        elif value_array.ndim == 2:\n",
    "            # Feature vectors: [[1.0, nan, 3.0], [nan, 5.0, 6.0]]\n",
    "            num_features = value_array.shape[1]\n",
    "            for f in range(num_features):\n",
    "                last_value = 0.0\n",
    "                for t in range(value_array.shape[0]):\n",
    "                    if not np.isnan(value_array[t, f]):\n",
    "                        last_value = value_array[t, f]\n",
    "                    else:\n",
    "                        value_array[t, f] = last_value\n",
    "\n",
    "        # Convert to float tensor\n",
    "        value_tensor = torch.tensor(value_array, dtype=torch.float)\n",
    "\n",
    "        # Process time if present\n",
    "        time_tensor = None\n",
    "        if time_data is not None and len(time_data) > 0:\n",
    "            # Handle both [0.0, 1.5] and [[0.0], [1.5]] formats\n",
    "            if isinstance(time_data[0], list):\n",
    "                # Flatten [[0.0], [1.5]] -> [0.0, 1.5]\n",
    "                time_data = [t[0] if isinstance(t, list) else t for t in time_data]\n",
    "            time_tensor = torch.tensor(time_data, dtype=torch.float)\n",
    "\n",
    "        return (time_tensor, value_tensor)\n",
    "\n",
    "    @property\n",
    "    def size(self):\n",
    "        \"\"\"Return feature dimension.\"\"\"\n",
    "        return self._size\n",
    "\n",
    "    def __repr__(self):\n",
    "        return (\n",
    "            f\"StageNetTensorProcessor(is_nested={self._is_nested}, \"\n",
    "            f\"feature_dim={self._size})\"\n",
    "        )\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6d590f39",
   "metadata": {},
   "source": [
     "## Defining Our StageNet-specific Task\n",
    "\n",
    "We'll predict patient mortality using StageNet across time-series data from multiple visits. Each visit includes:\n",
    "\n",
    "- Diagnosis codes\n",
    "- Procedure codes\n",
    "- Lab events\n",
    "\n",
     "Here, each feature also needs its own corresponding time intervals. Following the StageNet paper, each time interval is the difference in time between the current visit and the previous one. \n",
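     "\n",
     "For instance, converting absolute admission times into StageNet-style intervals (an illustrative sketch with made-up numbers):\n",
     "```python\n",
     "admit_hours = [0.0, 48.0, 120.0]  # hours since the first admission\n",
     "# interval = current admission time - previous admission time (0 for the first visit)\n",
     "intervals = [0.0] + [b - a for a, b in zip(admit_hours, admit_hours[1:])]\n",
     "intervals  # [0.0, 48.0, 72.0]\n",
     "```\n",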
    "\n",
    "To define a task, specify the `__call__` method, input schema, and output schema. For a detailed explanation, see [this tutorial](https://colab.research.google.com/drive/1kKKBVS_GclHoYTbnOtjyYnSee79hsyT?usp=sharing).\n",
    "\n",
    "### Helper Functions\n",
    "\n",
    "Use `patient.get_events()` to retrieve all events from a specific table, with optional filtering. See the [MIMIC-IV YAML file](https://github.com/sunlabuiuc/PyHealth/blob/master/pyhealth/datasets/configs/mimic4_ehr.yaml) for available tables."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "a2288cdc",
   "metadata": {},
   "outputs": [],
   "source": [
    "from datetime import datetime\n",
    "from typing import Any, ClassVar, Dict, List, Tuple\n",
    "\n",
    "import polars as pl\n",
    "\n",
    "from pyhealth.tasks.base_task import BaseTask\n",
    "\n",
    "\n",
    "class MortalityPredictionStageNetMIMIC4(BaseTask):\n",
    "    \"\"\"Task for predicting mortality using MIMIC-IV with StageNet format.\n",
    "\n",
    "    This task creates PATIENT-LEVEL samples (not visit-level) by aggregating\n",
    "    all admissions for each patient. ICD codes (diagnoses + procedures) and\n",
    "    lab results across all visits are combined with time intervals calculated\n",
    "    from the patient's first admission timestamp.\n",
    "\n",
    "    Time Calculation:\n",
    "        - ICD codes: Hours from previous admission (0 for first visit,\n",
    "          then time intervals between consecutive visits)\n",
    "        - Labs: Hours from admission start (within-visit measurements)\n",
    "\n",
    "    Lab Processing:\n",
    "        - 10-dimensional vectors (one per lab category)\n",
    "        - Multiple itemids per category → take first observed value\n",
    "        - Missing categories → None/NaN in vector\n",
    "\n",
    "    Args:\n",
    "        padding: Additional padding for StageNet processor to handle\n",
    "            sequences longer than observed during training. Default: 0.\n",
    "\n",
    "    Attributes:\n",
    "        task_name (str): The name of the task.\n",
    "        input_schema (Dict[str, str]): The schema for input data:\n",
    "            - icd_codes: Combined diagnosis + procedure ICD codes\n",
    "              (stagenet format, nested by visit)\n",
    "            - labs: Lab results (stagenet_tensor, 10D vectors per timestamp)\n",
    "        output_schema (Dict[str, str]): The schema for output data:\n",
    "            - mortality: Binary indicator (1 if any admission had mortality)\n",
    "    \"\"\"\n",
    "\n",
    "    task_name: str = \"MortalityPredictionStageNetMIMIC4\"\n",
    "\n",
    "    def __init__(self, padding: int = 0):\n",
    "        \"\"\"Initialize task with optional padding parameter.\n",
    "\n",
    "        Args:\n",
    "            padding: Additional padding for nested sequences. Default: 0.\n",
    "        \"\"\"\n",
    "        self.padding = padding\n",
    "        # Use tuple format to pass kwargs to processor\n",
    "        self.input_schema: Dict[str, Tuple[str, Dict[str, Any]]] = {\n",
    "            \"icd_codes\": (\"stagenet\", {\"padding\": padding}),\n",
    "            \"labs\": (\"stagenet_tensor\", {}),\n",
    "        }\n",
    "        self.output_schema: Dict[str, str] = {\"mortality\": \"binary\"}\n",
    "\n",
    "    # Organize lab items by category\n",
    "    # Each category will map to ONE dimension in the output vector\n",
    "    LAB_CATEGORIES: ClassVar[Dict[str, List[str]]] = {\n",
    "        \"Sodium\": [\"50824\", \"52455\", \"50983\", \"52623\"],\n",
    "        \"Potassium\": [\"50822\", \"52452\", \"50971\", \"52610\"],\n",
    "        \"Chloride\": [\"50806\", \"52434\", \"50902\", \"52535\"],\n",
    "        \"Bicarbonate\": [\"50803\", \"50804\"],\n",
    "        \"Glucose\": [\"50809\", \"52027\", \"50931\", \"52569\"],\n",
    "        \"Calcium\": [\"50808\", \"51624\"],\n",
    "        \"Magnesium\": [\"50960\"],\n",
    "        \"Anion Gap\": [\"50868\", \"52500\"],\n",
    "        \"Osmolality\": [\"52031\", \"50964\", \"51701\"],\n",
    "        \"Phosphate\": [\"50970\"],\n",
    "    }\n",
    "\n",
    "    # Ordered list of category names (defines vector dimension order)\n",
    "    LAB_CATEGORY_NAMES: ClassVar[List[str]] = [\n",
    "        \"Sodium\",\n",
    "        \"Potassium\",\n",
    "        \"Chloride\",\n",
    "        \"Bicarbonate\",\n",
    "        \"Glucose\",\n",
    "        \"Calcium\",\n",
    "        \"Magnesium\",\n",
    "        \"Anion Gap\",\n",
    "        \"Osmolality\",\n",
    "        \"Phosphate\",\n",
    "    ]\n",
    "\n",
    "    # Flat list of all lab item IDs for filtering\n",
    "    LABITEMS: ClassVar[List[str]] = [\n",
    "        item for itemids in LAB_CATEGORIES.values() for item in itemids\n",
    "    ]\n",
    "\n",
    "    def __call__(self, patient: Any) -> List[Dict[str, Any]]:\n",
    "        \"\"\"Process a patient to create mortality prediction samples.\n",
    "\n",
    "        Creates ONE sample per patient with all admissions aggregated.\n",
    "        Time intervals are calculated between consecutive admissions.\n",
    "\n",
    "        Args:\n",
    "            patient: Patient object with get_events method\n",
    "\n",
    "        Returns:\n",
    "            List with single sample containing patient_id, all conditions,\n",
    "            procedures, labs across visits, and final mortality label\n",
    "        \"\"\"\n",
    "        # Filter patients by age (>= 18)\n",
    "        demographics = patient.get_events(event_type=\"patients\")\n",
    "        if not demographics:\n",
    "            return []\n",
    "\n",
    "        demographics = demographics[0]\n",
    "        try:\n",
    "            anchor_age = int(demographics.anchor_age)\n",
    "            if anchor_age < 18:\n",
    "                return []\n",
    "        except (ValueError, TypeError, AttributeError):\n",
    "            # If age can't be determined, skip patient\n",
    "            return []\n",
    "\n",
    "        # Get all admissions\n",
    "        admissions = patient.get_events(event_type=\"admissions\")\n",
    "        if len(admissions) < 1:\n",
    "            return []\n",
    "\n",
    "        # Initialize aggregated data structures\n",
    "        # List of ICD codes (diagnoses + procedures) per visit\n",
    "        all_icd_codes = []\n",
    "        all_icd_times = []  # Time from previous admission per visit\n",
    "        all_lab_values = []  # List of 10D lab vectors\n",
    "        all_lab_times = []  # Time from admission start per measurement\n",
    "\n",
    "        # Track previous admission timestamp for interval calculation\n",
    "        previous_admission_time = None\n",
    "\n",
    "        # Track if patient had any mortality event\n",
    "        final_mortality = 0\n",
    "\n",
    "        # Process each admission\n",
    "        for i, admission in enumerate(admissions):\n",
    "            # Parse admission and discharge times\n",
    "            try:\n",
    "                admission_time = admission.timestamp\n",
    "                admission_dischtime = datetime.strptime(\n",
    "                    admission.dischtime, \"%Y-%m-%d %H:%M:%S\"\n",
    "                )\n",
    "            except (ValueError, AttributeError):\n",
    "                # Skip if timestamps invalid\n",
    "                continue\n",
    "\n",
    "            # Skip if discharge is before admission (data quality issue)\n",
    "            if admission_dischtime < admission_time:\n",
    "                continue\n",
    "\n",
    "            # Calculate time from previous admission (in hours)\n",
    "            # First admission will have time = 0\n",
    "            if previous_admission_time is None:\n",
    "                time_from_previous = 0.0\n",
    "            else:\n",
    "                time_from_previous = (\n",
    "                    admission_time - previous_admission_time\n",
    "                ).total_seconds() / 3600.0\n",
    "\n",
    "            # Update previous admission time for next iteration\n",
    "            previous_admission_time = admission_time\n",
    "\n",
    "            # Update mortality label if this admission had mortality\n",
    "            try:\n",
    "                if int(admission.hospital_expire_flag) == 1:\n",
    "                    final_mortality = 1\n",
    "            except (ValueError, TypeError, AttributeError):\n",
    "                pass\n",
    "\n",
    "            # Get diagnosis codes for this admission using hadm_id\n",
    "            diagnoses_icd = patient.get_events(\n",
    "                event_type=\"diagnoses_icd\",\n",
    "                filters=[(\"hadm_id\", \"==\", admission.hadm_id)],\n",
    "            )\n",
    "            visit_diagnoses = [\n",
    "                event.icd_code\n",
    "                for event in diagnoses_icd\n",
    "                if hasattr(event, \"icd_code\") and event.icd_code\n",
    "            ]\n",
    "\n",
    "            # Get procedure codes for this admission using hadm_id\n",
    "            procedures_icd = patient.get_events(\n",
    "                event_type=\"procedures_icd\",\n",
    "                filters=[(\"hadm_id\", \"==\", admission.hadm_id)],\n",
    "            )\n",
    "            visit_procedures = [\n",
    "                event.icd_code\n",
    "                for event in procedures_icd\n",
    "                if hasattr(event, \"icd_code\") and event.icd_code\n",
    "            ]\n",
    "\n",
    "            # Combine diagnoses and procedures into single ICD code list\n",
    "            visit_icd_codes = visit_diagnoses + visit_procedures\n",
    "\n",
    "            if visit_icd_codes:\n",
    "                all_icd_codes.append(visit_icd_codes)\n",
    "                all_icd_times.append(time_from_previous)\n",
    "\n",
    "            # Get lab events for this admission\n",
    "            labevents_df = patient.get_events(\n",
    "                event_type=\"labevents\",\n",
    "                start=admission_time,\n",
    "                end=admission_dischtime,\n",
    "                return_df=True,\n",
    "            )\n",
    "\n",
    "            # Filter to relevant lab items\n",
    "            labevents_df = labevents_df.filter(\n",
    "                pl.col(\"labevents/itemid\").is_in(self.LABITEMS)\n",
    "            )\n",
    "\n",
    "            # Parse storetime and filter\n",
    "            if labevents_df.height > 0:\n",
    "                labevents_df = labevents_df.with_columns(\n",
    "                    pl.col(\"labevents/storetime\").str.strptime(\n",
    "                        pl.Datetime, \"%Y-%m-%d %H:%M:%S\"\n",
    "                    )\n",
    "                )\n",
    "                labevents_df = labevents_df.filter(\n",
    "                    pl.col(\"labevents/storetime\") <= admission_dischtime\n",
    "                )\n",
    "\n",
    "                if labevents_df.height > 0:\n",
    "                    # Select relevant columns\n",
    "                    labevents_df = labevents_df.select(\n",
    "                        pl.col(\"timestamp\"),\n",
    "                        pl.col(\"labevents/itemid\"),\n",
    "                        pl.col(\"labevents/valuenum\").cast(pl.Float64),\n",
    "                    )\n",
    "\n",
    "                    # Group by timestamp and aggregate into 10D vectors\n",
    "                    # For each timestamp, create vector of lab categories\n",
    "                    unique_timestamps = sorted(\n",
    "                        labevents_df[\"timestamp\"].unique().to_list()\n",
    "                    )\n",
    "\n",
    "                    for lab_ts in unique_timestamps:\n",
    "                        # Get all lab events at this timestamp\n",
    "                        ts_labs = labevents_df.filter(pl.col(\"timestamp\") == lab_ts)\n",
    "\n",
    "                        # Create 10-dimensional vector (one per category)\n",
    "                        lab_vector = []\n",
    "                        for category_name in self.LAB_CATEGORY_NAMES:\n",
    "                            category_itemids = self.LAB_CATEGORIES[category_name]\n",
    "\n",
    "                            # Find first matching value for this category\n",
    "                            category_value = None\n",
    "                            for itemid in category_itemids:\n",
    "                                matching = ts_labs.filter(\n",
    "                                    pl.col(\"labevents/itemid\") == itemid\n",
    "                                )\n",
    "                                if matching.height > 0:\n",
    "                                    category_value = matching[\"labevents/valuenum\"][0]\n",
    "                                    break\n",
    "\n",
    "                            lab_vector.append(category_value)\n",
    "\n",
    "                        # Calculate time from admission start (hours)\n",
    "                        time_from_admission = (\n",
    "                            lab_ts - admission_time\n",
    "                        ).total_seconds() / 3600.0\n",
    "\n",
    "                        all_lab_values.append(lab_vector)\n",
    "                        all_lab_times.append(time_from_admission)\n",
    "\n",
    "        # Skip if no lab events (required for this task)\n",
    "        if len(all_lab_values) == 0:\n",
    "            return []\n",
    "\n",
    "        # Also skip if no ICD codes across all admissions\n",
    "        if len(all_icd_codes) == 0:\n",
    "            return []\n",
    "\n",
    "        # Format as tuples: (time, values)\n",
    "        # ICD codes: nested list with times\n",
    "        icd_codes_data = (all_icd_times, all_icd_codes)\n",
    "\n",
    "        # Labs: list of 10D vectors with times\n",
    "        labs_data = (all_lab_times, all_lab_values)\n",
    "\n",
    "        # Create single patient-level sample\n",
    "        sample = {\n",
    "            \"patient_id\": patient.patient_id,\n",
    "            \"icd_codes\": icd_codes_data,\n",
    "            \"labs\": labs_data,\n",
    "            \"mortality\": final_mortality,\n",
    "        }\n",
    "        return [sample]\n"
   ]
  },
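  {
   "cell_type": "markdown",
   "id": "f3a9c1d2",
   "metadata": {},
   "source": [
    "Each sample the task emits pairs a list of times with a list of values. A small illustration of the expected shapes (the codes and values below are hypothetical, for shape-checking only):\n",
    "\n",
    "```python\n",
    "sample = {\n",
    "    \"patient_id\": \"12345\",\n",
    "    # (hours from previous admission, ICD codes per visit)\n",
    "    \"icd_codes\": ([0.0, 366.8], [[\"5849\", \"4019\"], [\"486\"]]),\n",
    "    # (hours from admission start, 10-D lab vectors; None = missing)\n",
    "    \"labs\": ([1.5, 26.0], [[140.0, 4.1] + [None] * 8, [138.0, 4.4] + [None] * 8]),\n",
    "    \"mortality\": 0,\n",
    "}\n",
    "```\n",
    "\n",
    "The times list and values list must stay the same length; the processors rely on that alignment."
   ]
  },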
  {
   "cell_type": "markdown",
   "id": "38b799ce",
   "metadata": {},
   "source": [
    "## Setting the task and caching the data for quicker use down the road with padding\n",
    "We can finally set our task and get our training set below. Notice that we save a processed version of our dataset in .parquet files in our \"cache_dir\" here. We can also define a number of works for faster parallel processing (note this can be unstable if the value is too high).\n",
    "\n",
    "We can also save and load processors so we don't need to refit the processor again (and we can also transfer processors across different samples)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "8e01f7ec",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "=== Loading Pre-fitted Processors ===\n",
      "✓ Loaded input processors from ../../output/processors/stagenet_mortality_mimic4/input_processors.pkl\n",
      "✓ Loaded output processors from ../../output/processors/stagenet_mortality_mimic4/output_processors.pkl\n",
      "Setting task MortalityPredictionStageNetMIMIC4 for mimic4 base dataset...\n",
      "Generating samples with 1 worker(s)...\n",
      "Collecting global event dataframe...\n",
      "Dev mode enabled: limiting to 1000 patients\n",
      "Collected dataframe with shape: (398360, 39)\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Generating samples for MortalityPredictionStageNetMIMIC4 with 1 worker: 100%|██████████| 1000/1000 [00:33<00:00, 30.24it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Caching samples to ../../mimic4_stagenet_cache_v3/MortalityPredictionStageNetMIMIC4.parquet\n",
      "Failed to cache samples: failed to determine supertype of list[f64] and list[list[str]]\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "Processing samples: 100%|██████████| 445/445 [00:00<00:00, 4542.98it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Generated 445 samples for task MortalityPredictionStageNetMIMIC4\n",
      "Total samples: 445\n",
      "Input schema: {'icd_codes': ('stagenet', {'padding': 20}), 'labs': ('stagenet_tensor', {})}\n",
      "Output schema: {'mortality': 'binary'}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "source": [
    "from pyhealth.datasets.utils import save_processors, load_processors\n",
    "import os \n",
    "processor_dir = \"../../output/processors/stagenet_mortality_mimic4\"\n",
    "cache_dir = \"../../mimic4_stagenet_cache_v3\"\n",
    "\n",
    "if os.path.exists(os.path.join(processor_dir, \"input_processors.pkl\")):\n",
    "    print(\"\\n=== Loading Pre-fitted Processors ===\")\n",
    "    input_processors, output_processors = load_processors(processor_dir)\n",
    "\n",
    "    sample_dataset = base_dataset.set_task(\n",
    "        MortalityPredictionStageNetMIMIC4(padding=20),\n",
    "        num_workers=1,\n",
    "        cache_dir=cache_dir,\n",
    "        input_processors=input_processors,\n",
    "        output_processors=output_processors,\n",
    "    )\n",
    "else:\n",
    "    print(\"\\n=== Fitting New Processors ===\")\n",
    "    sample_dataset = base_dataset.set_task(\n",
    "        MortalityPredictionStageNetMIMIC4(padding=20),\n",
    "        num_workers=1,\n",
    "        cache_dir=cache_dir,\n",
    "    )\n",
    "\n",
    "    # Save processors for future runs\n",
    "    print(\"\\n=== Saving Processors ===\")\n",
    "    save_processors(sample_dataset, processor_dir)\n",
    "\n",
    "print(f\"Total samples: {len(sample_dataset)}\")\n",
    "print(f\"Input schema: {sample_dataset.input_schema}\")\n",
    "print(f\"Output schema: {sample_dataset.output_schema}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "1c765bec",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Sample structure:\n",
      "  Patient ID: 11204801\n",
      "ICD Codes: (tensor([  0.0000, 366.8167,  10.1500]), tensor([[3656, 1338,  344, 3656, 1599, 1082, 3656,  491,  189,   16,  985,  357,\n",
      "         3656, 3656, 1339, 3656,  812, 2523,    0,    0,    0,    0,    0,    0,\n",
      "            0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,\n",
      "            0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,\n",
      "            0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,\n",
      "            0,    0,    0,    0,    0,    0,    0,    0,    0,    0],\n",
      "        [ 189,   16, 1082, 1491, 3656, 1262, 3656,    0,    0,    0,    0,    0,\n",
      "            0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,\n",
      "            0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,\n",
      "            0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,\n",
      "            0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,\n",
      "            0,    0,    0,    0,    0,    0,    0,    0,    0,    0],\n",
      "        [ 189, 3656, 3656,  302, 3656, 1056,  953,    0,    0,    0,    0,    0,\n",
      "            0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,\n",
      "            0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,\n",
      "            0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,\n",
      "            0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,\n",
      "            0,    0,    0,    0,    0,    0,    0,    0,    0,    0]]))\n",
      "  Labs shape: 28 timesteps\n",
      "  Mortality: tensor([0.])\n"
     ]
    }
   ],
   "source": [
    "# Inspect a sample\n",
    "sample = sample_dataset.samples[0]\n",
    "print(\"\\nSample structure:\")\n",
    "print(f\"  Patient ID: {sample['patient_id']}\")\n",
    "print(f\"ICD Codes: {sample['icd_codes']}\")\n",
    "print(f\"  Labs shape: {len(sample['labs'][0])} timesteps\")\n",
    "print(f\"  Mortality: {sample['mortality']}\")\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "65704934",
   "metadata": {},
   "source": [
    "## Train, Validation, Test Splits and Training\n",
    "\n",
    "This section fundamentally follows any typical training pipeline. We don't recommend the PyHealth trainer beyond just testing out baselines, but any code you write here should flexibly translate to more advanced deep learning training packages like PyTorch lightning and many others."
   ]
  },
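  {
   "cell_type": "markdown",
   "id": "a2d7e4b9",
   "metadata": {},
   "source": [
    "The split/train/evaluate flow here is, underneath, a plain PyTorch loop, which is what makes it easy to port to other trainers. A minimal self-contained sketch (toy tensors and a toy linear model stand in for the StageNet model and batch format):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# Toy stand-ins: 32 samples with 10 features, binary labels.\n",
    "X = torch.randn(32, 10)\n",
    "y = torch.randint(0, 2, (32, 1)).float()\n",
    "loader = torch.utils.data.DataLoader(\n",
    "    torch.utils.data.TensorDataset(X, y), batch_size=8, shuffle=True\n",
    ")\n",
    "\n",
    "model = nn.Linear(10, 1)  # placeholder for StageNet\n",
    "optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)\n",
    "criterion = nn.BCEWithLogitsLoss()\n",
    "\n",
    "for epoch in range(2):\n",
    "    for xb, yb in loader:\n",
    "        optimizer.zero_grad()\n",
    "        loss = criterion(model(xb), yb)\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "```\n",
    "\n",
    "The PyHealth `Trainer` wraps this kind of loop; porting to PyTorch Lightning mostly means moving the loop body into a `training_step`."
   ]
  },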
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "1708dca9",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'icd_codes': ('stagenet', {'padding': 20}), 'labs': ('stagenet_tensor', {})}"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sample_dataset.input_schema"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "0333b99e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Model initialized with 5110705 parameters\n",
      "StageNet(\n",
      "  (embedding_model): EmbeddingModel(embedding_layers=ModuleDict(\n",
      "    (icd_codes): Embedding(3657, 128, padding_idx=0)\n",
      "    (labs): Linear(in_features=10, out_features=128, bias=True)\n",
      "  ))\n",
      "  (stagenet): ModuleDict(\n",
      "    (icd_codes): StageNetLayer(\n",
      "      (kernel): Linear(in_features=129, out_features=1542, bias=True)\n",
      "      (recurrent_kernel): Linear(in_features=385, out_features=1542, bias=True)\n",
      "      (nn_scale): Linear(in_features=384, out_features=64, bias=True)\n",
      "      (nn_rescale): Linear(in_features=64, out_features=384, bias=True)\n",
      "      (nn_conv): Conv1d(384, 384, kernel_size=(10,), stride=(1,))\n",
      "      (nn_dropconnect): Dropout(p=0.3, inplace=False)\n",
      "      (nn_dropconnect_r): Dropout(p=0.3, inplace=False)\n",
      "      (nn_dropout): Dropout(p=0.3, inplace=False)\n",
      "      (nn_dropres): Dropout(p=0.3, inplace=False)\n",
      "    )\n",
      "    (labs): StageNetLayer(\n",
      "      (kernel): Linear(in_features=129, out_features=1542, bias=True)\n",
      "      (recurrent_kernel): Linear(in_features=385, out_features=1542, bias=True)\n",
      "      (nn_scale): Linear(in_features=384, out_features=64, bias=True)\n",
      "      (nn_rescale): Linear(in_features=64, out_features=384, bias=True)\n",
      "      (nn_conv): Conv1d(384, 384, kernel_size=(10,), stride=(1,))\n",
      "      (nn_dropconnect): Dropout(p=0.3, inplace=False)\n",
      "      (nn_dropconnect_r): Dropout(p=0.3, inplace=False)\n",
      "      (nn_dropout): Dropout(p=0.3, inplace=False)\n",
      "      (nn_dropres): Dropout(p=0.3, inplace=False)\n",
      "    )\n",
      "  )\n",
      "  (fc): Linear(in_features=768, out_features=1, bias=True)\n",
      ")\n",
      "Metrics: ['pr_auc', 'roc_auc', 'accuracy', 'f1']\n",
      "Device: cuda:4\n",
      "\n",
      "Training:\n",
      "Batch size: 256\n",
      "Optimizer: <class 'torch.optim.adam.Adam'>\n",
      "Optimizer params: {'lr': 1e-05}\n",
      "Weight decay: 0.0\n",
      "Max grad norm: None\n",
      "Val dataloader: <torch.utils.data.dataloader.DataLoader object at 0x7f8c12189e10>\n",
      "Monitor: roc_auc\n",
      "Monitor criterion: max\n",
      "Epochs: 1\n",
      "Patience: None\n",
      "\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 0 / 1: 100%|██████████| 2/2 [00:01<00:00,  1.29it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--- Train epoch-0, step-2 ---\n",
      "loss: 0.6814\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "Evaluation: 100%|██████████| 1/1 [00:00<00:00,  4.44it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--- Eval epoch-0, step-2 ---\n",
      "pr_auc: 0.1812\n",
      "roc_auc: 0.7250\n",
      "accuracy: 0.8636\n",
      "f1: 0.0000\n",
      "loss: 0.6494\n",
      "New best roc_auc score (0.7250) at epoch-0, step-2\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Loaded best model\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluation: 100%|██████████| 1/1 [00:00<00:00,  4.66it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Test Results:\n",
      "  pr_auc: 0.2735\n",
      "  roc_auc: 0.4211\n",
      "  accuracy: 0.7111\n",
      "  f1: 0.1333\n",
      "  loss: 0.6701\n",
      "\n",
      "Sample predictions:\n",
      "  Predicted probabilities: tensor([[0.5163],\n",
      "        [0.4799],\n",
      "        [0.4796],\n",
      "        [0.4591],\n",
      "        [0.5046]], device='cuda:4')\n",
      "  True labels: tensor([[0.],\n",
      "        [0.],\n",
      "        [0.],\n",
      "        [0.],\n",
      "        [0.]], device='cuda:4')\n"
     ]
    }
   ],
   "source": [
    "# STEP 3: Split dataset\n",
    "train_dataset, val_dataset, test_dataset = split_by_patient(\n",
    "    sample_dataset, [0.8, 0.1, 0.1]\n",
    ")\n",
    "\n",
    "# Create dataloaders\n",
    "train_loader = get_dataloader(train_dataset, batch_size=256, shuffle=True)\n",
    "val_loader = get_dataloader(val_dataset, batch_size=256, shuffle=False)\n",
    "test_loader = get_dataloader(test_dataset, batch_size=256, shuffle=False)\n",
    "\n",
    "# STEP 4: Initialize StageNet model\n",
    "model = StageNet(\n",
    "    dataset=sample_dataset,\n",
    "    embedding_dim=128,\n",
    "    chunk_size=128,\n",
    "    levels=3,\n",
    "    dropout=0.3,\n",
    ")\n",
    "\n",
    "num_params = sum(p.numel() for p in model.parameters())\n",
    "print(f\"\\nModel initialized with {num_params} parameters\")\n",
    "\n",
    "# STEP 5: Train the model\n",
    "trainer = Trainer(\n",
    "    model=model,\n",
    "    device=\"cuda:4\",  # or \"cpu\"\n",
    "    metrics=[\"pr_auc\", \"roc_auc\", \"accuracy\", \"f1\"],\n",
    ")\n",
    "\n",
    "# 1 epoch for demonstration; increase for real training, it should work pretty well closer to 50\n",
    "trainer.train(\n",
    "    train_dataloader=train_loader,\n",
    "    val_dataloader=val_loader,\n",
    "    epochs=1,\n",
    "    monitor=\"roc_auc\",\n",
    "    optimizer_params={\"lr\": 1e-5},\n",
    ")\n",
    "\n",
    "# STEP 6: Evaluate on test set\n",
    "results = trainer.evaluate(test_loader)\n",
    "print(\"\\nTest Results:\")\n",
    "for metric, value in results.items():\n",
    "    print(f\"  {metric}: {value:.4f}\")\n",
    "\n",
    "# STEP 7: Inspect model predictions\n",
    "sample_batch = next(iter(test_loader))\n",
    "with torch.no_grad():\n",
    "    output = model(**sample_batch)\n",
    "\n",
    "print(\"\\nSample predictions:\")\n",
    "print(f\"  Predicted probabilities: {output['y_prob'][:5]}\")\n",
    "print(f\"  True labels: {output['y_true'][:5]}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e877f9cf",
   "metadata": {},
   "source": [
    "## Inference On a Holdout Set Example\n",
    "Below, we'll generate some pseudo samples with a bunch of unknown tokens and visit lengths beyond what's observed in the training set."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "59475dc0",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pyhealth.datasets.base_dataset import SampleDataset\n",
    "import random\n",
    "import numpy as np\n",
    "\n",
    "def generate_holdout_set(\n",
    "    sample_dataset: SampleDataset, num_samples: int = 10, seed: int = 42\n",
    ") -> SampleDataset:\n",
    "    \"\"\"Generate synthetic hold-out set with unseen codes and varying lengths.\n",
    "\n",
    "    This function creates synthetic samples to test the processor's ability to:\n",
    "    1. Handle completely unseen tokens (mapped to <unk>)\n",
    "    2. Handle sequence lengths larger than training but within padding\n",
    "\n",
    "    Args:\n",
    "        sample_dataset: Original SampleDataset with fitted processors\n",
    "        num_samples: Number of synthetic samples to generate\n",
    "        seed: Random seed for reproducibility\n",
    "\n",
    "    Returns:\n",
    "        SampleDataset with synthetic samples using fitted processors\n",
    "    \"\"\"\n",
    "    random.seed(seed)\n",
    "    np.random.seed(seed)\n",
    "\n",
    "    # Get the fitted processors\n",
    "    icd_processor = sample_dataset.input_processors[\"icd_codes\"]\n",
    "\n",
    "    # Get max nested length from ICD processor\n",
    "    max_icd_len = icd_processor._max_nested_len\n",
    "    # Handle both old and new processor versions\n",
    "    padding = getattr(icd_processor, \"_padding\", 0)\n",
    "\n",
    "    print(\"\\n=== Hold-out Set Generation ===\")\n",
    "    print(f\"Processor attributes: {dir(icd_processor)}\")\n",
    "    print(f\"Has _padding attribute: {hasattr(icd_processor, '_padding')}\")\n",
    "    print(f\"ICD max nested length: {max_icd_len}\")\n",
    "    print(f\"Padding (via getattr): {padding}\")\n",
    "    if hasattr(icd_processor, \"_padding\"):\n",
    "        print(f\"Padding (direct access): {icd_processor._padding}\")\n",
    "    print(f\"Observed max (without padding): {max_icd_len - padding}\")\n",
    "\n",
    "    synthetic_samples = []\n",
    "\n",
    "    for i in range(num_samples):\n",
    "        # Generate random number of visits (1-5)\n",
    "        num_visits = random.randint(1, 5)\n",
    "\n",
    "        # Generate ICD codes with unseen tokens\n",
    "        icd_codes_list = []\n",
    "        icd_times_list = []\n",
    "\n",
    "        for visit_idx in range(num_visits):\n",
    "            # Generate sequence length between observed_max and max_icd_len\n",
    "            # This tests the padding capacity\n",
    "            observed_max = max_icd_len - padding\n",
    "            seq_len = random.randint(max(1, observed_max - 2), max_icd_len - 1)\n",
    "\n",
    "            # Generate unseen codes\n",
    "            visit_codes = [f\"NEWCODE_{i}_{visit_idx}_{j}\" for j in range(seq_len)]\n",
    "            icd_codes_list.append(visit_codes)\n",
    "\n",
    "            # Generate time intervals (hours from previous visit)\n",
    "            if visit_idx == 0:\n",
    "                icd_times_list.append(0.0)\n",
    "            else:\n",
    "                icd_times_list.append(random.uniform(24.0, 720.0))\n",
    "\n",
    "        # Generate lab data (10-dimensional vectors)\n",
    "        num_lab_timestamps = random.randint(5, 15)\n",
    "        lab_values_list = []\n",
    "        lab_times_list = []\n",
    "\n",
    "        for ts_idx in range(num_lab_timestamps):\n",
    "            # Generate 10D vector with some random values and some None\n",
    "            lab_vector = []\n",
    "            for dim in range(10):\n",
    "                if random.random() < 0.8:  # 80% chance of value\n",
    "                    lab_vector.append(random.uniform(50.0, 150.0))\n",
    "                else:\n",
    "                    lab_vector.append(None)\n",
    "\n",
    "            lab_values_list.append(lab_vector)\n",
    "            lab_times_list.append(random.uniform(0.0, 48.0))\n",
    "\n",
    "        # Create sample in the expected format (before processing)\n",
    "        synthetic_sample = {\n",
    "            \"patient_id\": f\"HOLDOUT_PATIENT_{i}\",\n",
    "            \"icd_codes\": (icd_times_list, icd_codes_list),\n",
    "            \"labs\": (lab_times_list, lab_values_list),\n",
    "            \"mortality\": random.randint(0, 1),\n",
    "        }\n",
    "\n",
    "        synthetic_samples.append(synthetic_sample)\n",
    "\n",
    "    # Create a new SampleDataset with the FITTED processors\n",
    "    holdout_dataset = SampleDataset(\n",
    "        samples=synthetic_samples,\n",
    "        input_schema=sample_dataset.input_schema,\n",
    "        output_schema=sample_dataset.output_schema,\n",
    "        dataset_name=f\"{sample_dataset.dataset_name}_holdout\",\n",
    "        task_name=sample_dataset.task_name,\n",
    "        input_processors=sample_dataset.input_processors,\n",
    "        output_processors=sample_dataset.output_processors,\n",
    "    )\n",
    "\n",
    "    print(f\"Generated {len(holdout_dataset)} synthetic samples\")\n",
    "    sample_seq_lens = [len(s[\"icd_codes\"][1]) for s in synthetic_samples[:3]]\n",
    "    print(f\"Sample ICD sequence lengths: {sample_seq_lens}\")\n",
    "    sample_codes_per_visit = [\n",
    "        [len(visit) for visit in s[\"icd_codes\"][1]] for s in synthetic_samples[:3]\n",
    "    ]\n",
    "    print(f\"Sample codes per visit: {sample_codes_per_visit}\")\n",
    "\n",
    "    return holdout_dataset\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "a9d898c3",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "=== Hold-out Set Generation ===\n",
      "Processor attributes: ['__abstractmethods__', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__slots__', '__str__', '__subclasshook__', '__weakref__', '_abc_impl', '_encode_codes', '_encode_nested_codes', '_is_nested', '_max_nested_len', '_next_index', '_padding', 'code_vocab', 'fit', 'load', 'process', 'save', 'size']\n",
      "Has _padding attribute: True\n",
      "ICD max nested length: 70\n",
      "Padding (via getattr): 20\n",
      "Padding (direct access): 20\n",
      "Observed max (without padding): 50\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing samples: 100%|██████████| 10/10 [00:00<00:00, 4420.17it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Generated 10 synthetic samples\n",
      "Sample ICD sequence lengths: [1, 2, 2]\n",
      "Sample codes per visit: [[70], [70, 70], [70, 70]]\n",
      "\n",
      "=== Inspecting Processed Hold-out Samples ===\n",
      "Feature Tensor Dimensions\n",
      "torch.Size([10, 5, 70])\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "source": [
    "holdout_dataset = generate_holdout_set(sample_dataset, num_samples=10, seed=42)\n",
    "# Create dataloader for hold-out set\n",
    "holdout_loader = get_dataloader(holdout_dataset, batch_size=16, shuffle=False)\n",
    "# Inspect processed samples\n",
    "print(\"\\n=== Inspecting Processed Hold-out Samples ===\")\n",
    "holdout_batch = next(iter(holdout_loader))\n",
    "print(\"Feature Tensor Dimensions\")\n",
    "print(holdout_batch[\"icd_codes\"][1].shape)\n",
    "with torch.no_grad():\n",
    "    holdout_output = model(**holdout_batch)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9a3d1b7f",
   "metadata": {},
   "source": [
    "## Post-hoc ML Processing (Interpretability)\n",
    "Once the model is trained and evaluation metrics are computed, you may want post-hoc interpretability or uncertainty quantification (UQ).\n",
    "\n",
    "This area is still a work in progress for PyHealth 2.0, but the roadmap includes:\n",
    "\n",
    "- Integrated Gradients (gradient-based interpretability for deep neural networks)\n",
    "- Conformal prediction (many other UQ techniques are available [here](https://pyhealth.readthedocs.io/en/latest/api/calib.html))\n"
   ]
  },
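  {
   "cell_type": "markdown",
   "id": "3c9e2f41",
   "metadata": {},
   "source": [
    "Before applying the PyHealth implementation below, here is a minimal, self-contained sketch of what Integrated Gradients computes: average the gradients along the straight path from a baseline to the input, then scale by the input-baseline difference. The toy quadratic target here is purely illustrative (it is not StageNet); `pyhealth.interpret` handles batching, embeddings, and baselines for you.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8b1f6a2d",
   "metadata": {},
   "outputs": [],
   "source": [
    "def integrated_gradients_sketch(f, x, baseline, steps=50):\n",
    "    # Riemann-sum approximation of IG along the path baseline -> x.\n",
    "    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)\n",
    "    path = (baseline + alphas * (x - baseline)).requires_grad_(True)\n",
    "    f(path).sum().backward()\n",
    "    avg_grad = path.grad.mean(dim=0)\n",
    "    return (x - baseline) * avg_grad\n",
    "\n",
    "# Sanity check on f(z) = sum(z**2): by the completeness property,\n",
    "# the attributions should sum to f(x) - f(baseline) = 5.0 here.\n",
    "x_toy = torch.tensor([1.0, 2.0])\n",
    "attr_toy = integrated_gradients_sketch(\n",
    "    lambda z: (z ** 2).sum(dim=-1), x_toy, torch.zeros_like(x_toy)\n",
    ")\n",
    "print(attr_toy, attr_toy.sum().item())\n"
   ]
  },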
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "f268b6ff",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pyhealth.medcode import InnerMap\n",
    "\n",
    "LAB_CATEGORY_NAMES = MortalityPredictionStageNetMIMIC4.LAB_CATEGORY_NAMES\n",
    "\n",
    "# Load the ICD lookup tables once, rather than on every decode call.\n",
    "icd10cm = InnerMap.load(\"ICD10CM\")\n",
    "icd10pc = InnerMap.load(\"ICD10PROC\")\n",
    "\n",
    "def unravel(flat_index: int, shape: torch.Size):\n",
    "    \"\"\"Convert a flat index into per-dimension coordinates (like np.unravel_index).\"\"\"\n",
    "    coords = []\n",
    "    remaining = flat_index\n",
    "    for dim in reversed(shape):\n",
    "        coords.append(remaining % dim)\n",
    "        remaining //= dim\n",
    "    return list(reversed(coords))\n",
    "\n",
    "def decode_token(idx: int, processor, feature_key: str):\n",
    "    \"\"\"Map a vocabulary index back to its token, adding an ICD description when available.\"\"\"\n",
    "    if processor is None or not hasattr(processor, \"code_vocab\"):\n",
    "        return str(idx)\n",
    "    reverse_vocab = {index: token for token, index in processor.code_vocab.items()}\n",
    "    token = reverse_vocab.get(idx, f\"<UNK:{idx}>\")\n",
    "\n",
    "    if feature_key == \"icd_codes\" and token not in {\"<unk>\", \"<pad>\"}:\n",
    "        desc = None\n",
    "        if token in icd10cm:\n",
    "            desc = icd10cm.lookup(token)\n",
    "        elif token in icd10pc:\n",
    "            desc = icd10pc.lookup(token)\n",
    "\n",
    "        if desc:\n",
    "            return f\"{token}: {desc}\"\n",
    "\n",
    "    return token\n",
    "\n",
    "\n",
    "def print_top_attributions(\n",
    "    attributions,\n",
    "    batch,\n",
    "    processors,\n",
    "    top_k: int = 10,\n",
    "):\n",
    "    for feature_key, attr in attributions.items():\n",
    "        attr_cpu = attr.detach().cpu()\n",
    "        if attr_cpu.dim() == 0 or attr_cpu.size(0) == 0:\n",
    "            continue\n",
    "\n",
    "        feature_input = batch[feature_key]\n",
    "        if isinstance(feature_input, tuple):\n",
    "            feature_input = feature_input[1]\n",
    "        feature_input = feature_input.detach().cpu()\n",
    "\n",
    "        flattened = attr_cpu[0].flatten()\n",
    "        if flattened.numel() == 0:\n",
    "            continue\n",
    "\n",
    "        print(f\"\\nFeature: {feature_key}\")\n",
    "        k = min(top_k, flattened.numel())\n",
    "        # topk on |attr| ranks the most influential positions; the signed\n",
    "        # attribution value is re-read from `flattened` below.\n",
    "        _, top_indices = torch.topk(flattened.abs(), k=k)\n",
    "        processor = processors.get(feature_key) if processors else None\n",
    "        is_continuous = torch.is_floating_point(feature_input)\n",
    "\n",
    "        for rank, flat_idx in enumerate(top_indices, 1):\n",
    "            attribution_value = flattened[flat_idx].item()\n",
    "            coords = unravel(flat_idx.item(), attr_cpu[0].shape)\n",
    "\n",
    "            if is_continuous:\n",
    "                actual_value = feature_input[0][tuple(coords)].item()\n",
    "                label = \"\"\n",
    "                if feature_key == \"labs\" and len(coords) >= 1:\n",
    "                    lab_idx = coords[-1]\n",
    "                    if lab_idx < len(LAB_CATEGORY_NAMES):\n",
    "                        label = f\"{LAB_CATEGORY_NAMES[lab_idx]} \"\n",
    "                print(\n",
    "                    f\"  {rank:2d}. idx={coords} {label}value={actual_value:.4f} \"\n",
    "                    f\"attr={attribution_value:+.6f}\"\n",
    "                )\n",
    "            else:\n",
    "                token_idx = int(feature_input[0][tuple(coords)].item())\n",
    "                token = decode_token(token_idx, processor, feature_key)\n",
    "                print(\n",
    "                    f\"  {rank:2d}. idx={coords} token='{token}' \"\n",
    "                    f\"attr={attribution_value:+.6f}\"\n",
    "                )\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "d65d228e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Model prediction for the sampled patient:\n",
      "  True label: 0\n",
      "  Predicted class: 0\n",
      "  Probabilities: [0.5162854]\n",
      "\n",
      "Feature: icd_codes\n",
      "   1. idx=[0, 7] token='E669: Obesity, unspecified' attr=-0.002022\n",
      "   2. idx=[0, 9] token='Z87891: Personal history of nicotine dependence' attr=+0.001884\n",
      "   3. idx=[0, 2] token='E119: Type 2 diabetes mellitus without complications' attr=-0.001407\n",
      "   4. idx=[0, 6] token='K219: Gastro-esophageal reflux disease without esophagitis' attr=+0.001085\n",
      "   5. idx=[0, 3] token='J449: Chronic obstructive pulmonary disease, unspecified' attr=+0.000941\n",
      "   6. idx=[12, 27] token='<pad>' attr=-0.000623\n",
      "   7. idx=[0, 4] token='J45909: Unspecified asthma, uncomplicated' attr=-0.000595\n",
      "   8. idx=[12, 38] token='<pad>' attr=+0.000491\n",
      "   9. idx=[12, 0] token='<pad>' attr=+0.000483\n",
      "  10. idx=[12, 4] token='<pad>' attr=+0.000476\n",
      "\n",
      "Feature: labs\n",
      "   1. idx=[244, 5] Calcium value=0.0000 attr=+0.001602\n",
      "   2. idx=[244, 3] Bicarbonate value=0.0000 attr=+0.001602\n",
      "   3. idx=[244, 4] Glucose value=0.0000 attr=+0.001602\n",
      "   4. idx=[244, 6] Magnesium value=0.0000 attr=+0.001602\n",
      "   5. idx=[244, 1] Potassium value=0.0000 attr=+0.001602\n",
      "   6. idx=[244, 7] Anion Gap value=0.0000 attr=+0.001602\n",
      "   7. idx=[244, 8] Osmolality value=0.0000 attr=+0.001602\n",
      "   8. idx=[244, 9] Phosphate value=0.0000 attr=+0.001602\n",
      "   9. idx=[244, 2] Chloride value=0.0000 attr=+0.001602\n",
      "  10. idx=[244, 0] Sodium value=0.0000 attr=+0.001602\n"
     ]
    }
   ],
   "source": [
    "from pyhealth.interpret.methods import DeepLift, IntegratedGradients\n",
    "\n",
    "def move_batch_to_device(batch, target_device):\n",
    "    moved = {}\n",
    "    for key, value in batch.items():\n",
    "        if isinstance(value, torch.Tensor):\n",
    "            moved[key] = value.to(target_device)\n",
    "        elif isinstance(value, tuple):\n",
    "            moved[key] = tuple(v.to(target_device) for v in value)\n",
    "        else:\n",
    "            moved[key] = value\n",
    "    return moved\n",
    "\n",
    "device = torch.device(\"cpu\")\n",
    "model.to(device)\n",
    "ig = IntegratedGradients(model)\n",
    "\n",
    "sample_batch = next(iter(test_loader))\n",
    "sample_batch_device = move_batch_to_device(sample_batch, device)\n",
    "\n",
    "with torch.no_grad():\n",
    "    output = model(**sample_batch_device)\n",
    "    probs = output[\"y_prob\"]\n",
    "    preds = torch.argmax(probs, dim=-1)\n",
    "    label_key = model.label_key\n",
    "    true_label = sample_batch_device[label_key]\n",
    "\n",
    "    print(\"\\nModel prediction for the sampled patient:\")\n",
    "    print(f\"  True label: {int(true_label.cpu()[0].item())}\")\n",
    "    print(f\"  Predicted class: {int(preds.cpu()[0].item())}\")\n",
    "    print(f\"  Probabilities: {probs[0].cpu().numpy()}\")\n",
    "\n",
    "attributions = ig.attribute(**sample_batch_device)\n",
    "print_top_attributions(attributions, sample_batch_device, input_processors, top_k=10)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "902bb29c",
   "metadata": {},
   "outputs": [],
   "source": [
    "def build_random_embedding_baseline(\n",
    "    model: StageNet,\n",
    "    batch: dict,\n",
    "    scale: float = 0.01,\n",
    "    seed: int = 42,\n",
    ") -> dict:\n",
    "    \"\"\"Construct a non-empty baseline directly in embedding space.\n",
    "\n",
    "    DeepLIFT subtracts the baseline embedding from the actual embedding.\n",
    "    Using pure zeros collapses StageNet masks (all visits become padding),\n",
    "    so we add small random noise to keep at least one timestep active.\n",
    "    \"\"\"\n",
    "\n",
    "    torch.manual_seed(seed)\n",
    "    feature_inputs = {}\n",
    "    for key in model.feature_keys:\n",
    "        value = batch[key]\n",
    "        if isinstance(value, tuple):\n",
    "            value = value[1]\n",
    "        feature_inputs[key] = value.to(model.device)\n",
    "\n",
    "    embedded = model.embedding_model(feature_inputs)\n",
    "    baseline = {}\n",
    "    for key, emb in embedded.items():\n",
    "        baseline[key] = torch.randn_like(emb) * scale\n",
    "    return baseline\n"
   ]
  },
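  {
   "cell_type": "markdown",
   "id": "5e7d9c03",
   "metadata": {},
   "source": [
    "To see why the helper above avoids an all-zeros baseline, here is a small illustration. The masking rule below is a simplified stand-in for how sequence models commonly detect padding (it is not StageNet's exact logic): with a zero baseline every timestep looks padded, while small random noise keeps all of them active.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c4a80e17",
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.manual_seed(0)\n",
    "zero_baseline = torch.zeros(1, 4, 8)  # (batch, time, emb_dim)\n",
    "noisy_baseline = torch.randn_like(zero_baseline) * 0.01\n",
    "\n",
    "# Simplified rule: a timestep counts as 'real' if its embedding is nonzero.\n",
    "zero_mask = zero_baseline.abs().sum(dim=-1) > 0\n",
    "noisy_mask = noisy_baseline.abs().sum(dim=-1) > 0\n",
    "print(zero_mask.any().item())   # False: every visit collapses to padding\n",
    "print(noisy_mask.all().item())  # True: all timesteps stay active\n"
   ]
  },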
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b32ef9e4",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Feature: icd_codes\n",
      "   1. idx=[0, 1] token='42832: Chronic diastolic heart failure' attr=+0.079825\n",
      "   2. idx=[0, 6] token='V5861: Long-term (current) use of anticoagulants' attr=-0.070667\n",
      "   3. idx=[0, 5] token='V4501: Cardiac pacemaker in situ' attr=-0.058043\n",
      "   4. idx=[0, 10] token='370: Keratitis' attr=+0.056914\n",
      "   5. idx=[2, 10] token='V4501: Cardiac pacemaker in situ' attr=-0.050888\n",
      "   6. idx=[0, 7] token='4019: Unspecified essential hypertension' attr=-0.048502\n",
      "   7. idx=[0, 3] token='4280: Congestive heart failure, unspecified' attr=+0.045676\n",
      "   8. idx=[0, 2] token='4233: Cardiac tamponade' attr=+0.037603\n",
      "   9. idx=[2, 13] token='4019: Unspecified essential hypertension' attr=-0.031371\n",
      "  10. idx=[2, 5] token='4280: Congestive heart failure, unspecified' attr=-0.025716\n",
      "\n",
      "Feature: labs\n",
      "   1. idx=[400, 5] Calcium value=0.0000 attr=+0.004160\n",
      "   2. idx=[400, 3] Bicarbonate value=0.0000 attr=+0.004160\n",
      "   3. idx=[400, 4] Glucose value=0.0000 attr=+0.004160\n",
      "   4. idx=[400, 6] Magnesium value=0.0000 attr=+0.004160\n",
      "   5. idx=[400, 1] Potassium value=0.0000 attr=+0.004160\n",
      "   6. idx=[400, 7] Anion Gap value=0.0000 attr=+0.004160\n",
      "   7. idx=[400, 8] Osmolality value=0.0000 attr=+0.004160\n",
      "   8. idx=[400, 9] Phosphate value=0.0000 attr=+0.004160\n",
      "   9. idx=[400, 2] Chloride value=0.0000 attr=+0.004160\n",
      "  10. idx=[400, 0] Sodium value=0.0000 attr=+0.004160\n"
     ]
    }
   ],
   "source": [
    "deeplift = DeepLift(model)\n",
    "\n",
    "random_baseline = build_random_embedding_baseline(model, sample_batch_device)\n",
    "attributions = deeplift.attribute(\n",
    "    baseline=random_baseline,\n",
    "    **sample_batch_device,\n",
    ")\n",
    "print_top_attributions(attributions, sample_batch_device, input_processors, top_k=10)\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "medical_coding_demo",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
