{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3514a01-741a-4561-aabf-b28d97a942b3",
   "metadata": {},
   "outputs": [],
   "source": [
    "%load_ext autoreload\n",
    "%autoreload 2"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "09805c49-fc9c-4d59-a179-655294642c7e",
   "metadata": {},
   "source": [
    "## Training and Validation for Torchvision Maskrcnn on Volpy Data \n",
    "Tutorial for training and valdiation Maskrcnn model\n",
    "\n",
    "@authors: Changjia Cai, Erik Thompson, and Manuel Paez\n",
    "\n",
    "Date Created: August 28th, 2024\n",
    "\n",
    "Date Updated: July 7th, 2025"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "77758223-d306-4487-b65c-1c14fc9556fd",
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "import os\n",
    "from skimage.color import rgb2gray, gray2rgb\n",
    "import torch\n",
    "from torch.optim.lr_scheduler import CyclicLR\n",
    "from torch.utils.data import Dataset, DataLoader\n",
    "from torchvision import tv_tensors\n",
    "from tqdm import tqdm\n",
    "\n",
    "from config import Config\n",
    "from model import get_model_instance_segmentation, mrcnn_inference, thresholded_predictions \n",
    "from neurons import NeuronsDataset, train_one_epoch, validate, perform_final_evaluation\n",
    "from utils import ScaleImage, collate_fn, data_transform, f1_score, nf_match_neurons_in_binary_masks, normalize_image\n",
    "from visualize import apply_masks, draw_boxes, vp_load_image"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e34eb60b-2b21-4556-bfa5-279265f185b4",
   "metadata": {},
   "source": [
    "#### Check if cuda is available"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d19e5e25-d963-42bb-97eb-aab92a63e655",
   "metadata": {},
   "outputs": [],
   "source": [
    "device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')\n",
    "print(f\"Using device: {device}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "047d8de9-bb94-49ab-a980-39ad76b0ba1a",
   "metadata": {},
   "source": [
    "## Volpy Data, Training, and Validation Sets\n",
    "There are 24 datasets in total, 3 types of different voltage imaging datasets recorded from mouse L1 cortex (L1), mouse hippocampus (HPC), and zebrafish tegmental area (TEG). Set up the image directories from https://zenodo.org/records/4515768 as follows:\n",
    "\n",
    "    volpy_training_data/\n",
    "        images/\n",
    "            HPC.29.04.npz\n",
    "            ...\n",
    "        masks/\n",
    "            HPC.29.04_mask.npz\n",
    "            ...      \n",
    "\n",
    "### Config \n",
    "The base configuration class 'Config' from 'config.py' contains the variables necessary for training and validation on the Volpy Dataset. The following 'DATASET_PATH' should be directed to 'volpy_training_data'. \n",
    "\n",
    "    Config: \n",
    "        # Paths\n",
    "        DATASET_PATH = r'~./volpy_training_data/'\n",
    "        MODEL_SAVE_DIR = r'~/volpy_models/'\n",
    "\n",
    "        # Model and Training Hyperparameters\n",
    "        NUM_CLASSES = 1 + 1  # Background + Neuron\n",
    "        BATCH_SIZE = 2 \n",
    "        NUM_EPOCHS = 100\n",
    "        MAX_LR = 0.005\n",
    "        BASE_LR = 0.000001\n",
    "        STEP_SIZE_UP = 3\n",
    "        STEP_SIZE_DOWN = 7\n",
    "\n",
    "        # Data Loading, Splitting, and Inference\n",
    "        # IMAGES_PER_GPU = 2\n",
    "        RANDOM_SPLIT = False # True for random split, False for fixed split from map below.\n",
    "        NUM_TEST_RANDOM = 8\n",
    "        NUM_TORCH_WORKERS = 4 \n",
    "        DATASET_REGION_MAP = {\n",
    "            'HPC': [0, 1, 2, 3],\n",
    "            'L1': [12, 13, 14],\n",
    "            'TEG': [21],\n",
    "            'Train': [4, 5, 6, 7, 8, 9, 10, 11, 15, 16, 17, 18, 19, 20, 22, 23]\n",
    "        }\n",
    "\n",
    "        INFERENCE_THRESHOLD = 0.5\n",
    "\n",
    "        # Logging and Saving Frequency\n",
    "        PRINT_FREQ = 1 \n",
    "        SAVE_FREQ = 20 \n",
    "\n",
    "### Dataset \n",
    "\n",
    "Building on the standard `torch.utils.data.Dataset`class, we use 'NeuronsDataset' from neurons.py. The  `__getitem__` method of this class should return an `image` and a `target` dictionary delineating the different objects (box/mask) in the image:\n",
    "\n",
    "    image: torchvision.tv_tensors.Image of shape [3, H, W]: can be a pure tensor, or a PIL Image of size (H, W)\n",
    "    target: a dict containing the following keys\n",
    "        masks : torchvision uint8 binary masks for each object (N,H,W) (N masks)\n",
    "        boxes (bounding boxes)  (nx4)\n",
    "        labels (int) label for each bounding box (note 0 is background, so if you have no bg, start with 1)\n",
    "        image_id (int) unique image id\n",
    "        area (float) area of bounding box \n",
    "        iscrowd (uint8) instances with `iscrowd=True` will be ignored during evaluation \n",
    "\n",
    "The `data_transform` function from `utils.py` returns the transform pipeline (augmentations are enabled only when `train=True`). \n",
    "On Windows, set the number of workers to 0; otherwise 4 or more is reasonable.\n",
    "\n",
    "#### Dataset Split\n",
    "The dataset split can be randomized split (such that each of L1, HPC, and TEG has a validation and training set) or can be set fixed. \n",
    "\n",
    "#### Data Transforms\n",
    "From 'utils.py':\n",
    "\n",
    "        transforms.append(T.RandomHorizontalFlip(p=0.5))\n",
    "        transforms.append(T.RandomVerticalFlip(p=0.5))\n",
    "        transforms.append(T.RandomApply([T.RandomRotation(degrees=(-5, 5))], p=0.5))\n",
    "        transforms.append(T.ColorJitter(brightness=0.5,\n",
    "                                        contrast=0.5,\n",
    "                                        saturation=0.5,\n",
    "                                        hue=0))\n",
    "        transforms.append(T.GaussianBlur(kernel_size=(5, 5), sigma=(0.001, 0.3)))\n",
    "        transforms.append(T.SanitizeBoundingBoxes(min_size=2))\n",
    "\n",
    "#### Additional Training Setup:\n",
    "Best Validation indices: [ 0, 1, 2, 3, 12, 13, 14, 21] \n",
    "\n",
    "Best Training indices:[4, 5, 6, 7, 8, 9, 10, 11, 15, 16, 17, 18, 19, 20, 22, 23] "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5670e1de-796a-4bcf-b731-712dd76312ec",
   "metadata": {},
   "outputs": [],
   "source": [
    "config = Config()\n",
    "\"\"\" Main function to run the training and validation pipeline.\"\"\"\n",
    "os.makedirs(config.MODEL_SAVE_DIR, exist_ok=True)\n",
    "\n",
    "print(\"Loading datasets...\")\n",
    "dataset_train = NeuronsDataset(config.DATA_DIR, data_transform(train=True))\n",
    "dataset_val = NeuronsDataset(config.DATA_DIR, data_transform(train=False))\n",
    "\n",
    "if config.RANDOM_SPLIT:\n",
    "    print(\"Using random split for train/validation sets.\")\n",
    "    indices = list(range(len(dataset_train_instance)))\n",
    "    np.random.shuffle(indices)\n",
    "    train_indices = indices[:-config.NUM_TEST_RANDOM]\n",
    "    val_indices = indices[-config.NUM_TEST_RANDOM:]\n",
    "else:\n",
    "    print(\"Using fixed split based on DATASET_REGION_MAP.\")\n",
    "    train_indices = config.DATASET_REGION_MAP['Train']\n",
    "    val_indices = [idx for region, inds in config.DATASET_REGION_MAP.items() if region != 'Train' for idx in inds]\n",
    "    \n",
    "val_indices_path = os.path.join(config.MODEL_SAVE_DIR, 'validation_indices.npy')\n",
    "np.save(val_indices_path, val_indices)\n",
    "print(f\"Validation indices for this run have been saved to {val_indices_path}\")\n",
    "\n",
    "dataset_train = torch.utils.data.Subset(dataset_train, train_indices)\n",
    "dataset_val = torch.utils.data.Subset(dataset_val, val_indices)\n",
    "\n",
    "data_loader_train = DataLoader(dataset_train, batch_size=config.BATCH_SIZE, shuffle=True,\n",
    "                                   num_workers=config.NUM_TORCH_WORKERS, collate_fn=collate_fn)\n",
    "data_loader_val = DataLoader(dataset_val, batch_size=1, shuffle=False,\n",
    "                                 num_workers=config.NUM_TORCH_WORKERS, collate_fn=collate_fn)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e8a7d7ce-86e2-4793-ba6c-bda144cb8f79",
   "metadata": {},
   "source": [
    "### Model\n",
    "We use a model pre-trained on the COCO dataset, and fine-tune the last layer. \n",
    "\n",
    "### Optimizer + lr scheduler\n",
    "We use the SGD optimizer and a CycleLR scheduler "
   ]
  },
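  {
   "cell_type": "markdown",
   "id": "b7c1f2aa-93de-4e2e-8f11-3d5c2a9e71aa",
   "metadata": {},
   "source": [
    "`get_model_instance_segmentation` in `model.py` likely follows the standard torchvision recipe: load a COCO-pretrained Mask R-CNN and swap out its prediction heads for our number of classes. A hedged sketch of that recipe (not necessarily the exact `model.py` code):\n",
    "\n",
    "```python\n",
    "import torchvision\n",
    "from torchvision.models.detection.faster_rcnn import FastRCNNPredictor\n",
    "from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor\n",
    "\n",
    "def get_model_sketch(num_classes, weights='DEFAULT'):\n",
    "    # weights='DEFAULT' loads COCO-pretrained weights.\n",
    "    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights=weights)\n",
    "    # Replace the box predictor head for our number of classes.\n",
    "    in_features = model.roi_heads.box_predictor.cls_score.in_features\n",
    "    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)\n",
    "    # Replace the mask predictor head as well.\n",
    "    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels\n",
    "    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)\n",
    "    return model\n",
    "```"
   ]
  },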
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d00273fd-00bc-4677-af15-5f8634ba2c97",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Model\n",
    "model = get_model_instance_segmentation(config.NUM_CLASSES)\n",
    "model.to(device)\n",
    "\n",
    "# Optimizer + lr scheduler\n",
    "params = [p for p in model.parameters() if p.requires_grad]\n",
    "optimizer = torch.optim.SGD(params, lr=config.MAX_LR, momentum=0.9, weight_decay=0.0001) \n",
    "lr_scheduler = CyclicLR(optimizer, base_lr=config.BASE_LR, # Initial learning rate which is the lower boundary in the cycle for each parameter group\n",
    "                        max_lr=config.MAX_LR, # Upper learning rate boundaries in the cycle for each parameter group\n",
    "                        step_size_up=config.STEP_SIZE_UP, # Number of training iterations in the increasing half of a cycle\n",
    "                        step_size_down=config.STEP_SIZE_DOWN,\n",
    "                        mode=\"triangular2\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cb45e0ac-d86d-4fb7-b1e6-8bb9bc51bff8",
   "metadata": {},
   "source": [
    "### Training the Network"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4d9105c1-cce9-40a6-a318-fb94d4e22e5a",
   "metadata": {},
   "outputs": [],
   "source": [
    "all_train_losses, all_val_losses, all_lrs = [], [], []\n",
    "print(f\"**TRAIN {config.NUM_EPOCHS} epochs. PRINT every {config.PRINT_FREQ} epoch(s). \"\n",
    "          f\"SAVE every {config.SAVE_FREQ} epoch(s).**\")\n",
    "\n",
    "for epoch in range(config.NUM_EPOCHS):\n",
    "    train_loss = train_one_epoch(model, optimizer, data_loader_train, device, epoch)\n",
    "    val_loss = validate(model, data_loader_val, device, epoch)\n",
    "    current_lr = optimizer.param_groups[0][\"lr\"]\n",
    "\n",
    "    all_train_losses.append(train_loss)\n",
    "    all_val_losses.append(val_loss)\n",
    "    all_lrs.append(current_lr)\n",
    "\n",
    "    lr_scheduler.step()\n",
    "\n",
    "    if (epoch + 1) % config.PRINT_FREQ == 0:\n",
    "        print(f\"Epoch {epoch+1}/{config.NUM_EPOCHS} | Train Loss: {train_loss:.4f}, Val Loss: {val_loss:.4f}, LR: {current_lr:.6f}\")\n",
    "\n",
    "    if (epoch + 1) % config.SAVE_FREQ == 0:\n",
    "        model_path = os.path.join(config.MODEL_SAVE_DIR, f'mrcnn_epoch_{epoch+1}.pt')\n",
    "        torch.save(model.state_dict(), model_path)\n",
    "        print(f\"\\t Model saved to {model_path}\")\n",
    "\n",
    "history = {'train_loss': all_train_losses, 'val_loss': all_val_losses, 'lr': all_lrs}\n",
    "torch.save(history, os.path.join(config.MODEL_SAVE_DIR, 'volpy_train_history.pt'))\n",
    "print(\"\\nDONE!\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0e543c84-d526-4d1a-83a3-a2d6fa510b7d",
   "metadata": {},
   "source": [
    "### Plot Loss Function "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "eab1beb7-7102-4960-8556-ac76f3a7704d",
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.plot(np.array(all_train_losses), color='blue', marker='.', label='train')\n",
    "plt.plot(np.array(all_val_losses), color='red', marker='.', label='validation')\n",
    "plt.legend()\n",
    "plt.xlabel('epoch')\n",
    "plt.ylabel('net loss')\n",
    "plt.title('loss function across different epochs')\n",
    "plt.grid()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "79544191-a34f-43b0-8081-dd5bcfc0f53a",
   "metadata": {},
   "source": [
    "### Plot Learning Rate vs Epoch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9385a18e-1afc-4401-a0da-ac40200a7f26",
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.plot(all_lrs, marker='.')\n",
    "plt.xlabel('epoch')\n",
    "plt.ylabel('learning rate')\n",
    "plt.title('learning rate across different epochs')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b9a368a1-b14c-40c6-a14e-a777de676378",
   "metadata": {},
   "source": [
    "### Inference using Test Set\n",
    "\n",
    "For this, we will compute F1 scores for all datasets\n",
    "\n",
    "#### Perform_final_evaluation \n",
    "From 'neurons.py', runs inference on the validtion set, calculates F1 scores for each region (i.e. HPC, L1, TEG), and reports the results. \n",
    "\n",
    "Note: Aim for 74 % >"
   ]
  },
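  {
   "cell_type": "markdown",
   "id": "c3d4e5f6-2b71-4a2b-9c8d-7e6f5a4b3c2d",
   "metadata": {},
   "source": [
    "To make the metric concrete, here is a hedged sketch of a mask-level F1 score: match predicted masks to ground-truth masks one-to-one by IoU, then combine precision and recall. The repo's `f1_score` and `nf_match_neurons_in_binary_masks` may differ in their matching details.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def mask_iou(a, b):\n",
    "    # Intersection-over-union of two binary masks.\n",
    "    inter = np.logical_and(a, b).sum()\n",
    "    union = np.logical_or(a, b).sum()\n",
    "    return inter / union if union else 0.0\n",
    "\n",
    "def f1_from_masks(gt_masks, pred_masks, iou_thr=0.5):\n",
    "    # Greedy one-to-one matching of ground truth to predictions by IoU.\n",
    "    matched, used = 0, set()\n",
    "    for g in gt_masks:\n",
    "        best, best_j = 0.0, None\n",
    "        for j, p in enumerate(pred_masks):\n",
    "            if j in used:\n",
    "                continue\n",
    "            iou = mask_iou(g, p)\n",
    "            if iou > best:\n",
    "                best, best_j = iou, j\n",
    "        if best_j is not None and best >= iou_thr:\n",
    "            matched += 1\n",
    "            used.add(best_j)\n",
    "    precision = matched / max(len(pred_masks), 1)\n",
    "    recall = matched / max(len(gt_masks), 1)\n",
    "    return 2 * precision * recall / (precision + recall) if matched else 0.0\n",
    "```"
   ]
  },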
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4095866b-0ce0-4004-9c38-314cf9dd1e54",
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"Loads a trained model and runs inference on the validation set.\"\"\"\n",
    "val_indices_path = os.path.join(config.MODEL_SAVE_DIR, 'validation_indices.npy')\n",
    "    \n",
    "val_indices = np.load(val_indices_path)\n",
    "\n",
    "full_dataset = NeuronsDataset(config.DATA_DIR, data_transform(train=False))\n",
    "dataset_val = torch.utils.data.Subset(full_dataset, val_indices)\n",
    "data_loader_val = DataLoader(dataset_val, batch_size=1, shuffle=False, num_workers=config.NUM_TORCH_WORKERS, collate_fn=collate_fn)\n",
    "\n",
    "perform_final_evaluation(model, config, device, plot_results=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a4afa8f9-9e8c-4a0a-bf98-064b3596f23a",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "caiman_pytorch2",
   "language": "python",
   "name": "caiman_pytorch2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
