{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "_x6HaLO5rCf9",
    "colab_type": "text"
   },
   "source": [
    "# Introduction to Medical Image Registration with DeepReg, between Old and New\n",
    "\n",
    "### Authors: \n",
    "  - Nina Montana Brown <sup>1, 2</sup>\n",
    "  - Yunguan Fu <sup>1, 2, 3</sup>\n",
    "  - Shaheer Saeed <sup>1, 2</sup>\n",
    "  - Adrià Casamitjana <sup>2</sup>\n",
    "  - Zachary Baum <sup>1, 2</sup>\n",
    "  - Rémi Delaunay <sup>1, 2, 4</sup>\n",
    "  - Qianye Yang <sup>1, 2</sup>\n",
    "  - Alexander Grimwood <sup>1, 2</sup>\n",
    "  - Zhe Min <sup>1</sup>\n",
    "  - Ester Bonmati <sup>1, 2</sup>\n",
    "  - Tom Vercauteren <sup>4</sup>\n",
    "  - Matthew J. Clarkson <sup>1, 2</sup>\n",
    "  - Yipeng Hu <sup>1, 2</sup>\n",
    "\n",
    "Affiliations:\n",
    " - [1] Wellcome/EPSRC Centre for Surgical and Interventional Sciences, University College London\n",
    " - [2] Centre for Medical Image Computing, University College London\n",
    " - [3] InstaDeep\n",
    " - [4] Department of Surgical & Interventional Engineering, King’s College London\n",
    "\n",
    "# Table of Contents\n",
    "1. [Objective of the Tutorial](#obj)\n",
    "2. [Set-up](#setup)\n",
    "3. [Introduction to Registration](#IntroReg)\n",
    "4. [Registration with Deep Learning](#DeepRegistrationIntro)\n",
    "5. [Two Classical Registration Examples](#classical-examples) \n",
    "6. [An adapted DeepReg Demo](#deep-example)\n",
    "7. [Concluding Remarks](#conclusion)\n",
    "8. [References](#references)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "K7RjsIyi7CqZ",
    "colab_type": "text"
   },
   "source": [
    "# Objective of the Tutorial <a name=\"obj\"></a>\n",
    "This tutorial introduces a new open-source project, [DeepReg](https://github.com/DeepRegNet/DeepReg), currently based on the latest release of TensorFlow 2. This package is designed to accelerate research in image registration using parallel computing and deep learning, providing simple, tested entry points to pre-designed networks so that users can get a head start. Additionally, DeepReg provides lower-level building blocks, such as custom TensorFlow layers, which allow more seasoned researchers to build more complex pipelines.\n",
    "\n",
    "A previous MICCAI workshop, [learn2reg](https://learn2reg.github.io/), provided an excellent example of novel algorithms and interesting approaches in this active research area, whilst this tutorial explores the strength of the simple, yet generalisable design of DeepReg. In particular, we aim to:\n",
    "- Explain basic concepts in medical image registration;\n",
    "- Explore the links between modern algorithms using neural networks and the classical iterative algorithms (also using DeepReg);\n",
    "- Introduce the versatile capabilities of DeepReg, with diverse examples from real clinical challenges.\n",
    "\n",
    "Since DeepReg has a pre-packaged command line interface, minimal scripting and coding experience is necessary to follow along with this tutorial. Accompanying the tutorial are the [DeepReg documentation](https://deepreg.readthedocs.io/en/latest/) and a growing number of demos using real, openly accessible clinical data. This tutorial will get you started with DeepReg by illustrating a number of examples with step-by-step instructions."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ey3CnrZuBtCi",
    "colab_type": "text"
   },
   "source": [
    "# Set-up <a name=\"setup\"></a>\n",
    "This tutorial depends on the package [DeepReg](https://github.com/DeepRegNet/DeepReg), which in turn has external dependencies managed by `pip`. The current version is implemented in TensorFlow 2 and requires Python>=3.7.\n",
    "\n",
    "(You should be able to follow along in a local copy of the notebook - in this case, follow instructions for set up at the [quickstart guide](https://deepreg.readthedocs.io/en/latest/getting_started/install.html).)\n",
    "\n",
    "Training DNNs is computationally expensive. We have tested this demo with GPUs provided by Google through Google Colab. Training times have been roughly measured and indicated where appropriate. You can run this notebook on CPU, but we have not tested how long that would take."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "hLZnPHlqZM9R",
    "colab_type": "text"
   },
   "source": [
    "First, ensure that you have GPU acceleration enabled for more efficient training. To do this, go to the Edit tab on the upper left hand bar: Edit > Notebook Settings > Enable GPU."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "oedTHyNoDWzw",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "import os\n",
    "\n",
    "os.chdir(\"/content\")\n",
    "# Make a directory \"MICCAI_2020_reg_tutorial\"\n",
    "if not os.path.exists(\"./MICCAI_2020_reg_tutorial\"):\n",
    "  os.makedirs(\"./MICCAI_2020_reg_tutorial\")\n",
    "# Move into the dir\n",
    "os.chdir(\"./MICCAI_2020_reg_tutorial\")\n",
    "print(os.getcwd())"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Z-mMEp4d9ZP8",
    "colab_type": "text"
   },
   "source": [
    "Now we set up these dependencies by installing DeepReg. This may take a few minutes. You may need to restart the runtime after installing for the first time, and there might be a few version conflicts between the pre-installed datascience libraries and DeepReg's dependencies - but those conflicting packages are not required in this tutorial."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "oCMVBQEKnrNh",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "# Clone the DeepReg repository which contains the code\n",
    "! git clone https://github.com/DeepRegNet/DeepReg\n",
    "%cd ./DeepReg/\n",
    "# Switch to a fixed version\n",
    "! git checkout tags/miccai2020-challenge\n",
    "# pip install into the notebook env\n",
    "! pip install -e . --no-cache-dir\n",
    "print(os.getcwd())"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "aErnxCVsPY7b",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "# We import some utility modules.\n",
    "import nibabel\n",
    "import tensorflow as tf \n",
    "import deepreg.model.layer as layer\n",
    "import deepreg.model.loss.image as image_loss\n",
    "import deepreg.model.loss.deform as deform_loss\n",
    "import deepreg.model.layer_util as layer_util\n",
    "import matplotlib.pyplot as plt\n",
    "import os\n",
    "import h5py\n",
    "import numpy as np\n",
    "from tensorflow.keras.utils import get_file\n",
    "\n",
    "# We set the plot size to some parameters.\n",
    "plt.rcParams[\"figure.figsize\"] = (100,100)\n",
    "print(os.getcwd())\n",
    "if not os.getcwd() == \"/content/MICCAI_2020_reg_tutorial/DeepReg\":\n",
    "  os.chdir(\"/content/MICCAI_2020_reg_tutorial/DeepReg/\")\n",
    "  print(os.getcwd())"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "E-svl-40ryKV",
    "colab_type": "text"
   },
   "source": [
    "# Introduction to Registration <a name=\"IntroReg\"></a>\n",
    "\n",
    "Image registration is the mapping of one image coordinate system to another. It can be sub-divided into rigid and non-rigid registration, depending on whether higher-dimensional tissue deformations are modelled, as opposed to, for example, a 6 degree-of-freedom (3 translational axes + 3 rotational axes) rigid transformation. Data may be aligned in many ways, spatially and temporally being two key ones. Image registration is an essential process in many clinical applications and computer-assisted interventions [1, 11].\n",
    "\n",
    "Applications of medical image registration include - but are not limited to:\n",
    "* Multi-modal registration for image-guided surgery: for example, aligning real-time ultrasound scans to pre-operative CT or MRI scans to achieve real-time guidance in abdominal applications [2, 3].\n",
    "* Atlas-based image segmentation: aligning new images to those carefully segmented, such that the reference segmentations can be propagated to those new images [4].\n",
    "* Longitudinal comparison of images for a given patient with the same imaging modality: for example, comparing the outcome of a given cancer treatment in a patient's scans over time [5, 16].\n",
    "* Inter-subject comparison: for example, a population study of organ shapes [6].\n",
    "\n",
    "Typically, we refer to one of the images in the pair as the *moving* image and the other as the *fixed* image. The goal is to find the *correspondence* that aligns the moving image to the fixed image - the transform will project the *moving* coordinates into the *fixed* coordinate space. The correspondence specifies the mapping between all voxels from one image to those from another image. The correspondence can be represented by a dense displacement field (DDF) [9], defined as a set of displacement vectors for all pixels or voxels from one image to another. By using these displacement vectors, one image can be \"warped\" to become more \"similar\" to another.\n",
    "\n",
    "The animation below shows a non-rigid example. A 3D grid, representing voxels in a 3D image volume, is being warped by displacing each voxel position.\n",
    "\n",
    "![Alt Text](https://github.com/YipengHu/example-data/raw/master/media4demo/grid_warp_0.gif)"
   ]
  },
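  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "As a minimal sketch of this idea (plain NumPy, independent of DeepReg), a DDF is simply one displacement vector per pixel/voxel; adding it to a reference grid of coordinates gives the locations at which the moving image is resampled:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# reference grid of (i, j) coordinates for a toy 4 x 4 image\n",
    "grid = np.stack(np.meshgrid(np.arange(4), np.arange(4), indexing=\"ij\"), axis=-1)\n",
    "\n",
    "# a DDF with the same spatial shape: one 2D displacement per pixel\n",
    "ddf = np.zeros((4, 4, 2))\n",
    "ddf[..., 0] = 0.5  # displace every pixel by half a pixel along axis 0\n",
    "\n",
    "# coordinates at which the moving image would be resampled\n",
    "warped_coords = grid + ddf\n",
    "print(warped_coords.shape)  # (4, 4, 2)\n",
    "```"
   ]
  },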
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "8-HV2XYmRp1e",
    "colab_type": "text"
   },
   "source": [
    "## Classical vs Learning-based Methods\n",
    "Image registration has been an active area of research for decades. Historically, image registration algorithms posed registration as an optimization problem between a given pair of moving and fixed images. In this tutorial, we refer to these algorithms as the _classical methods_ - they use only a pair of images, as opposed to the _learning-based_ algorithms, which require a separate training step with many more pairs of training images (just like all other machine learning problems)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "CRBUHroPRFzI",
    "colab_type": "text"
   },
   "source": [
    "\n",
    "## Classical Registration Methods\n",
    "\n",
    "In the classical methods, a pre-defined _transformation model_, rigid or nonrigid, is iteratively _optimised_ to minimise a _similarity measure_ - a metric that quantifies how \"similar\" the warped moving image and the fixed image are.\n",
    "\n",
    "Similarity measures can be designed to consider only important image features (extracted from a pre-processing step) or directly sample all intensity values from both images. As such, we can subdivide algorithms into two sub-types:\n",
    "\n",
    "* **Feature-based registration**: Important features in images are used to calculate transformations between the dataset pairs. For example, point-set registration - based on a type of feature widely used in many applications - finds a transformation between point clouds. These transformations can be estimated using Iterative Closest Point (ICP) [7] or Coherent Point Drift (CPD) [8], for rigid or nonrigid transformations, respectively.\n",
    "\n",
    "  For example, the basis of ICP is to iteratively minimise the distance between two point clouds by matching the points from one set to the closest points in the other set. The transformation is then estimated from the matched set of corresponding point pairs, and the process is repeated many times to update the correspondence and the transformation in an alternating fashion.\n",
    "\n",
    "\n",
    "* **Intensity-based registration**: Typically, medical imaging data does not come in point cloud format, but rather as 2D, 3D, and 4D matrices with a range of intensity values at each pixel or voxel. As such, different measures can be applied directly to the intensity distributions of the data to measure the similarity between the moving and fixed images. Examples of such measures are cross-correlation, mutual information, and the simple sum of squared differences - these intensity-based algorithms can optimise a transformation model directly using the images, without a feature extraction step.\n",
    "\n",
    "Many of today's deep-learning-based methods have been heavily influenced - and derived their methods from - these prior areas of research."
   ]
  },
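  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "The alternating \"match, then estimate\" loop of ICP described above can be sketched in a few lines of NumPy. This is a hypothetical toy implementation for illustration only (a rigid 2D case with a closed-form Kabsch update), not part of DeepReg:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def icp_rigid_2d(moving, fixed, n_iters=20):\n",
    "    # Iteratively: (1) match each moving point to its nearest fixed point,\n",
    "    # (2) estimate the best rigid transform for the matches (Kabsch),\n",
    "    # (3) apply it, then repeat.\n",
    "    src = moving.copy()\n",
    "    for _ in range(n_iters):\n",
    "        d2 = ((src[:, None, :] - fixed[None, :, :]) ** 2).sum(-1)\n",
    "        matched = fixed[d2.argmin(axis=1)]  # nearest-neighbour correspondence\n",
    "        mu_s, mu_m = src.mean(0), matched.mean(0)\n",
    "        u, _, vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))\n",
    "        d = np.diag([1.0, np.sign(np.linalg.det(vt.T @ u.T))])\n",
    "        r = vt.T @ d @ u.T  # rotation, reflection-safe\n",
    "        t = mu_m - r @ mu_s  # translation\n",
    "        src = src @ r.T + t\n",
    "    return src\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "fixed_pts = rng.uniform(-1.0, 1.0, (50, 2))\n",
    "theta = 0.1  # a small known rotation to recover\n",
    "r_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])\n",
    "moving_pts = fixed_pts @ r_true.T + np.array([0.05, -0.02])\n",
    "aligned = icp_rigid_2d(moving_pts, fixed_pts)\n",
    "print(np.abs(aligned - fixed_pts).max())  # small residual if ICP has converged\n",
    "```"
   ]
  },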
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ZCCb9pW5RJ4B",
    "colab_type": "text"
   },
   "source": [
    "## Why use Deep Learning for Medical Image Registration?\n",
    "It is usually challenging for classical methods to handle real-time registration of large feature sets or high-dimensional image volumes owing to their computationally intensive, iterative nature, especially in the case of 3D nonrigid registration. Even state-of-the-art classical methods implemented on GPU still struggle to achieve real-time performance for many time-critical clinical applications.\n",
    "\n",
    "Secondly, classical algorithms are inherently pairwise approaches that cannot directly take population data statistics into account; they also rely on a well-designed transformation model and a valid, robust similarity measure being available, which is challenging for many real-world tasks.\n",
    "\n",
    "In contrast, the computationally efficient inference and the ability to model complex, non-linear transformations of learning-based methods have motivated the development of neural networks that infer the optimal transformation from unseen data [1]. \n",
    "\n",
    "However, it is important to point out that \n",
    "* Many deep-learning-based methods are still subject to the limitations discussed with classical methods, especially those that borrow transformation models and similarity measures directly from the classical algorithms;\n",
    "* Deep learning models are limited at inference time by how the model was trained - it is well known that deep learning models can overfit to the training data;\n",
    "* Deep learning models can be far more computationally intensive to train than classical methods are to run at inference;\n",
    "* Classical algorithms have been refined for many clinical applications and still work well."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Icbcy_ayQE_u",
    "colab_type": "text"
   },
   "source": [
    "\n",
    "# Registration with Deep Learning <a name=\"DeepRegistrationIntro\"></a>\n",
    "\n",
    "In recent years, image registration has been reformulated as a machine learning problem, in which many pairs of moving and fixed images are passed to a machine learning model (usually, nowadays, a neural network) that learns to predict a transformation between a new pair of images.\n",
    "\n",
    "In this tutorial, we investigate three factors that determine a deep learning approach for image registration:\n",
    "\n",
    "1. What type of network output is one trying to predict?\n",
    "2. What type of image data is being registered? Are there any other data, such as segmentations, to support the registration?\n",
    "3. Are the data paired? Are they labeled?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "SIUQl8CnQUVe",
    "colab_type": "text"
   },
   "source": [
    "## Types of Network Output\n",
    "\n",
    "We need to choose what type of network output we want to predict. \n",
    "\n",
    "- **Predicting a dense displacement field**\n",
    "\n",
    "  Given a pair of moving and fixed images, a registration network can be trained to output a dense displacement field (DDF) [9] of the same shape as the moving image. Each value in the DDF can be considered as the displacement of the corresponding pixel / voxel of the moving image. Therefore, the DDF defines a mapping from the moving image's coordinates to the fixed image's.\n",
    "\n",
    "  _In this tutorial, we mainly focus on DDF-based methods._\n",
    "\n",
    "\n",
    "- **Predicting a static velocity field**\n",
    "\n",
    "  Another option is to predict a static dense velocity field (SVF or DVF) between a pair of images, from which a diffeomorphic DDF can be obtained by numerical integration. We refer you to [9] and [15] for more details.\n",
    "\n",
    "\n",
    "- **Predicting an affine transformation**\n",
    "\n",
    "  A more constrained option is to predict an affine transformation, parameterised by the 12 degrees of freedom of the affine transformation matrix. The corresponding DDF can then be computed to resample the moving image in the fixed image space.\n",
    "\n",
    "\n",
    "- **Predicting a region of interest**\n",
    "\n",
    "  Instead of outputting a transformation between coordinates, the network can be given a moving image, a fixed image, and a region of interest (ROI) in the moving image, and predict the corresponding ROI in the fixed image directly. Interested readers are referred to the MICCAI 2019 paper [10].\n",
    "\n",
    "\n",
    "## Data availability, level of supervision and network training strategies\n",
    "\n",
    "Depending on the availability of data labels, registration networks can be trained with different approaches. These will influence our choice of loss function.\n",
    "\n",
    "### Unsupervised\n",
    "\n",
    "When no labels are available, a registration network can be trained in an unsupervised fashion using only the moving and fixed image pairs: the predicted transformation is used to warp the moving image, and an intensity-based dissimilarity between the warped moving image and the fixed image drives the training. An illustration of an unsupervised DDF-based registration network is provided below.\n",
    "\n",
    "![Unsupervised DDF-based registration network](https://github.com/DeepRegNet/DeepReg/blob/main/docs/source/_images/registration-ddf-nn-unsupervised.svg?raw=true)\n",
    "\n",
    "The loss function often consists of the intensity-based loss and deformation loss.\n",
    "\n",
    "### Weakly-supervised\n",
    "\n",
    "When an intensity-based loss is not appropriate for the image pair one would like to register, the training can take a pair of corresponding moving and fixed labels (in addition to the image pair), represented by binary masks, to compute a label dissimilarity (label based loss) to drive the registration.\n",
    "\n",
    "Combined with the regularisation on the predicted displacement field, this forms a weakly-supervised training. An illustration of a weakly-supervised DDF-based registration network is provided below.\n",
    "\n",
    "When multiple labels are available for each image, the labels can be sampled during training, such that only one label per image is used in each iteration of the data set (epoch). Details are again provided in the [DeepReg dataset loader API](https://deepreg.readthedocs.io/en/latest/docs/dataset_loader.html) but not required for the tutorials.\n",
    "\n",
    "![Weakly-supervised DDF-based registration network](https://github.com/DeepRegNet/DeepReg/blob/main/docs/source/_images/registration-ddf-nn-weakly-supervised.svg?raw=true)\n",
    "\n",
    "### Combined\n",
    "\n",
    "When data labels are available, combining intensity-based, label-based, and\n",
    "deformation-based losses together has shown superior registration accuracy, compared to\n",
    "unsupervised and weakly-supervised methods. Below is an illustration of a combined\n",
    "DDF-based registration network.\n",
    "\n",
    "![Combined DDF-based registration network](https://github.com/DeepRegNet/DeepReg/blob/main/docs/source/_images/registration-ddf-nn-combined.svg?raw=true)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ND4KBjaxSuYb",
    "colab_type": "text"
   },
   "source": [
    "**A note on the relationship between feature-based registration and weakly-supervised registration**\n",
    "\n",
    "The segmentations used as labels, typically highlighting specific anatomical or pathological regions of interest (ROIs) in the scan(s), may also be considered a form of image feature, extracted manually or using automated methods. The similarity measures or distance functions used in classical feature-based registration methods can then be used to drive the training of the weakly-supervised registration networks. These measures include the overlap between ROIs or the Euclidean distance between the ROI centroids. A key insight is that the weakly-supervised learning described above learns the feature extraction together with the alignment of the features, in an end-to-end manner."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "pn1IiKXOrpmJ",
    "colab_type": "text"
   },
   "source": [
    "## Loss Functions\n",
    "\n",
    "We aim to train a network to predict a likely transformation between a pair of images. To do this, we need to define what a \"likely\" transformation is. This is done via a *loss function*.\n",
    "\n",
    "The loss function used to train a registration network will depend on the type of data we have access to - yet another methodological detail that draws substantially on experience from the classical methods."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "n7ojdeNNPY7e",
    "colab_type": "text"
   },
   "source": [
    "- **Label based loss**\n",
    "\n",
    "  Provided labels for the input images, a label based loss may be used to measure the (dis)similarity of warped regions of interest. Having computed a transformation between the images using the network, one of the labels is warped and compared to the ground-truth image label.\n",
    "  Labels are typically manually contoured organs.\n",
    "\n",
    "  Common loss functions include the Dice loss, the Jaccard index, and average cross-entropy over all\n",
    "  voxels, which are measures of the overlap of the ROIs.\n",
    "  For example, the Dice score between two sets, X and Y, is defined as:\n",
    "\n",
    "  $$Dice = \\frac{2 |X \\cap Y|}{|X| + |Y|}$$\n",
    "\n",
    "    Let's illustrate with some examples. We are using head and neck CT scan data [12]. The data is openly accessible; the original source can be found [here.](https://wiki.cancerimagingarchive.net/display/Public/Head-Neck-PET-CT)\n",
    "\n",
    "    The labels for this data-label pair indicate the location of the spinal cord and brainstem, which typically are regions to be avoided during radiotherapy interventions."
   ]
  },
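  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text"
   },
   "source": [
    "Before turning to DeepReg's implementation, the Dice formula can be checked by hand on two tiny binary masks (plain NumPy, for illustration only):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "x = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)  # |X| = 3\n",
    "y = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)  # |Y| = 3\n",
    "intersection = np.logical_and(x, y).sum()  # two pixels overlap\n",
    "dice = 2 * intersection / (x.sum() + y.sum())\n",
    "print(dice)  # 2 * 2 / (3 + 3) = 0.666...\n",
    "```"
   ]
  },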
  {
   "cell_type": "code",
   "metadata": {
    "id": "WKUVt8VTPY7f",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "# Import medical data - we are going to use the head and neck CT data to show the losses.\n",
    "MAIN_PATH = os.getcwd()\n",
    "PROJECT_DIR = os.path.join(MAIN_PATH, \"demos/classical_ct_headandneck_affine\")\n",
    "\n",
    "DATA_PATH = \"dataset\"\n",
    "FILE_PATH = os.path.abspath(os.path.join(PROJECT_DIR, DATA_PATH, \"demo.h5\"))\n",
    "ORIGIN = \"https://github.com/yipenghu/example-data/raw/master/hnct/demo.h5\"\n",
    "\n",
    "# Remove any previous download and make sure the target directory exists\n",
    "if os.path.exists(FILE_PATH):\n",
    "    os.remove(FILE_PATH)\n",
    "os.makedirs(os.path.join(PROJECT_DIR, DATA_PATH), exist_ok=True)\n",
    "\n",
    "get_file(FILE_PATH, ORIGIN)\n",
    "print(\"CT head-and-neck data downloaded: %s.\" % FILE_PATH)"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "maMigkE7PY7i",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "# We define a function to visualise the results of the overlap for label based loss\n",
    "from skimage.color import label2rgb\n",
    "def pred_label_comparison(pred, mask, shape_pred, thresh=0.5):\n",
    "    \"\"\"\n",
    "    Compares prediction array to mask array and returns\n",
    "    rgb image with color coding corresponding to prediction\n",
    "    outcome.\n",
    "    True positive = white\n",
    "    True negative = black\n",
    "    False positive = green\n",
    "    False negative = red\n",
    "\n",
    "    INPUTS:\n",
    "    - pred: [K M N] np.array, containing K probability maps\n",
    "    output from model.\n",
    "    - mask: [K M N] np.array of 0s, 1s, containing K ground\n",
    "    truths for corresponding prediction.\n",
    "    - shape_pred: [K M N] list/tuple, the shape of pred.\n",
    "    - thresh: (OPT) float between [0, 1], the value above\n",
    "    which to threshold predictions.\n",
    "\n",
    "    OUTPUTS:\n",
    "    - label: [K M N 3] np.array, containing K RGB images\n",
    "    colour coded to show areas of intersections between\n",
    "    masks and predictions.\n",
    "    \"\"\"\n",
    "    # Create output np.array to store images\n",
    "    label = np.zeros((shape_pred[0], shape_pred[1], shape_pred[2], 3))\n",
    "\n",
    "    # Thresholding pred\n",
    "    pred_thresh = pred > thresh\n",
    "\n",
    "    # Creating inverse to the masks and predictions\n",
    "    mask_not = np.logical_not(mask)\n",
    "    pred_not = np.logical_not(pred_thresh)\n",
    "\n",
    "    # Finding intersections\n",
    "    true_pos_array = np.logical_and(pred_thresh, mask)\n",
    "    false_pos_array = np.logical_and(pred_thresh, mask_not)\n",
    "    false_neg_array = np.logical_and(pred_not, mask)\n",
    "\n",
    "    # Labelling via color\n",
    "    false_pos_labels = 2*false_pos_array # green\n",
    "    false_neg_labels = 3*false_neg_array # red\n",
    "    label_array = true_pos_array + false_pos_labels + false_neg_labels\n",
    "\n",
    "    # Compare all preds to masks\n",
    "    for i in range(shape_pred[0]):\n",
    "        label[i, :, :, :] = label2rgb(\n",
    "            label_array[i, :, :],\n",
    "            bg_label=0,\n",
    "            bg_color=(0, 0, 0),\n",
    "            colors=[(1, 1, 1), (0, 1, 0), (1, 0, 0)])\n",
    "        # 1=(tp, white), 2=(fp, green), 3=(fn, red)\n",
    "\n",
    "    return label"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "JVQ93_CHPY7k",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "# We set the plot size to some parameters.\n",
    "plt.rcParams[\"figure.figsize\"] = (10,10)\n",
    "\n",
    "# Opening the file\n",
    "fid = h5py.File(FILE_PATH, \"r\")\n",
    "\n",
    "# Getting the image and label\n",
    "fixed_image = tf.cast(tf.expand_dims(fid[\"image\"], axis=0), dtype=tf.float32)\n",
    "fixed_labels = tf.cast(tf.expand_dims(fid[\"label\"], axis=0), dtype=tf.float32)\n",
    "\n",
    "# Getting the 0th slice in the tensor\n",
    "fixed_image_0 = fixed_image[0, ..., 0]\n",
    "# Getting the 0th slice foreground label, at index 1 in the label tensor.\n",
    "fixed_label_0 = fixed_labels[0, ..., 0, 1]"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "JztvbSU0PY7n",
    "colab_type": "text"
   },
   "source": [
    "To illustrate the losses, we will compare labels with slightly warped versions of themselves, generated with a random affine transform. DeepReg has a set of utility functions which can be used to warp image tensors quickly. We introduce their functionalities below, as we will use them throughout the rest of the tutorial.\n",
    "\n",
    "* random_transform_generator: generates a batch of affine transforms as a 3D tf.Tensor.\n",
    "* get_reference_grid: creates a mesh tf.Tensor of given dimensions.\n",
    "* warp_grid: warps the reference grid with a batch of affine transforms (e.g. those produced by random_transform_generator).\n",
    "* resample: resamples an image/label tensor at the warped grid locations.\n",
    "\n",
    "For a more in-depth view of these functions, refer to the [source code](https://github.com/DeepRegNet/DeepReg/blob/main/deepreg/model/layer_util.py)."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "y8SdoXJBPY7n",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "# Simulate a warped label\n",
    "# The following function generates a random transform.\n",
    "transform_random = layer_util.random_transform_generator(batch_size=1, scale=0.02, seed=4)\n",
    "\n",
    "# We create a reference grid of image size\n",
    "grid_ref = layer_util.get_reference_grid(grid_size=fixed_labels.shape[1:4])\n",
    "\n",
    "# We warp our reference grid by our random transform\n",
    "grid_random = layer_util.warp_grid(grid_ref, transform_random)\n",
    "# We resample the fixed image and label with the random transform to create\n",
    "# distorted versions, which we will use as our moving image and label.\n",
    "moving_label = layer_util.resample(vol=fixed_labels, loc=grid_random)[0, ..., 0, 1]\n",
    "moving_image = layer_util.resample(vol=fixed_image, loc=grid_random)[0, ..., 0]\n",
    "\n",
    "fig, axs = plt.subplots(1, 2)\n",
    "axs[0].imshow(fixed_label_0)\n",
    "axs[1].imshow(fixed_image[0, ..., 0])\n",
    "axs[0].set_title(\"Fixed label\")\n",
    "axs[1].set_title(\"Fixed image\")\n",
    "plt.show()\n",
    "fig, axs = plt.subplots(1, 2)\n",
    "axs[1].imshow(moving_image)\n",
    "axs[0].imshow(moving_label>0.1)\n",
    "axs[0].set_title(\"Moving label\")\n",
    "axs[1].set_title(\"Moving image\")\n",
    "plt.show()"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "AiOd8Gh_PY7p",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "# We set the plot size to some parameters.\n",
    "plt.rcParams[\"figure.figsize\"] = (10,10)\n",
    "\n",
    "comparison = pred_label_comparison(np.expand_dims(moving_label, axis=0), np.expand_dims(fixed_label_0, axis=0), [1, 128, 128], thresh=0.1)\n",
    "\n",
    "plt.imshow(comparison[0, :, :])\n",
    "plt.title(\"Comparing fixed and moving label\")\n",
    "plt.show()"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "tymmqXqGPY7s",
    "colab_type": "text"
   },
   "source": [
    "The white pixels indicate true positives, the green pixels indicate false positives (i.e. where the moving label has a segmentation where the fixed label does not), and the red pixels indicate false negatives (i.e. where the moving label lacks segmented pixels with respect to the fixed label).\n",
    "\n",
    "Let's calculate the Dice score using a function from DeepReg - it will return a value between 0 (no overlap) and 1 (perfect overlap)."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "Rm_AYHX_PY7t",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "from deepreg.model.loss.label import dice_score\n",
    "# Calculating Dice - we need [batch, dim1, dim2, dim3] shaped tensors, so we expand the label axes\n",
    "batch_moving_label = np.expand_dims(np.expand_dims(moving_label, axis=0), axis=-1)\n",
    "batch_fixed_label = np.expand_dims(np.expand_dims(fixed_label_0, axis=0), axis=-1)\n",
    "\n",
    "score_warped = dice_score(batch_moving_label, batch_fixed_label)\n",
    "print(\"Score for dissimilar labels: {}\".format(score_warped))\n"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "lAodf-ymPY7v",
    "colab_type": "text"
   },
   "source": [
    "We would expect the Dice score of a label compared with itself to be perfect - let's check:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "BByQcPsbPY7v",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "score_same = dice_score(batch_fixed_label, batch_fixed_label)\n",
    "print(\"Score for same labels: {}\".format(score_same))"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "jipGX5E_PY7y",
    "colab_type": "text"
   },
   "source": [
    "We can use this score as a measure of registration quality in label-driven methods such as weakly-supervised registration and conditional segmentation: we want to maximise the overlap so that the two features are as similar as possible. To convert the score into a loss, we minimise the negative overlap measure (e.g. loss = 1 - dice_score), thereby maximising the overlap of the regions during training.\n"
   ]
  },
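  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check, the overlap calculation can also be written from first principles. The cell below is a minimal NumPy sketch (independent of DeepReg's dice_score, using hypothetical toy masks) of the Dice score on two binary masks."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "import numpy as np\n",
    "\n",
    "def dice_np(a, b, eps=1e-7):\n",
    "    # Dice = 2 * |A intersect B| / (|A| + |B|), computed on binary masks\n",
    "    a, b = a.astype(bool), b.astype(bool)\n",
    "    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)\n",
    "\n",
    "mask_a = np.zeros((8, 8)); mask_a[2:6, 2:6] = 1  # a 4x4 square\n",
    "mask_b = np.zeros((8, 8)); mask_b[4:8, 4:8] = 1  # an overlapping 4x4 square\n",
    "print(dice_np(mask_a, mask_a))  # identical masks -> ~1.0\n",
    "print(dice_np(mask_a, mask_b))  # 2 * 4 overlapping pixels / 32 -> 0.25"
   ],
   "execution_count": null,
   "outputs": []
  },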
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "BhNRoqjHPY7y",
    "colab_type": "text"
   },
   "source": [
    "- **Intensity based (image based) loss**\n",
    "\n",
    "  This type of loss measures the dissimilarity of the fixed image and warped moving\n",
    "  image, which is adapted from the classical image registration methods. Intensity based\n",
    "  loss can be highly modality-dependent. The common loss functions are normalized cross correlation (NCC), sum of squared distance (SSD), and mutual information (MI) with their variants. \n",
    "\n",
    "  For example, the sum of square differences takes the direct difference in intensity values between moving and fixed image tensors of dimensions (batch, I, J, K, channels) as a measure of similarity by calculating the average difference per tensor:\n",
    "  $$SSD = \\frac{1}{I \\times J \\times  K \\times  C}\\sum\\limits_{i,j,k,c}(moving_{i,j,k,c} - fixed_{i,j,k,c})^{2} $$\n",
    "  "
   ]
  },
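  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The SSD formula above can be checked from first principles. The cell below is a minimal NumPy sketch (an illustration, not DeepReg's ssd function) that averages the squared voxel-wise intensity differences over a toy tensor pair."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "import numpy as np\n",
    "\n",
    "def ssd_np(moving, fixed):\n",
    "    # Average of squared intensity differences over all voxels and\n",
    "    # channels, matching the per-tensor SSD formula above\n",
    "    return np.mean((moving - fixed) ** 2)\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "fixed_toy = rng.random((4, 4, 4, 1))\n",
    "moving_toy = fixed_toy + 0.1  # a constant intensity offset of 0.1\n",
    "print(ssd_np(fixed_toy, fixed_toy))   # identical tensors -> 0.0\n",
    "print(ssd_np(moving_toy, fixed_toy))  # constant offset -> ~0.1^2 = 0.01"
   ],
   "execution_count": null,
   "outputs": []
  },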
  {
   "cell_type": "code",
   "metadata": {
    "id": "4erV3EwrPY7z",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "# Illustrate intensity based loss\n",
    "from deepreg.model.loss.image import ssd\n",
    "\n",
    "# The ssd function requires [batch, dim1, dim2, dim3, ch] sized tensors -\n",
    "# expand the image dims as ours are [batch, dim1, dim2, dim3]\n",
    "moving_image = layer_util.resample(vol=fixed_image, loc=grid_random)\n",
    "\n",
    "fig, axs = plt.subplots(1, 2)\n",
    "axs[0].imshow(moving_image[0, ..., 0])\n",
    "axs[1].imshow(fixed_image[0, ..., 0])\n",
    "axs[0].set_title(\"Moving image\")\n",
    "axs[1].set_title(\"Fixed image\")\n",
    "plt.show()\n",
    "\n",
    "# We can visualise the difference between tensors by calculating a new tensor\n",
    "tensor_ssd = tf.square(moving_image[0, ..., 0] - fixed_image[0, ..., 0])\n",
    "plt.imshow(tensor_ssd)\n",
    "plt.show()\n",
    "\n",
    "# We calculate over all the images\n",
    "ssd_loss = ssd(np.expand_dims(moving_image, axis=-1), np.expand_dims(fixed_image, axis=-1))\n",
    "print(\"SSD loss between the image tensors: {}\".format(ssd_loss))"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "POUws1EwPY71",
    "colab_type": "text"
   },
   "source": [
    "- **Deformation loss**\n",
    "\n",
    "  Additionally, training may be regularised by computing the \"likelihood\" of a given displacement field. High deformation losses point to very unlikely displacement due to high gradients of the field - typically, deformation losses ensure smoothness in the displacement field. For DDFs, typical regularisation losses are bending energy losses, L1 or L2 norms of the displacement gradients."
   ]
  },
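  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make this concrete, the cell below is a toy NumPy sketch of an L2 gradient-norm regulariser (an illustration only, not the DeepReg implementation): a constant field has zero energy, while a noisy field is penalised."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "import numpy as np\n",
    "\n",
    "def gradient_l2_energy(ddf):\n",
    "    # ddf: a displacement field of shape [dim1, dim2, dim3, 3]\n",
    "    # Sum the mean squared finite-difference gradients over the\n",
    "    # three spatial axes and the three displacement components\n",
    "    energy = 0.0\n",
    "    for axis in range(3):\n",
    "        for c in range(ddf.shape[-1]):\n",
    "            energy += np.mean(np.gradient(ddf[..., c], axis=axis) ** 2)\n",
    "    return energy\n",
    "\n",
    "smooth_ddf = np.ones((8, 8, 8, 3))  # a constant field is perfectly smooth\n",
    "noisy_ddf = np.random.default_rng(0).random((8, 8, 8, 3))\n",
    "print(gradient_l2_energy(smooth_ddf))  # -> 0.0\n",
    "print(gradient_l2_energy(noisy_ddf))   # > 0: non-smooth fields are penalised"
   ],
   "execution_count": null,
   "outputs": []
  },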
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "iDRbUdZjq0_A",
    "colab_type": "text"
   },
   "source": [
    "## Image Registration with Deep Learning: Summary <a name=\"DeepRegistrationIntro\"></a>\n",
    "\n",
    "For deep learning methods, pairs of images, denoted as moving\n",
    "and fixed images, are passed to the network to predict a transformation between the images.\n",
    "\n",
    "The deep learning approach for medical image registration will depend on mainly three factors:\n",
    "\n",
    "1. What type of network output is one trying to predict?\n",
    "2. What type image data are being registered? Are there any other data, such as segmentations, to support the registration?\n",
    "3. Are the data paired? Are they labeled?\n",
    "\n",
    "From this, we can design an appropriate architecture and choose an adequate loss function to motivate training."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "KArmB41DQfir",
    "colab_type": "text"
   },
   "source": [
    "# Two Classical Registration Examples <a name=\"classical-examples\"></a>\n",
    "\n",
    "We will use DeepReg functions to register two images.\n",
    "\n",
    "First, we will illustrate the possibility of \"self-registering\" an image to it's affine-transformed counterpart, using the same head and neck CT scans data [12] we used to illustrate the losses."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "JYC-lMrB4Ct_",
    "colab_type": "text"
   },
   "source": [
    "## Optimising an affine transformation: a \"self-registration\" example"
   ]
  },
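  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before optimising, it helps to see what the [4, 3] affine parameterisation does: each voxel coordinate, in homogeneous form [x, y, z, 1], is multiplied by the matrix. The cell below is a toy NumPy sketch of this (it mirrors our reading of layer_util.warp_grid; the numbers are illustrative)."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "import numpy as np\n",
    "\n",
    "# [4, 3] affine: the first three rows are the linear part,\n",
    "# the last row is the translation\n",
    "theta = np.array([[1.0, 0.0, 0.0],\n",
    "                  [0.0, 1.0, 0.0],\n",
    "                  [0.0, 0.0, 1.0],\n",
    "                  [2.0, -1.0, 0.5]])\n",
    "\n",
    "points = np.array([[0.0, 0.0, 0.0],\n",
    "                   [1.0, 2.0, 3.0]])\n",
    "homog = np.concatenate([points, np.ones((2, 1))], axis=1)  # [x, y, z, 1]\n",
    "warped = homog @ theta\n",
    "print(warped)  # identity linear part: each point shifts by (2, -1, 0.5)"
   ],
   "execution_count": null,
   "outputs": []
  },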
  {
   "cell_type": "code",
   "metadata": {
    "id": "IZNdefjurAmQ",
    "colab_type": "code",
    "tags": [],
    "colab": {}
   },
   "source": [
    "# We set the plot size to some parameters.\n",
    "plt.rcParams[\"figure.figsize\"] = (100,100)"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "FAh-31ZOO5Ph",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "# We define some utility functions first\n",
    "## optimisation\n",
    "@tf.function\n",
    "def train_step_CT(grid, weights, optimizer, mov, fix):\n",
    "    \"\"\"\n",
    "    Train step function for backprop using gradient tape.\n",
    "    GradientTape is a tensorflow API which automatically\n",
    "    differentiates and facilitates the implementation of machine\n",
    "    learning algorithms: https://www.tensorflow.org/guide/autodiff.\n",
    "\n",
    "    :param grid: reference grid return from util.get_reference_grid\n",
    "    :param weights: trainable affine parameters [1, 4, 3]\n",
    "    :param optimizer: tf.optimizers: choice of optimizer\n",
    "    :param mov: moving image, tensor shape [1, m_dim1, m_dim2, m_dim3]\n",
    "    :param fix: fixed image, tensor shape[1, f_dim1, f_dim2, f_dim3]\n",
    "    :return loss: image dissimilarity to minimise\n",
    "    \"\"\"\n",
    "    # We initialise an instance of gradient tape to track operations\n",
    "    with tf.GradientTape() as tape:\n",
    "        pred = layer_util.resample(vol=mov, loc=layer_util.warp_grid(grid, weights))\n",
    "        # Calculate the loss function between the fixed image\n",
    "        # and the moving image\n",
    "        loss = image_loss.dissimilarity_fn(\n",
    "            y_true=fix, y_pred=pred, name=image_loss_name\n",
    "        )\n",
    "    gradients = tape.gradient(loss, [weights])\n",
    "    # Applying the gradients\n",
    "    optimizer.apply_gradients(zip(gradients, [weights]))\n",
    "    return loss\n",
    "\n",
    "def plot_results(moving_image, fixed_image, warped_moving_image, nIdx):\n",
    "  \"\"\"\n",
    "  Plotting the results from training\n",
    "  :param moving_image: tensor dims [IM_SIZE_0, IM_SIZE_1, 3]\n",
    "  :param fixed_image:  tensor dims [IM_SIZE_0, IM_SIZE_1, 3]\n",
    "  :param warped_moving_image: tensor dims [IM_SIZE_0, IM_SIZE_1, 3]\n",
    "  :param nIdx: number of indices to display\n",
    "  \"\"\"\n",
    "  # Display\n",
    "  plt.figure()\n",
    "  # Generate a nIdx images in 3s\n",
    "  for idx in range(nIdx):\n",
    "      axs = plt.subplot(nIdx, 3, 3 * idx + 1)\n",
    "      axs.imshow(moving_image[0, ..., idx_slices[idx]], cmap=\"gray\")\n",
    "      axs.axis(\"off\")\n",
    "      axs = plt.subplot(nIdx, 3, 3 * idx + 2)\n",
    "      axs.imshow(fixed_image[0, ..., idx_slices[idx]], cmap=\"gray\")\n",
    "      axs.axis(\"off\")\n",
    "      axs = plt.subplot(nIdx, 3, 3 * idx + 3)\n",
    "      axs.imshow(warped_moving_image[0, ..., idx_slices[idx]], cmap=\"gray\")\n",
    "      axs.axis(\"off\")\n",
    "  plt.ion()\n",
    "  plt.suptitle('Moving Image - Fixed Image - Warped Moving Image', fontsize=200)\n",
    "  plt.show()\n",
    "\n",
    "def display(moving_image, fixed_image):\n",
    "  \"\"\"\n",
    "  Displaying our two image tensors to register\n",
    "  :param moving_image: [IM_SIZE_0, IM_SIZE_1, 3]\n",
    "  :param fixed_image:  [IM_SIZE_0, IM_SIZE_1, 3]\n",
    "  \"\"\"\n",
    "  # Display\n",
    "  idx_slices = [int(5+x*5) for x in range(int(fixed_image_size[3]/5)-1)]\n",
    "  nIdx = len(idx_slices)\n",
    "  plt.figure()\n",
    "  for idx in range(len(idx_slices)):\n",
    "      axs = plt.subplot(nIdx, 2, 2*idx+1)\n",
    "      axs.imshow(moving_image[0,...,idx_slices[idx]], cmap='gray')\n",
    "      axs.axis('off')\n",
    "      axs = plt.subplot(nIdx, 2, 2*idx+2)\n",
    "      axs.imshow(fixed_image[0,...,idx_slices[idx]], cmap='gray')\n",
    "      axs.axis('off')\n",
    "  plt.suptitle('Moving Image - Fixed Image', fontsize=200)\n",
    "  plt.show()"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "PlvkyqKjNrRO",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "# We re-use the data from the head and neck CT we used to illustrate the losses, so we don't have to redownload it.\n",
    "\n",
    "## registration parameters\n",
    "image_loss_name = \"ssd\"\n",
    "learning_rate = 0.01\n",
    "total_iter = int(1001)\n",
    "\n",
    "\n",
    "# Opening the file\n",
    "fid = h5py.File(FILE_PATH, \"r\")\n",
    "fixed_image = tf.cast(tf.expand_dims(fid[\"image\"], axis=0), dtype=tf.float32)\n",
    "\n",
    "# normalisation to [0,1]\n",
    "fixed_image = (fixed_image - tf.reduce_min(fixed_image)) / (\n",
    "    tf.reduce_max(fixed_image) - tf.reduce_min(fixed_image)\n",
    ")  \n",
    "\n",
    "# generate a radomly-affine-transformed moving image using DeepReg utils\n",
    "fixed_image_size = fixed_image.shape\n",
    "# The following function generates a random transform.\n",
    "transform_random = layer_util.random_transform_generator(batch_size=1, scale=0.2)\n",
    "\n",
    "# We create a reference grid of image size\n",
    "grid_ref = layer_util.get_reference_grid(grid_size=fixed_image_size[1:4])\n",
    "\n",
    "# We warp our reference grid by our random transform\n",
    "grid_random = layer_util.warp_grid(grid_ref, transform_random)\n",
    "# We resample the fixed image with the random transform to create a distorted\n",
    "# image, which we will use as our moving image.\n",
    "moving_image = layer_util.resample(vol=fixed_image, loc=grid_random)\n",
    "\n",
    "# warp the labels to get ground-truth using the same random affine transform\n",
    "# for validation\n",
    "fixed_labels = tf.cast(tf.expand_dims(fid[\"label\"], axis=0), dtype=tf.float32)\n",
    "# We have multiple labels, so we apply the transform to all the labels by\n",
    "# stacking them\n",
    "moving_labels = tf.stack(\n",
    "    [\n",
    "        layer_util.resample(vol=fixed_labels[..., idx], loc=grid_random)\n",
    "        for idx in range(fixed_labels.shape[4])\n",
    "    ],\n",
    "    axis=4,\n",
    ")\n",
    "\n",
    "\n",
    "# We create an affine transformation as a trainable weight layer\n",
    "var_affine = tf.Variable(\n",
    "    initial_value=[\n",
    "        [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]]\n",
    "    ],\n",
    "    trainable=True,\n",
    ")\n",
    "\n",
    "# We perform an optimisation by backpropagating the loss through to our \n",
    "# trainable weight layer.\n",
    "optimiser = tf.optimizers.Adam(learning_rate)\n",
    "\n",
    "\n",
    "# Perform an optimisation for total_iter number of steps.\n",
    "for step in range(total_iter):\n",
    "    loss_opt = train_step_CT(grid_ref, var_affine, optimiser, moving_image, fixed_image)\n",
    "    if (step % 50) == 0:  # print info\n",
    "        tf.print(\"Step\", step, image_loss_name, loss_opt)"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Uzp0FEz1Qg9Q",
    "colab_type": "text"
   },
   "source": [
    "Once the optimisation converges (this may take a minute on a GPU), we can use the optimised affine transformation to warp the moving images."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "jc8vc_rCPdSC",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "## warp the moving image using the optimised affine transformation\n",
    "grid_opt = layer_util.warp_grid(grid_ref, var_affine)\n",
    "warped_moving_image = layer_util.resample(vol=moving_image, loc=grid_opt)\n",
    "\n",
    "idx_slices = [int(5 + x * 5) for x in range(int(fixed_image_size[3] / 5) - 1)]\n",
    "nIdx = len(idx_slices)\n",
    "# display to check the results.\n",
    "plot_results(moving_image, fixed_image, warped_moving_image, nIdx)"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "O_1SRBlSW5PT",
    "colab_type": "text"
   },
   "source": [
    "We can see how the data has registered to the fixed image. Let's see how the transformation appears on the labels."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "17lokrA4QV-8",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "# Check how the labels have been registered\n",
    "warped_moving_labels  = layer_util.resample(vol=moving_labels, loc=grid_opt)\n",
    "\n",
    "# display\n",
    "for idx_label in range(fixed_labels.shape[4]):\n",
    "    plt.figure()\n",
    "    for idx in range(len(idx_slices)):\n",
    "        axs = plt.subplot(nIdx, 3, 3 * idx + 1)\n",
    "        axs.imshow(moving_labels[0, ..., idx_slices[idx], idx_label], cmap=\"gray\")\n",
    "        axs.axis(\"off\")\n",
    "        axs = plt.subplot(nIdx, 3, 3 * idx + 2)\n",
    "        axs.imshow(fixed_labels[0, ..., idx_slices[idx], idx_label], cmap=\"gray\")\n",
    "        axs.axis(\"off\")\n",
    "        axs = plt.subplot(nIdx, 3, 3 * idx + 3)\n",
    "        axs.imshow(\n",
    "            warped_moving_labels[0, ..., idx_slices[idx], idx_label], cmap=\"gray\"\n",
    "        )\n",
    "        axs.axis(\"off\")\n",
    "    plt.ion()\n",
    "    plt.show()"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "9098lsvHXJ_E",
    "colab_type": "text"
   },
   "source": [
    "Here we should be able to see either or both of the following two cases.\n",
    "- There are labels apeared in some slices of the fixed and warped moving images while it does not exist in the same slices of the original moving image;\n",
    "- Some labels in the original moving images disapeared in both fixed and warped moving images, from the same slices.\n",
    "\n",
    "Both indicate the warped moving image has been indeed \"warped\" closer to the fixed image space from the moving image space.\n",
    "\n",
    "## Optimising a nonrigid transformation: an inter-subject registration application\n",
    "\n",
    "Now, we will nonrigid-register inter-subject scans, using MR images from two prostate cancer patients [13]. The data is from the [PROMISE12 Grand Challenge](https://promise12.grand-challenge.org/). We will follow the same procedure, optimising the registration for several steps."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "6TBdx9dDt41n",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "# Defining some utility functions\n",
    "@tf.function\n",
    "def train_step(warper, weights, optimizer, mov, fix):\n",
    "    \"\"\"\n",
    "    Train step function for backpropagation using gradient tape.\n",
    "    In contrast to CT function, we have a deformation regularisation.\n",
    "\n",
    "    :param warper: warping function returned from layer.Warping\n",
    "    :param weights: trainable ddf [1, f_dim1, f_dim2, f_dim3, 3]\n",
    "    :param optimizer: tf.optimizers\n",
    "    :param mov: moving image [1, m_dim1, m_dim2, m_dim3]\n",
    "    :param fix: fixed image [1, f_dim1, f_dim2, f_dim3]\n",
    "    :return:\n",
    "        loss: overall loss to optimise\n",
    "        loss_image: image dissimilarity\n",
    "        loss_deform: deformation regularisation\n",
    "    \"\"\"\n",
    "    with tf.GradientTape() as tape:\n",
    "        pred = warper(inputs=[weights, mov])\n",
    "        # Calculating the image loss between the ground truth and prediction\n",
    "        loss_image = image_loss.dissimilarity_fn(\n",
    "            y_true=fix, y_pred=pred, name=image_loss_name\n",
    "        )\n",
    "        # We calculate the deformation loss\n",
    "        loss_deform = deform_loss.local_displacement_energy(weights, deform_loss_name)\n",
    "        # Total loss is weighted\n",
    "        loss = loss_image + weight_deform_loss * loss_deform\n",
    "    # We calculate the gradients by backpropagating the loss to the trainable layer\n",
    "    gradients = tape.gradient(loss, [weights])\n",
    "    # Using our tf optimizer, we apply the gradients\n",
    "    optimizer.apply_gradients(zip(gradients, [weights]))\n",
    "    return loss, loss_image, loss_deform"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "AXoa6Ac_SLXr",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "## We download the data for this example.\n",
    "MAIN_PATH = os.getcwd()\n",
    "\n",
    "DATA_PATH = \"dataset\"\n",
    "if not os.path.exists(os.path.join(MAIN_PATH, DATA_PATH)):\n",
    "  os.makedirs(os.path.join(MAIN_PATH, DATA_PATH))\n",
    "\n",
    "FILE_PATH = os.path.abspath(os.path.join(MAIN_PATH, DATA_PATH, \"demo2.h5\"))\n",
    "ORIGIN = \"https://github.com/yipenghu/example-data/raw/master/promise12/demo2.h5\"\n",
    "\n",
    "get_file(FILE_PATH, ORIGIN)\n",
    "print(\"Prostate MR data downloaded: %s.\" % FILE_PATH)\n",
    "\n",
    "os.chdir(MAIN_PATH)\n",
    "\n",
    "DATA_PATH = \"dataset\"\n",
    "FILE_PATH = os.path.join(MAIN_PATH, DATA_PATH, \"demo2.h5\")\n",
    "\n",
    "fid = h5py.File(FILE_PATH, \"r\")"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "metadata": {
    "tags": [],
    "id": "OPKCDDULPT3s",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "## We define some registration parameters - play around with these!\n",
    "image_loss_name = \"lncc\" # local normalised cross correlation loss between images\n",
    "deform_loss_name = \"bending\" # Loss to measure the bending energy of the ddf\n",
    "weight_deform_loss = 1 # we weight the deformation loss\n",
    "learning_rate = 0.1\n",
    "total_iter = int(3001) # This will train for longer"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "fNk5DbimPT3w",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "# We get our two subject images from our datasets\n",
    "moving_image = tf.cast(tf.expand_dims(fid[\"image0\"], axis=0), dtype=tf.float32)\n",
    "fixed_image = tf.cast(tf.expand_dims(fid[\"image1\"], axis=0), dtype=tf.float32)"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "BYo61smNPT30",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "# We initialise our layers\n",
    "fixed_image_size = fixed_image.shape\n",
    "initialiser = tf.random_normal_initializer(mean=0, stddev=1e-3)\n",
    "\n",
    "# Creating our DDF tensor that can be trained\n",
    "# The DDF will be of shape [IM_SIZE_1, IM_SIZE_2, 3],\n",
    "# representing the displacement field at each pixel and xyz dimension.\n",
    "var_ddf = tf.Variable(initialiser(fixed_image_size + [3]), name=\"ddf\", trainable=True)\n",
    "\n",
    "# We create a warping layer and initialise an optimizer\n",
    "warping = layer.Warping(fixed_image_size=fixed_image_size[1:4])\n",
    "optimiser = tf.optimizers.Adam(learning_rate)\n",
    "\n",
    "\n",
    "## Optimising the layer\n",
    "## With GPU this takes about 5 minutes.\n",
    "for step in range(total_iter):\n",
    "    # Call the gradient tape function\n",
    "    loss_opt, loss_image_opt, loss_deform_opt = train_step(\n",
    "        warping, var_ddf, optimiser, moving_image, fixed_image\n",
    "    )\n",
    "    if (step % 50) == 0:  # print info at every 50th step\n",
    "        tf.print(\n",
    "            \"Step\",\n",
    "            step,\n",
    "            \"loss\",\n",
    "            loss_opt,\n",
    "            image_loss_name,\n",
    "            loss_image_opt,\n",
    "            deform_loss_name,\n",
    "            loss_deform_opt,\n",
    "        )\n",
    "        # Visualising loss during training\n",
    "        # plt.figure()\n",
    "        # fig, axs = plt.subplots(1, 3)\n",
    "        # warped_moving_image = warping(inputs=[var_ddf, moving_image])\n",
    "        # axs[0].imshow(moving_image[0, ..., 12])\n",
    "        # axs[1].imshow(fixed_image[0, ..., 12])\n",
    "        # axs[2].imshow(warped_moving_image[0, ..., 12])\n",
    "        # plt.show()"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "wCBZ1DXHPT32",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "## Warp the moving image using the optimised ddf and the warping layer.\n",
    "idx_slices = [int(5 + x * 5) for x in range(int(fixed_image_size[3] / 5) - 1)]\n",
    "nIdx = len(idx_slices)\n",
    "warped_moving_image = warping(inputs=[var_ddf, moving_image])\n",
    "plot_results(moving_image, fixed_image, warped_moving_image, nIdx)"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "dArjLUqbY_JY",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "## We can observe the effects of the warping on the moving label using\n",
    "# the optimised affine transformation\n",
    "moving_label = tf.cast(tf.expand_dims(fid[\"label0\"], axis=0), dtype=tf.float32)\n",
    "fixed_label = tf.cast(tf.expand_dims(fid[\"label1\"], axis=0), dtype=tf.float32)\n",
    "\n",
    "idx_slices = [int(5 + x * 5) for x in range(int(fixed_image_size[3] / 5) - 1)]\n",
    "nIdx = len(idx_slices)\n",
    "warped_moving_labels = warping(inputs=[var_ddf, moving_label])\n",
    "plot_results(moving_label, fixed_label, warped_moving_labels, nIdx)"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "JPAH2zDq4BjB",
    "colab_type": "text"
   },
   "source": [
    "# An adapted DeepReg Demo <a name=\"deep-example\"></a>\n",
    "\n",
    "Now, we will build a more complex demo, also a clinical case, using deep-learning.\n",
    "\n",
    "This is a registration between CT images acquired at different time points for a single patient. The images being registered are taken at inspiration and expiration for each subject. This is an intra subject registration. This type of intra subject registration is useful when there is a need to track certain features on a medical image such as tumor location when conducting invasive procedures [14]."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Iuxx6u164ABI",
    "colab_type": "text"
   },
   "source": [
    "The data files used in this tutorial have been pre-arranged in a folder, required by the DeepReg [paired dataset loader](https://deepreg.readthedocs.io/en/latest/docs/dataset_loader.html#paired-images), and can be downloaded as follows."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "m6kkjw4-JdPd",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "from tensorflow.keras.utils import get_file\n",
    "import zipfile\n",
    "import shutil\n",
    "import os\n",
    "\n",
    "MAIN_PATH = os.getcwd()\n",
    "PROJECT_DIR = os.path.join(MAIN_PATH, \"demos/paired_ct_lung/\")\n",
    "if os.path.exists(os.path.join(PROJECT_DIR,'data')):\n",
    "  shutil.rmtree(os.path.join(PROJECT_DIR,'data'))\n",
    "\n",
    "URL_ZIP = \"https://github.com/yipenghu/example-data/archive/paired_ct_lung.zip\"\n",
    "data_zip = get_file(os.path.abspath(os.path.join(PROJECT_DIR,'data.zip')), URL_ZIP)\n",
    "with zipfile.ZipFile(data_zip, \"r\") as zf:\n",
    "    zf.extractall(PROJECT_DIR)\n",
    "\n",
    "tmp_path = os.path.join(PROJECT_DIR,\"example-data-paired_ct_lung\")\n",
    "os.rename(tmp_path, os.path.join(PROJECT_DIR,'data'))\n",
    "\n",
    "if os.path.exists(data_zip):\n",
    "    os.remove(data_zip)\n",
    "\n",
    "print(\"Data downloaded and unzipped.\")"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To train a registration network is not trivial in both computational cost and, potentially, the need for network tuning.\n",
    "\n",
    "The code block below downloads a pre-trained model and uses the weights to showcase the predictive power of a deep learning trained model. \n",
    "You can choose to pretrain your own model by running the alternative code block in the comments. The number of epochs to train for can be changed by changing num_epochs. The default is 2 epochs but training for longer will improve performance if training from scratch.\n",
    "\n",
    "Please only either run the training or the pre-trained model download. If both code blocks are run, the trained model logs will be overwritten by the pre-trained model logs."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "40XXJBTzA5Qj",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "from deepreg.train import train\n",
    "\n",
    "######## Pre trained model ########\n",
    "\n",
    "! git clone https://github.com/DeepRegNet/deepreg-model-zoo.git logs/paired_ct_lung_demo_logs\n",
    "\n",
    "import zipfile\n",
    "\n",
    "fname = 'logs/paired_ct_lung_demo_logs/paired_ct_lung_demo_logs.zip'\n",
    "\n",
    "with zipfile.ZipFile(fname, \"r\") as zip_ref:\n",
    "    zip_ref.extractall(r'logs/paired_ct_lung_demo_logs')\n",
    "\n",
    "print(\"Files unzipped!\")\n",
    "\n",
    "if os.path.exists(r'logs/paired_ct_lung_demo_logs/learn2reg_t2_paired_train_logs') is not True:\n",
    "  os.mkdir(r'logs/paired_ct_lung_demo_logs/learn2reg_t2_paired_train_logs')\n",
    "\n",
    "! cp -rf  logs/paired_ct_lung_demo_logs/learn2reg_t2_paired_train_logs logs\n",
    "\n",
    "print(os.path.exists(r'logs/learn2reg_t2_paired_train_logs'))"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "MyxpCCVtdDj3",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "######## Train from scratch, uncomment ########\n",
    "# num_epochs = 2\n",
    "\n",
    "# path_to_file = r'config/test'\n",
    "\n",
    "# filename = 'ddf.yaml'\n",
    "# file = open(os.path.join(path_to_file, filename)).read().splitlines()\n",
    "# file[41] = file[41][:-2] + ' ' + str(num_epochs)\n",
    "        \n",
    "# open(os.path.join(path_to_file, filename), 'w').write('\\n'.join(file))\n",
    "\n",
    "# new_file = open(os.path.join(path_to_file, filename)).read().splitlines()\n",
    "# print('Line changed to: \\n', new_file[41])\n",
    "\n",
    "# tf.test.gpu_device_name()\n",
    "# print(tf.config.experimental.list_physical_devices('GPU'))\n",
    "# gpu = \"\"\n",
    "# gpu_allow_growth = False\n",
    "# ckpt_path = \"\"\n",
    "# log_dir = \"learn2reg_t2_paired_train_logs\"\n",
    "# config_path = [\n",
    "#     r\"config/test/ddf.yaml\",\n",
    "#     r\"demos/paired_ct_lung/paired_ct_lung.yaml\",\n",
    "# ]\n",
    "# train(\n",
    "#     gpu=gpu,\n",
    "#     config_path=config_path,\n",
    "#     gpu_allow_growth=gpu_allow_growth,\n",
    "#     ckpt_path=ckpt_path,\n",
    "#     log_dir=log_dir,\n",
    "# )\n"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "XM8y9Pr7EuGu",
    "colab_type": "text"
   },
   "source": [
    "With either the trained model or the downloaded model, we can predict the DDFs.\n",
    "\n",
    "The DeepReg predict function saves images as .png files.\n"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "eXAo-8e5QsVj",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "from deepreg.predict import predict\n",
    "\n",
    "######## Predicting Pretrained ########\n",
    "\n",
    "log_dir = \"learn2reg_t2_paired_train_logs\"\n",
    "ckpt_path = os.path.join(\"logs\", log_dir, \"save\", \"weights-epoch100.ckpt\")\n",
    "config_path = \"logs/learn2reg_t2_paired_train_logs/config.yaml\"\n",
    "\n",
    "gpu = \"0\"\n",
    "gpu_allow_growth = False\n",
    "# This will take a couple of minutes\n",
    "predict(\n",
    "    gpu=gpu,\n",
    "    gpu_allow_growth=gpu_allow_growth,\n",
    "    config_path=config_path,\n",
    "    ckpt_path=ckpt_path,\n",
    "    mode=\"test\",\n",
    "    batch_size=1,\n",
    "    log_dir=log_dir,\n",
    "    sample_label=\"all\",\n",
    ")\n",
    "\n",
    "# the numerical metrics are saved in the logs directory specified"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "Hrr5_wv0Be5H",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "######## Predicting Newly Trained ########\n",
    "\n",
    "# log_dir = \"learn2reg_t2_paired_train_logs\"\n",
    "# ckpt_path = os.path.join(\"logs\", log_dir, \"save\", (\"weights-epoch\" + str(num_epochs) + \".ckpt\"))\n",
    "# config_path = \"logs/learn2reg_t2_paired_train_logs/config.yaml\"\n",
    "\n",
    "# gpu = \"0\"\n",
    "# gpu_allow_growth = False\n",
    "# predict(\n",
    "#     gpu=gpu,\n",
    "#     gpu_allow_growth=gpu_allow_growth,\n",
    "#     config_path=config_path,\n",
    "#     ckpt_path=ckpt_path,\n",
    "#     mode=\"test\",\n",
    "#     batch_size=1,\n",
    "#     log_dir=log_dir,\n",
    "#     sample_label=\"all\",\n",
    "# )"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "nJcjZasSf88o",
    "colab_type": "text"
   },
   "source": [
    "The code block below plots different slices and their predictions generated using the trained model. The inds_to_plot variable can be changed to plot more slices or different slices."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "BLUiiC1ceZKa",
    "colab_type": "code",
    "colab": {}
   },
   "source": [
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "plt.rcParams['figure.figsize'] = [20, 20]\n",
    "\n",
    "\n",
    "######## Visualisation ########\n",
    "\n",
    "# Now lets load in a few samples from the predicitons and plot them\n",
    "\n",
    "# change the following line to the path to image0 label0\n",
    "path_to_image0_label0 = r\"logs/learn2reg_t2_paired_train_logs/test\"\n",
    "\n",
    "path_to_fixed_label = os.path.join(path_to_image0_label0,\n",
    "                                   r\"pair_1/label_0/fixed_label\")\n",
    "path_to_fixed_image = os.path.join(path_to_image0_label0,\n",
    "                                   r\"pair_1/fixed_image\")\n",
    "\n",
    "path_to_moving_label = os.path.join(path_to_image0_label0,\n",
    "                                   r\"pair_1/label_0/moving_label\")\n",
    "path_to_moving_image = os.path.join(path_to_image0_label0,\n",
    "                                   r\"pair_1/moving_image\")\n",
    "\n",
    "\n",
    "path_to_pred_fixed_image = os.path.join(path_to_image0_label0,\n",
    "                                   r\"pair_1/pred_fixed_image\")\n",
    "path_to_pred_fixed_label = os.path.join(path_to_image0_label0,\n",
    "                                        r\"pair_1/label_0/pred_fixed_label\")\n",
    "\n",
    "# change inds_to_plot if different images need to be plotted instead\n",
    "\n",
    "inds_to_plot = [144, 145, 184, 140, 150, 180]\n",
    "sub_plot_counter = 1\n",
    "\n",
    "for ind in inds_to_plot:\n",
    "  plt.subplot(6, 8, sub_plot_counter)\n",
    "  label = plt.imread(os.path.join(path_to_fixed_label, \n",
    "                                   \"depth\" + str(ind) + \"_fixed_label.png\"))\n",
    "  plt.imshow(label)\n",
    "  plt.axis(\"off\")\n",
    "  if sub_plot_counter == 1:\n",
    "    plt.title(\"Fixed Label\")\n",
    "\n",
    "  plt.subplot(6, 8, sub_plot_counter + 1)\n",
    "  fixed_im = plt.imread(os.path.join(path_to_fixed_image,\n",
    "                                    \"depth\" + str(ind) + \"_fixed_image.png\"))\n",
    "  plt.imshow(fixed_im)\n",
    "  plt.axis(\"off\")\n",
    "  if sub_plot_counter == 1:\n",
    "    plt.title(\"Fixed Image\")\n",
    "\n",
    "  plt.subplot(6, 8, sub_plot_counter + 2)\n",
    "  moving_label = plt.imread(os.path.join(path_to_moving_label, \n",
    "                                   \"depth\" + str(ind) + \"_moving_label.png\"))\n",
    "  plt.imshow(moving_label)\n",
    "  plt.axis(\"off\")\n",
    "  if sub_plot_counter == 1:\n",
    "    plt.title(\"Moving Label\")\n",
    "\n",
    "  plt.subplot(6, 8, sub_plot_counter + 3)\n",
    "  moving_im = plt.imread(os.path.join(path_to_moving_image,\n",
    "                                    \"depth\" + str(ind) + \"_moving_image.png\"))\n",
    "  plt.imshow(moving_im)\n",
    "  plt.axis(\"off\")\n",
    "  if sub_plot_counter == 1:\n",
    "    plt.title(\"Moving Image\")\n",
    "\n",
    "\n",
    "  plt.subplot(6, 8, sub_plot_counter + 4)\n",
    "  pred = plt.imread(os.path.join(path_to_pred_fixed_label,\n",
    "                                    \"depth\" + str(ind) + \"_pred_fixed_label.png\"))\n",
    "  plt.imshow(pred)\n",
    "  plt.axis(\"off\")\n",
    "  if sub_plot_counter == 1:\n",
    "    plt.title(\"Warped Moving Label\")\n",
    "\n",
    "  plt.subplot(6, 8, sub_plot_counter + 5)\n",
    "  pred_fixed_im = plt.imread(os.path.join(path_to_pred_fixed_image,\n",
    "                                    \"depth\" + str(ind) + \"_pred_fixed_image.png\"))\n",
    "  plt.imshow(pred_fixed_im)\n",
    "  plt.axis(\"off\")\n",
    "  if sub_plot_counter == 1:\n",
    "    plt.title(\"Warped Moving Image\")\n",
    "\n",
    "  plt.subplot(6, 8, sub_plot_counter + 6)\n",
    "  pred_label = pred_label_comparison(\n",
    "      np.expand_dims(pred[:, :, 0], axis=0),\n",
    "      np.expand_dims(label[:, :, 0], axis=0),\n",
    "      (1, pred.shape[0], pred.shape[1], pred.shape[2]))\n",
    "  plt.imshow(np.squeeze(pred_label))\n",
    "  plt.axis(\"off\")\n",
    "  if sub_plot_counter == 1:\n",
    "    plt.title(\"Comparison\")\n",
    "\n",
    "  plt.subplot(6, 8, sub_plot_counter + 7)\n",
    "  plt.imshow(np.squeeze(pred_fixed_im[:, :, 0:3] - fixed_im[:, :, 0:3]))\n",
    "  plt.axis(\"off\")\n",
    "  if sub_plot_counter == 1:\n",
    "    plt.title(\"Pred - Fixed\")   \n",
    "  sub_plot_counter = sub_plot_counter + 8\n",
    "\n",
    "plt.show()\n"
   ],
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ZjAKmN6jbz4X",
    "colab_type": "text"
   },
   "source": [
    "# Concluding Remarks <a name=\"conclusion\"></a>\n",
    "In this tutorial, we use two classical image registration algorithms and a deep-learning registration network, all implemented with DeepReg, to discuss the basics of modern image registration. In particular, we show that the old and the new share principles, methodologies and code. \n",
    "\n",
    "DeepReg is a new open-source project built on a distinct set of principles aiming to consolidate the research field of medical image registration: open, community-supported and clinical-application-driven. It is these features that have motivated efforts such as this tutorial to reach wider groups of researchers and to facilitate diverse clinical applications.\n",
    "\n",
    "This tutorial may offer the next generation of researchers a balanced starting point between the new learning-based methods and the classical algorithms. It may also serve as a quick introduction to DeepReg for those with significant experience in deep learning and/or medical image registration, so that they can make an informed judgement about whether this new tool can help their research."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "P5zUaIpPpn7N",
    "colab_type": "text"
   },
   "source": [
    "# References <a name=\"references\"></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "I3048pG3r01o",
    "colab_type": "text"
   },
   "source": [
    "[1] G. Haskins, U. Kruger, and P. Yan, “Deep learning in medical image registration: a survey,” Mach. Vis. Appl., vol. 31, no. 1, pp. 1–18, Jan. 2020.\n",
    "\n",
    "[2] Y. Hu et al., “MR to ultrasound registration for image-guided prostate interventions,” Med. Image Anal., vol. 16, no. 3, pp. 687–703, Apr. 2012.\n",
    "\n",
    "[3] J. Ramalhinho et al., “A pre-operative planning framework for global registration of laparoscopic ultrasound to CT images,” Int. J. Comput. Assist. Radiol. Surg., vol. 13, no. 8, pp. 1177–1186, Aug. 2018.\n",
    "\n",
    "[4]  M. Lorenzo-Valdés, G. I. Sanchez-Ortiz, R. Mohiaddin, and D. Rueckert, “Atlas-based segmentation and tracking of 3D cardiac MR images using non-rigid registration,” in Lecture Notes in Computer Science, 2002, vol. 2488, pp. 642–650.\n",
    "\n",
    "[5] G. Cazoulat, D. Owen, M. M. Matuszak, J. M. Balter, and K. K. Brock, “Biomechanical deformable image registration of longitudinal lung CT images using vessel information,” Phys. Med. Biol., vol. 61, no. 13, pp. 4826–4839, 2016.\n",
    "\n",
    "[6] Y. Hu, et al., “Population-based prediction of subject-specific prostate deformation for MR-to-ultrasound image registration,” Med. Image Anal., vol. 26, no. 1, pp. 332–344, Dec. 2015.\n",
    "\n",
    "[7] P. J. Besl and N. D. McKay, “Method for registration of 3-D shapes,” in Sensor Fusion IV: Control Paradigms and Data Structures, 1992, vol. 1611, pp. 586–606.\n",
    "\n",
    "[8] A. Myronenko and X. Song, “Point set registration: Coherent point drift,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 12, pp. 2262–2275, 2010.\n",
    "\n",
    "[9] J. Ashburner, “A fast diffeomorphic image registration algorithm,” Neuroimage, vol. 38, no. 1, pp. 95–113, Oct. 2007.\n",
    "\n",
    "[10] Y. Hu, E. Gibson, D. C. Barratt, M. Emberton, J. A. Noble, and T. Vercauteren, “Conditional Segmentation in Lieu of Image Registration,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2019, vol. 11765 LNCS, pp. 401–409.\n",
    "\n",
    "[11] D. L. G. Hill, P. G. Batchelor, M. Holden, and D. J. Hawkes, “Medical image registration,” Phys. Med. Biol., vol. 46, no. 3, pp. R1–R45, Mar. 2001.\n",
    "\n",
    "[12] M. Vallières et al., “Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer,” Sci. Rep., vol. 7, 10117, 2017. doi: 10.1038/s41598-017-10371-5\n",
    "\n",
    "[13] G. Litjens et al., “Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge,” Med. Image Anal., vol. 18, no. 2, pp. 359–373, 2014.\n",
    "\n",
    "[14] A. Hering, K. Murphy, and B. van Ginneken, “Learn2Reg Challenge: CT Lung Registration - Training Data [Data set],” Zenodo, 2020. http://doi.org/10.5281/zenodo.3835682\n",
    "\n",
    "[15] T. Vercauteren et al., “Diffeomorphic demons: Efficient non-parametric image registration,” NeuroImage, vol. 45, no. 1, pp. S61–S72, 2009.\n",
    "\n",
    "[16] Q. Yang et al., “Longitudinal image registration with temporal-order and subject-specificity discrimination,” in MICCAI 2020, 2020.\n"
   ]
  }
 ],
 "metadata": {
  "colab": {
    "name": "Intro_to_Medical_Image_Registration.ipynb",
   "provenance": [],
   "collapsed_sections": [],
   "toc_visible": true
  },
  "kernelspec": {
   "name": "python3",
   "display_name": "Python 3"
  },
  "accelerator": "GPU"
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
