{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<h1 align=\"center\">Data Augmentation for Deep Learning</h1>\n",
    "\n",
    "This notebook illustrates the use of SimpleITK to perform data augmentation for deep learning. Note that the code is written so that the relevant functions work for both 2D and 3D images without modification.\n",
    "\n",
    "Data augmentation is a model-based approach for enlarging your training set. The problem being addressed is that the original dataset is not sufficiently representative of the general population of images. As a consequence, if we train only on the original dataset the resulting network will not generalize well to the population (overfitting).\n",
    "\n",
    "Using a model of the variations found in the general population and the existing dataset, we generate additional images in the hope of capturing the population variability. Note that if the model you use is incorrect you can cause harm: you will be generating observations that do not occur in the general population and optimizing a function to fit them."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import SimpleITK as sitk\n",
    "import numpy as np\n",
    "\n",
    "%matplotlib notebook\n",
    "import gui\n",
    "\n",
    "# Utility method that either downloads data from the Girder repository or,\n",
    "# if the data has already been downloaded, returns the file name for reading from disk (cached data).\n",
    "%run update_path_to_download_script\n",
    "from downloaddata import fetch_data as fdata\n",
    "\n",
    "OUTPUT_DIR = 'Output'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Before we start, a word of caution\n",
    "\n",
    "**Whenever you sample there is potential for aliasing (Nyquist theorem).**\n",
    "\n",
    "In many cases, data prepared for use with a deep learning network is resampled to a fixed size. When we perform data augmentation via spatial transformations we also perform resampling. \n",
    "\n",
    "Admittedly, the example below is exaggerated to illustrate the point, but it serves as a reminder that you may want to consider smoothing your images prior to resampling. \n",
    "\n",
    "The effects of aliasing also play a role in network performance stability:\n",
    "A. Azulay, Y. Weiss, \"Why do deep convolutional networks generalize so poorly to small image transformations?\"  [CoRR abs/1805.12177](https://arxiv.org/abs/1805.12177), 2018."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "simpleitk_error_allowed": "Exception thrown in SimpleITK Show:"
   },
   "outputs": [],
   "source": [
    "# The image we will resample (a grid).\n",
    "grid_image = sitk.GridSource(outputPixelType=sitk.sitkUInt16, size=(512,512), \n",
    "                             sigma=(0.1,0.1), gridSpacing=(20.0,20.0))\n",
    "sitk.Show(grid_image, \"original grid image\")\n",
    "\n",
    "# The spatial definition of the images we want to use in a deep learning framework (smaller than the original). \n",
    "new_size = [100, 100]\n",
    "reference_image = sitk.Image(new_size, grid_image.GetPixelIDValue())\n",
    "reference_image.SetOrigin(grid_image.GetOrigin())\n",
    "reference_image.SetDirection(grid_image.GetDirection())\n",
    "reference_image.SetSpacing([sz*spc/nsz for nsz,sz,spc in zip(new_size, grid_image.GetSize(), grid_image.GetSpacing())])\n",
    "\n",
    "# Resample without any smoothing.\n",
    "sitk.Show(sitk.Resample(grid_image, reference_image), \"resampled without smoothing\")\n",
    "\n",
    "# Resample after Gaussian smoothing.\n",
    "sitk.Show(sitk.Resample(sitk.SmoothingRecursiveGaussian(grid_image, 2.0), reference_image), \"resampled with smoothing\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Load data\n",
    "\n",
    "Load the images. You can work through the notebook using either the original 3D volumes or 2D slices extracted from them. By default the cell below extracts 2D slices; to work with the 3D volumes instead, comment out the indicated line."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data = [sitk.ReadImage(fdata(\"nac-hncma-atlas2013-Slicer4Version/Data/A1_grayT1.nrrd\")),\n",
    "        sitk.ReadImage(fdata(\"vm_head_mri.mha\")),\n",
    "        sitk.ReadImage(fdata(\"head_mr_oriented.mha\"))]\n",
    "# Comment out the following line if you want to work in 3D. Note that in 3D some of the notebook visualizations are \n",
    "# disabled. \n",
    "data = [data[0][:,160,:], data[1][:,:,17], data[2][:,:,0]]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "def disp_images(images, fig_size, wl_list=None):\n",
    "    if images[0].GetDimension()==2:\n",
    "        gui.multi_image_display2D(image_list=images, figure_size=fig_size, window_level_list=wl_list)\n",
    "    else:\n",
    "        gui.MultiImageDisplay(image_list=images, figure_size=fig_size, window_level_list=wl_list)\n",
    "    \n",
    "disp_images(data, fig_size=(6,2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The original data often needs to be modified. In this example we would like to crop the images so that we only keep the informative regions. We can readily separate the foreground from the background using an appropriate threshold; in our case we use Otsu's threshold selection method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def threshold_based_crop(image):\n",
    "    \"\"\"\n",
    "    Use Otsu's threshold estimator to separate background and foreground. In medical imaging the background is\n",
    "    usually air. Then crop the image using the foreground's axis aligned bounding box.\n",
    "    Args:\n",
    "        image (SimpleITK image): An image where the anatomy and background intensities form a bi-modal distribution\n",
    "                                 (the assumption underlying Otsu's method).\n",
    "    Return:\n",
    "        Cropped image based on foreground's axis aligned bounding box.                                 \n",
    "    \"\"\"\n",
    "    # Set pixels that are in [min_intensity,otsu_threshold] to inside_value, values above otsu_threshold are\n",
    "    # set to outside_value. The anatomy has higher intensity values than the background, so it is outside.\n",
    "    inside_value = 0\n",
    "    outside_value = 255\n",
    "    label_shape_filter = sitk.LabelShapeStatisticsImageFilter()\n",
    "    label_shape_filter.Execute( sitk.OtsuThreshold(image, inside_value, outside_value) )\n",
    "    bounding_box = label_shape_filter.GetBoundingBox(outside_value)\n",
    "    # The bounding box's first \"dim\" entries are the starting index and last \"dim\" entries the size\n",
    "    return sitk.RegionOfInterest(image, bounding_box[int(len(bounding_box)/2):], bounding_box[0:int(len(bounding_box)/2)])\n",
    "    \n",
    "\n",
    "modified_data = [threshold_based_crop(img) for img in data]\n",
    "\n",
    "disp_images(modified_data, fig_size=(6,2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "At this point we select the images we want to work with; skip the following cell if you want to continue working with the original (uncropped) data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data = modified_data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Augmentation using spatial transformations\n",
    "\n",
    "We next illustrate the generation of images by specifying a list of transformation parameter values representing a sampling of the transformation's parameter space.\n",
    "\n",
    "The code below is agnostic to the specific transformation and it is up to the user to specify a valid list of transformation parameters (correct number of parameters and correct order). To learn more about the spatial transformations supported by SimpleITK you can explore the [Transforms notebook](22_Transforms.ipynb).\n",
    "\n",
    "In most cases we can easily specify a regular grid in parameter space by specifying ranges of values for each of the parameters. In some cases specifying parameter values may be less intuitive (e.g. the versor representation of rotation)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Utility methods\n",
    "\n",
    "Utilities for conveniently sampling a parameter space using a regular grid (with special care needed for the 3D similarity transform)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def parameter_space_regular_grid_sampling(*transformation_parameters):\n",
    "    '''\n",
    "    Create a list representing a regular sampling of the parameter space.     \n",
    "    Args:\n",
    "        *transformation_parameters : two or more numpy ndarrays representing parameter values. The order \n",
    "                                    of the arrays should match the ordering of the SimpleITK transformation \n",
    "                                    parameterization (e.g. Similarity2DTransform: scaling, rotation, tx, ty)\n",
    "    Return:\n",
    "        List of lists representing the regular grid sampling.\n",
    "        \n",
    "    Examples:\n",
    "        #parameterization for 2D translation transform (tx,ty): [[1.0,1.0], [1.5,1.0], [2.0,1.0]]\n",
    "        >>> parameter_space_regular_grid_sampling(np.linspace(1.0,2.0,3), np.linspace(1.0,1.0,1))\n",
    "    '''\n",
    "    return [[p.item() for p in parameter_values]\n",
    "            for parameter_values in np.nditer(np.meshgrid(*transformation_parameters))]\n",
    "\n",
    "def similarity3D_parameter_space_regular_sampling(thetaX, thetaY, thetaZ, tx, ty, tz, scale):\n",
    "    '''\n",
    "    Create a list representing a regular sampling of the 3D similarity transformation parameter space. As the\n",
    "    SimpleITK rotation parameterization uses the vector portion of a versor we don't have an \n",
    "intuitive way of specifying rotations. We therefore use the ZYX Euler angle parametrization and convert to\n",
    "    versor.\n",
    "    Args:\n",
    "        thetaX, thetaY, thetaZ: numpy ndarrays with the Euler angle values to use, in radians.\n",
    "        tx, ty, tz: numpy ndarrays with the translation values to use in mm.\n",
    "        scale: numpy array with the scale values to use.\n",
    "    Return:\n",
    "        List of lists representing the parameter space sampling (vx,vy,vz,tx,ty,tz,s).\n",
    "    '''\n",
    "    return [list(eul2quat(parameter_values[0],parameter_values[1], parameter_values[2])) + \n",
    "            [p.item() for p in parameter_values[3:]] for parameter_values in np.nditer(np.meshgrid(thetaX, thetaY, thetaZ, tx, ty, tz, scale))]\n",
    "    \n",
    "def similarity3D_parameter_space_random_sampling(thetaX, thetaY, thetaZ, tx, ty, tz, scale, n):\n",
    "    '''\n",
    "    Create a list representing a random (uniform) sampling of the 3D similarity transformation parameter space. As the\n",
    "    SimpleITK rotation parameterization uses the vector portion of a versor we don't have an \n",
    "    intuitive way of specifying rotations. We therefore use the ZYX Euler angle parametrization and convert to\n",
    "    versor.\n",
    "    Args:\n",
    "        thetaX, thetaY, thetaZ: Ranges of Euler angle values to use, in radians.\n",
    "        tx, ty, tz: Ranges of translation values to use in mm.\n",
    "        scale: Range of scale values to use.\n",
    "        n: Number of samples.\n",
    "    Return:\n",
    "        List of lists representing the parameter space sampling (vx,vy,vz,tx,ty,tz,s).\n",
    "    '''\n",
    "    theta_x_vals = (thetaX[1]-thetaX[0])*np.random.random(n) + thetaX[0]\n",
    "    theta_y_vals = (thetaY[1]-thetaY[0])*np.random.random(n) + thetaY[0]\n",
    "    theta_z_vals = (thetaZ[1]-thetaZ[0])*np.random.random(n) + thetaZ[0]\n",
    "    tx_vals = (tx[1]-tx[0])*np.random.random(n) + tx[0]\n",
    "    ty_vals = (ty[1]-ty[0])*np.random.random(n) + ty[0]\n",
    "    tz_vals = (tz[1]-tz[0])*np.random.random(n) + tz[0]\n",
    "    s_vals = (scale[1]-scale[0])*np.random.random(n) + scale[0]\n",
    "    res = list(zip(theta_x_vals, theta_y_vals, theta_z_vals, tx_vals, ty_vals, tz_vals, s_vals))\n",
    "    return [list(eul2quat(*(p[0:3]))) + list(p[3:7]) for p in res]\n",
    "    \n",
    "def eul2quat(ax, ay, az, atol=1e-8):\n",
    "    '''\n",
    "    Translate from Euler angles (ZYX order) to the quaternion representation of a rotation.\n",
    "    Args:\n",
    "        ax: X rotation angle in radians.\n",
    "        ay: Y rotation angle in radians.\n",
    "        az: Z rotation angle in radians.\n",
    "        atol: tolerance used for stable quaternion computation (qs==0 within this tolerance).\n",
    "    Return:\n",
    "        Numpy array with three entries representing the vectorial component of the quaternion.\n",
    "\n",
    "    '''\n",
    "    # Create rotation matrix using ZYX Euler angles and then compute quaternion using entries.\n",
    "    cx = np.cos(ax)\n",
    "    cy = np.cos(ay)\n",
    "    cz = np.cos(az)\n",
    "    sx = np.sin(ax)\n",
    "    sy = np.sin(ay)\n",
    "    sz = np.sin(az)\n",
    "    r=np.zeros((3,3))\n",
    "    r[0,0] = cz*cy \n",
    "    r[0,1] = cz*sy*sx - sz*cx\n",
    "    r[0,2] = cz*sy*cx+sz*sx     \n",
    "\n",
    "    r[1,0] = sz*cy \n",
    "    r[1,1] = sz*sy*sx + cz*cx \n",
    "    r[1,2] = sz*sy*cx - cz*sx\n",
    "\n",
    "    r[2,0] = -sy   \n",
    "    r[2,1] = cy*sx             \n",
    "    r[2,2] = cy*cx\n",
    "\n",
    "    # Compute quaternion: \n",
    "    qs = 0.5*np.sqrt(r[0,0] + r[1,1] + r[2,2] + 1)\n",
    "    qv = np.zeros(3)\n",
    "    # If the scalar component of the quaternion is close to zero, we\n",
    "    # compute the vector part using a numerically stable approach\n",
    "    if np.isclose(qs, 0.0, atol=atol): \n",
    "        i = np.argmax([r[0,0], r[1,1], r[2,2]])\n",
    "        j = (i+1)%3\n",
    "        k = (j+1)%3\n",
    "        w = np.sqrt(r[i,i] - r[j,j] - r[k,k] + 1)\n",
    "        qv[i] = 0.5*w\n",
    "        qv[j] = (r[i,j] + r[j,i])/(2*w)\n",
    "        qv[k] = (r[i,k] + r[k,i])/(2*w)\n",
    "    else:\n",
    "        denom = 4*qs\n",
    "        qv[0] = (r[2,1] - r[1,2])/denom\n",
    "        qv[1] = (r[0,2] - r[2,0])/denom\n",
    "        qv[2] = (r[1,0] - r[0,1])/denom\n",
    "    return qv"
   ]
  },
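  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of the Euler angle to quaternion conversion above: the vector part of a quaternion representing a rotation by $\\theta$ about a single axis is $\\sin(\\theta/2)$ times that axis. The following is a minimal, self-contained numpy sketch (independent of the notebook's eul2quat) that exercises the same non-degenerate branch of the computation:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def qv_about_z(theta):\n",
    "    # Quaternion vector part for a rotation of theta about the z axis,\n",
    "    # computed from the rotation matrix (non-degenerate branch).\n",
    "    c, s = np.cos(theta), np.sin(theta)\n",
    "    r = np.array([[c, -s, 0.0],\n",
    "                  [s,  c, 0.0],\n",
    "                  [0.0, 0.0, 1.0]])\n",
    "    qs = 0.5*np.sqrt(np.trace(r) + 1.0)\n",
    "    denom = 4.0*qs\n",
    "    return np.array([(r[2,1] - r[1,2])/denom,\n",
    "                     (r[0,2] - r[2,0])/denom,\n",
    "                     (r[1,0] - r[0,1])/denom])\n",
    "\n",
    "theta = np.pi/3\n",
    "qv = qv_about_z(theta)\n",
    "expected = np.array([0.0, 0.0, np.sin(theta/2)])  # closed form: sin(theta/2)*axis\n",
    "print(np.allclose(qv, expected))  # True\n",
    "```"
   ]
  },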
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create reference domain \n",
    "\n",
    "All input images will be resampled onto the reference domain.\n",
    "\n",
    "This domain is defined by two constraints: the number of pixels per dimension and the physical size we want the reference domain to occupy. The former is associated with the computational constraints of deep learning, where using a small number of pixels is desired. The latter is associated with the SimpleITK concept of an image: it occupies a region in physical space which should be large enough to encompass the object of interest."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dimension = data[0].GetDimension()\n",
    "\n",
    "# Physical image size corresponds to the largest physical size in the training set, or any other arbitrary size.\n",
    "reference_physical_size = np.zeros(dimension)\n",
    "for img in data:\n",
    "    reference_physical_size[:] = [(sz-1)*spc if (sz-1)*spc>mx else mx for sz,spc,mx in zip(img.GetSize(), img.GetSpacing(), reference_physical_size)]\n",
    "\n",
    "# Create the reference image with a zero origin, identity direction cosine matrix and dimension     \n",
    "reference_origin = np.zeros(dimension)\n",
    "reference_direction = np.identity(dimension).flatten()\n",
    "\n",
    "# Select an arbitrary number of pixels per dimension: the smallest size that yields the desired results, \n",
    "# or the size required by a pretrained network (e.g. 224x224 for VGG-16 when doing transfer learning). This will \n",
    "# often result in non-isotropic pixel spacing.\n",
    "reference_size = [128]*dimension \n",
    "reference_spacing = [ phys_sz/(sz-1) for sz,phys_sz in zip(reference_size, reference_physical_size) ]\n",
    "\n",
    "# Another possibility is that you want isotropic pixels. In that case you can specify the image size for one of\n",
    "# the axes and the others are determined by this choice. Below we choose to set the x axis to 128 and derive the\n",
    "# spacing accordingly. \n",
    "# Uncomment the following lines to use this strategy.\n",
    "#reference_size_x = 128\n",
    "#reference_spacing = [reference_physical_size[0]/(reference_size_x-1)]*dimension\n",
    "#reference_size = [int(phys_sz/(spc) + 1) for phys_sz,spc in zip(reference_physical_size, reference_spacing)]\n",
    "\n",
    "reference_image = sitk.Image(reference_size, data[0].GetPixelIDValue())\n",
    "reference_image.SetOrigin(reference_origin)\n",
    "reference_image.SetSpacing(reference_spacing)\n",
    "reference_image.SetDirection(reference_direction)\n",
    "\n",
    "# Always use the TransformContinuousIndexToPhysicalPoint to compute an indexed point's physical coordinates as \n",
    "# this takes into account size, spacing and direction cosines. For the vast majority of images the direction \n",
    "# cosines are the identity matrix, but when this isn't the case simply multiplying the central index by the \n",
    "# spacing will not yield the correct coordinates resulting in a long debugging session. \n",
    "reference_center = np.array(reference_image.TransformContinuousIndexToPhysicalPoint(np.array(reference_image.GetSize())/2.0))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data generation\n",
    "\n",
    "Once we have a reference domain we can augment the data using any of the SimpleITK global domain transformations. In this notebook we use a similarity transformation (the augment_images_spatial function below is agnostic to this specific choice).\n",
    "\n",
    "Note that you also need to create the labels for your augmented images. If these are just classes then your processing is minimal. If you are dealing with segmentation you will also need to transform the segmentation labels so that they match the transformed image. The following function easily accommodates this: just provide the labeled image as input and use the sitk.sitkNearestNeighbor interpolator so that you do not introduce labels that were not in the original segmentation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def augment_images_spatial(original_image, reference_image, T0, T_aug, transformation_parameters,\n",
    "                    output_prefix, output_suffix,\n",
    "                    interpolator = sitk.sitkLinear, default_intensity_value = 0.0):\n",
    "    '''\n",
    "    Generate the resampled images based on the given transformations.\n",
    "    Args:\n",
    "        original_image (SimpleITK image): The image which we will resample and transform.\n",
    "        reference_image (SimpleITK image): The image onto which we will resample.\n",
    "        T0 (SimpleITK transform): Transformation which maps points from the reference image coordinate system \n",
    "            to the original_image coordinate system.\n",
    "        T_aug (SimpleITK transform): Map points from the reference_image coordinate system back onto itself using the\n",
    "               given transformation_parameters. The reason we use this transformation as a parameter\n",
    "               is to allow the user to set its center of rotation to something other than zero.\n",
    "        transformation_parameters (List of lists): parameter values which we pass to T_aug.SetParameters().\n",
    "        output_prefix (string): output file name prefix (file name: output_prefix_p1_p2_..pn_.output_suffix).\n",
    "        output_suffix (string): output file name suffix (file name: output_prefix_p1_p2_..pn_.output_suffix).\n",
    "        interpolator: One of the SimpleITK interpolators.\n",
    "        default_intensity_value: The value to return if a point is mapped outside the original_image domain.\n",
    "    '''\n",
    "    all_images = [] # Used only for display purposes in this notebook.\n",
    "    for current_parameters in transformation_parameters:\n",
    "        T_aug.SetParameters(current_parameters)        \n",
    "        # Augmentation is done in the reference image space, so we first map the points from the reference image space\n",
    "        # back onto itself using T_aug (e.g. rotate the reference image) and then map to the original image space using T0.\n",
    "        T_all = sitk.Transform(T0)\n",
    "        T_all.AddTransform(T_aug)\n",
    "        aug_image = sitk.Resample(original_image, reference_image, T_all,\n",
    "                                  interpolator, default_intensity_value)\n",
    "        sitk.WriteImage(aug_image, output_prefix + '_' + \n",
    "                        '_'.join(str(param) for param in current_parameters) +'_.' + output_suffix)\n",
    "         \n",
    "        all_images.append(aug_image) # Used only for display purposes in this notebook.\n",
    "    return all_images # Used only for display purposes in this notebook."
   ]
  },
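  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Why sitk.sitkNearestNeighbor for label images? A linear interpolator averages neighboring labels and can create label values that never existed in the original segmentation, while nearest-neighbor interpolation preserves the original label set. A minimal numpy sketch of the effect (a hypothetical 1D label array; this is an illustration, not SimpleITK code):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "labels = np.array([0, 0, 2, 2], dtype=float)  # two labels: 0 (background) and 2\n",
    "sample_at = 1.5  # a resampled point falling between indices 1 and 2\n",
    "\n",
    "# Linear interpolation blends the two labels and invents a new one.\n",
    "linear = np.interp(sample_at, np.arange(len(labels)), labels)\n",
    "print(linear)   # 1.0 -- label 1 never existed in the segmentation\n",
    "\n",
    "# Nearest-neighbor keeps the original label set intact.\n",
    "nearest = labels[int(round(sample_at))]\n",
    "print(nearest)  # 2.0\n",
    "```"
   ]
  },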
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before we can use the augment_images_spatial function we need to compute the transformation which will map points between the reference image and the current image, as shown in the code cell below. \n",
    "\n",
    "Note that it is very easy to generate large amounts of data using a regular grid sampling in the transformation parameter space (`similarity3D_parameter_space_regular_sampling`): calls to np.linspace with $m$ parameters, each having $n$ values, result in $n^m$ images, so don't forget that these images are also saved to disk. **If you run the code below with regular grid sampling for 3D data you will generate 6561 volumes ($3^7$ parameter combinations times 3 volumes).**\n",
    "\n",
    "By default, the cell below uses random uniform sampling in the transformation parameter space (`similarity3D_parameter_space_random_sampling`). If you want to try regular sampling, uncomment the commented out code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "aug_transform = sitk.Similarity2DTransform() if dimension==2 else sitk.Similarity3DTransform()\n",
    "\n",
    "all_images = []\n",
    "\n",
    "for index,img in enumerate(data):\n",
    "    # Transform which maps from the reference_image to the current img with the translation mapping the image\n",
    "    # origins to each other.\n",
    "    transform = sitk.AffineTransform(dimension)\n",
    "    transform.SetMatrix(img.GetDirection())\n",
    "    transform.SetTranslation(np.array(img.GetOrigin()) - reference_origin)\n",
    "    # Modify the transformation to align the centers of the original and reference image instead of their origins.\n",
    "    centering_transform = sitk.TranslationTransform(dimension)\n",
    "    img_center = np.array(img.TransformContinuousIndexToPhysicalPoint(np.array(img.GetSize())/2.0))\n",
    "    centering_transform.SetOffset(np.array(transform.GetInverse().TransformPoint(img_center) - reference_center))\n",
    "    centered_transform = sitk.Transform(transform)\n",
    "    centered_transform.AddTransform(centering_transform)\n",
    "\n",
    "    # Set the augmenting transform's center so that rotation is around the image center.\n",
    "    aug_transform.SetCenter(reference_center)\n",
    "    \n",
    "    if dimension == 2:\n",
    "        # The parameters are scale (+-10%), rotation angle (+-10 degrees), x translation, y translation\n",
    "        transformation_parameters_list = parameter_space_regular_grid_sampling(np.linspace(0.9,1.1,3),\n",
    "                                                                               np.linspace(-np.pi/18.0,np.pi/18.0,3),\n",
    "                                                                               np.linspace(-10,10,3),\n",
    "                                                                               np.linspace(-10,10,3))\n",
    "    else:    \n",
    "        transformation_parameters_list = similarity3D_parameter_space_random_sampling(thetaX=(-np.pi/18.0,np.pi/18.0), \n",
    "                                                                                      thetaY=(-np.pi/18.0,np.pi/18.0), \n",
    "                                                                                      thetaZ=(-np.pi/18.0,np.pi/18.0), \n",
    "                                                                                      tx=(-10.0, 10.0),\n",
    "                                                                                      ty=(-10.0, 10.0), \n",
    "                                                                                      tz=(-10.0, 10.0), \n",
    "                                                                                      scale=(0.9,1.1), \n",
    "                                                                                      n=10)\n",
    "#         transformation_parameters_list = similarity3D_parameter_space_regular_sampling(np.linspace(-np.pi/18.0,np.pi/18.0,3),\n",
    "#                                                                                        np.linspace(-np.pi/18.0,np.pi/18.0,3),\n",
    "#                                                                                        np.linspace(-np.pi/18.0,np.pi/18.0,3),\n",
    "#                                                                                        np.linspace(-10,10,3),\n",
    "#                                                                                        np.linspace(-10,10,3),\n",
    "#                                                                                        np.linspace(-10,10,3),\n",
    "#                                                                                        np.linspace(0.9,1.1,3))\n",
    "        \n",
    "    generated_images = augment_images_spatial(img, reference_image, centered_transform, \n",
    "                                       aug_transform, transformation_parameters_list, \n",
    "                                       os.path.join(OUTPUT_DIR, 'spatial_aug'+str(index)), 'mha')\n",
    "    \n",
    "    if dimension==2: # in 2D we join all of the images into a 3D volume which we use for display.\n",
    "        all_images.append(sitk.JoinSeries(generated_images))\n",
    "# If working in 2D, display the resulting set of images.    \n",
    "if dimension==2:\n",
    "    gui.MultiImageDisplay(image_list=all_images, shared_slider=True, figure_size=(6,2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "## What about flipping?\n",
    "\n",
    "Reflection using SimpleITK can be done in one of several ways:\n",
    "1. Use an affine transform with the matrix component set to a reflection matrix. The columns of the matrix correspond to the $\\mathbf{x}, \\mathbf{y}$ and $\\mathbf{z}$ axes. The reflection matrix is constructed by keeping the standard basis vectors, $\\mathbf{e}_i, \\mathbf{e}_j$, that span the plane (3D) or axis (2D) we want to reflect through, and negating the remaining basis vector, $-\\mathbf{e}_k$.  \n",
    "    * Reflection about $xy$ plane: $[\\mathbf{e}_1, \\mathbf{e}_2, -\\mathbf{e}_3]$.\n",
    "    * Reflection about $xz$ plane: $[\\mathbf{e}_1, -\\mathbf{e}_2, \\mathbf{e}_3]$.\n",
    "    * Reflection about $yz$ plane: $[-\\mathbf{e}_1, \\mathbf{e}_2, \\mathbf{e}_3]$.\n",
    "2. Use the native slicing operator (e.g. img[:,::-1,:]) or the FlipImageFilter after the image is resampled onto the reference image grid. \n",
    "\n",
    "We prefer option 1 as it is computationally more efficient. It combines all transformations prior to resampling, while the other approach performs resampling onto the reference image grid followed by the reflection operation. An additional difference is that using slicing or the FlipImageFilter will also modify the image origin, while the resampling approach keeps the spatial location of the reference image origin intact. This minor difference is of no concern in deep learning as the content of the images is the same, but in SimpleITK two images are considered equivalent iff their content and spatial extent are the same.\n",
    "\n",
    "The following two cells correspond to the two approaches:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%timeit -n1 -r1\n",
    "# Approach 1, using an affine transformation\n",
    "\n",
    "flipped_images = []\n",
    "for index,img in enumerate(data):\n",
    "    # Compute the transformation which maps between the reference and current image (same as done above).\n",
    "    transform = sitk.AffineTransform(dimension)\n",
    "    transform.SetMatrix(img.GetDirection())\n",
    "    transform.SetTranslation(np.array(img.GetOrigin()) - reference_origin)\n",
    "    centering_transform = sitk.TranslationTransform(dimension)\n",
    "    img_center = np.array(img.TransformContinuousIndexToPhysicalPoint(np.array(img.GetSize())/2.0))\n",
    "    centering_transform.SetOffset(np.array(transform.GetInverse().TransformPoint(img_center) - reference_center))\n",
    "    centered_transform = sitk.Transform(transform)\n",
    "    centered_transform.AddTransform(centering_transform)\n",
    "    \n",
    "    flipped_transform = sitk.AffineTransform(dimension)    \n",
    "    flipped_transform.SetCenter(reference_image.TransformContinuousIndexToPhysicalPoint(np.array(reference_image.GetSize())/2.0))\n",
    "    if dimension==2: # matrices in SimpleITK are specified in row-major order\n",
    "        flipped_transform.SetMatrix([1,0,0,-1])\n",
    "    else:\n",
    "        flipped_transform.SetMatrix([1,0,0,0,-1,0,0,0,1])\n",
    "    centered_transform.AddTransform(flipped_transform)\n",
    "    \n",
    "    # Resample onto the reference image \n",
    "    flipped_images.append(sitk.Resample(img, reference_image, centered_transform, sitk.sitkLinear, 0.0))\n",
    "# Uncomment the following line to display the images (we don't want to time this)\n",
    "#disp_images(flipped_images, fig_size=(6,2))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%timeit -n1 -r1\n",
    "\n",
    "# Approach 2, flipping after resampling\n",
    "\n",
    "flipped_images = []\n",
    "for index,img in enumerate(data):\n",
    "    # Compute the transformation which maps between the reference and current image (same as done above).\n",
    "    transform = sitk.AffineTransform(dimension)\n",
    "    transform.SetMatrix(img.GetDirection())\n",
    "    transform.SetTranslation(np.array(img.GetOrigin()) - reference_origin)\n",
    "    centering_transform = sitk.TranslationTransform(dimension)\n",
    "    img_center = np.array(img.TransformContinuousIndexToPhysicalPoint(np.array(img.GetSize())/2.0))\n",
    "    centering_transform.SetOffset(np.array(transform.GetInverse().TransformPoint(img_center) - reference_center))\n",
    "    centered_transform = sitk.Transform(transform)\n",
    "    centered_transform.AddTransform(centering_transform)\n",
    "    # Resample onto the reference image \n",
    "    resampled_img = sitk.Resample(img, reference_image, centered_transform, sitk.sitkLinear, 0.0)\n",
    "    # We flip on the y axis (x, z are done similarly)\n",
    "    if dimension==2:\n",
    "        flipped_images.append(resampled_img[:,::-1])\n",
    "    else:\n",
    "        flipped_images.append(resampled_img[:,::-1,:])\n",
    "# Uncomment the following line to display the images (we don't want to time this)        \n",
    "#disp_images(flipped_images, fig_size=(6,2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Radial Distortion\n",
    "\n",
    "Some 2D medical imaging modalities, such as endoscopic video and X-ray images acquired with C-arms using image intensifiers, exhibit radial distortion. The common model for such distortion was described by Brown [\"Close-range camera calibration\", Photogrammetric Engineering, 37(8):855–866, 1971]:\n",
    "$$\n",
    "\\mathbf{p}_u = \\mathbf{p}_d + (\\mathbf{p}_d-\\mathbf{p}_c)(k_1r^2 + k_2r^4 + k_3r^6 + \\ldots)\n",
    "$$\n",
    "\n",
    "where:\n",
    "* $\\mathbf{p}_u$ is a point in the undistorted image\n",
    "* $\\mathbf{p}_d$ is a point in the distorted image\n",
    "* $\\mathbf{p}_c$ is the center of distortion\n",
    "* $r = \\|\\mathbf{p}_d-\\mathbf{p}_c\\|$\n",
    "* $k_i$ are coefficients of the radial distortion\n",
    "\n",
    "\n",
    "Using SimpleITK operators we represent this transformation using a deformation field as follows:"
   ]
  },
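  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before building the SimpleITK deformation field, we can sanity-check the Brown model on a few points in plain NumPy (a minimal sketch; the function name `undistort_points` is ours, and per the equation it maps distorted points to undistorted ones):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def undistort_points(points, center, k1, k2, k3):\n",
    "    delta = points - center                       # p_d - p_c\n",
    "    r2 = np.sum(delta**2, axis=1, keepdims=True)  # r^2 per point\n",
    "    scale = k1*r2 + k2*r2**2 + k3*r2**3           # k1*r^2 + k2*r^4 + k3*r^6\n",
    "    return points + delta*scale                   # p_u = p_d + (p_d - p_c)*(...)\n",
    "\n",
    "pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 20.0]])\n",
    "undistorted = undistort_points(pts, np.zeros(2), k1=1e-5, k2=0.0, k3=0.0)\n",
    "# The center point stays put; points further from the center move more.\n",
    "```\n",
    "\n",
    "The SimpleITK implementation below computes the same expression, but as a dense displacement field over the whole image grid."
   ]
  },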
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def radial_distort(image, k1, k2, k3, distortion_center=None):\n",
    "    c = distortion_center\n",
    "    if c is None: # The default distortion center coincides with the image center\n",
    "        c = np.array(image.TransformContinuousIndexToPhysicalPoint(np.array(image.GetSize())/2.0))\n",
    "    \n",
    "    # Compute the vector image (p_d - p_c) \n",
    "    delta_image = sitk.PhysicalPointSource( sitk.sitkVectorFloat64, image.GetSize(), image.GetOrigin(), image.GetSpacing(), image.GetDirection())\n",
    "    delta_image_list = [sitk.VectorIndexSelectionCast(delta_image,i) - c[i] for i in range(len(c))]\n",
    "    \n",
    "    # Compute the radial distortion expression\n",
    "    r2_image = sitk.NaryAdd([img**2 for img in delta_image_list])\n",
    "    r4_image = r2_image**2\n",
    "    r6_image = r2_image*r4_image\n",
    "    disp_image = k1*r2_image + k2*r4_image + k3*r6_image\n",
    "    displacement_image = sitk.Compose([disp_image*img for img in delta_image_list])\n",
    "    \n",
    "    displacement_field_transform = sitk.DisplacementFieldTransform(displacement_image)\n",
    "    return sitk.Resample(image, image, displacement_field_transform)\n",
    "\n",
    "k1 = 0.00001\n",
    "k2 = 0.0000000000001\n",
    "k3 = 0.0000000000001\n",
    "original_image = data[0]\n",
    "distorted_image = radial_distort(original_image, k1, k2, k3)\n",
    "# Use a grid image to highlight the distortion.\n",
    "grid_image = sitk.GridSource(outputPixelType=sitk.sitkUInt16, size=original_image.GetSize(), \n",
    "                             sigma=[0.1]*dimension, gridSpacing=[20.0]*dimension)\n",
    "grid_image.CopyInformation(original_image)\n",
    "distorted_grid = radial_distort(grid_image, k1, k2, k3)\n",
    "disp_images([original_image, distorted_image, distorted_grid], fig_size=(6,2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Transferring deformations - exercise for the interested reader\n",
    "\n",
    "Using SimpleITK we can readily transfer deformations from a spatio-temporal data set to another spatial data set to simulate temporal behavior. Case in point, using a 4D (3D+time) CT of the thorax we can estimate the respiratory motion using non-rigid registration and [Free Form Deformation](65_Registration_FFD.ipynb) or [displacement field](66_Registration_Demons.ipynb) transformations. We can then register a new spatial data set to the original spatial CT (non-rigidly) followed by application of the temporal deformations.\n",
    "\n",
    "Note that, unlike the arbitrary spatial transformations we used for data augmentation above, this approach is more computationally expensive as it involves multiple non-rigid registrations. Also note that, as the goal is to use the estimated transformations to create plausible deformations, you may be able to relax the required registration accuracy.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Augmentation using intensity modifications\n",
    "\n",
    "SimpleITK has many filters that are potentially relevant for data augmentation via modification of intensities. For example:\n",
    "* Image smoothing. Always read the documentation carefully; similar filters use different parametrizations, $\\sigma$ vs. variance ($\\sigma^2$):\n",
    "  * [Discrete Gaussian](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1DiscreteGaussianImageFilter.html)\n",
    "  * [Recursive Gaussian](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1RecursiveGaussianImageFilter.html)\n",
    "  * [Smoothing Recursive Gaussian](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1SmoothingRecursiveGaussianImageFilter.html)\n",
    "\n",
    "* Edge preserving image smoothing:\n",
    "  * [Bilateral image filtering](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1BilateralImageFilter.html)\n",
    "  * [Median filtering](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1MedianImageFilter.html)\n",
    "\n",
    "* Adding noise to your images:\n",
    "  * [Additive Gaussian](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1AdditiveGaussianNoiseImageFilter.html)\n",
    "  * [Salt and Pepper / Impulse](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1SaltAndPepperNoiseImageFilter.html)\n",
    "  * [Shot/Poisson](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1ShotNoiseImageFilter.html)\n",
    "  * [Speckle/multiplicative](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1SpeckleNoiseImageFilter.html)\n",
    "  \n",
    "* [Adaptive Histogram Equalization](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1AdaptiveHistogramEqualizationImageFilter.html)"
   ]
  },
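  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough NumPy sketch of what the noise filters listed above do (illustrative only, not the ITK implementations; all parameter values are arbitrary):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(42)\n",
    "img = np.full((4, 4), 100.0)  # toy constant-intensity image\n",
    "\n",
    "# Additive Gaussian: I + N(mean, sigma^2)\n",
    "gaussian = img + rng.normal(loc=0.0, scale=5.0, size=img.shape)\n",
    "\n",
    "# Salt and pepper: replace a random fraction of the pixels with min/max values\n",
    "salt_pepper = img.copy()\n",
    "mask = rng.random(img.shape) < 0.25\n",
    "salt_pepper[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))\n",
    "\n",
    "# Speckle (multiplicative): I * (1 + N(0, sigma^2))\n",
    "speckle = img * (1.0 + rng.normal(loc=0.0, scale=0.1, size=img.shape))\n",
    "```\n",
    "\n",
    "In the notebook itself we use the SimpleITK filters, which operate directly on images and handle the pixel types and meta-data for us."
   ]
  },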
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def augment_images_intensity(image_list, output_prefix, output_suffix):\n",
    "    '''\n",
    "    Generate intensity modified images from the originals.\n",
    "    Args:\n",
    "        image_list (iterable containing SimpleITK images): The images whose intensities we modify.\n",
    "        output_prefix (string): output file name prefix (file name: output_prefix + i + '_' + FilterName + '.' + output_suffix).\n",
    "        output_suffix (string): output file name suffix (file name: output_prefix + i + '_' + FilterName + '.' + output_suffix).\n",
    "    '''\n",
    "\n",
    "    # Create a list of intensity modifying filters, which we apply to the given images\n",
    "    filter_list = []\n",
    "    \n",
    "    # Smoothing filters\n",
    "    \n",
    "    filter_list.append(sitk.SmoothingRecursiveGaussianImageFilter())\n",
    "    filter_list[-1].SetSigma(2.0)\n",
    "    \n",
    "    filter_list.append(sitk.DiscreteGaussianImageFilter())\n",
    "    filter_list[-1].SetVariance(4.0)\n",
    "    \n",
    "    filter_list.append(sitk.BilateralImageFilter())\n",
    "    filter_list[-1].SetDomainSigma(4.0)\n",
    "    filter_list[-1].SetRangeSigma(8.0)\n",
    "    \n",
    "    filter_list.append(sitk.MedianImageFilter())\n",
    "    filter_list[-1].SetRadius(8)\n",
    "    \n",
    "    # Noise filters using default settings\n",
    "    \n",
    "    # Filter control via SetMean, SetStandardDeviation.\n",
    "    filter_list.append(sitk.AdditiveGaussianNoiseImageFilter())\n",
    "\n",
    "    # Filter control via SetProbability\n",
    "    filter_list.append(sitk.SaltAndPepperNoiseImageFilter())\n",
    "    \n",
    "    # Filter control via SetScale\n",
    "    filter_list.append(sitk.ShotNoiseImageFilter())\n",
    "    \n",
    "    # Filter control via SetStandardDeviation\n",
    "    filter_list.append(sitk.SpeckleNoiseImageFilter())\n",
    "\n",
    "    filter_list.append(sitk.AdaptiveHistogramEqualizationImageFilter())\n",
    "    filter_list[-1].SetAlpha(1.0)\n",
    "    filter_list[-1].SetBeta(0.0)\n",
    "\n",
    "    filter_list.append(sitk.AdaptiveHistogramEqualizationImageFilter())\n",
    "    filter_list[-1].SetAlpha(0.0)\n",
    "    filter_list[-1].SetBeta(1.0)\n",
    "    \n",
    "    aug_image_lists = [] # Used only for display purposes in this notebook.\n",
    "    for i,img in enumerate(image_list):\n",
    "        aug_image_lists.append([f.Execute(img) for f in filter_list])            \n",
    "        for aug_image,f in zip(aug_image_lists[-1], filter_list):\n",
    "            sitk.WriteImage(aug_image, output_prefix + str(i) + '_' +\n",
    "                            f.GetName() + '.' + output_suffix)\n",
    "    return aug_image_lists"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Modify the intensities of the original images using the set of SimpleITK filters described above. If we are working with 2D images the results will be displayed inline."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "intensity_augmented_images = augment_images_intensity(data, os.path.join(OUTPUT_DIR, 'intensity_aug'), 'mha')\n",
    "\n",
    "# In 2D we join all of the images into a 3D volume which we use for display.\n",
    "if dimension==2:\n",
    "    def list2_float_volume(image_list):\n",
    "        return sitk.JoinSeries([sitk.Cast(img, sitk.sitkFloat32) for img in image_list])\n",
    "        \n",
    "    all_images = [list2_float_volume(imgs) for imgs in intensity_augmented_images]\n",
    "    \n",
    "    # Compute reasonable window-level values for display (just use the range of intensity values\n",
    "    # from the original data).\n",
    "    original_window_level = []\n",
    "    statistics_image_filter = sitk.StatisticsImageFilter()\n",
    "    for img in data:\n",
    "        statistics_image_filter.Execute(img)\n",
    "        max_intensity = statistics_image_filter.GetMaximum()\n",
    "        min_intensity = statistics_image_filter.GetMinimum()\n",
    "        original_window_level.append((max_intensity-min_intensity, (max_intensity+min_intensity)/2.0))\n",
    "    gui.MultiImageDisplay(image_list=all_images, shared_slider=True, figure_size=(6,2), window_level_list=original_window_level)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "SimpleITK has a sigmoid filter that allows us to map intensities via this nonlinear function to our desired range. Unlike the standard sigmoid used as an activation function in neural networks, the filter's sigmoid is not necessarily centered at zero and its minimum and maximum output values are not necessarily 0 and 1.\n",
    "The filter itself is defined as:\n",
    "$$f(I) = (max_{output} - min_{output}) \\frac{1}{1+ e^{-\\frac{I-\\beta}{\\alpha}}} + min_{output}$$\n",
    "\n",
    "where $\\alpha$ controls the steepness (the smaller the $\\alpha$, the steeper the slope; the larger the $\\alpha$, the closer we get to a linear mapping in the output range) and $\\beta$ is the intensity value at the sigmoid midpoint."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "def sigmoid_mapping(image, curve_steepness, output_min=0, output_max=1.0, intensity_midpoint=None):\n",
    "    '''\n",
    "    Map the image using a sigmoid function.\n",
    "    Args:\n",
    "        image (SimpleITK image): scalar input image.\n",
    "        curve_steepness: Control the sigmoid steepness, the larger the number the steeper the curve.\n",
    "        output_min: Minimum value for output image, default 0.0 .\n",
    "        output_max: Maximum value for output image, default 1.0 .\n",
    "        intensity_midpoint: intensity value defining the sigmoid midpoint (x coordinate), default is the\n",
    "                            median image intensity.\n",
    "    Return:\n",
    "        SimpleITK image with float pixel type.\n",
    "    '''\n",
    "    if intensity_midpoint is None:\n",
    "        intensity_midpoint = np.median(sitk.GetArrayViewFromImage(image))\n",
    "\n",
    "    sig_filter = sitk.SigmoidImageFilter()\n",
    "    sig_filter.SetOutputMinimum(output_min)\n",
    "    sig_filter.SetOutputMaximum(output_max)\n",
    "    sig_filter.SetAlpha(1.0/curve_steepness)\n",
    "    sig_filter.SetBeta(float(intensity_midpoint))\n",
    "    return sig_filter.Execute(sitk.Cast(image, sitk.sitkFloat64))\n",
    "\n",
    "# Change the order of magnitude of curve steepness [1.0,0.1,0.01] to see the effect of this parameter.\n",
    "# Also change it from positive to negative.\n",
    "disp_images([sigmoid_mapping(img, curve_steepness=0.01) for img in data], fig_size=(6,2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "While the sigmoid mapping visually appears to work as expected, it is always good to \"trust but verify\". In the next cell we create a 1D image and plot the resulting sigmoid mapped values, ensuring that what we expect is indeed what is happening. This also allows us to see the effects in a more controlled manner. \n",
    "\n",
    "To see the effects of various parameter combinations try:\n",
    "* setting the `curve_steepness` to [1.0, 0.1, 0.01, -0.01, -0.1, -1.0]\n",
    "* setting the `intensity_midpoint` to [-50, 0, 50]."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "#Create a 1D image with values in [-100,100].\n",
    "arr_x = np.array([list(range(-100,101))])\n",
    "image1D = sitk.GetImageFromArray(arr_x)\n",
    "plt.figure()\n",
    "plt.plot(arr_x.ravel(), \n",
    "         sitk.GetArrayViewFromImage(sigmoid_mapping(image1D, curve_steepness=1.0, intensity_midpoint = 0)).ravel());"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Histogram equalization of images (increasing their entropy) prior to use with a deep learning network is a common preprocessing step. Unfortunately, ITK, and consequently SimpleITK, do not have a histogram equalization filter.\n",
    "\n",
    "The following cell implements this functionality, using NumPy, for all integer scalar SimpleITK images (2D, 3D)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def histogram_equalization(image, \n",
    "                           min_target_range = None, \n",
    "                           max_target_range = None,\n",
    "                           use_target_range = True):\n",
    "    '''\n",
    "    Histogram equalization of scalar images whose single channel has an integer\n",
    "    type. The goal is to map the original intensities so that resulting \n",
    "    histogram is more uniform (increasing the image's entropy).\n",
    "    Args:\n",
    "        image (SimpleITK.Image): A SimpleITK scalar image whose pixel type\n",
    "                                 is an integer (sitkUInt8,sitkInt8...\n",
    "                                 sitkUInt64, sitkInt64).\n",
    "        min_target_range (scalar): Minimal value for the target range. If None\n",
    "                                   then use the minimal value for the scalar pixel\n",
    "                                   type (e.g. 0 for sitkUInt8).\n",
    "        max_target_range (scalar): Maximal value for the target range. If None\n",
    "                                   then use the maximal value for the scalar pixel\n",
    "                                   type (e.g. 255 for sitkUInt8).\n",
    "        use_target_range (bool): If true, the resulting image has values in the\n",
    "                                 target range, otherwise the resulting values\n",
    "                                 are in [0,1].\n",
    "    Returns:\n",
    "        SimpleITK.Image: A scalar image with the same pixel type as the input image\n",
    "                         or a sitkFloat64 (depending on the use_target_range value).\n",
    "    '''\n",
    "    arr = sitk.GetArrayViewFromImage(image)\n",
    "    \n",
    "    i_info = np.iinfo(arr.dtype)\n",
    "    if min_target_range is None:\n",
    "        min_target_range = i_info.min\n",
    "    else:\n",
    "        min_target_range = np.max([i_info.min, min_target_range])\n",
    "    if max_target_range is None:\n",
    "        max_target_range = i_info.max\n",
    "    else:\n",
    "        max_target_range = np.min([i_info.max, max_target_range])\n",
    "\n",
    "    min_val = arr.min()\n",
    "    number_of_bins = arr.max() - min_val + 1\n",
    "    # using ravel, not flatten, as it does not involve memory copy\n",
    "    hist = np.bincount((arr-min_val).ravel(), minlength=number_of_bins)\n",
    "    cdf = np.cumsum(hist)\n",
    "    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])\n",
    "    res = cdf[arr-min_val]\n",
    "    if use_target_range:\n",
    "        res = (min_target_range + res*(max_target_range-min_target_range)).astype(arr.dtype)\n",
    "    res_image = sitk.GetImageFromArray(res)\n",
    "    # Preserve the original image's meta-data (origin, spacing, direction cosine).\n",
    "    res_image.CopyInformation(image)\n",
    "    return res_image\n",
    "\n",
    "#cast the images to int16 because data[0] is float32 and the histogram equalization only works \n",
    "#on integer types.\n",
    "disp_images([histogram_equalization(sitk.Cast(img,sitk.sitkInt16)) for img in data], fig_size=(6,2))"
   ]
  },
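  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick \"trust but verify\" check of the CDF-based mapping, independent of SimpleITK, we can equalize synthetic normally distributed data and confirm that the result is approximately uniform (sample size and tolerance are arbitrary):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "arr = rng.normal(loc=128, scale=20, size=10000).astype(np.int32)\n",
    "\n",
    "# Same CDF-based mapping as histogram_equalization above, on a flat array.\n",
    "min_val = arr.min()\n",
    "hist = np.bincount(arr - min_val)\n",
    "cdf = np.cumsum(hist)\n",
    "cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])\n",
    "equalized = cdf[arr - min_val]\n",
    "\n",
    "# If the mapping works, roughly a fraction q of the equalized values\n",
    "# lies below each quantile q.\n",
    "uniformity_error = max(abs(np.mean(equalized <= q) - q) for q in [0.25, 0.5, 0.75])\n",
    "```"
   ]
  },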
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, you can easily create intensity variations that are specific to your domain, such as the spatially varying multiplicative and additive transformation shown below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def mult_and_add_intensity_fields(original_image):\n",
    "    '''\n",
    "    Modify the intensities using multiplicative and additive Gaussian bias fields.\n",
    "    '''\n",
    "    # Gaussian image with same meta-information as original (size, spacing, direction cosine)\n",
    "    # Sigma is half the image's physical size and mean is the center of the image. \n",
    "    g_mult = sitk.GaussianSource(original_image.GetPixelIDValue(),\n",
    "                             original_image.GetSize(),\n",
    "                             [(sz-1)*spc/2.0 for sz, spc in zip(original_image.GetSize(), original_image.GetSpacing())],\n",
    "                             original_image.TransformContinuousIndexToPhysicalPoint(np.array(original_image.GetSize())/2.0),\n",
    "                             255,\n",
    "                             original_image.GetOrigin(),\n",
    "                             original_image.GetSpacing(),\n",
    "                             original_image.GetDirection())\n",
    "\n",
    "    # Gaussian image with same meta-information as original (size, spacing, direction cosine)\n",
    "    # Sigma is 1/8 the image's physical size and mean is at 1/16 of the size \n",
    "    g_add = sitk.GaussianSource(original_image.GetPixelIDValue(),\n",
    "                             original_image.GetSize(),\n",
    "                             [(sz-1)*spc/8.0 for sz, spc in zip(original_image.GetSize(), original_image.GetSpacing())],\n",
    "                             original_image.TransformContinuousIndexToPhysicalPoint(np.array(original_image.GetSize())/16.0),\n",
    "                             255,\n",
    "                             original_image.GetOrigin(),\n",
    "                             original_image.GetSpacing(),\n",
    "                             original_image.GetDirection())\n",
    "    \n",
    "    return g_mult*original_image+g_add\n",
    "\n",
    "disp_images([mult_and_add_intensity_fields(img) for img in data], fig_size=(6,2))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
