{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Using Machine Learning Tools 2021, Assignment 3\n",
    "\n",
    "## Sign Language Image Classification using Deep Learning"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Overview\n",
    "\n",
    "In this assignment you will implement different deep learning networks to classify images of hands in poses that correspond to letters in American Sign Language. The dataset is contained in the assignment zip file, along with some images and a text file describing the dataset. It is similar in many ways to other MNIST datasets.\n",
    "\n",
    "The main aims of the assignment are:\n",
    "\n",
    " - To implement and train different types of deep learning network;\n",
    " \n",
    " - To systematically optimise the architecture and parameters of the networks;\n",
    "  \n",
    " - To explore over-fitting and know what appropriate actions to take in these cases.\n",
    " \n",
    "\n",
    "It is the intention that this assignment will take you through the process of implementing optimised deep learning approaches. The way that you work is more important than the results for this assignment, as what is most crucial for you to learn is how to take a dataset, understand the problem, write appropriate code, optimize performance and present results. A good understanding of the different aspects of this process and how to put them together well (which will not always be the same, since different problems come with different constraints or difficulties) is the key to being able to effectively use deep learning techniques in practice.\n",
    "\n",
    "This assignment relates to the following ACS CBOK areas: abstraction, design, hardware and software, data and information, HCI and programming.\n",
    "\n",
    "## Instructions\n",
    "\n",
    "While you are free to use whatever IDE you like to develop your code, your submission should be formatted as a Jupyter notebook that interleaves Python code with output, commentary and analysis. \n",
    "- Your code must use the current stable versions of python libraries, not outdated versions.\n",
    "- All data processing must be done within the notebook after calling appropriate load functions.\n",
    "- Comment your code, so that its purpose is clear to the reader!\n",
    "- **Before submitting your notebook, make sure to reset the kernel and run all cells in your final notebook so that it works correctly!**\n",
    "- In the submission file name, do not use spaces or special characters.\n",
    "\n",
    "This assignment is divided into several tasks. Use this notebook and enter your code, results and answer text analysis under the exact number that it belongs to!\n",
    "\n",
    "Make sure to answer every question with **separate answer text (“Answer: …”) in a Markdown cell** and check that you answered all sub-questions/aspects within the question. The text answers are worth points!\n",
    "\n",
    "Make the **figures self-explanatory and unambiguous.** Always include axis labels, if available with units, unique colours and markers for each curve/type of data, a legend and a title. Give every figure a number (e.g. at start of title), so that it can be referred to from different parts of the text/notebook. This is also worth points.\n",
    "\n",
    "The assignment is self-sufficient. If you need to \"evaluate\" or \"compare\", then **use the diagrams and metrics generated in the preceding parts** of the assignment, not new ones.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Outline\n",
    "\n",
    "The main steps in the assignment are outlined here, so that you can appreciate the bigger picture. Some of the steps are purely for testing your knowledge and demonstrating certain things, but in general it outlines the main process that you would go through to solve a practical problem.\n",
    "\n",
    "- To start with we load the data, visualise and explore it. You should always do this in any problem.\n",
    "\n",
    "- After this we will implement a simple deep learning network that will act as our baseline for comparisons and optimisations. In this assignment we will specify the settings, but in practice you can usually find these from published papers, blogs/competitions, advice from colleagues, or just a little bit of trial and error. \n",
    "\n",
    "- Afterwards, the bulk of the assignment will focus on tuning the network and trying alternatives to give the best performance. This is something you will need to do systematically, and demonstrates the approach that you would take in any practical problem. Some limitations will be given, in order to help you in this assignment, but it should be clear how to extrapolate these to a more general setting for other datasets and tasks.\n",
    "\n",
    "- Once the optimal network is found, the performance will be evaluated.\n",
    "\n",
    "- A free choice element at the end will allow you to explore some extra aspects, beyond the limitations imposed above.\n",
    "\n",
    "\n",
    "Feel free to use code from the workshops as a base for this assignment but be aware that they will normally not do *exactly* what you want (code examples rarely do!) and so you will need to make suitable modifications."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Load and inspect the data (5 Points)\n",
    "\n",
    "We will use a dataset that contains images of hands in poses corresponding to letters in Amercian Sign Language. Each image is small (similar to MNIST size) and we will build a CNN classifier (using Keras) to determine the corresponding letter for an image. \n",
    "\n",
    "1. Load the dataset, look at examples of the images and the summary statistics and information for the class labels. (1 Point)\n",
    "2. Perform any necessary pre-processing of the images. (1 Point)\n",
    "3. Visulise the labels and make any adjustments you deem necessary. (1 Point)\n",
    "4. Create a validation set and any necessary test sets using only the supplied *testing* dataset.  It is unusual to do this, but here the training set contains a lot of non-independent, augmented images and it is important that the validation images must be independent of the training data and not made from augmented instances of training images. (2 Points)"
   ]
  },
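  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The loading and splitting steps above might be sketched as follows. This is only a sketch under assumptions: the file names (`sign_mnist_train.csv`, `sign_mnist_test.csv`) and the CSV layout (first column label, remaining columns 28x28 pixel values) are guesses about the supplied dataset, so check them against the description file in the assignment zip.\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "# Assumed file names and layout - verify against the dataset description.\n",
    "train_df = pd.read_csv('sign_mnist_train.csv')\n",
    "test_df = pd.read_csv('sign_mnist_test.csv')\n",
    "\n",
    "# First column is the label; the rest are grey-scale pixels, scaled to [0, 1].\n",
    "X_train = train_df.iloc[:, 1:].to_numpy().reshape(-1, 28, 28, 1) / 255.0\n",
    "y_train = train_df.iloc[:, 0].to_numpy()\n",
    "X_rest = test_df.iloc[:, 1:].to_numpy().reshape(-1, 28, 28, 1) / 255.0\n",
    "y_rest = test_df.iloc[:, 0].to_numpy()\n",
    "\n",
    "# Split the supplied *testing* data into validation and test sets, since the\n",
    "# training set contains augmented (non-independent) images.\n",
    "X_val, X_test, y_val, y_test = train_test_split(\n",
    "    X_rest, y_rest, test_size=0.5, random_state=42, stratify=y_rest)\n",
    "```"
   ]
  },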
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "### Your code and answers here"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Build and train a CNN (15 Points)\n",
    "1. Build a CNN using Keras that has the following settings: (5 Points)\n",
    " - Layers: Conv32 - MaxPooling - Conv64 - MaxPooling - Conv128 - Flatten - Dense(100) - Output\n",
    "    - Note that Conv32 means a convolutional layer with 32 filters, etc.\n",
    " - 3x3 kernel size,\n",
    " - ReLU activation functions for hidden layers,\n",
    " - No dropout layers,\n",
    " - No BatchNorm layers.\n",
    "2. Train this network using appropriate data and metrics. The following settings should be used in training: (5 Points)\n",
    " - Nadam optimiser,\n",
    " - Fixed learning rate, using the default value for this optimiser,\n",
    " - Early stopping.\n",
    "3. Display the learning curves, confusion matrix and performance values. (5 Points) "
   ]
  },
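  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal Keras sketch of the architecture and training settings specified above (assuming 28x28 grey-scale inputs and integer labels; the class count and the early-stopping patience are example values, so use what you found in section 1):\n",
    "\n",
    "```python\n",
    "from tensorflow import keras\n",
    "\n",
    "n_classes = 25  # assumption - use the number of classes found in section 1\n",
    "\n",
    "model = keras.Sequential([\n",
    "    keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),\n",
    "    keras.layers.MaxPooling2D(),\n",
    "    keras.layers.Conv2D(64, 3, activation='relu'),\n",
    "    keras.layers.MaxPooling2D(),\n",
    "    keras.layers.Conv2D(128, 3, activation='relu'),\n",
    "    keras.layers.Flatten(),\n",
    "    keras.layers.Dense(100, activation='relu'),\n",
    "    keras.layers.Dense(n_classes, activation='softmax'),\n",
    "])\n",
    "\n",
    "# Nadam at its default learning rate, with early stopping on validation loss.\n",
    "model.compile(optimizer=keras.optimizers.Nadam(),\n",
    "              loss='sparse_categorical_crossentropy',\n",
    "              metrics=['accuracy'])\n",
    "early_stop = keras.callbacks.EarlyStopping(patience=5,\n",
    "                                           restore_best_weights=True)\n",
    "```"
   ]
  },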
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "### Your code and answers here"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Hyper-parameter optimisation (14 Points)\n",
    "Take the base network from the previous part and perform *separate* hyper-parameter optimisation over the following 2 hyper-parameters. In both cases leave the layer sizes and other configurations constant.\n",
    "\n",
    "1. Optimise the learning rate value, or learning rate schedule. (3 Points)\n",
    "2. Optimise the L2 regularisation on the kernels in all layers. (3 Points)\n",
    "3. Display your results as two plots of the performance versus the hyper-parameter value(s). Practice unambiguous and organised visualisation, by labelling the axis, using a legend, title, axis limits and suitable marker and line styles. Make sure to choose the axis limits, such that the important parts of the data are clearly visible. (6 Points)\n",
    "4. Question: What are the best hyper-parameter values to use? Where can we see this in the diagrams and outputs generated above? (2 Points)"
   ]
  },
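  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One possible shape for the learning-rate sweep in point 1 is sketched below. It assumes a `build_model()` helper that rebuilds the section 2 network, the data splits and `early_stop` callback from earlier sections, and an example grid of candidate rates:\n",
    "\n",
    "```python\n",
    "import matplotlib.pyplot as plt\n",
    "from tensorflow import keras\n",
    "\n",
    "lrs = [1e-4, 3e-4, 1e-3, 3e-3, 1e-2]  # example grid - widen/refine as needed\n",
    "val_accs = []\n",
    "for lr in lrs:\n",
    "    model = build_model()  # assumed helper rebuilding the section 2 network\n",
    "    model.compile(optimizer=keras.optimizers.Nadam(learning_rate=lr),\n",
    "                  loss='sparse_categorical_crossentropy',\n",
    "                  metrics=['accuracy'])\n",
    "    history = model.fit(X_train, y_train, validation_data=(X_val, y_val),\n",
    "                        epochs=50, callbacks=[early_stop], verbose=0)\n",
    "    val_accs.append(max(history.history['val_accuracy']))\n",
    "\n",
    "plt.semilogx(lrs, val_accs, 'o-', label='validation accuracy')\n",
    "plt.xlabel('Learning rate')\n",
    "plt.ylabel('Best validation accuracy')\n",
    "plt.title('Fig. 1: Validation accuracy vs learning rate')\n",
    "plt.legend()\n",
    "plt.show()\n",
    "```\n",
    "\n",
    "The same loop structure works for the L2 sweep in point 2, with `keras.regularizers.l2` applied to the kernels instead of varying the learning rate."
   ]
  },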
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "### Your code and answers here"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. Larger CNN and ResNet (16 Points)\n",
    "In this part you will implement a larger version of the previous CNN as well as a ResNet version of this network. \n",
    "1. Implement a CNN with the following architecture:\n",
    " - Layers: Conv32 - Conv32 - MaxPooling - Conv64 - Conv64 - MaxPooling - Conv128 - Conv128 - Flatten - Dense(100) - Output\n",
    "Keep the same settings for other parameters as was used in section 2. Train this network with the Sign Language dataset. (5 Points)\n",
    "2. ResNet: Start with making a function that builds residual modules with two convolutional layers and a skip connection that adds the input to the output of the second layer, prior to the use of the activation function. Then build a new network similar to the one above (point 1), but replace the convolutional layer pairs in the network with ResNet modules. Note that you will need to use the functional form of Keras models for this. An introduction ResNet can be found in  chapter 14 of Géron and in Workshop 11. Train this network with the Sign Language dataset. (5 Points)\n",
    "4. Display your learning curves and performance results. (3 Points)\n",
    "5. Question: Which network performs better? Where can we see this in the outputs generated above? (3 Points)"
   ]
  },
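  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sketch of the residual module described in point 2, using the functional form of Keras (the 1x1 convolution for mismatched channel counts is one common choice, not the only one):\n",
    "\n",
    "```python\n",
    "from tensorflow import keras\n",
    "\n",
    "def residual_module(x, filters):\n",
    "    # Two 3x3 conv layers; the skip connection is added before the final ReLU.\n",
    "    shortcut = x\n",
    "    y = keras.layers.Conv2D(filters, 3, padding='same', activation='relu')(x)\n",
    "    y = keras.layers.Conv2D(filters, 3, padding='same')(y)\n",
    "    if shortcut.shape[-1] != filters:\n",
    "        # 1x1 conv so the channel counts match before the addition\n",
    "        shortcut = keras.layers.Conv2D(filters, 1, padding='same')(shortcut)\n",
    "    y = keras.layers.Add()([shortcut, y])\n",
    "    return keras.layers.Activation('relu')(y)\n",
    "```"
   ]
  },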
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "### Your code and answers here"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5. Final evaluation (10 Points)\n",
    "1. Compare the networks using accuracy and choose the best network (from among all the options you have explored above in sections 2, 3 and 4) and display the accuracy of the best network. (5 Points)\n",
    "2. Calculate the confusion matrix from these results and show the matrix graphically. (3 Points)\n",
    "3. Question: Which class is most often incorrectly classified and what is the class that it is most commonly mistaken for? Explain your reasoning. (2 Points)"
   ]
  },
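  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The confusion matrix in point 2 could be produced along these lines (assuming a `best_model` and the test split from earlier sections):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay\n",
    "\n",
    "y_pred = np.argmax(best_model.predict(X_test), axis=1)\n",
    "cm = confusion_matrix(y_test, y_pred)\n",
    "ConfusionMatrixDisplay(cm).plot(cmap='Blues')\n",
    "plt.title('Fig. 2: Confusion matrix for the best network')\n",
    "plt.show()\n",
    "# The largest off-diagonal entries point to the most commonly confused classes.\n",
    "```"
   ]
  },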
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "### Your code and answers here"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 6. Joint Optimisation (20 Points)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this section there is free choice - you can choose which **three** hyper-parameters/settings to investigate in order to improve the classification performance. Make sure that you clearly explain your process and show appropriate results. Again, the value achieved for the final performance are not that important - it is the process and display of understanding that counts.\n",
    "\n",
    "Options are:\n",
    " - BatchNormalization\n",
    " - Activation function\n",
    " - Optimiser\n",
    " - Dropout rate in a dropout layer associated with the final dense layer (before the output layer)\n",
    " - Vary the number of *filters* in the Conv layers\n",
    " - Vary the *number* of Conv2D and MaxPooling *layers* and their ordering\n",
    " \n",
    "Report your findings and show the final results, in the way that you might report your work to that of a technically minded manager/supervisor who had asked you to perform this optimisation. This should include enough information to demonstrate that you have followed a correct procedure. \n",
    "\n",
    "Points will be awarded as such: 6 for approach taken and rationale/explanation; 7 for code and correct training; 7 for display of selected intermediate and final results."
   ]
  },
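  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As one illustration of how a single setting from the options above might be investigated, the sketch below sweeps the dropout rate before the output layer. Here `build_model_with_dropout` is a hypothetical helper (not defined in this assignment) that rebuilds the network with a `Dropout` layer after the final dense layer:\n",
    "\n",
    "```python\n",
    "from tensorflow import keras\n",
    "\n",
    "results = {}\n",
    "for rate in [0.0, 0.2, 0.4, 0.6]:  # example grid of dropout rates\n",
    "    model = build_model_with_dropout(rate)  # hypothetical builder\n",
    "    model.compile(optimizer=keras.optimizers.Nadam(),\n",
    "                  loss='sparse_categorical_crossentropy',\n",
    "                  metrics=['accuracy'])\n",
    "    history = model.fit(X_train, y_train, validation_data=(X_val, y_val),\n",
    "                        epochs=50, callbacks=[early_stop], verbose=0)\n",
    "    results[rate] = max(history.history['val_accuracy'])\n",
    "```\n",
    "\n",
    "Repeat the same pattern for the other two chosen settings, and report the intermediate sweeps as well as the final combined result."
   ]
  },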
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "### Your code and answers here"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Assessment\n",
    "Hand in your notebook as an .ipynb file via the MyUni page. Make sure your notebook includes your code and formatted (Markdown) text blocks explaining what you have done. Your mark will be based on both code correctness and the quality of your comments and analysis. \n",
    "\n",
    "The assignment is worth 40% of your overall mark for the course.\n",
    "\n",
    "Mark Jenkinson  \n",
    "May 2021"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
