{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Explainable AI"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Why?\n",
    "\n",
    "Well, we want to know why. Why does a machine think the way it does? When AlphaGo places a stone, what value does it see in that move? When a model says that a fish is a human, what does it see in that fish that makes it look human? Being able to explain how a model works helps us understand and improve existing systems."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Main approaches for explainable AI systems\n",
    "\n",
    "Explainable AI systems tend to follow a few common approaches, described in the sections below."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Just explain it\n",
    "\n",
    "Models like decision trees are easy to explain, because they simply make a sequence of decisions: it is fairly easy to see where things go wrong along the chain of splits. Likewise, with a model like k-nearest neighbors, you know how it labels things because it follows an algorithm that you understand.\n",
    "\n",
    "Most directly explainable models are based on algorithms you can step through, as opposed to numeric models that just crunch numbers, which is what deep learning models are."
   ]
  },
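  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch (the features and thresholds below are made up for illustration), here is a hand-rolled decision tree whose prediction can be explained simply by reading back the chain of comparisons it made:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A tiny hand-rolled decision tree: the explanation is just the\n",
    "# list of comparisons the prediction walked through.\n",
    "# Feature names and thresholds are hypothetical.\n",
    "def predict_with_path(petal_length, petal_width):\n",
    "    path = []\n",
    "    if petal_length < 2.5:\n",
    "        path.append(\"petal_length < 2.5\")\n",
    "        return \"setosa\", path\n",
    "    path.append(\"petal_length >= 2.5\")\n",
    "    if petal_width < 1.8:\n",
    "        path.append(\"petal_width < 1.8\")\n",
    "        return \"versicolor\", path\n",
    "    path.append(\"petal_width >= 1.8\")\n",
    "    return \"virginica\", path\n",
    "\n",
    "label, path = predict_with_path(5.1, 2.0)\n",
    "print(label, \"because\", \" and \".join(path))"
   ]
  },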
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Remove a part, and see what gets affected\n",
    "\n",
    "If a picture of a panda is missing its head, does the model still call it a panda? Probably. However, if it is missing its belly, the model might think it looks like a zebra. When you want to see which part of the input affects the model's decision the most, you can simply remove (occlude) that part and see whether the model changes its mind. In the case above, the belly of a panda is much more characteristic than its head, because the model relies on the belly to make the correct decision."
   ]
  },
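  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal occlusion sketch, with a toy scoring function standing in for a real model and a synthetic 6x6 image: grey out one patch at a time and measure how much the score drops."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy \"model\": scores a 6x6 image (list of lists) by the mean\n",
    "# brightness of its centre 2x2 patch; a real network would go here.\n",
    "def score(image):\n",
    "    patch = [image[r][c] for r in (2, 3) for c in (2, 3)]\n",
    "    return sum(patch) / len(patch)\n",
    "\n",
    "image = [[1.0] * 6 for _ in range(6)]\n",
    "baseline = score(image)\n",
    "\n",
    "# Occlude (zero out) a 2x2 patch at each location; record the score drop.\n",
    "drops = {}\n",
    "for name, (r0, c0) in {\"corner\": (0, 0), \"centre\": (2, 2)}.items():\n",
    "    occluded = [row[:] for row in image]\n",
    "    for r in range(r0, r0 + 2):\n",
    "        for c in range(c0, c0 + 2):\n",
    "            occluded[r][c] = 0.0\n",
    "    drops[name] = baseline - score(occluded)\n",
    "\n",
    "print(drops)  # the centre patch matters to this model; the corner does not"
   ]
  },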
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### See what parts of the input yield the biggest gradients\n",
    "\n",
    "We love using gradients in deep learning. The gradient of the output with respect to the input tells you how much the output changes when the input moves in the direction of that gradient. So if the input gradients are large in some region, that part of the input is probably very important to the model's decision. This technique is especially popular with images, where the result has a special name: a saliency map."
   ]
  }
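  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal saliency sketch for a linear model, with made-up weights and input: the score is the dot product of w and x, so its gradient with respect to each input element is just w, and the absolute gradient ranks how sensitive the score is to each element."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Saliency for a linear \"model\": score = sum(w[i] * x[i]), so the\n",
    "# gradient of the score w.r.t. x[i] is exactly w[i].\n",
    "# The weights and input below are made up for illustration.\n",
    "w = [0.0, 0.1, 5.0, -4.0]\n",
    "x = [1.0, 1.0, 1.0, 1.0]\n",
    "\n",
    "score = sum(wi * xi for wi, xi in zip(w, x))\n",
    "grad = w[:]                        # analytic gradient of a linear model\n",
    "saliency = [abs(g) for g in grad]  # large magnitude = important input\n",
    "\n",
    "most_important = saliency.index(max(saliency))\n",
    "print(most_important)  # input 2 influences the score the most"
   ]
  }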
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
