{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "bXoQir0P3P_L"
   },
   "source": [
    "# 10-714: Homework 1\n",
    "\n",
    "This homework will get you started with your implementation of the **needle** (**ne**cessary **e**lements of **d**eep **le**arning) library that you will develop throughout this course.  In particular, the goal of this assignment is to build a basic **automatic differentiation** framework, then use it to re-implement the simple two-layer neural network you used for the MNIST digit classification problem in HW0.\n",
    "\n",
    "First, as you did for HW0, make a copy of this notebook file by selecting \"Save a copy in Drive\" from the \"File\" menu.  Then run the code block below to set up the assignment and install the necessary packages."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Code to set up the assignment\n",
    "from google.colab import drive\n",
    "drive.mount('/content/drive')\n",
    "%cd /content/drive/MyDrive/\n",
    "!mkdir -p 10714\n",
    "%cd /content/drive/MyDrive/10714\n",
    "!git clone https://github.com/dlsys10714/hw1.git\n",
    "%cd /content/drive/MyDrive/10714/hw1\n",
    "\n",
    "!pip3 install --upgrade --no-deps git+https://github.com/dlsys10714/mugrade.git\n",
    "!pip3 install numdifftools"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "Q9vtSr-B3RMx",
    "outputId": "10894b4a-cced-49c4-917b-73cc51a07d30"
   },
   "outputs": [],
   "source": [
    "import sys\n",
    "sys.path.append('./python')\n",
    "sys.path.append('./apps')\n",
    "from simple_ml import *"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Introduction to `needle`\n",
    "\n",
    "For an introduction to the `needle` framework, refer to Lecture 5 in class and [this Jupyter notebook](https://github.com/dlsys10714/notebooks/blob/main/5_automatic_differentiation_implementation.ipynb) from the lecture. For this homework, you will be implementing the basics of automatic differentiation using a `numpy` CPU backend (in later assignments, you will move to your own linear algebra library including GPU code). All code for this assignment will be written in Python.\n",
    "\n",
    "For the purposes of this assignment, there are two important files in the `needle` library: the `python/needle/autograd.py` file (which defines the basics of the computational graph framework, and will also form the basis of the automatic differentiation framework), and the `python/needle/ops/ops_mathematic.py` file (which contains implementations of various operators that you will implement and use throughout the assignment and the course).\n",
    "\n",
    "Although the basic framework for automatic differentiation is already set up in the `autograd.py` file, you should familiarize yourself with the basic concepts of the library as they relate to a few different defined classes.  Note that we would **not** recommend attempting to read through the entire code base before starting your implementations (some of the functionality will likely make more sense after you have implemented something), but you should have a basic sense of how the pieces fit together.  Specifically, you should get familiar with the basic concepts behind the following classes:\n",
    "- `Value`: A value computed in a computation graph, i.e., either the output of some operation applied to other `Value` objects, or a constant (leaf) `Value` object.  We use a generic class here (which we then specialize to, e.g., tensors) in order to allow for other data structures in later versions of needle, but for now you will interact with this class mostly through its subclass `Tensor` (see below).\n",
    "- `Op`: An operator in a compute graph.  Operators need to define their \"forward\" pass in the `compute()` method (i.e., how to compute the operator on the underlying data of the `Value` objects), as well as their \"backward\" pass via the `gradient()` method, which defines how to multiply by incoming output gradients.  The details of writing such operators will be given below.\n",
    "- `Tensor`: This is a subclass of `Value` that corresponds to an actual tensor output, i.e., a multi-dimensional array within a computation graph.  All of your code for this assignment (and most of the following ones) will use this subclass of `Value` rather than the generic class above.  We have provided several convenience functions (e.g., operator overloading) that let you operate on tensors using normal Python conventions, but these will not work properly until you implement the corresponding operations.\n",
    "- `TensorOp`: This is a subclass of `Op` for operators that return tensors.  All the operations you implement for this assignment will be of this type.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "18KCMTWT3zVY"
   },
   "source": [
    "## Question 1: Implementing forward computation [10 pts]\n",
    "\n",
    "\n",
    "First, you will implement the forward computation for new operators.  To see how this works, consider the `EWiseAdd` operator in the `ops/ops_mathematic.py` file:\n",
    "\n",
    "```python\n",
    "class EWiseAdd(TensorOp):\n",
    "    def compute(self, a: NDArray, b: NDArray):\n",
    "        return a + b\n",
    "\n",
    "    def gradient(self, out_grad: Tensor, node: Tensor):\n",
    "        return out_grad, out_grad\n",
    "\n",
    "def add(a, b):\n",
    "    return EWiseAdd()(a, b)\n",
    "```\n",
    "\n",
    "The conventions for implementations of this class are the following.  The `compute()` function computes the \"forward\" pass, i.e., it just computes the operation itself.  However, it is important to emphasize that the inputs to `compute()` are both `NDArray` objects (i.e., in this initial implementation, they are just `numpy.ndarray` objects, though in a later assignment you will implement your own `NDArray`).  That is, `compute()` computes the forward pass on the _raw data objects_ themselves, not on `Tensor` objects within the automatic differentiation machinery.\n",
    "\n",
    "We will discuss the `gradient()` call in the next section, but it is important to emphasize here that this call is different from forward in that it takes `Tensor` arguments.  This means that any call you make within this function _should_ be done via `TensorOp` operations themselves (so that you can take gradients of gradients).\n",
    "\n",
    "Finally, note that we also define a helper `add()` function, to avoid the need to call `EWiseAdd()(a, b)` (which is a bit cumbersome) to add two `Tensor` objects.  These functions are all written for you, and should be self-explanatory.\n",
    "\n",
    "\n",
    "For this question, you will need to implement the `compute` call for each of the following classes.  These calls are very straightforward, and should essentially be one line that calls the relevant numpy function.  Note that because in later homeworks you will use a backend other than numpy, we have imported numpy as `import numpy as array_api`, so you'll need to call `array_api.add()`, etc., in place of the typical `np.X()` calls.\n",
    "\n",
    "- `PowerScalar`: raise input to an integer (scalar) power\n",
    "- `EWiseDiv`: true division of the inputs, element-wise (2 inputs)\n",
    "- `DivScalar`: true division of the input by a scalar, element-wise (1 input, `scalar` - number)\n",
    "- `MatMul`: matrix multiplication of the inputs (2 inputs)\n",
    "- `Summation`: sum of array elements over given axes (1 input, `axes` - tuple)\n",
    "- `BroadcastTo`: broadcast an array to a new shape (1 input, `shape` - tuple)\n",
    "- `Reshape`: gives a new shape to an array without changing its data (1 input, `shape` - tuple)\n",
    "- `Negate`: numerical negative, element-wise (1 input)\n",
    "- `Transpose`: reverses the order of two axes (axis1, axis2), defaults to the last two axes (1 input, `axes` - tuple)"
   ]
  },
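  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough illustration (not the required implementation), these forward passes typically reduce to single numpy calls on the raw data, e.g.:\n",
    "\n",
    "```python\n",
    "import numpy as array_api  # needle's numpy backend alias\n",
    "\n",
    "a = array_api.arange(6).reshape(2, 3).astype(\"float32\")\n",
    "\n",
    "p = array_api.power(a, 2)                 # PowerScalar-style forward\n",
    "s = array_api.sum(a, axis=(1,))           # Summation-style forward\n",
    "b = array_api.broadcast_to(a, (4, 2, 3))  # BroadcastTo-style forward\n",
    "t = array_api.swapaxes(a, -1, -2)         # swaps the last two axes\n",
    "```\n",
    "\n",
    "The exact handling of default arguments (e.g., `axes=None`) is up to your implementation."
   ]
  },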
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[1m============================= test session starts ==============================\u001b[0m\n",
      "platform linux -- Python 3.11.5, pytest-7.4.4, pluggy-1.0.0 -- /root/miniconda3/bin/python3\n",
      "cachedir: .pytest_cache\n",
      "rootdir: /mnt/d/project/10-714/hw1\n",
      "plugins: anyio-4.2.0\n",
      "collected 24 items / 14 deselected / 10 selected                               \u001b[0m\u001b[1m\n",
      "\n",
      "tests/hw1/test_autograd_hw.py::test_power_scalar_forward \u001b[32mPASSED\u001b[0m\u001b[32m          [ 10%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_ewisepow_forward \u001b[32mPASSED\u001b[0m\u001b[32m              [ 20%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_divide_forward \u001b[32mPASSED\u001b[0m\u001b[32m                [ 30%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_divide_scalar_forward \u001b[32mPASSED\u001b[0m\u001b[32m         [ 40%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_matmul_forward \u001b[32mPASSED\u001b[0m\u001b[32m                [ 50%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_summation_forward \u001b[32mPASSED\u001b[0m\u001b[32m             [ 60%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_broadcast_to_forward \u001b[32mPASSED\u001b[0m\u001b[32m          [ 70%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_reshape_forward \u001b[32mPASSED\u001b[0m\u001b[32m               [ 80%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_negate_forward \u001b[32mPASSED\u001b[0m\u001b[32m                [ 90%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_transpose_forward \u001b[32mPASSED\u001b[0m\u001b[32m             [100%]\u001b[0m\n",
      "\n",
      "\u001b[32m====================== \u001b[32m\u001b[1m10 passed\u001b[0m, \u001b[33m14 deselected\u001b[0m\u001b[32m in 0.82s\u001b[0m\u001b[32m =======================\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "!python3 -m pytest -v -k \"forward\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!python3 -m mugrade submit 'YOUR_GRADER_KEY_HERE' -k \"forward\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Question 2: Implementing backward computation [25 pts]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that you have implemented the forward computation for the operators in our computation graph, the next step in implementing automatic differentiation is to compute the backward pass, i.e., to multiply the relevant derivatives of each function by the incoming backward gradients.\n",
    "\n",
    "The easiest way to perform these computations is, again, by taking \"fake\" partial derivatives (assuming everything is a scalar) and then matching sizes: the tests we provide will automatically check your solutions against numerical derivatives to ensure they are correct.\n",
    "\n",
    "The general goal of reverse mode autodifferentiation is to compute the gradient of some downstream function $\\ell$ of $f(x,y)$ with respect to $x$ (or $y$).  Written formally, we could write this as trying to compute\n",
    "\\begin{equation}\n",
    "\\frac{\\partial \\ell}{\\partial x} = \\frac{\\partial \\ell}{\\partial f(x,y)} \\frac{\\partial f(x,y)}{\\partial x}.\n",
    "\\end{equation}\n",
    "The \"incoming backward gradient\" is precisely the term $\\frac{\\partial \\ell}{\\partial f(x,y)}$, so we want our `gradient()` function to ultimately compute the _product_ of this backward gradient and the function's own derivative $\\frac{\\partial f(x,y)}{\\partial x}$."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see how this works a bit more concretely, consider the elementwise addition function we presented above\n",
    "\\begin{equation}\n",
    "f(x,y) = x + y.\n",
    "\\end{equation}\n",
    "Let's suppose that in this setting $x,y\\in \\mathbb{R}^n$, so that $f(x,y) \\in \\mathbb{R}^n$ as well.  Then via simple differentiation\n",
    "\\begin{equation}\n",
    "\\frac{\\partial f(x,y)}{\\partial x} = 1\n",
    "\\end{equation}\n",
    "so that \n",
    "\\begin{equation}\n",
    "\\frac{\\partial \\ell}{\\partial x} = \\frac{\\partial \\ell}{\\partial f(x,y)} \\frac{\\partial f(x,y)}{\\partial x} = \\frac{\\partial \\ell}{\\partial f(x,y)}\n",
    "\\end{equation}\n",
    "i.e., the product of the incoming backward gradient and the function's derivative with respect to its first argument $x$ is exactly the incoming backward gradient itself.  The same is true of the gradient with respect to the second argument $y$.  This is precisely what is captured by the following method of the `EWiseAdd` operator.\n",
    "```python\n",
    "    def gradient(self, out_grad: Tensor, node: Tensor):\n",
    "        return out_grad, out_grad\n",
    "```\n",
    "i.e., the function just returns the incoming backward gradient for each argument (which here actually _is_ the product between the incoming backward gradient and the derivative with respect to each argument of the function).  And because the size of $f(x,y)$ is the same as the size of both $x$ and $y$, we don't even need to worry about dimensions here."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now consider another example, the (elementwise) multiplication function\n",
    "\\begin{equation}\n",
    "f(x,y) = x \\circ y\n",
    "\\end{equation}\n",
    "where $\\circ$ denotes elementwise multiplication between $x$ and $y$.  The partial of this function is given by\n",
    "\\begin{equation}\n",
    "\\frac{\\partial f(x,y)}{\\partial x} = y\n",
    "\\end{equation}\n",
    "and similarly \n",
    "\\begin{equation}\n",
    "\\frac{\\partial f(x,y)}{\\partial y} = x\n",
    "\\end{equation}\n",
    "\n",
    "Thus, the product with the incoming gradient is\n",
    "\\begin{equation}\n",
    "\\frac{\\partial \\ell}{\\partial x} = \\frac{\\partial \\ell}{\\partial f(x,y)} \\frac{\\partial f(x,y)}{\\partial x} = \\frac{\\partial \\ell}{\\partial f(x,y)} \\cdot y\n",
    "\\end{equation}\n",
    "If $x,y \\in \\mathbb{R}^n$ as in the previous example, then $f(x,y) \\in \\mathbb{R}^n$ as well, so the first element returned by the gradient function would just be the elementwise multiplication\n",
    "\\begin{equation}\n",
    "\\frac{\\partial \\ell}{\\partial f(x,y)} \\circ y\n",
    "\\end{equation}\n",
    "\n",
    "This is captured in the `gradient()` call of the `EWiseMul` class.\n",
    "```python\n",
    "class EWiseMul(TensorOp):\n",
    "    def compute(self, a: NDArray, b: NDArray):\n",
    "        return a * b\n",
    "\n",
    "    def gradient(self, out_grad: Tensor, node: Tensor):\n",
    "        lhs, rhs = node.inputs\n",
    "        return out_grad * rhs, out_grad * lhs\n",
    "```\n"
   ]
  },
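  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick numerical sanity check of this rule (in plain numpy, outside needle): for $\\ell = \\mathrm{sum}(x \\circ y)$ the incoming gradient is all ones, so the rule predicts $\\frac{\\partial \\ell}{\\partial x} = y$, which matches a central-difference estimate:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "x = np.array([1.0, 2.0, 3.0])\n",
    "y = np.array([4.0, -1.0, 0.5])\n",
    "out_grad = np.ones(3)        # dl/df for l = sum(f), f = x * y\n",
    "\n",
    "grad_x = out_grad * y        # the EWiseMul gradient rule w.r.t. x\n",
    "\n",
    "# central-difference check of dl/dx[0]\n",
    "eps = 1e-6\n",
    "e0 = np.array([1.0, 0.0, 0.0])\n",
    "numeric = (np.sum((x + eps*e0) * y) - np.sum((x - eps*e0) * y)) / (2*eps)\n",
    "assert abs(numeric - grad_x[0]) < 1e-4\n",
    "```"
   ]
  },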
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Implementing backward passes\n",
    "\n",
    "Note that, unlike the forward pass functions, the arguments to the `gradient` function are `needle` objects. It is important to implement the backward passes using only `needle` operations (i.e., those defined in `python/needle/ops/ops_mathematic.py`), rather than `numpy` operations on the underlying `numpy` data, so that we can construct the gradients themselves via a computation graph (one exception is the `ReLU` operation defined below, where you can directly access the data within the `Tensor` without risk, because the gradient of ReLU is itself non-differentiable, but this is a special case).\n",
    "\n",
    "\n",
    "To complete this question, fill in the `gradient` function of the following classes:\n",
    "\n",
    "- `PowerScalar`\n",
    "- `EWiseDiv`\n",
    "- `DivScalar`\n",
    "- `MatMul`\n",
    "- `Summation`\n",
    "- `BroadcastTo`\n",
    "- `Reshape`\n",
    "- `Negate`\n",
    "- `Transpose`\n",
    "\n",
    "All of the `gradient` functions can be computed using just the operations defined in `python/needle/ops/ops_mathematic.py`, so there is no need to define any additional forward functions.\n",
    "\n",
    "**Hint:** while gradients of multiplication, division, etc., may be relatively intuitive to compute, it can seem a bit less intuitive to compute the backward passes of operations like `BroadcastTo` or `Summation`.  To get a handle on these, you can check gradients numerically and print out their actual values if you don't know where to start (see `tests/test_autograd_hw.py`, specifically the `gradient_check()` function within that file, to get a sense of how to do this).  And remember that the size of `out_grad` will always be the size of the _output_ of the operation, whereas the sizes of the `Tensor` objects _returned_ by `gradient()` always have to match the original _inputs_ to the operator.\n",
    "\n",
    "\n",
    "### Checking backward passes\n",
    "To reiterate the above, remember that we can check that these backward passes are correct by doing numerical gradient checking as covered in lecture:\n",
    "\\begin{equation}\n",
    "\\delta^T \\nabla_\\theta f(\\theta) = \\frac{f(\\theta + \\epsilon \\delta) - f(\\theta - \\epsilon \\delta)}{2 \\epsilon} + o(\\epsilon^2)\n",
    "\\end{equation}\n",
    "We provide the function `gradient_check` for doing this numerical checking in `tests/test_autograd_hw.py`."
   ]
  },
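  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two sides of this check can be sketched in plain numpy as follows (a simplified version of what `gradient_check` does, shown for $f(\\theta) = \\mathrm{sum}(\\theta^2)$, whose gradient is $2\\theta$):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def numerical_directional_derivative(f, theta, delta, eps=1e-5):\n",
    "    \"\"\"Central-difference estimate of delta^T grad f(theta).\"\"\"\n",
    "    return (f(theta + eps * delta) - f(theta - eps * delta)) / (2 * eps)\n",
    "\n",
    "f = lambda t: np.sum(t ** 2)\n",
    "theta = np.array([1.0, -2.0, 3.0])\n",
    "delta = np.array([0.5, 0.1, -0.3])\n",
    "\n",
    "analytic = delta @ (2 * theta)   # delta^T grad f(theta)\n",
    "numeric = numerical_directional_derivative(f, theta, delta)\n",
    "assert abs(analytic - numeric) < 1e-4\n",
    "```"
   ]
  },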
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "etSSMzD0fee0",
    "outputId": "2e8d28f1-9dee-4ad0-d9c3-9d9bdb1c30ea",
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[1m============================= test session starts ==============================\u001b[0m\n",
      "platform linux -- Python 3.11.5, pytest-7.4.4, pluggy-1.0.0 -- /root/miniconda3/bin/python3\n",
      "cachedir: .pytest_cache\n",
      "rootdir: /mnt/d/project/10-714/hw1\n",
      "plugins: anyio-4.2.0\n",
      "collected 24 items / 14 deselected / 10 selected                               \u001b[0m\u001b[1m\n",
      "\n",
      "tests/hw1/test_autograd_hw.py::test_power_scalar_backward \u001b[32mPASSED\u001b[0m\u001b[32m         [ 10%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_divide_backward \u001b[32mPASSED\u001b[0m\u001b[32m               [ 20%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_divide_scalar_backward \u001b[32mPASSED\u001b[0m\u001b[32m        [ 30%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_matmul_simple_backward \u001b[32mPASSED\u001b[0m\u001b[32m        [ 40%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_matmul_batched_backward \u001b[32mPASSED\u001b[0m\u001b[32m       [ 50%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_reshape_backward \u001b[32mPASSED\u001b[0m\u001b[32m              [ 60%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_negate_backward \u001b[32mPASSED\u001b[0m\u001b[32m               [ 70%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_transpose_backward \u001b[32mPASSED\u001b[0m\u001b[32m            [ 80%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_broadcast_to_backward \u001b[32mPASSED\u001b[0m\u001b[32m         [ 90%]\u001b[0m\n",
      "tests/hw1/test_autograd_hw.py::test_summation_backward \u001b[32mPASSED\u001b[0m\u001b[32m            [100%]\u001b[0m\n",
      "\n",
      "\u001b[32m====================== \u001b[32m\u001b[1m10 passed\u001b[0m, \u001b[33m14 deselected\u001b[0m\u001b[32m in 0.93s\u001b[0m\u001b[32m =======================\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "!python3 -m pytest -l -v -k \"backward\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!python3 -m mugrade submit 'YOUR_GRADER_KEY_HERE' -k \"backward\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "SQMdWR1x7Euc"
   },
   "source": [
    "## Question 3: Topological sort [20 pts]\n",
    "\n",
    "Now your system is capable of performing operations on tensors that build up a computation graph. Next you will write one of the main utilities needed for automatic differentiation: the [topological sort](https://en.wikipedia.org/wiki/Topological_sorting). This will allow us to traverse the computation graph (forward or backward), computing gradients along the way. Furthermore, the previously built components will allow the operations we perform during this reverse topological traversal to further add to our computation graph (as discussed in lecture), and will therefore give us higher-order differentiation \"for free.\"\n",
    "\n",
    "Fill out the `find_topo_sort` method and the `topo_sort_dfs` helper method (in `python/needle/autograd.py`) to perform this topological sorting. \n",
    "\n",
    "#### Hints: \n",
    "- Ensure that you do a post-order depth-first search, otherwise the test cases will fail. \n",
    "- The `topo_sort_dfs` method is not required, but we find it useful to use this as a recursive helper function. \n",
    "- The \"Reverse mode AD by extending computational graph\" section of the Lecture 4 slides walks through an example of the proper node ordering. \n",
    "- We will traverse this ordering backwards in later parts of this homework, but `find_topo_sort` should return the node ordering in the forward direction. "
   ]
  },
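  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "On a generic DAG (with a plain dict standing in for needle's `node.inputs`), the post-order DFS described in the hints can be sketched as:\n",
    "\n",
    "```python\n",
    "def topo_sort_sketch(node_list, inputs_of):\n",
    "    \"\"\"Return nodes in forward topological order via post-order DFS.\"\"\"\n",
    "    visited = set()\n",
    "    topo_order = []\n",
    "\n",
    "    def dfs(node):\n",
    "        if node in visited:\n",
    "            return\n",
    "        visited.add(node)\n",
    "        for parent in inputs_of.get(node, []):\n",
    "            dfs(parent)          # visit all inputs first...\n",
    "        topo_order.append(node)  # ...then emit the node (post-order)\n",
    "\n",
    "    for node in node_list:\n",
    "        dfs(node)\n",
    "    return topo_order\n",
    "\n",
    "# graph for a + a*b built from leaves a and b\n",
    "deps = {\"mul\": [\"a\", \"b\"], \"add\": [\"a\", \"mul\"]}\n",
    "order = topo_sort_sketch([\"add\"], deps)  # every node after its inputs\n",
    "```"
   ]
  },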
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "pQqntvth8GVa",
    "outputId": "c8bcdac7-c8e3-47ca-f450-fac21d29bfab"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[1m============================= test session starts ==============================\u001b[0m\n",
      "platform linux -- Python 3.11.5, pytest-7.4.4, pluggy-1.0.0\n",
      "rootdir: /mnt/d/project/10-714/hw1\n",
      "plugins: anyio-4.2.0\n",
      "collected 24 items / 23 deselected / 1 selected                                \u001b[0m\u001b[1m\n",
      "\n",
      "tests/hw1/test_autograd_hw.py \u001b[32m.\u001b[0m\u001b[32m                                          [100%]\u001b[0m\n",
      "\n",
      "\u001b[32m======================= \u001b[32m\u001b[1m1 passed\u001b[0m, \u001b[33m23 deselected\u001b[0m\u001b[32m in 2.58s\u001b[0m\u001b[32m =======================\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "!python3 -m pytest -k \"topo_sort\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "rPouXyxCfee1",
    "outputId": "803faa1b-a480-4501-8149-b32ff4d1b90b"
   },
   "outputs": [],
   "source": [
    "!python3 -m mugrade submit 'YOUR_GRADER_KEY_HERE' -k \"topo_sort\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "OTDpkG098JiR"
   },
   "source": [
    "## Question 4: Implementing reverse mode differentiation [25 pts]\n",
    "\n",
    "Once you have correctly implemented the topological sort, you will next leverage it to implement reverse mode automatic differentiation. As a recap from lecture, we will need to traverse the computational graph in reverse topological order, constructing the new adjoint nodes as we go. For this question, implement the reverse AD algorithm in the `compute_gradient_of_variables` function in `python/needle/autograd.py`. This will enable use of the `backward` function, which computes the gradient and stores it in the `grad` field of each input `Tensor`. With this completed, our reverse mode automatic differentiation engine is functional. We can check the correctness of our implementation in much the same way that we numerically checked the individual backward gradients, by comparing the numerical gradient to the computed one using the function `gradient_check` in `tests/test_autograd_hw.py`."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As discussed in lecture, the result of reverse mode AD is itself a computational graph. We can extend that graph further by composing more operations and running reverse mode AD again on the gradient (as done in the last two tests of this problem). "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "4L37C7ez8fkM",
    "outputId": "f2846195-0754-465c-97d7-c36a7ade8cea"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[1m============================= test session starts ==============================\u001b[0m\n",
      "platform linux -- Python 3.11.5, pytest-7.4.4, pluggy-1.0.0\n",
      "rootdir: /mnt/d/project/10-714/hw1\n",
      "plugins: anyio-4.2.0\n",
      "collected 24 items / 23 deselected / 1 selected                                \u001b[0m\u001b[1m\n",
      "\n",
      "tests/hw1/test_autograd_hw.py \u001b[32m.\u001b[0m\u001b[32m                                          [100%]\u001b[0m\n",
      "\n",
      "\u001b[32m======================= \u001b[32m\u001b[1m1 passed\u001b[0m, \u001b[33m23 deselected\u001b[0m\u001b[32m in 0.79s\u001b[0m\u001b[32m =======================\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "!python3 -m pytest -k \"compute_gradient\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "nwjSL3n6fee2",
    "outputId": "8ff6a756-ac4d-4255-bef0-181c7ebbb594"
   },
   "outputs": [],
   "source": [
    "!python3 -m mugrade submit 'YOUR_GRADER_KEY_HERE' -k \"compute_gradient\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "qgMkRkJE8i9D"
   },
   "source": [
    "## Question 5: Softmax loss [10 pts]\n",
    "\n",
    "The following questions will be tested using the MNIST dataset, so we will use the `parse_mnist` function we wrote in the Homework 0. \n",
    "\n",
    "1. First, copy and paste your solution to Question 2 of Homework 0 to the `parse_mnist` function in the `apps/simple_ml.py` file.  \n",
    "\n",
    "In this question, you will implement the softmax loss in the `softmax_loss()` function in `apps/simple_ml.py`, which we defined in Question 3 of Homework 0, except that this time the softmax loss takes as input a `Tensor` of logits and a `Tensor` of one-hot encodings of the true labels. As a reminder, for a multi-class output that can take on values $y \\in \\{1,\\ldots,k\\}$, the softmax loss takes as input a vector of logits $z \\in \\mathbb{R}^k$ and the true class $y \\in \\{1,\\ldots,k\\}$ (encoded for this function as a one-hot vector), and returns a loss defined by\n",
    "\\begin{equation}\n",
    "\\ell_{\\mathrm{softmax}}(z, y) = \\log\\sum_{i=1}^k \\exp z_i - z_y.\n",
    "\\end{equation}\n",
    "\n",
    "You will first need to implement the forward and backward passes of one additional operator: ``log``. \n",
    "\n",
    "2. Fill out the `compute()` function in the `Log` and `Exp` operator in `python/needle/ops/ops_mathematic.py`.\n",
    "3. Fill out the `gradient()` function in the `Log` and `Exp` operator in `python/needle/ops/ops_mathematic.py`. \n",
    " \n",
    "Once those operators have been implemented, \n",
    "\n",
    "4. Implement the function `softmax_loss` in `apps/simple_ml.py`. \n",
    "\n",
    "You can start with your solution from Homework 0, and then modify it to be compatible with `needle` objects and operations. As with the previous homework, the function you implement should compute the _average_ softmax loss over a batch of size $m$, i.e. logits `Z` will be an $m \\times k$ `Tensor` where each row represents one example, and `y_one_hot` will be an $m \\times k$ `Tensor` that contains all zeros except for a 1 in the element corresponding to the true label for each row. Finally, note that the average softmax loss returned should also be a `Tensor`. "
   ]
  },
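  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In plain numpy, the average loss over a batch with one-hot labels works out to the following; your needle version expresses the same computation using `Tensor` operations so that it remains differentiable:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def softmax_loss_np(Z, y_one_hot):\n",
    "    \"\"\"Average softmax loss: mean over rows of log-sum-exp(z) - z_y.\"\"\"\n",
    "    log_sum_exp = np.log(np.sum(np.exp(Z), axis=1))\n",
    "    z_y = np.sum(Z * y_one_hot, axis=1)  # picks out z_y in each row\n",
    "    return np.mean(log_sum_exp - z_y)\n",
    "\n",
    "Z = np.array([[1.0, 2.0, 3.0], [2.0, 2.0, 2.0]])\n",
    "y_one_hot = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])\n",
    "loss = softmax_loss_np(Z, y_one_hot)\n",
    "```"
   ]
  },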
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "CpwK-c989OkI",
    "outputId": "0f145d00-6dce-46c9-d5f4-5d72e6d65419"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[1m============================= test session starts ==============================\u001b[0m\n",
      "platform linux -- Python 3.11.5, pytest-7.4.4, pluggy-1.0.0\n",
      "rootdir: /mnt/d/project/10-714/hw1\n",
      "plugins: anyio-4.2.0\n",
      "collected 24 items / 23 deselected / 1 selected                                \u001b[0m\u001b[1m\n",
      "\n",
      "tests/hw1/test_autograd_hw.py \u001b[32m.\u001b[0m\u001b[32m                                          [100%]\u001b[0m\n",
      "\n",
      "\u001b[32m======================= \u001b[32m\u001b[1m1 passed\u001b[0m, \u001b[33m23 deselected\u001b[0m\u001b[32m in 7.41s\u001b[0m\u001b[32m =======================\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "!python3 -m pytest -k \"softmax_loss_ndl\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "DTwK4X6bfee2",
    "outputId": "6e666bd5-7f75-461b-b42a-673512ba953d"
   },
   "outputs": [],
   "source": [
    "!python3 -m mugrade submit 'YOUR_GRADER_KEY_HERE' -k \"softmax_loss_ndl\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "oWGd-mr-9luT"
   },
   "source": [
    "## Question 6: SGD for a two-layer neural network [10 pts]\n",
    "\n",
    "As you did in Homework 0, you will now implement stochastic gradient descent (SGD) for a simple two-layer neural network as defined in Question 5 of Homework 0. \n",
    "\n",
    "Specifically, for input $x \\in \\mathbb{R}^n$, we'll consider a two-layer neural network (without bias terms) of the form\n",
    "\\begin{equation}\n",
    "z = W_2^T \\mathrm{ReLU}(W_1^T x)\n",
    "\\end{equation}\n",
    "where $W_1 \\in \\mathbb{R}^{n \\times d}$ and $W_2 \\in \\mathbb{R}^{d \\times k}$ represent the weights of the network (which has a $d$-dimensional hidden unit), and where $z \\in \\mathbb{R}^k$ represents the logits output by the network.  We again use the softmax / cross-entropy loss, meaning that we want to solve the optimization problem, overloading the notation to describe the batch form with matrix $X \\in \\mathbb{R}^{m \\times n}$: \n",
    "\\begin{equation}\n",
    "\\min_{W_1, W_2} \\;\\; \\ell_{\\mathrm{softmax}}(\\mathrm{ReLU}(X W_1) W_2, y).\n",
    "\\end{equation}\n",
    "\n",
    "\n",
    "First, you will need to implement the forward and backward passes of the `relu` operator. \n",
    "1. Begin by filling out the `compute` function of the `ReLU` operator in `python/needle/ops/ops_mathematic.py`.\n",
    "2. Then fill out the `gradient` function of the class `ReLU` in `python/needle/ops/ops_mathematic.py`.  **Note that in this one case it's acceptable to access the `.realize_cached_data()` call on the output tensor, since the ReLU function is not twice differentiable anyway**.\n",
    "\n",
    "Then, \n",
    "\n",
    "3. Fill out the `nn_epoch` method in the `apps/simple_ml.py` file. \n",
    "\n",
    "Again, you can use your Homework 0 solution for the `nn_epoch` function as a starting point. Note that unlike in Homework 0, the inputs `W1` and `W2` are `Tensor`s; the inputs `X` and `y`, however, are still numpy arrays. You should iterate over minibatches of the numpy arrays `X` and `y` as you did in Homework 0, cast each `X_batch` as a `Tensor`, and one-hot encode each `y_batch` before casting it as a `Tensor`.\n",
    "\n",
    "While last time we derived the backpropagation equations for this two-layer ReLU network directly, this time we will use our automatic differentiation engine to compute the gradients generically, by calling the `.backward()` method of the `Tensor` class. For each minibatch, after calling `.backward()`, you should compute the updated values for `W1` and `W2` in numpy, and then create new `Tensor`s for `W1` and `W2` from these numpy values. Your solution should return the final `W1` and `W2` `Tensor`s."
   ]
  },
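  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough sketch (not a reference solution), the `ReLU` operator might look like the following, assuming the `TensorOp` interface and the `array_api` / `Tensor` names used elsewhere in the needle skeleton:\n",
    "\n",
    "```python\n",
    "class ReLU(TensorOp):\n",
    "    def compute(self, a):\n",
    "        # Forward pass: elementwise max(a, 0) on the underlying array\n",
    "        return array_api.maximum(a, 0)\n",
    "\n",
    "    def gradient(self, out_grad, node):\n",
    "        # Accessing .realize_cached_data() is acceptable here, since\n",
    "        # ReLU is not twice differentiable anyway: the gradient is\n",
    "        # out_grad masked by where the input was positive\n",
    "        a = node.inputs[0].realize_cached_data()\n",
    "        return out_grad * Tensor(a > 0)\n",
    "```\n",
    "\n",
    "Depending on your `Tensor` constructor, you may need to cast the boolean mask `a > 0` to `float32` before multiplying."
   ]
  },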
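  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The minibatch loop described above can be sketched roughly as follows. This is only an outline under stated assumptions (in particular, that `softmax_loss` from the previous question and `ndl.relu` are available), not a reference solution:\n",
    "\n",
    "```python\n",
    "def nn_epoch(X, y, W1, W2, lr=0.1, batch=100):\n",
    "    num_classes = W2.shape[1]\n",
    "    for i in range(0, y.size, batch):\n",
    "        # Cast the minibatch of inputs as a Tensor\n",
    "        X_b = ndl.Tensor(X[i : i + batch])\n",
    "        # One-hot encode the minibatch of labels, then cast as a Tensor\n",
    "        y_b = y[i : i + batch]\n",
    "        I_y = np.zeros((y_b.size, num_classes), dtype=np.float32)\n",
    "        I_y[np.arange(y_b.size), y_b] = 1\n",
    "        # Forward pass; gradients come generically from autodiff\n",
    "        loss = softmax_loss(ndl.relu(X_b @ W1) @ W2, ndl.Tensor(I_y))\n",
    "        loss.backward()\n",
    "        # Update in numpy, then wrap fresh Tensors so the next iteration\n",
    "        # does not extend the old computational graph\n",
    "        W1 = ndl.Tensor(W1.numpy() - lr * W1.grad.numpy())\n",
    "        W2 = ndl.Tensor(W2.numpy() - lr * W2.grad.numpy())\n",
    "    return W1, W2\n",
    "```\n"
   ]
  },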
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "LHKC82ex9wFg",
    "outputId": "5a319557-624e-45be-e839-3e1d30efb6f5"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[1m============================= test session starts ==============================\u001b[0m\n",
      "platform linux -- Python 3.11.5, pytest-7.4.4, pluggy-1.0.0\n",
      "rootdir: /mnt/d/project/10-714/hw1\n",
      "plugins: anyio-4.2.0\n",
      "collected 24 items / 23 deselected / 1 selected                                \u001b[0m\u001b[1m\n",
      "\n",
      "tests/hw1/test_autograd_hw.py \u001b[31mF\u001b[0m\u001b[31m                                          [100%]\u001b[0m\n",
      "\n",
      "=================================== FAILURES ===================================\n",
      "\u001b[31m\u001b[1m______________________________ test_nn_epoch_ndl _______________________________\u001b[0m\n",
      "\n",
      "    \u001b[94mdef\u001b[39;49;00m \u001b[92mtest_nn_epoch_ndl\u001b[39;49;00m():\u001b[90m\u001b[39;49;00m\n",
      "        \u001b[90m# test forward/backward pass for relu\u001b[39;49;00m\u001b[90m\u001b[39;49;00m\n",
      "        np.testing.assert_allclose(\u001b[90m\u001b[39;49;00m\n",
      "            ndl.relu(\u001b[90m\u001b[39;49;00m\n",
      "                ndl.Tensor(\u001b[90m\u001b[39;49;00m\n",
      "                    [\u001b[90m\u001b[39;49;00m\n",
      "                        [-\u001b[94m46.9\u001b[39;49;00m, -\u001b[94m48.8\u001b[39;49;00m, -\u001b[94m45.45\u001b[39;49;00m, -\u001b[94m49.0\u001b[39;49;00m],\u001b[90m\u001b[39;49;00m\n",
      "                        [-\u001b[94m49.75\u001b[39;49;00m, -\u001b[94m48.75\u001b[39;49;00m, -\u001b[94m45.8\u001b[39;49;00m, -\u001b[94m49.25\u001b[39;49;00m],\u001b[90m\u001b[39;49;00m\n",
      "                        [-\u001b[94m45.65\u001b[39;49;00m, -\u001b[94m45.25\u001b[39;49;00m, -\u001b[94m49.3\u001b[39;49;00m, -\u001b[94m47.65\u001b[39;49;00m],\u001b[90m\u001b[39;49;00m\n",
      "                    ]\u001b[90m\u001b[39;49;00m\n",
      "                )\u001b[90m\u001b[39;49;00m\n",
      "            ).numpy(),\u001b[90m\u001b[39;49;00m\n",
      "            np.array([[\u001b[94m0.0\u001b[39;49;00m, \u001b[94m0.0\u001b[39;49;00m, \u001b[94m0.0\u001b[39;49;00m, \u001b[94m0.0\u001b[39;49;00m], [\u001b[94m0.0\u001b[39;49;00m, \u001b[94m0.0\u001b[39;49;00m, \u001b[94m0.0\u001b[39;49;00m, \u001b[94m0.0\u001b[39;49;00m], [\u001b[94m0.0\u001b[39;49;00m, \u001b[94m0.0\u001b[39;49;00m, \u001b[94m0.0\u001b[39;49;00m, \u001b[94m0.0\u001b[39;49;00m]]),\u001b[90m\u001b[39;49;00m\n",
      "        )\u001b[90m\u001b[39;49;00m\n",
      "        gradient_check(ndl.relu, ndl.Tensor(np.random.randn(\u001b[94m5\u001b[39;49;00m, \u001b[94m4\u001b[39;49;00m)))\u001b[90m\u001b[39;49;00m\n",
      "    \u001b[90m\u001b[39;49;00m\n",
      "        \u001b[90m# test nn gradients\u001b[39;49;00m\u001b[90m\u001b[39;49;00m\n",
      "        np.random.seed(\u001b[94m0\u001b[39;49;00m)\u001b[90m\u001b[39;49;00m\n",
      "        X = np.random.randn(\u001b[94m50\u001b[39;49;00m, \u001b[94m5\u001b[39;49;00m).astype(np.float32)\u001b[90m\u001b[39;49;00m\n",
      "        y = np.random.randint(\u001b[94m3\u001b[39;49;00m, size=(\u001b[94m50\u001b[39;49;00m,)).astype(np.uint8)\u001b[90m\u001b[39;49;00m\n",
      "        W1 = np.random.randn(\u001b[94m5\u001b[39;49;00m, \u001b[94m10\u001b[39;49;00m).astype(np.float32) / np.sqrt(\u001b[94m10\u001b[39;49;00m)\u001b[90m\u001b[39;49;00m\n",
      "        W2 = np.random.randn(\u001b[94m10\u001b[39;49;00m, \u001b[94m3\u001b[39;49;00m).astype(np.float32) / np.sqrt(\u001b[94m3\u001b[39;49;00m)\u001b[90m\u001b[39;49;00m\n",
      "        W1_0, W2_0 = W1.copy(), W2.copy()\u001b[90m\u001b[39;49;00m\n",
      "        W1 = ndl.Tensor(W1)\u001b[90m\u001b[39;49;00m\n",
      "        W2 = ndl.Tensor(W2)\u001b[90m\u001b[39;49;00m\n",
      "        X_ = ndl.Tensor(X)\u001b[90m\u001b[39;49;00m\n",
      "        y_one_hot = np.zeros((y.shape[\u001b[94m0\u001b[39;49;00m], \u001b[94m3\u001b[39;49;00m))\u001b[90m\u001b[39;49;00m\n",
      "        y_one_hot[np.arange(y.size), y] = \u001b[94m1\u001b[39;49;00m\u001b[90m\u001b[39;49;00m\n",
      "        y_ = ndl.Tensor(y_one_hot)\u001b[90m\u001b[39;49;00m\n",
      "        dW1 = nd.Gradient(\u001b[90m\u001b[39;49;00m\n",
      "            \u001b[94mlambda\u001b[39;49;00m W1_: softmax_loss(\u001b[90m\u001b[39;49;00m\n",
      "                ndl.relu(X_ @ ndl.Tensor(W1_).reshape((\u001b[94m5\u001b[39;49;00m, \u001b[94m10\u001b[39;49;00m))) @ W2, y_\u001b[90m\u001b[39;49;00m\n",
      "            ).numpy()\u001b[90m\u001b[39;49;00m\n",
      "        )(W1.numpy())\u001b[90m\u001b[39;49;00m\n",
      "        dW2 = nd.Gradient(\u001b[90m\u001b[39;49;00m\n",
      "            \u001b[94mlambda\u001b[39;49;00m W2_: softmax_loss(\u001b[90m\u001b[39;49;00m\n",
      "                ndl.relu(X_ @ W1) @ ndl.Tensor(W2_).reshape((\u001b[94m10\u001b[39;49;00m, \u001b[94m3\u001b[39;49;00m)), y_\u001b[90m\u001b[39;49;00m\n",
      "            ).numpy()\u001b[90m\u001b[39;49;00m\n",
      "        )(W2.numpy())\u001b[90m\u001b[39;49;00m\n",
      ">       W1, W2 = nn_epoch(X, y, W1, W2, lr=\u001b[94m1.0\u001b[39;49;00m, batch=\u001b[94m50\u001b[39;49;00m)\u001b[90m\u001b[39;49;00m\n",
      "\n",
      "W1         = needle.Tensor([[-0.29401088 -0.80908954  0.5216167   0.19344868 -0.3326539  -0.23873496\n",
      "   0.19706924  0.00148867 -0.2...79334  -0.04092041  0.28462127 -0.16306373  0.16456479  0.36083168\n",
      "  -0.45686197 -0.32181606 -0.31315047  0.48197863]])\n",
      "W1_0       = array([[-0.29401088, -0.80908954,  0.5216167 ,  0.19344868, -0.3326539 ,\n",
      "        -0.23873496,  0.19706924,  0.00148867....16306373,  0.16456479,\n",
      "         0.36083168, -0.45686197, -0.32181606, -0.31315047,  0.48197863]],\n",
      "      dtype=float32)\n",
      "W2         = needle.Tensor([[ 0.36733624  0.11524972 -0.0044524 ]\n",
      " [ 0.6786693   0.724955   -0.16219284]\n",
      " [-0.21184002  0.9547785  ...\n",
      " [-0.9004626  -0.16824295  0.46423113]\n",
      " [ 0.42482972 -0.11699399  0.16757399]\n",
      " [-0.4935292  -0.1882925  -0.54978293]])\n",
      "W2_0       = array([[ 0.36733624,  0.11524972, -0.0044524 ],\n",
      "       [ 0.6786693 ,  0.724955  , -0.16219284],\n",
      "       [-0.21184002,  ...23113],\n",
      "       [ 0.42482972, -0.11699399,  0.16757399],\n",
      "       [-0.4935292 , -0.1882925 , -0.54978293]], dtype=float32)\n",
      "X          = array([[ 1.7640524 ,  0.4001572 ,  0.978738  ,  2.2408931 ,  1.867558  ],\n",
      "       [-0.9772779 ,  0.95008844, -0.1513572...29779088, -0.30901298],\n",
      "       [-1.6760038 ,  1.1523316 ,  1.0796186 , -0.81336427, -1.4664243 ]],\n",
      "      dtype=float32)\n",
      "X_         = needle.Tensor([[ 1.7640524   0.4001572   0.978738    2.2408931   1.867558  ]\n",
      " [-0.9772779   0.95008844 -0.1513572  -0....43705 -0.3972718  -0.13288058 -0.29779088 -0.30901298]\n",
      " [-1.6760038   1.1523316   1.0796186  -0.81336427 -1.4664243 ]])\n",
      "dW1        = array([ 0.00353945, -0.01171727,  0.14492095,  0.16244466,  0.01229022,\n",
      "       -0.05016893, -0.02330435,  0.00969417, ...206181,  0.05918267,  0.08731565, -0.03563257,\n",
      "        0.0592333 , -0.01278306,  0.03428882, -0.02947122,  0.01330177])\n",
      "dW2        = array([ 0.01496775,  0.00531035, -0.0202781 ,  0.03245086, -0.01403989,\n",
      "       -0.01841097, -0.0447855 ,  0.17606012, ...276495,  0.01388334, -0.01664829,  0.04061081,\n",
      "       -0.03538637, -0.00522444, -0.00874686,  0.02942744, -0.02068058])\n",
      "y          = array([1, 0, 2, 0, 0, 1, 2, 1, 1, 1, 0, 0, 2, 0, 0, 2, 1, 2, 2, 0, 2, 2,\n",
      "       0, 2, 1, 0, 1, 2, 1, 1, 0, 1, 1, 1, 2, 0, 1, 1, 0, 1, 2, 0, 1, 2,\n",
      "       1, 2, 1, 2, 1, 2], dtype=uint8)\n",
      "y_         = needle.Tensor([[0. 1. 0.]\n",
      " [1. 0. 0.]\n",
      " [0. 0. 1.]\n",
      " [1. 0. 0.]\n",
      " [1. 0. 0.]\n",
      " [0. 1. 0.]\n",
      " [0. 0. 1.]\n",
      " [0. 1. 0.]\n",
      " [0. 1. ...0. 0. 1.]\n",
      " [1. 0. 0.]\n",
      " [0. 1. 0.]\n",
      " [0. 0. 1.]\n",
      " [0. 1. 0.]\n",
      " [0. 0. 1.]\n",
      " [0. 1. 0.]\n",
      " [0. 0. 1.]\n",
      " [0. 1. 0.]\n",
      " [0. 0. 1.]])\n",
      "y_one_hot  = array([[0., 1., 0.],\n",
      "       [1., 0., 0.],\n",
      "       [0., 0., 1.],\n",
      "       [1., 0., 0.],\n",
      "       [1., 0., 0.],\n",
      "       [0., 1...[0., 1., 0.],\n",
      "       [0., 0., 1.],\n",
      "       [0., 1., 0.],\n",
      "       [0., 0., 1.],\n",
      "       [0., 1., 0.],\n",
      "       [0., 0., 1.]])\n",
      "\n",
      "\u001b[1m\u001b[31mtests/hw1/test_autograd_hw.py\u001b[0m:994: \n",
      "_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \n",
      "\n",
      "X = array([[ 1.7640524 ,  0.4001572 ,  0.978738  ,  2.2408931 ,  1.867558  ],\n",
      "       [-0.9772779 ,  0.95008844, -0.1513572...29779088, -0.30901298],\n",
      "       [-1.6760038 ,  1.1523316 ,  1.0796186 , -0.81336427, -1.4664243 ]],\n",
      "      dtype=float32)\n",
      "y = array([1, 0, 2, 0, 0, 1, 2, 1, 1, 1, 0, 0, 2, 0, 0, 2, 1, 2, 2, 0, 2, 2,\n",
      "       0, 2, 1, 0, 1, 2, 1, 1, 0, 1, 1, 1, 2, 0, 1, 1, 0, 1, 2, 0, 1, 2,\n",
      "       1, 2, 1, 2, 1, 2], dtype=uint8)\n",
      "W1 = needle.Tensor([[-0.29401088 -0.80908954  0.5216167   0.19344868 -0.3326539  -0.23873496\n",
      "   0.19706924  0.00148867 -0.2...79334  -0.04092041  0.28462127 -0.16306373  0.16456479  0.36083168\n",
      "  -0.45686197 -0.32181606 -0.31315047  0.48197863]])\n",
      "W2 = needle.Tensor([[ 0.36733624  0.11524972 -0.0044524 ]\n",
      " [ 0.6786693   0.724955   -0.16219284]\n",
      " [-0.21184002  0.9547785  ...\n",
      " [-0.9004626  -0.16824295  0.46423113]\n",
      " [ 0.42482972 -0.11699399  0.16757399]\n",
      " [-0.4935292  -0.1882925  -0.54978293]])\n",
      "lr = 1.0, batch = 50\n",
      "\n",
      "    \u001b[94mdef\u001b[39;49;00m \u001b[92mnn_epoch\u001b[39;49;00m(X, y, W1, W2, lr=\u001b[94m0.1\u001b[39;49;00m, batch=\u001b[94m100\u001b[39;49;00m):\u001b[90m\u001b[39;49;00m\n",
      "    \u001b[90m    \u001b[39;49;00m\u001b[33m\"\"\"Run a single epoch of SGD for a two-layer neural network defined by the\u001b[39;49;00m\n",
      "    \u001b[33m    weights W1 and W2 (with no bias terms):\u001b[39;49;00m\n",
      "    \u001b[33m        logits = ReLU(X * W1) * W2\u001b[39;49;00m\n",
      "    \u001b[33m    The function should use the step size lr, and the specified batch size (and\u001b[39;49;00m\n",
      "    \u001b[33m    again, without randomizing the order of X).\u001b[39;49;00m\n",
      "    \u001b[33m\u001b[39;49;00m\n",
      "    \u001b[33m    Args:\u001b[39;49;00m\n",
      "    \u001b[33m        X (np.ndarray[np.float32]): 2D input array of size\u001b[39;49;00m\n",
      "    \u001b[33m            (num_examples x input_dim).\u001b[39;49;00m\n",
      "    \u001b[33m        y (np.ndarray[np.uint8]): 1D class label array of size (num_examples,)\u001b[39;49;00m\n",
      "    \u001b[33m        W1 (ndl.Tensor[np.float32]): 2D array of first layer weights, of shape\u001b[39;49;00m\n",
      "    \u001b[33m            (input_dim, hidden_dim)\u001b[39;49;00m\n",
      "    \u001b[33m        W2 (ndl.Tensor[np.float32]): 2D array of second layer weights, of shape\u001b[39;49;00m\n",
      "    \u001b[33m            (hidden_dim, num_classes)\u001b[39;49;00m\n",
      "    \u001b[33m        lr (float): step size (learning rate) for SGD\u001b[39;49;00m\n",
      "    \u001b[33m        batch (int): size of SGD mini-batch\u001b[39;49;00m\n",
      "    \u001b[33m\u001b[39;49;00m\n",
      "    \u001b[33m    Returns:\u001b[39;49;00m\n",
      "    \u001b[33m        Tuple: (W1, W2)\u001b[39;49;00m\n",
      "    \u001b[33m            W1: ndl.Tensor[np.float32]\u001b[39;49;00m\n",
      "    \u001b[33m            W2: ndl.Tensor[np.float32]\u001b[39;49;00m\n",
      "    \u001b[33m    \"\"\"\u001b[39;49;00m\u001b[90m\u001b[39;49;00m\n",
      "    \u001b[90m\u001b[39;49;00m\n",
      "        \u001b[90m### BEGIN YOUR SOLUTION\u001b[39;49;00m\u001b[90m\u001b[39;49;00m\n",
      "        iters = (y.size + batch - \u001b[94m1\u001b[39;49;00m) // batch\u001b[90m\u001b[39;49;00m\n",
      "        \u001b[94mfor\u001b[39;49;00m i \u001b[95min\u001b[39;49;00m \u001b[96mrange\u001b[39;49;00m(iters):\u001b[90m\u001b[39;49;00m\n",
      "            x = ndl.Tensor(X[i*batch:(i+\u001b[94m1\u001b[39;49;00m)*batch, :])\u001b[90m\u001b[39;49;00m\n",
      ">           Z = ndl.matmul(ndl.ReLU(ndl.matmul(x, W1)), W2)\u001b[90m\u001b[39;49;00m\n",
      "\u001b[1m\u001b[31mE           TypeError: ReLU() takes no arguments\u001b[0m\n",
      "\n",
      "W1         = needle.Tensor([[-0.29401088 -0.80908954  0.5216167   0.19344868 -0.3326539  -0.23873496\n",
      "   0.19706924  0.00148867 -0.2...79334  -0.04092041  0.28462127 -0.16306373  0.16456479  0.36083168\n",
      "  -0.45686197 -0.32181606 -0.31315047  0.48197863]])\n",
      "W2         = needle.Tensor([[ 0.36733624  0.11524972 -0.0044524 ]\n",
      " [ 0.6786693   0.724955   -0.16219284]\n",
      " [-0.21184002  0.9547785  ...\n",
      " [-0.9004626  -0.16824295  0.46423113]\n",
      " [ 0.42482972 -0.11699399  0.16757399]\n",
      " [-0.4935292  -0.1882925  -0.54978293]])\n",
      "X          = array([[ 1.7640524 ,  0.4001572 ,  0.978738  ,  2.2408931 ,  1.867558  ],\n",
      "       [-0.9772779 ,  0.95008844, -0.1513572...29779088, -0.30901298],\n",
      "       [-1.6760038 ,  1.1523316 ,  1.0796186 , -0.81336427, -1.4664243 ]],\n",
      "      dtype=float32)\n",
      "batch      = 50\n",
      "i          = 0\n",
      "iters      = 1\n",
      "lr         = 1.0\n",
      "x          = needle.Tensor([[ 1.7640524   0.4001572   0.978738    2.2408931   1.867558  ]\n",
      " [-0.9772779   0.95008844 -0.1513572  -0....43705 -0.3972718  -0.13288058 -0.29779088 -0.30901298]\n",
      " [-1.6760038   1.1523316   1.0796186  -0.81336427 -1.4664243 ]])\n",
      "y          = array([1, 0, 2, 0, 0, 1, 2, 1, 1, 1, 0, 0, 2, 0, 0, 2, 1, 2, 2, 0, 2, 2,\n",
      "       0, 2, 1, 0, 1, 2, 1, 1, 0, 1, 1, 1, 2, 0, 1, 1, 0, 1, 2, 0, 1, 2,\n",
      "       1, 2, 1, 2, 1, 2], dtype=uint8)\n",
      "\n",
      "\u001b[1m\u001b[31mapps/simple_ml.py\u001b[0m:102: TypeError\n",
      "\u001b[36m\u001b[1m=========================== short test summary info ============================\u001b[0m\n",
      "\u001b[31mFAILED\u001b[0m tests/hw1/test_autograd_hw.py::\u001b[1mtest_nn_epoch_ndl\u001b[0m - TypeError: ReLU() takes no arguments\n",
      "\u001b[31m======================= \u001b[31m\u001b[1m1 failed\u001b[0m, \u001b[33m23 deselected\u001b[0m\u001b[31m in 1.10s\u001b[0m\u001b[31m =======================\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "!python3 -m pytest -l -k \"nn_epoch_ndl\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [],
   "name": "hw1_combined.ipynb",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
