{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "gpuType": "T4",
      "authorship_tag": "ABX9TyNJXXrA4Zm7xHKUVEED+5za",
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/DanielWarfield1/MLWritingAndResearch/blob/main/NNInCUDA.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Making a Neural Network in CUDA\n",
        "based on this blog post\n",
        "\n",
        "https://web.archive.org/web/20231002022529/https://luniak.io/cuda-neural-network-implementation-part-1/\n",
        "\n",
        "with this code\n",
        "\n",
        "https://github.com/pwlnk/cuda-neural-network/tree/master/cuda-neural-network/src\n",
        "\n",
        "\n",
        "## Project Structure\n",
        "I'm developing this CUDA project the way I'd typically experiment in Python: defining small pieces of functionality, then playing around with the results.\n",
        "\n",
        "I do this by defining C++ headers (`.hh`) and CUDA source files (`.cu`). I also redefine `main.cu` at different points in the notebook and recompile it, which lets me experiment with individual code blocks.\n",
        "\n",
        "Most sections of functionality have four blocks:\n",
        " - A header file\n",
        " - A CUDA file\n",
        " - A test `main` file\n",
        " - A block for compiling\n",
        "\n",
        "Naturally, as functionality develops, later blocks include functionality defined in previous blocks.\n",
        "\n",
        "## An overview\n",
        "I'm writing an \"Intuitively and Exhaustively Explained\" article on this work, but as a quick overview:\n",
        "\n",
        "1. First, a few utilities get defined\n",
        "    1. **shape**, a simple datatype which holds an `x` and a `y` value\n",
        "    1. **nn_exception**. When an error happens in CUDA it doesn't necessarily halt the program on the CPU (which is the host). This is an abstraction which allows the main program to quickly check whether there has been an issue with CUDA.\n",
        "    1. **matrix**, which holds a 2D matrix (as a flattened list) and manages the communication of that matrix between the host and device. It also allows the matrix to be easily indexed by a single value, which is convenient for parallelizing computation.\n",
        "    1. **binary cross entropy**, which, based on predicted outputs and target outputs, calculates a loss (aka cost) value quantifying how bad the predictions are. It also calculates the derivative of cross entropy, which essentially describes how the output should change to be better.\n",
        "1. Then, a few layers get defined\n",
        "    1. **[nn_layer](https://colab.research.google.com/drive/1W-njrqroQhQcqplyVVCW-PctcP5a1JEr#scrollTo=f3URqewCcqKU)** abstracts all the layers into the same polymorphic class. This defines each layer as having a forward and a backward pass, where the forward pass is used to generate predictions and the backward pass is used to update the parameters of that particular layer.\n",
        "    2. **[linear_layer](https://colab.research.google.com/drive/1W-njrqroQhQcqplyVVCW-PctcP5a1JEr#scrollTo=LdFVowqTdDN6)**, a linear layer which multiplies an input matrix by a weight matrix and adds a bias matrix\n",
        "    3. **[sigmoid](https://colab.research.google.com/drive/1W-njrqroQhQcqplyVVCW-PctcP5a1JEr#scrollTo=7GY1XGnG8hQo)**, a sigmoid activation function\n",
        "    4. **[ReLU](https://colab.research.google.com/drive/1W-njrqroQhQcqplyVVCW-PctcP5a1JEr#scrollTo=LK73skCuD4jm)**, a ReLU activation function\n",
        "    \n",
        "\n",
        "## Design Patterns of Note\n",
        "I'm new to CUDA and to low-level ML in general. These were some of the things I found interesting."
      ],
      "metadata": {
        "id": "dMGBwrGKMwdG"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Structure of the project"
      ],
      "metadata": {
        "id": "RDooIUw6pOyq"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile someClass.hh\n",
        "\n",
        "// #pragma once ensures that, if this header gets included from multiple\n",
        "// files, it only actually gets included once per translation unit.\n",
        "#pragma once\n",
        "\n",
        "class ClassWithFunctionality {\n",
        "    // defining private things for internal use\n",
        "private:\n",
        "    // defining private data\n",
        "    int someValue;\n",
        "    int anotherValue;\n",
        "\n",
        "    // defining private functions\n",
        "    void privateFunction1();\n",
        "    void privateFunction2();\n",
        "\n",
        "    // defining things accessible outside the object\n",
        "public:\n",
        "    // defining public data\n",
        "    int somePublicValue;\n",
        "    int someOtherPublicValue;\n",
        "\n",
        "    // defining public functions\n",
        "    ClassWithFunctionality(int constructorInput);\n",
        "    void doSomething1();\n",
        "    void doSomething2();\n",
        "};"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ulWuVoIddY3W",
        "outputId": "98e14ffa-6fd0-4749-d9c8-b15027a58765"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing someClass.hh\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile someClass.cu\n",
        "\n",
        "#include \"someClass.hh\"\n",
        "\n",
        "// defining constructor\n",
        "ClassWithFunctionality::ClassWithFunctionality(int constructorInput)\n",
        "    : someValue(constructorInput), anotherValue(2), somePublicValue(3), someOtherPublicValue(4)\n",
        "{}\n",
        "\n",
        "void ClassWithFunctionality::doSomething1() {\n",
        "    return;\n",
        "}\n",
        "\n",
        "void ClassWithFunctionality::doSomething2() {\n",
        "    return;\n",
        "}\n",
        "\n",
        "void ClassWithFunctionality::privateFunction1() {\n",
        "    return;\n",
        "}\n",
        "\n",
        "void ClassWithFunctionality::privateFunction2() {\n",
        "    return;\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "tUTUab_jqgrL",
        "outputId": "da969034-448c-4cdc-977d-228a9a6b4ad3"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing someClass.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile main.cu\n",
        "#include <iostream>\n",
        "#include \"someClass.hh\"\n",
        "\n",
        "// testing ClassWithFunctionality\n",
        "int main(void) {\n",
        "    ClassWithFunctionality example(3);\n",
        "    std::cout << \"it works!\" << std::endl;\n",
        "    return 0;\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "XkbqOcf2qi1O",
        "outputId": "94be3fe2-26a6-4530-a7e2-a52901bd1242"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Overwriting main.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "!nvcc someClass.cu main.cu -o outputFile.out\n",
        "!./outputFile.out"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "muSiITCWqkY1",
        "outputId": "dc942861-939a-4572-bbd1-a3bcfbfc76ba"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "it works!\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Defining Helper Functions, Structures, and Classes"
      ],
      "metadata": {
        "id": "zyXe-42Jro6W"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "### Shape\n",
        "\n",
        "For storing the 2D shape of a matrix."
      ],
      "metadata": {
        "id": "t1t53QE4t-Kj"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile shape.hh\n",
        "#pragma once\n",
        "\n",
        "struct Shape {\n",
        "\tsize_t x, y;\n",
        "\n",
        "\tShape(size_t x = 1, size_t y = 1);\n",
        "};"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "NYvulJVi-O07",
        "outputId": "8ea18048-deec-4e84-afce-78276fe44edf"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing shape.hh\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile shape.cu\n",
        "#include \"shape.hh\"\n",
        "\n",
        "Shape::Shape(size_t x, size_t y) :\n",
        "\tx(x), y(y)\n",
        "{ }"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "hj8F7IlsSGPS",
        "outputId": "548ba1d1-833a-4748-919f-fa46233e1ae0"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing shape.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile main.cu\n",
        "#include \"shape.hh\"\n",
        "#include <iostream>\n",
        "#include <stdio.h>\n",
        "\n",
        "using namespace std;\n",
        "\n",
        "//testing\n",
        "int main( void ) {\n",
        "    Shape shape = Shape(100, 200);\n",
        "    cout << \"shape x: \" << shape.x << \", shape y: \" << shape.y << endl;\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "wiEiDSCrDAbo",
        "outputId": "1d2b0119-ae3c-429b-ec1f-eb59af3e465d"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Overwriting main.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "!nvcc shape.cu main.cu -o shape.out\n",
        "!./shape.out"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "p6hetvIAo4_I",
        "outputId": "12180419-61db-429a-9b6c-4c3481baa333"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "shape x: 100, shape y: 200\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "### nn exception\n",
        "\n",
        "This handles CUDA errors, providing an abstraction where the CPU can easily check if there's some error.\n",
        "\n",
        "Essentially, this code allows for a relatively brief injection of\n",
        "```\n",
        "try {\n",
        "    NNException::throwIfDeviceErrorsOccurred(\"Failed to allocate GPU memory\");\n",
        "} catch (const NNException& e) {\n",
        "    std::cerr << \"Caught NNException: \" << e.what() << std::endl;\n",
        "    return -1; // Return an error code\n",
        "}\n",
        "```\n",
        "within some CUDA code. This checks for the last CUDA error thrown and raises it if one occurred, which essentially allows the CPU to occasionally check whether the GPU threw an error."
      ],
      "metadata": {
        "id": "U-8e0cSkzryK"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile nn_exception.hh\n",
        "#pragma once\n",
        "\n",
        "#include <exception>\n",
        "#include <iostream>\n",
        "\n",
        "class NNException : public std::exception {\n",
        "private:\n",
        "\tconst char* exception_message;\n",
        "\n",
        "public:\n",
        "\tNNException(const char* exception_message) :\n",
        "\t\texception_message(exception_message)\n",
        "\t{ }\n",
        "\n",
        "\tvirtual const char* what() const throw()\n",
        "\t{\n",
        "\t\treturn exception_message;\n",
        "\t}\n",
        "\n",
        "\tstatic void throwIfDeviceErrorsOccurred(const char* exception_message) {\n",
        "\t\tcudaError_t error = cudaGetLastError();\n",
        "\t\tif (error != cudaSuccess) {\n",
        "\t\t\tstd::cerr << error << \": \" << exception_message;\n",
        "\t\t\tthrow NNException(exception_message);\n",
        "\t\t}\n",
        "\t}\n",
        "};"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "PhJ8F_42zrmN",
        "outputId": "d87af5dc-0ccd-4433-d2db-140561b8c149"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing nn_exception.hh\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile main.cu\n",
        "//With error handling\n",
        "\n",
        "#include \"nn_exception.hh\"\n",
        "#include <cuda_runtime.h>\n",
        "\n",
        "int main() {\n",
        "    // Allocate memory on the GPU\n",
        "    float* d_data;\n",
        "    cudaError_t error = cudaMalloc((void**)&d_data, 100 * sizeof(float));\n",
        "\n",
        "    // Check for CUDA errors and throw an exception if any\n",
        "    try {\n",
        "        NNException::throwIfDeviceErrorsOccurred(\"Failed to allocate GPU memory\");\n",
        "    } catch (const NNException& e) {\n",
        "        std::cerr << \"Caught NNException: \" << e.what() << std::endl;\n",
        "        return -1; // Return an error code\n",
        "    }\n",
        "\n",
        "    // Free the GPU memory\n",
        "    error = cudaFree(d_data);\n",
        "\n",
        "    // Check for CUDA errors again\n",
        "    try {\n",
        "        NNException::throwIfDeviceErrorsOccurred(\"Failed to free GPU memory\");\n",
        "    } catch (const NNException& e) {\n",
        "        std::cerr << \"Caught NNException: \" << e.what() << std::endl;\n",
        "        return -1; // Return an error code\n",
        "    }\n",
        "\n",
        "    std::cout << \"CUDA operations completed successfully\" << std::endl;\n",
        "    return 0; // Return success\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "s8AZb2ZVJT3F",
        "outputId": "c6f017f6-12dc-4f0e-d08e-836eeba34f90"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Overwriting main.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile main.cu\n",
        "//Without error handling\n",
        "\n",
        "#include <cuda_runtime.h>\n",
        "#include <iostream>\n",
        "\n",
        "using namespace std;\n",
        "\n",
        "int main() {\n",
        "    // Allocate memory on the GPU\n",
        "    float* d_data;\n",
        "    cudaError_t error = cudaMalloc((void**)&d_data, 100 * sizeof(float));\n",
        "\n",
        "    // Free the GPU memory\n",
        "    error = cudaFree(d_data);\n",
        "\n",
        "    cout << \"CUDA operations completed successfully\" << endl;\n",
        "    return 0; // Return success\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "9TlCdZb-K0Z1",
        "outputId": "09a2e9c2-662b-4bcf-a063-57f35b36295b"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Overwriting main.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "!nvcc main.cu shape.cu -o nnexception.out\n",
        "!./nnexception.out"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "NFQ8RnczJao4",
        "outputId": "2c144bae-eb43-4dd9-f2f2-403d855e222b"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CUDA operations completed successfully\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "### matrix\n",
        "\n",
        "This class abstracts some of the communication between the device and host, allowing a matrix of values to easily be passed between memory locations.\n",
        "\n",
        "It allows for:\n",
        " - memory to be allocated on the GPU for the matrix\n",
        " - memory to be allocated on the CPU for the matrix\n",
        " - memory to be allocated on both the CPU and GPU for the matrix\n",
        " - memory to be allocated only if it isn't allocated already\n",
        " - data to be copied from CPU RAM to GPU VRAM\n",
        " - data to be copied from GPU VRAM to CPU RAM\n",
        " - the matrix to be indexed like an array, via operator overrides"
      ],
      "metadata": {
        "id": "XeKsCaS4uKFi"
      }
    },
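    {
      "cell_type": "markdown",
      "source": [
        "Because the matrix is stored as a flattened list, a 2D coordinate has to be mapped onto a single flat index. A minimal sketch of that mapping (assuming row-major order; `flatIndex` is a hypothetical helper for illustration, not part of the `Matrix` class, which only exposes flat indexing):\n",
        "\n",
        "```cpp\n",
        "// hypothetical helper: map a 2D coordinate onto the flat buffer,\n",
        "// assuming row-major layout (elements of a row are adjacent in memory)\n",
        "size_t flatIndex(size_t row, size_t col, size_t width) {\n",
        "    return row * width + col;\n",
        "}\n",
        "// e.g. in a 10x10 matrix, element (2, 3) lives at flatIndex(2, 3, 10) == 23\n",
        "```\n",
        "\n",
        "This single-index view is what makes it easy to assign one CUDA thread per element."
      ],
      "metadata": {}
    },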
    {
      "cell_type": "code",
      "source": [
        "%%writefile matrix.hh\n",
        "#pragma once\n",
        "\n",
        "#include \"shape.hh\"\n",
        "\n",
        "#include <memory>\n",
        "\n",
        "class Matrix {\n",
        "private:\n",
        "\tbool device_allocated;\n",
        "\tbool host_allocated;\n",
        "\n",
        "\tvoid allocateCudaMemory();\n",
        "\tvoid allocateHostMemory();\n",
        "\n",
        "public:\n",
        "\tShape shape;\n",
        "\n",
        "\tstd::shared_ptr<float> data_device;\n",
        "\tstd::shared_ptr<float> data_host;\n",
        "\n",
        "\tMatrix(size_t x_dim = 1, size_t y_dim = 1);\n",
        "\tMatrix(Shape shape);\n",
        "\n",
        "\tvoid allocateMemory();\n",
        "\tvoid allocateMemoryIfNotAllocated(Shape shape);\n",
        "\n",
        "\tvoid copyHostToDevice();\n",
        "\tvoid copyDeviceToHost();\n",
        "\n",
        "\tfloat& operator[](const int index);\n",
        "\tconst float& operator[](const int index) const;\n",
        "};"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "9qPdPbsPzPJp",
        "outputId": "f0e990e0-2fb8-451a-8eec-f07a364b8f9b"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing matrix.hh\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile matrix.cu\n",
        "#include \"matrix.hh\"\n",
        "#include \"nn_exception.hh\"\n",
        "\n",
        "using namespace std;\n",
        "\n",
        "Matrix::Matrix(size_t x_dim, size_t y_dim) :\n",
        "\tshape(x_dim, y_dim), data_device(nullptr), data_host(nullptr),\n",
        "\tdevice_allocated(false), host_allocated(false)\n",
        "{ }\n",
        "\n",
        "Matrix::Matrix(Shape shape) :\n",
        "\tMatrix(shape.x, shape.y)\n",
        "{ }\n",
        "\n",
        "void Matrix::allocateCudaMemory() {\n",
        "\tif (!device_allocated) {\n",
        "\t\tfloat* device_memory = nullptr;\n",
        "\t\tcudaMalloc(&device_memory, shape.x * shape.y * sizeof(float));\n",
        "\t\tNNException::throwIfDeviceErrorsOccurred(\"Cannot allocate CUDA memory for Tensor3D.\");\n",
        "\t\tdata_device = std::shared_ptr<float>(device_memory,\n",
        "\t\t\t\t\t\t\t\t\t\t\t [&](float* ptr){ cudaFree(ptr); });\n",
        "\t\tdevice_allocated = true;\n",
        "\t}\n",
        "}\n",
        "\n",
        "void Matrix::allocateHostMemory() {\n",
        "\tif (!host_allocated) {\n",
        "\t\tdata_host = std::shared_ptr<float>(new float[shape.x * shape.y],\n",
        "\t\t\t\t\t\t\t\t\t\t   [&](float* ptr){ delete[] ptr; });\n",
        "\t\thost_allocated = true;\n",
        "\t}\n",
        "}\n",
        "\n",
        "void Matrix::allocateMemory() {\n",
        "\tallocateCudaMemory();\n",
        "\tallocateHostMemory();\n",
        "}\n",
        "\n",
        "void Matrix::allocateMemoryIfNotAllocated(Shape shape) {\n",
        "\tif (!device_allocated && !host_allocated) {\n",
        "\t\tthis->shape = shape;\n",
        "\t\tallocateMemory();\n",
        "\t}\n",
        "}\n",
        "\n",
        "void Matrix::copyHostToDevice() {\n",
        "\tif (device_allocated && host_allocated) {\n",
        "\t\tcudaMemcpy(data_device.get(), data_host.get(), shape.x * shape.y * sizeof(float), cudaMemcpyHostToDevice);\n",
        "\t\tNNException::throwIfDeviceErrorsOccurred(\"Cannot copy host data to CUDA device.\");\n",
        "\t}\n",
        "\telse {\n",
        "\t\tthrow NNException(\"Cannot copy host data to not allocated memory on device.\");\n",
        "\t}\n",
        "}\n",
        "\n",
        "void Matrix::copyDeviceToHost() {\n",
        "\tif (device_allocated && host_allocated) {\n",
        "\t\tcudaMemcpy(data_host.get(), data_device.get(), shape.x * shape.y * sizeof(float), cudaMemcpyDeviceToHost);\n",
        "\t\tNNException::throwIfDeviceErrorsOccurred(\"Cannot copy device data to host.\");\n",
        "\t}\n",
        "\telse {\n",
        "\t\tthrow NNException(\"Cannot copy device data to not allocated memory on host.\");\n",
        "\t}\n",
        "}\n",
        "\n",
        "float& Matrix::operator[](const int index) {\n",
        "\treturn data_host.get()[index];\n",
        "}\n",
        "\n",
        "const float& Matrix::operator[](const int index) const {\n",
        "\treturn data_host.get()[index];\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "NvevDP7KqPbH",
        "outputId": "bcf0f6d5-123a-4f82-e134-d6017703b40b"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing matrix.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile main.cu\n",
        "#include <iostream>\n",
        "#include \"matrix.hh\"\n",
        "#include \"nn_exception.hh\"\n",
        "\n",
        "int main() {\n",
        "    // Create a Matrix object with dimensions 10x10\n",
        "    Matrix matrix(10, 10);\n",
        "\n",
        "    // Allocate memory on both host and device\n",
        "    matrix.allocateMemory();\n",
        "    std::cout << \"Memory allocated on host and device.\" << std::endl;\n",
        "\n",
        "    // Initialize host data\n",
        "    for (size_t i = 0; i < 100; ++i) {\n",
        "        matrix[i] = static_cast<float>(i);\n",
        "    }\n",
        "    std::cout << \"Host data initialized.\" << std::endl;\n",
        "\n",
        "    // Copy data from host to device\n",
        "    matrix.copyHostToDevice();\n",
        "    std::cout << \"Data copied from host to device.\" << std::endl;\n",
        "\n",
        "    // Clear host data\n",
        "    for (size_t i = 0; i < 100; ++i) {\n",
        "        matrix[i] = 0.0f;\n",
        "    }\n",
        "    std::cout << \"Host data cleared.\" << std::endl;\n",
        "\n",
        "    // Copy data back from device to host\n",
        "    matrix.copyDeviceToHost();\n",
        "    std::cout << \"Data copied from device to host.\" << std::endl;\n",
        "\n",
        "    // Verify the data\n",
        "    bool success = true;\n",
        "    for (size_t i = 0; i < 100; ++i) {\n",
        "        if (matrix[i] != static_cast<float>(i)) {\n",
        "            success = false;\n",
        "            break;\n",
        "        }\n",
        "    }\n",
        "\n",
        "    if (success) {\n",
        "        std::cout << \"Test passed: Data verification successful.\" << std::endl;\n",
        "    } else {\n",
        "        std::cout << \"Test failed: Data verification unsuccessful.\" << std::endl;\n",
        "    }\n",
        "\n",
        "    return 0;\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "xgWD-AexRXos",
        "outputId": "945e1847-92f0-489b-992b-fdaaabc94d85"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Overwriting main.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "!nvcc main.cu matrix.cu shape.cu -o matrix.out\n",
        "!./matrix.out"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "vTuZEqwAwXwh",
        "outputId": "2e62c72e-af54-4b95-cbad-91d5f99313c3"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Memory allocated on host and device.\n",
            "Host data initialized.\n",
            "Data copied from host to device.\n",
            "Host data cleared.\n",
            "Data copied from device to host.\n",
            "Test passed: Data verification successful.\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "### Binary Cross Entropy Loss\n",
        "Calculates both the loss and its gradient."
      ],
      "metadata": {
        "id": "BIGJ1r0pT-2r"
      }
    },
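    {
      "cell_type": "markdown",
      "source": [
        "As a reference for what the kernels below compute (writing $\\hat{y}_i$ for predictions and $y_i$ for targets over $N$ samples), the binary cross entropy cost is\n",
        "\n",
        "$$L = -\\frac{1}{N}\\sum_{i=1}^{N}\\left[y_i \\log \\hat{y}_i + (1 - y_i)\\log(1 - \\hat{y}_i)\\right]$$\n",
        "\n",
        "and its per-element derivative, used for backpropagation, is\n",
        "\n",
        "$$\\frac{\\partial L}{\\partial \\hat{y}_i} = -\\left(\\frac{y_i}{\\hat{y}_i} - \\frac{1 - y_i}{1 - \\hat{y}_i}\\right)$$"
      ],
      "metadata": {}
    },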
    {
      "cell_type": "code",
      "source": [
        "%%writefile bce_cost.hh\n",
        "#pragma once\n",
        "#include \"matrix.hh\"\n",
        "\n",
        "class BCECost {\n",
        "public:\n",
        "\tfloat cost(Matrix predictions, Matrix target);\n",
        "\tMatrix dCost(Matrix predictions, Matrix target, Matrix dY);\n",
        "};"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Q8VkHQSSwaf4",
        "outputId": "af7c301c-1a0d-45f0-eb44-5e60adbe4279"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing bce_cost.hh\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile bce_cost.cu\n",
        "#include \"bce_cost.hh\"\n",
        "#include \"nn_exception.hh\"\n",
        "\n",
        "#include <math.h>\n",
        "#include <iostream>\n",
        "#include <assert.h>\n",
        "\n",
        "__global__ void binaryCrossEntropyCost(float* predictions, float* target,\n",
        "                                       int size, float* cost) {\n",
        "    int index = blockIdx.x * blockDim.x + threadIdx.x;\n",
        "\n",
        "    if (index < size) {\n",
        "        // Clamp predictions to avoid log(0)\n",
        "        float pred = predictions[index];\n",
        "        pred = fmaxf(fminf(pred, 1.0f - 1e-7f), 1e-7f);\n",
        "\n",
        "        float partial_cost = target[index] * logf(pred)\n",
        "                + (1.0f - target[index]) * logf(1.0f - pred);\n",
        "        atomicAdd(cost, - partial_cost / size);\n",
        "    }\n",
        "}\n",
        "\n",
        "__global__ void dBinaryCrossEntropyCost(float* predictions, float* target, float* dY,\n",
        "                                        int size) {\n",
        "    int index = blockIdx.x * blockDim.x + threadIdx.x;\n",
        "\n",
        "    if (index < size) {\n",
        "        // Clamp predictions to avoid division by zero\n",
        "        float pred = predictions[index];\n",
        "        pred = fmaxf(fminf(pred, 1.0f - 1e-7f), 1e-7f);\n",
        "\n",
        "        dY[index] = -1.0f * (target[index] / pred - (1.0f - target[index]) / (1.0f - pred));\n",
        "    }\n",
        "}\n",
        "\n",
        "float BCECost::cost(Matrix predictions, Matrix target) {\n",
        "\tassert(predictions.shape.x == target.shape.x);\n",
        "\n",
        "\tfloat* cost;\n",
        "\tcudaMallocManaged(&cost, sizeof(float));\n",
        "\t*cost = 0.0f;\n",
        "\n",
        "\tdim3 block_size(256);\n",
        "\tdim3 num_of_blocks((predictions.shape.x + block_size.x - 1) / block_size.x);\n",
        "\tbinaryCrossEntropyCost<<<num_of_blocks, block_size>>>(predictions.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t  target.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t  predictions.shape.x, cost);\n",
        "\tcudaDeviceSynchronize();\n",
        "\tNNException::throwIfDeviceErrorsOccurred(\"Cannot compute binary cross entropy cost.\");\n",
        "\n",
        "\tfloat cost_value = *cost;\n",
        "\tcudaFree(cost);\n",
        "\n",
        "\treturn cost_value;\n",
        "}\n",
        "\n",
        "Matrix BCECost::dCost(Matrix predictions, Matrix target, Matrix dY) {\n",
        "\tassert(predictions.shape.x == target.shape.x);\n",
        "\n",
        "\tdim3 block_size(256);\n",
        "\tdim3 num_of_blocks((predictions.shape.x + block_size.x - 1) / block_size.x);\n",
        "\tdBinaryCrossEntropyCost<<<num_of_blocks, block_size>>>(predictions.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t   target.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t   dY.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t   predictions.shape.x);\n",
        "\tNNException::throwIfDeviceErrorsOccurred(\"Cannot compute derivative for binary cross entropy.\");\n",
        "\n",
        "\treturn dY;\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "3EHDxRYQVcsg",
        "outputId": "442d565f-0cf0-4900-cab9-84e9fe3f53ac"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing bce_cost.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile main.cu\n",
        "#include <iostream>\n",
        "#include <vector>\n",
        "#include \"matrix.hh\"\n",
        "#include \"bce_cost.hh\"\n",
        "#include \"nn_exception.hh\"\n",
        "\n",
        "// Helper function to initialize a Matrix with data\n",
        "void initializeMatrix(Matrix& matrix, const std::vector<float>& data) {\n",
        "    for (size_t i = 0; i < data.size(); ++i) {\n",
        "        matrix[i] = data[i];\n",
        "    }\n",
        "    matrix.copyHostToDevice();\n",
        "}\n",
        "\n",
        "int main() {\n",
        "    // Define the size of the data\n",
        "    const int size = 10;\n",
        "\n",
        "    // Create predictions and target data\n",
        "    std::vector<float> predictions_data = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95};\n",
        "    std::vector<float> target_data = {0, 0, 1, 0, 1, 0, 1, 1, 1, 0};\n",
        "\n",
        "    // Create Matrix objects for predictions and targets\n",
        "    Matrix predictions(size, 1);\n",
        "    Matrix target(size, 1);\n",
        "    predictions.allocateMemory();\n",
        "    target.allocateMemory();\n",
        "\n",
        "    // Initialize matrices with data\n",
        "    initializeMatrix(predictions, predictions_data);\n",
        "    initializeMatrix(target, target_data);\n",
        "\n",
        "    // Compute the binary cross-entropy cost\n",
        "    BCECost bce_cost;\n",
        "    float cost_value = bce_cost.cost(predictions, target);\n",
        "    std::cout << \"Binary Cross-Entropy Cost: \" << cost_value << std::endl;\n",
        "\n",
        "    // Compute the gradient of the binary cross-entropy cost\n",
        "    Matrix dY(size, 1);\n",
        "    dY.allocateMemory();\n",
        "    Matrix dCost_matrix = bce_cost.dCost(predictions, target, dY);\n",
        "    dCost_matrix.copyDeviceToHost();\n",
        "\n",
        "    // Print the gradient values\n",
        "    std::cout << \"Gradient of Binary Cross-Entropy Cost: \";\n",
        "    for (int i = 0; i < size; ++i) {\n",
        "        std::cout << dCost_matrix[i] << \" \";\n",
        "    }\n",
        "    std::cout << std::endl;\n",
        "\n",
        "    return 0;\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "lyMh7v2vV4BY",
        "outputId": "1207ae5e-9e7c-4412-a835-dc166200bc17"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Overwriting main.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "!nvcc main.cu matrix.cu shape.cu bce_cost.cu -o bce.out\n",
        "!./bce.out"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "DX4m9kH2V9IA",
        "outputId": "70585850-a529-471d-cb58-b20ffccf351e"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Binary Cross-Entropy Cost: 0.733365\n",
            "Gradient of Binary Cross-Entropy Cost: 1.11111 1.25 -3.33333 1.66667 -2 2.5 -1.42857 -1.25 -1.11111 20 \n"
          ]
        }
      ]
    },
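    {
      "cell_type": "markdown",
      "source": [
        "As a sanity check, the gradient values printed above match the standard binary cross-entropy derivative dL/dp = -t/p + (1 - t)/(1 - p). The sketch below (standalone, host-side, independent of the CUDA code) recomputes a few of them:\n",
        "\n",
        "```cpp\n",
        "#include <cassert>\n",
        "#include <cmath>\n",
        "\n",
        "// Derivative of binary cross-entropy w.r.t. a single prediction p with target t.\n",
        "static float dBCE(float p, float t) {\n",
        "    return -t / p + (1.0f - t) / (1.0f - p);\n",
        "}\n",
        "\n",
        "int main() {\n",
        "    // These match the kernel's printed gradient values.\n",
        "    assert(std::fabs(dBCE(0.1f, 0.0f) - 1.11111f) < 1e-4f);  // 1/(1 - 0.1)\n",
        "    assert(std::fabs(dBCE(0.3f, 1.0f) + 3.33333f) < 1e-4f);  // -1/0.3\n",
        "    assert(std::fabs(dBCE(0.95f, 0.0f) - 20.0f) < 1e-3f);    // 1/(1 - 0.95)\n",
        "    return 0;\n",
        "}\n",
        "```"
      ],
      "metadata": {}
    },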
    {
      "cell_type": "markdown",
      "source": [
        "## Defining the Layers\n",
        "Now that we've implemented some critical utilities, we can begin implementing the layers of the network itself."
      ],
      "metadata": {
        "id": "uW8C4O7LT99Z"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "### nn Layer\n",
        "This defines the general interface for a neural network layer; every concrete layer will inherit this structure."
      ],
      "metadata": {
        "id": "f3URqewCcqKU"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile nn_layer.hh\n",
        "#pragma once\n",
        "\n",
        "#include <iostream>\n",
        "#include \"matrix.hh\"\n",
        "\n",
        "class NNLayer {\n",
        "protected:\n",
        "\tstd::string name;\n",
        "\n",
        "public:\n",
        "\tvirtual ~NNLayer() = 0;\n",
        "\n",
        "\tvirtual Matrix& forward(Matrix& A) = 0;\n",
        "\tvirtual Matrix& backprop(Matrix& dZ, float learning_rate) = 0;\n",
        "\n",
        "\tstd::string getName() { return this->name; };\n",
        "\n",
        "};\n",
        "\n",
        "inline NNLayer::~NNLayer() {}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "3cB5TxCsZfa5",
        "outputId": "f06d9574-c68c-44a7-aca9-0fa49d39cefe"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing nn_layer.hh\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "### Linear Layer"
      ],
      "metadata": {
        "id": "LdFVowqTdDN6"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile linear_layer.hh\n",
        "#pragma once\n",
        "#include \"nn_layer.hh\"\n",
        "\n",
        "class LinearLayer : public NNLayer {\n",
        "private:\n",
        "\tconst float weights_init_threshold = 0.01;\n",
        "\n",
        "\tMatrix W;\n",
        "\tMatrix b;\n",
        "\n",
        "\tMatrix Z;\n",
        "\tMatrix A;\n",
        "\tMatrix dA;\n",
        "\n",
        "\tvoid initializeBiasWithZeros();\n",
        "\tvoid initializeWeightsRandomly();\n",
        "\n",
        "\tvoid computeAndStoreBackpropError(Matrix& dZ);\n",
        "\tvoid computeAndStoreLayerOutput(Matrix& A);\n",
        "\tvoid updateWeights(Matrix& dZ, float learning_rate);\n",
        "\tvoid updateBias(Matrix& dZ, float learning_rate);\n",
        "\n",
        "public:\n",
        "\tLinearLayer(std::string name, Shape W_shape);\n",
        "\t~LinearLayer();\n",
        "\n",
        "\tMatrix& forward(Matrix& A);\n",
        "\tMatrix& backprop(Matrix& dZ, float learning_rate = 0.01);\n",
        "\n",
        "\tint getXDim() const;\n",
        "\tint getYDim() const;\n",
        "\n",
        "\tMatrix& getWeightsMatrix();\n",
        "\tMatrix& getBiasVector();\n",
        "};"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "EI9L6Fm_c5B1",
        "outputId": "70c8dece-8749-4adc-ccf4-ac8d92a17b3c"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Overwriting linear_layer.hh\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile linear_layer.cu\n",
        "#include <stdlib.h>\n",
        "#include <assert.h>\n",
        "#include <iostream>\n",
        "#include <random>\n",
        "\n",
        "#include \"linear_layer.hh\"\n",
        "#include \"nn_exception.hh\"\n",
        "\n",
        "__global__ void linearLayerForward( float* W, float* A, float* Z, float* b,\n",
        "\t\t\t\t\t\t\t\t\tint W_x_dim, int W_y_dim,\n",
        "\t\t\t\t\t\t\t\t\tint A_x_dim, int A_y_dim) {\n",
        "\n",
        "\tint row = blockIdx.y * blockDim.y + threadIdx.y;\n",
        "\tint col = blockIdx.x * blockDim.x + threadIdx.x;\n",
        "\n",
        "\tint Z_x_dim = A_x_dim;\n",
        "\tint Z_y_dim = W_y_dim;\n",
        "\n",
        "\tfloat Z_value = 0;\n",
        "\n",
        "\tif (row < Z_y_dim && col < Z_x_dim) {\n",
        "\t\tfor (int i = 0; i < W_x_dim; i++) {\n",
        "\t\t\tZ_value += W[row * W_x_dim + i] * A[i * A_x_dim + col];\n",
        "\t\t}\n",
        "\t\tZ[row * Z_x_dim + col] = Z_value + b[row];\n",
        "\t}\n",
        "}\n",
        "\n",
        "__global__ void linearLayerBackprop(float* W, float* dZ, float *dA,\n",
        "\t\t\t\t\t\t\t\t\tint W_x_dim, int W_y_dim,\n",
        "\t\t\t\t\t\t\t\t\tint dZ_x_dim, int dZ_y_dim) {\n",
        "\n",
        "\tint col = blockIdx.x * blockDim.x + threadIdx.x;\n",
        "\tint row = blockIdx.y * blockDim.y + threadIdx.y;\n",
        "\n",
        "\t// W is treated as transposed\n",
        "\tint dA_x_dim = dZ_x_dim;\n",
        "\tint dA_y_dim = W_x_dim;\n",
        "\n",
        "\tfloat dA_value = 0.0f;\n",
        "\n",
        "\tif (row < dA_y_dim && col < dA_x_dim) {\n",
        "\t\tfor (int i = 0; i < W_y_dim; i++) {\n",
        "\t\t\tdA_value += W[i * W_x_dim + row] * dZ[i * dZ_x_dim + col];\n",
        "\t\t}\n",
        "\t\tdA[row * dA_x_dim + col] = dA_value;\n",
        "\t}\n",
        "}\n",
        "\n",
        "__global__ void linearLayerUpdateWeights(  float* dZ, float* A, float* W,\n",
        "\t\t\t\t\t\t\t\t\t\t   int dZ_x_dim, int dZ_y_dim,\n",
        "\t\t\t\t\t\t\t\t\t\t   int A_x_dim, int A_y_dim,\n",
        "\t\t\t\t\t\t\t\t\t\t   float learning_rate) {\n",
        "\n",
        "\tint col = blockIdx.x * blockDim.x + threadIdx.x;\n",
        "\tint row = blockIdx.y * blockDim.y + threadIdx.y;\n",
        "\n",
        "\t// A is treated as transposed\n",
        "\tint W_x_dim = A_y_dim;\n",
        "\tint W_y_dim = dZ_y_dim;\n",
        "\n",
        "\tfloat dW_value = 0.0f;\n",
        "\n",
        "\tif (row < W_y_dim && col < W_x_dim) {\n",
        "\t\tfor (int i = 0; i < dZ_x_dim; i++) {\n",
        "\t\t\tdW_value += dZ[row * dZ_x_dim + i] * A[col * A_x_dim + i];\n",
        "\t\t}\n",
        "\t\tW[row * W_x_dim + col] = W[row * W_x_dim + col] - learning_rate * (dW_value / A_x_dim);\n",
        "\t}\n",
        "}\n",
        "\n",
        "__global__ void linearLayerUpdateBias(  float* dZ, float* b,\n",
        "\t\t\t\t\t\t\t\t\t\tint dZ_x_dim, int dZ_y_dim,\n",
        "\t\t\t\t\t\t\t\t\t\tint b_x_dim,\n",
        "\t\t\t\t\t\t\t\t\t\tfloat learning_rate) {\n",
        "\tint index = blockIdx.x * blockDim.x + threadIdx.x;\n",
        "\n",
        "\tif (index < dZ_x_dim * dZ_y_dim) {\n",
        "\t\tint dZ_x = index % dZ_x_dim;\n",
        "\t\tint dZ_y = index / dZ_x_dim;\n",
        "\t\tatomicAdd(&b[dZ_y], - learning_rate * (dZ[dZ_y * dZ_x_dim + dZ_x] / dZ_x_dim));\n",
        "\t}\n",
        "}\n",
        "\n",
        "LinearLayer::LinearLayer(std::string name, Shape W_shape) :\n",
        "\tW(W_shape), b(W_shape.y, 1)\n",
        "{\n",
        "\tthis->name = name;\n",
        "\tb.allocateMemory();\n",
        "\tW.allocateMemory();\n",
        "\tinitializeBiasWithZeros();\n",
        "\tinitializeWeightsRandomly();\n",
        "}\n",
        "\n",
        "LinearLayer::~LinearLayer()\n",
        "{ }\n",
        "\n",
        "void LinearLayer::initializeWeightsRandomly() {\n",
        "\tstd::default_random_engine generator;\n",
        "\tstd::normal_distribution<float> normal_distribution(0.0, 1.0);\n",
        "\n",
        "\tfor (int x = 0; x < W.shape.x; x++) {\n",
        "\t\tfor (int y = 0; y < W.shape.y; y++) {\n",
        "\t\t\tW[y * W.shape.x + x] = normal_distribution(generator) * weights_init_threshold;\n",
        "\t\t}\n",
        "\t}\n",
        "\n",
        "\tW.copyHostToDevice();\n",
        "}\n",
        "\n",
        "void LinearLayer::initializeBiasWithZeros() {\n",
        "\tfor (int x = 0; x < b.shape.x; x++) {\n",
        "\t\tb[x] = 0;\n",
        "\t}\n",
        "\n",
        "\tb.copyHostToDevice();\n",
        "}\n",
        "\n",
        "Matrix& LinearLayer::forward(Matrix& A) {\n",
        "\tassert(W.shape.x == A.shape.y);\n",
        "\n",
        "\tthis->A = A;\n",
        "\tShape Z_shape(A.shape.x, W.shape.y);\n",
        "\tZ.allocateMemoryIfNotAllocated(Z_shape);\n",
        "\n",
        "\tcomputeAndStoreLayerOutput(A);\n",
        "\tNNException::throwIfDeviceErrorsOccurred(\"Cannot perform linear layer forward propagation.\");\n",
        "\n",
        "\treturn Z;\n",
        "}\n",
        "\n",
        "void LinearLayer::computeAndStoreLayerOutput(Matrix& A) {\n",
        "\tdim3 block_size(8, 8);\n",
        "\tdim3 num_of_blocks(\t(Z.shape.x + block_size.x - 1) / block_size.x,\n",
        "\t\t\t\t\t\t(Z.shape.y + block_size.y - 1) / block_size.y);\n",
        "\tlinearLayerForward<<<num_of_blocks, block_size>>>( W.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t   A.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t   Z.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t   b.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t   W.shape.x, W.shape.y,\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t   A.shape.x, A.shape.y);\n",
        "}\n",
        "\n",
        "Matrix& LinearLayer::backprop(Matrix& dZ, float learning_rate) {\n",
        "\tdA.allocateMemoryIfNotAllocated(A.shape);\n",
        "\n",
        "\tcomputeAndStoreBackpropError(dZ);\n",
        "\tNNException::throwIfDeviceErrorsOccurred(\"Cannot perform back propagation.\");\n",
        "\n",
        "\tupdateBias(dZ, learning_rate);\n",
        "\tNNException::throwIfDeviceErrorsOccurred(\"Cannot perform bias update.\");\n",
        "\n",
        "\tupdateWeights(dZ, learning_rate);\n",
        "\tNNException::throwIfDeviceErrorsOccurred(\"Cannot perform weights update.\");\n",
        "\n",
        "\treturn dA;\n",
        "}\n",
        "\n",
        "void LinearLayer::computeAndStoreBackpropError(Matrix& dZ) {\n",
        "\tdim3 block_size(8, 8);\n",
        "\tdim3 num_of_blocks(\t(A.shape.x + block_size.x - 1) / block_size.x,\n",
        "\t\t\t\t\t\t(A.shape.y + block_size.y - 1) / block_size.y);\n",
        "\tlinearLayerBackprop<<<num_of_blocks, block_size>>>( W.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\tdZ.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\tdA.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\tW.shape.x, W.shape.y,\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\tdZ.shape.x, dZ.shape.y);\n",
        "}\n",
        "\n",
        "void LinearLayer::updateWeights(Matrix& dZ, float learning_rate) {\n",
        "\tdim3 block_size(8, 8);\n",
        "\tdim3 num_of_blocks(\t(W.shape.x + block_size.x - 1) / block_size.x,\n",
        "\t\t\t\t\t\t(W.shape.y + block_size.y - 1) / block_size.y);\n",
        "\tlinearLayerUpdateWeights<<<num_of_blocks, block_size>>>(dZ.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tA.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tW.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tdZ.shape.x, dZ.shape.y,\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tA.shape.x, A.shape.y,\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tlearning_rate);\n",
        "}\n",
        "\n",
        "void LinearLayer::updateBias(Matrix& dZ, float learning_rate) {\n",
        "\tdim3 block_size(256);\n",
        "\tdim3 num_of_blocks( (dZ.shape.y * dZ.shape.x + block_size.x - 1) / block_size.x);\n",
        "\tlinearLayerUpdateBias<<<num_of_blocks, block_size>>>(dZ.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t b.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t dZ.shape.x, dZ.shape.y,\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t b.shape.x, learning_rate);\n",
        "}\n",
        "\n",
        "int LinearLayer::getXDim() const {\n",
        "\treturn W.shape.x;\n",
        "}\n",
        "\n",
        "int LinearLayer::getYDim() const {\n",
        "\treturn W.shape.y;\n",
        "}\n",
        "\n",
        "Matrix& LinearLayer::getWeightsMatrix() {\n",
        "    return W;\n",
        "}\n",
        "\n",
        "Matrix& LinearLayer::getBiasVector() {\n",
        "    return b;\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "x2CJQgGadL5l",
        "outputId": "76760d0b-c00e-46f5-e6bd-c0176c460cce"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Overwriting linear_layer.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile main.cu\n",
        "#include \"linear_layer.hh\"\n",
        "#include \"bce_cost.hh\"\n",
        "#include <iostream>\n",
        "#include \"matrix.hh\"\n",
        "\n",
        "void printMatrix(Matrix& matrix, const std::string& name) {\n",
        "    matrix.copyDeviceToHost();\n",
        "    std::cout << name << \":\" << std::endl;\n",
        "    for (int i = 0; i < matrix.shape.x * matrix.shape.y; ++i) {\n",
        "        std::cout << matrix[i] << \" \";\n",
        "    }\n",
        "    std::cout << std::endl;\n",
        "}\n",
        "\n",
        "int main() {\n",
        "    // Define input dimensions and initialize the layer\n",
        "    Shape input_shape(1, 3); // x = 1 (one sample/column), y = 3 (three features/rows)\n",
        "    Shape weight_shape(3, 1); // shape of weights, resulting in a 1x1 output\n",
        "\n",
        "    LinearLayer layer(\"test_layer\", weight_shape);\n",
        "\n",
        "    // Allocate memory for input and output\n",
        "    Matrix input(input_shape);\n",
        "    input.allocateMemory();\n",
        "    input[0] = 0.1f; input[1] = 0.2f; input[2] = 0.3f;\n",
        "    input.copyHostToDevice();\n",
        "\n",
        "    // Allocate memory for target\n",
        "    Matrix target(Shape(1, 1)); // 1x1 target matrix\n",
        "    target.allocateMemory();\n",
        "    target[0] = 0.0f;\n",
        "    target.copyHostToDevice();\n",
        "\n",
        "    // Print initial weights and biases\n",
        "    printMatrix(layer.getWeightsMatrix(), \"Initial Weights\");\n",
        "    printMatrix(layer.getBiasVector(), \"Initial Biases\");\n",
        "\n",
        "    // Perform forward pass\n",
        "    Matrix& output = layer.forward(input);\n",
        "    output.copyDeviceToHost();\n",
        "\n",
        "    // Print forward pass output\n",
        "    std::cout << \"Forward pass output:\" << std::endl;\n",
        "    for (int i = 0; i < output.shape.x * output.shape.y; ++i) {\n",
        "        std::cout << output[i] << \" \";\n",
        "    }\n",
        "    std::cout << std::endl;\n",
        "\n",
        "    // Calculate BCE loss\n",
        "    BCECost bce;\n",
        "    float loss = bce.cost(output, target);\n",
        "    std::cout << \"Binary Cross Entropy Loss: \" << loss << std::endl;\n",
        "\n",
        "    // Calculate gradient of BCE loss\n",
        "    Matrix dZ(output.shape);\n",
        "    dZ.allocateMemory();\n",
        "    bce.dCost(output, target, dZ);\n",
        "\n",
        "    // Perform backpropagation\n",
        "    float learning_rate = 0.01f;\n",
        "    Matrix& dA = layer.backprop(dZ, learning_rate);\n",
        "    dA.copyDeviceToHost();\n",
        "\n",
        "    // Print backpropagation output (dA)\n",
        "    std::cout << \"Backpropagation output (dA):\" << std::endl;\n",
        "    for (int i = 0; i < dA.shape.x * dA.shape.y; ++i) {\n",
        "        std::cout << dA[i] << \" \";\n",
        "    }\n",
        "    std::cout << std::endl;\n",
        "\n",
        "    // Print updated weights and biases\n",
        "    printMatrix(layer.getWeightsMatrix(), \"Updated Weights\");\n",
        "    printMatrix(layer.getBiasVector(), \"Updated Biases\");\n",
        "\n",
        "    return 0;\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "nai0jAlpdUdA",
        "outputId": "0a8e19f4-305c-47aa-fdc4-625f84624af1"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Overwriting main.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "!nvcc main.cu matrix.cu shape.cu bce_cost.cu linear_layer.cu -o ll.out\n",
        "!./ll.out"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "40xSchfVeJ4R",
        "outputId": "16a77fd3-be4d-424c-a984-d7d9648e3e67"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Initial Weights:\n",
            "-0.00259093 0.0160159 -0.0149896 \n",
            "Initial Biases:\n",
            "0 \n",
            "Forward pass output:\n",
            "-0.00155279 \n",
            "Binary Cross Entropy Loss: 1.19209e-07\n",
            "Backpropagation output (dA):\n",
            "-0.00259093 0.0160159 -0.0149896 \n",
            "Updated Weights:\n",
            "-0.00359093 0.0140159 -0.0179896 \n",
            "Updated Biases:\n",
            "-0.01 \n"
          ]
        }
      ]
    },
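    {
      "cell_type": "markdown",
      "source": [
        "A quick hand check of the numbers above (a standalone sketch; the dZ value of ~1 is an assumption that the BCE derivative clamps the slightly negative prediction to roughly zero, which the printed updates are consistent with):\n",
        "\n",
        "```cpp\n",
        "#include <cassert>\n",
        "#include <cmath>\n",
        "\n",
        "int main() {\n",
        "    // Initial weights/bias and input, as printed above.\n",
        "    float w[3] = {-0.00259093f, 0.0160159f, -0.0149896f};\n",
        "    float x[3] = {0.1f, 0.2f, 0.3f};\n",
        "    float b = 0.0f, lr = 0.01f;\n",
        "\n",
        "    // Forward pass: z = w.x + b, matching the printed output.\n",
        "    float z = b;\n",
        "    for (int i = 0; i < 3; ++i) z += w[i] * x[i];\n",
        "    assert(std::fabs(z - (-0.00155279f)) < 1e-6f);\n",
        "\n",
        "    // With the prediction clamped to ~0 and target 0, the BCE gradient is ~1,\n",
        "    // so SGD moves each weight by -lr * x_i and the bias by -lr.\n",
        "    float dZ = 1.0f; // assumed clamped gradient, see printed updates\n",
        "    for (int i = 0; i < 3; ++i) w[i] -= lr * dZ * x[i];\n",
        "    b -= lr * dZ;\n",
        "    assert(std::fabs(w[0] - (-0.00359093f)) < 1e-6f);\n",
        "    assert(std::fabs(b - (-0.01f)) < 1e-7f);\n",
        "    return 0;\n",
        "}\n",
        "```"
      ],
      "metadata": {}
    },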
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "### Training Naive Linear Model\n",
        "\n",
        "Because the BCE implementation clamps predictions into [0, 1], a bare linear model (no sigmoid) trained this way will drive its output below zero when the target is 0, and above one when the target is 1."
      ],
      "metadata": {
        "id": "CV5MZpPg3ApI"
      }
    },
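    {
      "cell_type": "markdown",
      "source": [
        "To make that concrete, here's a standalone sketch of a clamped BCE derivative (an assumption about what dBinaryCrossEntropyCost does, based on its printed behavior): once the prediction leaves [0, 1], the gradient saturates at roughly +/-1 and keeps pushing it further out.\n",
        "\n",
        "```cpp\n",
        "#include <cassert>\n",
        "#include <cfloat>\n",
        "\n",
        "// BCE derivative with the prediction clamped away from 0 and 1.\n",
        "static float dBCEClamped(float p, float t) {\n",
        "    if (p < FLT_EPSILON) p = FLT_EPSILON;\n",
        "    if (p > 1.0f - FLT_EPSILON) p = 1.0f - FLT_EPSILON;\n",
        "    return -t / p + (1.0f - t) / (1.0f - p);\n",
        "}\n",
        "\n",
        "int main() {\n",
        "    // Target 0: the gradient stays ~+1 even for a negative prediction,\n",
        "    // so gradient descent keeps pushing the output further below zero.\n",
        "    assert(dBCEClamped(-0.5f, 0.0f) > 0.999f);\n",
        "    // Target 1: symmetrically, the gradient stays ~-1 above one.\n",
        "    assert(dBCEClamped(1.5f, 1.0f) < -0.999f);\n",
        "    return 0;\n",
        "}\n",
        "```"
      ],
      "metadata": {}
    },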
    {
      "cell_type": "code",
      "source": [
        "%%writefile main.cu\n",
        "#include \"linear_layer.hh\"\n",
        "#include \"bce_cost.hh\"\n",
        "#include <iostream>\n",
        "#include \"matrix.hh\"\n",
        "\n",
        "void printMatrix(Matrix& matrix, const std::string& name) {\n",
        "    matrix.copyDeviceToHost();\n",
        "    std::cout << name << \":\" << std::endl;\n",
        "    for (int i = 0; i < matrix.shape.x * matrix.shape.y; ++i) {\n",
        "        std::cout << matrix[i] << \" \";\n",
        "    }\n",
        "    std::cout << std::endl;\n",
        "}\n",
        "\n",
        "int main() {\n",
        "    // Define input dimensions and initialize the layer\n",
        "    Shape input_shape(1, 3); // x = 1 (one sample/column), y = 3 (three features/rows)\n",
        "    Shape weight_shape(3, 1); // shape of weights, resulting in a 1x1 output\n",
        "\n",
        "    LinearLayer layer(\"test_layer\", weight_shape);\n",
        "\n",
        "    // Allocate memory for input and output\n",
        "    Matrix input(input_shape);\n",
        "    input.allocateMemory();\n",
        "    input[0] = 0.1f; input[1] = 0.2f; input[2] = 0.3f;\n",
        "    input.copyHostToDevice();\n",
        "\n",
        "    // Allocate memory for target\n",
        "    Matrix target(Shape(1, 1)); // 1x1 target matrix\n",
        "    target.allocateMemory();\n",
        "    target[0] = 0.0f;\n",
        "    target.copyHostToDevice();\n",
        "\n",
        "    // Print initial weights and biases\n",
        "    printMatrix(layer.getWeightsMatrix(), \"Initial Weights\");\n",
        "    printMatrix(layer.getBiasVector(), \"Initial Biases\");\n",
        "\n",
        "    // Training loop\n",
        "    for (int i = 0; i < 3; ++i) {\n",
        "        // Perform forward pass\n",
        "        Matrix& output = layer.forward(input);\n",
        "        output.copyDeviceToHost();\n",
        "\n",
        "        // Print forward pass output\n",
        "        std::cout << \"Forward pass output:\" << std::endl;\n",
        "        for (int j = 0; j < output.shape.x * output.shape.y; ++j) {\n",
        "            std::cout << output[j] << \" \";\n",
        "        }\n",
        "        std::cout << std::endl;\n",
        "\n",
        "        // Calculate BCE loss\n",
        "        BCECost bce;\n",
        "        float loss = bce.cost(output, target);\n",
        "        std::cout << \"Loss at iteration \" << i << \": \" << loss << std::endl;\n",
        "\n",
        "        // Calculate gradient of BCE loss\n",
        "        Matrix dZ(output.shape);\n",
        "        dZ.allocateMemory();\n",
        "        bce.dCost(output, target, dZ);\n",
        "\n",
        "        // Perform backpropagation\n",
        "        float learning_rate = 0.000001f;\n",
        "        layer.backprop(dZ, learning_rate);\n",
        "    }\n",
        "\n",
        "    // Print updated weights and biases\n",
        "    printMatrix(layer.getWeightsMatrix(), \"Updated Weights\");\n",
        "    printMatrix(layer.getBiasVector(), \"Updated Biases\");\n",
        "\n",
        "    return 0;\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ajBZmI4uegXZ",
        "outputId": "ba7fbc92-f9f0-4319-cc24-d2f3495082b5"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Overwriting main.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "!nvcc main.cu matrix.cu shape.cu bce_cost.cu linear_layer.cu -o ll.out\n",
        "!./ll.out"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ab30gCp83aMu",
        "outputId": "959de553-5628-414c-d704-ddce85081eed"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Initial Weights:\n",
            "-0.00259093 0.0160159 -0.0149896 \n",
            "Initial Biases:\n",
            "0 \n",
            "Forward pass output:\n",
            "-0.00155279 \n",
            "Loss at iteration 0: 1.19209e-07\n",
            "Forward pass output:\n",
            "-0.00155393 \n",
            "Loss at iteration 1: 1.19209e-07\n",
            "Forward pass output:\n",
            "-0.00155507 \n",
            "Loss at iteration 2: 1.19209e-07\n",
            "Updated Weights:\n",
            "-0.00259123 0.0160153 -0.0149905 \n",
            "Updated Biases:\n",
            "-3e-06 \n"
          ]
        }
      ]
    },
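    {
      "cell_type": "markdown",
      "source": [
        "Note that the loss above is pinned at 1.19209e-07, which is exactly FLT_EPSILON. Assuming the cost kernel clamps predictions to [eps, 1 - eps], a negative prediction with target 0 costs -log(1 - eps), which is approximately eps:\n",
        "\n",
        "```cpp\n",
        "#include <cassert>\n",
        "#include <cmath>\n",
        "#include <cfloat>\n",
        "\n",
        "int main() {\n",
        "    // With the prediction clamped to eps and target 0, the per-sample\n",
        "    // BCE cost is -log(1 - eps) ~= eps = FLT_EPSILON = 1.19209e-07.\n",
        "    float cost = -std::log(1.0f - FLT_EPSILON);\n",
        "    assert(std::fabs(cost - 1.19209e-07f) < 1e-11f);\n",
        "    return 0;\n",
        "}\n",
        "```"
      ],
      "metadata": {}
    },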
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "### Making sigmoid activation\n",
        "The final output of the model will be passed through a sigmoid activation function, so we're making that.\n",
        "\n",
        "While it behaves differently, it's another subclass of NNLayer, just like the linear layer: there's a forward and a backward pass. The forward pass applies the sigmoid function element-wise, and the backward pass scales the incoming gradient by the derivative of the sigmoid."
      ],
      "metadata": {
        "id": "7GY1XGnG8hQo"
      }
    },
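    {
      "cell_type": "markdown",
      "source": [
        "The backward pass relies on the identity sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)). A standalone host-side sketch checking that identity against a central finite difference:\n",
        "\n",
        "```cpp\n",
        "#include <cassert>\n",
        "#include <cmath>\n",
        "\n",
        "static float sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }\n",
        "\n",
        "int main() {\n",
        "    // Compare the closed-form derivative sigmoid(x) * (1 - sigmoid(x))\n",
        "    // against a numerical derivative at a few points.\n",
        "    for (float x : {-2.0f, 0.0f, 1.5f}) {\n",
        "        float h = 1e-3f;\n",
        "        float numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2.0f * h);\n",
        "        float closed  = sigmoid(x) * (1.0f - sigmoid(x));\n",
        "        assert(std::fabs(numeric - closed) < 1e-3f);\n",
        "    }\n",
        "    return 0;\n",
        "}\n",
        "```"
      ],
      "metadata": {}
    },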
    {
      "cell_type": "code",
      "source": [
        "%%writefile sigmoid_activation.hh\n",
        "#pragma once\n",
        "\n",
        "#include \"nn_layer.hh\"\n",
        "\n",
        "class SigmoidActivation : public NNLayer {\n",
        "private:\n",
        "\tMatrix A;\n",
        "\n",
        "\tMatrix Z;\n",
        "\tMatrix dZ;\n",
        "\n",
        "public:\n",
        "\tSigmoidActivation(std::string name);\n",
        "\t~SigmoidActivation();\n",
        "\n",
        "\tMatrix& forward(Matrix& Z);\n",
        "\tMatrix& backprop(Matrix& dA, float learning_rate = 0.01);\n",
        "};"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "nFg8-GTB3b_d",
        "outputId": "4ed34c75-67c2-4923-e73c-9e4d04e0cdf4"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing sigmoid_activation.hh\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile sigmoid_activation.cu\n",
        "#include \"sigmoid_activation.hh\"\n",
        "#include \"nn_exception.hh\"\n",
        "#include <iostream>\n",
        "\n",
        "__device__ float sigmoid(float x) {\n",
        "\treturn 1.0f / (1 + exp(-x));\n",
        "}\n",
        "\n",
        "__global__ void sigmoidActivationForward(float* Z, float* A,\n",
        "\t\t\t\t\t\t\t\t\t\t int Z_x_dim, int Z_y_dim) {\n",
        "\n",
        "\tint index = blockIdx.x * blockDim.x + threadIdx.x;\n",
        "\n",
        "\tif (index < Z_x_dim * Z_y_dim) {\n",
        "\t\tA[index] = sigmoid(Z[index]);\n",
        "\t}\n",
        "}\n",
        "\n",
        "__global__ void sigmoidActivationBackprop(float* Z, float* dA, float* dZ,\n",
        "\t\t\t\t\t\t\t\t\t\t  int Z_x_dim, int Z_y_dim) {\n",
        "\n",
        "\tint index = blockIdx.x * blockDim.x + threadIdx.x;\n",
        "\n",
        "\tif (index < Z_x_dim * Z_y_dim) {\n",
        "\t\tdZ[index] = dA[index] * sigmoid(Z[index]) * (1 - sigmoid(Z[index]));\n",
        "\t}\n",
        "}\n",
        "\n",
        "SigmoidActivation::SigmoidActivation(std::string name) {\n",
        "\tthis->name = name;\n",
        "}\n",
        "\n",
        "SigmoidActivation::~SigmoidActivation()\n",
        "{ }\n",
        "\n",
        "Matrix& SigmoidActivation::forward(Matrix& Z) {\n",
        "\tthis->Z = Z;\n",
        "\tA.allocateMemoryIfNotAllocated(Z.shape);\n",
        "\n",
        "\tdim3 block_size(256);\n",
        "\tdim3 num_of_blocks((Z.shape.y * Z.shape.x + block_size.x - 1) / block_size.x);\n",
        "\n",
        "\tsigmoidActivationForward<<<num_of_blocks, block_size>>>(Z.data_device.get(), A.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t   \tZ.shape.x, Z.shape.y);\n",
        "\tNNException::throwIfDeviceErrorsOccurred(\"Cannot perform sigmoid forward propagation.\");\n",
        "\n",
        "\treturn A;\n",
        "}\n",
        "\n",
        "Matrix& SigmoidActivation::backprop(Matrix& dA, float learning_rate) {\n",
        "\tdZ.allocateMemoryIfNotAllocated(Z.shape);\n",
        "\n",
        "\tdim3 block_size(256);\n",
        "\tdim3 num_of_blocks((Z.shape.y * Z.shape.x + block_size.x - 1) / block_size.x);\n",
        "\tsigmoidActivationBackprop<<<num_of_blocks, block_size>>>(Z.data_device.get(), dA.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t dZ.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t Z.shape.x, Z.shape.y);\n",
        "\tNNException::throwIfDeviceErrorsOccurred(\"Cannot perform sigmoid back propagation\");\n",
        "\n",
        "\treturn dZ;\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "TBEfGEdQACSC",
        "outputId": "8438997f-7539-4251-fa93-100b89fa8627"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing sigmoid_activation.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile main.cu\n",
        "#include \"sigmoid_activation.hh\"\n",
        "#include \"nn_exception.hh\"\n",
        "#include \"matrix.hh\"\n",
        "#include <iostream>\n",
        "\n",
        "void printMatrix(Matrix& matrix, const std::string& name) {\n",
        "    matrix.copyDeviceToHost();\n",
        "    std::cout << name << \":\" << std::endl;\n",
        "    for (int i = 0; i < matrix.shape.x * matrix.shape.y; ++i) {\n",
        "        std::cout << matrix[i] << \" \";\n",
        "    }\n",
        "    std::cout << std::endl;\n",
        "}\n",
        "\n",
        "int main() {\n",
        "    // Define input dimensions and initialize the matrix\n",
        "    Shape input_shape(1, 3); // x = 1 (one column), y = 3 (three values)\n",
        "\n",
        "    // Initialize SigmoidActivation\n",
        "    SigmoidActivation sigmoid(\"sigmoid_activation\");\n",
        "\n",
        "    // Allocate memory for input matrix\n",
        "    Matrix input(input_shape);\n",
        "    input.allocateMemory();\n",
        "    input[0] = -1.0f; input[1] = 0.0f; input[2] = 1.0f;\n",
        "    input.copyHostToDevice();\n",
        "\n",
        "    // Perform forward pass\n",
        "    Matrix& output = sigmoid.forward(input);\n",
        "    output.copyDeviceToHost();\n",
        "\n",
        "    // Print forward pass output\n",
        "    printMatrix(output, \"Forward pass output\");\n",
        "\n",
        "    // Allocate memory for gradient matrix\n",
        "    Matrix dA(output.shape);\n",
        "    dA.allocateMemory();\n",
        "    dA[0] = 0.1f; dA[1] = 0.2f; dA[2] = 0.3f;\n",
        "    dA.copyHostToDevice();\n",
        "\n",
        "    // Perform backward pass\n",
        "    Matrix& dZ = sigmoid.backprop(dA, 0.01f);\n",
        "    dZ.copyDeviceToHost();\n",
        "\n",
        "    // Print backward pass output\n",
        "    printMatrix(dZ, \"Backward pass output\");\n",
        "\n",
        "    return 0;\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "OJVzbVkkAJxw",
        "outputId": "771b6146-dad8-4a17-fc33-31c6866ec681"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Overwriting main.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "!nvcc main.cu matrix.cu shape.cu bce_cost.cu sigmoid_activation.cu -o sig.out\n",
        "!./sig.out"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "_Y-juW2xAZdX",
        "outputId": "6dfeef5b-b2ad-48b8-fe3d-b26d86d46c04"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Forward pass output:\n",
            "0.268941 0.5 0.731059 \n",
            "Backward pass output:\n",
            "0.0196612 0.05 0.0589836 \n"
          ]
        }
      ]
    },
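    {
      "cell_type": "markdown",
      "source": [
        "Quick sanity check (my addition, not from the original blog's code): the backward kernel computes `dZ = dA * sigmoid(Z) * (1 - sigmoid(Z))`, so we can recompute both passes in plain Python and compare against the printed output above."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import math\n",
        "\n",
        "def sigmoid(x):\n",
        "    return 1.0 / (1.0 + math.exp(-x))\n",
        "\n",
        "Z = [-1.0, 0.0, 1.0]\n",
        "dA = [0.1, 0.2, 0.3]\n",
        "\n",
        "# Forward pass: A = sigmoid(Z), elementwise\n",
        "A = [sigmoid(z) for z in Z]\n",
        "\n",
        "# Backward pass: dZ = dA * sigmoid(Z) * (1 - sigmoid(Z)), elementwise\n",
        "dZ = [da * sigmoid(z) * (1.0 - sigmoid(z)) for da, z in zip(dA, Z)]\n",
        "\n",
        "print('Forward: ', [round(a, 6) for a in A])\n",
        "print('Backward:', [round(d, 7) for d in dZ])"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },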
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "### Making ReLU activation\n",
        "Virtually identical to Sigmoid, except the forward and backward passes use different kernel functions."
      ],
      "metadata": {
        "id": "LK73skCuD4jm"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile relu_activation.hh\n",
        "#pragma once\n",
        "\n",
        "#include \"nn_layer.hh\"\n",
        "\n",
        "class ReLUActivation : public NNLayer {\n",
        "private:\n",
        "\tMatrix A;\n",
        "\n",
        "\tMatrix Z;\n",
        "\tMatrix dZ;\n",
        "\n",
        "public:\n",
        "\tReLUActivation(std::string name);\n",
        "\t~ReLUActivation();\n",
        "\n",
        "\tMatrix& forward(Matrix& Z);\n",
        "\tMatrix& backprop(Matrix& dA, float learning_rate = 0.01);\n",
        "};"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "HOFQ6nqfAbsU",
        "outputId": "f1c05b89-fb1c-45a2-8ec0-56b0cf21955e"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing relu_activation.hh\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile relu_activation.cu\n",
        "#include \"relu_activation.hh\"\n",
        "#include \"nn_exception.hh\"\n",
        "\n",
        "__global__ void reluActivationForward(float* Z, float* A,\n",
        "\t\t\t\t\t\t\t\t\t  int Z_x_dim, int Z_y_dim) {\n",
        "\n",
        "\tint index = blockIdx.x * blockDim.x + threadIdx.x;\n",
        "\n",
        "\tif (index < Z_x_dim * Z_y_dim) {\n",
        "\t\tA[index] = fmaxf(Z[index], 0);\n",
        "\t}\n",
        "}\n",
        "\n",
        "__global__ void reluActivationBackprop(float* Z, float* dA, float* dZ,\n",
        "\t\t\t\t\t\t\t\t\t   int Z_x_dim, int Z_y_dim) {\n",
        "\n",
        "\tint index = blockIdx.x * blockDim.x + threadIdx.x;\n",
        "\n",
        "\tif (index < Z_x_dim * Z_y_dim) {\n",
        "\t\tif (Z[index] > 0) {\n",
        "\t\t\tdZ[index] = dA[index];\n",
        "\t\t}\n",
        "\t\telse {\n",
        "\t\t\tdZ[index] = 0;\n",
        "\t\t}\n",
        "\t}\n",
        "}\n",
        "\n",
        "ReLUActivation::ReLUActivation(std::string name) {\n",
        "\tthis->name = name;\n",
        "}\n",
        "\n",
        "ReLUActivation::~ReLUActivation() { }\n",
        "\n",
        "Matrix& ReLUActivation::forward(Matrix& Z) {\n",
        "\tthis->Z = Z;\n",
        "\tA.allocateMemoryIfNotAllocated(Z.shape);\n",
        "\n",
        "\tdim3 block_size(256);\n",
        "\tdim3 num_of_blocks((Z.shape.y * Z.shape.x + block_size.x - 1) / block_size.x);\n",
        "\n",
        "\treluActivationForward<<<num_of_blocks, block_size>>>(Z.data_device.get(), A.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t Z.shape.x, Z.shape.y);\n",
        "\tNNException::throwIfDeviceErrorsOccurred(\"Cannot perform ReLU forward propagation.\");\n",
        "\n",
        "\treturn A;\n",
        "}\n",
        "\n",
        "Matrix& ReLUActivation::backprop(Matrix& dA, float learning_rate) {\n",
        "\tdZ.allocateMemoryIfNotAllocated(Z.shape);\n",
        "\n",
        "\tdim3 block_size(256);\n",
        "\tdim3 num_of_blocks((Z.shape.y * Z.shape.x + block_size.x - 1) / block_size.x);\n",
        "\treluActivationBackprop<<<num_of_blocks, block_size>>>(Z.data_device.get(), dA.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t      dZ.data_device.get(),\n",
        "\t\t\t\t\t\t\t\t\t\t\t\t\t\t  Z.shape.x, Z.shape.y);\n",
        "\tNNException::throwIfDeviceErrorsOccurred(\"Cannot perform ReLU back propagation\");\n",
        "\n",
        "\treturn dZ;\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "3mkRC_psENBV",
        "outputId": "8afe7b42-3765-442c-f607-145ccb34633f"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Overwriting relu_activation.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile main.cu\n",
        "#include \"relu_activation.hh\"\n",
        "#include \"nn_exception.hh\"\n",
        "#include \"matrix.hh\"\n",
        "#include <iostream>\n",
        "\n",
        "void printMatrix(Matrix& matrix, const std::string& name) {\n",
        "    matrix.copyDeviceToHost();\n",
        "    std::cout << name << \":\" << std::endl;\n",
        "    for (int i = 0; i < matrix.shape.x * matrix.shape.y; ++i) {\n",
        "        std::cout << matrix[i] << \" \";\n",
        "    }\n",
        "    std::cout << std::endl;\n",
        "}\n",
        "\n",
        "int main() {\n",
        "    // Define input dimensions and initialize the matrix\n",
        "    Shape input_shape(1, 3); // (1 row, 3 columns)\n",
        "\n",
        "    // Initialize ReLUActivation\n",
        "    ReLUActivation relu(\"relu_activation\");\n",
        "\n",
        "    // Allocate memory for input matrix\n",
        "    Matrix input(input_shape);\n",
        "    input.allocateMemory();\n",
        "    input[0] = -1.0f; input[1] = 0.0f; input[2] = 1.0f;\n",
        "    input.copyHostToDevice();\n",
        "\n",
        "    // Perform forward pass\n",
        "    Matrix& output = relu.forward(input);\n",
        "    output.copyDeviceToHost();\n",
        "\n",
        "    // Print forward pass output\n",
        "    printMatrix(output, \"Forward pass output\");\n",
        "\n",
        "    // Allocate memory for gradient matrix\n",
        "    Matrix dA(output.shape);\n",
        "    dA.allocateMemory();\n",
        "    dA[0] = 0.1f; dA[1] = 0.2f; dA[2] = 0.3f;\n",
        "    dA.copyHostToDevice();\n",
        "\n",
        "    // Perform backward pass\n",
        "    Matrix& dZ = relu.backprop(dA, 0.01f);\n",
        "    dZ.copyDeviceToHost();\n",
        "\n",
        "    // Print backward pass output\n",
        "    printMatrix(dZ, \"Backward pass output\");\n",
        "\n",
        "    return 0;\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "nuyl9qt0ETTR",
        "outputId": "cc7476cb-cc1b-4b5f-c79e-a225f51fbed9"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Overwriting main.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "!nvcc main.cu matrix.cu shape.cu bce_cost.cu relu_activation.cu -o relu.out\n",
        "!./relu.out"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "R3FAPZOuEvyp",
        "outputId": "a0efd9e6-89d4-4c96-de4d-c9276049ab2f"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Forward pass output:\n",
            "0 0 1 \n",
            "Backward pass output:\n",
            "0 0 0.3 \n"
          ]
        }
      ]
    },
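    {
      "cell_type": "markdown",
      "source": [
        "Another quick Python sanity check (my addition): ReLU's forward pass is `max(Z, 0)` elementwise, and its backward pass lets the gradient through only where `Z > 0`, which matches the printed output above."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "Z = [-1.0, 0.0, 1.0]\n",
        "dA = [0.1, 0.2, 0.3]\n",
        "\n",
        "# Forward pass: A = max(Z, 0), elementwise\n",
        "A = [max(z, 0.0) for z in Z]\n",
        "\n",
        "# Backward pass: the gradient flows only where Z > 0\n",
        "dZ = [da if z > 0 else 0.0 for da, z in zip(dA, Z)]\n",
        "\n",
        "print('Forward: ', A)\n",
        "print('Backward:', dZ)"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },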
    {
      "cell_type": "markdown",
      "source": [
        "## Finally\n",
        "We have all the core functionality set up, including utilities:\n",
        " - an abstraction for handling matrices\n",
        " - an abstraction for handling errors\n",
        " - a handy little structure for defining shape\n",
        " - an implementation of binary cross entropy, including the forward and backward pass\n",
        "\n",
        "layers:\n",
        " - a fully connected layer, including the forward and backward pass\n",
        " - a sigmoid activation function, including the forward and backward pass\n",
        " - a ReLU activation function, including the forward and backward pass\n",
        "\n",
        "Now we can put this all together to train a model. To do that we'll define two more (very small) abstractions:\n",
        "\n",
        " - one for the model\n",
        " - one for the dataset"
      ],
      "metadata": {
        "id": "Itt3qW4sFkKV"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "### Neural network"
      ],
      "metadata": {
        "id": "rbS-Itc9I7rq"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile neural_network.hh\n",
        "#pragma once\n",
        "\n",
        "#include <vector>\n",
        "#include \"nn_layer.hh\"\n",
        "#include \"bce_cost.hh\"\n",
        "\n",
        "class NeuralNetwork {\n",
        "private:\n",
        "\tstd::vector<NNLayer*> layers;\n",
        "\tBCECost bce_cost;\n",
        "\n",
        "\tMatrix Y;\n",
        "\tMatrix dY;\n",
        "\tfloat learning_rate;\n",
        "\n",
        "public:\n",
        "\tNeuralNetwork(float learning_rate = 0.01);\n",
        "\t~NeuralNetwork();\n",
        "\n",
        "\tMatrix forward(Matrix X);\n",
        "\tvoid backprop(Matrix predictions, Matrix target);\n",
        "\n",
        "\tvoid addLayer(NNLayer *layer);\n",
        "\tstd::vector<NNLayer*> getLayers() const;\n",
        "\n",
        "};"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Ql7Ft1zwEx3y",
        "outputId": "8a9a3f05-0b07-4f07-9d57-5b63e58c51b5"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing neural_network.hh\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile neural_network.cu\n",
        "#include \"neural_network.hh\"\n",
        "#include \"nn_exception.hh\"\n",
        "\n",
        "NeuralNetwork::NeuralNetwork(float learning_rate) :\n",
        "\tlearning_rate(learning_rate)\n",
        "{ }\n",
        "\n",
        "NeuralNetwork::~NeuralNetwork() {\n",
        "\tfor (auto layer : layers) {\n",
        "\t\tdelete layer;\n",
        "\t}\n",
        "}\n",
        "\n",
        "void NeuralNetwork::addLayer(NNLayer* layer) {\n",
        "\tthis->layers.push_back(layer);\n",
        "}\n",
        "\n",
        "Matrix NeuralNetwork::forward(Matrix X) {\n",
        "\tMatrix Z = X;\n",
        "\n",
        "\tfor (auto layer : layers) {\n",
        "\t\tZ = layer->forward(Z);\n",
        "\t}\n",
        "\n",
        "\tY = Z;\n",
        "\treturn Y;\n",
        "}\n",
        "\n",
        "void NeuralNetwork::backprop(Matrix predictions, Matrix target) {\n",
        "\tdY.allocateMemoryIfNotAllocated(predictions.shape);\n",
        "\tMatrix error = bce_cost.dCost(predictions, target, dY);\n",
        "\n",
        "\tfor (auto it = this->layers.rbegin(); it != this->layers.rend(); it++) {\n",
        "\t\terror = (*it)->backprop(error, learning_rate);\n",
        "\t}\n",
        "\n",
        "\tcudaDeviceSynchronize();\n",
        "}\n",
        "\n",
        "std::vector<NNLayer*> NeuralNetwork::getLayers() const {\n",
        "\treturn layers;\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "k2LQ_H53INXX",
        "outputId": "0f54a549-6711-409a-ae2b-fa1b2e022ad2"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing neural_network.cu\n"
          ]
        }
      ]
    },
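    {
      "cell_type": "markdown",
      "source": [
        "The control flow of `NeuralNetwork` is easy to lose in the C++ boilerplate, so here's a minimal Python analogue (my addition, using made-up stand-in layers): `forward` threads the activation through the layers first-to-last, and `backprop` threads the cost gradient back through them last-to-first."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Hypothetical stand-in layers (not the real C++ classes); each just\n",
        "# records its name, to show the order NeuralNetwork visits layers in.\n",
        "class ToyLayer:\n",
        "    def __init__(self, name):\n",
        "        self.name = name\n",
        "\n",
        "    def forward(self, Z):\n",
        "        return Z + [self.name]      # analogue of Z = layer->forward(Z)\n",
        "\n",
        "    def backprop(self, error, learning_rate):\n",
        "        return error + [self.name]  # analogue of error = (*it)->backprop(error, lr)\n",
        "\n",
        "layers = [ToyLayer('linear_1'), ToyLayer('relu_1'), ToyLayer('linear_2')]\n",
        "\n",
        "# forward() applies the layers in order\n",
        "Z = []\n",
        "for layer in layers:\n",
        "    Z = layer.forward(Z)\n",
        "print('forward order: ', Z)\n",
        "\n",
        "# backprop() walks them in reverse, like the rbegin()/rend() loop\n",
        "error = []\n",
        "for layer in reversed(layers):\n",
        "    error = layer.backprop(error, 0.01)\n",
        "print('backprop order:', error)"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },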
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "### Dataset"
      ],
      "metadata": {
        "id": "MwskwxZRI_cq"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile coordinates_dataset.hh\n",
        "#pragma once\n",
        "\n",
        "#include \"matrix.hh\"\n",
        "#include <vector>\n",
        "\n",
        "class CoordinatesDataset {\n",
        "private:\n",
        "\tsize_t batch_size;\n",
        "\tsize_t number_of_batches;\n",
        "\n",
        "\tstd::vector<Matrix> batches;\n",
        "\tstd::vector<Matrix> targets;\n",
        "\n",
        "public:\n",
        "\n",
        "\tCoordinatesDataset(size_t batch_size, size_t number_of_batches);\n",
        "\n",
        "\tint getNumOfBatches();\n",
        "\tstd::vector<Matrix>& getBatches();\n",
        "\tstd::vector<Matrix>& getTargets();\n",
        "\n",
        "};"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "DCE0I9sBIXbU",
        "outputId": "0c3bd579-50e9-4c1e-94ff-7dc28456964a"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing coordinates_dataset.hh\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile coordinates_dataset.cu\n",
        "#include \"coordinates_dataset.hh\"\n",
        "\n",
        "CoordinatesDataset::CoordinatesDataset(size_t batch_size, size_t number_of_batches) :\n",
        "\tbatch_size(batch_size), number_of_batches(number_of_batches)\n",
        "{\n",
        "\tfor (int i = 0; i < number_of_batches; i++) {\n",
        "\t\tbatches.push_back(Matrix(Shape(batch_size, 2)));\n",
        "\t\ttargets.push_back(Matrix(Shape(batch_size, 1)));\n",
        "\n",
        "\t\tbatches[i].allocateMemory();\n",
        "\t\ttargets[i].allocateMemory();\n",
        "\n",
        "\t\tfor (int k = 0; k < batch_size; k++) {\n",
        "\t\t\tbatches[i][k] = static_cast<float>(rand()) / RAND_MAX - 0.5;\n",
        "\t\t\tbatches[i][batches[i].shape.x + k] = static_cast<float>(rand()) / RAND_MAX - 0.5;\n",
        "\n",
        "\t\t\tif ( (batches[i][k] > 0 && batches[i][batches[i].shape.x + k] > 0) ||\n",
        "\t\t\t\t (batches[i][k] < 0 && batches[i][batches[i].shape.x + k] < 0) ) {\n",
        "\t\t\t\ttargets[i][k] = 1;\n",
        "\t\t\t}\n",
        "\t\t\telse {\n",
        "\t\t\t\ttargets[i][k] = 0;\n",
        "\t\t\t}\n",
        "\t\t}\n",
        "\n",
        "\t\tbatches[i].copyHostToDevice();\n",
        "\t\ttargets[i].copyHostToDevice();\n",
        "\t}\n",
        "}\n",
        "\n",
        "int CoordinatesDataset::getNumOfBatches() {\n",
        "\treturn number_of_batches;\n",
        "}\n",
        "\n",
        "std::vector<Matrix>& CoordinatesDataset::getBatches() {\n",
        "\treturn batches;\n",
        "}\n",
        "\n",
        "std::vector<Matrix>& CoordinatesDataset::getTargets() {\n",
        "\treturn targets;\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "deYdk_NpIxQu",
        "outputId": "bf624e46-5ca2-48b3-9eb4-da9c89b4b812"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing coordinates_dataset.cu\n"
          ]
        }
      ]
    },
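    {
      "cell_type": "markdown",
      "source": [
        "To make the labeling rule concrete, here's the same logic in a few lines of Python (my addition): a point gets target 1 when both coordinates have the same sign (first or third quadrant), and 0 otherwise, which is the classic XOR-style quadrant problem a linear model can't solve."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import random\n",
        "\n",
        "random.seed(0)\n",
        "\n",
        "# Same rule as CoordinatesDataset: 1 if x and y share a sign, else 0\n",
        "def label(x, y):\n",
        "    return 1 if (x > 0 and y > 0) or (x < 0 and y < 0) else 0\n",
        "\n",
        "# Coordinates drawn uniformly from [-0.5, 0.5), like rand()/RAND_MAX - 0.5\n",
        "points = [(random.random() - 0.5, random.random() - 0.5) for _ in range(5)]\n",
        "for x, y in points:\n",
        "    print(round(x, 2), round(y, 2), '->', label(x, y))"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },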
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "### Training the neural network on the dataset and evaluating results"
      ],
      "metadata": {
        "id": "EnwmokerJGD3"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile main.cu\n",
        "#include <iostream>\n",
        "#include <time.h>\n",
        "\n",
        "#include \"neural_network.hh\"\n",
        "#include \"linear_layer.hh\"\n",
        "#include \"relu_activation.hh\"\n",
        "#include \"sigmoid_activation.hh\"\n",
        "#include \"nn_exception.hh\"\n",
        "#include \"bce_cost.hh\"\n",
        "\n",
        "#include \"coordinates_dataset.hh\"\n",
        "\n",
        "float computeAccuracy(const Matrix& predictions, const Matrix& targets);\n",
        "\n",
        "int main() {\n",
        "\n",
        "\tsrand( time(NULL) );\n",
        "\n",
        "\tCoordinatesDataset dataset(100, 21);\n",
        "\tBCECost bce_cost;\n",
        "\n",
        "\tNeuralNetwork nn;\n",
        "\tnn.addLayer(new LinearLayer(\"linear_1\", Shape(2, 30)));\n",
        "\tnn.addLayer(new ReLUActivation(\"relu_1\"));\n",
        "\tnn.addLayer(new LinearLayer(\"linear_2\", Shape(30, 1)));\n",
        "\tnn.addLayer(new SigmoidActivation(\"sigmoid_output\"));\n",
        "\n",
        "\t// network training\n",
        "\tMatrix Y;\n",
        "\tfor (int epoch = 0; epoch < 1001; epoch++) {\n",
        "\t\tfloat cost = 0.0;\n",
        "\n",
        "\t\tfor (int batch = 0; batch < dataset.getNumOfBatches() - 1; batch++) {\n",
        "\t\t\tY = nn.forward(dataset.getBatches().at(batch));\n",
        "\t\t\tnn.backprop(Y, dataset.getTargets().at(batch));\n",
        "\t\t\tcost += bce_cost.cost(Y, dataset.getTargets().at(batch));\n",
        "\t\t}\n",
        "\n",
        "\t\tif (epoch % 100 == 0) {\n",
        "\t\t\tstd::cout \t<< \"Epoch: \" << epoch\n",
        "\t\t\t\t\t\t<< \", Cost: \" << cost / dataset.getNumOfBatches()\n",
        "\t\t\t\t\t\t<< std::endl;\n",
        "\t\t}\n",
        "\t}\n",
        "\n",
        "\t// compute accuracy\n",
        "\tY = nn.forward(dataset.getBatches().at(dataset.getNumOfBatches() - 1));\n",
        "\tY.copyDeviceToHost();\n",
        "\n",
        "\tfloat accuracy = computeAccuracy(\n",
        "\t\t\tY, dataset.getTargets().at(dataset.getNumOfBatches() - 1));\n",
        "\tstd::cout \t<< \"Accuracy: \" << accuracy << std::endl;\n",
        "\n",
        "\treturn 0;\n",
        "}\n",
        "\n",
        "float computeAccuracy(const Matrix& predictions, const Matrix& targets) {\n",
        "\tint m = predictions.shape.x;\n",
        "\tint correct_predictions = 0;\n",
        "\n",
        "\tfor (int i = 0; i < m; i++) {\n",
        "\t\tfloat prediction = predictions[i] > 0.5 ? 1 : 0;\n",
        "\t\tif (prediction == targets[i]) {\n",
        "\t\t\tcorrect_predictions++;\n",
        "\t\t}\n",
        "\t}\n",
        "\n",
        "\treturn static_cast<float>(correct_predictions) / m;\n",
        "}"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "dsg2MGjXI5jJ",
        "outputId": "05940568-bff7-4160-f38a-b48f5ef9f3e8"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Overwriting main.cu\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "!nvcc main.cu matrix.cu shape.cu bce_cost.cu sigmoid_activation.cu relu_activation.cu linear_layer.cu coordinates_dataset.cu neural_network.cu -o main.out\n",
        "!./main.out"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ttB_ICSSJNhm",
        "outputId": "9c566e87-c1cc-4700-af1f-9661b8cb90b1"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Epoch: 0, Cost: 0.660134\n",
            "Epoch: 100, Cost: 0.659899\n",
            "Epoch: 200, Cost: 0.659478\n",
            "Epoch: 300, Cost: 0.658153\n",
            "Epoch: 400, Cost: 0.654031\n",
            "Epoch: 500, Cost: 0.642247\n",
            "Epoch: 600, Cost: 0.61495\n",
            "Epoch: 700, Cost: 0.571769\n",
            "Epoch: 800, Cost: 0.520754\n",
            "Epoch: 900, Cost: 0.450404\n",
            "Epoch: 1000, Cost: 0.356447\n",
            "Accuracy: 0.92\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [],
      "metadata": {
        "id": "f-zE6XNLJV-n"
      },
      "execution_count": null,
      "outputs": []
    }
  ]
}