{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Layers"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## What are layers?\n",
    "\n",
    "Layers are the basic building blocks of a neural network. You can think of a layer as a filter: each one transforms its input in a way that helps with your task. It's convenient to think of a big neural network as many layers working together toward a common goal. Don't overthink it. Almost anything can be a layer. A layer is just a fancy name for a function: it takes some input and produces some output."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## What are some common layers?\n",
    "\n",
    "- [Linear layer](./linear/linear) is everywhere.\n",
    "- [Convolution layer](./cnn/cnn) specializes in data with local patterns, like images and audio.\n",
    "- [Recurrent layer](./rnn/rnn) and [Transformer](./transformer/transformer) are good at processing sequences and text.\n",
    "- [Padding layer](./padding/padding) and [Pooling layer](./pooling/pooling) are good at reshaping the input data.\n",
    "- [Embedding layer](./emb/emb) is good at converting tokens (like characters) into vectors that represent their meaning.\n",
    "- ..._and a lot more_"
   ]
  },
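  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make these concrete, here is a quick sketch of two of them (the shapes and hyperparameters below are just illustrative choices, not canonical ones):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "# A convolution layer: 3 input channels (RGB), 8 output channels, 3x3 kernel.\n",
    "conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)\n",
    "# An embedding layer: a vocabulary of 100 tokens, each mapped to a 16-dim vector.\n",
    "emb = nn.Embedding(num_embeddings=100, embedding_dim=16)\n",
    "\n",
    "image = torch.randn(1, 3, 28, 28)    # a batch of one 3-channel 28x28 image\n",
    "tokens = torch.tensor([[1, 5, 42]])  # a batch of one 3-token sequence\n",
    "\n",
    "print(conv(image).shape)   # torch.Size([1, 8, 26, 26])\n",
    "print(emb(tokens).shape)   # torch.Size([1, 3, 16])"
   ]
  },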
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Layers in code"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch.nn import Conv2d, Linear, Module, Sequential"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A layer in PyTorch is represented by the `Module` class. All layers, such as `Linear`, are subclasses of it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(issubclass(Linear, Module))\n",
    "print(issubclass(Conv2d, Module))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is how you define a custom `Module`. It's easy."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Identity(Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "\n",
    "    def forward(self, x):\n",
    "        return x"
   ]
  },
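  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's check that it really behaves as an identity:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "identity = Identity()\n",
    "x = torch.randn(2, 3)\n",
    "print(torch.equal(identity(x), x))  # True"
   ]
  },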
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We created an identity layer! It does nothing but return whatever is passed in. But see how easy that was: you now have a reusable module that can be dropped into a neural network, and it inherits `Module`'s machinery, such as hooks, parameter tracking, and pretty-printing, for free."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's create a sequential model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = Sequential(\n",
    "    Linear(3, 4),\n",
    "    Linear(4, 5),\n",
    "    Linear(5, 6),\n",
    ")\n",
    "\n",
    "x = torch.randn(3)\n",
    "print(x)\n",
    "print(x.shape)\n",
    "\n",
    "y = model(x)\n",
    "print(y)\n",
    "print(y.shape)"
   ]
  }
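  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`Sequential` simply calls its layers in order, so the model above is equivalent to chaining the three `Linear` layers by hand:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_manual = model[2](model[1](model[0](x)))\n",
    "print(torch.equal(y, y_manual))  # True"
   ]
  }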
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
