{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "847589f7-86dd-41cc-9b62-fef23916441a",
   "metadata": {},
   "source": [
    "\n",
    "<img src=\"\" alt=\"Logo\" style=\"width:270px;\">\n",
    "\n",
    "- **Newsletter:** [https://awesomeneuron.substack.com/](https://awesomeneuron.substack.com/)\n",
    "- **LinkedIn:** [https://www.linkedin.com/in/analyticalrohit](https://www.linkedin.com/in/analyticalrohit)\n",
    "- **Code:** [https://github.com/analyticalrohit/pytorch_fundamentals](https://github.com/analyticalrohit/pytorch_fundamentals)\n",
    "- **Author:** [Rohit Kumar Tiwari](https://www.linkedin.com/in/analyticalrohit)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "77697049-2739-4d39-bb7d-9eec971a54db",
   "metadata": {},
   "source": [
    "# PyTorch Fundamentals: Your First Steps into Hands-on Deep Learning\n",
    "\n",
    "This notebook provides an introduction to PyTorch, covering tensor initialization, operations, indexing, and reshaping. \n",
    "Follow along to learn the basics with clear examples and detailed explanations."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9d82f588-0510-4a2a-bed9-8107518bb90e",
   "metadata": {},
   "source": [
    "# Table of Contents\n",
    "\n",
    "- [What are Tensors?](#What-are-Tensors?)\n",
    "- [Tensor Initialization](#Tensor-Initialization)\n",
    "- [Other Common Tensor Initialization Methods](#Other-Common-Tensor-Initialization-Methods)\n",
    "- [Tensor Type Conversion](#Tensor-Type-Conversion)\n",
    "- [Converting Between NumPy Arrays and Tensors](#Converting-Between-NumPy-Arrays-and-Tensors)\n",
    "- [Tensor Mathematics and Comparison Operations](#Tensor-Mathematics-and-Comparison-Operations)\n",
    "- [Matrix Multiplication and Batch Operations](#Matrix-Multiplication-and-Batch-Operations)\n",
    "- [Broadcasting and Other Useful Operations](#Broadcasting-and-Other-Useful-Operations)\n",
    "- [Tensor Indexing](#Tensor-Indexing)\n",
    "- [Tensor Reshaping](#Tensor-Reshaping)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "c17a3a77-8493-4a2b-abb1-63dab66acee8",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch version: 2.4.0+cu118\n",
      "numpy version: 1.26.3\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "import numpy as np\n",
    "# Ignore warnings\n",
    "import warnings\n",
    "warnings.filterwarnings('ignore')\n",
    "\n",
    "# Print versions\n",
    "print(\"torch version:\", torch.__version__)\n",
    "print(\"numpy version:\", np.__version__)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "49861051-2f2c-4c83-a46b-18393a0623c1",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "## What are Tensors?\n",
    "\n",
    "A tensor holds a multi-dimensional array of elements of a single data type, much like NumPy's ndarray. Tensors are named by their number of dimensions:\n",
    "\n",
    "- 0-dimensional tensor: A single number (scalar).\n",
    "- 1-dimensional tensor: A list of numbers (vector).\n",
    "- 2-dimensional tensor: A table of numbers (matrix).\n",
    "\n",
    "A tensor with more than 2 dimensions is usually just called a tensor.\n",
    "\n",
    "<img src=\"\" alt=\"Logo\" style=\"width:800px;\">"
   ]
  },
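  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f2a9c1e-5b7d-4c8e-9a1f-2b3c4d5e6f70",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal added sketch (not part of the original notebook): ndim reports\n",
    "# the number of dimensions, matching the scalar/vector/matrix naming above.\n",
    "scalar = torch.tensor(3.14)\n",
    "vector = torch.tensor([1.0, 2.0, 3.0])\n",
    "matrix = torch.tensor([[1.0, 2.0], [3.0, 4.0]])\n",
    "print(\"Scalar ndim:\", scalar.ndim)  # 0\n",
    "print(\"Vector ndim:\", vector.ndim)  # 1\n",
    "print(\"Matrix ndim:\", matrix.ndim)  # 2"
   ]
  },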
  {
   "cell_type": "markdown",
   "id": "40a7efa1-16ac-4a2a-bdb8-5a163aae7b7f",
   "metadata": {},
   "source": [
    "## Tensor Initialization\n",
    "\n",
    "This code creates a 2×3 PyTorch tensor with the float32 data type, places it on the available device (GPU if CUDA is available, otherwise CPU), and enables gradient tracking for backpropagation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "2a563fb4-db87-4224-afc6-fdf4014952c8",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[1., 2., 3.],\n",
      "        [4., 5., 6.]], device='cuda:0', requires_grad=True)\n",
      "Data type: torch.float32\n",
      "Device: cuda:0\n",
      "Shape: torch.Size([2, 3])\n",
      "Requires Gradient: True\n"
     ]
    }
   ],
   "source": [
    "# Check for CUDA availability and set the device\n",
    "device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
    "\n",
    "# Initialize a 2x3 tensor with requires_grad enabled\n",
    "my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32, device=device, requires_grad=True)\n",
    "\n",
    "print(my_tensor)\n",
    "print(\"Data type:\", my_tensor.dtype)\n",
    "print(\"Device:\", my_tensor.device)\n",
    "print(\"Shape:\", my_tensor.shape)\n",
    "print(\"Requires Gradient:\", my_tensor.requires_grad)"
   ]
  },
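  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7d4e1a2b-9c3f-4e5a-8b6c-0d1e2f3a4b5c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# An added follow-up sketch (not from the original notebook): with\n",
    "# requires_grad=True, PyTorch records operations and backward() fills .grad.\n",
    "w = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)\n",
    "loss = (w ** 2).sum()  # a toy scalar \"loss\"\n",
    "loss.backward()        # d(loss)/dw = 2 * w\n",
    "print(\"Gradient:\", w.grad)  # tensor([2., 4., 6.])"
   ]
  },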
  {
   "cell_type": "markdown",
   "id": "f87b7dbb-9c22-437a-a923-6c5822f30a19",
   "metadata": {},
   "source": [
    "## Other Common Tensor Initialization Methods\n",
    "\n",
    "- **Empty Tensor:** Creates an uninitialized 3×3 tensor (whatever values were already in memory, not truly random).\n",
    "- **Zeros Tensor:** Creates a 3×3 tensor filled with zeros.\n",
    "- **Random Tensor:** Generates a 3×3 tensor with random values between 0 and 1.\n",
    "- **Ones Tensor:** Creates a 3×3 tensor filled with ones.\n",
    "- **Identity Matrix:** Generates a 4×4 identity matrix (diagonal of ones).\n",
    "- **Arange Tensor:** Creates a 1D tensor with values from 0 to 4 (step of 1).\n",
    "- **Linspace Tensor:** Generates 5 evenly spaced values between 0.1 and 1.\n",
    "- **Normal Distributed Tensor:** Fills a tensor with values from a normal (Gaussian) distribution with mean 0 and std 1.\n",
    "- **Uniform Distributed Tensor:** Fills a tensor with values from a uniform distribution between 0 and 1.\n",
    "- **Diagonal Tensor:** Creates a 4×4 diagonal tensor with ones along the diagonal and zeros elsewhere."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "4f91921f-b75d-4500-aa2f-7832582019e6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Empty Tensor:\n",
      " tensor([[0.2450, 0.9899, 0.0546],\n",
      "        [0.4938, 0.7471, 0.5465],\n",
      "        [0.0106, 0.3488, 0.2002]])\n",
      "Zeros Tensor:\n",
      " tensor([[0., 0., 0.],\n",
      "        [0., 0., 0.],\n",
      "        [0., 0., 0.]])\n",
      "Random Tensor:\n",
      " tensor([[0.1397, 0.6464, 0.0529],\n",
      "        [0.5808, 0.0736, 0.0454],\n",
      "        [0.7504, 0.4951, 0.9513]])\n",
      "Ones Tensor:\n",
      " tensor([[1., 1., 1.],\n",
      "        [1., 1., 1.],\n",
      "        [1., 1., 1.]])\n",
      "Identity Matrix:\n",
      " tensor([[1., 0., 0., 0.],\n",
      "        [0., 1., 0., 0.],\n",
      "        [0., 0., 1., 0.],\n",
      "        [0., 0., 0., 1.]])\n",
      "Arange Tensor:\n",
      " tensor([0, 1, 2, 3, 4])\n",
      "Linspace Tensor:\n",
      " tensor([0.1000, 0.3250, 0.5500, 0.7750, 1.0000])\n",
      "Normal Distributed Tensor:\n",
      " tensor([[-1.2343, -0.4473,  0.6663,  0.0808,  0.3924]])\n",
      "Uniform Distributed Tensor:\n",
      " tensor([[0.4357, 0.9985, 0.8195, 0.4854, 0.3971]])\n",
      "Diagonal Tensor:\n",
      " tensor([[1., 0., 0., 0.],\n",
      "        [0., 1., 0., 0.],\n",
      "        [0., 0., 1., 0.],\n",
      "        [0., 0., 0., 1.]])\n"
     ]
    }
   ],
   "source": [
    "# Create an empty tensor of size 3x3\n",
    "x = torch.empty(3, 3)\n",
    "print(\"Empty Tensor:\\n\", x)\n",
    "\n",
    "# Create a tensor filled with zeros\n",
    "x = torch.zeros(3, 3)\n",
    "print(\"Zeros Tensor:\\n\", x)\n",
    "\n",
    "# Create a tensor with random values\n",
    "x = torch.rand(3, 3)\n",
    "print(\"Random Tensor:\\n\", x)\n",
    "\n",
    "# Create a tensor filled with ones\n",
    "x = torch.ones(3, 3)\n",
    "print(\"Ones Tensor:\\n\", x)\n",
    "\n",
    "# Create an identity matrix\n",
    "x = torch.eye(4, 4)\n",
    "print(\"Identity Matrix:\\n\", x)\n",
    "\n",
    "# Create a tensor using arange\n",
    "x = torch.arange(5)\n",
    "print(\"Arange Tensor:\\n\", x)\n",
    "\n",
    "# Create a tensor using linspace\n",
    "x = torch.linspace(0.1, 1, 5)\n",
    "print(\"Linspace Tensor:\\n\", x)\n",
    "\n",
    "# Create a tensor with values drawn from a normal distribution\n",
    "x = torch.empty(1, 5).normal_(mean=0, std=1)\n",
    "print(\"Normal Distributed Tensor:\\n\", x)\n",
    "\n",
    "# Create a tensor with values drawn from a uniform distribution\n",
    "x = torch.empty(1, 5).uniform_(0, 1)\n",
    "print(\"Uniform Distributed Tensor:\\n\", x)\n",
    "\n",
    "# Create a diagonal tensor from a tensor of ones\n",
    "x = torch.diag(torch.ones(4))\n",
    "print(\"Diagonal Tensor:\\n\", x)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "af3cf0aa-0908-41d1-bfdf-145edf4a17a2",
   "metadata": {},
   "source": [
    "## Tensor Type Conversion\n",
    "\n",
    "Creates a tensor with values [0, 1, 2, 3] and demonstrates type conversion to boolean, int16, int64, float16, float32, and float64."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "0764b1e0-efa1-4b67-836c-bb67587693fb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Boolean Tensor: tensor([False,  True,  True,  True])\n",
      "Short Tensor (int16): tensor([0, 1, 2, 3], dtype=torch.int16)\n",
      "Long Tensor (int64): tensor([0, 1, 2, 3])\n",
      "Half Tensor (float16): tensor([0., 1., 2., 3.], dtype=torch.float16)\n",
      "Float Tensor (float32): tensor([0., 1., 2., 3.])\n",
      "Double Tensor (float64): tensor([0., 1., 2., 3.], dtype=torch.float64)\n"
     ]
    }
   ],
   "source": [
    "# Create a tensor and convert its type\n",
    "tensor = torch.arange(4)\n",
    "print(\"Boolean Tensor:\", tensor.bool())   # Convert to boolean\n",
    "print(\"Short Tensor (int16):\", tensor.short())   # Convert to int16\n",
    "print(\"Long Tensor (int64):\", tensor.long())   # Convert to int64\n",
    "print(\"Half Tensor (float16):\", tensor.half())   # Convert to float16\n",
    "print(\"Float Tensor (float32):\", tensor.float())   # Convert to float32\n",
    "print(\"Double Tensor (float64):\", tensor.double())   # Convert to float64"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ac511d3d-e223-405d-9798-0822fd614c89",
   "metadata": {},
   "source": [
    "## Converting Between NumPy Arrays and Tensors\n",
    "\n",
    "PyTorch makes it easy to switch between NumPy arrays and tensors, allowing seamless integration with existing computing workflows. Note that `torch.from_numpy` and `Tensor.numpy()` share the underlying memory for CPU tensors, so modifying one modifies the other."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "ed821954-082a-4b44-8bd7-156f14c0fe9e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "NumPy Array:\n",
      " [[0. 0. 0. 0. 0.]\n",
      " [0. 0. 0. 0. 0.]\n",
      " [0. 0. 0. 0. 0.]\n",
      " [0. 0. 0. 0. 0.]\n",
      " [0. 0. 0. 0. 0.]]\n",
      "Tensor from NumPy Array:\n",
      " tensor([[0., 0., 0., 0., 0.],\n",
      "        [0., 0., 0., 0., 0.],\n",
      "        [0., 0., 0., 0., 0.],\n",
      "        [0., 0., 0., 0., 0.],\n",
      "        [0., 0., 0., 0., 0.]], dtype=torch.float64)\n",
      "Converted Back to NumPy Array:\n",
      " [[0. 0. 0. 0. 0.]\n",
      " [0. 0. 0. 0. 0.]\n",
      " [0. 0. 0. 0. 0.]\n",
      " [0. 0. 0. 0. 0.]\n",
      " [0. 0. 0. 0. 0.]]\n"
     ]
    }
   ],
   "source": [
    "# Create a NumPy array of zeros\n",
    "np_array = np.zeros((5, 5))\n",
    "print(\"NumPy Array:\\n\", np_array)\n",
    "\n",
    "# Convert NumPy array to PyTorch tensor\n",
    "tensor = torch.from_numpy(np_array)\n",
    "print(\"Tensor from NumPy Array:\\n\", tensor)\n",
    "\n",
    "# Convert tensor back to NumPy array\n",
    "numpy_back = tensor.numpy()\n",
    "print(\"Converted Back to NumPy Array:\\n\", numpy_back)"
   ]
  },
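  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5a6b7c8d-1e2f-4a3b-9c0d-e1f2a3b4c5d6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# An added caveat demo (not from the original notebook): on CPU,\n",
    "# torch.from_numpy shares memory with the source array, so changing the\n",
    "# array changes the tensor. torch.tensor(np_array) makes an independent copy.\n",
    "arr = np.zeros(3)\n",
    "shared = torch.from_numpy(arr)\n",
    "copied = torch.tensor(arr)  # copies the data\n",
    "arr[0] = 42\n",
    "print(\"Shared tensor:\", shared)  # reflects the change\n",
    "print(\"Copied tensor:\", copied)  # unchanged"
   ]
  },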
  {
   "cell_type": "markdown",
   "id": "b9fe3acd-c40b-40d4-8549-df2fe4393cf6",
   "metadata": {},
   "source": [
    "## Tensor Mathematics and Comparison Operations\n",
    "\n",
    "This section explores essential math operations with PyTorch tensors. \n",
    " \n",
    "- **Addition & Subtraction:** Adds and subtracts two tensors element-wise.  \n",
    "- **Division:** Uses true division for precise results.  \n",
    "- **In-place Operations:** Modifies a tensor directly (methods ending in an underscore, e.g. `add_`) without creating a new one.  \n",
    "- **Exponentiation:** Raises each element to a power using `pow()` or `**`.  \n",
    "- **Comparisons:** Checks conditions like `x > 0` or `x < 0`, returning boolean results.  \n",
    "- **Dot Product:** Computes the sum of element-wise multiplications between two tensors. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "025b31b1-a167-4bd8-879b-fdf4b6b2155a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Addition Results: tensor([10, 10, 10])\n",
      "Addition Results: tensor([10, 10, 10]) tensor([10., 10., 10.]) tensor([10, 10, 10])\n",
      "Subtraction Result: tensor([-8, -6, -4])\n",
      "Division Result: tensor([0.1111, 0.2500, 0.4286])\n",
      "Before inplace addition: tensor([1., 1., 1.])\n",
      "After inplace addition: tensor([2., 3., 4.])\n",
      "After second inplace addition: tensor([3., 5., 7.])\n",
      "Exponentiation (pow): tensor([1, 4, 9])\n",
      "Exponentiation (**): tensor([1, 4, 9])\n",
      "x > 0: tensor([True, True, True])\n",
      "x < 0: tensor([False, False, False])\n",
      "Dot Product: tensor(46)\n"
     ]
    }
   ],
   "source": [
    "# Define two tensors for operations\n",
    "x = torch.tensor([1, 2, 3])\n",
    "y = torch.tensor([9, 8, 7])\n",
    "\n",
    "# Addition\n",
    "z = x + y\n",
    "print(\"Addition Results:\", z)\n",
    "\n",
    "# Addition using .add\n",
    "z1 = torch.empty(3)  # float32 by default, so the result is cast to float\n",
    "torch.add(x, y, out=z1)\n",
    "z2 = torch.add(x, y)\n",
    "print(\"Addition Results:\", z, z1, z2)\n",
    "\n",
    "# Subtraction\n",
    "z = x - y\n",
    "print(\"Subtraction Result:\", z)\n",
    "\n",
    "# Division (true division)\n",
    "z = torch.true_divide(x, y)\n",
    "print(\"Division Result:\", z)\n",
    "\n",
    "# Inplace operations\n",
    "t = torch.ones(3)\n",
    "print(\"Before inplace addition:\", t)\n",
    "t.add_(x)\n",
    "print(\"After inplace addition:\", t)\n",
    "t += x  # Another inplace addition (note: t = t + x creates a new tensor)\n",
    "print(\"After second inplace addition:\", t)\n",
    "\n",
    "# Exponentiation\n",
    "z = x.pow(2)\n",
    "print(\"Exponentiation (pow):\", z)\n",
    "z = x**2\n",
    "print(\"Exponentiation (**):\", z)\n",
    "\n",
    "# Comparisons\n",
    "z = x > 0\n",
    "print(\"x > 0:\", z)\n",
    "z = x < 0\n",
    "print(\"x < 0:\", z)\n",
    "\n",
    "# Dot product\n",
    "z = torch.dot(x, y)\n",
    "print(\"Dot Product:\", z)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "be31c9f3-d5b0-4be7-8f27-1be06af6fbbb",
   "metadata": {},
   "source": [
    "## Matrix Multiplication and Batch Operations\n",
    "\n",
    "Matrix operations are at the heart of deep learning. Let's look at the different ways to perform multiplication.\n",
    "\n",
    "- **Matrix Multiplication:** Uses `@` or `torch.mm()` to perform standard matrix multiplication.  \n",
    "- **Matrix Exponentiation:** Raises a square matrix to a power using `matrix_power(n)`.  \n",
    "- **Element-wise Multiplication:** Uses `torch.mul()` or `*` for element-wise multiplication.  \n",
    "- **Batch Matrix Multiplication:** Uses `torch.bmm()` to multiply batches of matrices efficiently."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "ec40c7dc-f19f-4ef2-8a83-c022281221ac",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Matrix Multiplication (@ operator):\n",
      " tensor([[46]])\n",
      "Matrix Multiplication (torch.mm):\n",
      " tensor([[46]])\n",
      "Matrix Multiplication (mm):\n",
      " tensor([[46]])\n",
      "Matrix multiplied 3 times:\n",
      " tensor([[6.8649, 7.1375, 6.5783, 7.6163, 6.8593],\n",
      "        [6.4644, 6.5831, 6.2444, 7.0148, 6.4003],\n",
      "        [5.8586, 6.0872, 5.5466, 6.2068, 5.5633],\n",
      "        [6.6579, 6.8326, 6.3734, 7.4843, 6.8450],\n",
      "        [5.6023, 5.7104, 5.4097, 5.7048, 5.1312]])\n",
      "Matrix power 3:\n",
      " tensor([[6.8649, 7.1375, 6.5783, 7.6163, 6.8593],\n",
      "        [6.4644, 6.5831, 6.2444, 7.0148, 6.4003],\n",
      "        [5.8586, 6.0872, 5.5466, 6.2068, 5.5633],\n",
      "        [6.6579, 6.8326, 6.3734, 7.4843, 6.8450],\n",
      "        [5.6023, 5.7104, 5.4097, 5.7048, 5.1312]])\n",
      "Element-wise Multiplication: tensor([ 9, 16, 21])\n",
      "Element-wise Multiplication (alternative): tensor([ 9, 16, 21])\n",
      "Batch Matrix Multiplication (first batch):\n",
      " tensor([[3.5440, 6.5026, 6.5445, 4.5252, 5.2045, 5.7321, 6.1181, 6.6445, 6.0093,\n",
      "         4.9353, 3.9677, 4.6019, 6.3319, 5.7043, 5.1247, 4.3091, 4.7326, 4.8858,\n",
      "         5.1322, 5.6088, 5.9398, 6.9429, 5.9886, 5.2573, 2.7003, 4.8218, 5.9894,\n",
      "         4.6512, 5.1542, 3.7731],\n",
      "        [4.2282, 5.0751, 4.5581, 3.9363, 5.2671, 5.2506, 4.4924, 5.5095, 5.1781,\n",
      "         3.7145, 4.7497, 3.8810, 4.2245, 4.6506, 4.5174, 2.8565, 3.9761, 3.8779,\n",
      "         4.3638, 4.0318, 5.5015, 3.9265, 5.3370, 4.7982, 2.2797, 4.3291, 3.7386,\n",
      "         4.0959, 3.6940, 3.2689],\n",
      "        [4.5827, 6.1946, 4.9569, 4.6735, 5.6490, 5.8873, 5.7623, 6.4767, 6.3909,\n",
      "         4.4470, 5.0609, 4.6619, 5.1481, 5.3259, 5.4949, 3.7388, 5.1354, 4.2429,\n",
      "         4.5741, 4.5234, 5.5854, 4.6225, 5.9995, 5.5425, 2.8380, 4.2748, 4.3472,\n",
      "         4.4736, 4.1825, 3.4843],\n",
      "        [3.3100, 5.1634, 5.0415, 3.7873, 4.4239, 5.4084, 5.0781, 5.1528, 5.1801,\n",
      "         3.5463, 3.6643, 3.9544, 4.7713, 4.8103, 3.8891, 2.8634, 4.0375, 3.4439,\n",
      "         4.1778, 4.0538, 4.9986, 4.8988, 5.2553, 4.8321, 1.6977, 4.3844, 4.1179,\n",
      "         4.2077, 4.1596, 3.0347],\n",
      "        [4.1352, 5.5097, 5.4081, 4.1339, 5.2045, 5.8074, 5.9689, 5.7145, 5.1557,\n",
      "         4.5157, 4.7473, 5.3542, 5.0027, 4.4979, 5.2767, 4.4354, 3.5934, 4.7134,\n",
      "         5.1901, 5.5433, 6.2373, 4.5942, 5.5256, 5.3336, 3.2117, 4.1416, 4.2712,\n",
      "         4.1048, 4.0465, 3.6511],\n",
      "        [3.7322, 5.3081, 5.4181, 4.6393, 5.1372, 5.5894, 5.5543, 6.0296, 4.5670,\n",
      "         4.0552, 4.2119, 4.5391, 5.2059, 4.8795, 5.0137, 3.7978, 3.7304, 4.0530,\n",
      "         4.3719, 5.0318, 5.7526, 4.8653, 4.8926, 5.5795, 2.6404, 4.2889, 4.5551,\n",
      "         4.2022, 3.9910, 3.8555],\n",
      "        [3.2315, 4.6632, 4.9304, 3.5092, 4.4533, 4.8799, 4.6290, 5.3459, 4.9640,\n",
      "         3.7996, 4.0911, 4.0296, 4.6814, 4.0135, 4.5876, 3.6069, 3.8043, 4.2701,\n",
      "         4.1166, 4.8829, 5.5559, 4.8536, 4.5921, 3.6704, 2.4689, 3.4953, 3.6832,\n",
      "         3.9577, 3.5755, 3.0293],\n",
      "        [4.7066, 5.7029, 6.1439, 4.4224, 5.7423, 6.4657, 5.8944, 5.8826, 6.2143,\n",
      "         4.1787, 4.8364, 4.9471, 5.2735, 4.8664, 5.5245, 3.1057, 4.8749, 4.4022,\n",
      "         5.6390, 5.2407, 6.2896, 5.6143, 5.5157, 5.6265, 2.1874, 4.4567, 3.9869,\n",
      "         5.5333, 4.4775, 3.9314],\n",
      "        [4.3227, 5.8580, 5.9357, 4.1483, 5.2436, 6.3560, 5.7144, 5.9563, 5.8249,\n",
      "         4.5020, 4.4281, 4.4165, 5.4092, 4.8056, 4.9667, 4.0844, 4.2222, 3.8133,\n",
      "         4.5904, 5.0534, 6.7042, 5.0727, 5.5185, 5.6069, 3.0032, 4.5262, 4.0067,\n",
      "         4.7430, 4.2932, 3.7362],\n",
      "        [3.8294, 5.4057, 5.0970, 3.9448, 5.0699, 5.1357, 4.9649, 5.2615, 4.4683,\n",
      "         3.5833, 4.0856, 3.3753, 4.0069, 4.1184, 4.2750, 3.4178, 3.7880, 3.3670,\n",
      "         4.7347, 4.6245, 5.3556, 3.9735, 5.1346, 5.1036, 2.6921, 4.2103, 3.9386,\n",
      "         3.6596, 4.1051, 2.2051]])\n",
      "Shape of batched multiplication result: torch.Size([32, 10, 30])\n"
     ]
    }
   ],
   "source": [
    "# Matrix multiplication using @ operator and torch.mm\n",
    "x2 = torch.tensor([[1, 2, 3]])\n",
    "y2 = torch.tensor([[9, 8, 7]])\n",
    "\n",
    "z = x2 @ torch.t(y2)\n",
    "print(\"Matrix Multiplication (@ operator):\\n\", z)\n",
    "z = torch.mm(x2, torch.t(y2))\n",
    "print(\"Matrix Multiplication (torch.mm):\\n\", z)\n",
    "z = x2.mm(torch.t(y2))\n",
    "print(\"Matrix Multiplication (mm):\\n\", z)\n",
    "\n",
    "# Matrix exponentiation: multiplying a matrix with itself 3 times\n",
    "matrix_exp = torch.rand(5, 5)\n",
    "print(\"Matrix multiplied 3 times:\\n\", matrix_exp @ matrix_exp @ matrix_exp)\n",
    "print(\"Matrix power 3:\\n\", matrix_exp.matrix_power(3))\n",
    "\n",
    "# Element-wise multiplication (x and y are the 1-D tensors from the earlier operations cell)\n",
    "z = torch.mul(x, y)\n",
    "print(\"Element-wise Multiplication:\", z)\n",
    "z = x * y\n",
    "print(\"Element-wise Multiplication (alternative):\", z)\n",
    "\n",
    "# Batch matrix multiplication\n",
    "batch = 32\n",
    "n, m, p = 10, 20, 30\n",
    "tensor1 = torch.rand((batch, n, m))\n",
    "tensor2 = torch.rand((batch, m, p))\n",
    "out_bmm = torch.bmm(tensor1, tensor2)  # Result shape: (batch, n, p)\n",
    "print(\"Batch Matrix Multiplication (first batch):\\n\", out_bmm[0])\n",
    "print(\"Shape of batched multiplication result:\", (tensor1 @ tensor2).shape)  # @ also works batch-wise, like bmm"
   ]
  },
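  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9e8d7c6b-2a1f-4b3c-8d5e-f6a7b8c9d0e1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# An added aside (not from the original notebook): torch.matmul generalizes\n",
    "# both mm and bmm, broadcasting batch dimensions automatically.\n",
    "a = torch.rand(32, 10, 20)\n",
    "b = torch.rand(20, 30)  # no batch dimension; matmul broadcasts it\n",
    "print(\"matmul result shape:\", torch.matmul(a, b).shape)  # torch.Size([32, 10, 30])\n"
   ]
  },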
  {
   "cell_type": "markdown",
   "id": "8c410490-69cc-401a-9ea1-e0a8ddd29c25",
   "metadata": {},
   "source": [
    "## Broadcasting and Other Useful Operations\n",
    "\n",
    "Broadcasting allows arithmetic operations on tensors of different shapes. This section also demonstrates additional useful functions.\n",
    "\n",
    "- **Broadcasting:** Automatically expands smaller tensors to match larger ones in operations.  \n",
    "- **Summation:** `torch.sum(x, dim=0)` computes the sum along a given dimension.  \n",
    "- **Min/Max Values:** `torch.max()` and `torch.min()` return the highest and lowest values along a dimension.  \n",
    "- **Absolute Values:** `torch.abs(x)` gets the element-wise absolute values.  \n",
    "- **Argmax/Argmin:** `torch.argmax()` and `torch.argmin()` return the index of max/min values.  \n",
    "- **Mean Calculation:** `torch.mean(x.float(), dim=0)` computes the mean (ensuring float dtype).  \n",
    "- **Element-wise Comparison:** `torch.eq(x, y)` checks equality between two tensors.  \n",
    "- **Sorting:** `torch.sort(y, dim=0)` sorts tensor elements and returns indices.  \n",
    "- **Clamping:** `torch.clamp(x, min=0)` restricts values within a range.  \n",
    "- **Boolean Operations:** `torch.any(x_bool)` checks if any value is `True`, `torch.all(x_bool)` checks if all are `True`.  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "3aa2d579-cf5e-41de-8f4a-6fdd15184ac0",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Tensor x1:\n",
      " tensor([[0.5610, 0.6928, 0.8066, 0.2603, 0.1528],\n",
      "        [0.9867, 0.1102, 0.4665, 0.1929, 0.6669],\n",
      "        [0.4671, 0.0768, 0.6585, 0.5024, 0.8904],\n",
      "        [0.6634, 0.6646, 0.0860, 0.1698, 0.0833],\n",
      "        [0.5990, 0.0964, 0.4688, 0.3539, 0.0450]])\n",
      "Tensor x2:\n",
      " tensor([0.8125, 0.2102, 0.6983, 0.5526, 0.1509])\n",
      "x1 - x2:\n",
      " tensor([[-0.2516,  0.4826,  0.1083, -0.2923,  0.0019],\n",
      "        [ 0.1741, -0.1000, -0.2318, -0.3597,  0.5160],\n",
      "        [-0.3454, -0.1334, -0.0397, -0.0503,  0.7396],\n",
      "        [-0.1491,  0.4544, -0.6123, -0.3828, -0.0676],\n",
      "        [-0.2135, -0.1138, -0.2294, -0.1988, -0.1058]])\n",
      "x1 raised to the power of x2:\n",
      " tensor([[0.6252, 0.9258, 0.8606, 0.4753, 0.7532],\n",
      "        [0.9892, 0.6290, 0.5872, 0.4028, 0.9407],\n",
      "        [0.5388, 0.5831, 0.7470, 0.6836, 0.9826],\n",
      "        [0.7164, 0.9177, 0.1803, 0.3754, 0.6873],\n",
      "        [0.6594, 0.6116, 0.5892, 0.5632, 0.6264]])\n",
      "Sum along dimension 0: tensor(6)\n",
      "Max value and index: tensor(3) tensor(2)\n",
      "Min value and index: tensor(1) tensor(0)\n",
      "Absolute values: tensor([1, 2, 3])\n",
      "Argmax: tensor(2)\n",
      "Argmin: tensor(0)\n",
      "Mean (converted to float): tensor(2.)\n",
      "Element-wise equality (x == y): tensor([False, False, False])\n",
      "Sorted y and indices: tensor([7, 8, 9]) tensor([2, 1, 0])\n",
      "Clamped x: tensor([1, 2, 3])\n",
      "Any True: tensor(True)\n",
      "All True: tensor(False)\n"
     ]
    }
   ],
   "source": [
    "# Broadcasting example\n",
    "x1 = torch.rand(5, 5)\n",
    "x2 = torch.rand(5)\n",
    "print(\"Tensor x1:\\n\", x1)\n",
    "print(\"Tensor x2:\\n\", x2)\n",
    "print(\"x1 - x2:\\n\", x1 - x2)\n",
    "print(\"x1 raised to the power of x2:\\n\", x1 ** x2)\n",
    "\n",
    "# Sum of tensor elements along dimension 0 (x and y here are still the 1-D tensors from the math operations cell)\n",
    "sum_x = torch.sum(x, dim=0)\n",
    "print(\"Sum along dimension 0:\", sum_x)\n",
    "\n",
    "# Maximum and minimum values\n",
    "value, indices = torch.max(x, dim=0)\n",
    "print(\"Max value and index:\", value, indices)\n",
    "\n",
    "value, indices = torch.min(x, dim=0)\n",
    "print(\"Min value and index:\", value, indices)\n",
    "\n",
    "# Other operations\n",
    "print(\"Absolute values:\", torch.abs(x))\n",
    "print(\"Argmax:\", torch.argmax(x, dim=0))\n",
    "print(\"Argmin:\", torch.argmin(x, dim=0))\n",
    "print(\"Mean (converted to float):\", torch.mean(x.float(), dim=0))\n",
    "print(\"Element-wise equality (x == y):\", torch.eq(x, y))\n",
    "\n",
    "# Sorting\n",
    "sorted_y, indices = torch.sort(y, dim=0, descending=False)\n",
    "print(\"Sorted y and indices:\", sorted_y, indices)\n",
    "\n",
    "# Clamping values\n",
    "print(\"Clamped x:\", torch.clamp(x, min=0))\n",
    "\n",
    "# Boolean operations\n",
    "x_bool = torch.tensor([1, 0, 1, 1, 1], dtype=torch.bool)\n",
    "print(\"Any True:\", torch.any(x_bool))\n",
    "print(\"All True:\", torch.all(x_bool))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e62019b0-d5d1-41e5-ab0b-1c436486d6d5",
   "metadata": {},
   "source": [
    "## Tensor Indexing\n",
    "\n",
    "Access and modify tensor elements using indexing, slicing, and advanced indexing.\n",
    "\n",
    "- **Accessing Rows & Columns:** Use `x[row, :]` for a row and `x[:, col]` for a column.  \n",
    "- **Slicing:** `x[row, start:end]` extracts a portion of a row.  \n",
    "- **Modifying Elements:** Directly assign values using `x[row, col] = value`.  \n",
    "- **Fancy Indexing:** Use a list of indices to select multiple elements at once.  \n",
    "- **Conditional Indexing:** Extract elements using conditions like `(x < 2) | (x > 8)`.  \n",
    "- **Finding Even Numbers:** Use `x.remainder(2) == 0` to filter even values.  \n",
    "- **Conditional Selection with `torch.where()`:** Chooses values based on a condition.  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "4bac190d-6c58-4567-ab9b-3753b98b4ec0",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "First row of tensor: tensor([0.1933, 0.0269, 0.3945, 0.6182, 0.3705, 0.7060, 0.4922, 0.0280, 0.3398,\n",
      "        0.9600, 0.2417, 0.8861, 0.1833, 0.0985, 0.2710, 0.6410, 0.3799, 0.5981,\n",
      "        0.0205, 0.9136, 0.9481, 0.6899, 0.9450, 0.6970, 0.1787])\n",
      "Second column of tensor: tensor([2.6887e-02, 8.1319e-02, 9.8993e-01, 4.5033e-01, 5.7220e-04, 9.5527e-01,\n",
      "        1.1555e-01, 9.4050e-02, 5.3863e-02, 5.0582e-01])\n",
      "First 10 elements of third row: tensor([0.2450, 0.9899, 0.0546, 0.4938, 0.7471, 0.5465, 0.0106, 0.3488, 0.2002,\n",
      "        0.4488])\n",
      "Fancy indexing result: tensor([2, 5, 8])\n",
      "Elements where x2 < 2 or x2 > 8: tensor([0, 1, 9])\n",
      "Even numbers in x2: tensor([0, 2, 4, 6, 8])\n",
      "Using torch.where: tensor([ 0,  2,  4,  6,  8, 10,  6,  7,  8,  9])\n"
     ]
    }
   ],
   "source": [
    "# Create a random tensor with shape (batch_size, features)\n",
    "batch_size = 10\n",
    "features = 25\n",
    "x = torch.rand((batch_size, features))\n",
    "\n",
    "# Access the first row\n",
    "print(\"First row of tensor:\", x[0, :])\n",
    "\n",
    "# Access the second column\n",
    "print(\"Second column of tensor:\", x[:, 1])\n",
    "\n",
    "# Access the first 10 elements of the third row\n",
    "print(\"First 10 elements of third row:\", x[2, 0:10])\n",
    "\n",
    "# Modify a specific element (set first element to 100)\n",
    "x[0, 0] = 100\n",
    "\n",
    "# Fancy indexing example\n",
    "x1 = torch.arange(10)\n",
    "indices = [2, 5, 8]\n",
    "print(\"Fancy indexing result:\", x1[indices])\n",
    "\n",
    "# Advanced indexing: select elements based on a condition\n",
    "x2 = torch.arange(10)\n",
    "print(\"Elements where x2 < 2 or x2 > 8:\", x2[(x2 < 2) | (x2 > 8)])\n",
    "print(\"Even numbers in x2:\", x2[x2.remainder(2) == 0])\n",
    "\n",
    "# Using torch.where to select values based on a condition\n",
    "print(\"Using torch.where:\", torch.where(x2 > 5, x2, x2 * 2))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "45d8957f-3906-4203-b6b5-fadc50722c02",
   "metadata": {},
   "source": [
    "## Tensor Reshaping\n",
    "\n",
    "Learn how to reshape tensors, concatenate them, and change the order of dimensions.\n",
    "\n",
    "- **Reshape with `view()` & `reshape()`:** Change tensor shape without altering data.  \n",
    "- **Transpose & Flatten:** `.t()` transposes, `.contiguous().view(-1)` flattens.  \n",
    "- **Concatenation:** `torch.cat([x1, x2], dim=0/1)` merges tensors along rows/columns.  \n",
    "- **Flattening:** `.view(-1)` converts a tensor into a 1D array.  \n",
    "- **Batch Reshaping:** `.view(batch, -1)` keeps batch size while reshaping.  \n",
    "- **Permute Dimensions:** `.permute(0, 2, 1)` reorders dimensions efficiently.  \n",
    "- **Unsqueeze for New Dimensions:** `.unsqueeze(dim)` adds singleton dimensions.  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "09b8a7a2-f1a2-4730-b000-39f9a61e3ddd",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Reshaped to 3x3 using view:\n",
      " tensor([[0, 1, 2],\n",
      "        [3, 4, 5],\n",
      "        [6, 7, 8]])\n",
      "Reshaped to 3x3 using reshape:\n",
      " tensor([[0, 1, 2],\n",
      "        [3, 4, 5],\n",
      "        [6, 7, 8]])\n",
      "Flattened transposed tensor: tensor([0, 3, 6, 1, 4, 7, 2, 5, 8])\n",
      "Concatenated along dimension 0 (rows): torch.Size([4, 5])\n",
      "Concatenated along dimension 1 (columns): torch.Size([2, 10])\n",
      "Flattened tensor shape: torch.Size([10])\n",
      "Reshaped to (batch, -1): torch.Size([64, 10])\n",
      "Permuted tensor shape: torch.Size([64, 5, 2])\n",
      "Original x: tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n",
      "x unsqueezed at dim 0: torch.Size([1, 10]) tensor([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]])\n",
      "x unsqueezed at dim 1: torch.Size([10, 1]) tensor([[0],\n",
      "        [1],\n",
      "        [2],\n",
      "        [3],\n",
      "        [4],\n",
      "        [5],\n",
      "        [6],\n",
      "        [7],\n",
      "        [8],\n",
      "        [9]])\n"
     ]
    }
   ],
   "source": [
    "# Reshape a tensor using view and reshape\n",
    "x = torch.arange(9)\n",
    "x_3x3 = x.view(3, 3)\n",
    "print(\"Reshaped to 3x3 using view:\\n\", x_3x3)\n",
    "x_3x3 = x.reshape(3, 3)\n",
    "print(\"Reshaped to 3x3 using reshape:\\n\", x_3x3)\n",
    "\n",
    "# Transpose and flatten the tensor\n",
    "y = x_3x3.t()\n",
    "# t() returns a non-contiguous view, so call contiguous() before view()\n",
    "print(\"Flattened transposed tensor:\", y.contiguous().view(9))\n",
    "\n",
    "# Concatenation example\n",
    "x1 = torch.rand(2, 5)\n",
    "x2 = torch.rand(2, 5)\n",
    "print(\"Concatenated along dimension 0 (rows):\", torch.cat([x1, x2], dim=0).shape)\n",
    "print(\"Concatenated along dimension 1 (columns):\", torch.cat([x1, x2], dim=1).shape)\n",
    "\n",
    "# Flatten the tensor using view(-1)\n",
    "z = x1.view(-1)\n",
    "print(\"Flattened tensor shape:\", z.shape)\n",
    "\n",
    "# Reshape with batch dimension\n",
    "batch = 64\n",
    "x = torch.rand(batch, 2, 5)\n",
    "print(\"Reshaped to (batch, -1):\", x.view(batch, -1).shape)\n",
    "\n",
    "# Permute dimensions\n",
    "z = x.permute(0, 2, 1)\n",
    "print(\"Permuted tensor shape:\", z.shape)\n",
    "\n",
    "# Unsqueeze examples (adding new dimensions)\n",
    "x = torch.arange(10)\n",
    "print(\"Original x:\", x)\n",
    "print(\"x unsqueezed at dim 0:\", x.unsqueeze(0).shape, x.unsqueeze(0))\n",
    "print(\"x unsqueezed at dim 1:\", x.unsqueeze(1).shape, x.unsqueeze(1))\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
