{
 "nbformat": 4,
 "nbformat_minor": 0,
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "name": "python3",
   "language": "python",
   "display_name": "Python 3 (ipykernel)"
  },
  "language_info": {
   "name": "python"
  },
  "accelerator": "GPU",
  "gpuClass": "standard"
 },
 "cells": [
  {
   "cell_type": "markdown",
   "source": [
    "#  Using SuperGradients (**recipes**)\n",
    "\n",
    "This tutorial will explain what **recipes** are, when and how can recipes help you scallable training and reproducing results, and how to use them.\n",
    "\n"
   ],
   "metadata": {
    "id": "5aISf1B-AGDQ"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "!pip install -q super-gradients==3.7.1"
   ],
   "metadata": {
    "id": "8uZM-4va5Rpu",
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 1000
    },
    "outputId": "a93c1b2d-3b81-4cd7-aeaa-f62f6d6bf81d",
    "ExecuteTime": {
     "end_time": "2024-03-07T13:05:37.419483600Z",
     "start_time": "2024-03-07T13:05:34.691990700Z"
    }
   },
   "execution_count": 13,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# What is a **recipe**"
   ],
   "metadata": {
    "id": "-1nPOPmc1lGp"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "To train a model, it is necessary to configure **4** main components:\n",
    "1. **dataset**: what dataset to use, input size, augmentations, etc.\n",
    "\n",
    "  ImageNet of size 224x224 with color jitter is one option. CIFAR10 is another. We possibly have our custom dataset and augmentations as-well...\n",
    "\n",
    "2. **architecture**: what model to train, how many blocks, dropout rate, etc.\n",
    "\n",
    "  Is it ResNet18? ResNet50? Maybe it's YOLO? or our SuperCustomModel with RepVGG backbone, a dropout probability of 0.2 and bottleneck ratio of 0.5?\n",
    "\n",
    "3. **training hyperparameters**: number of epochs, initial learning rate, learning rate scheduler, loss function, optimizer, etc.\n",
    "\n",
    "  Train for 300 epochs using SGD with a learning rate of 0.01, or maybe 400 epochs with Cosine scheduler and ADAM? Should we use EMA or not? What about weight decay? We can plug our custom loss function, metrics, optimizers and others as-well!\n",
    "\n",
    "4. **checkpoints**: location of pretrained weights, location of current training's checkpoints and artifacts, etc."
   ],
   "metadata": {
    "id": "IvWzUj7Q_KeW"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "All recipes can be found [here](https://github.com/Deci-AI/super-gradients/blob/master/docs/assets/SG_img/Training_Recipes.md)"
   ],
   "metadata": {
    "id": "k5aBkCQXbNiW"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "Recipes support out of the box every model, metric or loss that is implemented in SuperGradients, but you can easily extend this to any custom object that you need by \"registering it\". Check out [this tutorial](https://github.com/Deci-AI/super-gradients/tree/master/src/super_gradients/common/registry) for more information."
   ],
   "metadata": {
    "id": "vroKwZSl3zSR"
   }
  },
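   {
    "cell_type": "markdown",
    "source": [
     "The registry is conceptually a name-to-class mapping, so a string in a recipe `.yaml` can be resolved to a Python object. As a rough, hypothetical sketch of the pattern (not SuperGradients' actual implementation):\n",
     "\n",
     "```python\n",
     "# Minimal registry sketch: maps a string name (usable in a recipe YAML)\n",
     "# to the Python class that implements it.\n",
     "MODEL_REGISTRY = {}\n",
     "\n",
     "def register_model(name):\n",
     "    def decorator(cls):\n",
     "        MODEL_REGISTRY[name] = cls\n",
     "        return cls\n",
     "    return decorator\n",
     "\n",
     "@register_model('my_custom_model')\n",
     "class MyCustomModel:\n",
     "    def __init__(self, num_classes):\n",
     "        self.num_classes = num_classes\n",
     "\n",
     "# A recipe referencing 'my_custom_model' can now be instantiated by name:\n",
     "model = MODEL_REGISTRY['my_custom_model'](num_classes=10)\n",
     "```"
    ],
    "metadata": {}
   },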
  {
   "cell_type": "markdown",
   "source": [
    "To standardize these components, SG uses the following hierarchy-based format:\n",
    "\n",
    "```\n",
    ".\n",
    "├── src/super_gradients/recipes/\n",
    "│   ├── arch_params/\n",
    "│   │   ├── default_arch_params.yaml\n",
    "│   │   ├── resnet50_arch_params.yaml\n",
    "│   │   ├── yolo_arch_params.yaml\n",
    "│   │   └── ...         \n",
    "│   ├── dataset_params/\n",
    "│   │   ├── imagenet_dataset_params.yaml\n",
    "│   │   ├── coco_detection_dataset_params.yaml\n",
    "│   │   └── ...   \n",
    "│   ├── training_hyperparams/\n",
    "│   │   ├── imagenet_resnet50_train_params.yaml\n",
    "│   │   ├── coco2017_yolox_train_params.yaml\n",
    "│   │   └── ...  \n",
    "└── └── checkpoint_params/    \n",
    "        ├── default_checkpoint_params.yaml\n",
    "        └── ...\n",
    "```"
   ],
   "metadata": {
    "id": "dooBqeANyLtn"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "These components are aggregated into a single \"main\" recipe `.yaml` file that inherits the aforementioned dataset, architecture, training and checkpoint params. It is also possible (and recomended for flexibility) to override default settings with custom ones.\n",
    "\n",
    "Examples of \"main\" recipes are found at `src/super_gradients/recipes/`. We'll have a glance on `coco2017_yolox.yaml`:\n",
    "\n",
    "```\n",
    "defaults:\n",
    "  - training_hyperparams: coco2017_yolox_train_params\n",
    "  - dataset_params: coco_detection_dataset_params\n",
    "  - arch_params: yolox_s_arch_params\n",
    "  - checkpoint_params: default_checkpoint_params\n",
    "\n",
    "train_dataloader: coco2017_train\n",
    "val_dataloader: coco2017_val\n",
    "\n",
    "model_checkpoints_location: local\n",
    "\n",
    "load_checkpoint: False\n",
    "training_hyperparams:\n",
    "  initial_lr: 0.001\n",
    "\n",
    "architecture: yolox_s\n",
    "\n",
    "multi_gpu: DDP\n",
    "num_gpus: 8\n",
    "\n",
    "experiment_suffix: res${dataset_params.train_dataset_params.input_dim}\n",
    "experiment_name: ${architecture}_coco2017_${experiment_suffix}\n",
    "```\n",
    "\n",
    "We can understand that this recipe consists of `coco_detection_dataset_params` for dataset, `yolox_s_arch_params` for architecture, `coco2017_yolox_train_params` for training, and `default_checkpoint_params` for checkpoints.\n",
    "\n",
    "We have overridden the default value of `training_hyperparams.initial_lr` with a value of `0.001`, and we also plan to launch the training using 8 GPUs on DDP mode.\n",
    "\n",
    "\n"
   ],
   "metadata": {
    "id": "5mNpNi-q0aJw"
   }
  },
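   {
    "cell_type": "markdown",
    "source": [
     "The `${...}` references above are resolved by Hydra/OmegaConf when the recipe is composed. A rough stdlib-only sketch of that interpolation step, assuming a hypothetical flattened config and an `input_dim` of `[640, 640]`:\n",
     "\n",
     "```python\n",
     "import re\n",
     "\n",
     "# Hypothetical flattened config values standing in for the composed recipe\n",
     "config = {\n",
     "    'architecture': 'yolox_s',\n",
     "    'dataset_params.train_dataset_params.input_dim': '[640, 640]',\n",
     "}\n",
     "\n",
     "def interpolate(value, cfg):\n",
     "    # Replace each ${key} with the corresponding config value\n",
     "    return re.sub(r'\\$\\{([^}]+)\\}', lambda m: str(cfg[m.group(1)]), value)\n",
     "\n",
     "config['experiment_suffix'] = interpolate('res${dataset_params.train_dataset_params.input_dim}', config)\n",
     "config['experiment_name'] = interpolate('${architecture}_coco2017_${experiment_suffix}', config)\n",
     "print(config['experiment_name'])  # yolox_s_coco2017_res[640, 640]\n",
     "```"
    ],
    "metadata": {}
   },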
  {
   "cell_type": "markdown",
   "source": [
    "# How to use recipes in SuperGradients"
   ],
   "metadata": {
    "id": "njthhNJR1pJm"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Getting training hyperparameters based on recipe\n",
    "\n",
    "Load training hyperparams for ResNet <> ImageNet"
   ],
   "metadata": {
    "id": "DxcgHs9bG-ya"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "from super_gradients.training import training_hyperparams"
   ],
   "metadata": {
    "id": "DFbJpOmo8Lri",
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "outputId": "3277b6a0-92f7-43a4-8e69-de4e3153b321",
    "ExecuteTime": {
     "end_time": "2024-03-07T13:05:37.422485700Z",
     "start_time": "2024-03-07T13:05:37.338480800Z"
    }
   },
   "execution_count": 14,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# Two options to load same training hyperparameters\n",
    "training_params_from_yaml_file = training_hyperparams.get('cifar10_resnet')\n",
    "training_params_predefined = training_hyperparams.cifar10_resnet_train_params()\n",
    "\n",
    "# We don't assert values' equality because some are numpy arrays, etc\n",
    "assert set(training_params_predefined.keys()) == set(training_params_from_yaml_file.keys())\n",
    "\n",
    "print(\"TRAINING HYPERPARAMS:\")\n",
    "for k, v in training_params_predefined.items():\n",
    "    print(f\"{k}: {v}\")"
   ],
   "metadata": {
    "id": "CIpnwngoJ15h",
    "ExecuteTime": {
     "end_time": "2024-03-07T13:05:38.204445600Z",
     "start_time": "2024-03-07T13:05:37.351483200Z"
    }
   },
   "execution_count": 15,
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "ERROR: Could not find a version that satisfies the requirement super-gradients==3.6.1 (from versions: 1.3.0, 1.3.1, 1.4.0, 1.5.0, 1.5.1, 1.5.2, 1.6.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.5.0, 2.6.0, 3.0.0, 3.0.1, 3.0.2, 3.0.3, 3.0.4, 3.0.5, 3.0.6, 3.0.7, 3.0.8, 3.0.9, 3.1.0, 3.1.1, 3.1.2, 3.1.3, 3.2.0, 3.2.1, 3.3.0, 3.3.1, 3.4.0, 3.4.1, 3.5.0, 3.6.0)\n",
      "ERROR: No matching distribution found for super-gradients==3.6.1\n"
     ]
    }
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
    "There are multiple settings in the training hyperparams. All are accessible and modifiable in a key-value way, for example:"
   ],
   "metadata": {
    "id": "OIV_f74ML8sd"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "training_params_predefined['initial_lr'] = 0.05\n",
    "print(training_params_predefined['initial_lr'])"
   ],
   "metadata": {
    "id": "PjpuYXYtL8aK",
    "ExecuteTime": {
     "end_time": "2024-03-07T13:05:38.206445Z",
     "start_time": "2024-03-07T13:05:38.023221500Z"
    }
   },
   "execution_count": 16,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "You can also overwrite with custom on-the-fly:"
   ],
   "metadata": {
    "id": "QQKb1sFnsgOP"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "training_params_custom = training_hyperparams.cifar10_resnet_train_params(overriding_params={'initial_lr': 0.05})\n",
    "training_params_custom['initial_lr']"
   ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "t8H6RTDfta1M",
    "outputId": "66a9836d-f789-4d95-bb1b-40d7a2384224",
    "ExecuteTime": {
     "end_time": "2024-03-07T13:05:38.593005800Z",
     "start_time": "2024-03-07T13:05:38.039221700Z"
    }
   },
   "execution_count": 17,
   "outputs": [
    {
     "data": {
      "text/plain": "0.05"
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ]
  },
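   {
    "cell_type": "markdown",
    "source": [
     "Conceptually, `overriding_params` acts like a recursive dict merge of your values on top of the recipe defaults. A hedged sketch of that idea (the `deep_merge` helper and the default values below are ours for illustration, not SuperGradients'):\n",
     "\n",
     "```python\n",
     "def deep_merge(base, overrides):\n",
     "    \"\"\"Return base updated with overrides, recursing into nested dicts.\"\"\"\n",
     "    merged = dict(base)\n",
     "    for key, value in overrides.items():\n",
     "        if isinstance(value, dict) and isinstance(merged.get(key), dict):\n",
     "            merged[key] = deep_merge(merged[key], value)\n",
     "        else:\n",
     "            merged[key] = value\n",
     "    return merged\n",
     "\n",
     "# Illustrative defaults, not the real cifar10_resnet values\n",
     "defaults = {'initial_lr': 0.1, 'max_epochs': 250, 'lr_mode': 'step'}\n",
     "params = deep_merge(defaults, {'initial_lr': 0.05})\n",
     "print(params['initial_lr'], params['max_epochs'])  # 0.05 250\n",
     "```"
    ],
    "metadata": {}
   },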
  {
   "cell_type": "markdown",
   "source": [
    "# Getting a `DataLoader` based on a recipe"
   ],
   "metadata": {
    "id": "jaJWfyqH4fSU"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "Now we'll get a dataloader object from a recipe.\n",
    "Note that we set `num_workers` to `0` in this cell. \n",
    "This is because we are running in a Jupyter notebook and on Windows platforms multiprocessing from Jupyter does not work that great. \n",
    "Since Cifar-10 is pretty small dataset there should be no noticeable difference in performance. \n",
    "\n"
   ],
   "metadata": {
    "id": "d5CviqXB4lee"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "from super_gradients.training.datasets.classification_datasets.cifar import Cifar10\n",
    "from super_gradients.training import dataloaders"
   ],
   "metadata": {
    "id": "isMXe8S5WTCq",
    "ExecuteTime": {
     "end_time": "2024-03-07T13:05:38.597256300Z",
     "start_time": "2024-03-07T13:05:38.366869100Z"
    }
   },
   "execution_count": 18,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# Load a predefined dataset\n",
    "train_dataloader = dataloaders.cifar10_train()\n",
    "\n",
    "# OR use your yaml file. Can also override specific params, such as batch_size\n",
    "train_dataloader = dataloaders.get_data_loader(\n",
    "        config_name='cifar10_dataset_params',\n",
    "        dataset_cls=Cifar10,\n",
    "        train=True,\n",
    "        dataloader_params={'batch_size': 42, 'num_workers': 0}\n",
    ")\n",
    "\n",
    "print(\"batch size:\", train_dataloader.batch_size)\n",
    "\n",
    "val_dataloader = dataloaders.cifar10_val(dataloader_params={'num_workers': 0})"
   ],
   "metadata": {
    "id": "av42ahGM4hau",
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "outputId": "4e153886-827f-4dbd-b5f0-c58cb8095d88",
    "ExecuteTime": {
     "end_time": "2024-03-07T13:05:41.924836900Z",
     "start_time": "2024-03-07T13:05:38.383878Z"
    }
   },
   "execution_count": 19,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Getting a model"
   ],
   "metadata": {
    "id": "7NwNPzpu4hgs"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "from super_gradients.training import models"
   ],
   "metadata": {
    "id": "Dwe9iez7ffw2",
    "ExecuteTime": {
     "end_time": "2024-03-07T13:05:41.936981300Z",
     "start_time": "2024-03-07T13:05:41.926836400Z"
    }
   },
   "execution_count": 20,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "model = models.get('resnet18', num_classes=10)"
   ],
   "metadata": {
    "id": "Feho5xg-YhlU",
    "ExecuteTime": {
     "end_time": "2024-03-07T13:05:42.029578600Z",
     "start_time": "2024-03-07T13:05:41.942090400Z"
    }
   },
   "execution_count": 21,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Launch a training, based on model, dataset, training hyperparameters"
   ],
   "metadata": {
    "id": "YU68Mj6p4vN5"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "from super_gradients.training import Trainer"
   ],
   "metadata": {
    "id": "jkjs7eLzgZmr",
    "ExecuteTime": {
     "end_time": "2024-03-07T13:05:42.045580700Z",
     "start_time": "2024-03-07T13:05:42.033583Z"
    }
   },
   "execution_count": 22,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "trainer = Trainer(\"recipe_example\", ckpt_root_dir=\"experiments\")\n",
    "\n",
    "# For the sake of this demonstration we will change the max epochs to 3\n",
    "training_params_from_yaml_file['max_epochs'] = 3\n",
    "\n",
    "trainer.train(\n",
    "    model=model,\n",
    "    training_params=training_params_from_yaml_file,\n",
    "    train_loader=train_dataloader,\n",
    "    valid_loader=val_dataloader\n",
    ")"
   ],
   "metadata": {
    "id": "XUQrw1k74kDE",
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "outputId": "3100a0a8-c9bb-4f02-a725-23ecf06b1acb",
    "ExecuteTime": {
     "end_time": "2024-03-07T13:08:02.669216500Z",
     "start_time": "2024-03-07T13:05:42.049579100Z"
    }
   },
   "execution_count": 23,
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Train epoch 0: 100%|██████████| 1191/1191 [01:45<00:00, 11.31it/s, Accuracy=0.285, CrossEntropyLoss=2.02, Top5=0.817, gpu_mem=0.3]\n",
      "Validating: 100%|██████████| 20/20 [00:09<00:00,  2.12it/s]\n",
      "Train epoch 1:  19%|█▉        | 232/1191 [00:24<01:48,  8.82it/s, Accuracy=0.374, CrossEntropyLoss=1.68, Top5=0.883, gpu_mem=0.3]"
     ]
    }
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Launch a training based on a **recipe** in a single line!\n",
    "\n",
    "To load a recipe, you should pass the following CLI argument: `--config-name=my_recipe`, and you also can override settings using `nested.key=value` syntax.\n",
    "\n",
    "For this example we will set the number of workers for train & validation data loaders to two.\n",
    "\n",
    "The full override command would look like `dataset_params.train_dataloader_params.num_workers=0 dataset_params.val_dataloader_params.num_workers=0`. To save the time SuperGradients offers a list of shortcut settings to achieve the same result: `num_workers=0`.\n",
    "\n",
    "You can read more about shortcuts [here](https://docs.deci.ai/super-gradients/latest/documentation/source/Recipes_Training.html#command-line-override-shortcuts):"
   ],
   "metadata": {
    "id": "OnNaB_1Pwpq-"
   }
  },
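   {
    "cell_type": "markdown",
    "source": [
     "Each `nested.key=value` argument addresses a dotted path inside the composed config. A rough stdlib sketch of how such an override lands in a nested dict (illustrative only; Hydra does the real parsing and type handling):\n",
     "\n",
     "```python\n",
     "def apply_override(cfg, override):\n",
     "    \"\"\"Apply a single 'a.b.c=value' style override to a nested dict.\"\"\"\n",
     "    path, _, raw = override.partition('=')\n",
     "    keys = path.split('.')\n",
     "    node = cfg\n",
     "    for key in keys[:-1]:\n",
     "        node = node.setdefault(key, {})\n",
     "    node[keys[-1]] = int(raw) if raw.isdigit() else raw\n",
     "\n",
     "config = {'dataset_params': {'train_dataloader_params': {'num_workers': 8}}}\n",
     "apply_override(config, 'dataset_params.train_dataloader_params.num_workers=0')\n",
     "print(config['dataset_params']['train_dataloader_params']['num_workers'])  # 0\n",
     "```"
    ],
    "metadata": {}
   },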
  {
   "cell_type": "code",
   "source": [
    "!python -m super_gradients.train_from_recipe --config-name=cifar10_resnet num_workers=0 epochs=20"
   ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "MTF3sl9SLnDe",
    "outputId": "14892eef-deeb-4e54-f92f-aab86d393120",
    "is_executing": true,
    "ExecuteTime": {
     "start_time": "2024-03-07T13:08:02.540426200Z"
    }
   },
   "execution_count": null,
   "outputs": []
  }
 ]
}
