{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "protecting-algebra",
   "metadata": {},
   "source": [
    "<a href=\"https://colab.research.google.com/drive/1XlUIBNyaKLApsZLT3KQuRR_GqvcLe4ck?usp=sharing\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "complicated-upset",
   "metadata": {},
   "source": [
    "# Training StyleGAN2 on the cropped dataset in Google CoLab "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "native-alarm",
   "metadata": {},
   "source": [
    "![sample_images](https://raw.githubusercontent.com/mahdi-darvish/GAN-augmented-pet-classifier/main/Figures/thumb.jpg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "collectible-burton",
   "metadata": {},
   "source": [
    " ### Google Drive setups\n",
    "\n",
    "One of the main requirements for starting is google drive. We're going to use G-DRIVE to store our training data and trained neural networks. we built some folders with the path of follows:\n",
    "```\n",
    "content/drive/MyDrive/gan \n",
    "``` \n",
    "\n",
    "### Managing GPU\n",
    "\n",
    "There are two options for training GANs with Google Colab, Google Colab Free or Pro. We will go with the pro one because it has advantages like better GPU, longer runtime, and timeouts, and most important, it will not disconnect before 24 hours. In using the free version of google colab, you should make sure that you run the notebook with GPU runtime. For doing this, go to \"Runtime\" in the bar and select the \"change the runtime type\" option. Then, put the Hardware accelerator on GPU and save it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "private-chosen",
   "metadata": {},
   "outputs": [],
   "source": [
    "!nvidia-smi"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "unable-large",
   "metadata": {},
   "source": [
    "### Steps of the code"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "overhead-least",
   "metadata": {},
   "outputs": [],
   "source": [
    "import shutil\n",
    "import os\n",
    "import os\n",
    "from tqdm.notebook import tqdm\n",
    "from PIL import Image\n",
    "from os import listdir\n",
    "from PIL import Image\n",
    "import os, sys\n",
    "import os"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "reflected-millennium",
   "metadata": {},
   "source": [
    "Connecting Colab to GDRIVE, to save the snapshots there"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "sapphire-surfing",
   "metadata": {},
   "outputs": [],
   "source": [
    "try:\n",
    "    from google.colab import drive\n",
    "    drive.mount('/content/drive', force_remount=True)\n",
    "    COLAB = True\n",
    "    print(\"Note: using Google CoLab\")\n",
    "except:\n",
    "    print(\"Note: not using Google CoLab\")\n",
    "    COLAB = False"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "geographic-flooring",
   "metadata": {},
   "source": [
    "Creating needful directories in GDRIVE"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "changing-stack",
   "metadata": {},
   "outputs": [],
   "source": [
    "!mkdir drive/MyDrive/\n",
    "!mkdir drive/MyDrive/gans_training\n",
    "!mkdir drive/MyDrive/gans_training/images\n",
    "!mkdir drive/MyDrive/gans_training/dataset\n",
    "!mkdir drive/MyDrive/gans_training/experiments"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "accompanied-graphic",
   "metadata": {},
   "source": [
    "Setting the version of the torch, torchvision,  torchaudio to become compatible with Style-GAN2ada"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "painted-oregon",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html"
   ]
  },
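  {
   "cell_type": "markdown",
   "id": "torch-version-check",
   "metadata": {},
   "source": [
    "A quick sanity check (an optional sketch) that the pinned PyTorch build imported correctly and can see the Colab GPU:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "torch-version-check-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "import importlib.util\n",
    "\n",
    "# guard the import so this cell also runs outside Colab\n",
    "torch_installed = importlib.util.find_spec('torch') is not None\n",
    "if torch_installed:\n",
    "    import torch\n",
    "    print('torch', torch.__version__)  # expect 1.7.1+cu110\n",
    "    print('CUDA available:', torch.cuda.is_available())\n",
    "else:\n",
    "    print('torch is not installed; rerun the pip cell above')"
   ]
  },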
  {
   "cell_type": "markdown",
   "id": "continuous-thumbnail",
   "metadata": {},
   "source": [
    "Installing NVIDIA StyleGAN2 ADA PyTorch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fiscal-pursuit",
   "metadata": {},
   "outputs": [],
   "source": [
    "!git clone https://github.com/NVlabs/stylegan2-ada-pytorch.git\n",
    "!pip install ninja"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "noble-merit",
   "metadata": {},
   "source": [
    "Download [the __cropped__](https://github.com/mahdi-darvish/GAN-augmented-pet-classifier#dataset) dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "hispanic-montreal",
   "metadata": {},
   "outputs": [],
   "source": [
    "# !wget [insert dataset's download link here]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "universal-blake",
   "metadata": {},
   "source": [
    "Extract the images"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "alien-parts",
   "metadata": {},
   "outputs": [],
   "source": [
    "!tar xvf  images.tar.gz -C drive/MyDrive/gans_training/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "automated-canberra",
   "metadata": {},
   "outputs": [],
   "source": [
    "!mkdir drive/MyDrive/gans_training/image"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "loving-massachusetts",
   "metadata": {},
   "source": [
    "Selecting a subset of the dataset.\n",
    "\n",
    "Every breed involves 200 images, so you can consider that and fill the percentage variable according to your need and train the the model on that amount of data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "capital-mirror",
   "metadata": {},
   "outputs": [],
   "source": [
    "source = \"drive/MyDrive/gans_training/images\"\n",
    "destination = \"drive/MyDrive/gans_training/image\"\n",
    "percentage = None    #set this parameter according to your need\n",
    "files_list = os.listdir(source)\n",
    "lst = []\n",
    "for files in sorted(files_list):\n",
    "    x = files.rsplit('_', 1)[0]\n",
    "    lst.append(x)\n",
    "for l in list(set(lst)):\n",
    "    cnt = 0\n",
    "    for i in range(1, 100):\n",
    "    try:\n",
    "        shutil.copy(source + '/' + '{}_{}.jpg'.format(l, i), destination + '/' + '{}_{}.jpg'.format(l, i))\n",
    "        cnt += 1\n",
    "    except:\n",
    "        print('{}_{}.jpg'.format(l, i))\n",
    "    if cnt == 20:\n",
    "        print('selected [] images from [] category'.format(percentage, l))\n",
    "        break"
   ]
  },
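  {
   "cell_type": "markdown",
   "id": "subset-limit-note",
   "metadata": {},
   "source": [
    "The arithmetic behind the subset size can be sketched as follows (a minimal, hypothetical example: with 200 images per breed, a given percentage maps to a per-breed limit):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "subset-limit-sketch",
   "metadata": {},
   "outputs": [],
   "source": [
    "# hypothetical example: each breed has 200 images\n",
    "IMAGES_PER_BREED = 200\n",
    "percentage = 10  # e.g. keep 10% of each breed\n",
    "limit = IMAGES_PER_BREED * percentage // 100\n",
    "print(limit)  # 20 images per breed\n"
   ]
  },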
  {
   "cell_type": "markdown",
   "id": "agricultural-report",
   "metadata": {},
   "source": [
    "Unzipping the training images in \"images\" folder"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "spiritual-popularity",
   "metadata": {},
   "outputs": [],
   "source": [
    "!unzip drive/MyDrive/images.zip -d drive/MyDrive/gans_training/images"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "typical-baptist",
   "metadata": {},
   "source": [
    "Number of total training images and establishig the exact path of images"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "embedded-checkout",
   "metadata": {},
   "outputs": [],
   "source": [
    "!ls drive/MyDrive/gans_training/images | wc -l"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "extended-firewall",
   "metadata": {},
   "source": [
    "This code takes a python file and converts the first given path to tensors into the second given direction. We can face an error in this part in conditions like inconsistent types and variant sizes of images. We convert all the images into jpg type and resize them to 128 x 128 pixels."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "sized-watch",
   "metadata": {},
   "outputs": [],
   "source": [
    "!python /content/stylegan2-ada-pytorch/dataset_tool.py --source /content/drive/MyDrive/gans_training/images/ --dest /content/drive/MyDrive/gans_training/dataset/"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "crucial-dubai",
   "metadata": {},
   "source": [
    "To clear out the newly created dataset in case something went wrong "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "central-living",
   "metadata": {},
   "outputs": [],
   "source": [
    "!rm -rf /root/.cache/torch_extensions/*"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bearing-crack",
   "metadata": {},
   "source": [
    "Making all the images to the exact dimensions and color depth"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "humanitarian-vacuum",
   "metadata": {},
   "outputs": [],
   "source": [
     "IMAGE_PATH = 'drive/MyDrive/gans_training/images'\n",
     "files = [f for f in os.listdir(IMAGE_PATH) if os.path.isfile(os.path.join(IMAGE_PATH, f))]\n",
     "\n",
     "base_size = None\n",
     "for file in tqdm(files):\n",
     "    file2 = os.path.join(IMAGE_PATH, file)\n",
     "    img = Image.open(file2)\n",
     "    sz = img.size\n",
     "    if base_size and sz != base_size:\n",
     "        print(f\"Inconsistent size: {file2}\")\n",
     "    elif img.mode != 'RGB':\n",
     "        print(f\"Inconsistent color format: {file2}\")\n",
     "    else:\n",
     "        base_size = sz\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "stupid-oxford",
   "metadata": {},
   "source": [
    "Converting all the images to jpg type"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "international-yesterday",
   "metadata": {},
   "outputs": [],
   "source": [
    "path = \"drive/MyDrive/gans_training/images\"\n",
    "\n",
    "for item in os.listdir(path):\n",
    "    im = Image.open(path + '/' +  item)\n",
    "    if im.mode != \"RGB\" :\n",
    "        im = im.convert(\"RGB\")\n",
    "    imResize = im.resize((128,128))\n",
    "    imResize.save(path + '/' +  item )\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "comfortable-dylan",
   "metadata": {},
   "source": [
    "### Perform Initial Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "christian-movement",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Modify these to suit your needs\n",
    "EXPERIMENTS = \"/content/drive/MyDrive/gans_training/experiments\"\n",
    "DATA = \"/content/drive/MyDrive/gans_training/dataset\"\n",
    "SNAP = 20\n",
    "\n",
    "# Build the command and run it\n",
    "cmd = f\"/usr/bin/python3 /content/stylegan2-ada-pytorch/train.py --snap {SNAP} --outdir {EXPERIMENTS} --data {DATA}\"\n",
    "!{cmd}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "subtle-satin",
   "metadata": {},
   "source": [
    "### Resume Training"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "adjusted-williams",
   "metadata": {},
   "source": [
    "Removing the last trained network into the experiment folder for continuing the training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "accompanied-cincinnati",
   "metadata": {},
   "outputs": [],
   "source": [
    "!rm drive/MyDrive/dogs/data/gan/experiments/network-snapshot* drive/MyDrive/dogs/data/gan/experiments"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "professional-dallas",
   "metadata": {},
   "outputs": [],
   "source": [
    "EXPERIMENTS = \"/content/drive/MyDrive/gans_training/experiments/\"\n",
    "NETWORK = \"network-snapshot-000480.pkl\"\n",
    "RESUME = os.path.join(EXPERIMENTS, NETWORK)\n",
    "DATA = \"/content/drive/MyDrive/gans_training/dataset\"\n",
    "SNAP = 20\n",
    "\n",
    "cmd = f\"/usr/bin/python3 /content/stylegan2-ada-pytorch/train.py --snap {SNAP} --resume {RESUME} --outdir {EXPERIMENTS} --data {DATA}\"\n",
    "!{cmd}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "adequate-space",
   "metadata": {},
   "source": [
    "Copyright 2021 by [MIT license](https://github.com/mahdi-darvish/GAN-augmented-pet-classifier/blob/main/LICENSE)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
