{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HRuCOcrJvCHb"
      },
      "source": [
        "# License\n",
        "Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "you may not use this file except in compliance with the License.\n",
        "You may obtain a copy of the License at:\n",
        "\n",
        "https://www.apache.org/licenses/LICENSE-2.0\n",
        "\n",
        "Unless required by applicable law or agreed to in writing, software\n",
        "distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "See the License for the specific language governing permissions and\n",
        "limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "On-5AY2HPQqw"
      },
      "source": [
        "# Instructions\n",
        "\n",
        "This Notebook allows to reproduce all the experiments reported in the publication titled:\n",
        "\n",
        "[\"*muNet: Evolving Pretrained Deep Neural Network into Scalable Auto-tuning Multitask Systems*\" (2022)](https://arxiv.org/abs/2205.10937)\n",
        "\n",
        "---\n",
        "\n",
        "Set `EXPERIMENT_NAME` to a name of choice.\n",
        "\n",
        "Set `BENCHMARK` to:\n",
        "\n",
        "1. `ViT tiny 3 layers / characters benchmark` to reproduce the experiments on the \"Multitask Character Classification Benchmark\".\n",
        "1. `ViT base / decathlon benchmark` to reproduce the experiments on the \"Visual Domain Decathlon Benchmark\".\n",
        "\n",
        "Set `CONFIGURATION` to:\n",
        "1. `muNet` to run the muNet evolutionary method with scale factor = 1.\n",
        "1. `Size scale:X` to run muNet with scale factor = X/100.\n",
        "1. `Finetune all` to run the corresponding full fine-tuning baseline model.\n",
        "1. `Freeze bottom layers:X` to run fine-tuning baseline with X layers shared and frozen.\n",
        "1. `Adapters:X` to run the correspoinding residual adapters baseline with X inner dimension.\n",
        "\n",
        "Select `AUTOTUNE` to activate auto-tuning for muNet experiments.\n",
        "\n",
        "Set `EXPERIMENTS_ROOT_DIR` to the desired root directory that will contain experiment directories storing configuration and state.\n",
        "\n",
        "To reproduce the configuration of the experiments reported in the paper it is required to connect to a TPUv3 machine with 8 cores.\n",
        "\n",
        "To start the configured experiment select \"Run all\" from the \"Runtime\" menu.\n",
        "\n",
        "The output is printed after the last cell.\n",
        "\n",
        "Note: this Colab connects by default to a free TPUv2 machine with 8 cores,\n",
        "while the experiments reported in the paper are executed on a TPUv3 machine.\n",
        "Thus, the system requirement for larger models (e.g. ViT B/16)\n",
        "and datasets (e.g. visual_domain_decathlon/imagenet12)\n",
        "may exceed the capacity of the default instance and may require a custom GCE VM."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "M93tll7z29rX"
      },
      "outputs": [],
      "source": [
        "# @title Experiment configuration\n",
        "EXPERIMENT_NAME = 'Experiment'  # @param { type: 'string', isTemplate: true }\n",
        "BENCHMARK = 'ViT tiny 3 layers / characters benchmark' # @param ['ViT tiny 3 layers / characters benchmark', 'ViT base / decathlon benchmark', 'ViT large / ViT benchmark'] { type: 'string', isTemplate: true }\n",
        "CONFIGURATION = 'muNet'  # @param ['muNet', 'Size scale:98', 'Size scale:95', 'Size scale:90', 'Size scale:70', 'Size scale:30', 'Size scale:2', 'Finetune all', 'Freeze bottom layers:0', 'Freeze bottom layers:1', 'Freeze bottom layers:2', 'Freeze bottom layers:3', 'Freeze bottom layers:4', 'Freeze bottom layers:12', 'Adapters:8', 'Adapters:16', 'Adapters:32', 'Adapters:64', 'Adapters:128', 'Adapters:256', 'Adapters:512']  { type: 'string', isTemplate: true }\n",
        "AUTO_TUNE = True  # @param [True, False] { type: 'boolean', isTemplate: true }\n",
        "EXPERIMENTS_ROOT_DIR = '/tmp/' # @param { type: 'string', isTemplate: true }\n",
        "\n",
        "if AUTO_TUNE:\n",
        "  assert CONFIGURATION == 'muNet' or CONFIGURATION.startswith('Size scale:'), \\\n",
        "      f'Invalid configuration for auto-tune: {CONFIGURATION}'"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "iqE9f8QHR8j8"
      },
      "outputs": [],
      "source": [
        "# @title Additional parameters\n",
        "# Set to true to continue interrupted experiment with matching EXPERIMENT_NAME\n",
        "AUTO_CONTINUE = False  # @param [True, False] { type: 'boolean', isTemplate: true }\n",
        "# Print debug statements.\n",
        "VERBOSE = False  # @param [True, False] { type: 'boolean', isTemplate: true }"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "26PsLbrXviTh"
      },
      "outputs": [],
      "source": [
        "!pip install -q flax"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qQjzRlQUxRpd"
      },
      "outputs": [],
      "source": [
        "!pip install -q ml_collections"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "yOtkx9MxzTnf"
      },
      "outputs": [],
      "source": [
        "![ -d vision_transformer ] || git clone --depth=1 https://github.com/google-research/vision_transformer\n",
        "!pip install -qr vision_transformer/vit_jax/requirements.txt\n",
        "import sys\n",
        "if './vision_transformer' not in sys.path:\n",
        "  sys.path.append('./vision_transformer')"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "LzhEweKzMg6k"
      },
      "outputs": [],
      "source": [
        "import copy\n",
        "import datetime\n",
        "import jax\n",
        "import jax.numpy as jnp\n",
        "import json\n",
        "import math\n",
        "import matplotlib\n",
        "import numpy as np\n",
        "import random\n",
        "import re\n",
        "import os\n",
        "import optax\n",
        "import pandas as pd\n",
        "import time\n",
        "from collections import defaultdict\n",
        "from functools import partial\n",
        "from matplotlib import pyplot as plt\n",
        "from threading import Thread\n",
        "from typing import Optional"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "No1y4k7SWvMi"
      },
      "outputs": [],
      "source": [
        "import jax.tools.colab_tpu\n",
        "jax.tools.colab_tpu.setup_tpu()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "T8y4Hlv7QQVZ"
      },
      "outputs": [],
      "source": [
        "import flax\n",
        "import flax.linen as nn\n",
        "from flax.training import checkpoints as flax_checkpoints"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "CKrbsWJ2PfBV"
      },
      "outputs": [],
      "source": [
        "import tensorflow as tf\n",
        "import tensorflow_datasets as tfds\n",
        "tf.compat.v1.enable_eager_execution()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "6QHBzcuUYeh5"
      },
      "outputs": [],
      "source": [
        "from ml_collections import ConfigDict, FrozenConfigDict\n",
        "from vision_transformer.vit_jax import input_pipeline\n",
        "from vision_transformer.vit_jax import checkpoint\n",
        "from vision_transformer.vit_jax.configs import models as models_config  # Model configurations.\n",
        "from vision_transformer.vit_jax import models_vit as models # Actual model code."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "SHBWj0JmpWDX"
      },
      "outputs": [],
      "source": [
        "# Ref Tfds catalog: https://www.tensorflow.org/datasets/catalog/beans\n",
        "TFDS_IMAGE_CLASSIFCATON_DATASETS = set([\n",
        "   'emnist/digits',\n",
        "   'emnist/letters',\n",
        "   'kmnist',\n",
        "   'mnist',\n",
        "   'omniglot',\n",
        "   'cmaterdb/bangla',\n",
        "   'cmaterdb/devanagari',\n",
        "   'cmaterdb/telugu',\n",
        "   'visual_domain_decathlon/imagenet12',\n",
        "   'visual_domain_decathlon/svhn',\n",
        "   'visual_domain_decathlon/cifar100',\n",
        "   'visual_domain_decathlon/gtsrb',\n",
        "   'visual_domain_decathlon/daimlerpedcls',\n",
        "   'visual_domain_decathlon/omniglot',\n",
        "   'visual_domain_decathlon/ucf101',\n",
        "   'visual_domain_decathlon/aircraft',\n",
        "   'visual_domain_decathlon/dtd',\n",
        "   'visual_domain_decathlon/vgg-flowers',\n",
        "])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "-MZlT5WKpOWW"
      },
      "outputs": [],
      "source": [
        "# tfds.builder is slow, this build a cache in the background using parallel threads.\n",
        "# Call TfdsBuildersCache.regenerate() to force regeneration after editing list of tasks.\n",
        "class TfdsBuildersCache():\n",
        "  class Worker():\n",
        "    def __init__(self, tfds_name):\n",
        "      self.tfds_name = tfds_name\n",
        "      self.thread = Thread(\n",
        "        target=self.set_builder, args=())\n",
        "      self.thread.start()\n",
        "    def set_builder(self):\n",
        "      self.builder = tfds.builder(self.tfds_name)\n",
        "    def get_builder(self):\n",
        "      self.thread.join()\n",
        "      return self.builder\n",
        "  def initalize():\n",
        "    if 'TFDS_BUILDERS_CACHE' not in globals():\n",
        "      print('CREATING TFDS_BUILDERS_CACHE')\n",
        "      global TFDS_BUILDERS_CACHE\n",
        "      TFDS_BUILDERS_CACHE = {}\n",
        "      workers = []\n",
        "      for tfds_name in TFDS_IMAGE_CLASSIFCATON_DATASETS:\n",
        "        workers.append(TfdsBuildersCache.Worker(tfds_name))\n",
        "      for worker in workers:\n",
        "        assert worker.tfds_name not in TFDS_BUILDERS_CACHE\n",
        "        TFDS_BUILDERS_CACHE[worker.tfds_name] = worker\n",
        "  def get(tfds_name):\n",
        "    return TFDS_BUILDERS_CACHE[tfds_name].get_builder()\n",
        "\n",
        "  def regenerate():\n",
        "    if 'TFDS_BUILDERS_CACHE' in globals():\n",
        "      print('REGENERATING TFDS_BUILDERS_CACHE')\n",
        "      global TFDS_BUILDERS_CACHE\n",
        "      del TFDS_BUILDERS_CACHE\n",
        "    TfdsBuildersCache.initalize()\n",
        "\n",
        "TfdsBuildersCache.initalize()\n",
        "# TfdsBuildersCache.regenerate()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "gXbngcQSaatN"
      },
      "outputs": [],
      "source": [
        "def get_splits(tfds_name):\n",
        "  info = TfdsBuildersCache.get(tfds_name).info\n",
        "  splits = list(info.splits.keys())\n",
        "  assert 'train' in splits, splits\n",
        "  splits.remove('train')\n",
        "  used_percent = 0\n",
        "  slice_percent = 5\n",
        "  pp = {}\n",
        "  for k in ['test', 'validation']:\n",
        "    if k in splits:\n",
        "      pp[k] = k\n",
        "      splits.remove(k)\n",
        "    else:\n",
        "      pp[k] = f'train[{used_percent}%:{used_percent+slice_percent}%]'\n",
        "      used_percent += slice_percent\n",
        "  pp['train'] = f'train[{used_percent}%:]'\n",
        "  return pp\n",
        "\n",
        "# Task names must be unique and immutable across experiments to allow reloads.\n",
        "def add_dataset_config(\n",
        "    tasks_configs,\n",
        "    tfds_name,\n",
        "    unique_name=None,\n",
        "    private=False):\n",
        "  if tfds_name in ['imagenet_v2', 'cifar10_1']:\n",
        "    return  # Used as validation set for other tasks.\n",
        "\n",
        "  config = ConfigDict()\n",
        "  if tfds_name == 'imagenet2012':\n",
        "    config.dataset = {\n",
        "        'train':'imagenet2012', 'validation':'imagenet_v2', 'test':'imagenet2012'}\n",
        "    config.splits = {\n",
        "        'train':'train', 'validation':'test', 'test':'validation'}\n",
        "  elif tfds_name == 'cifar100':\n",
        "    config.dataset = tfds_name\n",
        "    config.splits = {\n",
        "        'train':'train[:98%]', 'validation':'train[98%:]', 'test':'test'}\n",
        "  elif tfds_name == 'cifar10':\n",
        "    config.dataset = {\n",
        "        'train':'cifar10', 'validation':'cifar10_1', 'test':'cifar10'}\n",
        "    config.splits = {\n",
        "        'train':'train', 'validation':'test', 'test':'test'}\n",
        "  elif tfds_name.startswith('visual_domain_decathlon'):\n",
        "    config.dataset = tfds_name\n",
        "    # test has no labels, split validation in half.\n",
        "    config.splits =  {\n",
        "        'train':'train', 'validation':'validation[:50%]', 'test':'validation[50%:]'}\n",
        "  elif tfds_name == 'omniglot':\n",
        "    # test has no labels, and missing validation, use additional splits.\n",
        "    config.dataset = tfds_name\n",
        "    config.splits = {'train':'train', 'validation':'small1', 'test':'small2'}\n",
        "  else:\n",
        "    config.dataset = tfds_name\n",
        "    config.splits = get_splits(tfds_name)\n",
        "  config.unique_name = unique_name if unique_name else tfds_name\n",
        "  config.private = private\n",
        "  assert unique_name not in tasks_configs\n",
        "  tasks_configs[config.unique_name] = FrozenConfigDict(config)\n",
        "\n",
        "def get_task_configs():\n",
        "  task_configs = {}\n",
        "\n",
        "  # Add standard tasks.\n",
        "  for tfds_name in TFDS_IMAGE_CLASSIFCATON_DATASETS:\n",
        "    add_dataset_config(task_configs, tfds_name)\n",
        "\n",
        "  # Add private tasks.\n",
        "  tfds_names_private = []\n",
        "  for tfds_name in TFDS_IMAGE_CLASSIFCATON_DATASETS:\n",
        "    if tfds_name.startswith('visual_domain_decathlon/'):\n",
        "      tfds_names_private.append(tfds_name)\n",
        "  for tfds_name in tfds_names_private:\n",
        "    add_dataset_config(\n",
        "        task_configs,\n",
        "        tfds_name,\n",
        "        unique_name=f'private:{tfds_name}',\n",
        "        private=True)\n",
        "\n",
        "  return task_configs"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "jTOBlYBpWNMl"
      },
      "outputs": [],
      "source": [
        "def ids_str2ints(ids_str):\n",
        "  return [int(v) for v in ids_str.split('_')] if ids_str else []\n",
        "def ids_ints2str(ids_ints):\n",
        "  return '_'.join([str(v) for v in sorted(ids_ints)])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "e88M0y8YVZrC"
      },
      "outputs": [],
      "source": [
        "AddPositionEmbs = models.AddPositionEmbs\n",
        "Encoder1DBlock = models.Encoder1DBlock\n",
        "VisionTransformer = models.VisionTransformer\n",
        "\n",
        "class ResidualAdapter(nn.Module):\n",
        "  adapter_dim: int\n",
        "\n",
        "  @nn.compact\n",
        "  def __call__(self, x):\n",
        "    hidden_dim = x.shape[-1]\n",
        "    y = nn.LayerNorm()(x)\n",
        "    y = nn.Dense(self.adapter_dim)(y)\n",
        "    y = nn.gelu(y)\n",
        "    # Default initalization.\n",
        "    # y = nn.Dense(hidden_dim)(y)\n",
        "    # Initialization from https://arxiv.org/pdf/1902.00751.pdf\n",
        "    # y = nn.Dense(hidden_dim, kernel_init=nn.initializers.normal(stddev=1e-3))(y)\n",
        "    # Zero Initialization so that added adapter does not change the representation.\n",
        "    y = nn.Dense(hidden_dim, kernel_init=jax.nn.initializers.zeros)(y)\n",
        "    return x + y  # Residual.\n",
        "\n",
        "# Modified from vision_transformer/vit_jax/models Encoder to add residual adapters.\n",
        "class Encoder(nn.Module):\n",
        "  num_layers: int\n",
        "  mlp_dim: int\n",
        "  num_heads: int\n",
        "  adapter_layers: str  # \u003cMOD\n",
        "  adapter_dim: int  # MOD\u003e\n",
        "  dropout_rate: float = 0.1\n",
        "  attention_dropout_rate: float = 0.1\n",
        "\n",
        "  @nn.compact\n",
        "  def __call__(self, inputs, *, train):\n",
        "    assert inputs.ndim == 3  # (batch, len, emb)\n",
        "\n",
        "    x = AddPositionEmbs(\n",
        "        posemb_init=nn.initializers.normal(stddev=0.02),  # from BERT.\n",
        "        name='posembed_input')(\n",
        "            inputs)\n",
        "    x = nn.Dropout(rate=self.dropout_rate)(x, deterministic=not train)\n",
        "\n",
        "    # Input Encoder\n",
        "    adapter_layers_ids = ids_str2ints(self.adapter_layers)  # \u003cMOD\u003e\n",
        "    for lyr in range(self.num_layers):\n",
        "      if lyr in adapter_layers_ids:  # \u003cMOD\n",
        "        x = ResidualAdapter(\n",
        "            adapter_dim=self.adapter_dim,\n",
        "            name=f'residual_adapter_{lyr}'\n",
        "            )(x)  # MOD\u003e\n",
        "      x = Encoder1DBlock(\n",
        "          mlp_dim=self.mlp_dim,\n",
        "          dropout_rate=self.dropout_rate,\n",
        "          attention_dropout_rate=self.attention_dropout_rate,\n",
        "          name=f'encoderblock_{lyr}',\n",
        "          num_heads=self.num_heads)(\n",
        "              x, deterministic=not train)\n",
        "    encoded = nn.LayerNorm(name='encoder_norm')(x)\n",
        "    return encoded"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "vvZ_4-kJ9Pt3"
      },
      "outputs": [],
      "source": [
        "def get_vit_filename(query):\n",
        "  df = checkpoint.get_augreg_df()\n",
        "  res = df.query(query).filename.unique()\n",
        "  assert len(res) == 1\n",
        "  return res[0]"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0lvqd47g9ZsW"
      },
      "outputs": [],
      "source": [
        "USE_DROPOUT = False\n",
        "VIT_CONFIG_CACHE = {}\n",
        "\n",
        "def get_vit_config(query):\n",
        "  if query not in VIT_CONFIG_CACHE:\n",
        "    filename = get_vit_filename(query)\n",
        "    config = models_config.AUGREG_CONFIGS[filename.split('-')[0]].copy_and_resolve_references()\n",
        "    # Ovewrite with custom Encoder.\n",
        "    config.unlock()\n",
        "    config.encoder = Encoder\n",
        "    config.transformer.adapter_layers = ''\n",
        "    config.transformer.adapter_dim = -1\n",
        "    if not USE_DROPOUT:\n",
        "      config.transformer.dropout_rate = 0.0\n",
        "      config.transformer.attention_dropout_rate = 0.0\n",
        "    config.lock()\n",
        "    VIT_CONFIG_CACHE[query] = config\n",
        "  return VIT_CONFIG_CACHE[query].copy_and_resolve_references()\n",
        "\n",
        "def get_max_num_layers(query):\n",
        "  config = get_vit_config(query)\n",
        "  return config.transformer.num_layers"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "P5_XVkhGhRlp"
      },
      "outputs": [],
      "source": [
        "DATASET_HPARAMS_KEYS_PRERFIX = 'ds_'\n",
        "OPTIMIZER_HPARAMS_KEYS_PRERFIX = 'opt_'\n",
        "\n",
        "def get_exp_config_ti3_chars():\n",
        "  exp_config = ConfigDict()\n",
        "  exp_config.experiment_name = EXPERIMENT_NAME\n",
        "  exp_config.experiments_root_dir = EXPERIMENTS_ROOT_DIR\n",
        "  # Capping the size of an epoch.\n",
        "  exp_config.num_train_batches_between_validations_max = 100\n",
        "  exp_config.num_validations_per_path_training = 5\n",
        "  exp_config.num_validation_batches_max = 10\n",
        "  exp_config.batch_size = 512\n",
        "  exp_config.num_task_iters = 2\n",
        "  exp_config.num_samples_per_task = 8*8\n",
        "  exp_config.mutation_prob = 0.1\n",
        "  exp_config.mutate_adapters = True\n",
        "  # Force finetune last layer norm that technically is part of the head.\n",
        "  exp_config.force_finetune_components = ['encoder_norm']\n",
        "  # Population policy params:\n",
        "  exp_config.policy_class = 'PPDecay'\n",
        "  exp_config.policy_kwargs = {}\n",
        "  # Scorer params:\n",
        "  exp_config.scorer_class = 'ScorerDecay'\n",
        "  exp_config.scorer_kwargs = dict(\n",
        "      base=1.0,\n",
        "      num_params=1_484_162,  # 1_484_162 params in Ti/16 with 3 layers.\n",
        "      )\n",
        "\n",
        "  # Seed models params:\n",
        "  exp_config.load_rand_init = False\n",
        "  exp_config.load_vit_checkpoint = True\n",
        "  exp_config.load_vit_checkpoint_query = 'name==\"Ti/16\" and ds==\"i21k\" and aug==\"light1\" and wd==0.1 and sd==0.0'\n",
        "  exp_config.load_experiment = False\n",
        "  exp_config.load_experiment_dir = ''\n",
        "\n",
        "  # Hyperparameters:\n",
        "  exp_config.models_default_hparams = {\n",
        "      # Default num_classes has no effect since it is always overwritten or used\n",
        "      # for rand init models whose head is always replaced.\n",
        "      'num_classes': 1,\n",
        "      # Set to ids_ints2str(range(max_num_layers)) to activate all adapters.\n",
        "      'adapter_layers': '',\n",
        "      'num_layers': 3,\n",
        "      'adapter_dim': 32,\n",
        "      'opt_lr': 0.01,\n",
        "      'opt_lr_schedule': 'cosine',\n",
        "      'opt_lr_warmup_ratio': 0.1,\n",
        "      'opt_momentum': 0.9,\n",
        "      'opt_nesterov': False,\n",
        "      'ds_image_size': 32,\n",
        "      'ds_area_range_min': 0.05,\n",
        "      'ds_aspect_ratio_range_min': 0.75,\n",
        "      'ds_flip_left_right': True,\n",
        "      'ds_brightness_delta': 0.0,\n",
        "      'ds_contrast_delta': 0.0,\n",
        "      'ds_saturation_delta': 0.0,\n",
        "      'ds_hue_delta': 0.0,\n",
        "  }\n",
        "\n",
        "  exp_config.models_mutation_ranges = {\n",
        "      'num_layers': list(range(1, exp_config.models_default_hparams['num_layers']+1)),\n",
        "  }\n",
        "\n",
        "  # Tasks params:\n",
        "  exp_config.task_configs = get_task_configs()\n",
        "  # Tasks to train on during this experiment.\n",
        "  exp_config.task_names = \\\n",
        "  [\n",
        "   'emnist/digits',\n",
        "   'emnist/letters',\n",
        "   'kmnist',\n",
        "   'mnist',\n",
        "   'omniglot',\n",
        "   'cmaterdb/bangla',\n",
        "   'cmaterdb/devanagari',\n",
        "   'cmaterdb/telugu',\n",
        "   ]\n",
        "  exp_config_validate(exp_config)\n",
        "  return exp_config\n",
        "\n",
        "def get_exp_config_base_deca():\n",
        "  exp_config = ConfigDict()\n",
        "  exp_config.experiment_name = EXPERIMENT_NAME\n",
        "  exp_config.experiments_root_dir = EXPERIMENTS_ROOT_DIR\n",
        "  exp_config.num_train_batches_between_validations_max = 200\n",
        "  exp_config.num_validations_per_path_training = 30\n",
        "  exp_config.num_validation_batches_max = 10\n",
        "  exp_config.batch_size = 256\n",
        "  exp_config.num_task_iters = 2\n",
        "  exp_config.num_samples_per_task = 8*8\n",
        "  exp_config.mutation_prob = 0.1\n",
        "  exp_config.mutate_adapters = True\n",
        "  exp_config.force_finetune_components = ['encoder_norm']\n",
        "  # Population policy params:\n",
        "  exp_config.policy_class = 'PPDecay'\n",
        "  exp_config.policy_kwargs = {}\n",
        "  # Scorer params:\n",
        "  exp_config.scorer_class = 'ScorerDecay'\n",
        "  exp_config.scorer_kwargs = dict(\n",
        "      base=1.0,\n",
        "      num_params=85_652_738,  # 85_652_738 params in B/16\n",
        "      )\n",
        "  # Seed models params:\n",
        "  exp_config.load_rand_init = False\n",
        "  exp_config.load_vit_checkpoint = True\n",
        "  exp_config.load_vit_checkpoint_query = 'name==\"B/16\" and ds==\"i21k\" and aug==\"medium1\" and wd==0.1 and sd==0'\n",
        "  exp_config.load_experiment = False\n",
        "  exp_config.load_experiment_dir = ''\n",
        "  # Hyperparameters:\n",
        "  max_num_layers = get_max_num_layers(exp_config.load_vit_checkpoint_query)\n",
        "  exp_config.models_default_hparams = {\n",
        "      'num_classes': 1,\n",
        "      'adapter_layers': '',\n",
        "      'num_layers': max_num_layers,\n",
        "      'adapter_dim': 32,\n",
        "      'opt_lr': 0.01,\n",
        "      'opt_lr_schedule': 'cosine',\n",
        "      'opt_lr_warmup_ratio': 0.1,\n",
        "      'opt_momentum': 0.9,\n",
        "      'opt_nesterov': False,\n",
        "      'ds_image_size': 80,\n",
        "      'ds_area_range_min': 0.05,\n",
        "      'ds_aspect_ratio_range_min': 0.75,\n",
        "      'ds_flip_left_right': True,\n",
        "      'ds_brightness_delta': 0.0,\n",
        "      'ds_contrast_delta': 0.0,\n",
        "      'ds_saturation_delta': 0.0,\n",
        "      'ds_hue_delta': 0.0,\n",
        "  }\n",
        "\n",
        "  exp_config.models_mutation_ranges = {\n",
        "      'num_layers': list(range(1, exp_config.models_default_hparams['num_layers']+1)),\n",
        "  }\n",
        "\n",
        "  exp_config.task_configs = get_task_configs()\n",
        "  exp_config.task_names = [\n",
        "      'visual_domain_decathlon/imagenet12',\n",
        "      'visual_domain_decathlon/svhn',\n",
        "      'visual_domain_decathlon/cifar100',\n",
        "      'visual_domain_decathlon/gtsrb',\n",
        "      'visual_domain_decathlon/daimlerpedcls',\n",
        "      'visual_domain_decathlon/omniglot',\n",
        "      'visual_domain_decathlon/ucf101',\n",
        "      'visual_domain_decathlon/aircraft',\n",
        "      'visual_domain_decathlon/dtd',\n",
        "      'visual_domain_decathlon/vgg-flowers',\n",
        "      ]\n",
        "  exp_config_validate(exp_config)\n",
        "  return exp_config\n",
        "\n",
        "def get_exp_config_large():\n",
        "  exp_config = ConfigDict()\n",
        "  exp_config.experiment_name = EXPERIMENT_NAME\n",
        "  exp_config.experiments_root_dir = EXPERIMENTS_ROOT_DIR\n",
        "\n",
        "  # 1/10th of epoch for imagenet to have similar ratio of exps reported in:\n",
        "  # https://arxiv.org/abs/2106.10270\n",
        "  exp_config.num_train_batches_between_validations_max = 4000\n",
        "  exp_config.num_validations_per_path_training = 2\n",
        "  # 312 * 32 ~= 10k size of imagenet2012 validation set.\n",
        "  exp_config.num_validation_batches_max = 312\n",
        "  # Reduced batch size to fit in HBM, but increased num batches.\n",
        "  exp_config.batch_size = 32\n",
        "  exp_config.num_task_iters = 32\n",
        "  exp_config.num_samples_per_task = 8*2\n",
        "  exp_config.mutation_prob = 0.1\n",
        "  exp_config.mutate_adapters = True\n",
        "  exp_config.force_finetune_components = ['encoder_norm']\n",
        "  # Population policy params:\n",
        "  exp_config.policy_class = 'PPDecay'\n",
        "  exp_config.policy_kwargs = {}\n",
        "  # Scorer params:\n",
        "  exp_config.scorer_class = 'ScorerDecay'\n",
        "  exp_config.scorer_kwargs = dict(\n",
        "      base=1.0,\n",
        "      num_params=303_303_682,  # Params in L/16\n",
        "      )\n",
        "  # Seed models params:\n",
        "  exp_config.load_rand_init = False\n",
        "  exp_config.load_vit_checkpoint = True\n",
        "  exp_config.load_vit_checkpoint_query = 'name==\"L/16\" and ds==\"i21k\" and aug==\"medium2\" and wd==0.03 and sd==0.1'\n",
        "  # 'name==\"L/16\" and ds==\"i21k\" and aug==\"light1\" and wd==0.1 and sd==0.0'\n",
        "  exp_config.load_experiment = False\n",
        "  exp_config.load_experiment_dir = ''\n",
        "  # Hyperparameters:\n",
        "  max_num_layers = get_max_num_layers(exp_config.load_vit_checkpoint_query)\n",
        "  exp_config.models_default_hparams = {\n",
        "      'num_classes': 1,\n",
        "      'adapter_layers': '',\n",
        "      'num_layers': max_num_layers,\n",
        "      'adapter_dim': 32,\n",
        "      'opt_lr': 0.01,\n",
        "      'opt_lr_schedule': 'cosine',\n",
        "      'opt_lr_warmup_ratio': 0.05,\n",
        "      'opt_momentum': 0.9,\n",
        "      'opt_nesterov': False,\n",
        "      'ds_image_size': 384,\n",
        "      'ds_area_range_min': 0.05,\n",
        "      'ds_aspect_ratio_range_min': 0.75,\n",
        "      'ds_flip_left_right': True,\n",
        "      'ds_brightness_delta': 0.0,\n",
        "      'ds_contrast_delta': 0.0,\n",
        "      'ds_saturation_delta': 0.0,\n",
        "      'ds_hue_delta': 0.0,\n",
        "  }\n",
        "\n",
        "  exp_config.models_mutation_ranges = {}\n",
        "\n",
        "  exp_config.task_configs = get_task_configs()\n",
        "  exp_config.task_names = [\n",
        "      'imagenet2012',\n",
        "      'cifar100',\n",
        "      'cifar10',\n",
        "      ]\n",
        "  exp_config_validate(exp_config)\n",
        "  return exp_config\n",
        "\n",
        "def exp_config_add_auto_tune(exp_config):\n",
        "  exp_config.models_mutation_ranges['adapter_dim'] = [8, 16, 32, 64, 128]\n",
        "  exp_config.models_mutation_ranges['opt_lr'] = [0.0001, 0.0002, 0.0005, 0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1]\n",
        "  exp_config.models_mutation_ranges['opt_lr_schedule'] = ['constant', 'cosine', 'restarts']\n",
        "  exp_config.models_mutation_ranges['opt_lr_warmup_ratio'] = [0.01, 0.02, 0.05, 0.1, 0.2, 0.3, 0.4]\n",
        "  exp_config.models_mutation_ranges['opt_momentum'] = [None, 0.2, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99]\n",
        "  exp_config.models_mutation_ranges['opt_nesterov'] = [True, False]\n",
        "  exp_config.models_mutation_ranges['ds_image_size'] = [ 16 * i for i in (range(1, 1+int(384/16))) ]\n",
        "  exp_config.models_mutation_ranges['ds_area_range_min'] = [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0]\n",
        "  exp_config.models_mutation_ranges['ds_aspect_ratio_range_min'] = [0.25, 0.5, 0.75, 1.0]\n",
        "  exp_config.models_mutation_ranges['ds_flip_left_right'] = [True, False]\n",
        "  exp_config.models_mutation_ranges['ds_brightness_delta'] = [0.0, 0.01, 0.02, 0.05, 0.1, 0.2]\n",
        "  exp_config.models_mutation_ranges['ds_contrast_delta'] = [0.0, 0.01, 0.02, 0.05, 0.1, 0.2]\n",
        "  exp_config.models_mutation_ranges['ds_saturation_delta'] = [0.0, 0.01, 0.02, 0.05, 0.1, 0.2]\n",
        "  exp_config.models_mutation_ranges['ds_hue_delta'] = [0.0, 0.01, 0.02, 0.05, 0.1, 0.2]\n",
        "  return exp_config\n",
        "\n",
        "def exp_config_validate(exp_config):\n",
        "  for khp in exp_config.models_default_hparams:\n",
        "    if khp in exp_config.models_mutation_ranges:\n",
        "      assert exp_config.models_default_hparams[khp] \\\n",
        "          in exp_config.models_mutation_ranges[khp]\n",
        "\n",
        "def exp_config_set_size_scale(exp_config, base_percent:int):\n",
        "  exp_config.scorer_kwargs['base'] = float(base_percent) / 100.0\n",
        "  return exp_config\n",
        "\n",
        "def exp_config_set_baseline_common(exp_config):\n",
        "  parallelism = jax.local_device_count()\n",
        "  assert (int(exp_config.num_samples_per_task / parallelism) ==\n",
        "          exp_config.num_samples_per_task / parallelism)\n",
        "  exp_config.num_validations_per_path_training *= \\\n",
        "      exp_config.num_task_iters \\\n",
        "      * int(exp_config.num_samples_per_task/parallelism)\n",
        "  exp_config.num_task_iters = 1\n",
        "  exp_config.num_samples_per_task = parallelism\n",
        "  exp_config.models_mutation_ranges = {}\n",
        "  exp_config.policy_class = 'PPBaseline'\n",
        "  exp_config.policy_kwargs = {}\n",
        "  exp_config_validate(exp_config)\n",
        "  return exp_config\n",
        "\n",
        "def exp_config_set_baseline_finetune_all(exp_config):\n",
        "  exp_config = exp_config_set_baseline_common(exp_config)\n",
        "  exp_config.mutation_prob = 1.0\n",
        "  exp_config.mutate_adapters = False\n",
        "  exp_config.models_default_hparams['adapter_layers'] = ''\n",
        "  exp_config_validate(exp_config)\n",
        "  return exp_config\n",
        "\n",
        "def exp_config_set_baseline_freeze_bottom_layers(exp_config, num_layers:int):\n",
        "  exp_config = exp_config_set_baseline_common(exp_config)\n",
        "  max_num_layers = exp_config.models_default_hparams['num_layers']\n",
        "  assert max_num_layers \u003e= num_layers\n",
        "  unfrozen_layers = [f'encoderblock_{id}' for id in range(num_layers, max_num_layers)]\n",
        "  exp_config.force_finetune_components = ['encoder_norm'] + unfrozen_layers\n",
        "  exp_config.mutation_prob = 0.0\n",
        "  exp_config.mutate_adapters = False\n",
        "  exp_config.models_default_hparams['adapter_layers'] = ''\n",
        "  exp_config_validate(exp_config)\n",
        "  return exp_config\n",
        "\n",
        "def exp_config_set_baseline_adapters(exp_config, adapter_dim:int):\n",
        "  exp_config = exp_config_set_baseline_common(exp_config)\n",
        "  # To unfreeze all layer norms in the model also set GATHER_LAYER_NORMS to True.\n",
        "  exp_config.force_finetune_components = ['encoder_norm']\n",
        "  exp_config.mutation_prob = 0.0\n",
        "  exp_config.mutate_adapters = True\n",
        "  max_num_layers = exp_config.models_default_hparams['num_layers']\n",
        "  exp_config.models_default_hparams['adapter_layers'] = ids_ints2str(\n",
        "      range(max_num_layers))\n",
        "  exp_config.models_default_hparams['adapter_dim'] = adapter_dim\n",
        "  exp_config_validate(exp_config)\n",
        "  return exp_config"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3MRsK4hvocq8"
      },
      "outputs": [],
      "source": [
        "def get_sample_image(image_size:int, batch_size:int):\n",
        "  return np.zeros((batch_size, image_size, image_size, 3))\n",
        "\n",
        "def get_sample_label(batch_size:int):\n",
        "  return np.zeros(batch_size, dtype=np.int32)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0jPjzRlYYi0x"
      },
      "outputs": [],
      "source": [
        "def get_vit_checkpoint(image_size, query):\n",
        "  filename = get_vit_filename(query)\n",
        "\n",
        "  config = get_vit_config(query)\n",
        "\n",
        "  model = VisionTransformer(**config, num_classes=2)  # num_classes unused.\n",
        "  init_params = model.init(jax.random.PRNGKey(0),\n",
        "                           get_sample_image(image_size=image_size,\n",
        "                                            batch_size=1),\n",
        "                           train=USE_DROPOUT)['params']\n",
        "\n",
        "  params = checkpoint.load_pretrained(\n",
        "    pretrained_path=f'gs://vit_models/augreg/{filename}.npz',\n",
        "    init_params=init_params,\n",
        "    model_config=config)\n",
        "\n",
        "  return params\n",
        "\n",
        "def get_vit_checkpoint_mapped(image_size, query):\n",
        "  params = get_vit_checkpoint(image_size, query)\n",
        "  params = params_model_to_comps(params)\n",
        "  return params\n",
        "\n",
        "def get_reshaped_posembed_component(image_size, query):\n",
        "  params = get_vit_checkpoint_mapped(image_size, query)['posembed_input']\n",
        "  return Component(name='posembed_input',\n",
        "                   params=params,\n",
        "                   train_locks=[NOT_TRAINABLE])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "G_Xsw_tdLIRC"
      },
      "outputs": [],
      "source": [
        "# Parameter mapping.\n",
        "TRANSFORMER_KEYS = set()\n",
        "# Set this to True to unfreeze all the layernorms in the model.\n",
        "# Can be useful for variants of the residual adapters baseline.\n",
        "GATHER_LAYER_NORMS = False\n",
        "\n",
        "def params_model_to_comps(params):\n",
        "  global TRANSFORMER_KEYS\n",
        "  TRANSFORMER_KEYS.update(params['Transformer'].keys())\n",
        "  new_params = {}\n",
        "  for k in params.keys():\n",
        "    if k == 'Transformer':\n",
        "      t_params = params[k]\n",
        "      for t_k in t_params.keys():\n",
        "        new_params[t_k] = t_params[t_k]\n",
        "    else:\n",
        "      new_params[k] = params[k]\n",
        "  params = flax.core.freeze(new_params)\n",
        "\n",
        "  if GATHER_LAYER_NORMS:\n",
        "    params = params.unfreeze()\n",
        "    params['encoder_norm']['gathered'] = {}\n",
        "    for k in params.keys():\n",
        "      if k.startswith('encoderblock_'):\n",
        "        params['encoder_norm']['gathered'][k] = {}\n",
        "        encoderblock_keys = list(params[k].keys())\n",
        "        for ek in encoderblock_keys:\n",
        "          if ek.startswith('LayerNorm_'):\n",
        "            params['encoder_norm']['gathered'][k][ek] = params[k].pop(ek)\n",
        "\n",
        "  return flax.core.freeze(params)\n",
        "\n",
        "def params_comps_to_model(params):\n",
        "  params = params.unfreeze()\n",
        "\n",
        "  if GATHER_LAYER_NORMS:\n",
        "    gathered = params['encoder_norm'].pop('gathered')\n",
        "    for k in gathered:\n",
        "      assert k.startswith('encoderblock_')\n",
        "      assert k in params\n",
        "      for ke in gathered[k].keys():\n",
        "        assert ke.startswith('LayerNorm_')\n",
        "        assert ke not in params[k]\n",
        "        params[k][ke] = gathered[k][ke]\n",
        "\n",
        "  params['Transformer'] = {}\n",
        "  keys = list(params.keys())\n",
        "  assert len(TRANSFORMER_KEYS) != 0\n",
        "  for k in keys:\n",
        "    if k in TRANSFORMER_KEYS:\n",
        "      params['Transformer'][k] = params.pop(k)\n",
        "  return flax.core.freeze(params)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "2Ktk16O9PhYZ"
      },
      "outputs": [],
      "source": [
        "def get_model_kwargs(hparams, exp_config):\n",
        "  # Validate adapters params.\n",
        "  for v in ids_str2ints(hparams['adapter_layers']):\n",
        "    assert v \u003c hparams['num_layers']\n",
        "  return {\n",
        "        'num_classes': int(hparams['num_classes']),\n",
        "        'num_layers': int(hparams['num_layers']),\n",
        "        'image_size': int(hparams['ds_image_size']),\n",
        "        'adapter_layers': str(hparams['adapter_layers']),\n",
        "        'adapter_dim': int(hparams['adapter_dim']),\n",
        "        'query': str(exp_config.load_vit_checkpoint_query),\n",
        "    }\n",
        "\n",
        "def get_vit_model(num_classes, num_layers, adapter_layers, adapter_dim, query):\n",
        "  config = get_vit_config(query)\n",
        "  config['transformer']['num_layers'] = num_layers\n",
        "  config['transformer']['adapter_layers'] = adapter_layers\n",
        "  config['transformer']['adapter_dim'] = adapter_dim\n",
        "  config = FrozenConfigDict(config)\n",
        "  model = VisionTransformer(**config, num_classes=num_classes)\n",
        "  return model\n",
        "\n",
        "def get_vit_model_and_params(\n",
        "    num_classes, num_layers, image_size, adapter_layers, adapter_dim, query,\n",
        "    rng_key=0):\n",
        "  model = get_vit_model(\n",
        "      num_classes, num_layers, adapter_layers, adapter_dim, query)\n",
        "  init_params = model.init(\n",
        "      jax.random.PRNGKey(rng_key),\n",
        "      get_sample_image(image_size=image_size, batch_size=1),\n",
        "      train=USE_DROPOUT)['params']\n",
        "  return model, init_params\n",
        "\n",
        "def get_vit_model_and_params_mapped(**kwargs):\n",
        "  model, init_params = get_vit_model_and_params(**kwargs)\n",
        "  init_params = params_model_to_comps(init_params)\n",
        "  return model, init_params"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "-F8s09QiK8ri"
      },
      "outputs": [],
      "source": [
        "def format_params(a, b):\n",
        "  params = a.copy(b)\n",
        "  assert len(params) == len(a) + len(b)  # Dicts should not overlap.\n",
        "  params = params_comps_to_model(params)\n",
        "  return params"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "FsuttUqHvYGr"
      },
      "outputs": [],
      "source": [
        "def get_optimizer(\n",
        "    lr: float,\n",
        "    lr_schedule: str,\n",
        "    lr_warmup_ratio: float,\n",
        "    momentum: float,\n",
        "    nesterov: bool,\n",
        "    num_train_batches_between_validations: int,\n",
        "    num_validations_per_path_training: int,\n",
        "    ):\n",
        "  if lr_schedule == 'constant':\n",
        "    # Divide by 2 so that average lr is the same as other types.\n",
        "    learning_rate = 0.5 * lr\n",
        "  elif lr_schedule == 'cosine':\n",
        "    train_steps = int(num_train_batches_between_validations\n",
        "                      * num_validations_per_path_training)\n",
        "    learning_rate = optax.warmup_cosine_decay_schedule(\n",
        "        init_value=lr/100.0,\n",
        "        peak_value=lr,\n",
        "        warmup_steps=int(lr_warmup_ratio * train_steps),\n",
        "        decay_steps=train_steps)\n",
        "  elif lr_schedule == 'restarts':\n",
        "    train_steps = num_train_batches_between_validations\n",
        "    repeats = num_validations_per_path_training\n",
        "    kwargs = dict(\n",
        "        init_value=lr/100.0,\n",
        "        peak_value=lr,\n",
        "        warmup_steps=int(lr_warmup_ratio * train_steps),\n",
        "        decay_steps=train_steps,\n",
        "    )\n",
        "    kwargs = [kwargs] * repeats\n",
        "    learning_rate = optax.sgdr_schedule(kwargs)\n",
        "  else:\n",
        "    assert False, f'Invalid lr schedule: {lr_schedule}'\n",
        "\n",
        "  return optax.chain(\n",
        "      optax.clip_by_global_norm(1.0),\n",
        "      optax.sgd(\n",
        "          learning_rate=learning_rate,\n",
        "          momentum=momentum,\n",
        "          nesterov=nesterov,\n",
        "          accumulator_dtype=jnp.bfloat16))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "kqnhOnk5jbnm"
      },
      "outputs": [],
      "source": [
        "class Task():\n",
        "  def __init__(self, name, exp_config):\n",
        "    self.exp_config = exp_config\n",
        "    if name.startswith(NOT_TRAINABLE):\n",
        "      self.name = name\n",
        "      self.private = False\n",
        "      return\n",
        "    self.config = exp_config.task_configs[name]\n",
        "    self.name = name\n",
        "    self.private = self.config.private\n",
        "    self.num_classes = self.get_builder('train').info.features['label'].num_classes\n",
        "    num_train_examples = self.get_builder('train').info.splits[self.config.splits['train']].num_examples\n",
        "    self.train_batch_size = exp_config.batch_size\n",
        "    self.num_train_batches_between_validations = min(\n",
        "        math.ceil(num_train_examples / self.train_batch_size),\n",
        "        exp_config.num_train_batches_between_validations_max)\n",
        "    self.cache_train = num_train_examples \u003c min(100_000, (\n",
        "        exp_config.num_validations_per_path_training\n",
        "        * self.num_train_batches_between_validations\n",
        "        * self.train_batch_size))\n",
        "\n",
        "    num_validation_examples_tot = self.get_builder('validation').info.splits[self.config.splits['validation']].num_examples\n",
        "    num_validation_examples_max = exp_config.batch_size * exp_config.num_validation_batches_max\n",
        "    if num_validation_examples_max \u003c= num_validation_examples_tot:\n",
        "      self.num_validation_batches = exp_config.num_validation_batches_max\n",
        "      self.validation_batch_size = exp_config.batch_size\n",
        "    else:\n",
        "      # Adjust batch_size and num_batches to cover the smaller validation sets.\n",
        "      self.num_validation_batches = math.ceil(\n",
        "          num_validation_examples_tot / exp_config.batch_size)\n",
        "      self.validation_batch_size = math.floor(\n",
        "          num_validation_examples_tot / self.num_validation_batches)\n",
        "      assert num_validation_examples_tot \u003e= (self.num_validation_batches*self.validation_batch_size)\n",
        "    self.num_validation_examples = self.num_validation_batches * self.validation_batch_size\n",
        "\n",
        "    print(f'Task: {self.name}')\n",
        "    print(f'  Train batches between validations: {self.num_train_batches_between_validations}')\n",
        "    print(f'  Validation batches: {self.num_validation_batches}')\n",
        "    print(f'  Validation batch size: {self.validation_batch_size}')\n",
        "    print(f'  Dataset {{\\n{self.config.dataset}}}')\n",
        "    print(f'  Splits {{\\n{self.config.splits}}}')\n",
        "\n",
        "\n",
        "  def get_builder(self, mode):\n",
        "    if type(self.config.dataset) == str:\n",
        "      return TfdsBuildersCache.get(self.config.dataset)\n",
        "    return TfdsBuildersCache.get(self.config.dataset[mode])\n",
        "\n",
        "  def __str__(self):\n",
        "    return f'Task_{self.name}'\n",
        "  def is_trainable(self):\n",
        "    return not self.name.startswith(NOT_TRAINABLE)\n",
        "  def is_private(self):\n",
        "    return self.private\n",
        "\n",
        "  def get_ds(self, mode, hparams):\n",
        "    builder = self.get_builder(mode)\n",
        "    builder.download_and_prepare()\n",
        "    data = builder.as_dataset(\n",
        "        split=self.config.splits[mode],\n",
        "        shuffle_files=mode=='train')\n",
        "\n",
        "    def _pp(data):\n",
        "      im = data['image']\n",
        "      im = tf.cast(im, tf.float32)\n",
        "      # Must have 3 channels.\n",
        "      if im.shape[-1] == 1:\n",
        "        im = tf.squeeze(tf.stack([im] * 3, -1), axis=-2)\n",
        "      assert im.shape[-1] == 3\n",
        "      # Values in range [-1 , 1]\n",
        "      im = im / 127.5 - 1\n",
        "\n",
        "      if mode == 'train':\n",
        "        if hparams['ds_area_range_min'] \u003c 1.0:\n",
        "          channels = im.shape[-1]\n",
        "          begin, size, _ = tf.image.sample_distorted_bounding_box(\n",
        "              tf.shape(im),\n",
        "              tf.zeros([0, 0, 4], tf.float32),\n",
        "              aspect_ratio_range=[hparams['ds_aspect_ratio_range_min'],\n",
        "                                  1.0/hparams['ds_aspect_ratio_range_min']],\n",
        "              area_range=[hparams['ds_area_range_min'], 1.0],\n",
        "              # Overlap with bounding box, the bounding box should anyway\n",
        "              # default defaults to whole image in this case.\n",
        "              min_object_covered=0,\n",
        "              use_image_if_no_bounding_boxes=True)\n",
        "          im = tf.slice(im, begin, size)\n",
        "          # Restore the depth-dimension lost by the above operation.\n",
        "          im.set_shape([None, None, channels])\n",
        "        if hparams['ds_flip_left_right']:\n",
        "          if tf.random.uniform(shape=[]) \u003e 0.5:\n",
        "            im = tf.image.flip_left_right(im)\n",
        "        if hparams['ds_brightness_delta'] \u003e 0.0:\n",
        "          im = tf.image.random_brightness(\n",
        "              im, max_delta=hparams['ds_brightness_delta'])\n",
        "        if hparams['ds_contrast_delta'] \u003e 0.0:\n",
        "          im = tf.image.random_contrast(\n",
        "              im, lower=1 - hparams['ds_contrast_delta'],\n",
        "              upper=1 + hparams['ds_contrast_delta'])\n",
        "        if hparams['ds_saturation_delta'] \u003e 0.0:\n",
        "          im = tf.image.random_saturation(\n",
        "              im, lower=1 - hparams['ds_saturation_delta'],\n",
        "              upper=1 + hparams['ds_saturation_delta'])\n",
        "        if hparams['ds_hue_delta'] \u003e 0.0:\n",
        "          im = tf.image.random_hue(im, max_delta=hparams['ds_hue_delta'])\n",
        "\n",
        "      im = tf.image.resize(im, [hparams['ds_image_size'],\n",
        "                                hparams['ds_image_size']])\n",
        "      im = tf.clip_by_value(im, -1, 1)\n",
        "\n",
        "      return {'image': im, 'label': data['label']}\n",
        "\n",
        "    if mode == 'validation':\n",
        "      data = data.take(self.num_validation_examples)\n",
        "    if mode == 'validation' or (mode == 'train' and self.cache_train):\n",
        "      data = data.cache()\n",
        "    if mode != 'test':\n",
        "      data = data.repeat()\n",
        "    data = data.map(_pp, tf.data.AUTOTUNE)\n",
        "    if mode == 'train':\n",
        "      batch_size = self.train_batch_size\n",
        "    else:\n",
        "      batch_size = self.validation_batch_size\n",
        "    data = data.batch(batch_size)\n",
        "    if mode == 'train':\n",
        "      data = data.shuffle(10)\n",
        "    return tfds.as_numpy(data.prefetch(tf.data.AUTOTUNE))\n",
        "\n",
        "def get_task_factory_fn(exp_config):\n",
        "  def get_task(task_name):\n",
        "    return Task(name=task_name, exp_config=exp_config)\n",
        "  return get_task\n",
        "\n",
        "NOT_TRAINABLE = 'NOT_TRAINABLE'\n",
        "not_trainable = Task(NOT_TRAINABLE, None)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "2A3aZXSWYwCd"
      },
      "outputs": [],
      "source": [
        "def get_num_params(params):\n",
        "  return sum(jax.tree_flatten(\n",
        "      jax.tree_map(lambda p: np.prod(p.shape), params)\n",
        "      )[0])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "rzxoZ4rdQZcA"
      },
      "outputs": [],
      "source": [
        "def params2comps(params, train_locks , name=None):\n",
        "  \"\"\"Convert frozend dict of params to a list of components.\"\"\"\n",
        "  components = []\n",
        "  for k in params:\n",
        "    if name is None or name == k:\n",
        "      c = Component(name=k, params=params[k], train_locks=train_locks)\n",
        "      components.append(c)\n",
        "  return components\n",
        "\n",
        "def params2comp_names(params):\n",
        "  return list(params.keys())"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "DNxjDX13_dm_"
      },
      "outputs": [],
      "source": [
        "def fingerprint_params(params):\n",
        "  return np.sum(np.array(jax.tree_leaves(jax.tree_map(jnp.sum, params))))\n",
        "\n",
        "class Component():\n",
        "  counter = 0\n",
        "  def reset_globals():\n",
        "    Component.counter = 0\n",
        "  def __init__(self, name:str, params, train_locks:set):\n",
        "    self.name = name\n",
        "    self.params = jax.device_get(params)\n",
        "    self.num_params = None\n",
        "    self.train_locks = set(train_locks)\n",
        "    self.id = Component.counter\n",
        "    Component.counter += 1\n",
        "\n",
        "  def __str__(self):\n",
        "    rtn = f'Component: {self.id}\\n  Name: {self.name}'\n",
        "    rtn += f'\\n  Train locks: {self.train_locks}'\n",
        "    rtn += f'\\n  Fingerprint: {self.fingerprint()}'\n",
        "    rtn += f'\\n  Num params: {self.num_params}'\n",
        "    return rtn\n",
        "\n",
        "  def get_num_params(self):\n",
        "    if self.num_params is None:\n",
        "      self.num_params = get_num_params(self.params)\n",
        "    return self.num_params\n",
        "\n",
        "  def fingerprint(self):\n",
        "    return fingerprint_params(self.params)\n",
        "\n",
        "  def is_trainable(self):\n",
        "    return len(self.train_locks) == 0\n",
        "\n",
        "  def clone(self):\n",
        "    return Component(name=self.name,\n",
        "                     params=copy.deepcopy(jax.device_get(self.params)),\n",
        "                     train_locks=set())"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "2FrDHFPU6NV-"
      },
      "outputs": [],
      "source": [
        "class ObjectCache():\n",
        "  def __init__(self, factory_fn):\n",
        "    self.factory_fn = factory_fn\n",
        "    self.cache = {}\n",
        "  def __call__(self, *args, **kwargs):\n",
        "    assert not args\n",
        "    key = json.dumps(kwargs, sort_keys=True)\n",
        "    if key not in self.cache:\n",
        "      self.cache[key] = self.factory_fn(**kwargs)\n",
        "      # print(f\"Added to cache: {self.factory_fn.__name__}({key})  [cache size {len(self.cache)}]\")\n",
        "    return self.cache[key]"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "kwhVlaqRHp-0"
      },
      "outputs": [],
      "source": [
        "def incremental_mutation(value, values_list:list):\n",
        "  assert value in values_list, f'{value} not in {values_list}'\n",
        "  idx = values_list.index(value)\n",
        "  idx += 1 if np.random.uniform() \u003c 0.5 else -1\n",
        "  idx = max(0, min(len(values_list)-1, idx))\n",
        "  return values_list[idx]\n",
        "\n",
        "def random_mutation(values_list:list):\n",
        "  return np.random.choice(values_list)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "8PU5ffvd_gC9"
      },
      "outputs": [],
      "source": [
        "class Path():\n",
        "\n",
        "  def reset_globals(exp_config):\n",
        "    Path.exp_config = exp_config\n",
        "    Path.counter = 0\n",
        "    Path.paths = []\n",
        "    Path.scorer = None  # To be set to scorer of choice during init of exp.\n",
        "    # Cache output of functions calls with same args.\n",
        "    Path.tasks = ObjectCache(get_task_factory_fn(exp_config))\n",
        "    Path.posembed_components = ObjectCache(get_reshaped_posembed_component)\n",
        "    Path.optimizers = ObjectCache(get_optimizer)\n",
        "    Path.models = ObjectCache(get_vit_model)\n",
        "\n",
        "  def __init__(self, hparams, components, parent, task:Task):\n",
        "    self.components = components\n",
        "    self.id = Path.counter\n",
        "    Path.counter += 1\n",
        "    self.task = task\n",
        "    self.parent = parent\n",
        "    self.hparams = hparams\n",
        "    self.metrics = {\n",
        "        'offsprings': 0,\n",
        "        'reloads': 0,\n",
        "        'generation': 0 if parent is None else parent.metrics['generation']+1,\n",
        "        'private': task.is_private(),\n",
        "    }\n",
        "    self.model = Path.models(\n",
        "        num_classes=int(hparams['num_classes']),\n",
        "        num_layers=int(hparams['num_layers']),\n",
        "        adapter_layers=str(hparams['adapter_layers']),\n",
        "        adapter_dim=int(hparams['adapter_dim']),\n",
        "        query=str(self.exp_config.load_vit_checkpoint_query))\n",
        "    Path.paths.append(self)\n",
        "\n",
        "  def __str__(self):\n",
        "    rtn = f\"Path: {self.id}\"\n",
        "    rtn += f\"\\n  Components: {[c.id for c in self.components]}\"\n",
        "    if self.parent:\n",
        "      rtn += f\"\\n  Parent: {self.parent.id}\"\n",
        "    rtn += f\"\\n  Task: {self.task.name}\"\n",
        "    rtn += f\"\\n  Total Parameters: {get_num_params(self.get_all_params())}\"\n",
        "    rtn += f\"\\n  Accounted params: {self.accounted_num_params()}\"\n",
        "    for k,v in self.hparams.items():\n",
        "      rtn += f\"\\n    {k}: {v}\"\n",
        "    for k,v in self.metrics.items():\n",
        "      rtn += f\"\\n    {k}: {v}\"\n",
        "    rtn += f\"\\n  Score: {self.score()}\"\n",
        "    return rtn\n",
        "\n",
        "  def is_trainable(self):\n",
        "    return self.task.is_trainable()\n",
        "\n",
        "  def is_private(self):\n",
        "    return self.task.is_private()\n",
        "\n",
        "  def score(self):\n",
        "    return Path.scorer.score(self)\n",
        "\n",
        "  def get_all_params(self):\n",
        "    params = {}\n",
        "    for c in self.components:\n",
        "      params[c.name] = c.params\n",
        "    return flax.core.freeze(params)\n",
        "\n",
        "  def get_trainable_params(self):\n",
        "    params = {}\n",
        "    for c in self.components:\n",
        "      if c.is_trainable():\n",
        "        params[c.name] = c.params\n",
        "    return flax.core.freeze(params)\n",
        "\n",
        "  def get_fixed_params(self):\n",
        "    params = {}\n",
        "    for c in self.components:\n",
        "      if not c.is_trainable():\n",
        "        params[c.name] = c.params\n",
        "    return flax.core.freeze(params)\n",
        "\n",
        "  def update_trainable(self, trained_params):\n",
        "    trainable_count = 0\n",
        "    for c in self.components:\n",
        "      if c.is_trainable():\n",
        "        trainable_count += 1\n",
        "        assert c.name in trained_params.keys()\n",
        "        c.params = trained_params[c.name]\n",
        "    assert len(trained_params.keys()) == trainable_count, (\n",
        "        f'{len(trained_params.keys())} {trainable_count}')\n",
        "\n",
        "  def accounted_num_params(self):\n",
        "    rtn = 0\n",
        "    for c in self.components:\n",
        "      tl = copy.copy(c.train_locks)\n",
        "      assert type(tl) is set\n",
        "      tl.add(self.task.name)\n",
        "      if NOT_TRAINABLE in tl:\n",
        "        tl.remove(NOT_TRAINABLE)\n",
        "      if len(tl) == 0:\n",
        "        return np.nan\n",
        "      rtn += c.get_num_params() / len(tl)\n",
        "    return rtn\n",
        "\n",
        "  def clone(\n",
        "      self,\n",
        "      task:Task,\n",
        "      ds_hparams,\n",
        "      policy,\n",
        "      mutate:bool):\n",
        "    exp_config = Path.exp_config\n",
        "    assert exp_config == task.exp_config\n",
        "    comps = []\n",
        "    new_hparams = copy.deepcopy(self.hparams)\n",
        "    new_hparams['num_classes'] = task.num_classes\n",
        "    # Overwrite dataset hparams with those sampled for the generation batch.\n",
        "    new_hparams.update(ds_hparams)\n",
        "\n",
        "    def get_component_ref(c, clone):\n",
        "      if c.is_trainable() or clone:\n",
        "        # Clone trainable component.\n",
        "        return c.clone()\n",
        "      # Refer to frozen component.\n",
        "      return c\n",
        "\n",
        "    if mutate:\n",
        "      for k in exp_config.models_mutation_ranges:\n",
        "        if (policy.do_mutate() and\n",
        "            (k in ['num_layers', 'adapter_dim']\n",
        "             or k.startswith(OPTIMIZER_HPARAMS_KEYS_PRERFIX))):\n",
        "          new_hparams[k] = incremental_mutation(\n",
        "              new_hparams[k],\n",
        "              exp_config.models_mutation_ranges[k])\n",
        "      new_hparams['adapter_layers'] = mutate_adapters(\n",
        "          exp_config.mutate_adapters,\n",
        "          new_hparams['adapter_layers'],\n",
        "          new_hparams['num_layers'],\n",
        "          policy)\n",
        "\n",
        "    _, init_params = get_vit_model_and_params_mapped(\n",
        "        **get_model_kwargs(new_hparams, exp_config),\n",
        "        # Use Path.counter so it is deterministic if we rerun same experiment.\n",
        "        rng_key=Path.counter)\n",
        "    new_comp_names = params2comp_names(init_params)\n",
        "    for new_comp_name in new_comp_names:\n",
        "      comp = None\n",
        "      # Attept to reuse matching component from closer ancestor.\n",
        "      ancestor = self\n",
        "      while ancestor is not None:\n",
        "        comps_lookup = {c.name:c for c in ancestor.components}\n",
        "        if new_comp_name in comps_lookup:\n",
        "          # Head must be trainable if no acestor is of same task will fall back\n",
        "          # to random init of correct shape.\n",
        "          if new_comp_name == 'head' and not comps_lookup[new_comp_name].is_trainable():\n",
        "            assert task.name != ancestor.task.name, f'{task.name} != {ancestor.task.name}'\n",
        "            ancestor = ancestor.parent\n",
        "            continue\n",
        "\n",
        "          # Check shapes match otherwise skip.\n",
        "          if jax.tree_map(jnp.shape, init_params[new_comp_name]) != jax.tree_map(jnp.shape, comps_lookup[new_comp_name].params):\n",
        "            if new_comp_name == 'posembed_input':\n",
        "              # Change of image size changed shape of position embeddings,\n",
        "              # this can happen if ds_image_size is tuned,\n",
        "              # continue searching through ancestors for matching size.\n",
        "              assert 'ds_image_size' in exp_config.models_mutation_ranges\n",
        "              assert new_hparams['ds_image_size'] != ancestor.hparams['ds_image_size']\n",
        "              ancestor = ancestor.parent\n",
        "              continue\n",
        "            if new_comp_name.startswith('residual_adapter_'):\n",
        "              # Change of adapter inner dimension changed shape of dense layers,\n",
        "              # this can happen if adapter_dim is tuned,\n",
        "              # continue searching through ancestors for matching size.\n",
        "              assert 'adapter_dim' in exp_config.models_mutation_ranges\n",
        "              assert new_hparams['adapter_dim'] != ancestor.hparams['adapter_dim']\n",
        "              ancestor = ancestor.parent\n",
        "              continue\n",
        "\n",
        "            print(f'WARNING: Shapes do not match for component: {new_comp_name}  {ancestor.task.name}-\u003e{task.name}')\n",
        "            print(jax.tree_map(jnp.shape, init_params[new_comp_name]))\n",
        "            print(jax.tree_map(jnp.shape, comps_lookup[new_comp_name].params))\n",
        "            assert False  # Should not happen in current configuration.\n",
        "\n",
        "          comp = get_component_ref(comps_lookup[new_comp_name],\n",
        "                                   clone=mutate and policy.do_mutate(new_comp_name))\n",
        "          break\n",
        "        ancestor = ancestor.parent\n",
        "\n",
        "      # Get reshaped posembed_input.\n",
        "      if comp is None and new_comp_name == 'posembed_input':\n",
        "        pe_comp = Path.posembed_components(\n",
        "            image_size=new_hparams['ds_image_size'],\n",
        "            query=exp_config.load_vit_checkpoint_query)\n",
        "        comp = get_component_ref(pe_comp, clone=mutate and policy.do_mutate(new_comp_name))\n",
        "\n",
        "      # Otherwise create one from random init params.\n",
        "      if comp is None:\n",
        "        if VERBOSE:\n",
        "          print('Init:', new_comp_name)\n",
        "        # Possible in current configuration.\n",
        "        assert (new_comp_name == 'head'\n",
        "                or new_comp_name.startswith('residual_adapter_'))\n",
        "        comp = params2comps(init_params, train_locks=[], name=new_comp_name)[0]\n",
        "      assert comp is not None\n",
        "      comps.append(comp)\n",
        "\n",
        "    rtn = Path(new_hparams, comps, parent=self, task=task)\n",
        "    if task == self.task:\n",
        "      self.metrics['offsprings'] = self.metrics.get('offsprings', 0) + 1\n",
        "    return rtn\n",
        "\n",
        "  def get_optimizer(self):\n",
        "    return Path.optimizers(\n",
        "        lr=float(self.hparams['opt_lr']),\n",
        "        lr_schedule=str(self.hparams['opt_lr_schedule']),\n",
        "        lr_warmup_ratio=float(self.hparams['opt_lr_warmup_ratio']),\n",
        "        momentum=float(self.hparams['opt_momentum']),\n",
        "        nesterov=bool(self.hparams['opt_nesterov']),\n",
        "        num_train_batches_between_validations=int(\n",
        "            self.task.num_train_batches_between_validations),\n",
        "        num_validations_per_path_training=int(\n",
        "            self.task.exp_config.num_validations_per_path_training),\n",
        "    )"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "C3-3N-LPV09d"
      },
      "outputs": [],
      "source": [
        "def mutate_adapters(mutate, adapter_layers_ids, num_layers, policy, allow_removal=False):\n",
        "  a_ids = set(ids_str2ints(adapter_layers_ids))\n",
        "  if mutate:\n",
        "    for a_id in range(num_layers):\n",
        "      if policy.do_mutate():\n",
        "        if a_id in a_ids:\n",
        "          if allow_removal:\n",
        "            a_ids.remove(a_id)\n",
        "        else:\n",
        "          a_ids.add(a_id)\n",
        "  # Drop adapters of layers dropped by a possible mutation in num_layers.\n",
        "  a_ids = [a_id for a_id in a_ids if a_id \u003c num_layers]\n",
        "  return ids_ints2str(a_ids)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bkRmcJgzUbwN"
      },
      "outputs": [],
      "source": [
        "class Scorer():\n",
        "  def score(self, path):\n",
        "    assert False, 'Not implemented'\n",
        "\n",
        "class ScorerQuality(Scorer):\n",
        "  def score(self, path):\n",
        "    if ('quality' not in path.metrics\n",
        "        or math.isnan(path.metrics['quality'])):\n",
        "      return None\n",
        "    assert path.metrics['quality'] \u003e= 0, \\\n",
        "        f'{path.task.name} {path.metrics[\"quality\"]}'\n",
        "    score = path.metrics['quality']\n",
        "    assert score \u003e= 0\n",
        "    return score\n",
        "\n",
        "class ScorerDecay(Scorer):\n",
        "  def __init__(self, base, num_params):\n",
        "    self.base = base\n",
        "    assert self.base \u003e 0.0\n",
        "    assert self.base \u003c= 1.0\n",
        "    self.num_params = num_params\n",
        "    assert self.num_params \u003e 0\n",
        "  def score(self, path):\n",
        "    if ('quality' not in path.metrics\n",
        "        or math.isnan(path.metrics['quality'])):\n",
        "      return None\n",
        "    assert path.metrics['quality'] \u003e= 0, \\\n",
        "        f'{path.task.name} {path.metrics[\"quality\"]}'\n",
        "    score = path.metrics['quality'] * (self.base ** (path.accounted_num_params() / self.num_params))\n",
        "    assert score \u003e= 0\n",
        "    return score"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ZgIWhEBBHLv5"
      },
      "outputs": [],
      "source": [
        "class PopulationPolicy():\n",
        "  def sample_parent(self, paths):\n",
        "    assert False, 'Not implemented'\n",
        "\n",
        "# Random parent sampling policy.\n",
        "# WARNING: Not used recently, may need updates.\n",
        "class PPRand(PopulationPolicy):\n",
        "  def sample_parent(self, paths):\n",
        "    sampled = paths[np.random.randint(0, len(paths))]\n",
        "    return sampled\n",
        "\n",
        "# Tournament policy similar to https://arxiv.org/abs/1802.01548\n",
        "# WARNING: Not used recently, may need updates.\n",
        "class PPTournament(PopulationPolicy):\n",
        "  def __init__(self, subset_size, max_size, exp_config):\n",
        "    self.subset_size = subset_size\n",
        "    self.max_size = max_size\n",
        "    self.exp_config = exp_config\n",
        "\n",
        "  def reset(self):\n",
        "    self.seed_paths_id = 0\n",
        "\n",
        "  def prune(self, paths):\n",
        "    while len(paths) \u003e self.max_size:\n",
        "      # subset = np.random.choice(paths, self.subset_size, replace=True).tolist()\n",
        "      minp = min(paths, key=lambda x: x.score())\n",
        "      paths.remove(minp)\n",
        "      print(f'REMOVED: {minp.id} {minp.metrics[\"quality\"]:.2f}')\n",
        "      assert minp not in paths\n",
        "    return paths\n",
        "\n",
        "  def do_mutate(self, comp_name=None):\n",
        "    if comp_name:\n",
        "      if comp_name in self.exp_config.force_finetune_components:\n",
        "        return True\n",
        "    return self.exp_config.mutation_prob\u003enp.random.uniform()\n",
        "\n",
        "  def allow_mutations(self, pop):\n",
        "    return not self.seed_paths_id \u003c len(pop.seed_paths)\n",
        "\n",
        "  def sample_parent(self, paths):\n",
        "    subset = np.random.choice(paths, self.subset_size, replace=True).tolist()\n",
        "    sampled = max(subset, key=lambda x: x.score())\n",
        "    return sampled\n",
        "\n",
        "  def sample_path(self, pop, task:Task, ds_hparams):\n",
        "    # Prune population to max_size if necessary.\n",
        "    pop.paths[task] = self.prune(pop.paths[task])\n",
        "    parent = None\n",
        "    mutate = self.allow_mutations(pop)\n",
        "    if self.seed_paths_id \u003c len(pop.seed_paths):\n",
        "      assert mutate == False\n",
        "      parent = pop.seed_paths[self.seed_paths_id]\n",
        "      if VERBOSE:\n",
        "        print('Seed path', parent.id, parent.task.name)\n",
        "      self.seed_paths_id += 1\n",
        "    else:\n",
        "      assert mutate == True\n",
        "    if parent is None and len(pop.paths[task]) \u003c= 1:\n",
        "      # This case is needed to fill the first batch.\n",
        "      parent = random.choice(pop.seed_paths)\n",
        "      if VERBOSE:\n",
        "        print('Rand seed', parent.id, parent.task.name)\n",
        "    if parent is None and len(pop.paths[task]) \u003c self.max_size:\n",
        "      parent = random.choice(pop.paths[task])\n",
        "      if VERBOSE:\n",
        "        print('Rand parent', parent.id, parent.task.name)\n",
        "    if parent is None:\n",
        "      parent = self.sample_parent(pop.paths[task])\n",
        "\n",
        "    child = parent.clone(task, ds_hparams, self, mutate)\n",
        "\n",
        "    # Store record of mutations.\n",
        "    mutations = {}\n",
        "    for k in child.hparams:\n",
        "      if parent.hparams.get(k) != child.hparams[k]:\n",
        "        mutations[k] = (parent.hparams.get(k), child.hparams[k])\n",
        "    child.metrics['mutations'] = json.dumps(mutations)\n",
        "    if mutations:\n",
        "      print(child.id, child.metrics['mutations'])\n",
        "    return child\n",
        "\n",
        "  def sample_ds_hparams(self, pop, task:Task):\n",
        "    mutate = self.allow_mutations(pop)\n",
        "    assert pop.exp_config is self.exp_config\n",
        "    ds_hparams = {}\n",
        "    for key in self.exp_config.models_default_hparams:\n",
        "      if key.startswith(DATASET_HPARAMS_KEYS_PRERFIX):\n",
        "        ds_hparams[key] = self.exp_config.models_default_hparams[key]\n",
        "    best_path = pop.get_best_path(task)\n",
        "    if best_path:\n",
        "      ds_hparams.update(\n",
        "          {k : best_path.hparams[k] for k in ds_hparams if k in best_path.hparams})\n",
        "    if mutate:\n",
        "      for k in ds_hparams:\n",
        "        if (k in self.exp_config.models_mutation_ranges\n",
        "            and pop.policy.do_mutate()):\n",
        "          ds_hparams[k] = incremental_mutation(\n",
        "              ds_hparams[k],\n",
        "              self.exp_config.models_mutation_ranges[k])\n",
        "    return ds_hparams"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "iWZeLq5zZlfb"
      },
      "outputs": [],
      "source": [
        "# muNet decay policy.\n",
        "class PPDecay(PopulationPolicy):\n",
        "  def __init__(self, exp_config):\n",
        "    self.exp_config = exp_config\n",
        "\n",
        "  def reset(self):\n",
        "    self.seed_paths_id = 0\n",
        "\n",
        "  def do_mutate(self, comp_name=None):\n",
        "    if comp_name:\n",
        "      if comp_name in exp_config.force_finetune_components:\n",
        "        return True\n",
        "    return self.exp_config.mutation_prob\u003enp.random.uniform()\n",
        "\n",
        "  def allow_mutations(self, pop):\n",
        "    return not self.seed_paths_id \u003c len(pop.seed_paths)\n",
        "\n",
        "  def sample_parent(self, paths):\n",
        "    sorted_paths = sorted(paths, key=lambda p: p.score(), reverse=True)\n",
        "    sampled = None\n",
        "    for path in sorted_paths:\n",
        "      offsprings = path.metrics['offsprings']\n",
        "      assert not math.isnan(offsprings)\n",
        "      print('\u003e\u003e\u003e considering', path.id, offsprings)\n",
        "      if np.random.uniform() \u003c 0.5 ** offsprings:\n",
        "        print(f'selected', path.id)\n",
        "        sampled = path\n",
        "        break\n",
        "    return sampled\n",
        "\n",
        "  def sample_path(self, pop, task:Task, ds_hparams):\n",
        "    parent = None\n",
        "    mutate = self.allow_mutations(pop)\n",
        "    if self.seed_paths_id \u003c len(pop.seed_paths):\n",
        "      assert mutate == False\n",
        "      parent = pop.seed_paths[self.seed_paths_id]\n",
        "      print('Seed path', parent.id, parent.task.name)\n",
        "      self.seed_paths_id += 1\n",
        "    else:\n",
        "      assert mutate == True\n",
        "\n",
        "    if not parent:\n",
        "      parent = self.sample_parent(pop.paths[task])\n",
        "\n",
        "    if not parent:\n",
        "      parent = np.random.choice(pop.seed_paths + pop.paths[task])\n",
        "      print('\u003e\u003e\u003e seed', parent.id)\n",
        "    child = parent.clone(task, ds_hparams, self, mutate=mutate)\n",
        "\n",
        "    # Store record of mutations.\n",
        "    mutations = {}\n",
        "    for k in child.hparams:\n",
        "      if parent.hparams.get(k) != child.hparams[k]:\n",
        "        mutations[k] = (parent.hparams.get(k), child.hparams[k])\n",
        "    child.metrics['mutations'] = json.dumps(mutations)\n",
        "    if mutations:\n",
        "      print(child.id, child.metrics['mutations'])\n",
        "    return child\n",
        "\n",
        "  def sample_ds_hparams(self, pop, task:Task):\n",
        "    mutate = self.allow_mutations(pop)\n",
        "    assert pop.exp_config is self.exp_config\n",
        "    ds_hparams = {}\n",
        "    for key in self.exp_config.models_default_hparams:\n",
        "      if key.startswith(DATASET_HPARAMS_KEYS_PRERFIX):\n",
        "        ds_hparams[key] = self.exp_config.models_default_hparams[key]\n",
        "    best_path = pop.get_best_path(task)\n",
        "    if best_path:\n",
        "      ds_hparams.update(\n",
        "          {k : best_path.hparams[k] for k in ds_hparams if k in best_path.hparams})\n",
        "    if mutate:\n",
        "      for k in ds_hparams:\n",
        "        if (k in self.exp_config.models_mutation_ranges\n",
        "            and pop.policy.do_mutate()):\n",
        "          ds_hparams[k] = incremental_mutation(\n",
        "              ds_hparams[k],\n",
        "              self.exp_config.models_mutation_ranges[k])\n",
        "    return ds_hparams"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tKZ8VTr84VQu"
      },
      "outputs": [],
      "source": [
        "# Baselines policy.\n",
        "class PPBaseline(PopulationPolicy):\n",
        "  def __init__(self, exp_config):\n",
        "    self.exp_config = exp_config\n",
        "  def reset(self):\n",
        "    return None\n",
        "  def sample_parent(self, paths):\n",
        "    assert False, 'Baselines should not reach evolutionary codepath.'\n",
        "  def do_mutate(self, comp_name=None):\n",
        "    if comp_name:\n",
        "      if comp_name in exp_config.force_finetune_components:\n",
        "        return True\n",
        "    if self.exp_config.mutation_prob == 0.0:\n",
        "      return False\n",
        "    elif self.exp_config.mutation_prob == 1.0:\n",
        "      return True\n",
        "    else:\n",
        "      assert False, self.exp_config.mutation_prob\n",
        "\n",
        "  def sample_path(self, pop, task:Task, ds_hparams):\n",
        "    assert len(pop.paths[not_trainable]) == 1\n",
        "    parent = pop.paths[not_trainable][0]\n",
        "    mutate = True\n",
        "    child = parent.clone(task, ds_hparams, self, mutate)\n",
        "    return child\n",
        "\n",
        "  def sample_ds_hparams(self, pop, task:Task):\n",
        "    ds_hparams = {}\n",
        "    for key in self.exp_config.models_default_hparams:\n",
        "      if key.startswith(DATASET_HPARAMS_KEYS_PRERFIX):\n",
        "        ds_hparams[key] = self.exp_config.models_default_hparams[key]\n",
        "    return ds_hparams"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "YMbYgKd8_nyi"
      },
      "outputs": [],
      "source": [
        "class Population():\n",
        "  def __init__(self, exp_config):\n",
        "    self.paths = defaultdict(list)\n",
        "    self.exp_config = exp_config\n",
        "    self.paths_df = pd.DataFrame()\n",
        "    self.comps_df = pd.DataFrame()\n",
        "    self.policy = globals()[exp_config.policy_class](\n",
        "        **exp_config.policy_kwargs,\n",
        "        exp_config=exp_config)\n",
        "\n",
        "  def get_best_path(self, task:Task):\n",
        "    if len(self.paths[task]) == 0:\n",
        "      return None\n",
        "    return max(self.paths[task], key=lambda p: p.score())\n",
        "\n",
        "  def sample_path(self, task:Task, ds_hparams):\n",
        "    return self.policy.sample_path(pop=self, task=task, ds_hparams=ds_hparams)\n",
        "\n",
        "  def sample_ds_hparams(self, task:Task):\n",
        "    return self.policy.sample_ds_hparams(pop=self, task=task)\n",
        "\n",
        "  def add_train_locks(self, task:Task):\n",
        "    # Check.\n",
        "    for ps in self.paths.values():\n",
        "      for p in ps:\n",
        "        for c in p.components:\n",
        "          assert task.name not in c.train_locks\n",
        "    # Add locks.\n",
        "    paths = self.paths[task]\n",
        "    for p in paths:\n",
        "      for c in p.components:\n",
        "        c.train_locks.add(task.name)\n",
        "  def rm_train_locks(self, task:Task):\n",
        "    # Remove locks.\n",
        "    paths = self.paths[task]\n",
        "    for p in paths:\n",
        "      for c in p.components:\n",
        "        if task.name in c.train_locks:\n",
        "          c.train_locks.remove(task.name)\n",
        "    # Check.\n",
        "    for ps in self.paths.values():\n",
        "      for p in ps:\n",
        "        for c in p.components:\n",
        "          assert task.name not in c.train_locks\n",
        "\n",
        "  def set_seed_paths(self, task:Task):\n",
        "    self.seed_paths = []\n",
        "    for paths in self.paths.values():\n",
        "      for path in paths:\n",
        "        if path.task is task:\n",
        "          continue\n",
        "        if path.task.is_private():\n",
        "          continue\n",
        "        self.seed_paths.append(path)\n",
        "    # random.shuffle(self.seed_paths)\n",
        "    # Deterministic ordering.\n",
        "    self.seed_paths = sorted(self.seed_paths, key=lambda p: p.id, reverse=True)\n",
        "\n",
        "  def start_task(self, task:Task):\n",
        "    self.set_seed_paths(task)\n",
        "    self.policy.reset()\n",
        "    self.rm_train_locks(task)\n",
        "\n",
        "  def end_task(self, task:Task):\n",
        "    # Keep only best one.\n",
        "    best_path = self.get_best_path(task)\n",
        "    assert best_path is not None\n",
        "    self.paths[task] = [best_path]\n",
        "\n",
        "    # Add train locks.\n",
        "    self.add_train_locks(task)\n",
        "\n",
        "    # Store stats before dropping references to trigger garbage collection\n",
        "    # of unused paths, components and parameters.\n",
        "    self.paths_df = self.paths_df.append(paths_to_df(Path.paths),\n",
        "                                         ignore_index=True)\n",
        "    self.comps_df = self.comps_df.append(components_to_df(Path.paths),\n",
        "                                         ignore_index=True)\n",
        "\n",
        "    # Drop unused paths generated in this task iteration for garbage collection.\n",
        "    Path.paths = []\n",
        "    # Simplify ancestor tree to contain only live paths.\n",
        "    live_paths_ids = [p.id for paths in self.paths.values() for p in paths]\n",
        "    # Notice that the simplification is done also for paths of other tasks,\n",
        "    # since they may be pointing to a path of this task that was just pruned.\n",
        "    for path in [path for paths in self.paths.values() for path in paths]:\n",
        "      ancestor = path.parent\n",
        "      if ancestor is None:\n",
        "        continue\n",
        "      while True:\n",
        "        if ancestor.id in live_paths_ids:\n",
        "          path.parent = ancestor\n",
        "          break\n",
        "        ancestor = ancestor.parent"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "4EgQHbawpNcS"
      },
      "outputs": [],
      "source": [
        "pd.set_option('display.expand_frame_repr', False)\n",
        "pd.set_option('display.max_columns', 100)\n",
        "pd.set_option('display.max_rows', 100)\n",
        "\n",
        "def pop_to_df(pop):\n",
        "  return paths_to_df([p for paths in pop.paths.values() for p in paths])\n",
        "\n",
        "def paths_to_df(paths):\n",
        "  # Collect all metrics names.\n",
        "  metrics_keys = set()\n",
        "  hparams_keys = set()\n",
        "  for path in paths:\n",
        "    metrics_keys.update(path.metrics)\n",
        "    hparams_keys.update(path.hparams)\n",
        "\n",
        "  data = defaultdict(list)\n",
        "  for path in paths:\n",
        "    data['task_name'].append(path.task.name)\n",
        "    data['id'].append(path.id)\n",
        "    data['parent_id'].append(path.parent.id if path.parent else -1)\n",
        "    data['parent_task_name'].append(path.parent.task.name if path.parent else None)\n",
        "    data['final_accounted_params'].append(path.accounted_num_params())\n",
        "    data['components'].append('_'.join([str(c.id) for c in path.components]))\n",
        "    for k in hparams_keys:\n",
        "      data[f'hparams.{k}'].append(path.hparams[k] if k in path.hparams else None)\n",
        "    for k in metrics_keys:\n",
        "      data[f'metrics.{k}'].append(path.metrics[k] if k in path.metrics else None)\n",
        "    data['score'].append(path.score())\n",
        "  return pd.DataFrame(data)\n",
        "\n",
        "def components_to_df(paths):\n",
        "  # Collect all components.\n",
        "  comps = set()\n",
        "  for p in paths:\n",
        "    comps.update(p.components)\n",
        "\n",
        "  data = defaultdict(list)\n",
        "  for c in comps:\n",
        "    data['id'].append(c.id)\n",
        "    data['name'].append(c.name)\n",
        "    data['num_params'].append(c.get_num_params())\n",
        "    data['train_locks'].append(','.join(c.train_locks))\n",
        "  return pd.DataFrame(data)\n",
        "\n",
        "def df_leaderboard(df):\n",
        "  df = df.loc[df['task_name'] != NOT_TRAINABLE]\n",
        "  # Place columns on the left for readability.\n",
        "  cols = df.columns.tolist()\n",
        "  for k in ['metrics.test_quality', 'metrics.quality', 'score']:\n",
        "    if k in cols:\n",
        "      cols.remove(k)\n",
        "      cols.insert(1, k)\n",
        "  df = df[cols]\n",
        "  print(df)\n",
        "  print(f'Avg score:        {df[\"score\"].mean():.6f}')\n",
        "  print(f'Avg quality:      {df[\"metrics.quality\"].mean():.6f}')\n",
        "  if 'metrics.test_quality' in df:\n",
        "    print(f'Avg test quality: {df[\"metrics.test_quality\"].mean():.6f}')"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "xmYkTEY-PBgh"
      },
      "outputs": [],
      "source": [
        "def prp(path):\n",
        "  rtn = []\n",
        "  if VERBOSE:\n",
        "    rtn.append(str(path))\n",
        "    for c in path.components:\n",
        "      rtn.append(str(c))\n",
        "  else:\n",
        "    rtn.append(str(path.id))\n",
        "  return '\\n'.join(rtn)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "M_RTFjoOwfpM"
      },
      "outputs": [],
      "source": [
        "def df_write_to_file(df, dir_path, df_name):\n",
        "  filename_df = os.path.join(dir_path, f'{df_name}.csv')\n",
        "  with tf.io.gfile.GFile(filename_df, 'w') as outfile:\n",
        "    df.to_csv(outfile, index=False)\n",
        "\n",
        "def df_read_from_file(dir_path, df_name,):\n",
        "  filename_df = os.path.join(dir_path, f'{df_name}.csv')\n",
        "  with tf.io.gfile.GFile(filename_df, 'r') as infile:\n",
        "    df = pd.read_csv(infile)\n",
        "  # Pandas read_csv() reads empty stings as NaNs. Set NaNs to empty strings in\n",
        "  # columns with type strings/object.\n",
        "  for c in df.columns:\n",
        "    if df[c].dtype == np.object_:\n",
        "        df[c].fillna('', inplace=True)\n",
        "  return df\n",
        "\n",
        "def checkpoint_save(experiment_dir:str, pop:Population, step=None):\n",
        "  comps_params = {}\n",
        "  for c in set([c for paths in pop.paths.values() for p in paths for c in p.components]):\n",
        "    comps_params[f'{c.name}:{c.id}'] = c.params\n",
        "  flax_checkpoints.save_checkpoint(\n",
        "      ckpt_dir=experiment_dir,\n",
        "      target=comps_params,\n",
        "      step=step)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "7yWdy7DskBph"
      },
      "outputs": [],
      "source": [
        "def load_population_from_checkpoint(\n",
        "    pop:Population,\n",
        "    ckpt_dir:str,\n",
        "    population_df,\n",
        "    step=None):\n",
        "  loaded_params = flax.core.freeze(\n",
        "      flax_checkpoints.restore_checkpoint(\n",
        "          ckpt_dir=ckpt_dir, target=None, step=step))\n",
        "  id_2_comp = {}\n",
        "  for k in loaded_params.keys():\n",
        "    name,id = k.split(':')\n",
        "    c = Component(name=name, params=loaded_params[k], train_locks=[])\n",
        "    c.id = int(id)\n",
        "    assert c.id not in id_2_comp\n",
        "    id_2_comp[c.id] = c\n",
        "  # For parent assignemt.\n",
        "  id_2_path = {}\n",
        "  path_2_parent_id = {}\n",
        "  for index, row in population_df.iterrows():\n",
        "    comps_ids = row['components'].split('_')\n",
        "    comps = []\n",
        "    for id in comps_ids:\n",
        "      comps.append(id_2_comp[int(id)])\n",
        "    task_name = row['task_name']\n",
        "    if task_name == NOT_TRAINABLE:\n",
        "      task = not_trainable\n",
        "    else:\n",
        "      task = Path.tasks(task_name=task_name)\n",
        "    # Retrieve hparams and metrics.\n",
        "    hparams = {}\n",
        "    metrics = {}\n",
        "    for k in row.keys():\n",
        "      if k.startswith('hparams.'):\n",
        "        hparams[k[len('hparams.'):]] = row[k]\n",
        "      if k.startswith('metrics.'):\n",
        "        metrics[k[len('metrics.'):]] = row[k]      \n",
        "    if type(hparams['adapter_layers']) is float:\n",
        "      if math.isnan(hparams['adapter_layers']):\n",
        "        hparams['adapter_layers'] = ''\n",
        "      else:\n",
        "        hparams['adapter_layers'] = str(int(hparams['adapter_layers']))\n",
        "    metrics['reloads'] = metrics['reloads'] + 1\n",
        "    # Create path.\n",
        "    path = Path(\n",
        "        hparams=hparams,\n",
        "        components=comps,\n",
        "        parent=None,\n",
        "        task=task,\n",
        "        )\n",
        "    path.metrics = metrics\n",
        "    path.id = int(row['id'])\n",
        "    # Add train locks.\n",
        "    for c in path.components:\n",
        "      c.train_locks.add(task_name)\n",
        "    pop.paths[task].append(path)\n",
        "    assert path.id not in id_2_path\n",
        "    id_2_path[path.id] = path\n",
        "    if task_name != NOT_TRAINABLE:\n",
        "      path_2_parent_id[path] = int(row['parent_id'])\n",
        "\n",
        "  # Set parents.\n",
        "  for path, parent_id in path_2_parent_id.items():\n",
        "    path.parent = id_2_path[parent_id]\n",
        "  Path.counter = 1 + max([id for id in id_2_path])\n",
        "  Component.counter = 1 + max([id for id in id_2_comp])\n",
        "  Path.paths = []"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "m6vSjSIvNPq4"
      },
      "outputs": [],
      "source": [
        "@partial(jax.jit, static_argnames='model')\n",
        "def eval_step(params, images, labels, model):\n",
        "  logits = model.apply({'params': params}, images, train=USE_DROPOUT)\n",
        "  # Avg accuracy on the batch.\n",
        "  return (logits.argmax(axis=-1) == labels).mean()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "_m2xl8XR7cWy"
      },
      "outputs": [],
      "source": [
        "@partial(jax.jit, static_argnames=['model', 'optimizer'], donate_argnums=[0, 2])\n",
        "def train_step(params, fixed_params, opt_state, images, labels, model, optimizer):\n",
        "  def loss_fn(params, fixed_params, images, labels):\n",
        "    logits = model.apply({'params': format_params(params, fixed_params)},\n",
        "                         images, train=USE_DROPOUT)\n",
        "    labels = jax.nn.one_hot(labels, logits.shape[-1])\n",
        "    return -jnp.mean(jnp.sum(labels * nn.log_softmax(logits), axis=-1))\n",
        "  grads = jax.grad(loss_fn)(params, fixed_params, images, labels)\n",
        "  updates, opt_state = optimizer.update(grads, opt_state, params=params)\n",
        "  params = optax.apply_updates(params, updates)\n",
        "  return params, opt_state"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "solv_icgGVBW"
      },
      "outputs": [],
      "source": [
        "LOOP_START = time.time()\n",
        "\n",
        "def train_loop(paths, ds_train, ds_validation, devices, exp_config):\n",
        "  global LOOP_START\n",
        "  timing = {'start_time': time.time(),\n",
        "            'start_time_loop': LOOP_START}\n",
        "  task = paths[0].task\n",
        "  # The following values should be shared by all paths in this generation batch.\n",
        "  for path in paths:\n",
        "    assert task == path.task\n",
        "    assert paths[0].hparams['ds_image_size'] == path.hparams['ds_image_size']\n",
        "\n",
        "  for p_id, path in enumerate(paths):\n",
        "    if VERBOSE:\n",
        "      print('Parent')\n",
        "      print(prp(path.parent))\n",
        "      print(prp(path))\n",
        "    path.device = devices[p_id % len(devices)]\n",
        "    path.optimizer = path.get_optimizer()\n",
        "    path.optimizer_init_fn = jax.jit(\n",
        "        path.optimizer.init,\n",
        "        device=path.device)\n",
        "    path.best_params_local = None\n",
        "    path.best_quality = None\n",
        "    path.best_score = path.parent.score() if path.task is path.parent.task else -np.inf\n",
        "    path.evals = []\n",
        "\n",
        "    # Launch parallel compilation of eval and train step functions.\n",
        "    params_local = path.get_trainable_params()\n",
        "    path.compile_params_device = jax.device_put(params_local, path.device)\n",
        "    path.compile_fixed_params_device = jax.device_put(\n",
        "        path.get_fixed_params(),\n",
        "        path.device)\n",
        "    path.compile_train = Thread(\n",
        "        target=train_step,\n",
        "        args=(path.compile_params_device,\n",
        "              path.compile_fixed_params_device,\n",
        "              path.optimizer_init_fn(params_local),\n",
        "              get_sample_image(\n",
        "                  image_size=path.hparams['ds_image_size'],\n",
        "                  batch_size=task.train_batch_size),\n",
        "              get_sample_label(\n",
        "                  batch_size=task.train_batch_size),\n",
        "              path.model,\n",
        "              path.optimizer))\n",
        "    path.compile_eval = Thread(\n",
        "        target=eval_step,\n",
        "        args=(\n",
        "            format_params(\n",
        "                path.compile_params_device,\n",
        "                path.compile_fixed_params_device),\n",
        "            get_sample_image(\n",
        "                image_size=path.hparams['ds_image_size'],\n",
        "                batch_size=task.validation_batch_size),\n",
        "            get_sample_label(\n",
        "                batch_size=task.validation_batch_size),\n",
        "            path.model))\n",
        "    path.compile_eval.start()\n",
        "\n",
        "  for path in paths:\n",
        "    path.compile_eval.join()\n",
        "    del path.compile_eval\n",
        "    timing['end_compile_eval'] = time.time()\n",
        "    path.compile_train.start()\n",
        "\n",
        "  iter_ds_validation = iter(ds_validation)\n",
        "  # TRAIN\n",
        "  for t_step, batch in zip(\n",
        "      range(exp_config.num_validations_per_path_training\n",
        "            * task.num_train_batches_between_validations),\n",
        "      ds_train,\n",
        "  ):\n",
        "    for p_id, path in enumerate(paths):\n",
        "      if t_step == 0:\n",
        "        path.compile_train.join()\n",
        "        del path.compile_train\n",
        "        del path.compile_params_device\n",
        "        del path.compile_fixed_params_device\n",
        "        timing['end_compile'] = time.time()\n",
        "        path.params_device = jax.device_put(\n",
        "            path.get_trainable_params(),\n",
        "            path.device)\n",
        "        path.fixed_params_device = jax.device_put(\n",
        "            path.get_fixed_params(),\n",
        "            path.device)\n",
        "        path.opt_state_device = path.optimizer_init_fn(path.params_device)\n",
        "        t_step_0_time = time.time()\n",
        "\n",
        "      path.params_device, path.opt_state_device = train_step(\n",
        "          path.params_device,\n",
        "          path.fixed_params_device,\n",
        "          path.opt_state_device,\n",
        "          batch['image'],\n",
        "          batch['label'],\n",
        "          path.model,\n",
        "          path.optimizer)\n",
        "      if t_step == 0 and time.time() - t_step_0_time \u003e 3 and p_id \u003e 3:\n",
        "        # Notice first step or first paths may overlap with compilation joined\n",
        "        # in the first step of later paths, so this may fire at times.\n",
        "        print(f'WARNING: First train step took: {time.time()-t_step_0_time:.2f} s')\n",
        "\n",
        "    # EVAL\n",
        "    if (t_step+1) % task.num_train_batches_between_validations == 0:\n",
        "      first_eval = ((t_step+1) == task.num_train_batches_between_validations)\n",
        "      if first_eval:\n",
        "        timing['start_eval'] = time.time()\n",
        "      for path in paths:\n",
        "        path.accs = []\n",
        "      for e_step, batch in zip(\n",
        "          range(task.num_validation_batches),\n",
        "          iter_ds_validation,\n",
        "          ):\n",
        "        for p_id, path in enumerate(paths):\n",
        "          if first_eval and e_step == 0:\n",
        "            e_step_0_time = time.time()\n",
        "          path.accs.append(\n",
        "              eval_step(\n",
        "                  format_params(path.params_device, path.fixed_params_device),\n",
        "                  batch['image'],\n",
        "                  batch['label'],\n",
        "                  path.model))\n",
        "          if first_eval and e_step == 0 and time.time() - e_step_0_time \u003e 1:\n",
        "            print(f'WARNING: First eval step took: {time.time()-e_step_0_time:.2f} s')\n",
        "\n",
        "      qs = []\n",
        "      eval_idx = (t_step+1) // task.num_train_batches_between_validations\n",
        "      for path in paths:\n",
        "        quality = np.mean(path.accs)\n",
        "        del path.accs\n",
        "        qs.append(f'{quality:.4f}')\n",
        "        path.evals.append(quality)\n",
        "        # Set quality in metrics for current score computation.\n",
        "        path.metrics['quality'] = quality\n",
        "        path_score = path.score()\n",
        "        if path_score \u003e path.best_score:\n",
        "          path.best_params_local = jax.device_get(path.params_device)\n",
        "          path.best_score = path_score\n",
        "          path.best_quality = quality\n",
        "          qs[-1] += '*'\n",
        "      train_time = time.time() - timing['end_compile']\n",
        "      avg_path_time = (train_time / eval_idx) / len(paths)\n",
        "      print(('\\t'.join(qs) + f'\\t\u003c Eval {eval_idx}').expandtabs(8),\n",
        "            f'tot:{train_time:.1f}s', f'avg/path:{avg_path_time:.1f}s')\n",
        "\n",
        "      if first_eval:\n",
        "        timing['end_eval'] = time.time()\n",
        "\n",
        "  for path in paths:\n",
        "    del path.params_device\n",
        "    del path.fixed_params_device\n",
        "    del path.opt_state_device\n",
        "    del path.optimizer\n",
        "    del path.optimizer_init_fn\n",
        "\n",
        "  timing['end_train'] = time.time()\n",
        "\n",
        "  loop_time = timing['start_time'] - LOOP_START\n",
        "  compile_time = timing['end_compile'] - timing['start_time']\n",
        "  compile_eval_time = timing['end_compile_eval'] - timing['start_time']\n",
        "  compile_train_time = timing['end_compile'] - timing['end_compile_eval']\n",
        "  train_time = timing['end_train'] - timing['end_compile']\n",
        "  eval_time = timing['end_eval'] - timing['start_eval']\n",
        "  LOOP_START = time.time()\n",
        "\n",
        "  for path in paths:\n",
        "    path.metrics['loop_time'] = loop_time\n",
        "    path.metrics['compile_time'] = compile_time\n",
        "    path.metrics['train_time'] = train_time\n",
        "    path.metrics['eval_time'] = eval_time\n",
        "    path.metrics['start_time'] = timing['start_time']\n",
        "    path.metrics['start_time_loop'] = timing['start_time_loop']\n",
        "    path.metrics['end_time'] = time.time()\n",
        "    num_all_params = get_num_params(path.get_all_params())\n",
        "    num_trainable_params = get_num_params(path.get_trainable_params())\n",
        "    path.metrics['trainable_params_ratio'] = num_trainable_params/num_all_params\n",
        "    path.metrics['num_trainable_params'] = num_trainable_params\n",
        "    path.metrics['quality'] = max(path.evals)\n",
        "    path.metrics['evals'] = json.dumps([float(v) for v in path.evals])\n",
        "    path.metrics['training_accounted_params'] = path.accounted_num_params()\n",
        "    path.metrics['training_score'] = path.score()\n",
        "\n",
        "    if path.best_params_local:\n",
        "      path.metrics['improved'] = True\n",
        "      path.update_trainable(path.best_params_local)\n",
        "      assert path.best_quality == path.metrics['quality']\n",
        "      assert path.best_score == path.metrics['training_score']\n",
        "    else:\n",
        "      path.metrics['improved'] = False\n",
        "      # Path will be early pruned if not an improvement, so skip parameters update.\n",
        "      assert path.best_params_local == None\n",
        "      assert path.best_quality == None\n",
        "\n",
        "    del path.best_params_local\n",
        "    del path.best_score\n",
        "    del path.best_quality\n",
        "    del path.evals\n",
        "\n",
        "    if VERBOSE:\n",
        "      print('UPDATED:')\n",
        "      print(prp(path))\n",
        "\n",
        "  pqs = []\n",
        "  qs = []\n",
        "  psc = []\n",
        "  sc = []\n",
        "  for path in paths:\n",
        "    if path.task is path.parent.task:\n",
        "      pqs.append(f'{path.parent.metrics[\"quality\"]:.4f}')\n",
        "      psc.append(f'{path.parent.score():.4f}')\n",
        "    else:\n",
        "      pqs.append('NEW')\n",
        "      psc.append('NEW')\n",
        "    qs.append(f'{path.metrics[\"quality\"]:.4f}')\n",
        "    sc.append(f'{path.score():.4f}')\n",
        "    if path.metrics['improved']:\n",
        "      sc[-1] += '+'\n",
        "\n",
        "  print(('\\t'.join([f'{path.parent.id}' for path in paths]) +\n",
        "        '\\t\u003c Parent id').expandtabs(8))\n",
        "  print(('\\t'.join([f'{path.id}' for path in paths]) +\n",
        "        '\\t\u003c Path id').expandtabs(8))\n",
        "  print(('\\t'.join(pqs) + '\\t\u003c Parent best quality').expandtabs(8))\n",
        "  print(('\\t'.join(qs) + '\\t\u003c Path best quality').expandtabs(8))\n",
        "  print(('\\t'.join(psc) + '\\t\u003c Parent score').expandtabs(8))\n",
        "  print(('\\t'.join(sc) + '\\t\u003c Path score').expandtabs(8))\n",
        "\n",
        "  print('time\\tINIT\\tCOMPevl\\tCOMPtrn\\tTRN+EVL\\t1stEVAL'.expandtabs(8))\n",
        "  print(f'(s)\\t{loop_time:.1f}\\t{compile_eval_time:.1f}\\t{compile_train_time:.1f}\\t{train_time:.1f}\\t{eval_time:.1f}'.expandtabs(8))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "p-3gYmECRBOU"
      },
      "outputs": [],
      "source": [
        "# Run a full paths sampling iteration for a task.\n",
        "def task_iter(task, devices, pop:Population, exp_config:FrozenConfigDict):\n",
        "  num_devices = len(devices)\n",
        "  # Track best path.\n",
        "  best_path = pop.get_best_path(task)\n",
        "  num_gen_batches = math.ceil(exp_config.num_samples_per_task/num_devices)\n",
        "  for generation_batch_id in range(num_gen_batches):\n",
        "    print('----')\n",
        "    print(f'GENERATION: [{generation_batch_id+1}/{num_gen_batches}]')\n",
        "    ds_hparams = pop.sample_ds_hparams(task)\n",
        "    ds_train = task.get_ds('train', ds_hparams)\n",
        "    ds_validation = task.get_ds('validation', ds_hparams)\n",
        "    paths = [pop.sample_path(task, ds_hparams) for _ in range(num_devices)]\n",
        "    train_loop(paths, ds_train, ds_validation, devices, exp_config)\n",
        "    for path in paths:\n",
        "      if path.metrics['improved']:\n",
        "        assert path not in pop.paths\n",
        "        pop.paths[task].append(path)\n",
        "    # Track best path.\n",
        "    curr_best_path = pop.get_best_path(task)\n",
        "    if curr_best_path != best_path:\n",
        "      if best_path:\n",
        "        assert curr_best_path.score() \u003e= best_path.score()\n",
        "      best_path = curr_best_path\n",
        "      best_path.metrics['new_best'] = True\n",
        "      print(f'Best id:{best_path.id}',\n",
        "            f'score:{best_path.score():.4f}',\n",
        "            f'quality:{best_path.metrics[\"quality\"]:.4f}',\n",
        "            f'gen:{generation_batch_id}',\n",
        "            f'\\n{best_path.hparams}')\n",
        "  assert best_path in pop.paths[task]"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "JVT8nwIWMAVf"
      },
      "outputs": [],
      "source": [
        "TEST_MODELS_IMMUTABILITY = False\n",
        "\n",
        "# Run final eval on test set.\n",
        "def run_test_eval(path, ds_test):\n",
        "  # Running on same device should allow to reuse the fn compiled for validation\n",
        "  # if batch size matches.\n",
        "  params = path.get_all_params()\n",
        "  params_device = jax.device_put(params_comps_to_model(params), path.device)\n",
        "  acc_sum = []\n",
        "  tot_num_samples = 0\n",
        "  # Warning: if repeat() is called on this dataset, then this loop never ends.\n",
        "  for batch in ds_test:\n",
        "    acc_avg = jax.device_get(\n",
        "        eval_step(\n",
        "            params_device,\n",
        "            batch['image'],\n",
        "            batch['label'],\n",
        "            path.model))\n",
        "    batch_size = batch['image'].shape[0]\n",
        "    # Need to recompute sum because last batch can have different size to allow\n",
        "    # for exact eval on the test set.\n",
        "    acc_sum.append(acc_avg * batch_size)\n",
        "    tot_num_samples += batch_size\n",
        "  del params_device\n",
        "  acc_avg = np.sum(acc_sum) / tot_num_samples\n",
        "  if 'test_quality' in path.metrics:\n",
        "    assert np.isclose(path.metrics['test_quality'], acc_avg), \\\n",
        "        f'{path.task.name} {path.metrics[\"test_quality\"]} {acc_avg}'\n",
        "  path.metrics['test_quality'] = acc_avg\n",
        "\n",
        "def run_all_test_evals(pop):\n",
        "  threads = []\n",
        "  for path in [path for paths in pop.paths.values() for path in paths if path.is_trainable()]:\n",
        "    if 'test_quality' in path.metrics and not TEST_MODELS_IMMUTABILITY:\n",
        "      continue\n",
        "    ds_test = path.task.get_ds('test', path.hparams)\n",
        "    thread = Thread(target=run_test_eval, args=(path, ds_test))\n",
        "    thread.start()\n",
        "    threads.append(thread)\n",
        "  for thread in threads:\n",
        "    thread.join()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "WjIr72eO0oBq"
      },
      "outputs": [],
      "source": [
        "def reset_globals(exp_config):\n",
        "  Path.reset_globals(exp_config)\n",
        "  Component.reset_globals()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Zc1DDzKl3sjc"
      },
      "outputs": [],
      "source": [
        "def init_population(exp_config:FrozenConfigDict, continue_exp:bool):\n",
        "  reset_globals(exp_config)\n",
        "\n",
        "  Path.scorer = globals()[exp_config.scorer_class](**exp_config.scorer_kwargs)\n",
        "  pop = Population(exp_config=exp_config)\n",
        "\n",
        "  def reload_state(load_exp_dir):\n",
        "    pop.paths_df = df_read_from_file(\n",
        "        load_exp_dir,\n",
        "        df_name='paths')\n",
        "    pop.comps_df = df_read_from_file(\n",
        "        load_exp_dir,\n",
        "        df_name='components')\n",
        "    df_reloaded_population = df_read_from_file(\n",
        "        load_exp_dir,\n",
        "        df_name='population')\n",
        "    load_population_from_checkpoint(\n",
        "        pop,\n",
        "        load_exp_dir,\n",
        "        df_reloaded_population)\n",
        "    print('Loaded models from', load_exp_dir, ':')\n",
        "    df_leaderboard(pop_to_df(pop))\n",
        "    Path.counter = 1 + pop.paths_df['id'].max()\n",
        "    Component.counter = 1 + pop.comps_df['id'].max()\n",
        "\n",
        "  # Load population from previous experiment.\n",
        "  if continue_exp:\n",
        "    load_exp_dir = exp_config.experiment_dir\n",
        "    reload_state(load_exp_dir)\n",
        "    return pop\n",
        "  elif exp_config.load_experiment:\n",
        "    load_exp_dir = exp_config.load_experiment_dir\n",
        "    reload_state(load_exp_dir)\n",
        "\n",
        "  # Add new seed models.\n",
        "  if not continue_exp and (\n",
        "      exp_config.load_rand_init or exp_config.load_vit_checkpoint):\n",
        "    hparams = exp_config.models_default_hparams.as_configdict()\n",
        "    # Add a randomly initialized model.\n",
        "    if exp_config.load_rand_init:\n",
        "      _, path0_params = get_vit_model_and_params_mapped(\n",
        "          **get_model_kwargs(hparams, exp_config))\n",
        "      path = Path(\n",
        "          hparams,\n",
        "          params2comps(path0_params, train_locks=[NOT_TRAINABLE]),\n",
        "          parent=None,\n",
        "          task=not_trainable)\n",
        "      pop.paths[not_trainable].append(path)\n",
        "    # Add model loaded from checkpoint.\n",
        "    if exp_config.load_vit_checkpoint:\n",
        "      path_params = get_vit_checkpoint_mapped(\n",
        "          hparams['ds_image_size'],\n",
        "          exp_config.load_vit_checkpoint_query)\n",
        "      path = Path(hparams, params2comps(\n",
        "          path_params,\n",
        "          train_locks=[NOT_TRAINABLE]),\n",
        "          parent=None,\n",
        "          task=not_trainable)\n",
        "      pop.paths[not_trainable].append(path)\n",
        "\n",
        "  return pop"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3HlykGxjGpvg"
      },
      "outputs": [],
      "source": [
        "# Experiment setup.\n",
        "def continue_exp(exp_dir):\n",
        "  # Load configs.\n",
        "  print('CONTINUING EXISTING EXPERIMENT:', exp_dir)\n",
        "  load_config_dict_file = os.path.join(exp_dir, 'config.json')\n",
        "  exp_config = FrozenConfigDict(json.load(\n",
        "      tf.io.gfile.GFile(load_config_dict_file, 'r')))\n",
        "  pop = init_population(exp_config, continue_exp=True)\n",
        "  # Get loop_id from checkpoint file name.\n",
        "  checkpoint_path = flax_checkpoints.latest_checkpoint(exp_dir)\n",
        "  matched = re.findall(r'checkpoint_([0-9]+)$', checkpoint_path)\n",
        "  assert len(matched)==1\n",
        "  loop_id = int(matched[0])\n",
        "  print('FROM CHECKPOINT:', loop_id)\n",
        "  assert exp_config.experiment_dir == exp_dir\n",
        "  return pop, exp_config, loop_id\n",
        "\n",
        "def setup_new_experiment(exp_config):\n",
        "  # Finalize and save config.\n",
        "  exp_config.experiment_id = exp_config.experiment_name \\\n",
        "      + datetime.datetime.strftime(\n",
        "          datetime.datetime.now(), ':%Y-%m-%d-%H-%M-%S')\n",
        "  exp_config.experiment_dir = os.path.join(exp_config.experiments_root_dir,\n",
        "                                           exp_config.experiment_id)\n",
        "  exp_config = FrozenConfigDict(exp_config)\n",
        "  pop = init_population(exp_config, continue_exp=False)\n",
        "  print('NEW EXPERIMENT:', exp_config.experiment_dir)\n",
        "  return pop, exp_config, 0"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "PXCBJxPR4zd1"
      },
      "outputs": [],
      "source": [
        "def setup_exp():\n",
        "  if BENCHMARK == 'ViT tiny 3 layers / characters benchmark':\n",
        "    exp_config = get_exp_config_ti3_chars()\n",
        "    exp_config.experiment_name += ':t3-chars'\n",
        "  elif BENCHMARK == 'ViT base / decathlon benchmark':\n",
        "    exp_config = get_exp_config_base_deca()\n",
        "    exp_config.experiment_name += ':b-deca'\n",
        "  elif BENCHMARK == 'ViT large / ViT benchmark':\n",
        "    exp_config = get_exp_config_large()\n",
        "    exp_config.experiment_name += ':l-vit'\n",
        "  else:\n",
        "    assert False, BENCHMARK\n",
        "\n",
        "  if AUTO_TUNE:\n",
        "    assert CONFIGURATION == 'muNet' or CONFIGURATION.startswith('Size scale:')\n",
        "    exp_config.experiment_name += ':autotune'\n",
        "    exp_config = exp_config_add_auto_tune(exp_config)\n",
        "\n",
        "  if CONFIGURATION == 'Finetune all':\n",
        "    exp_config = exp_config_set_baseline_finetune_all(exp_config)\n",
        "    exp_config.experiment_name += ':finetune'\n",
        "  elif CONFIGURATION.startswith('Freeze bottom layers'):\n",
        "    num_layers = int(CONFIGURATION.split(':')[1])\n",
        "    exp_config = exp_config_set_baseline_freeze_bottom_layers(\n",
        "        exp_config, num_layers)\n",
        "    exp_config.experiment_name += f':freeze{num_layers}'\n",
        "  elif CONFIGURATION.startswith('Adapters:'):\n",
        "    adapter_dim = int(CONFIGURATION.split(':')[1])\n",
        "    exp_config = exp_config_set_baseline_adapters(exp_config, adapter_dim)\n",
        "    exp_config.experiment_name += f':adapters{adapter_dim}'\n",
        "  elif CONFIGURATION.startswith('Size scale:'):\n",
        "    base_percent = int(CONFIGURATION.split(':')[1])\n",
        "    exp_config = exp_config_set_size_scale(exp_config, base_percent)\n",
        "    exp_config.experiment_name += f':size{base_percent}'\n",
        "  elif CONFIGURATION == 'muNet':\n",
        "    exp_config.experiment_name += f':munet'\n",
        "  else:\n",
        "    assert False, CONFIGURATION\n",
        "\n",
        "  if AUTO_CONTINUE:\n",
        "    exp_dir_prefix = os.path.join(exp_config.experiments_root_dir,\n",
        "                                  exp_config.experiment_name)\n",
        "    matching_dirs = tf.io.gfile.GFile(exp_dir_prefix + '*')\n",
        "    assert len(matching_dirs) \u003c 2, \\\n",
        "        f'Multiple dirs matched for auto restart {matching_dirs}'\n",
        "    if len(matching_dirs) == 1:\n",
        "      print('AUTO CONTINE')\n",
        "      return continue_exp(matching_dirs[0])\n",
        "\n",
        "  return setup_new_experiment(exp_config)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "YfRWFSDannwv"
      },
      "outputs": [],
      "source": [
        "# Main loop over tasks.\n",
        "pop, exp_config, loop_id = setup_exp()\n",
        "\n",
        "devices = jax.local_devices()\n",
        "print('DEVICE COUNT:', len(devices))\n",
        "num_tasks = len(exp_config.task_names)\n",
        "num_loops = exp_config.num_task_iters * num_tasks\n",
        "for _ in range(num_loops):\n",
        "  if loop_id \u003e= num_loops:\n",
        "    break\n",
        "  t_i = loop_id // num_tasks\n",
        "  task_idx = loop_id % num_tasks\n",
        "  task_name = exp_config.task_names[task_idx]\n",
        "  print('\\n\\n====')\n",
        "  print(f'LOOP: [{loop_id+1}/{exp_config.num_task_iters * num_tasks}]')\n",
        "  print(f'TASK: {task_name}')\n",
        "  task = Path.tasks(task_name=task_name)\n",
        "  pop.start_task(task)\n",
        "  task_iter(task, devices, pop, exp_config)\n",
        "  pop.end_task(task)\n",
        "  loop_id += 1\n",
        "\n",
        "  end_loop_st = time.time()\n",
        "  # Run test evals.\n",
        "  run_all_test_evals(pop)\n",
        "  pop_df = pop_to_df(pop)\n",
        "  # Save data needed to resume exp.\n",
        "  start_write = time.time()\n",
        "  print('WRITING CHECKPOINT:', loop_id)\n",
        "  if loop_id == 1:\n",
        "    tf.io.gfile.makedirs(exp_config.experiment_dir)\n",
        "    json.dump(exp_config.as_configdict().to_dict(),\n",
        "              tf.io.gfile.GFile(os.path.join(exp_config.experiment_dir,\n",
        "                                             'config.json'),\n",
        "                                'wb'), indent=2)\n",
        "  checkpoint_save(exp_config.experiment_dir, pop, step=loop_id)\n",
        "  df_write_to_file(pop_df, exp_config.experiment_dir, 'population')\n",
        "  df_write_to_file(pop.paths_df, exp_config.experiment_dir, 'paths')\n",
        "  df_write_to_file(pop.comps_df, exp_config.experiment_dir, 'components')\n",
        "  print(f'TEST EVAL TIME: {start_write - end_loop_st:.2f} s')\n",
        "  print(f'WRITE TIME: {time.time() - start_write:.2f} s')\n",
        "  # Display stats.\n",
        "  df_leaderboard(pop_df)\n",
        "  avg_time_per_sample = (\n",
        "      pop.paths_df['metrics.end_time'].mean() \\\n",
        "          - pop.paths_df['metrics.start_time_loop'].mean()\n",
        "      ) / len(devices)\n",
        "  print(f'Avg time per path: {avg_time_per_sample:.2f} s')"
      ]
    }
  ],
  "metadata": {
    "accelerator": "TPU",
    "colab": {
      "last_runtime": {
        "build_target": "//learning/deepmind/public/tools/ml_python:ml_notebook",
        "kind": "private"
      },
      "name": "muNet.ipynb",
      "private_outputs": true,
      "provenance": [
        {
          "file_id": "1qG0qNWixAyVI_8B4Oad30RwKnpl6EQcx",
          "timestamp": 1653080659320
        }
      ]
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
