{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DYNrjZhEodOi"
      },
      "source": [
        "\n",
        "Copyright 2022 Google LLC.\n",
        "Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\n",
        "\n",
        "https://www.apache.org/licenses/LICENSE-2.0\n",
        "Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UMGvjQvJa_3r"
      },
      "source": [
        "**LocoProp: Enhancing BackProp via Local Loss Optimization**  \n",
        "Ehsan Amid, Rohan Anil, Manfred K. Warmuth - AISTATS 2022\n",
        "https://proceedings.mlr.press/v151/amid22a/amid22a.pdf\n",
        "\n",
        "\n",
        "![picture](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjX5vceZXAWIJohaqhy5tPqs52ryTd78pxjlGiF4qOkAdTZ2tA_2nCFX2lFYJSqAHyWvXG_3vSwix6YhQPQlHLYcEN8JxrC-P-E2nK1b5oSKCqbST5AisTpmo8p0F0xN7UaKfErkit2juHxHc7U4TCEBiNtBzORZ0fpCFv4IK7k_aVj5_1VaBQ8mOjW0w/s16000/image1.gif)\n",
        "\n",
        "\n",
        "Second-order methods have shown state-of-the-art performance for optimizing deep neural networks. Nonetheless, their large memory requirement and high computational complexity, compared to first-order methods, hinder their versatility in a typical low-budget setup. This paper introduces a general framework of layerwise loss construction for multilayer neural networks that achieves a performance closer to second-order methods while utilizing first-order optimizers only. Our methodology lies upon a three-component loss, target, and regularizer combination, for which altering each component results in a new update rule. We provide examples using squared loss and layerwise Bregman divergences induced by the convex integral functions of various transfer\n",
        "functions. Our experiments on benchmark models and datasets validate the\n",
        "efficacy of our new approach, reducing the gap between first-order and\n",
        "second-order optimizers. See our [Google AI blog post](https://ai.googleblog.com/2022/07/enhancing-backpropagation-via-local.html) for further details.\n",
        "\n",
        "\n",
        "The following illustrates how to train the LocoProp-M and LocoProp-S variants on MNIST with a deep autoencoder. We primarily focus on optimizing the training loss, as autoencoders are notoriously difficult to optimize. Our current version uses TensorFlow v1; we plan to release a JAX version in the future.\n",
        "\n"
      ]
    },
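    {
      "cell_type": "markdown",
      "metadata": {
        "id": "locoprop_sketch_md"
      },
      "source": [
        "As a rough sketch of the construction used in the code below (our notation here, not the paper's): for layer $i$ with weights $W_i$, bias $b_i$, input $y_{i-1}$, pre-activation $a_i = W_i y_{i-1} + b_i$, and transfer function $f$, each variant fixes a local target from the backpropagated gradient of the final loss $L$ and then takes $T$ first-order steps on a layerwise loss:\n",
        "\n",
        "$$t_i^{S} = a_i - \\gamma\\, \\nabla_{a_i} L, \\qquad L_i^{S}(W_i, b_i) = \\tfrac{1}{2} \\lVert W_i y_{i-1} + b_i - t_i^{S} \\rVert^2 \\quad \\text{(LocoProp-S)}$$\n",
        "\n",
        "$$t_i^{M} = f(a_i) - \\gamma\\, \\nabla_{a_i} L, \\qquad \\nabla_{W_i} L_i^{M} = y_{i-1}^{\\top} \\big(f(W_i y_{i-1} + b_i) - t_i^{M}\\big) \\quad \\text{(LocoProp-M)}$$\n",
        "\n",
        "where $\\gamma$ is the activation learning rate. The LocoProp-M gradient is that of the matching loss induced by $f$, which the code computes in closed form via `matmul`.\n"
      ]
    },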
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "MeLzLREDYoiL"
      },
      "outputs": [],
      "source": [
        "\"\"\"LocoProp: Enhancing BackProp via Local Loss Optimization.\n",
        "\n",
        "https://arxiv.org/abs/2106.06199, AISTATS 2022\n",
        "Ehsan Amid, Rohan Anil, Manfred K. Warmuth\n",
        "\n",
        "Second-order methods have shown state-of-the-art performance for optimizing\n",
        "deep neural networks. Nonetheless, their large memory requirement and\n",
        "high computational complexity, compared to first-order methods, hinder their\n",
        "versatility in a typical low-budget setup. This paper introduces a general\n",
        "framework of layerwise loss construction for multilayer neural networks that\n",
        "achieves a performance closer to second-order methods while utilizing\n",
        "first-order optimizers only. Our methodology lies upon a three-component loss,\n",
        "target, and regularizer combination, for which altering each component results\n",
        "in a new update rule. We provide examples using squared loss and layerwise\n",
        "Bregman divergences induced by the convex integral functions of various transfer\n",
        "functions. Our experiments on benchmark models and datasets validate the\n",
        "efficacy of our new approach, reducing the gap between first-order and\n",
        "second-order optimizers.\n",
        "\n",
        "\"\"\"\n",
        "import functools\n",
        "import math\n",
        "\n",
        "from absl import app\n",
        "from absl import flags\n",
        "from keras.datasets import mnist\n",
        "import numpy as np\n",
        "import tensorflow.compat.v1 as tf\n",
        "import matplotlib.pyplot as plt\n",
        "\n",
        "tf.disable_v2_behavior()\n",
        "tf.disable_eager_execution()\n",
        "\n",
        "flags.DEFINE_float('learning_rate', 1e-5, help='Base learning rate.')\n",
        "flags.DEFINE_float(\n",
        "    'activation_learning_rate', 10, help='Activation learning rate.')\n",
        "flags.DEFINE_integer('num_local_iters', 10, help='Number of local iterations.')\n",
        "flags.DEFINE_enum('mode', 'LocoPropM', ['LocoPropS', 'LocoPropM', 'BP'],\n",
        "                  'Which algorithm to use')\n",
        "flags.DEFINE_enum('activation', 'TANH', ['RELU', 'TANH'],\n",
        "                  'Which activation function to use')\n",
        "flags.DEFINE_enum('optimizer', 'rmsprop', [\n",
        "    'sgd', 'momentum', 'nesterov', 'adam', 'rmsprop', 'adagrad'],\n",
        "                  'Which optimizer to use')\n",
        "\n",
        "flags.DEFINE_float('one_minus_beta1', 0.001,\n",
        "                   help='1 - beta1 (Adam) / 1 - momentum.')\n",
        "flags.DEFINE_float('one_minus_beta2', 0.1,\n",
        "                   help='1 - beta2 (Adam) / 1 - decay (RMSProp).')\n",
        "flags.DEFINE_float('epsilon', 1e-5, help='Diagonal epsilon')\n",
        "\n",
        "flags.DEFINE_float('weight_decay', 1e-5, help='Weight decay.')\n",
        "flags.DEFINE_integer('batch_size',\n",
        "                     1000, help='Batch size.')\n",
        "flags.DEFINE_integer('model_size_multiplier',\n",
        "                     1, help='Multiply model size by a constant')\n",
        "flags.DEFINE_integer('model_depth_multiplier',\n",
        "                     1, help='Multiply model depth by a constant')\n",
        "FLAGS = flags.FLAGS\n",
        "# Parse the default flag values; this is needed when accessing FLAGS in a\n",
        "# notebook, where app.run() is never called.\n",
        "FLAGS(['locoprop'])\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_aqTYx6--ek9"
      },
      "source": [
        "### Training setup"
      ]
    },
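    {
      "cell_type": "markdown",
      "metadata": {
        "id": "objective_sketch_md"
      },
      "source": [
        "For reference, the training objective constructed below is the per-pixel sigmoid cross-entropy between the input image $x \\in [0, 1]^{784}$ and the reconstruction $\\hat{x} = \\sigma(z)$ from the decoder logits $z$, averaged over the batch of size $B$ and summed over pixels:\n",
        "\n",
        "$$L = \\sum_{j=1}^{784} \\frac{1}{B} \\sum_{n=1}^{B} \\Big[ -x_{nj} \\log \\hat{x}_{nj} - (1 - x_{nj}) \\log (1 - \\hat{x}_{nj}) \\Big],$$\n",
        "\n",
        "while the mean squared error $\\frac{1}{784\\,B} \\sum_{n,j} (x_{nj} - \\hat{x}_{nj})^2$ is tracked as a secondary metric.\n"
      ]
    },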
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "sLIyeQJybGku"
      },
      "outputs": [],
      "source": [
        "def compute_squared_error(logits, targets):\n",
        "  \"\"\"Computes mean squared error between sigmoid(logits) and targets.\"\"\"\n",
        "  return tf.reduce_mean(tf.square(targets - tf.nn.sigmoid(logits)))\n",
        "\n",
        "\n",
        "def compute_cross_entropy_loss(logits, labels):\n",
        "  \"\"\"Computes cross entropy loss from logits.\"\"\"\n",
        "  loss_matrix = tf.nn.sigmoid_cross_entropy_with_logits(\n",
        "      logits=logits, labels=labels)\n",
        "  ce_loss = tf.reduce_sum(tf.reduce_mean(loss_matrix, axis=0))\n",
        "  return ce_loss\n",
        "\n",
        "\n",
        "def optimizer_from_params(params):\n",
        "  \"\"\"Construct a tf.train.Optimizer from params.\"\"\"\n",
        "  if params['optimizer'] == 'sgd':\n",
        "    optimizer_class = tf.train.GradientDescentOptimizer\n",
        "    optimizer_hparams = {'learning_rate': params['learning_rate']}\n",
        "  elif params['optimizer'] == 'momentum':\n",
        "    optimizer_class = tf.train.MomentumOptimizer\n",
        "    optimizer_hparams = {\n",
        "        'learning_rate': params['learning_rate'],\n",
        "        'momentum': params['momentum']\n",
        "    }\n",
        "  elif params['optimizer'] == 'nesterov':\n",
        "    optimizer_class = tf.train.MomentumOptimizer\n",
        "    optimizer_hparams = {\n",
        "        'learning_rate': params['learning_rate'],\n",
        "        'momentum': params['momentum'],\n",
        "        'use_nesterov': True\n",
        "    }\n",
        "  elif params['optimizer'] == 'adam':\n",
        "    optimizer_class = tf.train.AdamOptimizer\n",
        "    optimizer_hparams = {\n",
        "        'learning_rate': params['learning_rate'],\n",
        "        'beta1': params['beta1'],\n",
        "        'beta2': params['beta2'],\n",
        "        'epsilon': params['epsilon']}\n",
        "  elif params['optimizer'] == 'rmsprop':\n",
        "    optimizer_class = tf.train.RMSPropOptimizer\n",
        "    optimizer_hparams = {\n",
        "        'learning_rate': params['learning_rate'],\n",
        "        'momentum': params['beta1'],\n",
        "        'decay': params['beta2'],\n",
        "        'epsilon': params['epsilon']\n",
        "    }\n",
        "  elif params['optimizer'] == 'adagrad':\n",
        "    optimizer_class = tf.train.AdagradOptimizer\n",
        "    optimizer_hparams = {\n",
        "        'learning_rate': params['learning_rate'],\n",
        "        'initial_accumulator_value': params['epsilon']\n",
        "    }\n",
        "  else:\n",
        "    raise ValueError('Unknown optimizer: %s' % params['optimizer'])\n",
        "  optimizer = optimizer_class(**optimizer_hparams)\n",
        "  return optimizer\n",
        "\n",
        "\n",
        "def act_fn(activation_fn_name):\n",
        "  \"\"\"Returns the activation function for a given name.\"\"\"\n",
        "  if activation_fn_name == 'NONE':\n",
        "    act_fun = tf.identity\n",
        "  elif activation_fn_name == 'TANH':\n",
        "    act_fun = tf.nn.tanh\n",
        "  elif activation_fn_name == 'RELU':\n",
        "    act_fun = tf.nn.relu\n",
        "  elif activation_fn_name == 'SIGMOID':\n",
        "    act_fun = tf.nn.sigmoid\n",
        "  else:\n",
        "    raise ValueError('Unknown activation: %s' % activation_fn_name)\n",
        "  return act_fun\n",
        "\n",
        "\n",
        "# A simple autoencoder model with cross-entropy loss.\n",
        "def create_autoencoder_model(input_image,\n",
        "                             optimizer_hparams,\n",
        "                             encoder_decoder_sizes,\n",
        "                             ext='global',\n",
        "                             mode='train',\n",
        "                             act_lr=1.0,\n",
        "                             transfer_func='RELU',\n",
        "                             batch_size=1000):\n",
        "  \"\"\"Builds the autoencoder graph together with the BP/LocoProp train ops.\"\"\"\n",
        "\n",
        "  fc_layers = []\n",
        "  fc_names = []\n",
        "  fc_fns = []\n",
        "\n",
        "  def get_weight_bias(name, ext, shape):\n",
        "    return (tf.get_variable(\n",
        "        name + '_' + ext + '_weight',\n",
        "        shape=shape,\n",
        "        initializer=tf.keras.initializers.glorot_uniform(),\n",
        "        dtype=tf.float32),\n",
        "            tf.get_variable(\n",
        "                name + '_' + ext + '_bias',\n",
        "                shape=(shape[1],),\n",
        "                initializer=tf.keras.initializers.glorot_uniform(),\n",
        "                dtype=tf.float32))\n",
        "\n",
        "  encoder_sizes, decoder_sizes = encoder_decoder_sizes\n",
        "\n",
        "  # A very simple autoencoder with a bottleneck layer.\n",
        "  with tf.variable_scope('autoencoder_' + ext, reuse=tf.AUTO_REUSE):\n",
        "    # First layer.\n",
        "    fc_layers.append(\n",
        "        get_weight_bias('encoder_layer_0', ext, (784, encoder_sizes[0])))\n",
        "    fc_names.append('encoder_layer_0')\n",
        "    fc_fns.append(transfer_func)\n",
        "\n",
        "    for i in range(1, len(encoder_sizes)):\n",
        "      fc_layers.append(\n",
        "          get_weight_bias('encoder_layer_%d' % i, ext,\n",
        "                          (encoder_sizes[i - 1], encoder_sizes[i])))\n",
        "      fc_names.append('encoder_layer_%d' % i)\n",
        "      if i == len(encoder_sizes) - 1:\n",
        "        fc_fns.append('NONE')\n",
        "      else:\n",
        "        fc_fns.append(transfer_func)\n",
        "    fc_layers.append(\n",
        "        get_weight_bias('decoder_layer_0', ext,\n",
        "                        (encoder_sizes[-1], decoder_sizes[0])))\n",
        "    fc_names.append('decoder_layer_0')\n",
        "    fc_fns.append(transfer_func)\n",
        "\n",
        "    for i in range(1, len(decoder_sizes)):\n",
        "      fc_layers.append(\n",
        "          get_weight_bias('decoder_layer_%d' % i, ext,\n",
        "                          (decoder_sizes[i - 1], decoder_sizes[i])))\n",
        "      fc_names.append('decoder_layer_%d' % i)\n",
        "      fc_fns.append(transfer_func)\n",
        "\n",
        "    fc_layers.append(\n",
        "        get_weight_bias('decoder_layer_%d' % len(decoder_sizes), ext,\n",
        "                        (decoder_sizes[-1], 784)))\n",
        "    fc_names.append('decoder_layer_%d' % len(decoder_sizes))\n",
        "    fc_fns.append('SIGMOID')  # last layer (applied implicitly in the loss)\n",
        "    activations = []\n",
        "    post_activations = []\n",
        "\n",
        "    # LocoProp requires knowing the activation / post activation values to\n",
        "    # compute per layer targets.\n",
        "    x = input_image\n",
        "    for li in range(len(fc_layers)):\n",
        "      x = tf.matmul(x, fc_layers[li][0]) + fc_layers[li][1]\n",
        "      activations.append(x)\n",
        "      if fc_fns[li] == 'TANH':\n",
        "        x = tf.nn.tanh(x)\n",
        "      elif fc_fns[li] == 'RELU':\n",
        "        x = tf.nn.relu(x)\n",
        "      elif fc_fns[li] == 'SIGMOID':\n",
        "        y = x  # keep the pre-sigmoid logits for the cross-entropy loss\n",
        "        x = tf.nn.sigmoid(x)\n",
        "      post_activations.append(x)\n",
        "    weight_variables = [w for w, _ in fc_layers]\n",
        "    bias_variables = [b for _, b in fc_layers]\n",
        "    squared_err = compute_squared_error(x, input_image)\n",
        "    ce_loss = compute_cross_entropy_loss(y, input_image)\n",
        "    gs = tf.get_variable('total_steps', shape=[],\n",
        "                         initializer=tf.zeros_initializer(), dtype=tf.int32)\n",
        "    gs = gs.assign(gs + 1)\n",
        "    assign_op = None\n",
        "    train_op_loco_prop_s = None\n",
        "    train_op_loco_prop_m = None\n",
        "    train_op_bp = None\n",
        "    assign_ops_input = []\n",
        "    train_ops_s = []  # train_ops for LocoProp-S\n",
        "    train_ops_m = []  # train_ops for LocoProp-M\n",
        "    reset_optimizer_loco_prop = []\n",
        "    input_checkpoints = []\n",
        "    if mode == 'train':\n",
        "      base_lr = optimizer_hparams['learning_rate']\n",
        "      # construct the matching losses\n",
        "      for i in range(len(fc_layers)):\n",
        "        gs_layer = tf.get_variable(\n",
        "            'total_steps_layer_' + str(i), shape=[],\n",
        "            initializer=tf.zeros_initializer(), dtype=tf.int32)\n",
        "        w, b = fc_layers[i]\n",
        "        def _learning_fn(i):\n",
        "          gs_layer = tf.get_variable(\n",
        "              'total_steps_layer_' + str(i), shape=[],\n",
        "              initializer=tf.zeros_initializer(), dtype=tf.int32)\n",
        "          # Internally, LocoProp involves T steps of local training;\n",
        "          # we use a decreasing learning rate schedule over these steps.\n",
        "          decay = tf.maximum(\n",
        "              (1.0 - tf.cast(gs_layer, tf.float32) / FLAGS.num_local_iters),\n",
        "              0.25)\n",
        "          return base_lr * decay\n",
        "\n",
        "        optimizer_hparams[\n",
        "            'learning_rate'] = functools.partial(_learning_fn, i)\n",
        "        optimizer_mp = optimizer_from_params(params=optimizer_hparams)\n",
        "        reset_optimizer_loco_prop.append(\n",
        "            tf.variables_initializer(optimizer_mp.variables()))\n",
        "        act_fun = act_fn(fc_fns[i])\n",
        "        activation = activations[i]\n",
        "        post_activation = post_activations[i]\n",
        "        if i == 0:\n",
        "          input_to_layer = input_image\n",
        "        else:\n",
        "          input_to_layer = post_activations[i - 1]\n",
        "\n",
        "        target_gd = activation - act_lr * tf.gradients(ce_loss, [activation])[0]\n",
        "        target_primal = post_activation - act_lr * tf.gradients(\n",
        "            ce_loss, [activation])[0]\n",
        "        train_local_s = [tf.assign(gs_layer, 0)]\n",
        "        train_local_m = [tf.assign(gs_layer, 0)]\n",
        "        batch_target_gd = target_gd\n",
        "        batch_target_primal = target_primal\n",
        "        batch_input_layer = input_to_layer\n",
        "        for _ in range(FLAGS.num_local_iters):\n",
        "          with tf.control_dependencies(train_local_s):\n",
        "            fake_activation = tf.matmul(batch_input_layer, w) + b\n",
        "            delta_s = fake_activation - batch_target_gd\n",
        "            # The local-loss gradients could be computed either via\n",
        "            # autodiff on the matching loss or in closed form via matmul;\n",
        "            # we use the closed-form matmul version here.\n",
        "            gradient_weights_s = tf.matmul(\n",
        "                tf.transpose(batch_input_layer), delta_s)\n",
        "            gradient_bias_s = tf.reduce_sum(delta_s, 0)\n",
        "            train_local_s = [\n",
        "                optimizer_mp.apply_gradients([(gradient_weights_s, w),\n",
        "                                              (gradient_bias_s, b)],\n",
        "                                             global_step=gs_layer)\n",
        "            ]\n",
        "          with tf.control_dependencies(train_local_m):\n",
        "            fake_activation = tf.matmul(batch_input_layer, w) + b\n",
        "            fake_post_activation = act_fun(fake_activation)\n",
        "            delta_m = fake_post_activation - batch_target_primal\n",
        "            gradient_weights_m = tf.matmul(\n",
        "                tf.transpose(batch_input_layer), delta_m)\n",
        "            gradient_bias_m = tf.reduce_sum(delta_m, 0)\n",
        "            train_local_m = [\n",
        "                optimizer_mp.apply_gradients([(gradient_weights_m, w),\n",
        "                                              (gradient_bias_m, b)],\n",
        "                                             global_step=gs_layer)\n",
        "            ]\n",
        "\n",
        "        train_ops_s.append(tf.group(*train_local_s))\n",
        "        train_ops_m.append(tf.group(*train_local_m))\n",
        "      # Restore the base learning rate for the global (BP) optimizer,\n",
        "      # since the loop above replaced it with a per-layer schedule.\n",
        "      optimizer_hparams['learning_rate'] = base_lr\n",
        "      global_optimizer = optimizer_from_params(params=optimizer_hparams)\n",
        "      train_op_bp = global_optimizer.minimize(\n",
        "          ce_loss, var_list=weight_variables + bias_variables)\n",
        "      train_op_loco_prop_s = tf.group(*train_ops_s)\n",
        "      train_op_loco_prop_m = tf.group(*train_ops_m)\n",
        "      reset_optimizer_loco_prop = tf.group(*reset_optimizer_loco_prop)\n",
        "  return {\n",
        "      'train_op_loco_prop_s': train_op_loco_prop_s,\n",
        "      'train_op_loco_prop_m': train_op_loco_prop_m,\n",
        "      'input_checkpoints': input_checkpoints,\n",
        "      'train_op_bp': train_op_bp,\n",
        "      'loss': ce_loss,\n",
        "      'squared_err': squared_err,\n",
        "      'assign_op': assign_op,\n",
        "      'assign_ops_input': assign_ops_input,\n",
        "      'train_ops_sp': train_ops_s,\n",
        "      'train_ops_mp': train_ops_m,\n",
        "      'reset_optimizer_loco_prop': reset_optimizer_loco_prop,\n",
        "      'fc_layers': fc_layers,\n",
        "      'reconstructed_image': x,\n",
        "      'activations': activations,\n",
        "      'post_activations': post_activations,\n",
        "      'fc_fns': fc_fns,\n",
        "  }\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qSkSHzXV-i2S"
      },
      "source": [
        "### Training steps"
      ]
    },
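    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lr_schedule_md"
      },
      "source": [
        "The outer learning rate below follows a linear warmup and then a linear decay; in our notation, with base rate $\\eta_0$, warmup length $E_w$, and total epochs $E$,\n",
        "\n",
        "$$\\eta(e) = \\eta_0 \\cdot \\begin{cases} e / E_w, & e < E_w, \\\\\\\\ 1 - \\dfrac{e + 1 - E_w}{E - E_w}, & \\text{otherwise.} \\end{cases}$$\n",
        "\n",
        "Within each LocoProp update, the per-layer optimizer further scales its rate by $\\max(1 - t/T,\\, 0.25)$ over the $T$ local iterations $t = 0, \\dots, T-1$.\n"
      ]
    },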
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "YZusu8OIbPVU"
      },
      "outputs": [],
      "source": [
        "(train_inputs, _), (test_inputs, _) = mnist.load_data()\n",
        "train_inputs = train_inputs.astype(np.float32)\n",
        "test_inputs = test_inputs.astype(np.float32)\n",
        "\n",
        "# Rescale input images to [0, 1]\n",
        "train_inputs = np.reshape(train_inputs, [-1, 784]) / 255.0\n",
        "test_inputs = np.reshape(test_inputs, [-1, 784]) / 255.0\n",
        "\n",
        "num_train_examples = train_inputs.shape[0]\n",
        "num_test_examples = test_inputs.shape[0]\n",
        "print('MNIST dataset:')\n",
        "print('Num train examples: ' + str(num_train_examples))\n",
        "print('Num test examples: ' + str(num_test_examples))\n",
        "\n",
        "tf.reset_default_graph()\n",
        "\n",
        "batch_size = FLAGS.batch_size\n",
        "\n",
        "# We find second-order methods and LocoProp to work quite well for deeper\n",
        "# autoencoders.\n",
        "encoder_sizes = [1000] + [500] * FLAGS.model_depth_multiplier + [250, 30]\n",
        "decoder_sizes = [250] + [500] * FLAGS.model_depth_multiplier + [1000]\n",
        "\n",
        "encoder_sizes = [FLAGS.model_size_multiplier * e for e in encoder_sizes]\n",
        "decoder_sizes = [FLAGS.model_size_multiplier * e for e in decoder_sizes]\n",
        "encoder_decoder_sizes = encoder_sizes, decoder_sizes\n",
        "input_image_batch = tf.placeholder(tf.float32, (batch_size, 784))\n",
        "input_image = tf.placeholder(tf.float32, (None, 784))\n",
        "\n",
        "# LocoProp inner routine requires setting up activation learning rates.\n",
        "act_lr = tf.placeholder(tf.float32, name='activation_lr')\n",
        "lr = tf.placeholder(tf.float32, name='lr')\n",
        "transfer_func = FLAGS.activation\n",
        "optimizer_type = FLAGS.optimizer\n",
        "\n",
        "optimizer_hparams = {\n",
        "    'sgd': {\n",
        "        'optimizer': 'sgd',\n",
        "        'learning_rate': lr\n",
        "    },\n",
        "    'momentum': {\n",
        "        'optimizer': 'momentum',\n",
        "        'learning_rate': lr,\n",
        "        'momentum': 1.0 - FLAGS.one_minus_beta1,\n",
        "    },\n",
        "    'nesterov': {\n",
        "        'optimizer': 'nesterov',\n",
        "        'learning_rate': lr,\n",
        "        'momentum': 1.0 - FLAGS.one_minus_beta1,\n",
        "        'use_nesterov': True,\n",
        "    },\n",
        "    'adam': {\n",
        "        'optimizer': 'adam',\n",
        "        'learning_rate': lr,\n",
        "        'beta1': 1.0 - FLAGS.one_minus_beta1,\n",
        "        'beta2': 1.0 - FLAGS.one_minus_beta2,\n",
        "        'epsilon': FLAGS.epsilon\n",
        "    },\n",
        "    'adagrad': {\n",
        "        'optimizer': 'adagrad',\n",
        "        'learning_rate': lr,\n",
        "        'epsilon': FLAGS.epsilon\n",
        "    },\n",
        "    'rmsprop': {\n",
        "        'optimizer': 'rmsprop',\n",
        "        'learning_rate': lr,\n",
        "        'beta1': 1.0 - FLAGS.one_minus_beta1,\n",
        "        'beta2': 1.0 - FLAGS.one_minus_beta2,\n",
        "        'epsilon': FLAGS.epsilon,\n",
        "    },\n",
        "}\n",
        "\n",
        "optimizer_hparams = optimizer_hparams[optimizer_type]\n",
        "global_params_train = create_autoencoder_model(\n",
        "    input_image_batch,\n",
        "    optimizer_hparams,\n",
        "    encoder_decoder_sizes,\n",
        "    ext='global',\n",
        "    mode='train',\n",
        "    act_lr=act_lr,\n",
        "    transfer_func=transfer_func,\n",
        "    batch_size=batch_size)\n",
        "global_params_infer = create_autoencoder_model(\n",
        "    input_image,\n",
        "    optimizer_hparams,\n",
        "    encoder_decoder_sizes,\n",
        "    ext='global',\n",
        "    mode='infer',\n",
        "    act_lr=act_lr,\n",
        "    transfer_func=transfer_func,\n",
        "    batch_size=batch_size)\n",
        "\n",
        "# All experiments use 100 epochs of training with 5 epochs used as a warmup.\n",
        "# A linear warmup followed by a decay is used for training.\n",
        "num_epochs = 100\n",
        "warmup_epochs = 5\n",
        "disp_epoch = 1\n",
        "act_lr_val = FLAGS.activation_learning_rate\n",
        "reset_optimizers = False\n",
        "lr_val = FLAGS.learning_rate\n",
        "train_log = []\n",
        "test_log = []\n",
        "train_method = FLAGS.mode\n",
        "num_local_iters = FLAGS.num_local_iters\n",
        "\n",
        "print('-----------------------')\n",
        "print('Autoencoder model with (%d x size, %d x depth) multipliers and'\n",
        "      ' %s activation function.' % (FLAGS.model_size_multiplier,\n",
        "                                    FLAGS.model_depth_multiplier,\n",
        "                                    FLAGS.activation))\n",
        "print('Train method: %s' % train_method)\n",
        "if 'Loco' in train_method:\n",
        "  print('Number of local iterations: %d' % num_local_iters)\n",
        "  print('Internal optimizer: %s' % FLAGS.optimizer)\n",
        "else:\n",
        "  print('Optimizer: %s' % FLAGS.optimizer)\n",
        "print('-----------------------')\n",
        "\n",
        "# The best train loss is recorded mainly for the hyper-parameter tuning\n",
        "# setup that we used.\n",
        "best_train_loss = 1e6\n",
        "with tf.Session() as sess:\n",
        "  sess.run(tf.global_variables_initializer())\n",
        "  train_loss_val, train_squared_err_val = sess.run(\n",
        "      [global_params_infer['loss'], global_params_infer['squared_err']],\n",
        "      feed_dict={input_image: train_inputs})\n",
        "  test_loss_val, test_squared_err_val = sess.run(\n",
        "      [global_params_infer['loss'], global_params_infer['squared_err']],\n",
        "      feed_dict={input_image: test_inputs})\n",
        "  print('init (train, test) loss (%3.3f, %3.3f), '\n",
        "                  '(train, test) squared error (%3.3f, %3.3f)' %\n",
        "                  (train_loss_val, test_loss_val, train_squared_err_val,\n",
        "                    test_squared_err_val))\n",
        "  train_log.append([train_loss_val, train_squared_err_val])\n",
        "  test_log.append([test_loss_val, test_squared_err_val])\n",
        "  for epoch in range(num_epochs):\n",
        "    idx_epoch = np.random.permutation(train_inputs.shape[0])\n",
        "    for bb in range(int(num_train_examples / batch_size)):\n",
        "      idx_batch = idx_epoch[np.arange(batch_size) + bb * batch_size]\n",
        "      train_x = train_inputs[idx_batch]\n",
        "      lr_bp = lr_val\n",
        "      if epoch \u003c warmup_epochs:\n",
        "        lr_bp = lr_val * (epoch / warmup_epochs)\n",
        "      else:\n",
        "        lr_bp = lr_val * (1.0 - (epoch + 1 - warmup_epochs) /\n",
        "                          (num_epochs - warmup_epochs))\n",
        "      if train_method in ['LocoPropS', 'LocoPropM']:\n",
        "        train_op_name = ('train_op_loco_prop_s' if train_method == 'LocoPropS'\n",
        "                          else 'train_op_loco_prop_m')\n",
        "        sess.run(\n",
        "            global_params_train[train_op_name],\n",
        "            feed_dict={\n",
        "                input_image_batch: train_x,\n",
        "                act_lr: act_lr_val,\n",
        "                lr: lr_bp\n",
        "            })\n",
        "\n",
        "        if reset_optimizers:\n",
        "          sess.run(global_params_train['reset_optimizer_loco_prop'])\n",
        "      elif train_method == 'BP':\n",
        "        sess.run(\n",
        "            global_params_train['train_op_bp'],\n",
        "            feed_dict={\n",
        "                input_image_batch: train_x,\n",
        "                lr: lr_bp\n",
        "            })\n",
        "\n",
        "    train_loss_val, train_squared_err_val = sess.run(\n",
        "        [global_params_infer['loss'], global_params_infer['squared_err']],\n",
        "        feed_dict={input_image: train_inputs})\n",
        "    test_loss_val, test_squared_err_val = sess.run(\n",
        "        [global_params_infer['loss'], global_params_infer['squared_err']],\n",
        "        feed_dict={input_image: test_inputs})\n",
        "    if (epoch + 1) % disp_epoch == 0:\n",
        "      print('epoch %d, (train, test) loss (%3.3f, %3.3f), '\n",
        "                      '(train, test) squared error (%3.3f, %3.3f)' %\n",
        "                      (epoch + 1, train_loss_val, test_loss_val,\n",
        "                        train_squared_err_val, test_squared_err_val))\n",
        "    best_train_loss = min(train_loss_val, best_train_loss)\n",
        "\n",
        "    # Used for hyper-parameter tuning early exits.\n",
        "    if math.isnan(\n",
        "        train_loss_val) or best_train_loss \u003e 600 or train_loss_val \u003e 1000:\n",
        "      best_train_loss = 1000\n",
        "      break\n",
        "    if epoch \u003e 10  and best_train_loss \u003e 350:\n",
        "      break\n",
        "    train_log.append([train_loss_val, train_squared_err_val])\n",
        "    test_log.append([test_loss_val, test_squared_err_val])\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uBRil-ED-msq"
      },
      "source": [
        "### Plot the results"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "o5yYhYJDsLRL"
      },
      "outputs": [],
      "source": [
        "plt.figure(figsize=(8, 6), dpi=300)\n",
        "\n",
        "method_name = ('%s (num local iters=%d)' % (train_method, num_local_iters)\n",
        "               if 'Loco' in train_method else 'BP')\n",
        "plt.semilogy(\n",
        "    np.arange(len(train_log)),\n",
        "    [ll[0] for ll in train_log],\n",
        "    '-',\n",
        "    label='Train loss - ' + method_name,\n",
        "    linewidth=2.0)\n",
        "plt.semilogy(\n",
        "    np.arange(len(test_log)),\n",
        "    [ll[0] for ll in test_log],\n",
        "    '--',\n",
        "    label='Test loss  - ' + method_name,\n",
        "    linewidth=2.0)\n",
        "plt.xlabel('Epochs', fontsize=18)\n",
        "plt.xticks(fontsize=12)\n",
        "plt.ylabel('CE loss', fontsize=18)\n",
        "plt.yticks(fontsize=12)\n",
        "plt.legend(fontsize=14)\n",
        "plt.show()"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "collapsed_sections": [],
      "last_runtime": {
        "build_target": "//learning/deepmind/public/tools/ml_python:ml_notebook",
        "kind": "private"
      },
      "name": "LocoProp: Enhancing BackProp via Local Loss Optimization",
      "private_outputs": true,
      "provenance": [
        {
          "file_id": "1vPP9LPp2lwGiuNSUdhmfH1_kfCFuCTor",
          "timestamp": 1653805839417
        }
      ]
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
