{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-0BQWhvAP2jb"
      },
      "source": [
        "\n",
        "\u003ca href=\"https://colab.research.google.com/github/google-research/t5x/blob/main/t5x/notebooks/evaluation.ipynb\" target=\"_parent\"\u003e\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/\u003e\u003c/a\u003e"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bqZYp90PIa1t"
      },
      "source": [
        "# Overview\n",
        "\n",
        "This is the fourth Colab in a [series of tutorials on how to use T5X](https://github.com/google-research/t5x/blob/main/docs/tutorials.md). We assume that you have already completed the [Introductory Colab](https://colab.research.google.com/github/google-research/t5x/blob/main/t5x/notebooks/introduction.ipynb), the [Training Deep Dive](https://colab.research.google.com/github/google-research/t5x/blob/main/t5x/notebooks/training.ipynb), and the [Inference Deep Dive](https://colab.research.google.com/github/google-research/t5x/blob/main/t5x/notebooks/inference.ipynb), or have a basic understanding of the T5X models, checkpoints, partitioner, trainer, and `InteractiveModel`.\n",
        "\n",
        "In the [previous Colab](https://colab.research.google.com/github/google-research/t5x/blob/main/t5x/notebooks/inference.ipynb) in this tutorial series, we dove into how the `InteractiveModel` does decoding to generate predictions and scores for a given input. We will now focus on how the InteractiveModel takes a batch of inputs and targets and runs evaluation to produce various metrics. It should be noted that the code snippets below exactly replicate the InteractiveModel `__init__()` and `evaluate()` methods (see [source code](https://github.com/google-research/t5x/blob/main/t5x/interactive_model.py)); we expose this functionality here in order to demonstrate how various components of the T5X codebase work together to perform evaluation on a model."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nZJbWZcfkyxI"
      },
      "source": [
        "# Set-Up"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VkVjJOewsMM8"
      },
      "source": [
        "Note: If you are a using public colab, please use its `Connect to a local runtime` option by following the [setup guide](https://github.com/google-research/t5x/blob/main/t5x/notebooks/README.md)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "jIGSIHzD7YPO"
      },
      "outputs": [],
      "source": [
        "from collections.abc import Sequence\n",
        "import enum\n",
        "import functools\n",
        "import inspect\n",
        "import itertools\n",
        "import logging\n",
        "import os\n",
        "import re\n",
        "from typing import Any, Callable, Iterator, Optional, Tuple, Union\n",
        "\n",
        "import jax\n",
        "from jax import random\n",
        "from jax.experimental import multihost_utils\n",
        "import numpy as np\n",
        "import seqio\n",
        "import tensorflow as tf\n",
        "import tensorflow_datasets as tfds\n",
        "from t5.evaluation import metrics as t5_metrics\n",
        "import t5.data"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Mt7gxVc9sVjN"
      },
      "outputs": [],
      "source": [
        "import clu.data\n",
        "from t5x.examples.t5 import network\n",
        "import t5x\n",
        "from t5x import models\n",
        "from t5x import partitioning\n",
        "from t5x import trainer as trainer_lib\n",
        "from t5x import utils\n",
        "from t5x.infer import _extract_tokens_and_aux_values\n",
        "from t5x.infer import _Inferences\n",
        "from t5x.interactive_model import InteractiveModel\n",
        "from t5x.interactive_model import get_batches_from_seqio\n",
        "from t5x.interactive_model import get_dataset_from_natural_text_examples\n",
        "from t5x.interactive_model import get_gin_config_from_interactive_model\n",
        "from t5x.interactive_model import T5XScriptType\n",
        "from t5x.interactive_model import InferenceType"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "S5Lb-Z1fkF5a"
      },
      "source": [
        "Before we begin, let's initialize instances of the constructor arguments for the `InteractiveModel`. As mentioned previously, this will enable us to dive into how the `InteractiveModel` runs inference.\n",
        "\n",
        "If you don't understand the lines of code below, or have questions about how to initialize these parameters, please see the [first Colab](https://colab.research.google.com/github/google-research/t5x/blob/main/t5x/notebooks/introduction.ipynb) in this tutorial series."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Ne8U8qoWkX_r"
      },
      "outputs": [],
      "source": [
        "# Define a model. The configuration below corresponds to the T5 1.1 Small model.\n",
        "t5_config = network.T5Config(\n",
        "    vocab_size=32128,\n",
        "    dtype='bfloat16',\n",
        "    emb_dim=512,\n",
        "    num_heads=6,\n",
        "    num_encoder_layers=8,\n",
        "    num_decoder_layers=8,\n",
        "    head_dim=64,\n",
        "    mlp_dim=1024,\n",
        "    mlp_activations=('gelu', 'linear'),\n",
        "    dropout_rate=0.0,\n",
        "    logits_via_embedding=False)\n",
        "module = network.Transformer(config=t5_config)\n",
        "model = t5x.models.EncoderDecoderModel(\n",
        "    module=module,\n",
        "    input_vocabulary=t5.data.get_default_vocabulary(),\n",
        "    output_vocabulary=t5.data.get_default_vocabulary(),\n",
        "    optimizer_def=t5x.adafactor.Adafactor(decay_rate=0.8, step_offset=0))\n",
        "# Define checkpoint arguments.\n",
        "checkpoint_path='gs://t5-data/pretrained_models/cbqa/small_ssm_nq/model.ckpt-1110000'\n",
        "dtype='bfloat16'\n",
        "restore_mode='specific'\n",
        "# Define a partitioner.\n",
        "partitioner=partitioning.PjitPartitioner(num_partitions=2)\n",
        "# Define additional, miscellaneous constructor arguments.\n",
        "batch_size=8\n",
        "task_feature_lengths = {'inputs': 38, 'targets': 18}\n",
        "output_dir='/tmp/output_dir'\n",
        "input_shapes = {\n",
        "    'encoder_input_tokens': np.array([8, 38]),\n",
        "    'decoder_target_tokens': np.array([8, 18]),\n",
        "    'decoder_input_tokens': np.array([8, 18]),\n",
        "    'decoder_loss_weights': np.array([8, 18])\n",
        "}"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EYwdg-fFTU8Q"
      },
      "source": [
        "In addition, we will run all code that is performed when we initialize the InteractiveModel. If you don't understand the lines of code below or have any additional questions about how/why we do the steps below, please see the [second Colab](https://colab.research.google.com/github/google-research/t5x/blob/main/t5x/notebooks/training.ipynb) in our tutorial series."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "YmGTJBAcTpMR"
      },
      "outputs": [],
      "source": [
        "# 1.) Configure the Output Directory\n",
        "output_dir = re.sub(r\"(?\u003c!gs:)([\\/]{2,})\", \"/\", output_dir)\n",
        "if not os.path.exists(output_dir):\n",
        "  os.mkdir(output_dir)\n",
        "\n",
        "# 2.) Initialize RNGs\n",
        "init_random_seed = 42\n",
        "random_seed = multihost_utils.broadcast_one_to_all(np.int32(init_random_seed))\n",
        "utils.set_hardware_rng_ops()\n",
        "rng = random.PRNGKey(random_seed)\n",
        "init_rng, trainer_rng = random.split(rng, 2)\n",
        "\n",
        "# 3.) Validate the Partitioner\n",
        "if partitioner._model_parallel_submesh:\n",
        "  num_partitions = np.prod(partitioner._model_parallel_submesh)\n",
        "else:\n",
        "  num_partitions = partitioner._num_partitions\n",
        "if jax.device_count() % num_partitions != 0:\n",
        "  raise ValueError(\n",
        "    \"The number of devices available must be a multiple of the number of\",\n",
        "    f\" partitions. There are {jax.device_count()} devices available, but\",\n",
        "    f\" the number of partitions is set to {num_partitions}. Please\",\n",
        "    \" provide a different number of partitions.\")\n",
        "\n",
        "# 4.) Create a Checkpoint Manager\n",
        "# a.) Define CheckpointCfg wrappers.\n",
        "save_checkpoint_cfg = utils.SaveCheckpointConfig(\n",
        "        dtype=dtype,\n",
        "        keep=5, # The number of checkpoints to keep in the output_dir.\n",
        "        save_dataset=False)\n",
        "restore_checkpoint_cfg = utils.RestoreCheckpointConfig(\n",
        "        dtype=dtype,\n",
        "        mode=restore_mode,\n",
        "        path=checkpoint_path)\n",
        "\n",
        "# b.) Define a train state initializer, which will help us get information about the\n",
        "# TrainState shape.\n",
        "train_state_initializer = utils.TrainStateInitializer(\n",
        "        optimizer_def=model.optimizer_def,\n",
        "        init_fn=model.get_initial_variables,\n",
        "        input_shapes=input_shapes,\n",
        "        input_types=None,\n",
        "        partitioner=partitioner)\n",
        "\n",
        "# c.) Define the checkpoint manager.\n",
        "checkpoint_manager = utils.LegacyCheckpointManager(\n",
        "        save_cfg=save_checkpoint_cfg,\n",
        "        restore_cfg=restore_checkpoint_cfg,\n",
        "        train_state_shape=train_state_initializer.global_train_state_shape,\n",
        "        partitioner=partitioner,\n",
        "        ds_iter=None,\n",
        "        model_dir=output_dir)\n",
        "\n",
        "### 5.) Restore the Model from a Checkpoint, or Initialize from Scratch ###\n",
        "def get_state(rng):\n",
        "  return train_state_initializer.from_scratch(rng).state_dict()\n",
        "\n",
        "# a.) Try to restore a model from a checkpoint.\n",
        "train_state = checkpoint_manager.restore(\n",
        "  [restore_checkpoint_cfg.path],\n",
        "  restore_checkpoint_cfg,\n",
        "  utils.get_fallback_state(restore_checkpoint_cfg, get_state, init_rng)\n",
        ")\n",
        "\n",
        "# b.) If no checkpoint to restore, init from scratch.\n",
        "if train_state is None:\n",
        "  train_state = train_state_initializer.from_scratch(init_rng)\n",
        "\n",
        "output_features = {\n",
        "        \"inputs\":\n",
        "            seqio.Feature(\n",
        "                vocabulary=model.input_vocabulary, add_eos=True),\n",
        "        \"targets\":\n",
        "            seqio.Feature(\n",
        "                vocabulary=model.output_vocabulary, add_eos=True)\n",
        "    }\n",
        "features = dict(sorted(output_features.items()))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ckw3l3Go_ZDL"
      },
      "source": [
        "Finally, the InteractiveModel defines a `self.infer_with_preprocessors` method that we will need to reference in order to run evaluation. However, we are breaking down the InteractiveModel functionality and do not actually use an instance of the InteractiveModel in this Colab. Thus, we will duplicate this class method below.\n",
        "\n",
        "If you don't understand the lines of code below or have any additional questions about how/why we do the steps below, please see the third Colab in our tutorial series: https://github.com/google-research/text-to-text-transfer-transformer/blob/main/README.mdx-colab-inference."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "FpSheT1hATQ2"
      },
      "outputs": [],
      "source": [
        "def infer_with_preprocessors(\n",
        "  mode: InferenceType, examples: Sequence[Union[str, dict[str, str]]],\n",
        "  preprocessors: Sequence[Callable[..., tf.data.Dataset]]) -\u003e _Inferences:\n",
        "  \"\"\"Infer function.\n",
        "\n",
        "  Args:\n",
        "    mode: Either 'score' to compute the log likelihood of given targets, or\n",
        "      'predict_with_aux' to score and decode targets.\n",
        "    examples: a single batch of examples that should be transformed into a\n",
        "      tf.data.Dataset. The examples can either take the form of a string (ex:\n",
        "      a single input for inference), or a dictionary mapping \"input\"/\"target\"\n",
        "      to a string containing that element.\n",
        "    preprocessors: list(callable), an optional list of functions that receive\n",
        "      a tf.data.Dataset and return a tf.data.Dataset. These will be executed\n",
        "      sequentially and the final dataset must include features matching\n",
        "      `features`.\n",
        "\n",
        "  Returns:\n",
        "    Returns a tuple of predictions/scores and any auxiliary values.\n",
        "  \"\"\"\n",
        "  # --------------------------------------------------------------------------\n",
        "  # Parse Mode\n",
        "  # --------------------------------------------------------------------------\n",
        "  if mode == InferenceType.PREDICT_WITH_AUX:\n",
        "    infer_step = model.predict_batch_with_aux\n",
        "  elif mode == InferenceType.SCORE:\n",
        "    infer_step = model.score_batch\n",
        "  else:\n",
        "    raise ValueError(\"Mode must be `predict_with_aux`, or `score`,\"\n",
        "                      f\" but instead was {mode}.\")\n",
        "  infer_fn = functools.partial(\n",
        "      utils.get_infer_fn(\n",
        "          infer_step=infer_step,\n",
        "          batch_size=batch_size,\n",
        "          train_state_axes=train_state_initializer.train_state_axes,\n",
        "          partitioner=partitioner),\n",
        "      train_state=train_state)\n",
        "\n",
        "  # --------------------------------------------------------------------------\n",
        "  # Construct a dataset and dataset iterator.\n",
        "  # --------------------------------------------------------------------------\n",
        "  dataset = get_dataset_from_natural_text_examples(\n",
        "      examples,\n",
        "      preprocessors=preprocessors,\n",
        "      task_feature_lengths=task_feature_lengths,\n",
        "      features=features)\n",
        "  feature_converter = model.FEATURE_CONVERTER_CLS(pack=False)\n",
        "  model_dataset = feature_converter(\n",
        "      dataset, task_feature_lengths=task_feature_lengths)\n",
        "  # Zip task and model features.\n",
        "  infer_dataset = tf.data.Dataset.zip((dataset, model_dataset))\n",
        "  # Create batches and index them.\n",
        "  infer_dataset = infer_dataset.padded_batch(\n",
        "      batch_size, drop_remainder=False).enumerate()\n",
        "  infer_dataset_iter: Iterator[Tuple[int, Any]] = iter(\n",
        "      infer_dataset.prefetch(tf.data.experimental.AUTOTUNE))\n",
        "\n",
        "  # --------------------------------------------------------------------------\n",
        "  # Run inference\n",
        "  # --------------------------------------------------------------------------\n",
        "  # Main Loop over \"batches\".\n",
        "  all_inferences = []\n",
        "  all_aux_values = {}\n",
        "  for chunk, chunk_batch in infer_dataset_iter:\n",
        "    # Load the dataset for the next chunk. We can't use `infer_dataset_iter`\n",
        "    # directly since `infer_fn` needs to know the exact size of each chunk,\n",
        "    # which may be smaller for the final one.\n",
        "    chunk_dataset = tf.data.Dataset.from_tensor_slices(chunk_batch)\n",
        "    chunk_dataset.cache().prefetch(tf.data.experimental.AUTOTUNE)\n",
        "\n",
        "    # Unzip chunk dataset in to pretokenized and model datasets.\n",
        "    task_dataset = chunk_dataset.map(\n",
        "        lambda p, m: p, num_parallel_calls=tf.data.experimental.AUTOTUNE)\n",
        "    model_dataset = chunk_dataset.map(\n",
        "        lambda p, m: m, num_parallel_calls=tf.data.experimental.AUTOTUNE)\n",
        "\n",
        "    # Get a chunk-specific RNG key.\n",
        "    chunk_rng = jax.random.fold_in(jax.random.PRNGKey(0), chunk)\n",
        "\n",
        "    inferences = _extract_tokens_and_aux_values(\n",
        "        infer_fn(model_dataset.enumerate(), rng=chunk_rng))\n",
        "\n",
        "    predictions, aux_values = inferences\n",
        "    accumulated_inferences = []\n",
        "    for idx, inputs in task_dataset.enumerate().as_numpy_iterator():\n",
        "      prediction = predictions[idx]\n",
        "      # Decode predictions if applicable.\n",
        "      if mode == InferenceType.PREDICT_WITH_AUX:\n",
        "        prediction =features[\"targets\"].vocabulary.decode_tf(\n",
        "            tf.constant(prediction)).numpy()\n",
        "      accumulated_inferences.append((inputs, prediction))\n",
        "    all_inferences += accumulated_inferences\n",
        "    # Accumulate aux values over batches.\n",
        "    if not all_aux_values:\n",
        "      all_aux_values = aux_values\n",
        "    else:\n",
        "      for key, values in aux_values.items():\n",
        "        all_aux_values[key] += values\n",
        "\n",
        "  return all_inferences, all_aux_values"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ib9aOi2xaCKQ"
      },
      "source": [
        "# Evaluation Deep Dive"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ANqpfv0lAVqL"
      },
      "source": [
        "**Defining a Batch of Examples to Run Inference On**\\\n",
        "Let's start by defining a batch of examples that we will use to evaluate our model.\n",
        "\n",
        "These examples should be a list of dictionaries mapping 'target'/'input' keys to corresponding values, as shown below. For this Colab, we'll use a set of natural test questions and answers."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "yhhR0yDcAn7w"
      },
      "outputs": [],
      "source": [
        "examples = [\n",
        "  {\n",
        "      'target': b'Ajay Tyagi',\n",
        "      'input':b'nq question: who has been appointed as the new chairman of sebi'\n",
        "  },\n",
        "  {\n",
        "      'target': b'C. S. Lewis',\n",
        "      'input': b'nq question: who wrote the book lion the witch and the wardrobe'},\n",
        "  {\n",
        "      'target': b'29',\n",
        "      'input': b'nq question: how many planes did japan lose at pearl harbor'},\n",
        "  {\n",
        "      'target': b'Jack Keil',\n",
        "      'input': b'nq question: who does the voice of mcgruff the dog'},\n",
        "  {\n",
        "      'target': b'Journey',\n",
        "      'input': b'nq question: who sings the wheels in the sky keep on turning'},\n",
        "  {\n",
        "      'target': b'Kumiko Watanabe',\n",
        "      'input': b'nq question: who voices regina in glitter force doki doki'},\n",
        "  {\n",
        "      'target': b'during World War II',\n",
        "      'input': b'nq question: when did the us become allies with britain'},\n",
        "  {\n",
        "      'target': b'the United States',\n",
        "      'input': b'nq question: who won the rugby 7 in las vegas'},\n",
        "]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WYV1LMS5taE9"
      },
      "source": [
        "We also define the required features of the examples. For this Colab, we will only require an `inputs` and `targets` entry, as defined below."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "nj5I7YMotb9U"
      },
      "outputs": [],
      "source": [
        "output_features = {\n",
        "        \"inputs\":\n",
        "            seqio.Feature(\n",
        "                vocabulary=model.input_vocabulary, add_eos=True),\n",
        "        \"targets\":\n",
        "            seqio.Feature(\n",
        "                vocabulary=model.output_vocabulary, add_eos=True)\n",
        "    }\n",
        "features = dict(sorted(output_features.items()))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hEG2_HbVGb4y"
      },
      "source": [
        "**Defining a Metrics Function**\\\n",
        "Next, we'll need to determine what metrics we want to use to evaluate our model, and we'll need to define a metrics function to produce these values.\n",
        "\n",
        "We support two types of metrics: \\\n",
        "1.)  *Prediction-based metrics*: these are metrics that depend on model predictions; the metric may also rely on additional auxiliary values. For example, if our model produces an output sequence, a valid prediction-based metric would be BLEU, which compares our output sequence to a target sequence. \\\n",
        "2.)  *Score-based metrics*: these are metrics the depend on model scores. For example, log likelihood of a target sequence (given an input sequence) would be a valid score-based metrics function.\n",
        "\n",
        "For more details on metrics function, please see this [Metrics](https://github.com/google/seqio/blob/main/README.md/index#metrics) documentation.\n",
        "\n",
        "It is rare that you will actually have to define your own metrics function; unless you are working on a custom/novel metric, you can likely find a predefined metrics function to call on. For example, many common language evaluation metrics are defined in [t5.evaluation.metrics](https://github.com/google-research/text-to-text-transfer-transformer/tree/main/t5/evaluation/metrics.py). For this Colab, we will evaluate our  natural question/answer pairs using the predefined SQuAD metrics from `t5.evaluation.metrics`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "vh3NQDwvI5xr"
      },
      "outputs": [],
      "source": [
        "metric_fns = [t5_metrics.squad]"
      ]
    },
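    {
      "cell_type": "markdown",
      "metadata": {
        "id": "custom-metric-sketch"
      },
      "source": [
        "As an aside: if you do need a custom prediction-based metric, it only needs to match the `(targets, predictions)` positional signature and return a dictionary mapping metric names to values. A minimal sketch (our own illustration; `exact_match` is not part of the T5X/SeqIO codebase) might look like:\n",
        "\n",
        "```python\n",
        "def exact_match(targets, predictions):\n",
        "  \"\"\"Fraction of predictions that exactly match their targets.\"\"\"\n",
        "  matches = sum(t == p for t, p in zip(targets, predictions))\n",
        "  return {'exact_match': matches / len(targets)}\n",
        "```\n",
        "\n",
        "Because its positional arguments are exactly `('targets', 'predictions')`, the signature check later in this Colab would classify it as a prediction-based metric."
      ]
    },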
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2Y6ebToiR2iF"
      },
      "source": [
        "**Defining a Postprocessor Function** \\\n",
        "Some metrics functions require postprocessing targets before we are able to calculate the metrics. The InteractiveModel allows users to optionally provide a postprocessor to convert targets to the intended form; see this [Postprocessor](https://github.com/google/seqio/blob/main/README.md/index#postprocessor) documentation for more details.\n",
        "\n",
        "For this example, we will use a standard QA postprocessor, modeled after the [`t5.data.postprocessors.qa` method](https://github.com/google-research/text-to-text-transfer-transformer/tree/main/t5/data/postprocessors.py)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Mml43lATSH7D"
      },
      "outputs": [],
      "source": [
        "def qa(answer, example=None, is_target=False):\n",
        "  \"\"\"Returns answer, or all answers if the full example is provided.\"\"\"\n",
        "  if is_target:\n",
        "    return [tf.compat.as_text(a) for a in [example[\"targets_pretokenized\"]]]\n",
        "  return answer\n",
        "\n",
        "postprocessor = qa"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mixLzcBkQOT_"
      },
      "source": [
        "Now, let's break down what the interactive model does to run evaluation.\n",
        "\n",
        "The `InteractiveModel` `evaluate()` method  performs four actions:\n",
        "\n",
        "\n",
        "1.   Convert the natural text examples into a tf.Dataset.\n",
        "2.   Detect the metric function type. We analyze the metrics function signatures to determine if the metrics are prediction-based or score-based.\n",
        "3.   Run inference to generate predictions and/or scores depending on the metrics function types.\n",
        "4.   Run the metrics functions on the provided predictions/scores and return these metrics to the user.\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ug0zJx2kQk6g"
      },
      "source": [
        "**Prepare the dataset** \\\n",
        "\n",
        "Preparing the data for evaluation is fairly straightforward; in fact, this is nearly the same data preparation that happens for training.\n",
        "\n",
        "First, we convert the natural text examples into a tf.Dataset and run any preprocessors; T5X has a helper function, `get_dataset_from_natural_text_examples`, that can do exactly that. For this example, the only preprocessing we will do is tokenization and appending an EOS token. If you are interested in learning more about preprocessors, please take a look at https://github.com/google-research/text-to-text-transfer-transformer/blob/main/README.mdx-colab-intro.\n",
        "\n",
        "Finally, we may optionally postprocess the targets (if a postprocessor has been provided)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "chPomDFxQ6r3"
      },
      "outputs": [],
      "source": [
        "preprocessors= [\n",
        "  seqio.preprocessors.tokenize,\n",
        "  seqio.preprocessors.append_eos\n",
        "]\n",
        "dataset = get_dataset_from_natural_text_examples(\n",
        "    examples,\n",
        "    preprocessors=preprocessors,\n",
        "    task_feature_lengths=task_feature_lengths,\n",
        "    features=features)\n",
        "\n",
        "# Postprocess targets if required.\n",
        "def postprocess_fn(decoded_model_output: Any, **postprocess_kwargs) -\u003e Any:\n",
        "  \"\"\"Returns the model output after applying the postprocess function.\"\"\"\n",
        "  if postprocessor:\n",
        "    return postprocessor(decoded_model_output, **postprocess_kwargs)\n",
        "  return decoded_model_output\n",
        "\n",
        "targets = []\n",
        "for ex in tfds.as_numpy(dataset):\n",
        "  targets.append(\n",
        "      postprocess_fn(\n",
        "          decoded_model_output=ex[\"targets_pretokenized\"],\n",
        "          example=ex,\n",
        "          is_target=True))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JEsB1uzfSi3L"
      },
      "source": [
        "**Parse Metrics Functions** \\\n",
        "Next, we inspect the function signature for all metrics functions to determine whether the metrics are prediction-based or score-based. Further, we also detect whether the prediction-based metrics require auxiliary values.\n",
        "\n",
        "This check is fairly rudimentary; we simply look at the arguments for the metrics functions and categorize the function based on whether \"scores\", \"predictions\", and/or \"aux_values\" appear as arguments to the functions.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "wfLMdo6FS0aD"
      },
      "outputs": [],
      "source": [
        "predict_metric_fns = []\n",
        "predict_with_aux_metric_fns = []\n",
        "score_metric_fns = []\n",
        "for metric_fn in metric_fns:\n",
        "  pos_args = tuple(\n",
        "    key for key, param in inspect.signature(metric_fn).parameters.items()\n",
        "    if param.default == inspect.Parameter.empty)\n",
        "  if pos_args == (\"targets\", \"scores\"):\n",
        "    score_metric_fns.append(metric_fn)\n",
        "  elif pos_args == (\"targets\", \"predictions\"):\n",
        "    predict_metric_fns.append(metric_fn)\n",
        "  elif pos_args == (\"targets\", \"predictions\", \"aux_values\"):\n",
        "    predict_with_aux_metric_fns.append(metric_fn)\n",
        "  else:\n",
        "    raise ValueError(\n",
        "      \"Metric functions must have positional arguments matching either \"\n",
        "      \"('targets', 'scores'), ('targets', 'predictions') or \"\n",
        "      \"('targets', 'predictions', 'aux_values'). \"\n",
        "      f\"Got: {pos_args}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_Ynv_W3aS3ai"
      },
      "source": [
        "**Run Inference** \\\n",
        "Next, we extract predictions and/or scores depending on the types of our metrics functions. We simply use our `infer_with_preprocessors` helper (in the InteractiveModel, we use the `self.infer_with_preprocessors` class method). For more details on inference in the InteractiveModel, please see https://github.com/google-research/text-to-text-transfer-transformer/blob/main/README.mdx-colab-inference."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ACBEyqlWS8EF"
      },
      "outputs": [],
      "source": [
        "# Get predictions.\n",
        "predictions = []\n",
        "if predict_with_aux_metric_fns or predict_metric_fns:\n",
        "  predictions, aux_values = infer_with_preprocessors(\n",
        "    mode=InferenceType.PREDICT_WITH_AUX,\n",
        "    examples=examples,\n",
        "    preprocessors=preprocessors)\n",
        "  predictions = [\n",
        "    prediction.decode(\"utf-8\") for example, prediction in predictions\n",
        "  ]\n",
        "\n",
        "# Get scores.\n",
        "scores = []\n",
        "if score_metric_fns:\n",
        "  scores, _ = infer_with_preprocessors(\n",
        "    mode=InferenceType.SCORE,\n",
        "    examples=examples,\n",
        "    preprocessors=preprocessors)\n",
        "  scores = [score for example, score in scores]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hTKh1RNNS_BS"
      },
      "source": [
        "**Compute Metrics** \\\n",
        "Finally, we define and call a helper function to compute metrics given our inputs, predictions/scores, targets, and metrics functions.\n",
        "\n",
        "This core functionality of this helper is fairly straightforward and is defined in the inner `compute_metrics_fn`. This function simply iterates over all the metrics functions, passing the correct inputs (predictions, scores, and/or auxiliary values) to each metrics function to calculate the value of that metric. We then create a dictionary mapping the metric name to the value of that metric.\n",
        "\n",
        "There is a bit of logic that wraps around this `compute_metrics_fn` that enables us to run these computations in a multihost environment. In particular, we ensure that we only calculate metrics once, and appropriately wrap `compute_metrics_fn` in a TF computation graph if necessary."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "1UyF8F47TCZI"
      },
      "outputs": [],
      "source": [
        "def compute_metrics(\n",
        "  targets: Sequence[Any], predictions: Sequence[Any],\n",
        "  aux_values: Sequence[Any], scores: Sequence[Any],\n",
        "  predict_metric_fns: Sequence[seqio.dataset_providers.MetricFnCallable],\n",
        "  predict_with_aux_metric_fns: Sequence[\n",
        "    seqio.dataset_providers.MetricFnCallable],\n",
        "  score_metric_fns: Sequence[seqio.dataset_providers.MetricFnCallable]):\n",
        "  \"\"\"Computes the metrics specified in the metric_fns lists.\"\"\"\n",
        "  # Only compute metrics once\n",
        "  if jax.process_index() != 0:\n",
        "    return {}\n",
        "\n",
        "  def compute_metrics_fn():\n",
        "    task_metrics = []\n",
        "    if predict_metric_fns:\n",
        "      task_metrics.extend([\n",
        "        metric_fn(targets, predictions) for metric_fn in predict_metric_fns\n",
        "      ])\n",
        "    if predict_with_aux_metric_fns:\n",
        "      task_metrics.extend([\n",
        "        metric_fn(targets, predictions, aux_values) for metric_fn in predict_with_aux_metric_fns\n",
        "      ])\n",
        "    if score_metric_fns:\n",
        "      is_tuple = isinstance(scores, tuple)\n",
        "      if ((not is_tuple and len(targets) != len(scores)) or\n",
        "          (is_tuple and len(targets) != len(scores[0]))):\n",
        "        raise ValueError(f\"len(targets)({len(targets)}) != \"\n",
        "                          f\"len(output_scores)({len(scores)})\")\n",
        "      task_metrics.extend([\n",
        "          metric_fn(targets, scores) for metric_fn in score_metric_fns\n",
        "        ])\n",
        "\n",
        "    all_metrics = {}\n",
        "    for k, v in itertools.chain(*[m.items() for m in task_metrics]):\n",
        "      if k in all_metrics:\n",
        "        raise ValueError(f\"Duplicate metric key '{k}' in Task.\")\n",
        "      all_metrics[k] = v\n",
        "    return all_metrics\n",
        "\n",
        "  if not tf.executing_eagerly():\n",
        "    def wrap_graph(fn):\n",
        "      graph = tf.compat.v1.get_default_graph()\n",
        "      def wrapped_fn():\n",
        "        with graph.as_default():\n",
        "          return fn()\n",
        "      return wrapped_fn\n",
        "    compute_metrics_fn = wrap_graph(compute_metrics_fn)\n",
        "\n",
        "  all_metrics = compute_metrics_fn()\n",
        "  # Wait until computations are done before continuing.\n",
        "  utils.sync_global_devices(\"Completed.\")\n",
        "  return all_metrics\n",
        "\n",
        "\n",
        "metrics = compute_metrics(\n",
        "        targets,\n",
        "        predictions,\n",
        "        aux_values,\n",
        "        scores,\n",
        "        predict_metric_fns,\n",
        "        predict_with_aux_metric_fns,\n",
        "        score_metric_fns)\n",
        "print(metrics)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "AB0U_kfRNIyR"
      },
      "source": [
        "The code snippets above exactly replicate the `InteractiveModel` `evaluate()` method (see [source code](https://github.com/google-research/t5x/blob/main/t5x/interactive_model.py)); running the code snippets above is exactly equivalent to running `interactive_model.evaluate(examples, preprocessors=[seqio.preprocessors.tokenize, seqio.preprocessors.append_eos], metric_fns=[t5_metrics.squad], postprocessor=t5.data.postprocessors.qa)`."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-QR5LnmN4ikp"
      },
      "source": [
        "# Advanced Topics"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CLstCKpP8Ge7"
      },
      "source": [
        "## T5X Evaluation Binaries and Other Advanced Features\n",
        "\n",
        "T5X offers evauation binaries that have the same functionality as the InteractiveModel, with additional features as well (more advanced compiling, etc.). Importantly, these binaries are configured using [Gin](https://github.com/google/gin-config/blob/main/README.md); if you are not familiar with Gin, please take a look at this [Gin Primer](https://github.com/google-research/t5x/blob/main/docs/usage.md/gin) to get started.\n",
        "\n",
        "If you are familiar with Gin and interested in using the T5X evaluation binaries, we have provided a helper function, `get_gin_config_from_interactive_model`, which will take an InteractiveModel instance and generate the gin config that you can use to run the T5X evaluation binaries; this gin config will exactly reproduce the InteractiveModel evaluation functionality we've described above. We've provided an example below.\n",
        "\n",
        "Importantly, the InteractiveModel takes in a model, partitioner, and data, so we cannot generate Gin configs for these components. You can pass Gin config strings for the model and partitioner components to the helper function, as demonstrated below. Additionally, you can pass a SeqIO task containing your data to the helper function. See the section below if you are unfamiliar with SeqIO."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "rhgUZ0w6yQsE"
      },
      "outputs": [],
      "source": [
        "# Define an InteractiveModel instance, based on the `small` T5X EncoderDecoder model.\n",
        "t5_config = network.T5Config(\n",
        "    vocab_size=32128,\n",
        "    dtype='bfloat16',\n",
        "    emb_dim=512,\n",
        "    num_heads=6,\n",
        "    num_encoder_layers=8,\n",
        "    num_decoder_layers=8,\n",
        "    head_dim=64,\n",
        "    mlp_dim=1024,\n",
        "    mlp_activations=('gelu', 'linear'),\n",
        "    dropout_rate=0.0,\n",
        "    logits_via_embedding=False)\n",
        "module = network.Transformer(config=t5_config)\n",
        "model = t5x.models.EncoderDecoderModel(\n",
        "    module=module,\n",
        "    input_vocabulary=t5.data.get_default_vocabulary(),\n",
        "    output_vocabulary=t5.data.get_default_vocabulary(),\n",
        "    optimizer_def=t5x.adafactor.Adafactor(decay_rate=0.8, step_offset=0),\n",
        "    decode_fn=functools.partial(\n",
        "        t5x.decoding.temperature_sample, temperature=1.0, topk=40))\n",
        "interactive_model = InteractiveModel(\n",
        "    batch_size=8,\n",
        "    task_feature_lengths={'inputs': 32, 'targets': 32},\n",
        "    output_dir='/tmp/',\n",
        "    partitioner=partitioning.PjitPartitioner(\n",
        "      num_partitions=1,\n",
        "      model_parallel_submesh=None,\n",
        "      logical_axis_rules=partitioning.standard_logical_axis_rules()),\n",
        "    model=model,\n",
        "    dtype='float32',\n",
        "    restore_mode='specific',\n",
        "    checkpoint_path='',\n",
        "    input_shapes={\n",
        "      'encoder_input_tokens': np.array([8, 38]),\n",
        "      'decoder_target_tokens': np.array([8, 18]),\n",
        "      'decoder_input_tokens': np.array([8, 18]),\n",
        "      'decoder_loss_weights': np.array([8, 18])\n",
        "    },\n",
        "    input_types=None)\n",
        "\n",
        "# Define Gin Config strings for the model, partitioner, and any imports.\n",
        "imports_str = \"\"\"from t5x import models\n",
        "from t5x import partitioning\n",
        "import t5.data.mixtures\n",
        "include 't5x/examples/t5/t5_1_1/tiny.gin'\"\"\"\n",
        "partitioner_config = 'partitioning.PjitPartitioner.num_partitions = 2'\n",
        "model_config = \"\"\"models.EncoderDecoderModel:\n",
        "z_loss = 0.0\n",
        "label_smoothing = 0.0\n",
        "loss_normalizing_factor = None\"\"\"\n",
        "\n",
        "gin_config_str = get_gin_config_from_interactive_model(\n",
        "    interactive_model=interactive_model,\n",
        "    script_type=T5XScriptType.EVALUATION,\n",
        "    task_name='wmt19_ende_v003',\n",
        "    partitioner_config_str=partitioner_config,\n",
        "    model_config_str=model_config,\n",
        "    train_steps=0,\n",
        "    imports_str=imports_str,\n",
        ")\n",
        "print(gin_config_str)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uGd1DxDT3gB7"
      },
      "source": [
        "Once you have generated the `gin_config_str` as above, you can write this string to a file and launch your evaluation experiment locally by running the following on commandline:\n",
        "\n",
        "\n",
        "```\n",
        "EVAL_OUTPUT_DIR=\"/tmp/eval-model/\"\n",
        "python -m t5x.train_unfragmented \\\n",
        "  --gin_file=${GIN_FILE_PATH} \\\n",
        "  --gin.EVAL_OUTPUT_DIR=\\\"${EVAL_OUTPUT_DIR}\\\" \\\n",
        "  --alsologtostderr\n",
        "```\n",
        "\n",
        "For more details on evaluation using the T5X evaluation binaries, please see the [Evaluation](https://github.com/google-research/t5x/blob/main/docs/usage.md/eval) tutorial."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wi29fMdv4mSr"
      },
      "source": [
        "## SeqIO\n",
        "\n",
        "If you are interested in T5X, you may also be interested in, or have heard of, SeqIO. SeqIO is a library for processing sequential data to be fed into downstream sequence models. At a high level, SeqIO relies on user-defined `Tasks` and `Mixtures` that can be used to retrieve and evaluate datasets.\n",
        "\n",
        "We won't go into details about SeqIO here; we recommend checking out this [SeqIO Introductory guide](https://github.com/google/seqio/blob/main/README.md/index) and/or clicking below to run a SeqIO Introductory Colab. The rest of this section will assume a basic understanding of SeqIO.\n",
        "\n",
        "\u003ca href=\"https://colab.research.google.com/github/google-research/seqio/blob/main/seqio/notebooks/Basics_Task_and_Mixtures.ipynb\" target=\"_parent\"\u003e\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/\u003e\u003c/a\u003e\n",
        "\n",
        "If you are already familiar with SeqIO and have a SeqIO task/mixture that you would like to use in this Colab, we do provide a SeqIO bridge that takes in a SeqIO task/mixture and produces batches of examples that can be processed by the code snippets above. We've provided an example of this bridge below."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "DLSwblIQ7ZCC"
      },
      "outputs": [],
      "source": [
        "!git clone https://github.com/google-research/google-research.git google_research"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bM0nRIEFwyj_"
      },
      "outputs": [],
      "source": [
        "import google_research.t5_closed_book_qa.t5_cbqa.tasks\n",
        "batches = get_batches_from_seqio(\n",
        "        task_or_mixture_name='natural_questions_open',\n",
        "        split='validation',\n",
        "        batch_size=8,\n",
        "        num_batches=2,\n",
        "        seed=42)\n",
        "print(f\"Batches: {batches}\")\n",
        "# Train the interactive model on the provided batches.\n",
        "original_step = interactive_model.step\n",
        "_ = interactive_model.train_loop(num_steps=len(batches), train_batches=batches)\n",
        "print(f\"Original Step: {original_step}, Current Step: {interactive_model.step}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Elt08160w03X"
      },
      "source": [
        "The `get_batches_from_seqio` bridge can take several constructor arguments:\n",
        "\n",
        "\n",
        "1.   `task_or_mixture_name`: the name of the SeqIO task/mixture to read data from. It should be noted that your task/mixture must already be registered with SeqIO, and you must import the module that defines your task/mixture here (as seen above).\n",
        "2.   `split`: the split of the Task/Mixture to read data from.\n",
        "3.   `batch_size`: how many examples should appear in each batch.\n",
        "4.   `num_batches`: the total number of batches to return.\n",
        "5.   `get_pretokenized_examples`: optional. A boolean, defaulting to True, that determines whether we should read the `inputs_pretokenized`/`targets_pretokenized` elements from an example, or the `inputs`/`targets` elements. \\\n",
        "The `train_step`, `predict`, `predict_with_aux`, `score`, and `evaluate` methods of the InteractiveModel assume that we should run [tokenization](https://github.com/google/seqio/tree/main/seqio/preprocessors.py) and [appending an EOS token](https://github.com/google/seqio/tree/main/seqio/preprocessors.py) as the only preprocessors. To use these methods with this pre-defined list of preprocessors, you can set `get_pretokenized_examples=True` to retrieve examples that still need to be tokenized, and these InteractiveModel methods will handle running these preprocessors. This setting can also be helpful if you want to inspect the natural text inputs/targets of your SeqIO task. \\\n",
        "However, some SeqIO tasks do not use tokenization (ex: span corruption). You can set `get_pretokenized_examples=False`, and this bridge will read the fully preprocessed examples from the SeqIO task. You can then run `train_step_with_preprocessors`, `infer_with_preprocessors`, or `evaluate_with_preprocessors` and provide an empty preprocessors list (because all preprocessing has already been completed by this bridge) to run training/inference/evaluation. We have provided an example of using this bridge to retrieve fully preprocessed examples below.\n",
        "6.   `sequence_length`: optional. A dictionary mapping feature key to maximum length (int) for that feature. Used by SeqIO to retrieve the dataset/examples.\n",
        "7.   `**get_dataset_kwargs`: there are many [additional parameters](https://github.com/google/seqio/tree/main/seqio/dataset_providers.py) that can be set in the `SeqIO.get_dataset` function. If you would like to set any of these arguments, you can set them using this `kwargs` parameter.\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "fjKBCX39w0Xl"
      },
      "outputs": [],
      "source": [
        "import t5.data.tasks\n",
        "batches = get_batches_from_seqio(\n",
        "    task_or_mixture_name='c4_v220_span_corruption',\n",
        "    split='validation',\n",
        "    batch_size=8,\n",
        "    num_batches=1,\n",
        "    get_pretokenized_examples=False,\n",
        "    sequence_length=interactive_model._task_feature_lengths,\n",
        "    seed=42)\n",
        "batch = batches[0]  # We expect only a single batch.\n",
        "original_step = interactive_model.step\n",
        "interactive_model.train_step_with_preprocessors(\n",
        "        examples=batch, preprocessors=[])\n",
        "print(f\"Original Step: {original_step}, Current Step: {interactive_model.step}\")"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "collapsed_sections": [
        "bqZYp90PIa1t",
        "lcDwmp_AxnOG"
      ],
      "last_runtime": {
        "build_target": "//learning/grp/tools/ml_python:ml_notebook",
        "kind": "private"
      },
      "name": "Welcome to T5X: Evaluation Deep Dive",
      "private_outputs": true,
      "provenance": [
        {
          "file_id": "18IRHbzIplnXwxF2ii10vFsqyRhPKBcWA",
          "timestamp": 1676344856728
        },
        {
          "file_id": "1hQO9MD6psZtTeqZyXPJIoUV0uzTa2qPg",
          "timestamp": 1662951508591
        },
        {
          "file_id": "1Akpc6pKlJB5rn5YYYFC9lw2OMk6oBzlQ",
          "timestamp": 1662754223629
        },
        {
          "file_id": "1rA8bgO2bJRoebAuS96Ji0RUhnawgBY4i",
          "timestamp": 1650477076639
        }
      ]
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
