{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ScitaPqhKtuW"
      },
      "source": [
        "##### Copyright 2022 The TensorFlow GNN Authors.\n",
        "\n",
        "Licensed under the Apache License, Version 2.0 (the \"License\");"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "hMqWDc_m6rUC"
      },
      "outputs": [],
      "source": [
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "# https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "udvGTpefWRE_"
      },
      "source": [
        "# Solving OGBN-MAG end-to-end with TF-GNN\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ev9vJpM94c3i"
      },
      "source": [
        "\u003ctable class=\"tfo-notebook-buttons\" align=\"left\"\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/gnn/blob/master/examples/notebooks/ogbn_mag_e2e.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /\u003eRun in Google Colab\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca target=\"_blank\" href=\"https://github.com/tensorflow/gnn/blob/main/examples/notebooks/ogbn_mag_e2e.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /\u003eView on GitHub\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "\u003c/table\u003e"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rEvXnZOrWRC2"
      },
      "source": [
        "### Abstract\n",
        "\n",
        "[Graph Neural Networks](https://github.com/tensorflow/gnn/blob/main/tensorflow_gnn/docs/guide/intro.md) (GNNs) are a powerful tool for deep learning on relational data. This tutorial introduces the two main tools required to train GNNs at scale:\n",
        "\n",
        "1. *Graph Sampler*: The [graph sampler](https://github.com/tensorflow/gnn/blob/main/tensorflow_gnn/sampler/graph_sampler.py) helps to efficiently sample subgraphs from huge graphs.\n",
        "2. *The Runner*: Also known as the Orchestrator, [the runner](https://github.com/tensorflow/gnn/blob/main/tensorflow_gnn/docs/guide/runner.md) orchestrates the end-to-end training of GNNs with minimal coding. The runner is a high-level abstraction for training GNN models provided by the TensorFlow GNN (TF-GNN) library.\n",
        "\n",
        "This tutorial is intended for ML practitioners with a basic understanding of GNNs."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TkPEzhxOV_XF"
      },
      "source": [
        "## Colab set-up"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "of62D-OeBRZ6"
      },
      "outputs": [],
      "source": [
        "!pip install -q tensorflow-gnn || echo \"Ignoring package errors...\""
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "executionInfo": {
          "elapsed": 10039,
          "status": "ok",
          "timestamp": 1711472628551,
          "user": {
            "displayName": "",
            "userId": ""
          },
          "user_tz": -60
        },
        "id": "oA4_zh0EyNHv",
        "outputId": "8b415cad-86b7-4169-cc9f-2dd9f6b02f2c"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Running TF-GNN 1.0.2 under TensorFlow 2.15.0.\n"
          ]
        }
      ],
      "source": [
        "import functools\n",
        "import itertools\n",
        "import os\n",
        "import re\n",
        "from typing import Mapping\n",
        "os.environ[\"TF_USE_LEGACY_KERAS\"] = \"1\"  # For TF2.16+.\n",
        "\n",
        "import tensorflow as tf\n",
        "import tensorflow_gnn as tfgnn\n",
        "from tensorflow_gnn import runner\n",
        "from tensorflow_gnn.experimental import sampler\n",
        "from tensorflow_gnn.models import mt_albis\n",
        "tf.get_logger().setLevel('ERROR')\n",
        "\n",
        "print(f\"Running TF-GNN {tfgnn.__version__} under TensorFlow {tf.__version__}.\")\n",
        "\n",
        "NUM_TRAINING_SAMPLES = 629571\n",
        "NUM_VALIDATION_SAMPLES = 64879"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-WzKtCIdys-I"
      },
      "source": [
        "## Introduction"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "B0Thnw2VYjsi"
      },
      "source": [
        "### Problem statement and dataset\n",
        "\n",
        "OGBN-MAG is [Open Graph Benchmark](https://ogb.stanford.edu)'s Node classification task on a subset of the [Microsoft Academic Graph](https://www.microsoft.com/en-us/research/publication/microsoft-academic-graph-when-experts-are-not-enough/).\n",
        "\n",
        "The OGBN-MAG dataset is one big heterogeneous graph. The graph has four sets (or types) of nodes.\n",
        "\n",
        "  * Node set \"paper\" contains 736,389 published academic papers, each with a 128-dimensional word2vec feature vector computed by averaging the embeddings of the words in its title and abstract.\n",
        "  * Node set \"field_of_study\" contains 59,965 fields of study, with no associated features.\n",
        "  * Node set \"author\" contains the 1,134,649 distinct authors of the papers, with no associated features.\n",
        "  * Node set \"institution\" contains 8740 institutions listed as affiliations of authors, with no associated features.\n",
        "\n",
        "The graph has four sets (or types) of directed edges, with no associated features on any of them.\n",
        "\n",
        "  * Edge set \"cites\" contains 5,416,217 edges from papers to the papers they cite.\n",
        "  * Edge set \"has_topic\" contains 7,505,078 edges from papers to their zero or more fields of study.\n",
        "  * Edge set \"writes\" contains 7,145,660 edges from authors to the papers that list them as authors.\n",
        "  * Edge set \"affiliated_with\" contains 1,043,998 edges from authors to the zero or more institutions that have been listed as their affiliation(s) on any paper.\n",
        "\n",
        "The task is to **predict the venue** (journal or conference) at which each of the papers has been published. There are 349 distinct venues, not represented in the graph itself. The benchmark metric is the accuracy of the predicted venue.\n",
        "\n",
        "Results for this benchmark confirm that the graph structure provides a lot of relevant but \"latent\" information. Baseline models that only use the one explicit input feature (the word2vec embedding of a paper's title and abstract) perform less well.\n",
        "\n",
        "OGBN-MAG defines a split of node set \"papers\" into **train, validation and test nodes**, based on its \"year\" feature:\n",
        "\n",
        "  * \"train\" has the 629,571 papers with `year\u003c=2017`,\n",
        "  * \"validation\" has the 64,879 papers with `year==2018`, and\n",
        "  * \"test\" has the 41,939 papers with `year==2019`.\n",
        "\n",
        "However, under OGB rules, training may happen on the full graph, just restricted to predictions on the \"train\" nodes. We follow that for consistency in benchmarking. However, users working on their own datasets may wish to validate and test with a more realistic separation between training data from the past and evaluation data representative of future inputs for prediction."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fGC9V4AZfXhs"
      },
      "source": [
        "### Approach\n",
        "\n",
        "OGBN-MAG asks us to classify each of the \"paper\" nodes. The number of nodes is on the order of a million, and we intuit that the most informative other nodes are found just a few hops away (cited papers, papers with a common author, etc.).\n",
        "\n",
        "Therefore, and to stay scalable for even bigger datasets, we approach this task with **graph sampling**: Each \"paper\" node becomes one training example, expressed by a subgraph that has the node to be classified as its root and stores a sample of its neighborhood in the original graph. The sample is taken by going out a fixed number of steps along specific edge sets, and randomly downsampling the edges in each step if they are too numerous.\n",
        "\n",
        "The actual **TensorFlow model** runs on batches of these sampled subgraphs, applies a Graph Neural Network to propagate information from related nodes towards the root node of each batch, and then applies a softmax classifier to predict one of 349 classes (each venue is a class).\n",
        "\n",
        "The exponential fan-out of graph sampling quickly gets expensive. Sampling and model should be designed together to make the most of the available information in carefully sampled subgraphs.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XKDrbUvvyx4u"
      },
      "source": [
        "## Data preparation and graph sampling\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PH6ntgDMzKy2"
      },
      "source": [
        "### Preparing the graph"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kFF1w8sGzM6m"
      },
      "source": [
        "We provide the entire OGBN-MAG graph data cast as a TF-GNN graph tensor as input to the graph sampler. The command below loads the entire OGBN-MAG graph as a single graph tensor from the already-saved serialized TensorFlow Example message (subject to this [license](https://storage.googleapis.com/download.tensorflow.org/data/ogbn-mag/npz/LICENSE.txt)). Additionally, it loads the supporting OGBN-MAG graph schema.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tf827zVTzlke"
      },
      "outputs": [],
      "source": [
        "GRAPH_TENSOR_FILE = 'gs://download.tensorflow.org/data/ogbn-mag/sampled/v2/graph_tensor.example.pb'\n",
        "SCHEMA_FILE = 'gs://download.tensorflow.org/data/ogbn-mag/sampled/v2/graph_schema.pbtxt'\n",
        "\n",
        "graph_schema = tfgnn.read_schema(SCHEMA_FILE)\n",
        "serialized_ogbn_mag_graph_tensor_string = tf.io.read_file(GRAPH_TENSOR_FILE)\n",
        "\n",
        "full_ogbn_mag_graph_tensor = tfgnn.parse_single_example(\n",
        "    tfgnn.create_graph_spec_from_schema_pb(graph_schema, indices_dtype=tf.int64),\n",
        "    serialized_ogbn_mag_graph_tensor_string)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dWzU2OMH1CAi"
      },
      "source": [
        "### Sampling"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2KMfvO5334aQ"
      },
      "source": [
        "As the OGBN-MAG graph is huge, we sample subgraphs from it to facilitate training on batches of subgraphs.\n",
        "\n",
        "The sampling we have chosen for OGBN-MAG proceeds as follows:\n",
        "\n",
        "  1. Start from all \"paper\" (seed) nodes.\n",
        "  2. For each paper from 1, follow a random sample of \"cites\" edges to other \"paper\" nodes.\n",
        "  3. For each paper from 1 or 2, follow a random sample of \"rev_writes\" edges to \"author\" nodes.\n",
        "  4. For each author from 3, follow a random sample of \"writes\" edges to more \"paper\" nodes.\n",
        "  5. For each author from 3, follow a random sample of \"affiliated_with\" edges to \"institution\" nodes.\n",
        "  6. For each paper from 1, 2 or 4, follow a random sample of \"has_topic\" edges to \"field_of_study\" nodes.\n",
        "\n",
        "Below, we spell out the above sampling strategy in easy-to-read Python code."
      ]
    },
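    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The fan-out implied by steps 1-6 grows quickly. As a hypothetical back-of-the-envelope sketch (plain Python, assuming the worst case that every node yields the full sample of 8 edges at each step), the maximum number of nodes per sampled subgraph is:\n",
        "\n",
        "```python\n",
        "fanout = 8\n",
        "seed_papers = 1                                   # step 1\n",
        "cited_papers = seed_papers * fanout               # step 2: 8\n",
        "authors = (seed_papers + cited_papers) * fanout   # step 3: 72\n",
        "more_papers = authors * fanout                    # step 4: 576\n",
        "institutions = authors * fanout                   # step 5: 576\n",
        "fields = (seed_papers + cited_papers + more_papers) * fanout  # step 6: 4680\n",
        "total = (seed_papers + cited_papers + authors\n",
        "         + more_papers + institutions + fields)\n",
        "print(total)  # 5913\n",
        "```\n",
        "\n",
        "Real subgraphs are smaller (duplicate nodes, and nodes with fewer edges than the sample size), but the bound shows why sampling sizes and the number of hops must be chosen with care."
      ]
    },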
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "galyedlOokoh"
      },
      "outputs": [],
      "source": [
        "train_sampling_sizes = {\n",
        "    \"cites\": 8,\n",
        "    \"rev_writes\": 8,\n",
        "    \"writes\": 8,\n",
        "    \"affiliated_with\": 8,\n",
        "    \"has_topic\": 8,\n",
        "}\n",
        "validation_sample_sizes = train_sampling_sizes.copy()\n",
        "\n",
        "def create_sampling_model(\n",
        "    full_graph_tensor: tfgnn.GraphTensor, sizes: Mapping[str, int]\n",
        ") -\u003e tf.keras.Model:\n",
        "\n",
        "  def edge_sampler(sampling_op: tfgnn.sampler.SamplingOp):\n",
        "    edge_set_name = sampling_op.edge_set_name\n",
        "    sample_size = sizes[edge_set_name]\n",
        "    return sampler.InMemUniformEdgesSampler.from_graph_tensor(\n",
        "        full_graph_tensor, edge_set_name, sample_size=sample_size\n",
        "    )\n",
        "\n",
        "  def get_features(node_set_name: tfgnn.NodeSetName):\n",
        "    return sampler.InMemIndexToFeaturesAccessor.from_graph_tensor(\n",
        "        full_graph_tensor, node_set_name\n",
        "    )\n",
        "\n",
        "  # Spell out the sampling procedure in python\n",
        "  sampling_spec_builder = tfgnn.sampler.SamplingSpecBuilder(graph_schema)\n",
        "  seed = sampling_spec_builder.seed(\"paper\")\n",
        "  papers_cited_from_seed = seed.sample(sizes[\"cites\"], \"cites\")\n",
        "  authors_of_papers = papers_cited_from_seed.join([seed]).sample(sizes[\"rev_writes\"], \"rev_writes\")\n",
        "  papers_by_authors = authors_of_papers.sample(sizes[\"writes\"], \"writes\")\n",
        "  institutions = authors_of_papers.sample(sizes[\"affiliated_with\"], \"affiliated_with\")\n",
        "  fields_of_study = (seed.join([papers_cited_from_seed, papers_by_authors]).sample(sizes[\"has_topic\"], \"has_topic\"))\n",
        "  sampling_spec = sampling_spec_builder.build()\n",
        "\n",
        "  model = sampler.create_sampling_model_from_spec(\n",
        "      graph_schema, sampling_spec, edge_sampler, get_features,\n",
        "      seed_node_dtype=tf.int64)\n",
        "\n",
        "  return model\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lEJO_1P_qszq"
      },
      "source": [
        "\n",
        "Notice how our sampler allows sampling edge sets in the reverse direction by setting `add_reverse_edge_sets=True` while loading `full_ogbn_mag_graph_tensor`. The edge set `rev_writes` is derived from the edge set `writes` of the original OGBN-MAG graph, which goes from node set `paper` to node set `author`.\n",
        "\n",
        "The sampling output contains all nodes and edges traversed by sampling, in their respective node/edge sets and with their associated features. An edge between two sampled nodes that exists in the input graph but has not been traversed by sampling is not included in the sampled output. For example, we get the `cites` edges followed in step 2, but no edges for citations between the papers discovered in step 4.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hZpeVtalnkHc"
      },
      "source": [
        "### Data Split Preparation\n",
        "\n",
        "Under [OGB rules](https://ogb.stanford.edu/docs/leader_rules/), we can sample subgraphs for the training, validation and test dataset from the full graph, just with different seed nodes, selected by the year of publication. We define the `seed_dataset` responsible for providing the seeds for the different splits. (Models for production systems should probably use separate validation and test data, to prevent leakage of their seed nodes into the sampled subgraphs of other splits.)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "BTxuIge-7tIi"
      },
      "outputs": [],
      "source": [
        "def seed_dataset(years: tf.Tensor, split_name: str) -\u003e tf.data.Dataset:\n",
        "  \"\"\"Seed dataset as indices of papers within split years.\"\"\"\n",
        "  if split_name == \"train\":\n",
        "    mask = years \u003c= 2017  # 629,571 examples\n",
        "  elif split_name == \"validation\":\n",
        "    mask = years == 2018  # 64,879 examples\n",
        "  elif split_name == \"test\":\n",
        "    mask = years == 2019  # 41,939 examples\n",
        "  else:\n",
        "    raise ValueError(f\"Unknown split_name: '{split_name}'\")\n",
        "  seed_indices = tf.squeeze(tf.where(mask), axis=-1)\n",
        "  return tf.data.Dataset.from_tensor_slices(seed_indices)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Ocesc1MwsKbE"
      },
      "source": [
        "Next, we combine the `seed_dataset` with the sampling model to obtain the `SubgraphDatasetProvider`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tWn0oQZlZf_u"
      },
      "outputs": [],
      "source": [
        "class SubgraphDatasetProvider(runner.DatasetProvider):\n",
        "  \"\"\"Dataset Provider based on Sampler V2.\"\"\"\n",
        "\n",
        "  def __init__(self,\n",
        "               full_graph_tensor: tfgnn.GraphTensor,\n",
        "               sizes: Mapping[str, int],\n",
        "               split_name: str):\n",
        "    super().__init__()\n",
        "    # Extract years of publication of all papers for determining seeds.\n",
        "    self._years = tf.squeeze(full_graph_tensor.node_sets[\"paper\"][\"year\"], axis=-1)\n",
        "    self._sampling_model = create_sampling_model(full_graph_tensor, sizes)\n",
        "    self._split_name = split_name\n",
        "    self.input_graph_spec = self._sampling_model.output.spec\n",
        "\n",
        "  def get_dataset(self, context: tf.distribute.InputContext) -\u003e tf.data.Dataset:\n",
        "    \"\"\"Creates TF dataset.\"\"\"\n",
        "    self._seed_dataset = seed_dataset(self._years, self._split_name)\n",
        "    ds = self._seed_dataset.shard(\n",
        "        num_shards=context.num_input_pipelines, index=context.input_pipeline_id)\n",
        "    if self._split_name == \"train\":\n",
        "      ds = ds.shuffle(NUM_TRAINING_SAMPLES).repeat()\n",
        "    # samples 128 subgraphs in parallel. Larger is better, but could cause OOM.\n",
        "    ds = ds.batch(128)\n",
        "    ds = ds.map(\n",
        "        functools.partial(self.sample),\n",
        "        num_parallel_calls=tf.data.AUTOTUNE,\n",
        "        deterministic=False,\n",
        "    )\n",
        "    return ds.unbatch().prefetch(tf.data.AUTOTUNE)\n",
        "\n",
        "  def sample(self, seeds: tf.Tensor) -\u003e tfgnn.GraphTensor:\n",
        "    seeds = tf.cast(seeds, tf.int64)\n",
        "    batch_size = tf.size(seeds)\n",
        "    # samples subgraphs for each seed independently as [[seed1], [seed2], ...]\n",
        "    seeds_ragged = tf.RaggedTensor.from_row_lengths(\n",
        "        seeds, tf.ones([batch_size], tf.int64),\n",
        "    )\n",
        "    return self._sampling_model(seeds_ragged)\n",
        "\n",
        "train_ds_provider = SubgraphDatasetProvider(full_ogbn_mag_graph_tensor, train_sampling_sizes, \"train\")\n",
        "valid_ds_provider = SubgraphDatasetProvider(full_ogbn_mag_graph_tensor, validation_sample_sizes, \"validation\")\n",
        "example_input_graph_spec = train_ds_provider.input_graph_spec._unbatch()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Us-JmYuj0BuX"
      },
      "source": [
        "## Distributed Training\n",
        "\n",
        "We use TensorFlow's [Distribution Strategy](https://www.tensorflow.org/guide/distributed_training) API to write a model that can run on multiple TPUs, multiple GPUs, or maybe just locally on CPU.\n",
        "\n",
        "For CloudTPU, the following code assumes the Colab runtime type \"TPU v2\", that is, a TPU VM. Do not use the runtime type \"TPU (deprecated)\", which uses a TPU Node on a separate VM."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "executionInfo": {
          "elapsed": 26820,
          "status": "ok",
          "timestamp": 1711472717800,
          "user": {
            "displayName": "",
            "userId": ""
          },
          "user_tz": -60
        },
        "id": "2oBuJEZ3izQm",
        "outputId": "db98cb52-837a-4552-cf63-606cb88ffa25"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Using TPUStrategy\n",
            "Found 8 replicas in sync\n"
          ]
        }
      ],
      "source": [
        "if tf.config.list_physical_devices(\"TPU\"):\n",
        "  print(\"Using TPUStrategy\")\n",
        "  min_nodes_per_component = {\"paper\": 1}\n",
        "  strategy = runner.TPUStrategy(\"local\")\n",
        "  train_padding = runner.FitOrSkipPadding(example_input_graph_spec, train_ds_provider, min_nodes_per_component)\n",
        "  valid_padding = runner.TightPadding(example_input_graph_spec, valid_ds_provider, min_nodes_per_component)\n",
        "elif tf.config.list_physical_devices(\"GPU\"):\n",
        "  print(\"Using MirroredStrategy for GPUs\")\n",
        "  gpu_list = !nvidia-smi -L\n",
        "  print(\"\\n\".join(gpu_list))\n",
        "  strategy = tf.distribute.MirroredStrategy()\n",
        "  train_padding = None\n",
        "  valid_padding = None\n",
        "else:\n",
        "  print(\"Using default strategy\")\n",
        "  strategy = tf.distribute.get_strategy()\n",
        "  train_padding = None\n",
        "  valid_padding = None\n",
        "print(f\"Found {strategy.num_replicas_in_sync} replicas in sync\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fpoYdZZvm_q5"
      },
      "source": [
        "As you might have noticed above, we need to provide a padding strategy when training on TPUs. Next, we explain why TPUs need padding and the different padding strategies employed during training and validation."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZpAI2o77me97"
      },
      "source": [
        "### Padding (for TPUs)\n",
        "\n",
        "\n",
        "Training on TPUs involves just-in-time compilation of a TensorFlow model to TPU code, and requires *fixed shapes* for all Tensors involved. To achieve that for graph data with variable numbers of nodes and edges, we need to pad each input Tensor to some fixed maximum size. For training on GPUs or CPU, this extra step is not necessary."
      ]
    },
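    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To illustrate the idea, here is a schematic in plain Python (not the TF-GNN API): padding flattens a batch of variable-sized pieces into a fixed-size buffer plus a mask that marks which entries are real.\n",
        "\n",
        "```python\n",
        "def pad_to_total_size(node_lists, total_size, pad_value=-1):\n",
        "  # Flatten the variable-sized per-example node lists into one batch.\n",
        "  flat = [n for nodes in node_lists for n in nodes]\n",
        "  assert len(flat) \u003c= total_size, 'batch exceeds the fixed size budget'\n",
        "  num_fake = total_size - len(flat)\n",
        "  # Fixed-shape result: real entries first, then padding; mask marks real ones.\n",
        "  return (flat + [pad_value] * num_fake,\n",
        "          [True] * len(flat) + [False] * num_fake)\n",
        "\n",
        "padded, mask = pad_to_total_size([[1, 2], [3]], total_size=5)\n",
        "print(padded)  # [1, 2, 3, -1, -1]\n",
        "print(mask)    # [True, True, True, False, False]\n",
        "```\n",
        "\n",
        "TF-GNN's padding helpers perform the analogous operation on every node set, edge set and feature of a `GraphTensor`, and the mask lets losses and metrics ignore the padded entries."
      ]
    },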
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "07yxbjE4zBkJ"
      },
      "source": [
        "#### TightPadding: Padding for the validation dataset\n",
        "\n",
        "For the validation dataset, we need to make sure that every batch of examples fits within the fixed size, no matter how the parallelism in the input pipeline ends up combining examples into batches. Therefore, we use a rather generous estimate, basically scaling each Tensor's observed maximum size by a factor of `batch_size`. If that were to run into limitations of accelerator memory, we'd rather shrink the batch size than lose examples.\n",
        "\n",
        "The dataset in this example is not too big, so we can scan it within a few minutes to determine constraints large enough for all inputs. (For huge datasets under your control, it may be worth inferring an upper bound from the sampling spec instead.)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mK91KBlwzCa2"
      },
      "source": [
        "\n",
        "#### FitOrSkipPadding: Padding for the training dataset\n",
        "\n",
        "For the training dataset, TF-GNN allows you to optimize more aggressively for large batch sizes: size constraints satisfied by 100% of the inputs have to accommodate the rare combination of many large examples in one batch.\n",
        "\n",
        "Instead, we use size constraints that will fit *close to* 100% of the randomly drawn training batches. This is not covered by the theory supporting stochastic gradient descent (which calls for examples drawn independently at random), but in practice, it often works, and allows larger batch sizes within the limits of accelerator memory, and hence faster convergence of the training."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "f885-5yS0xKk"
      },
      "source": [
        "## Model Building and Training\n",
        "\n",
        "We build a model on sampled subgraphs that predicts one of 349 classes (venues) for the subgraph's root node. We use a Graph Neural Network (GNN) to propagate information along edge sets towards the subgraph's root node."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0vTTJFMN0xKw"
      },
      "source": [
        "Observe how the various node sets play different roles:\n",
        "\n",
        "  * Node set \"paper\" has many nodes. It contains the node to predict on. Some of its nodes are linked by \"cites\" edges, which seem relevant for the prediction task. Its nodes also carry the only input feature besides adjacency, namely the word2vec embedding of title and abstract.\n",
        "  * Node set \"author\" also has many nodes. Authors have no features of their own, but having an author in common provides a seemingly relevant relation between papers.\n",
        "  * Node set \"field_of_study\" has relatively few nodes. They have no features by themselves, but having a common field of study provides a seemingly relevant relation between papers.\n",
        "  * Node set \"institution\" has relatively few nodes. It provides an additional relation on authors.\n",
        "\n",
        "For node sets \"paper\" and \"author\", we follow the standard GNN approach to maintain a hidden state for each node and update it several times with information from the inbound edges. Notice how sampling has equipped each \"paper\" or \"author\" adjacent to the root node with a 1-hop neighborhood of its own. Our model does 4 rounds of updates, which covers the longest possible path in a sampled subgraph: a seed paper \"cites\" a paper that was written by (\"rev_writes\") an author who \"writes\" another paper that \"has_topic\" in some field of study.\n",
        "\n",
        "For node sets \"field_of_study\" and \"institution\", a GNN on the full graph could produce meaningful hidden states for their few elements in the same way. However, in the sampled approach, it seems wasteful to do that from scratch for every subgraph. Instead, our model reads hidden states for them out of an embedding table. This way, the GNN can treat them as read-only nodes with outgoing edges only; the writing happens implicitly by gradient updates to their embeddings. (We choose to maintain a single embedding shared between the rounds of GNN updates.) Notice how this modeling decision directly influences the sampling spec."
      ]
    },
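    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A minimal sketch of this pattern (illustrative names and sizes, not the exact code used later in this notebook): featureless node sets get their initial hidden state from a trainable embedding table that is shared across all rounds of GNN updates.\n",
        "\n",
        "```python\n",
        "import tensorflow as tf\n",
        "\n",
        "# Hypothetical state size; one shared table, reused by every GNN round.\n",
        "fields_embedding = tf.keras.layers.Embedding(50_000, 32)\n",
        "\n",
        "def set_initial_node_state(node_set, *, node_set_name):\n",
        "  if node_set_name == 'field_of_study':\n",
        "    # Read-only state: gradients flow into the table, not per-subgraph states.\n",
        "    return fields_embedding(node_set['hashed_id'])\n",
        "  if node_set_name == 'paper':\n",
        "    return tf.keras.layers.Dense(32)(node_set['feat'])\n",
        "  ...\n",
        "```\n",
        "\n",
        "Such a callback would typically be passed as `node_sets_fn` to `tfgnn.keras.layers.MapFeatures` when building the model's input layer."
      ]
    },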
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4YrmILM-0xKw"
      },
      "source": [
        "## Process Features\n",
        "\n",
        "Usually in TensorFlow, the non-trainable transformations of the input features are split off into a `Dataset.map()` call while the main model consists of the trainable and accelerator-compatible parts. However, even this non-trainable part is put into a Keras model, which is a convenient way to track resources (such as lookup tables) for exporting to a SavedModel."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4Zmv7lc18JTF"
      },
      "source": [
        "### Feature Preprocessing\n",
        "\n",
        "Typically, feature preprocessing happens locally on nodes and edges. TF-GNN strives to reuse standard Keras implementations for this.  The `tfgnn.keras.layers.MapFeatures` layer lets you express feature transformations on the graph as a collection of feature transformations for the various graph pieces (node sets, edge sets, and context).\n",
        "\n",
        "At this stage, the eventual training label is still a feature on the `GraphTensor`. If necessary, it could also be preprocessed (e.g., turn a string-valued class label into a numeric id), but that's not the case here.\n",
        "The training `Task` (defined below) splits the label out of the `GraphTensor`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "VoCArT3s8aFv"
      },
      "outputs": [],
      "source": [
        "# For nodes\n",
        "def process_node_features(node_set: tfgnn.NodeSet, node_set_name: str):\n",
        "  if node_set_name == \"field_of_study\":\n",
        "    return {\"hashed_id\": tf.keras.layers.Hashing(50_000)(node_set[\"#id\"])}\n",
        "  if node_set_name == \"institution\":\n",
        "    return {\"hashed_id\": tf.keras.layers.Hashing(6_500)(node_set[\"#id\"])}\n",
        "  if node_set_name == \"paper\":\n",
        "    # Keep `labels` for eventual extraction.\n",
        "    return {\"feat\": node_set[\"feat\"], \"label\": node_set[\"label\"]}\n",
        "  if node_set_name == \"author\":\n",
        "    return {\"empty_state\": tfgnn.keras.layers.MakeEmptyFeature()(node_set)}\n",
        "  raise KeyError(f\"Unexpected node_set_name='{node_set_name}'\")\n",
        "\n",
        "# For context and edges, in this example, we drop all features.\n",
        "def drop_all_features(_, **unused_kwargs):\n",
        "  return {}\n",
        "\n",
        "# The combined feature preprocessing of context, edges and nodes.\n",
        "process_features = tfgnn.keras.layers.MapFeatures(\n",
        "    context_fn=drop_all_features,\n",
        "    node_sets_fn=process_node_features,\n",
        "    edge_sets_fn=drop_all_features)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Tju6tg3swpI5"
      },
      "source": [
        "### Readout structure and labels\n",
        "\n",
        "GNNs can be applied to a wide range of problems, and it depends on the problem\n",
        "which nodes have the hidden state(s) from which a prediction can be made. For node classification on sampled subgraphs, we want to read out the final hidden state from the seed node of each subgraph. By convention, the sampler stores the seed as the first `\"paper\"` node in each subgraph, but recall there are multiple of them in a training batch.\n",
        "\n",
        "The `AddReadoutFromFirstNode` helper lets us express readout from seeds by adding the following **readout structure**  (explained further in the [Schema](https://github.com/tensorflow/gnn/blob/main/tensorflow_gnn/docs/guide/schema.md#about-labels-and-reading-out-the-final-gnn-states) and [Data Preparation](https://github.com/tensorflow/gnn/blob/main/tensorflow_gnn/docs/guide/data_prep.md#readout) guides):\n",
        "\n",
        "  * a node set `\"_readout\"` with as many nodes as there are sampled subgraphs in the training batch;\n",
        "  * an edge set `\"_readout/seed\"` that connects the seed of the *i*-th sampled subgraph to the *i*-th readout node.\n",
        "\n",
        "The GNN model itself ignores auxiliary graph pieces like these whose names starts with an underscore.\n",
        "\n",
        "The readout structure is also useful for handling labels: Originally provided as node features, the labels need to be read out from the seed nodes as well, and they need to be removed from the node features seen by the model. The `StructuredReadoutIntoFeature` helper does just that: it creates a new feature with the read-out labels on the `\"_readout\"` node set (conveniently aligned with the eventual predictions) and optionally deletes the original label feature.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "VQJgy_ypHfBe"
      },
      "outputs": [],
      "source": [
        "add_readout = tfgnn.keras.layers.AddReadoutFromFirstNode(\n",
        "    \"seed\", node_set_name=\"paper\")\n",
        "move_label_to_readout = tfgnn.keras.layers.StructuredReadoutIntoFeature(\n",
        "    \"seed\", feature_name=\"label\", new_feature_name=\"paper_venue\",\n",
        "    remove_input_feature=True)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "_btPx4aEHe_E"
      },
      "outputs": [],
      "source": [
        "# The complete list of feature processors.\n",
        "feature_processors = [\n",
        "    process_features,\n",
        "    add_readout,\n",
        "    move_label_to_readout,\n",
        "]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "p5xYcdg21UrW"
      },
      "source": [
        "## Model Architecture\n",
        "\n",
        "Typically, a model with a GNN architecture as its base consists of three parts:\n",
        "\n",
        "1. The initialization of hidden states on nodes (and possibly also edges and/or the graph context) from their respective preprocessed features.\n",
        "2. The base Graph Neural Network: several rounds of updating hidden states from neighboring items in the graph.\n",
        "3.  A prediction head, such as a linear classifier, applied to the final hidden states read out from the nodes of interest.\n",
        "\n",
        "We are going to use one model for training, validation, and export for inference, so we need to build it from an input type spec with generic tensor shapes. (For TPUs, it's good enough if each *dataset* it gets called on has fixed-size elements.) Before defining the base Graph Neural Network, we show how to initialize the hidden states of all the necessary components (nodes, edges and context) given the pre-processed features."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UV0EYzwH1UrX"
      },
      "source": [
        "### Initialization of Hidden States\n",
        "\n",
        "The hidden states on nodes are created by mapping a dict of (preprocessed) features to fixed-size hidden states for nodes.  It often makes sense to send input features through a small encoder network, like the `Dense` layer applied below to the `\"feat\"` of paper nodes.\n",
        "\n",
        "Similarly to feature preprocessing, the `tfgnn.keras.layers.MapFeatures` layer lets you specify such a transformation as a callback function that transforms feature dicts, with GraphTensor mechanics taken off your shoulders."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "mIYE-MZreZhg"
      },
      "outputs": [],
      "source": [
        "# Hyperparameters\n",
        "node_state_dim = 128\n",
        "\n",
        "def set_initial_node_states(node_set: tfgnn.NodeSet, node_set_name: str):\n",
        "  if node_set_name == \"field_of_study\":\n",
        "    return tf.keras.layers.Embedding(50_000, 32)(node_set[\"hashed_id\"])\n",
        "  if node_set_name == \"institution\":\n",
        "    return tf.keras.layers.Embedding(6_500, 16)(node_set[\"hashed_id\"])\n",
        "  if node_set_name == \"paper\":\n",
        "    return tf.keras.layers.Dense(node_state_dim, \"relu\")(node_set[\"feat\"])\n",
        "  if node_set_name == \"author\":\n",
        "    return node_set[\"empty_state\"]\n",
        "  raise KeyError(f\"Unexpected node_set_name='{node_set_name}'\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VJw0eMdn-Qjb"
      },
      "source": [
        "It is important to understand the distinction between feature pre-processing and hidden state intialization despite the fact that both of the steps are defined using `tfgnn.keras.layers.MapFeatures`. Feature pre-processing step is non-trainable and occurs asynchronous to the training loop. On the other hand, hidden state initialization is trainable and occurs on the corresponding accelerator."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ntB2gdCSOgfK"
      },
      "source": [
        "### Base Graph Neural Network\n",
        "\n",
        "After the hidden states have been initialized, we pass the graph through the base Graph Neural Network, which is a sequence of GraphUpdates. Each GraphUpdate inputs a GraphTensor and returns a GraphTensor with the same graph structure, but the hidden states of nodes have been updated using the information of the neighbor nodes. In our example, the input examples are sampled subgraphs with up to 4 hops, so we perform 4 rounds of graph updates which suffice to bring all information into the root node.\n",
        "\n",
        "Here, we use TF-GNN's Model Template version A, code-named [MtAlbis](https://github.com/tensorflow/gnn/tree/main/tensorflow_gnn/models/mt_albis). It provides a curated shortlist of modeling options, and we invite our users to try this one before exploring the other choices offered in [tensorflow_gnn/models](https://github.com/tensorflow/gnn/blob/main/tensorflow_gnn/models/README.md) or building their own as described in the [Modeling guide](https://github.com/tensorflow/gnn/blob/main/tensorflow_gnn/docs/guide/gnn_modeling.md).\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "oPGXAH5-gI-X"
      },
      "outputs": [],
      "source": [
        "# Hyperparameters\n",
        "num_graph_updates = 4\n",
        "message_dim = 128\n",
        "state_dropout_rate = 0.2\n",
        "l2_regularization = 1e-5\n",
        "\n",
        "def model_fn(graph_tensor_spec: tfgnn.GraphTensorSpec):\n",
        "  graph = inputs = tf.keras.layers.Input(type_spec=graph_tensor_spec)\n",
        "  graph = tfgnn.keras.layers.MapFeatures(\n",
        "      node_sets_fn=set_initial_node_states)(graph)\n",
        "  for i in range(num_graph_updates):\n",
        "    graph = mt_albis.MtAlbisGraphUpdate(\n",
        "        units=node_state_dim,\n",
        "        message_dim=message_dim,\n",
        "        receiver_tag=tfgnn.SOURCE,\n",
        "        node_set_names=None if i \u003c num_graph_updates-1 else [\"paper\"],\n",
        "        simple_conv_reduce_type=\"mean|sum\",\n",
        "        state_dropout_rate=state_dropout_rate,\n",
        "        l2_regularization=l2_regularization,\n",
        "        normalization_type=\"layer\",\n",
        "        next_state_type=\"residual\",\n",
        "    )(graph)\n",
        "  return tf.keras.Model(inputs, graph)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3ShrnDjwP1Yi"
      },
      "source": [
        "An important parameter to set in the GraphUpdate layer is the `receiver_tag`. To determine this tag, it is important to understand the difference between `tfgnn.SOURCE` and `tfgnn.TARGET`. *Source* indictates the node from where an edge originates while *Target* indicates the node to which an edge points to.\n",
        "\n",
        "The graph sampler starts sampling from the root node (one can think of the root node as the main source of the subgraph) and stores edges in the direction of their discovery while sampling. Given this construct, the GNN needs to send information in the reverse direction towards the root. In other words, the information needs to be propagated towards the `SOURCE` of each edge, so that it can reach and update the hidden state of the root. Thus, we set the `receiver_tag` to be `tfgnn.SOURCE`.\n",
        "\n",
        "An interesting observation arising from the fact that `receiver_tag=tfgnn.SOURCE` is that since the node sets `\"field_of_study\"` and `\"institution\"` have no outgoing edge sets, the `MtAlbisGraphUpdate` does not change their hidden states: these remain the embedding tables from node state initialization. The other node sets have their hidden states computed in a GraphUpdate: `\"paper\"` in all four rounds, `\"author\"` in all rounds but the last (because that hidden state has no opportunity to influence the final state of `\"paper\"`).\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zvmDEwiSgLmn"
      },
      "source": [
        "### The Task\n",
        "\n",
        "A `Task` object defines the learning objective for a GNN and defines the model pieces that are needed around the `model_fn` to adapt it to the task at hand.\n",
        "\n",
        "The library defines `Task` subclasses for a variety of standard prediction tasks, including the suitable training loss and matching metrics. Here, we use `NodeMulticlassClassification`, because our task is to predict one of many mutually exclusive classes (venues) for the nodes (papers) of interest. The `Node*` tasks rely on the readout structure in model's input graph to identify the nodes of interest and to provide the labels for them. (Recall how the code above set up structured readout from the root nodes of sampled subgraphs and moved their labels there.)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "BEG5ydF-gVbV"
      },
      "outputs": [],
      "source": [
        "task = runner.NodeMulticlassClassification(\n",
        "    num_classes=349,\n",
        "    label_feature_name=\"paper_venue\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Zv2B5cGNgYSs"
      },
      "source": [
        "## The Trainer\n",
        "\n",
        "A Trainer provides any training and validation loops. These may be uses of `tf.keras.Model.fit` or arbitrary custom training loops. The Trainer provides accesors to training properties (like its `tf.distribute.Strategy` and model_dir) and is expected to return a trained tf.keras.Model."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0_xTJ5MqgXkE"
      },
      "outputs": [],
      "source": [
        "# Hyperparameters\n",
        "global_batch_size = 128\n",
        "epochs = 10\n",
        "initial_learning_rate = 0.001\n",
        "if tf.config.list_physical_devices(\"TPU\"):\n",
        "  # Training on TPU takes ~130 secs / epoch, so we train for the entire epoch.\n",
        "  epoch_divisor = 1\n",
        "else:\n",
        "  # Training on GPU / CPU is slower, so we train for 1/100th of a true epoch.\n",
        "  # Feel free to edit the `epoch_divisor` according to your patience and ambition. ;-)\n",
        "  epoch_divisor = 100\n",
        "steps_per_epoch = NUM_TRAINING_SAMPLES // global_batch_size // epoch_divisor\n",
        "validation_steps = NUM_VALIDATION_SAMPLES // global_batch_size // epoch_divisor\n",
        "learning_rate = tf.keras.optimizers.schedules.CosineDecay(\n",
        "    initial_learning_rate, steps_per_epoch*epochs)\n",
        "optimizer_fn = functools.partial(tf.keras.optimizers.Adam,\n",
        "                                  learning_rate=learning_rate)\n",
        "\n",
        "# Trainer\n",
        "trainer = runner.KerasTrainer(\n",
        "    strategy=strategy,\n",
        "    model_dir=\"/tmp/gnn_model/\",\n",
        "    callbacks=None,\n",
        "    steps_per_epoch=steps_per_epoch,\n",
        "    validation_steps=validation_steps,  # \u003c\u003c\u003c Remove if not training for real.\n",
        "    restore_best_weights=False,\n",
        "    checkpoint_every_n_steps=\"never\",\n",
        "    summarize_every_n_steps=\"never\",\n",
        "    backup_and_restore=False,\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_RegDhFQCgzZ"
      },
      "source": [
        "## Export options for inference\n",
        "\n",
        "For inference, a SavedModel must be exported by the runner at the end of training. C++ inference environments like TF Serving do not support input of extension types like GraphTensor, so the `KerasModelExporter` exports the model with a SavedModel Signature that accepts a batch of serialized tf.Examples and preprocesses them like training did."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "1d0USa1dCRgX"
      },
      "outputs": [],
      "source": [
        "model_exporter = runner.KerasModelExporter(output_names=\"paper_venue_logits\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XMV2m59sgyLX"
      },
      "source": [
        "## Let the Runner do its magic!\n",
        "\n",
        "Orchestration (a term for the composition, wiring and execution of the above abstractions) happens via a single run method with following signature shown below.\n",
        "\n",
        "Training for 10 epochs of sampled subgraphs takes a few hours on a free colab with one GPU (T4) and should achieve an accuracy above 0.50. Training with the free Cloud TPU runtime is *much* faster, and completes the entire training within 20 mins.\n",
        "\n",
        "You can drive accuracy even higher by training a bigger model for longer: setting `node_state_dim = 256; message_dim = 256; epochs = 20` should take your val_sparse_categorical_accuracy above 0.52.\n",
        "\n",
        "NOTE: It take ~4 minutes before training starts on TPU to learn optimal TPU padding constraints."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "executionInfo": {
          "elapsed": 427499,
          "status": "ok",
          "timestamp": 1711474246342,
          "user": {
            "displayName": "",
            "userId": ""
          },
          "user_tz": -60
        },
        "id": "Ay2hhL3d0dZz",
        "outputId": "70fa9f6c-a2c5-4bfa-a0ef-c8ad295c50cb"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Epoch 1/10\n",
            "4918/4918 [==============================] - 170s 34ms/step - loss: 2.6135 - sparse_categorical_accuracy: 0.3245 - sparse_categorical_crossentropy: 2.7248 - val_loss: 2.0817 - val_sparse_categorical_accuracy: 0.4239 - val_sparse_categorical_crossentropy: 2.1490\n",
            "Epoch 2/10\n",
            "4918/4918 [==============================] - 129s 26ms/step - loss: 2.1057 - sparse_categorical_accuracy: 0.4225 - sparse_categorical_crossentropy: 2.1676 - val_loss: 2.0053 - val_sparse_categorical_accuracy: 0.4330 - val_sparse_categorical_crossentropy: 2.0561\n",
            "Epoch 3/10\n",
            "4918/4918 [==============================] - 128s 26ms/step - loss: 1.9673 - sparse_categorical_accuracy: 0.4541 - sparse_categorical_crossentropy: 2.0124 - val_loss: 1.8902 - val_sparse_categorical_accuracy: 0.4703 - val_sparse_categorical_crossentropy: 1.9283\n",
            "Epoch 4/10\n",
            "4918/4918 [==============================] - 130s 26ms/step - loss: 1.8787 - sparse_categorical_accuracy: 0.4740 - sparse_categorical_crossentropy: 1.9149 - val_loss: 1.8447 - val_sparse_categorical_accuracy: 0.4803 - val_sparse_categorical_crossentropy: 1.8784\n",
            "Epoch 5/10\n",
            "4918/4918 [==============================] - 129s 26ms/step - loss: 1.8062 - sparse_categorical_accuracy: 0.4904 - sparse_categorical_crossentropy: 1.8378 - val_loss: 1.8227 - val_sparse_categorical_accuracy: 0.4787 - val_sparse_categorical_crossentropy: 1.8559\n",
            "Epoch 6/10\n",
            "4918/4918 [==============================] - 130s 26ms/step - loss: 1.7416 - sparse_categorical_accuracy: 0.5043 - sparse_categorical_crossentropy: 1.7708 - val_loss: 1.7801 - val_sparse_categorical_accuracy: 0.4919 - val_sparse_categorical_crossentropy: 1.8128\n",
            "Epoch 7/10\n",
            "4918/4918 [==============================] - 133s 27ms/step - loss: 1.6856 - sparse_categorical_accuracy: 0.5167 - sparse_categorical_crossentropy: 1.7136 - val_loss: 1.7456 - val_sparse_categorical_accuracy: 0.4999 - val_sparse_categorical_crossentropy: 1.7787\n",
            "Epoch 8/10\n",
            "4918/4918 [==============================] - 130s 26ms/step - loss: 1.6424 - sparse_categorical_accuracy: 0.5263 - sparse_categorical_crossentropy: 1.6700 - val_loss: 1.7497 - val_sparse_categorical_accuracy: 0.4955 - val_sparse_categorical_crossentropy: 1.7849\n",
            "Epoch 9/10\n",
            "4918/4918 [==============================] - 131s 27ms/step - loss: 1.6112 - sparse_categorical_accuracy: 0.5332 - sparse_categorical_crossentropy: 1.6382 - val_loss: 1.7343 - val_sparse_categorical_accuracy: 0.5013 - val_sparse_categorical_crossentropy: 1.7693\n",
            "Epoch 10/10\n",
            "4918/4918 [==============================] - 132s 27ms/step - loss: 1.5959 - sparse_categorical_accuracy: 0.5372 - sparse_categorical_crossentropy: 1.6224 - val_loss: 1.7417 - val_sparse_categorical_accuracy: 0.4991 - val_sparse_categorical_crossentropy: 1.7773\n"
          ]
        },
        {
          "data": {
            "text/plain": [
              "RunResult(preprocess_model=\u003ckeras.src.engine.functional.Functional object at 0x7fed485eee90\u003e, base_model=\u003ckeras.src.engine.sequential.Sequential object at 0x7febfa0b3280\u003e, trained_model=\u003ckeras.src.engine.functional.Functional object at 0x7fec8811a9e0\u003e)"
            ]
          },
          "execution_count": 17,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "runner.run(\n",
        "    gtspec=example_input_graph_spec,\n",
        "    train_ds_provider=train_ds_provider,\n",
        "    train_padding=train_padding,\n",
        "    valid_ds_provider=valid_ds_provider,  # \u003c\u003c\u003c Remove if not training for real.\n",
        "    valid_padding=valid_padding,  # \u003c\u003c\u003c Remove if not training for real.\n",
        "    global_batch_size=global_batch_size,\n",
        "    epochs=epochs,\n",
        "    feature_processors=feature_processors,\n",
        "    model_fn=model_fn,\n",
        "    task=task,\n",
        "    optimizer_fn=optimizer_fn,\n",
        "    trainer=trainer,\n",
        "    model_exporters=[model_exporter],\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6B825Ydn6Vn5"
      },
      "source": [
        "## Inference using Exported Model\n",
        "At the end of training, a SavedModel is exported by the Runner for inference. For demonstration, let's call the exported model on the validation dataset from above, but without labels. We load it as a SavedModel, like TF Serving would.\n",
        "\n",
        "NOTE: TF Serving usually expects examples in form of serialized strings, therefore we explicitly convert the graph tensors to serialized string format and pass it to the loaded model.\n",
        "\n",
        "\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "executionInfo": {
          "elapsed": 51166,
          "status": "ok",
          "timestamp": 1711474297507,
          "user": {
            "displayName": "",
            "userId": ""
          },
          "user_tz": -60
        },
        "id": "ki33s9EpsQnF",
        "outputId": "8e6a7ba6-514e-4dda-f96f-deea16d185b1"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "The predicted class for input 0 is   9 with predicted probability 0.3137\n",
            "The predicted class for input 1 is 281 with predicted probability 0.2777\n",
            "The predicted class for input 2 is 189 with predicted probability 0.4749\n",
            "The predicted class for input 3 is 158 with predicted probability 0.9535\n",
            "The predicted class for input 4 is  82 with predicted probability 0.3277\n",
            "The predicted class for input 5 is 247 with predicted probability 0.299\n",
            "The predicted class for input 6 is 209 with predicted probability 0.4056\n",
            "The predicted class for input 7 is 247 with predicted probability 0.593\n",
            "The predicted class for input 8 is 192 with predicted probability 0.5478\n",
            "The predicted class for input 9 is 311 with predicted probability 0.7335\n"
          ]
        }
      ],
      "source": [
        "# Load model.\n",
        "saved_model = tf.saved_model.load(os.path.join(trainer.model_dir, \"export\"))\n",
        "signature_fn = saved_model.signatures[\n",
        "    tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]\n",
        "\n",
        "def _clean_example_for_serving(graph_tensor):\n",
        "  graph_tensor = graph_tensor.remove_features(node_sets={\"paper\": [\"label\"]})\n",
        "  serialized_example = tfgnn.write_example(graph_tensor)\n",
        "  return serialized_example.SerializeToString()\n",
        "\n",
        "# Convert 10 examples to serialized string format.\n",
        "num_examples = 10\n",
        "demo_ds = valid_ds_provider.get_dataset(tf.distribute.InputContext())\n",
        "serialized_examples = [_clean_example_for_serving(gt)\n",
        "                       for gt in itertools.islice(demo_ds, num_examples)]\n",
        "\n",
        "# Inference on 10 examples\n",
        "ds = tf.data.Dataset.from_tensor_slices(serialized_examples)\n",
        "# The name \"examples\" for serialized tf.Example protos is defined by the runner.\n",
        "input_dict = {\"examples\": next(iter(ds.batch(10)))}\n",
        "\n",
        "# Outputs are in the form of logits.\n",
        "output_dict = signature_fn(**input_dict)\n",
        "logits = output_dict[\"paper_venue_logits\"]  # As configured above.\n",
        "probabilities = tf.math.softmax(logits).numpy()\n",
        "classes = probabilities.argmax(axis=1)\n",
        "\n",
        "# Print the predicted classes\n",
        "for i, c in enumerate(classes):\n",
        "  print(f\"The predicted class for input {i} is {c:3} \"\n",
        "        f\"with predicted probability {probabilities[i, c]:.4}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mtM1CfDO0wBF"
      },
      "source": [
        "## Next steps"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "s89Zwsuk0yXE"
      },
      "source": [
        "This tutorial has shown how to solve a node classification problem in a large graph with TF-GNN using\n",
        "  * the graph sampler tool to obtain manageable-sized inputs for each classification target,\n",
        "  * the Runner for training GNNs with minimal coding.\n",
        "\n",
        "The [Data Preparation and Sampling](https://github.com/tensorflow/gnn/blob/main/tensorflow_gnn/docs/guide/data_prep.md) guide describes how you can create training data for other datasets.\n",
        "\n",
        "The colab notebook [An in-depth look at TF-GNN](https://colab.research.google.com/github/tensorflow/gnn/blob/main/examples/notebooks/ogbn_mag_indepth.ipynb) solves OGBN-MAG again, but without the abstractions provided by the Runner and the ready-to-use MtAlbis model. Take a look if you like to know more, or want more control in designing GNNs for your own task.\n",
        "\n",
        "For more complete documentation, please check out the [TF-GNN documentation](https://github.com/tensorflow/gnn/blob/main/tensorflow_gnn/docs/guide/overview.md).\n",
        "\n"
      ]
    }
  ],
  "metadata": {
    "accelerator": "TPU",
    "colab": {
      "collapsed_sections": [
        "ScitaPqhKtuW"
      ],
      "name": "Solving OGBN-MAG end-to-end with TF-GNN",
      "gpuType": "V28",
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
