{
  "cells": [
    {
      "metadata": {
        "id": "GRVFIVqME6-y"
      },
      "cell_type": "markdown",
      "source": [
        "## Shardy:MPMD intro for JAX users\n",
        "\n",
        "Shardy:MPMD is a new MPMD partitioning system, built in MLIR and integrated on top of JAX.\n",
        "\n",
        "This colab demonstrates how to use MPMD pipelining for JAX users who use `jax.jit`. See our RFC for more details.\n",
        "\n",
        "**Note**: This colab is purely read-only, and cannot be executed until we\n",
        "fully open source all the components.\n",
        "\n",
        "## Overview\n",
        "This colab starts by\n",
        "1. Defining a simplified Transformer (without the encode and decode stages) in SPMD with jax.jit and some sharding, and then\n",
        "2. Demonstrates how to pipeline it using MPMD using different schedules.\n",
        "\n",
        "\n",
        "\n",
        "### Set up\n",
        "We connect to the Pathways server, inspect the devices, and load each slice into its own mesh. Mesh names are \"m0\", \"m1\", ...\n",
        "\n",
        "This colab assumes we have 8 devices."
      ]
    },
    {
      "metadata": {
        "cellView": "form",
        "colab": {
          "height": 34
        },
        "executionInfo": {
          "elapsed": 9633,
          "status": "ok",
          "timestamp": 1750238530459,
          "user": {
            "displayName": "",
            "userId": ""
          },
          "user_tz": -60
        },
        "id": "wi_JwPCj7sIQ",
        "outputId": "b0de7a9a-a4f6-44a2-ccac-ee79b1be9179"
      },
      "cell_type": "code",
      "source": [
        "# @title Imports and connect to Pathways server { form-width: \"80px\" }\n",
        "pathways_server_xid = 171321046  # @param {type: \"number\"}\n",
        "\n",
        "from pprint import pprint\n",
        "\n",
        "import jax\n",
        "import jax.numpy as jnp\n",
        "import numpy as np\n",
        "\n",
        "import mpmd # Shardy MPMD python lib\n",
        "import pathways_launch\n",
        "\n",
        "jax.config.update('jax_use_shardy_partitioner', True)\n",
        "\n",
        "# Mock API to connect to Pathways on Cloud TPUs.\n",
        "pathways_launch.connect(pathways_server_xid)"
      ],
      "outputs": [
        {
          "data": {
            "application/javascript": [
              "window[\"b6151d44-4c25-11f0-bd64-088bc8784d85\"] = document.createElement(\"div\");\n",
              "//# sourceURL=js_058079fe8e"
            ],
            "text/plain": [
              "\u003cIPython.core.display.Javascript object\u003e"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/javascript": [
              "window[\"b6151d44-4c25-11f0-bd64-088bc8784d85\"].innerHTML = \"Pathways status will be updated here.\";\n",
              "//# sourceURL=js_0ef992c96d"
            ],
            "text/plain": [
              "\u003cIPython.core.display.Javascript object\u003e"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/javascript": [
              "window[\"b61583b7-4c25-11f0-a077-088bc8784d85\"] = window[\"b6151d44-4c25-11f0-bd64-088bc8784d85\"].setAttribute(\"id\", \"pathways-status-bar\");\n",
              "//# sourceURL=js_35a7a4e4be"
            ],
            "text/plain": [
              "\u003cIPython.core.display.Javascript object\u003e"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/javascript": [
              "window[\"b615acb5-4c25-11f0-8481-088bc8784d85\"] = document.body.appendChild(window[\"b6151d44-4c25-11f0-bd64-088bc8784d85\"]);\n",
              "//# sourceURL=js_461e80b98d"
            ],
            "text/plain": [
              "\u003cIPython.core.display.Javascript object\u003e"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/javascript": [
              "window[\"b615d788-4c25-11f0-b2ba-088bc8784d85\"] = colab.registerListener(\"pathways-colab-status-update\", function (m, p) {\n",
              "        document.getElementById(\"pathways-status-bar\").innerHTML = p.value\n",
              "      });\n",
              "//# sourceURL=js_0d0de3a76c"
            ],
            "text/plain": [
              "\u003cIPython.core.display.Javascript object\u003e"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "execution_count": null
    },
    {
      "metadata": {
        "cellView": "form",
        "executionInfo": {
          "elapsed": 142,
          "status": "ok",
          "timestamp": 1750238530840,
          "user": {
            "displayName": "",
            "userId": ""
          },
          "user_tz": -60
        },
        "id": "gL1zSm_bHa_a",
        "outputId": "e62cef22-be07-4452-ce02-0239cece5727"
      },
      "cell_type": "code",
      "source": [
        "# @title Check devices:\n",
        "print(f\"Total num devices: {len(jax.devices())}\")\n",
        "mesh = jax.sharding.Mesh(np.array(jax.devices()).reshape(4,2), (\"stage\", \"data\"))\n",
        "print(\"Base mesh: \", mesh)"
      ],
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Total num devices: 8\n",
            "Base mesh:  Mesh('stage': 4, 'data': 2, axis_types=(Auto, Auto))\n"
          ]
        }
      ],
      "execution_count": null
    },
    {
      "metadata": {
        "cellView": "form",
        "executionInfo": {
          "elapsed": 6,
          "status": "ok",
          "timestamp": 1750242951228,
          "user": {
            "displayName": "",
            "userId": ""
          },
          "user_tz": -60
        },
        "id": "RnGKKfAo3Gui",
        "outputId": "85681e68-f0cb-4e15-9788-b35cba9fe1fa"
      },
      "cell_type": "code",
      "source": [
        "# @title Set up basic topology and assignment\n",
        "topology = {}\n",
        "for i in range(mesh.devices.shape[0]):\n",
        "  topology[f\"m{i}\"] = jax.sharding.Mesh(\n",
        "      mesh.devices[i].reshape(1, 2), (\"stage\", \"data\")\n",
        "  )\n",
        "\n",
        "\n",
        "print(\"MPMD topology: \")\n",
        "pprint(topology)"
      ],
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "MPMD topology: \n",
            "{'m0': Mesh(device_ids=array([[0, 1]]), axis_names=('stage', 'data'), axis_types=(Auto, Auto)),\n",
            " 'm1': Mesh(device_ids=array([[2, 3]]), axis_names=('stage', 'data'), axis_types=(Auto, Auto)),\n",
            " 'm2': Mesh(device_ids=array([[4, 5]]), axis_names=('stage', 'data'), axis_types=(Auto, Auto)),\n",
            " 'm3': Mesh(device_ids=array([[6, 7]]), axis_names=('stage', 'data'), axis_types=(Auto, Auto))}\n"
          ]
        }
      ],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "b78de054"
      },
      "cell_type": "markdown",
      "source": [
        "### Define a basic Transformer and util functions"
      ]
    },
    {
      "metadata": {
        "executionInfo": {
          "elapsed": 1467,
          "status": "ok",
          "timestamp": 1750238532801,
          "user": {
            "displayName": "",
            "userId": ""
          },
          "user_tz": -60
        },
        "id": "f7edc83b",
        "outputId": "4bda094c-9502-4ac8-fd71-45e9fb4b65ee"
      },
      "cell_type": "code",
      "source": [
        "import flax.linen as nn\n",
        "\n",
        "# Define model parameters\n",
        "BATCH_SIZE = 2\n",
        "SEQ_LEN = 8 * 1024\n",
        "D_MODEL = 1024\n",
        "MLP_DIM = 4 * D_MODEL\n",
        "NUM_LAYERS = 24\n",
        "\n",
        "\n",
        "class Block(nn.Module):\n",
        "\n",
        "  @nn.remat\n",
        "  @nn.jit\n",
        "  @nn.compact\n",
        "  def __call__(self, x):\n",
        "    attn_output = nn.MultiHeadDotProductAttention(num_heads=8, qkv_features=16)(\n",
        "        x\n",
        "    )\n",
        "    x = x + attn_output\n",
        "    x = nn.LayerNorm()(x)\n",
        "\n",
        "    # Feed-forward network\n",
        "    mlp_output = nn.Dense(features=MLP_DIM)(x)\n",
        "    mlp_output = nn.gelu(mlp_output)\n",
        "    mlp_output = nn.Dense(features=x.shape[-1])(mlp_output)\n",
        "    x = x + mlp_output\n",
        "    x = nn.LayerNorm()(x)\n",
        "\n",
        "    return x\n",
        "\n",
        "\n",
        "class Transformer(nn.Module):\n",
        "\n",
        "  @nn.compact\n",
        "  def __call__(self, x):\n",
        "    for i in range(NUM_LAYERS):\n",
        "      x = Block(name=f\"block_{i}\")(x)\n",
        "    return x\n",
        "\n",
        "\n",
        "# Initialize the model's parameters\n",
        "dummy_input = jnp.ones((BATCH_SIZE, SEQ_LEN, D_MODEL))\n",
        "transformer = Transformer()\n",
        "key = jax.random.PRNGKey(0)\n",
        "params = transformer.init(key, dummy_input)[\"params\"]\n",
        "\n",
        "print(\"Model initialized successfully!\")"
      ],
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Model initialized successfully!\n"
          ]
        }
      ],
      "execution_count": null
    },
    {
      "metadata": {
        "executionInfo": {
          "elapsed": 860,
          "status": "ok",
          "timestamp": 1750238533880,
          "user": {
            "displayName": "",
            "userId": ""
          },
          "user_tz": -60
        },
        "id": "fSeY2hlmiHcf",
        "outputId": "e39c9d0f-106b-457f-efcc-3410e52ec6a2"
      },
      "cell_type": "code",
      "source": [
        "from flax.training import train_state\n",
        "import optax\n",
        "import time\n",
        "\n",
        "optimizer = optax.adamw(learning_rate=0.001)\n",
        "state = train_state.TrainState.create(\n",
        "    apply_fn=transformer.apply, params=params, tx=optimizer\n",
        ")\n",
        "\n",
        "\n",
        "# Define the training step.\n",
        "def train_step(state, xs, targets):\n",
        "  \"\"\"Trains the model for one step.\"\"\"\n",
        "\n",
        "  def loss_fn(params, x, targets):\n",
        "    predictions = state.apply_fn({\"params\": params}, x)\n",
        "    return jnp.mean((predictions - targets) ** 2)\n",
        "\n",
        "  loss_acc, grads_acc = None, None\n",
        "  for x in xs:\n",
        "    loss, grads = jax.value_and_grad(loss_fn)(state.params, x, targets)\n",
        "\n",
        "    loss_acc = loss if loss_acc is None else loss_acc + loss\n",
        "    grads_acc = (\n",
        "        grads\n",
        "        if grads_acc is None\n",
        "        else jax.tree.map(lambda x, y: x + y, grads_acc, grads)\n",
        "    )\n",
        "\n",
        "  state = state.apply_gradients(grads=grads_acc)\n",
        "  return state, loss_acc\n",
        "\n",
        "\n",
        "def train_with_progress(train_step, inputs, num_steps=3):\n",
        "  updated_state, x, targets = inputs\n",
        "  training_loss = None\n",
        "  # Warmup\n",
        "  jax.block_until_ready(train_step(updated_state, x, targets))\n",
        "\n",
        "  start_time = time.perf_counter()\n",
        "  for i in range(num_steps):\n",
        "    updated_state, training_loss = train_step(updated_state, x, targets)\n",
        "    if i % 2 == 1:\n",
        "      print(f\"Training loss after step {i+1}: {training_loss}\")\n",
        "\n",
        "  jax.block_until_ready(updated_state)\n",
        "  end_time = time.perf_counter()\n",
        "  print(f\"Final training loss: {training_loss}\")\n",
        "  print(f\"Training took: {end_time - start_time:.2f} seconds\")\n",
        "\n",
        "print(\"Model util functions initialized.\")"
      ],
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Model util functions initialized.\n"
          ]
        }
      ],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "egkwmZ_F-5jv"
      },
      "cell_type": "markdown",
      "source": [
        "### Run the Transformer"
      ]
    },
    {
      "metadata": {
        "executionInfo": {
          "elapsed": 73,
          "status": "ok",
          "timestamp": 1750238534167,
          "user": {
            "displayName": "",
            "userId": ""
          },
          "user_tz": -60
        },
        "id": "taljasTD6jaW",
        "outputId": "f30be200-25b3-4219-b659-7770a4beb978"
      },
      "cell_type": "code",
      "source": [
        "# Set up inputs.\n",
        "# We set the microbatches to num_pipeline stages as that's what we'll\n",
        "# use for the pipelining.\n",
        "NUM_PIPELINE_STAGE = len(topology)\n",
        "NUM_MB = NUM_PIPELINE_STAGE\n",
        "print(\"Num microbatches: \", NUM_MB)\n",
        "\n",
        "xs = tuple([jnp.ones_like(dummy_input)] * NUM_MB)\n",
        "inputs = (state, xs, dummy_input)\n",
        "\n",
        "def get_param_sharding(x):\n",
        "  if len(getattr(x, \"shape\", [])) \u003e 0:\n",
        "    return jax.sharding.NamedSharding(\n",
        "        mesh,\n",
        "        jax.sharding.PartitionSpec((\"stage\", \"data\")),\n",
        "    )\n",
        "  else:\n",
        "    return jax.sharding.NamedSharding(\n",
        "        mesh,\n",
        "        jax.sharding.PartitionSpec(),\n",
        "    )\n",
        "\n",
        "# Data parallel + ZeRO 3 sharding on stage + data.\n",
        "in_shardings = (\n",
        "    jax.tree.map(get_param_sharding, state),\n",
        "    jax.sharding.NamedSharding(\n",
        "        mesh,\n",
        "        jax.sharding.PartitionSpec(\"data\"),\n",
        "    ),\n",
        "    jax.sharding.NamedSharding(\n",
        "        mesh,\n",
        "        jax.sharding.PartitionSpec(\"data\"),\n",
        "    ),\n",
        ")"
      ],
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Num microbatches:  4\n"
          ]
        }
      ],
      "execution_count": null
    },
    {
      "metadata": {
        "executionInfo": {
          "elapsed": 28092,
          "status": "ok",
          "timestamp": 1750238562476,
          "user": {
            "displayName": "",
            "userId": ""
          },
          "user_tz": -60
        },
        "id": "n0ihV0Rrlopd",
        "outputId": "9adbc0cb-40c5-46a0-b90d-94099ca1ce3e"
      },
      "cell_type": "code",
      "source": [
        "# Simple SPMD training with micro-batching.\n",
        "jitted_train_step = jax.jit(train_step, in_shardings=in_shardings)\n",
        "compiled = jitted_train_step.lower(*inputs).compile()\n",
        "sharded_inputs = jax.device_put(inputs, in_shardings)\n",
        "\n",
        "train_with_progress(compiled, sharded_inputs)"
      ],
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Training loss after step 2: 7.991689205169678\n",
            "Final training loss: 7.980075836181641\n",
            "Training took: 10.06 seconds\n"
          ]
        }
      ],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "i6OGKboWRU_p"
      },
      "cell_type": "markdown",
      "source": [
        "Profile:\n",
        "\n",
        "![spmd_xprof](https://raw.githubusercontent.com/openxla/shardy/main/rfcs/images/2025-06-18-mpmd-rfc/spmd_xprof.png)"
      ]
    },
    {
      "metadata": {
        "id": "BASIbmr4-80Q"
      },
      "cell_type": "markdown",
      "source": [
        "### Pipeline the transformer"
      ]
    },
    {
      "metadata": {
        "executionInfo": {
          "elapsed": 274,
          "status": "ok",
          "timestamp": 1750238562980,
          "user": {
            "displayName": "",
            "userId": ""
          },
          "user_tz": -60
        },
        "id": "BgeDHfpvphxh",
        "outputId": "743bd5ce-f6b8-4359-fcab-54c097339971"
      },
      "cell_type": "code",
      "source": [
        "# To use MPMD, annotate the transformer and use mpmd.call instead of a for loop.\n",
        "class AnnotatedTransformer(nn.Module):\n",
        "\n",
        "  @nn.compact\n",
        "  def __call__(self, x):\n",
        "    for i in range(NUM_LAYERS):\n",
        "      x = mpmd.flax.named_computation(Block, name=f\"block_{i}\")()(x)\n",
        "    return x\n",
        "\n",
        "\n",
        "def mpmd_train_step(state, xs, targets):\n",
        "  \"\"\"Trains the model for one step with mpmd microbatching.\"\"\"\n",
        "\n",
        "  def loss_fn(params, x):\n",
        "    predictions = state.apply_fn({\"params\": params}, x)\n",
        "    return jnp.mean((predictions - targets) ** 2)\n",
        "\n",
        "  carry = jnp.zeros(()), jax.tree.map(jnp.zeros_like, state.params)\n",
        "\n",
        "  # Accumulation is inside the mpmd.call, to ensure that the accumulation\n",
        "  # is done as we go along. E.g. instead of at the end, which would be bad\n",
        "  # for memory.\n",
        "  def microbatch_step(carry, params ,x):\n",
        "    val_and_grad = jax.value_and_grad(loss_fn)(params, x)\n",
        "    carry = jax.tree.map(lambda x, y: x + y, carry, val_and_grad)\n",
        "    return carry\n",
        "\n",
        "  for i, x in enumerate(xs):\n",
        "    # Note the mpmd.call here, with call counter, wrapping the accumulation\n",
        "    # function.\n",
        "    carry = mpmd.call(microbatch_step, call_counter=i)(carry, state.params, x)\n",
        "\n",
        "  loss_acc, grads_acc = carry\n",
        "  state = state.apply_gradients(grads=grads_acc)\n",
        "  return state, loss_acc\n",
        "\n",
        "\n",
        "annotated_transformer = AnnotatedTransformer()\n",
        "annotated_params = annotated_transformer.init(key, dummy_input)[\"params\"]\n",
        "annotated_state = train_state.TrainState.create(\n",
        "    apply_fn=annotated_transformer.apply, params=annotated_params, tx=optimizer\n",
        ")\n",
        "annotated_placeholder_inputs = (annotated_state, xs, dummy_input)\n",
        "\n",
        "basic_assignment = {}\n",
        "for i in range(NUM_LAYERS):\n",
        "  layers_per_mesh = NUM_LAYERS // len(topology)\n",
        "  mesh_idx = min(i // layers_per_mesh, len(topology) - 1)\n",
        "  basic_assignment[f\"block_{i}\"] = f\"m{mesh_idx}\"\n",
        "\n",
        "\n",
        "print(\"Name to mesh assignment:\")\n",
        "pprint(basic_assignment)\n",
        "\n",
        "mpmd_config = mpmd.make_config(\n",
        "    topology=topology,\n",
        "    name_to_mesh_assignment=basic_assignment,\n",
        "    partitioning_options=mpmd.make_partitioning_options({\n",
        "        \"mpmd_pipeline_schedule\": \"ONE_FWD_ONE_BWD\",\n",
        "    }),\n",
        ")"
      ],
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Name to mesh assignment:\n",
            "{'block_0': 'm0',\n",
            " 'block_1': 'm0',\n",
            " 'block_10': 'm1',\n",
            " 'block_11': 'm1',\n",
            " 'block_12': 'm2',\n",
            " 'block_13': 'm2',\n",
            " 'block_14': 'm2',\n",
            " 'block_15': 'm2',\n",
            " 'block_16': 'm2',\n",
            " 'block_17': 'm2',\n",
            " 'block_18': 'm3',\n",
            " 'block_19': 'm3',\n",
            " 'block_2': 'm0',\n",
            " 'block_20': 'm3',\n",
            " 'block_21': 'm3',\n",
            " 'block_22': 'm3',\n",
            " 'block_23': 'm3',\n",
            " 'block_3': 'm0',\n",
            " 'block_4': 'm0',\n",
            " 'block_5': 'm0',\n",
            " 'block_6': 'm1',\n",
            " 'block_7': 'm1',\n",
            " 'block_8': 'm1',\n",
            " 'block_9': 'm1'}\n"
          ]
        }
      ],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "1Kr73s7g5Tl2"
      },
      "cell_type": "code",
      "source": [
        "# Similar to before, except the stage axis is now used for pipelining,\n",
        "# so we don't shard on it.\n",
        "def get_sharding_for_pipeline_state(x):\n",
        "  if len(getattr(x, \"shape\", [])) \u003e 0:\n",
        "    return jax.sharding.NamedSharding(\n",
        "        mpmd_config.sharding_mesh,\n",
        "        jax.sharding.PartitionSpec(\"data\"),\n",
        "    )\n",
        "  else:\n",
        "    return jax.sharding.NamedSharding(\n",
        "        mpmd_config.sharding_mesh,\n",
        "        jax.sharding.PartitionSpec(),\n",
        "    )\n",
        "\n",
        "\n",
        "# Data parallel.\n",
        "in_shardings = (\n",
        "    jax.tree.map(get_sharding_for_pipeline_state, annotated_state),\n",
        "    jax.sharding.NamedSharding(\n",
        "        mpmd_config.sharding_mesh,\n",
        "        jax.sharding.PartitionSpec(\"data\"),\n",
        "    ),\n",
        "    jax.sharding.NamedSharding(\n",
        "        mpmd_config.sharding_mesh,\n",
        "        jax.sharding.PartitionSpec(\"data\"),\n",
        "    ),\n",
        ")"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "mKbfY4Gy8Joi"
      },
      "cell_type": "markdown",
      "source": [
        "#### Running the pipelined transformer\n",
        "\n",
        "Now, we execute the transformer.\n",
        "\n",
        "Note that we've not had to annotate other parts\n",
        "of our program, e.g. we've not had to annotate the optimizer, nor the loss.\n",
        "We've also not had to do anything with the gradient computations. These are\n",
        "handled by the compiler and merged into an appropriate program.\n",
        "\n",
        "Furthermore, we've not introduced any cross-mesh transfers explicitly. These\n",
        "are automatically created on the name-to-name boundaries, e.g. when going from\n",
        "\"layer{i}\" to \"layer{i+1}\", if they are assigned to different meshes, we create\n",
        "the cross-mesh transfer.\n",
        "\n",
        "We execute the transformer with various schedules, with the schedule applied at\n",
        "jit-time. This can also be manually orchestrated with `mpmd.jit`, but we've\n",
        "found the flexibility to be beneficial.\n",
        "\n",
        "Note in the profile below, that some of the blocks have been compiled to\n",
        "multiple programs. E.g. the backward computation of blocks 0..5 have programs\n",
        "p7, p10 and p14. This is because of how we've merged in the unannotated ops.\n",
        "The first backward computation p7 will have the gradient accumulators initialized, and the last one will have the param updates, which is why they\n",
        "are different."
      ]
    },
    {
      "metadata": {
        "executionInfo": {
          "elapsed": 24166,
          "status": "ok",
          "timestamp": 1750238587663,
          "user": {
            "displayName": "",
            "userId": ""
          },
          "user_tz": -60
        },
        "id": "nVOfvxpS_JGL",
        "outputId": "d217f4e6-dcc6-4b0c-ab30-1abd381631aa"
      },
      "cell_type": "code",
      "source": [
        "mpmd_jitted_train = mpmd.jit(\n",
        "    mpmd_train_step,\n",
        "    mpmd_config=mpmd_config,\n",
        "    in_shardings=in_shardings,\n",
        "    # Partitioning API is a work-in-progress. For now we've hardcoded the\n",
        "    # schedule, but in the future we'll expose fine-grained control as in the\n",
        "    # RFC.\n",
        "    partitioning_options=mpmd.make_partitioning_options({\n",
        "        \"mpmd_pipeline_schedule\": \"ONE_FWD_ONE_BWD\",\n",
        "    }),\n",
        ").lower(*annotated_placeholder_inputs)\n",
        "mpmd_compiled = mpmd_jitted_train.compile()\n",
        "\n",
        "# With MPMD, we need to be more careful with state, and make sure it's on the\n",
        "# right devices.\n",
        "pipelined_inputs = jax.device_put(\n",
        "    annotated_placeholder_inputs,\n",
        "    mpmd_jitted_train.function_named_shardings.input_specs,\n",
        ")\n",
        "\n",
        "print(\"Running program with schedule: ONE_FWD_ONE_BWD\")\n",
        "train_with_progress(mpmd_compiled, pipelined_inputs)"
      ],
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Running program with schedule: ONE_FWD_ONE_BWD\n",
            "Training loss after step 2: 7.992943286895752\n",
            "Final training loss: 7.976408004760742\n",
            "Training took: 11.30 seconds\n"
          ]
        }
      ],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "DMhln-EFRYdd"
      },
      "cell_type": "markdown",
      "source": [
        "Profile:\n",
        "\n",
        "![mpmd_1f1b_xprof](https://raw.githubusercontent.com/openxla/shardy/main/rfcs/images/2025-06-18-mpmd-rfc/mpmd_1f1b_xprof.png)"
      ]
    },
    {
      "metadata": {
        "id": "HvuDT8a9AYvz"
      },
      "cell_type": "code",
      "source": [
        "def run_xprof_with_schedule(schedule, assignment, stage_assignment=None):\n",
        "  print(f\"Running program with schedule: {schedule}\")\n",
        "\n",
        "  options = {\"mpmd_pipeline_schedule\": schedule}\n",
        "  mpmd_jitted_train = mpmd.jit(\n",
        "      mpmd_train_step,\n",
        "      mpmd_config=mpmd.make_config(\n",
        "          topology=topology,\n",
        "          name_to_mesh_assignment=assignment,\n",
        "          name_to_stage_assignment=stage_assignment,\n",
        "          partitioning_options=mpmd.make_partitioning_options(options),\n",
        "      ),\n",
        "      in_shardings=in_shardings,\n",
        "  ).lower(*annotated_placeholder_inputs)\n",
        "  mpmd_compiled = mpmd_jitted_train.compile()\n",
        "\n",
        "  pipelined_inputs = jax.device_put(\n",
        "      annotated_placeholder_inputs,\n",
        "      mpmd_jitted_train.function_named_shardings.input_specs,\n",
        "  )\n",
        "\n",
        "  train_with_progress(mpmd_compiled, pipelined_inputs)\n"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "executionInfo": {
          "elapsed": 19768,
          "status": "ok",
          "timestamp": 1750238607951,
          "user": {
            "displayName": "",
            "userId": ""
          },
          "user_tz": -60
        },
        "id": "X-iu5F5gBgts",
        "outputId": "39290776-aceb-4e97-ba41-0983c3650e74"
      },
      "cell_type": "code",
      "source": [
        "run_xprof_with_schedule(\"GPIPE\", basic_assignment)"
      ],
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Running program with schedule: GPIPE\n",
            "Training loss after step 2: 7.992943286895752\n",
            "Final training loss: 7.976408004760742\n",
            "Training took: 11.35 seconds\n"
          ]
        }
      ],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "Iwj8kBCdRlby"
      },
      "cell_type": "markdown",
      "source": [
        "Profile:\n",
        "\n",
        "![mpmd_gpipe_xprof](https://raw.githubusercontent.com/openxla/shardy/main/rfcs/images/2025-06-18-mpmd-rfc/mpmd_gpipe_xprof.png)"
      ]
    },
    {
      "metadata": {
        "executionInfo": {
          "elapsed": 14719,
          "status": "ok",
          "timestamp": 1750238622933,
          "user": {
            "displayName": "",
            "userId": ""
          },
          "user_tz": -60
        },
        "id": "YnSORTFqJy4w",
        "outputId": "a1275527-bf46-4485-8baf-3056f9fe3a2f"
      },
      "cell_type": "code",
      "source": [
        "circular_assignment = {}\n",
        "stage_assignment = {}\n",
        "for i in range(NUM_LAYERS):\n",
        "  circular_assignment[f\"block_{i}\"] = f\"m{i % len(topology)}\"\n",
        "  stage_assignment[f\"block_{i}\"] = i // 2\n",
        "\n",
        "run_xprof_with_schedule(\"CIRCULAR\", circular_assignment, stage_assignment)"
      ],
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Running program with schedule: CIRCULAR\n",
            "Training loss after step 2: 7.992943286895752\n",
            "Final training loss: 7.976408004760742\n",
            "Training took: 7.46 seconds\n"
          ]
        }
      ],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "kho_NvQ4Rqiu"
      },
      "cell_type": "markdown",
      "source": [
        "Profile:\n",
        "\n",
        "![mpmd_circular_xprof](https://raw.githubusercontent.com/openxla/shardy/main/rfcs/images/2025-06-18-mpmd-rfc/mpmd_circular_xprof.png)"
      ]
    },
    {
      "metadata": {
        "id": "vdI-y3O0GHPB"
      },
      "cell_type": "code",
      "source": [
        "# Print the main func body of the original MPMD program (1F1B)\n",
        "mlir_module = mpmd_jitted_train.as_text(\"mpmd\")\n",
        "truncated_mlir_module = mlir_module.split(\"func.func\")[1]\n",
        "print(\"func.func\" + truncated_mlir_module)"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "hbi6Y0S3Q8Xt"
      },
      "cell_type": "markdown",
      "source": [
        "Note: Printing the output of above in a text cell with some values truncated, \n",
        "to make it more readable.\n",
        "\n",
        "```\n",
        "func.func public @main(%arg0: !mpmd.mesh_tensor\u003c\"m0\", tensor\u003ci32\u003e, sharding=\u003c@mesh, []\u003e\u003e, \u003c...truncated...\u003e, %arg1158: !mpmd.mesh_tensor\u003c\"m3\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) \n",
        "    -\u003e (!mpmd.mesh_tensor\u003c\"m0\", tensor\u003ci32\u003e, sharding=\u003c@mesh, []\u003e\u003e {jax.result_info = \"result[0].step\"}, \u003c...truncated...\u003e, !mpmd.mesh_tensor\u003c\"m3\", tensor\u003cf32\u003e, sharding=\u003c@mesh, []\u003e\u003e {jax.result_info = \"result[1]\"}) \n",
        "    attributes {topology = #mpmd.topology\u003c\u003c\"m0\" : \u003c\"stage\"=1, \"data\"=2\u003e\u003e, \u003c\"m1\" : \u003c\"stage\"=1, \"data\"=2\u003e\u003e, \u003c\"m2\" : \u003c\"stage\"=1, \"data\"=2\u003e\u003e, \u003c\"m3\" : \u003c\"stage\"=1, \"data\"=2\u003e\u003e\u003e} {\n",
        "\n",
        "    %0 = mpmd.transfer %arg385 : (!mpmd.mesh_tensor\u003c\"m0\", tensor\u003ci32\u003e, sharding=\u003c@mesh, []\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m2\", tensor\u003ci32\u003e, sharding=\u003c@mesh, []\u003e\u003e\n",
        "    %1 = mpmd.transfer %arg385 : (!mpmd.mesh_tensor\u003c\"m0\", tensor\u003ci32\u003e, sharding=\u003c@mesh, []\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m3\", tensor\u003ci32\u003e, sharding=\u003c@mesh, []\u003e\u003e\n",
        "    %2 = mpmd.transfer %arg385 : (!mpmd.mesh_tensor\u003c\"m0\", tensor\u003ci32\u003e, sharding=\u003c@mesh, []\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m1\", tensor\u003ci32\u003e, sharding=\u003c@mesh, []\u003e\u003e\n",
        "\n",
        "    %3:6 = mpmd.fragment_call\u003cmesh=\"m0\", origin=[\"block_0\", \"block_1\", \"block_2\", \"block_3\", \"block_4\", \"block_5\"]\u003e @\"p0_block_0:5_fwd_calls0to3.mpmd_train_step\"(%arg1154, ..., %arg320) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m0\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m0\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %4 = mpmd.transfer %3#5 : (!mpmd.mesh_tensor\u003c\"m0\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "    %5:6 = mpmd.fragment_call\u003cmesh=\"m1\", origin=[\"block_6\", \"block_7\", \"block_8\", \"block_9\", \"block_10\", \"block_11\"]\u003e @\"p1_block_6:11_fwd_calls0to3.mpmd_train_step\"(%4, ..., %arg64) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %6 = mpmd.transfer %5#5 : (!mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "    %7:6 = mpmd.fragment_call\u003cmesh=\"m2\", origin=[\"block_12\", \"block_13\", \"block_14\", \"block_15\", \"block_16\", \"block_17\"]\u003e @\"p2_block_12:17_fwd_calls0to3.mpmd_train_step\"(%6, ..., %arg160) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %8 = mpmd.transfer %7#5 : (!mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m3\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "    %9:98 = mpmd.fragment_call\u003cmesh=\"m3\", origin=[\"block_18\", \"block_19\", \"block_20\", \"block_21\", \"block_22\", \"block_23\", \"block_23\"(1), \"block_22\"(1), \"block_21\"(1), \"block_20\"(1), \"block_19\"(1), \"block_18\"(1)]\u003e @\"p3_block_18:23_fwd_bwd_call0.mpmd_train_step\"(%8, ..., %arg1158) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m3\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m3\", tensor\u003c1024x8x2xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %10 = mpmd.transfer %9#81 : (!mpmd.mesh_tensor\u003c\"m3\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "\n",
        "    %11:6 = mpmd.fragment_call\u003cmesh=\"m0\", origin=[\"block_0\", \"block_1\", \"block_2\", \"block_3\", \"block_4\", \"block_5\"]\u003e @\"p0_block_0:5_fwd_calls0to3.mpmd_train_step\"(%arg1155, ..., %arg320) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m0\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m0\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %12 = mpmd.transfer %11#5 : (!mpmd.mesh_tensor\u003c\"m0\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "    %13:6 = mpmd.fragment_call\u003cmesh=\"m1\", origin=[\"block_6\", \"block_7\", \"block_8\", \"block_9\", \"block_10\", \"block_11\"]\u003e @\"p1_block_6:11_fwd_calls0to3.mpmd_train_step\"(%12, ..., %arg64) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %14 = mpmd.transfer %13#5 : (!mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "    %15:6 = mpmd.fragment_call\u003cmesh=\"m2\", origin=[\"block_12\", \"block_13\", \"block_14\", \"block_15\", \"block_16\", \"block_17\"]\u003e @\"p2_block_12:17_fwd_calls0to3.mpmd_train_step\"(%14, ..., %arg160) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %16 = mpmd.transfer %15#5 : (!mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m3\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "    %17:98 = mpmd.fragment_call\u003cmesh=\"m3\", origin=[\"block_18\", \"block_19\", \"block_20\", \"block_21\", \"block_22\", \"block_23\", \"block_23\"(1), \"block_22\"(1), \"block_21\"(1), \"block_20\"(1), \"block_19\"(1), \"block_18\"(1)]\u003e @\"p4_block_18:23_fwd_bwd_calls1to2.mpmd_train_step\"(%16, ..., %9#97) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m3\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m3\", tensor\u003c1024x8x2xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %18 = mpmd.transfer %17#81 : (!mpmd.mesh_tensor\u003c\"m3\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        " \n",
        "    %19:6 = mpmd.fragment_call\u003cmesh=\"m0\", origin=[\"block_0\", \"block_1\", \"block_2\", \"block_3\", \"block_4\", \"block_5\"]\u003e @\"p0_block_0:5_fwd_calls0to3.mpmd_train_step\"(%arg1156, ..., %arg320) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m0\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m0\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %20 = mpmd.transfer %19#5 : (!mpmd.mesh_tensor\u003c\"m0\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "    %21:6 = mpmd.fragment_call\u003cmesh=\"m1\", origin=[\"block_6\", \"block_7\", \"block_8\", \"block_9\", \"block_10\", \"block_11\"]\u003e @\"p1_block_6:11_fwd_calls0to3.mpmd_train_step\"(%20, ..., %arg64) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %22 = mpmd.transfer %21#5 : (!mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        " \n",
        "    %23:6 = mpmd.fragment_call\u003cmesh=\"m0\", origin=[\"block_0\", \"block_1\", \"block_2\", \"block_3\", \"block_4\", \"block_5\"]\u003e @\"p0_block_0:5_fwd_calls0to3.mpmd_train_step\"(%arg1157, ..., %arg320) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m0\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m0\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %24 = mpmd.transfer %23#5 : (!mpmd.mesh_tensor\u003c\"m0\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        " \n",
        "    %25:97 = mpmd.fragment_call\u003cmesh=\"m2\", origin=[\"block_17\"(1), \"block_16\"(1), \"block_15\"(1), \"block_14\"(1), \"block_13\"(1), \"block_12\"(1)]\u003e @\"p5_block_17:12_bwd_call0.mpmd_train_step\"(%arg145, ..., %6) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m2\", tensor\u003c4096xf32\u003e, sharding=\u003c@mesh, [{\"data\"}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m2\", tensor\u003c1024x8x2xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %26 = mpmd.transfer %25#80 : (!mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "    %27:97 = mpmd.fragment_call\u003cmesh=\"m1\", origin=[\"block_11\"(1), \"block_10\"(1), \"block_9\"(1), \"block_8\"(1), \"block_7\"(1), \"block_6\"(1)]\u003e @\"p6_block_11:6_bwd_call0.mpmd_train_step\"(%arg49, ..., %4) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m1\", tensor\u003c4096xf32\u003e, sharding=\u003c@mesh, [{\"data\"}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m1\", tensor\u003c1024x8x2xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %28 = mpmd.transfer %27#80 : (!mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m0\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "    %29:96 = mpmd.fragment_call\u003cmesh=\"m0\", origin=[\"block_5\"(1), \"block_4\"(1), \"block_3\"(1), \"block_2\"(1), \"block_1\"(1), \"block_0\"(1)]\u003e @\"p7_block_5:0_bwd_call0.mpmd_train_step\"(%arg305, ..., %arg1154) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m0\", tensor\u003c4096xf32\u003e, sharding=\u003c@mesh, [{\"data\"}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m0\", tensor\u003c1024x8x2xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %30:6 = mpmd.fragment_call\u003cmesh=\"m2\", origin=[\"block_12\", \"block_13\", \"block_14\", \"block_15\", \"block_16\", \"block_17\"]\u003e @\"p2_block_12:17_fwd_calls0to3.mpmd_train_step\"(%22, ..., %arg160) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %31 = mpmd.transfer %30#5 : (!mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m3\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        " \n",
        "    %32:98 = mpmd.fragment_call\u003cmesh=\"m3\", origin=[\"block_18\", \"block_19\", \"block_20\", \"block_21\", \"block_22\", \"block_23\", \"block_23\"(1), \"block_22\"(1), \"block_21\"(1), \"block_20\"(1), \"block_19\"(1), \"block_18\"(1)]\u003e @\"p4_block_18:23_fwd_bwd_calls1to2.mpmd_train_step\"(%31, ..., %17#97) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m3\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m3\", tensor\u003c1024x8x2xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %33 = mpmd.transfer %32#81 : (!mpmd.mesh_tensor\u003c\"m3\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "    %34:6 = mpmd.fragment_call\u003cmesh=\"m1\", origin=[\"block_6\", \"block_7\", \"block_8\", \"block_9\", \"block_10\", \"block_11\"]\u003e @\"p1_block_6:11_fwd_calls0to3.mpmd_train_step\"(%24, ..., %arg64) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %35 = mpmd.transfer %34#5 : (!mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        " \n",
        "    %36:97 = mpmd.fragment_call\u003cmesh=\"m2\", origin=[\"block_17\"(1), \"block_16\"(1), \"block_15\"(1), \"block_14\"(1), \"block_13\"(1), \"block_12\"(1)]\u003e @\"p8_block_17:12_bwd_calls1to2.mpmd_train_step\"(%arg145, ..., %25#96) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m2\", tensor\u003c4096xf32\u003e, sharding=\u003c@mesh, [{\"data\"}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m2\", tensor\u003c1024x8x2xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %37 = mpmd.transfer %36#80 : (!mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "    %38:97 = mpmd.fragment_call\u003cmesh=\"m1\", origin=[\"block_11\"(1), \"block_10\"(1), \"block_9\"(1), \"block_8\"(1), \"block_7\"(1), \"block_6\"(1)]\u003e @\"p9_block_11:6_bwd_calls1to2.mpmd_train_step\"(%arg49, ..., %27#96) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m1\", tensor\u003c4096xf32\u003e, sharding=\u003c@mesh, [{\"data\"}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m1\", tensor\u003c1024x8x2xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %39 = mpmd.transfer %38#80 : (!mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m0\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "    %40:96 = mpmd.fragment_call\u003cmesh=\"m0\", origin=[\"block_5\"(1), \"block_4\"(1), \"block_3\"(1), \"block_2\"(1), \"block_1\"(1), \"block_0\"(1)]\u003e @\"p10_block_5:0_bwd_calls1to2.mpmd_train_step\"(%arg305, ..., %29#95) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m0\", tensor\u003c4096xf32\u003e, sharding=\u003c@mesh, [{\"data\"}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m0\", tensor\u003c1024x8x2xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        " \n",
        "    %41:6 = mpmd.fragment_call\u003cmesh=\"m2\", origin=[\"block_12\", \"block_13\", \"block_14\", \"block_15\", \"block_16\", \"block_17\"]\u003e @\"p2_block_12:17_fwd_calls0to3.mpmd_train_step\"(%35, ..., %arg160) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %42 = mpmd.transfer %41#5 : (!mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m3\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "    %43:290 = mpmd.fragment_call\u003cmesh=\"m3\", origin=[\"block_18\", \"block_19\", \"block_20\", \"block_21\", \"block_22\", \"block_23\", \"block_23\"(1), \"block_22\"(1), \"block_21\"(1), \"block_20\"(1), \"block_19\"(1), \"block_18\"(1)]\u003e @\"p11_block_18:23_fwd_bwd_call3.mpmd_train_step\"(%42, ..., %1) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m3\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m3\", tensor\u003c1024x8x2xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %44 = mpmd.transfer %43#1 : (!mpmd.mesh_tensor\u003c\"m3\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        " \n",
        "    %45:97 = mpmd.fragment_call\u003cmesh=\"m2\", origin=[\"block_17\"(1), \"block_16\"(1), \"block_15\"(1), \"block_14\"(1), \"block_13\"(1), \"block_12\"(1)]\u003e @\"p8_block_17:12_bwd_calls1to2.mpmd_train_step\"(%arg145, ..., %36#96) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m2\", tensor\u003c4096xf32\u003e, sharding=\u003c@mesh, [{\"data\"}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m2\", tensor\u003c1024x8x2xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %46 = mpmd.transfer %45#80 : (!mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "    %47:97 = mpmd.fragment_call\u003cmesh=\"m1\", origin=[\"block_11\"(1), \"block_10\"(1), \"block_9\"(1), \"block_8\"(1), \"block_7\"(1), \"block_6\"(1)]\u003e @\"p9_block_11:6_bwd_calls1to2.mpmd_train_step\"(%arg49, ..., %38#96) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m1\", tensor\u003c4096xf32\u003e, sharding=\u003c@mesh, [{\"data\"}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m1\", tensor\u003c1024x8x2xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %48 = mpmd.transfer %47#80 : (!mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m0\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "    %49:96 = mpmd.fragment_call\u003cmesh=\"m0\", origin=[\"block_5\"(1), \"block_4\"(1), \"block_3\"(1), \"block_2\"(1), \"block_1\"(1), \"block_0\"(1)]\u003e @\"p10_block_5:0_bwd_calls1to2.mpmd_train_step\"(%arg305, ..., %40#95) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m0\", tensor\u003c4096xf32\u003e, sharding=\u003c@mesh, [{\"data\"}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m0\", tensor\u003c1024x8x2xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        " \n",
        "    %50:289 = mpmd.fragment_call\u003cmesh=\"m2\", origin=[\"block_17\"(1), \"block_16\"(1), \"block_15\"(1), \"block_14\"(1), \"block_13\"(1), \"block_12\"(1)]\u003e @\"p12_block_17:12_bwd_call3.mpmd_train_step\"(%arg145, ..., %arg151) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m2\", tensor\u003c4096xf32\u003e, sharding=\u003c@mesh, [{\"data\"}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m2\", tensor\u003c1024x8x2xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %51 = mpmd.transfer %50#0 : (!mpmd.mesh_tensor\u003c\"m2\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "    %52:289 = mpmd.fragment_call\u003cmesh=\"m1\", origin=[\"block_11\"(1), \"block_10\"(1), \"block_9\"(1), \"block_8\"(1), \"block_7\"(1), \"block_6\"(1)]\u003e @\"p13_block_11:6_bwd_call3.mpmd_train_step\"(%arg49, ..., %arg375) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m1\", tensor\u003c4096xf32\u003e, sharding=\u003c@mesh, [{\"data\"}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m1\", tensor\u003c1024x8x2xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e)\n",
        "    %53 = mpmd.transfer %52#0 : (!mpmd.mesh_tensor\u003c\"m1\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e) -\u003e !mpmd.mesh_tensor\u003c\"m0\", tensor\u003c2x8192x1024xf32\u003e, sharding=\u003c@mesh, [{\"data\"}, {}, {}]\u003e\u003e\n",
        "    %54:290 = mpmd.fragment_call\u003cmesh=\"m0\", origin=[\"block_5\"(1), \"block_4\"(1), \"block_3\"(1), \"block_2\"(1), \"block_1\"(1), \"block_0\"(1)]\u003e @\"p14_block_5:0_bwd_call3.mpmd_train_step\"(%arg305, ..., %arg0) {mpmd.is_sdy_partitioned} : (!mpmd.mesh_tensor\u003c\"m0\", tensor\u003c4096xf32\u003e, sharding=\u003c@mesh, [{\"data\"}]\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m0\", tensor\u003ci32\u003e, sharding=\u003c@mesh, []\u003e\u003e)\n",
        "\n",
        "    return %54#289, ..., %43#0 : !mpmd.mesh_tensor\u003c\"m0\", tensor\u003ci32\u003e, sharding=\u003c@mesh, []\u003e\u003e, ..., !mpmd.mesh_tensor\u003c\"m3\", tensor\u003cf32\u003e, sharding=\u003c@mesh, []\u003e\u003e\n",
        "}\n",
        "```"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
