{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "header",
   "metadata": {},
   "source": [
    "# Kubeflow Trainer: Local Training\n",
    "\n",
    "This notebook demonstrates how to run single-node training using the **Local Process Backend**.\n",
    "\n",
    "## Local Process Backend\n",
    "\n",
    "- **Container Runtime**: None (native Python subprocess)\n",
    "- **Use Case**: Quick testing, debugging, rapid iteration\n",
    "- **Prerequisites**: Python 3.9+ only\n",
    "\n",
    "This example trains a CNN on the classic [MNIST](http://yann.lecun.com/exdb/mnist/) handwritten digit dataset using PyTorch."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "install",
   "metadata": {},
   "source": [
    "## Install the Kubeflow SDK\n",
    "\n",
    "You need to install the Kubeflow SDK to interact with Kubeflow Trainer APIs:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "pip-install",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Uncomment to install\n",
    "# %pip install -U kubeflow"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "training-function",
   "metadata": {},
   "source": [
    "## Define the Training Function\n",
    "\n",
    "The first step is to create a function to train CNN model using MNIST data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "train-function",
   "metadata": {},
   "outputs": [],
   "source": [
    "def train_mnist():\n",
    "    import torch\n",
    "    import torch.nn.functional as F\n",
    "    from torch import nn, optim\n",
    "    from torch.utils.data import DataLoader\n",
    "    from torchvision import datasets, transforms\n",
    "\n",
    "    # Define the PyTorch CNN model to be trained\n",
    "    class Net(nn.Module):\n",
    "        def __init__(self):\n",
    "            super(Net, self).__init__()\n",
    "            self.conv1 = nn.Conv2d(1, 20, 5, 1)\n",
    "            self.conv2 = nn.Conv2d(20, 50, 5, 1)\n",
    "            self.fc1 = nn.Linear(4 * 4 * 50, 500)\n",
    "            self.fc2 = nn.Linear(500, 10)\n",
    "\n",
    "        def forward(self, x):\n",
    "            x = F.relu(self.conv1(x))\n",
    "            x = F.max_pool2d(x, 2, 2)\n",
    "            x = F.relu(self.conv2(x))\n",
    "            x = F.max_pool2d(x, 2, 2)\n",
    "            x = x.view(-1, 4 * 4 * 50)\n",
    "            x = F.relu(self.fc1(x))\n",
    "            x = self.fc2(x)\n",
    "            return F.log_softmax(x, dim=1)\n",
    "\n",
    "    # Create the model\n",
    "    if torch.cuda.is_available():\n",
    "        device = torch.device(\"cuda\")\n",
    "    elif torch.backends.mps.is_available():\n",
    "        device = torch.device(\"mps\")\n",
    "    else:\n",
    "        device = torch.device(\"cpu\")\n",
    "    model = Net().to(device)\n",
    "    \n",
    "    # Load MNIST dataset\n",
    "    dataset = datasets.MNIST(\n",
    "        './data',\n",
    "        train=True,\n",
    "        download=True,\n",
    "        transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])\n",
    "    )\n",
    "    train_loader = DataLoader(dataset, batch_size=64, shuffle=True)\n",
    "    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)\n",
    "    \n",
    "    for epoch in range(1, 3):\n",
    "        model.train()\n",
    "        \n",
    "        # Iterate over mini-batches from the training set\n",
    "        for batch_idx, (data, target) in enumerate(train_loader):\n",
    "            # Forward pass\n",
    "            data, target = data.to(device), target.to(device)\n",
    "            outputs = model(data)\n",
    "            loss = F.nll_loss(outputs, target)\n",
    "            # Backward pass\n",
    "            optimizer.zero_grad()\n",
    "            loss.backward()\n",
    "            optimizer.step()\n",
    "            \n",
    "            if batch_idx % 100 == 0:\n",
    "                print(\n",
    "                    \"Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}\".format(\n",
    "                        epoch,\n",
    "                        batch_idx * len(data),\n",
    "                        len(train_loader.dataset),\n",
    "                        100.0 * batch_idx / len(train_loader),\n",
    "                        loss.item(),\n",
    "                    )\n",
    "                )\n",
    "\n",
    "    torch.save(model.state_dict(), \"mnist_cnn.pt\")\n",
    "    print(\"Training is finished\")"
   ]
  },
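  {
   "cell_type": "markdown",
   "id": "evaluate-note",
   "metadata": {},
   "source": [
    "Note: `train_mnist()` saves its weights to `mnist_cnn.pt` in the job's working directory. With the Local Process Backend the job runs inside a temporary directory (visible in the job logs), so you may need to adjust the checkpoint path before loading it from the notebook. As a minimal sketch, evaluating the checkpoint on the MNIST test split could look like this (assumes `torch` and `torchvision` are installed in the notebook kernel and that the checkpoint path has been adjusted):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "evaluate-checkpoint",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn.functional as F\n",
    "from torch import nn\n",
    "from torch.utils.data import DataLoader\n",
    "from torchvision import datasets, transforms\n",
    "\n",
    "# Same architecture as in train_mnist(), so the state_dict keys match.\n",
    "class Net(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(Net, self).__init__()\n",
    "        self.conv1 = nn.Conv2d(1, 20, 5, 1)\n",
    "        self.conv2 = nn.Conv2d(20, 50, 5, 1)\n",
    "        self.fc1 = nn.Linear(4 * 4 * 50, 500)\n",
    "        self.fc2 = nn.Linear(500, 10)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = F.relu(self.conv1(x))\n",
    "        x = F.max_pool2d(x, 2, 2)\n",
    "        x = F.relu(self.conv2(x))\n",
    "        x = F.max_pool2d(x, 2, 2)\n",
    "        x = x.view(-1, 4 * 4 * 50)\n",
    "        x = F.relu(self.fc1(x))\n",
    "        return F.log_softmax(self.fc2(x), dim=1)\n",
    "\n",
    "model = Net()\n",
    "model.load_state_dict(torch.load(\"mnist_cnn.pt\", map_location=\"cpu\"))  # adjust path if needed\n",
    "model.eval()\n",
    "\n",
    "# Evaluate accuracy on the held-out MNIST test split\n",
    "test_set = datasets.MNIST(\n",
    "    './data',\n",
    "    train=False,\n",
    "    download=True,\n",
    "    transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])\n",
    ")\n",
    "correct = 0\n",
    "with torch.no_grad():\n",
    "    for data, target in DataLoader(test_set, batch_size=256):\n",
    "        correct += (model(data).argmax(dim=1) == target).sum().item()\n",
    "print(\"Test accuracy: {:.2%}\".format(correct / len(test_set)))"
   ]
  },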
  {
   "cell_type": "markdown",
   "id": "configure-backend",
   "metadata": {},
   "source": [
    "## Configure Local Process Backend\n",
    "\n",
    "Initialize the Local Process Backend configuration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "backend-config",
   "metadata": {},
   "outputs": [],
   "source": [
    "from kubeflow.trainer import TrainerClient, LocalProcessBackendConfig\n",
    "\n",
    "# Configure Local Process Backend\n",
    "backend_config = LocalProcessBackendConfig(\n",
    "    cleanup_venv=True  # Auto-cleanup virtual environments after job completes\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "init-client",
   "metadata": {},
   "source": [
    "## Initialize Client\n",
    "\n",
    "Initialize the TrainerClient with the Local Process Backend:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "create-client",
   "metadata": {},
   "outputs": [],
   "source": [
    "client = TrainerClient(backend_config=backend_config)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "list-runtimes",
   "metadata": {},
   "source": [
    "## List the Training Runtimes\n",
    "\n",
    "You can get the list of available Training Runtimes to start your TrainJob."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "get-runtimes",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Runtime(name='torch-distributed', trainer=RuntimeTrainer(trainer_type=<TrainerType.CUSTOM_TRAINER: 'CustomTrainer'>, framework='torch', image='local', num_nodes=1, device='Unknown', device_count='Unknown'), pretrained_model=None)\n"
     ]
    }
   ],
   "source": [
    "for runtime in client.list_runtimes():\n",
    "    print(runtime)\n",
    "    if runtime.name == \"torch-distributed\":\n",
    "        torch_runtime = runtime"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "start-training",
   "metadata": {},
   "source": [
    "## Run the TrainJob\n",
    "\n",
    "Submit the training job to the Local Process Backend:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "train-job",
   "metadata": {},
   "outputs": [],
   "source": [
    "from kubeflow.trainer import CustomTrainer\n",
    "\n",
    "job_name = client.train(\n",
    "    trainer=CustomTrainer(\n",
    "        func=train_mnist,\n",
    "        packages_to_install=[\"pip-system-certs\", \"torch\", \"torchvision\"],\n",
    "    ),\n",
    "    runtime=torch_runtime,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "check-status",
   "metadata": {},
   "source": [
    "## Check the TrainJob Status\n",
    "\n",
    "You can check the status of the TrainJob that's created."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "job-status",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Job: u61c13e8364f, Status: Running\n"
     ]
    }
   ],
   "source": [
    "job = client.get_job(job_name)\n",
    "print(\"Job: {}, Status: {}\".format(job.name, job.status))"
   ]
  },
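  {
   "cell_type": "markdown",
   "id": "wait-note",
   "metadata": {},
   "source": [
    "If you prefer to block until the TrainJob reaches a terminal state, a simple polling loop built on the same `get_job()` API works. This is a minimal sketch; the exact terminal status names (e.g. `Complete`, `Failed`) can vary between SDK versions, so adjust the set below for yours:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "poll-status",
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "# Poll with get_job() until the TrainJob leaves the Running state.\n",
    "# The terminal status names below are assumptions; check your SDK version.\n",
    "terminal_statuses = {\"Complete\", \"Failed\", \"Succeeded\"}\n",
    "while True:\n",
    "    job = client.get_job(job_name)\n",
    "    print(\"Job: {}, Status: {}\".format(job.name, job.status))\n",
    "    if str(job.status) in terminal_statuses:\n",
    "        break\n",
    "    time.sleep(5)"
   ]
  },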
  {
   "cell_type": "markdown",
   "id": "stream-logs",
   "metadata": {},
   "source": [
    "## Watch the TrainJob Logs\n",
    "\n",
    "We can use the `get_job_logs()` API to get the TrainJob logs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "logs",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Operating inside /var/folders/tx/51dj585d29d554dgxchlcnvh0000gn/T/u61c13e8364f2amq0j64\n",
      "Looking in links: /tmp/tmpq35odxzx\n",
      "Processing /tmp/tmpq35odxzx/setuptools-65.5.0-py3-none-any.whl\n",
      "Processing /tmp/tmpq35odxzx/pip-24.0-py3-none-any.whl\n",
      "Installing collected packages: setuptools, pip\n",
      "Successfully installed pip-24.0 setuptools-65.5.0\n",
      "Collecting pip-system-certs\n",
      "  Using cached pip_system_certs-5.3-py3-none-any.whl.metadata (3.9 kB)\n",
      "Collecting torch\n",
      "  Downloading torch-2.9.1-cp311-none-macosx_11_0_arm64.whl.metadata (30 kB)\n",
      "Collecting torchvision\n",
      "  Downloading torchvision-0.24.1-cp311-cp311-macosx_11_0_arm64.whl.metadata (5.9 kB)\n",
      "Collecting pip>=24.2 (from pip-system-certs)\n",
      "  Using cached pip-25.3-py3-none-any.whl.metadata (4.7 kB)\n",
      "Collecting filelock (from torch)\n",
      "  Using cached filelock-3.20.0-py3-none-any.whl.metadata (2.1 kB)\n",
      "Collecting typing-extensions>=4.10.0 (from torch)\n",
      "  Using cached typing_extensions-4.15.0-py3-none-any.whl.metadata (3.3 kB)\n",
      "Collecting sympy>=1.13.3 (from torch)\n",
      "  Using cached sympy-1.14.0-py3-none-any.whl.metadata (12 kB)\n",
      "Collecting networkx>=2.5.1 (from torch)\n",
      "  Downloading networkx-3.6-py3-none-any.whl.metadata (6.8 kB)\n",
      "Collecting jinja2 (from torch)\n",
      "  Using cached jinja2-3.1.6-py3-none-any.whl.metadata (2.9 kB)\n",
      "Collecting fsspec>=0.8.5 (from torch)\n",
      "  Using cached fsspec-2025.10.0-py3-none-any.whl.metadata (10 kB)\n",
      "Collecting numpy (from torchvision)\n",
      "  Downloading numpy-2.3.5-cp311-cp311-macosx_14_0_arm64.whl.metadata (62 kB)\n",
      "     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.1/62.1 kB 7.8 MB/s eta 0:00:00\n",
      "Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)\n",
      "  Using cached pillow-12.0.0-cp311-cp311-macosx_11_0_arm64.whl.metadata (8.8 kB)\n",
      "Collecting mpmath<1.4,>=1.1.0 (from sympy>=1.13.3->torch)\n",
      "  Using cached mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB)\n",
      "Collecting MarkupSafe>=2.0 (from jinja2->torch)\n",
      "  Using cached markupsafe-3.0.3-cp311-cp311-macosx_11_0_arm64.whl.metadata (2.7 kB)\n",
      "Using cached pip_system_certs-5.3-py3-none-any.whl (6.9 kB)\n",
      "Downloading torch-2.9.1-cp311-none-macosx_11_0_arm64.whl (74.5 MB)\n",
      "   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 74.5/74.5 MB 45.7 MB/s eta 0:00:00\n",
      "Downloading torchvision-0.24.1-cp311-cp311-macosx_11_0_arm64.whl (1.9 MB)\n",
      "   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.9/1.9 MB 38.5 MB/s eta 0:00:00\n",
      "Using cached fsspec-2025.10.0-py3-none-any.whl (200 kB)\n",
      "Downloading networkx-3.6-py3-none-any.whl (2.1 MB)\n",
      "   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 42.0 MB/s eta 0:00:00\n",
      "Using cached pillow-12.0.0-cp311-cp311-macosx_11_0_arm64.whl (4.7 MB)\n",
      "Using cached pip-25.3-py3-none-any.whl (1.8 MB)\n",
      "Using cached sympy-1.14.0-py3-none-any.whl (6.3 MB)\n",
      "Using cached typing_extensions-4.15.0-py3-none-any.whl (44 kB)\n",
      "Using cached filelock-3.20.0-py3-none-any.whl (16 kB)\n",
      "Using cached jinja2-3.1.6-py3-none-any.whl (134 kB)\n",
      "Downloading numpy-2.3.5-cp311-cp311-macosx_14_0_arm64.whl (5.4 MB)\n",
      "   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.4/5.4 MB 50.0 MB/s eta 0:00:00\n",
      "Using cached markupsafe-3.0.3-cp311-cp311-macosx_11_0_arm64.whl (12 kB)\n",
      "Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)\n",
      "Installing collected packages: mpmath, typing-extensions, sympy, pip, pillow, numpy, networkx, MarkupSafe, fsspec, filelock, pip-system-certs, jinja2, torch, torchvision\n",
      "  Attempting uninstall: pip\n",
      "    Found existing installation: pip 24.0\n",
      "    Uninstalling pip-24.0:\n",
      "      Successfully uninstalled pip-24.0\n",
      "Successfully installed MarkupSafe-3.0.3 filelock-3.20.0 fsspec-2025.10.0 jinja2-3.1.6 mpmath-1.3.0 networkx-3.6 numpy-2.3.5 pillow-12.0.0 pip-25.3 pip-system-certs-5.3 sympy-1.14.0 torch-2.9.1 torchvision-0.24.1 typing-extensions-4.15.0\n",
      "W1126 15:02:00.438000 97773 torch/distributed/elastic/multiprocessing/redirects.py:29] NOTE: Redirects are currently not supported in Windows or MacOs.\n",
      "\n",
      "0.3%\n",
      "0.7%\n",
      "1.0%\n",
      "1.3%\n",
      "1.7%\n",
      "2.0%\n",
      "2.3%\n",
      "2.6%\n",
      "3.0%\n",
      "3.3%\n",
      "3.6%\n",
      "4.0%\n",
      "4.3%\n",
      "4.6%\n",
      "5.0%\n",
      "5.3%\n",
      "5.6%\n",
      "6.0%\n",
      "6.3%\n",
      "6.6%\n",
      "6.9%\n",
      "7.3%\n",
      "7.6%\n",
      "7.9%\n",
      "8.3%\n",
      "8.6%\n",
      "8.9%\n",
      "9.3%\n",
      "9.6%\n",
      "9.9%\n",
      "10.2%\n",
      "10.6%\n",
      "10.9%\n",
      "11.2%\n",
      "11.6%\n",
      "11.9%\n",
      "12.2%\n",
      "12.6%\n",
      "12.9%\n",
      "13.2%\n",
      "13.6%\n",
      "13.9%\n",
      "14.2%\n",
      "14.5%\n",
      "14.9%\n",
      "15.2%\n",
      "15.5%\n",
      "15.9%\n",
      "16.2%\n",
      "16.5%\n",
      "16.9%\n",
      "17.2%\n",
      "17.5%\n",
      "17.9%\n",
      "18.2%\n",
      "18.5%\n",
      "18.8%\n",
      "19.2%\n",
      "19.5%\n",
      "19.8%\n",
      "20.2%\n",
      "20.5%\n",
      "20.8%\n",
      "21.2%\n",
      "21.5%\n",
      "21.8%\n",
      "22.1%\n",
      "22.5%\n",
      "22.8%\n",
      "23.1%\n",
      "23.5%\n",
      "23.8%\n",
      "24.1%\n",
      "24.5%\n",
      "24.8%\n",
      "25.1%\n",
      "25.5%\n",
      "25.8%\n",
      "26.1%\n",
      "26.4%\n",
      "26.8%\n",
      "27.1%\n",
      "27.4%\n",
      "27.8%\n",
      "28.1%\n",
      "28.4%\n",
      "28.8%\n",
      "29.1%\n",
      "29.4%\n",
      "29.8%\n",
      "30.1%\n",
      "30.4%\n",
      "30.7%\n",
      "31.1%\n",
      "31.4%\n",
      "31.7%\n",
      "32.1%\n",
      "32.4%\n",
      "32.7%\n",
      "33.1%\n",
      "33.4%\n",
      "33.7%\n",
      "34.0%\n",
      "34.4%\n",
      "34.7%\n",
      "35.0%\n",
      "35.4%\n",
      "35.7%\n",
      "36.0%\n",
      "36.4%\n",
      "36.7%\n",
      "37.0%\n",
      "37.4%\n",
      "37.7%\n",
      "38.0%\n",
      "38.3%\n",
      "38.7%\n",
      "39.0%\n",
      "39.3%\n",
      "39.7%\n",
      "40.0%\n",
      "40.3%\n",
      "40.7%\n",
      "41.0%\n",
      "41.3%\n",
      "41.7%\n",
      "42.0%\n",
      "42.3%\n",
      "42.6%\n",
      "43.0%\n",
      "43.3%\n",
      "43.6%\n",
      "44.0%\n",
      "44.3%\n",
      "44.6%\n",
      "45.0%\n",
      "45.3%\n",
      "45.6%\n",
      "45.9%\n",
      "46.3%\n",
      "46.6%\n",
      "46.9%\n",
      "47.3%\n",
      "47.6%\n",
      "47.9%\n",
      "48.3%\n",
      "48.6%\n",
      "48.9%\n",
      "49.3%\n",
      "49.6%\n",
      "49.9%\n",
      "50.2%\n",
      "50.6%\n",
      "50.9%\n",
      "51.2%\n",
      "51.6%\n",
      "51.9%\n",
      "52.2%\n",
      "52.6%\n",
      "52.9%\n",
      "53.2%\n",
      "53.6%\n",
      "53.9%\n",
      "54.2%\n",
      "54.5%\n",
      "54.9%\n",
      "55.2%\n",
      "55.5%\n",
      "55.9%\n",
      "56.2%\n",
      "56.5%\n",
      "56.9%\n",
      "57.2%\n",
      "57.5%\n",
      "57.9%\n",
      "58.2%\n",
      "58.5%\n",
      "58.8%\n",
      "59.2%\n",
      "59.5%\n",
      "59.8%\n",
      "60.2%\n",
      "60.5%\n",
      "60.8%\n",
      "61.2%\n",
      "61.5%\n",
      "61.8%\n",
      "62.1%\n",
      "62.5%\n",
      "62.8%\n",
      "63.1%\n",
      "63.5%\n",
      "63.8%\n",
      "64.1%\n",
      "64.5%\n",
      "64.8%\n",
      "65.1%\n",
      "65.5%\n",
      "65.8%\n",
      "66.1%\n",
      "66.4%\n",
      "66.8%\n",
      "67.1%\n",
      "67.4%\n",
      "67.8%\n",
      "68.1%\n",
      "68.4%\n",
      "68.8%\n",
      "69.1%\n",
      "69.4%\n",
      "69.8%\n",
      "70.1%\n",
      "70.4%\n",
      "70.7%\n",
      "71.1%\n",
      "71.4%\n",
      "71.7%\n",
      "72.1%\n",
      "72.4%\n",
      "72.7%\n",
      "73.1%\n",
      "73.4%\n",
      "73.7%\n",
      "74.0%\n",
      "74.4%\n",
      "74.7%\n",
      "75.0%\n",
      "75.4%\n",
      "75.7%\n",
      "76.0%\n",
      "76.4%\n",
      "76.7%\n",
      "77.0%\n",
      "77.4%\n",
      "77.7%\n",
      "78.0%\n",
      "78.3%\n",
      "78.7%\n",
      "79.0%\n",
      "79.3%\n",
      "79.7%\n",
      "80.0%\n",
      "80.3%\n",
      "80.7%\n",
      "81.0%\n",
      "81.3%\n",
      "81.7%\n",
      "82.0%\n",
      "82.3%\n",
      "82.6%\n",
      "83.0%\n",
      "83.3%\n",
      "83.6%\n",
      "84.0%\n",
      "84.3%\n",
      "84.6%\n",
      "85.0%\n",
      "85.3%\n",
      "85.6%\n",
      "85.9%\n",
      "86.3%\n",
      "86.6%\n",
      "86.9%\n",
      "87.3%\n",
      "87.6%\n",
      "87.9%\n",
      "88.3%\n",
      "88.6%\n",
      "88.9%\n",
      "89.3%\n",
      "89.6%\n",
      "89.9%\n",
      "90.2%\n",
      "90.6%\n",
      "90.9%\n",
      "91.2%\n",
      "91.6%\n",
      "91.9%\n",
      "92.2%\n",
      "92.6%\n",
      "92.9%\n",
      "93.2%\n",
      "93.6%\n",
      "93.9%\n",
      "94.2%\n",
      "94.5%\n",
      "94.9%\n",
      "95.2%\n",
      "95.5%\n",
      "95.9%\n",
      "96.2%\n",
      "96.5%\n",
      "96.9%\n",
      "97.2%\n",
      "97.5%\n",
      "97.9%\n",
      "98.2%\n",
      "98.5%\n",
      "98.8%\n",
      "99.2%\n",
      "99.5%\n",
      "99.8%\n",
      "100.0%\n",
      "\n",
      "100.0%\n",
      "\n",
      "2.0%\n",
      "4.0%\n",
      "6.0%\n",
      "7.9%\n",
      "9.9%\n",
      "11.9%\n",
      "13.9%\n",
      "15.9%\n",
      "17.9%\n",
      "19.9%\n",
      "21.9%\n",
      "23.8%\n",
      "25.8%\n",
      "27.8%\n",
      "29.8%\n",
      "31.8%\n",
      "33.8%\n",
      "35.8%\n",
      "37.8%\n",
      "39.7%\n",
      "41.7%\n",
      "43.7%\n",
      "45.7%\n",
      "47.7%\n",
      "49.7%\n",
      "51.7%\n",
      "53.7%\n",
      "55.6%\n",
      "57.6%\n",
      "59.6%\n",
      "61.6%\n",
      "63.6%\n",
      "65.6%\n",
      "67.6%\n",
      "69.6%\n",
      "71.5%\n",
      "73.5%\n",
      "75.5%\n",
      "77.5%\n",
      "79.5%\n",
      "81.5%\n",
      "83.5%\n",
      "85.5%\n",
      "87.4%\n",
      "89.4%\n",
      "91.4%\n",
      "93.4%\n",
      "95.4%\n",
      "97.4%\n",
      "99.4%\n",
      "100.0%\n",
      "\n",
      "100.0%\n",
      "Train Epoch: 1 [0/60000 (0%)]\tLoss: 2.329208\n",
      "Train Epoch: 1 [6400/60000 (11%)]\tLoss: 0.649992\n",
      "Train Epoch: 1 [12800/60000 (21%)]\tLoss: 0.395347\n",
      "Train Epoch: 1 [19200/60000 (32%)]\tLoss: 0.249918\n",
      "Train Epoch: 1 [25600/60000 (43%)]\tLoss: 0.252508\n",
      "Train Epoch: 1 [32000/60000 (53%)]\tLoss: 0.131510\n",
      "Train Epoch: 1 [38400/60000 (64%)]\tLoss: 0.107853\n",
      "Train Epoch: 1 [44800/60000 (75%)]\tLoss: 0.153738\n",
      "Train Epoch: 1 [51200/60000 (85%)]\tLoss: 0.132199\n",
      "Train Epoch: 1 [57600/60000 (96%)]\tLoss: 0.038577\n",
      "Train Epoch: 2 [0/60000 (0%)]\tLoss: 0.162379\n",
      "Train Epoch: 2 [6400/60000 (11%)]\tLoss: 0.093486\n",
      "Train Epoch: 2 [12800/60000 (21%)]\tLoss: 0.046460\n",
      "Train Epoch: 2 [19200/60000 (32%)]\tLoss: 0.057404\n",
      "Train Epoch: 2 [25600/60000 (43%)]\tLoss: 0.089871\n",
      "Train Epoch: 2 [32000/60000 (53%)]\tLoss: 0.030527\n",
      "Train Epoch: 2 [38400/60000 (64%)]\tLoss: 0.126130\n",
      "Train Epoch: 2 [44800/60000 (75%)]\tLoss: 0.125177\n",
      "Train Epoch: 2 [51200/60000 (85%)]\tLoss: 0.214734\n",
      "Train Epoch: 2 [57600/60000 (96%)]\tLoss: 0.132520\n",
      "Training is finished\n",
      "[u61c13e8364f-train] Completed with code 0 in 0:00:45.117982 seconds.Operating inside /var/folders/tx/51dj585d29d554dgxchlcnvh0000gn/T/u61c13e8364f2amq0j64Looking in links: /tmp/tmpq35odxzxProcessing /tmp/tmpq35odxzx/setuptools-65.5.0-py3-none-any.whlProcessing /tmp/tmpq35odxzx/pip-24.0-py3-none-any.whlInstalling collected packages: setuptools, pipSuccessfully installed pip-24.0 setuptools-65.5.0Collecting pip-system-certs  Using cached pip_system_certs-5.3-py3-none-any.whl.metadata (3.9 kB)Collecting torch  Downloading torch-2.9.1-cp311-none-macosx_11_0_arm64.whl.metadata (30 kB)Collecting torchvision  Downloading torchvision-0.24.1-cp311-cp311-macosx_11_0_arm64.whl.metadata (5.9 kB)Collecting pip>=24.2 (from pip-system-certs)  Using cached pip-25.3-py3-none-any.whl.metadata (4.7 kB)Collecting filelock (from torch)  Using cached filelock-3.20.0-py3-none-any.whl.metadata (2.1 kB)Collecting typing-extensions>=4.10.0 (from torch)  Using cached typing_extensions-4.15.0-py3-none-any.whl.metadata (3.3 kB)Collecting sympy>=1.13.3 (from torch)  Using cached sympy-1.14.0-py3-none-any.whl.metadata (12 kB)Collecting networkx>=2.5.1 (from torch)  Downloading networkx-3.6-py3-none-any.whl.metadata (6.8 kB)Collecting jinja2 (from torch)  Using cached jinja2-3.1.6-py3-none-any.whl.metadata (2.9 kB)Collecting fsspec>=0.8.5 (from torch)  Using cached fsspec-2025.10.0-py3-none-any.whl.metadata (10 kB)Collecting numpy (from torchvision)  Downloading numpy-2.3.5-cp311-cp311-macosx_14_0_arm64.whl.metadata (62 kB)     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.1/62.1 kB 7.8 MB/s eta 0:00:00Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)  Using cached pillow-12.0.0-cp311-cp311-macosx_11_0_arm64.whl.metadata (8.8 kB)Collecting mpmath<1.4,>=1.1.0 (from sympy>=1.13.3->torch)  Using cached mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB)Collecting MarkupSafe>=2.0 (from jinja2->torch)  Using cached markupsafe-3.0.3-cp311-cp311-macosx_11_0_arm64.whl.metadata (2.7 kB)Using cached 
pip_system_certs-5.3-py3-none-any.whl (6.9 kB)Downloading torch-2.9.1-cp311-none-macosx_11_0_arm64.whl (74.5 MB)   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 74.5/74.5 MB 45.7 MB/s eta 0:00:00Downloading torchvision-0.24.1-cp311-cp311-macosx_11_0_arm64.whl (1.9 MB)   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.9/1.9 MB 38.5 MB/s eta 0:00:00Using cached fsspec-2025.10.0-py3-none-any.whl (200 kB)Downloading networkx-3.6-py3-none-any.whl (2.1 MB)   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 42.0 MB/s eta 0:00:00Using cached pillow-12.0.0-cp311-cp311-macosx_11_0_arm64.whl (4.7 MB)Using cached pip-25.3-py3-none-any.whl (1.8 MB)Using cached sympy-1.14.0-py3-none-any.whl (6.3 MB)Using cached typing_extensions-4.15.0-py3-none-any.whl (44 kB)Using cached filelock-3.20.0-py3-none-any.whl (16 kB)Using cached jinja2-3.1.6-py3-none-any.whl (134 kB)Downloading numpy-2.3.5-cp311-cp311-macosx_14_0_arm64.whl (5.4 MB)   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.4/5.4 MB 50.0 MB/s eta 0:00:00Using cached markupsafe-3.0.3-cp311-cp311-macosx_11_0_arm64.whl (12 kB)Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)Installing collected packages: mpmath, typing-extensions, sympy, pip, pillow, numpy, networkx, MarkupSafe, fsspec, filelock, pip-system-certs, jinja2, torch, torchvision  Attempting uninstall: pip    Found existing installation: pip 24.0    Uninstalling pip-24.0:      Successfully uninstalled pip-24.0Successfully installed MarkupSafe-3.0.3 filelock-3.20.0 fsspec-2025.10.0 jinja2-3.1.6 mpmath-1.3.0 networkx-3.6 numpy-2.3.5 pillow-12.0.0 pip-25.3 pip-system-certs-5.3 sympy-1.14.0 torch-2.9.1 torchvision-0.24.1 typing-extensions-4.15.0W1126 15:02:00.438000 97773 torch/distributed/elastic/multiprocessing/redirects.py:29] NOTE: Redirects are currently not supported in Windows or 
MacOs.mps0.3%0.7%1.0%1.3%1.7%2.0%2.3%2.6%3.0%3.3%3.6%4.0%4.3%4.6%5.0%5.3%5.6%6.0%6.3%6.6%6.9%7.3%7.6%7.9%8.3%8.6%8.9%9.3%9.6%9.9%10.2%10.6%10.9%11.2%11.6%11.9%12.2%12.6%12.9%13.2%13.6%13.9%14.2%14.5%14.9%15.2%15.5%15.9%16.2%16.5%16.9%17.2%17.5%17.9%18.2%18.5%18.8%19.2%19.5%19.8%20.2%20.5%20.8%21.2%21.5%21.8%22.1%22.5%22.8%23.1%23.5%23.8%24.1%24.5%24.8%25.1%25.5%25.8%26.1%26.4%26.8%27.1%27.4%27.8%28.1%28.4%28.8%29.1%29.4%29.8%30.1%30.4%30.7%31.1%31.4%31.7%32.1%32.4%32.7%33.1%33.4%33.7%34.0%34.4%34.7%35.0%35.4%35.7%36.0%36.4%36.7%37.0%37.4%37.7%38.0%38.3%38.7%39.0%39.3%39.7%40.0%40.3%40.7%41.0%41.3%41.7%42.0%42.3%42.6%43.0%43.3%43.6%44.0%44.3%44.6%45.0%45.3%45.6%45.9%46.3%46.6%46.9%47.3%47.6%47.9%48.3%48.6%48.9%49.3%49.6%49.9%50.2%50.6%50.9%51.2%51.6%51.9%52.2%52.6%52.9%53.2%53.6%53.9%54.2%54.5%54.9%55.2%55.5%55.9%56.2%56.5%56.9%57.2%57.5%57.9%58.2%58.5%58.8%59.2%59.5%59.8%60.2%60.5%60.8%61.2%61.5%61.8%62.1%62.5%62.8%63.1%63.5%63.8%64.1%64.5%64.8%65.1%65.5%65.8%66.1%66.4%66.8%67.1%67.4%67.8%68.1%68.4%68.8%69.1%69.4%69.8%70.1%70.4%70.7%71.1%71.4%71.7%72.1%72.4%72.7%73.1%73.4%73.7%74.0%74.4%74.7%75.0%75.4%75.7%76.0%76.4%76.7%77.0%77.4%77.7%78.0%78.3%78.7%79.0%79.3%79.7%80.0%80.3%80.7%81.0%81.3%81.7%82.0%82.3%82.6%83.0%83.3%83.6%84.0%84.3%84.6%85.0%85.3%85.6%85.9%86.3%86.6%86.9%87.3%87.6%87.9%88.3%88.6%88.9%89.3%89.6%89.9%90.2%90.6%90.9%91.2%91.6%91.9%92.2%92.6%92.9%93.2%93.6%93.9%94.2%94.5%94.9%95.2%95.5%95.9%96.2%96.5%96.9%97.2%97.5%97.9%98.2%98.5%98.8%99.2%99.5%99.8%100.0%100.0%2.0%4.0%6.0%7.9%9.9%11.9%13.9%15.9%17.9%19.9%21.9%23.8%25.8%27.8%29.8%31.8%33.8%35.8%37.8%39.7%41.7%43.7%45.7%47.7%49.7%51.7%53.7%55.6%57.6%59.6%61.6%63.6%65.6%67.6%69.6%71.5%73.5%75.5%77.5%79.5%81.5%83.5%85.5%87.4%89.4%91.4%93.4%95.4%97.4%99.4%100.0%100.0%Train Epoch: 1 [0/60000 (0%)]\tLoss: 2.329208Train Epoch: 1 [6400/60000 (11%)]\tLoss: 0.649992Train Epoch: 1 [12800/60000 (21%)]\tLoss: 0.395347Train Epoch: 1 [19200/60000 (32%)]\tLoss: 0.249918Train Epoch: 1 [25600/60000 (43%)]\tLoss: 
0.252508Train Epoch: 1 [32000/60000 (53%)]\tLoss: 0.131510Train Epoch: 1 [38400/60000 (64%)]\tLoss: 0.107853Train Epoch: 1 [44800/60000 (75%)]\tLoss: 0.153738Train Epoch: 1 [51200/60000 (85%)]\tLoss: 0.132199Train Epoch: 1 [57600/60000 (96%)]\tLoss: 0.038577Train Epoch: 2 [0/60000 (0%)]\tLoss: 0.162379Train Epoch: 2 [6400/60000 (11%)]\tLoss: 0.093486Train Epoch: 2 [12800/60000 (21%)]\tLoss: 0.046460Train Epoch: 2 [19200/60000 (32%)]\tLoss: 0.057404Train Epoch: 2 [25600/60000 (43%)]\tLoss: 0.089871Train Epoch: 2 [32000/60000 (53%)]\tLoss: 0.030527Train Epoch: 2 [38400/60000 (64%)]\tLoss: 0.126130Train Epoch: 2 [44800/60000 (75%)]\tLoss: 0.125177Train Epoch: 2 [51200/60000 (85%)]\tLoss: 0.214734Train Epoch: 2 [57600/60000 (96%)]\tLoss: 0.132520Training is finished[u61c13e8364f-train] Completed with code 0 in 0:00:45.117982 seconds."
     ]
    }
   ],
   "source": [
    "for logline in client.get_job_logs(job_name, follow=True):\n",
    "    print(logline, end='')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cleanup",
   "metadata": {},
   "source": [
    "## Delete the TrainJob\n",
    "\n",
    "When the TrainJob is finished, you can delete the resource."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "delete",
   "metadata": {},
   "outputs": [],
   "source": [
    "client.delete_job(job_name)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
