{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GzMT0d7XRdQ3"
      },
      "source": [
        "# Supervised Fine-tuning Gemini 2.5 Flash for Predictive Maintenance\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_predictive_maintenance.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Ftuning%2Fsft_gemini_predictive_maintenance.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/tuning/sft_gemini_predictive_maintenance.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_predictive_maintenance.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<div style=\"clear: both;\"></div>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_uco5wDNcIRq"
      },
      "source": [
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_predictive_maintenance.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_predictive_maintenance.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_predictive_maintenance.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_predictive_maintenance.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_predictive_maintenance.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "M04y-KnqcSCq"
      },
      "source": [
        "| Author |\n",
        "| --- |\n",
        "| [Aniket Agrawal](https://github.com/aniketagrawal2012) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WURYK3ZRRdQ5"
      },
      "source": [
        "## Overview\n",
        "\n",
        "This notebook demonstrates how to perform **supervised fine-tuning** on a Gemini model for a predictive maintenance task within an industrial infrastructure context. We will use the `google-genai` SDK integrated with Vertex AI to train the model to classify equipment status based on simulated sensor readings.\n",
        "\n",
        "### Use Case: Classifying Equipment Status from Sensor Data\n",
        "\n",
        "Instead of predicting exact time-to-failure, we'll fine-tune Gemini to classify the operational state of equipment (e.g., \"Normal\", \"Warning\", \"Critical\") based on recent sensor trends. This simplifies the task into a text-generation problem suitable for LLM fine-tuning.\n",
        "\n",
        "**Workflow:**\n",
        "1.  **Load/Generate Data**: Create simulated sensor readings and maintenance/failure logs.\n",
        "2.  **Prepare Tuning Data (JSONL)**: Convert time-series data snippets and corresponding status labels into the JSON Lines format required for Gemini supervised tuning.\n",
        "3.  **Upload to GCS**: Store the formatted tuning data in a Google Cloud Storage bucket.\n",
        "4.  **Launch Fine-tuning Job**: Use the `google-genai` SDK client (configured for Vertex AI) to start the supervised tuning job.\n",
        "5.  **Monitor Job**: Track the progress of the fine-tuning job.\n",
        "6.  **Evaluate Tuned Model**: Make predictions on new sensor data prompts using the fine-tuned model endpoint and compare qualitatively.\n",
        "7.  **Integrate Gemini for Reporting**: Use a base Gemini model to summarize the tuning job results."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NTmKZlmIRdQ5"
      },
      "source": [
        "## Setup\n",
        "\n",
        "### Install required packages"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "PdrVBfYMRdQ5"
      },
      "outputs": [],
      "source": [
        "import sys  # noqa: F401\n",
        "\n",
        "# Install necessary libraries\n",
        "# gcsfs is added to allow pandas to write directly to GCS\n",
        "!{sys.executable} -m pip install --upgrade --user --quiet pandas numpy google-cloud-aiplatform google-genai google-cloud-storage gcsfs"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Yqy4-3cBRdQ6"
      },
      "source": [
        "**⚠️ Important:** Restart the kernel after installation."
      ]
    },
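    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Restart the kernel so the freshly installed packages are picked up.\n",
        "# This is the usual Colab/Workbench pattern; skip it if you prefer to\n",
        "# restart the runtime manually from the menu.\n",
        "import IPython\n",
        "\n",
        "app = IPython.Application.instance()\n",
        "app.kernel.do_shutdown(True)"
      ]
    },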
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KS9rUufMRdQ6"
      },
      "source": [
        "### Authenticate and Initialize Vertex AI\n",
        "\n",
        "Set your project, region, and GCS bucket information. We configure the notebook for Vertex AI fine-tuning and reporting."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "PyQnTYGlRdQ6"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "import vertexai\n",
        "from google.genai import (\n",
        "    Client as VertexClient,  # This is for Vertex AI tuning/models client\n",
        ")\n",
        "\n",
        "# --- Vertex AI Configuration (Required for Fine-tuning Job) ---\n",
        "PROJECT_ID = \"\"  # @param {type: \"string\", placeholder: \"your-gcp-project-id\"}\n",
        "REGION = \"\"  # @param {type:\"string\"}\n",
        "BUCKET_NAME = \"\"  # @param {type:\"string\", placeholder: \"your-gcs-bucket-name\"}\n",
        "BUCKET_URI = f\"gs://{BUCKET_NAME}\"\n",
        "\n",
        "# --- Authentication (Colab/Workbench for Vertex AI) ---\n",
        "if not PROJECT_ID:\n",
        "    try:\n",
        "        from google.colab import auth\n",
        "\n",
        "        auth.authenticate_user()\n",
        "        import subprocess\n",
        "\n",
        "        PROJECT_ID = (\n",
        "            subprocess.check_output([\"gcloud\", \"config\", \"get-value\", \"project\"])\n",
        "            .decode(\"utf-8\")\n",
        "            .strip()\n",
        "        )\n",
        "        print(f\"Retrieved Project ID: {PROJECT_ID}\")\n",
        "    except Exception as e:\n",
        "        print(\n",
        "            f\"Could not automatically retrieve Project ID. Please set it manually. Error: {e}\"\n",
        "        )\n",
        "\n",
        "# Ensure BUCKET_NAME is set, and attempt to create the bucket\n",
        "if not BUCKET_NAME:\n",
        "    if PROJECT_ID:\n",
        "        BUCKET_NAME = f\"{PROJECT_ID}-gemini-tuning-bucket\"\n",
        "        BUCKET_URI = f\"gs://{BUCKET_NAME}\"\n",
        "        print(f\"Bucket name not provided. Using default: {BUCKET_NAME}\")\n",
        "    else:\n",
        "        raise ValueError(\n",
        "            \"Please provide a valid GCS Bucket name or ensure PROJECT_ID is set for default bucket creation.\"\n",
        "        )\n",
        "\n",
        "print(f\"Checking/Creating bucket: {BUCKET_URI}\")\n",
        "# Create the bucket only if it does not already exist.\n",
        "try:\n",
        "    # The '||' syntax works in shell to execute the second command only if the first fails\n",
        "    # `gsutil ls` returns 0 if bucket exists, non-zero if not.\n",
        "    # `gsutil mb` creates the bucket.\n",
        "    creation_command = f\"gsutil ls {BUCKET_URI} > /dev/null 2>&1 || gsutil mb -l {REGION} -p {PROJECT_ID} {BUCKET_URI}\"\n",
        "    print(f\"Running: {creation_command}\")\n",
        "    # Using os.system as '!' might behave differently depending on the environment.\n",
        "    # os.system returns the exit status of the command.\n",
        "    exit_code = os.system(creation_command)\n",
        "    if exit_code != 0:\n",
        "        print(\n",
        "            f\"Warning: Bucket command finished with exit code {exit_code}. Check GCS permissions or bucket status.\"\n",
        "        )\n",
        "    else:\n",
        "        print(f\"Bucket {BUCKET_URI} is ready.\")\n",
        "except Exception as bucket_e:\n",
        "    print(f\"Error checking/creating bucket: {bucket_e}\")\n",
        "    raise ValueError(\"Bucket check/creation failed.\") from bucket_e\n",
        "\n",
        "\n",
        "if PROJECT_ID:\n",
        "    print(\n",
        "        f\"Initializing Vertex AI for project: {PROJECT_ID} in {REGION} using bucket {BUCKET_URI}\"\n",
        "    )\n",
        "    # Initialize Vertex AI SDK (needed for launching the tuning job)\n",
        "    vertexai.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)\n",
        "    # Initialize the genai client specifically for Vertex AI operations (like tuning)\n",
        "    vertex_client = VertexClient(vertexai=True, project=PROJECT_ID, location=REGION)\n",
        "    print(\"Vertex AI SDK Initialized.\")\n",
        "else:\n",
        "    raise ValueError(\"PROJECT_ID must be set for Vertex AI operations.\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fQ3ECMSARdQ6"
      },
      "source": [
        "### Imports and Global Configuration"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bV74HX1oRdQ6"
      },
      "outputs": [],
      "source": [
        "import json\n",
        "import random\n",
        "import time\n",
        "import warnings\n",
        "from typing import Any\n",
        "\n",
        "import numpy as np\n",
        "import pandas as pd\n",
        "from google.genai import types as genai_types\n",
        "\n",
        "# --- Global Settings ---\n",
        "warnings.filterwarnings(\"ignore\", category=UserWarning)\n",
        "warnings.filterwarnings(\"ignore\", category=FutureWarning)\n",
        "np.random.seed(42)\n",
        "random.seed(42)\n",
        "\n",
        "# --- Constants ---\n",
        "BASE_MODEL_ID = \"gemini-2.5-flash\"  # Tunable model ID on Vertex AI\n",
        "TUNED_MODEL_DISPLAY_NAME = f\"pred-maint-gemini-tuned-{int(time.time())}\"\n",
        "DATA_DIR_GCS = f\"{BUCKET_URI}/pred_maint_tuning_data\"\n",
        "TRAIN_JSONL_GCS_URI = f\"{DATA_DIR_GCS}/train_data.jsonl\"\n",
        "VALIDATION_JSONL_GCS_URI = f\"{DATA_DIR_GCS}/validation_data.jsonl\"\n",
        "TEST_JSONL_GCS_URI = f\"{DATA_DIR_GCS}/test_data.jsonl\"  # For qualitative eval later\n",
        "\n",
        "SEQUENCE_LENGTH = 12  # Use 12 hours of data for context\n",
        "FAILURE_PREDICTION_HORIZON_HOURS = 24\n",
        "WARNING_HORIZON_HOURS = 72  # Issue warning if failure is within 72 hours\n",
        "\n",
        "print(f\"Base model for tuning: {BASE_MODEL_ID}\")\n",
        "print(f\"Tuning data GCS path: {DATA_DIR_GCS}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TEUFLjeYRdQ7"
      },
      "source": [
        "## Step 1: Generate Simulated Data"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "8RqK0Td6RdQ7"
      },
      "outputs": [],
      "source": [
        "# Reusing the data generation function from the previous notebook\n",
        "def generate_maintenance_data(\n",
        "    filename=\"equipment_sensor_data.csv\",\n",
        "    log_filename=\"maintenance_failure_logs.csv\",\n",
        "    num_rows=2000,\n",
        "    equipment_id=\"EQ-001\",\n",
        ") -> tuple[pd.DataFrame, pd.DataFrame]:\n",
        "    \"\"\"Generates or loads simulated sensor data and maintenance/failure logs.\"\"\"\n",
        "    if os.path.exists(filename) and os.path.exists(log_filename):\n",
        "        print(\n",
        "            f\"Data files '{filename}' and '{log_filename}' already exist. Loading data.\"\n",
        "        )\n",
        "        sensor_df = pd.read_csv(filename, parse_dates=[\"timestamp\"])\n",
        "        log_df = pd.read_csv(log_filename, parse_dates=[\"timestamp\"])\n",
        "        return sensor_df, log_df\n",
        "\n",
        "    print(\"Generating new sensor and maintenance log data...\")\n",
        "    # Generate timestamps with timezone awareness, matching typical sensor data\n",
        "    timestamps = pd.date_range(\n",
        "        end=pd.Timestamp.now(tz=\"UTC\"), periods=num_rows, freq=\"h\"\n",
        "    )\n",
        "\n",
        "    data = {\"timestamp\": timestamps, \"equipment_id\": equipment_id}\n",
        "    data[\"temperature_c\"] = np.random.normal(\n",
        "        loc=60, scale=5, size=num_rows\n",
        "    ) + np.linspace(0, 15, num_rows)\n",
        "    data[\"vibration_hz\"] = np.random.normal(\n",
        "        loc=50, scale=2, size=num_rows\n",
        "    ) + np.random.normal(0, np.linspace(0, 5, num_rows))\n",
        "    data[\"pressure_psi\"] = np.random.normal(\n",
        "        loc=100, scale=10, size=num_rows\n",
        "    ) - np.linspace(0, 5, num_rows)\n",
        "    sensor_df = pd.DataFrame(data)\n",
        "\n",
        "    log_data = []\n",
        "    maintenance_indices = np.random.choice(num_rows, size=num_rows // 50, replace=False)\n",
        "    for idx in maintenance_indices:\n",
        "        # Check index bounds\n",
        "        if idx < len(timestamps):\n",
        "            log_data.append(\n",
        "                {\n",
        "                    \"timestamp\": timestamps[idx],\n",
        "                    \"equipment_id\": equipment_id,\n",
        "                    \"event_type\": \"Maintenance\",\n",
        "                    \"details\": \"Routine Check\",\n",
        "                }\n",
        "            )\n",
        "\n",
        "    failure_indices = np.linspace(num_rows * 0.9, num_rows - 1, num=5).astype(int)\n",
        "    for idx in failure_indices:\n",
        "        # Ensure index and timestamp exist before adding log\n",
        "        if idx < len(timestamps):\n",
        "            log_data.append(\n",
        "                {\n",
        "                    \"timestamp\": timestamps[idx],\n",
        "                    \"equipment_id\": equipment_id,\n",
        "                    \"event_type\": \"Failure\",\n",
        "                    \"details\": \"Component Failure\",\n",
        "                }\n",
        "            )\n",
        "            # Introduce anomalies around failures - ensure indices are valid\n",
        "            start_anomaly = max(0, idx - 10)\n",
        "            end_anomaly = min(num_rows, idx + 2)  # Exclusive upper bound of the anomaly window\n",
        "            anomaly_size = (end_anomaly - start_anomaly, 2)\n",
        "            # Ensure anomaly size is valid before applying\n",
        "            if start_anomaly < end_anomaly and anomaly_size[0] > 0:\n",
        "                sensor_df.loc[\n",
        "                    start_anomaly : end_anomaly - 1, [\"temperature_c\", \"vibration_hz\"]\n",
        "                ] *= np.random.uniform(1.05, 1.25, size=anomaly_size)\n",
        "\n",
        "    log_df = pd.DataFrame(log_data)\n",
        "    # Ensure timestamp column exists and sort\n",
        "    if \"timestamp\" in log_df.columns and not log_df.empty:\n",
        "        # Convert to UTC if not already, to ensure consistency before sorting\n",
        "        if log_df[\"timestamp\"].dt.tz is None:\n",
        "            log_df[\"timestamp\"] = log_df[\"timestamp\"].dt.tz_localize(\"UTC\")\n",
        "        else:\n",
        "            log_df[\"timestamp\"] = log_df[\"timestamp\"].dt.tz_convert(\"UTC\")\n",
        "        log_df = log_df.sort_values(\"timestamp\").reset_index(drop=True)\n",
        "    else:\n",
        "        print(\"Warning: Log data is empty or missing 'timestamp' column.\")\n",
        "        # Create an empty df with expected columns if needed\n",
        "        log_df = pd.DataFrame(\n",
        "            columns=[\"timestamp\", \"equipment_id\", \"event_type\", \"details\"]\n",
        "        )\n",
        "        log_df[\"timestamp\"] = pd.to_datetime(log_df[\"timestamp\"]).dt.tz_localize(\n",
        "            \"UTC\"\n",
        "        )  # Ensure dtype even if empty\n",
        "\n",
        "    # Ensure sensor data timestamp is also UTC for consistent comparison later\n",
        "    if sensor_df[\"timestamp\"].dt.tz is None:\n",
        "        sensor_df[\"timestamp\"] = sensor_df[\"timestamp\"].dt.tz_localize(\"UTC\")\n",
        "    else:\n",
        "        sensor_df[\"timestamp\"] = sensor_df[\"timestamp\"].dt.tz_convert(\"UTC\")\n",
        "\n",
        "    sensor_df.to_csv(filename, index=False)\n",
        "    log_df.to_csv(log_filename, index=False)\n",
        "    print(f\"Generated {len(sensor_df)} sensor records to '{filename}'.\")\n",
        "    print(f\"Generated {len(log_df)} log entries to '{log_filename}'.\")\n",
        "\n",
        "    return sensor_df, log_df\n",
        "\n",
        "\n",
        "# Load or generate data\n",
        "sensor_data_df, log_data_df = generate_maintenance_data()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9jbGD8gFRdQ7"
      },
      "source": [
        "## Step 2: Prepare Tuning Data (JSONL)\n",
        "\n",
        "We convert the raw data into sequences and format them as JSON Lines, where each line represents a prompt (sensor data summary) and the expected completion (equipment status)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "w9pXnVJARdQ7"
      },
      "outputs": [],
      "source": [
        "def create_tuning_jsonl(\n",
        "    sensor_df: pd.DataFrame,\n",
        "    log_df: pd.DataFrame,\n",
        "    sequence_length: int,\n",
        "    failure_horizon_h: int,\n",
        "    warning_horizon_h: int,\n",
        ") -> list[dict[str, Any]]:\n",
        "    \"\"\"Creates JSONL data for Gemini supervised tuning.\"\"\"\n",
        "    print(\"\\n--- Preparing JSONL Tuning Data ---\")\n",
        "    df = sensor_df.copy()\n",
        "    # Ensure log_df has timestamps before proceeding\n",
        "    if log_df.empty or \"timestamp\" not in log_df.columns:\n",
        "        print(\n",
        "            \"Warning: Log DataFrame is empty or missing 'timestamp'. Cannot determine failure times.\"\n",
        "        )\n",
        "        failure_times = pd.Series(dtype=\"datetime64[ns, UTC]\")  # Empty series\n",
        "    else:\n",
        "        # Ensure log_df timestamps are UTC\n",
        "        if log_df[\"timestamp\"].dt.tz is None:\n",
        "            log_df[\"timestamp\"] = log_df[\"timestamp\"].dt.tz_localize(\"UTC\")\n",
        "        else:\n",
        "            log_df[\"timestamp\"] = log_df[\"timestamp\"].dt.tz_convert(\"UTC\")\n",
        "        failure_times = log_df[log_df[\"event_type\"] == \"Failure\"][\"timestamp\"]\n",
        "\n",
        "    # Define Status based on proximity to failure\n",
        "    df[\"status\"] = \"Status: Normal\"\n",
        "    fail_horizon = pd.Timedelta(hours=failure_horizon_h)\n",
        "    warn_horizon = pd.Timedelta(hours=warning_horizon_h)\n",
        "\n",
        "    # Ensure df timestamps are UTC\n",
        "    if df[\"timestamp\"].dt.tz is None:\n",
        "        df[\"timestamp\"] = df[\"timestamp\"].dt.tz_localize(\"UTC\")\n",
        "    else:\n",
        "        df[\"timestamp\"] = df[\"timestamp\"].dt.tz_convert(\"UTC\")\n",
        "\n",
        "    for f_time in failure_times:\n",
        "        # Ensure f_time is timezone-aware (should be UTC from previous step)\n",
        "        if f_time.tzinfo is None:\n",
        "            f_time = f_time.tz_localize(\"UTC\")\n",
        "\n",
        "        # Critical within failure horizon\n",
        "        crit_mask = (df[\"timestamp\"] >= f_time - fail_horizon) & (\n",
        "            df[\"timestamp\"] < f_time\n",
        "        )\n",
        "        df.loc[crit_mask, \"status\"] = \"Status: Critical - Failure imminent\"\n",
        "        # Warning within warning horizon (but not critical)\n",
        "        warn_mask = (df[\"timestamp\"] >= f_time - warn_horizon) & (\n",
        "            df[\"timestamp\"] < f_time - fail_horizon\n",
        "        )\n",
        "        df.loc[warn_mask, \"status\"] = \"Status: Warning - Elevated risk detected\"\n",
        "\n",
        "    print(f\"Status distribution:\\n{df['status'].value_counts()}\")\n",
        "\n",
        "    feature_columns = [\"temperature_c\", \"vibration_hz\", \"pressure_psi\"]\n",
        "\n",
        "    jsonl_data = []\n",
        "    # Iterate through possible end points for sequences\n",
        "    for i in range(sequence_length, len(df)):\n",
        "        sequence_df = df.iloc[i - sequence_length : i]\n",
        "        # Check if the sequence is valid (e.g., no NaNs introduced by iloc edge cases)\n",
        "        if sequence_df.isnull().values.any() or sequence_df.empty:\n",
        "            continue\n",
        "\n",
        "        target_status = df.iloc[i][\"status\"]\n",
        "        current_equipment_id = df.iloc[i][\"equipment_id\"]  # Get ID for the prompt\n",
        "\n",
        "        # Create a text prompt summarizing the sequence\n",
        "        prompt = f\"Equipment {current_equipment_id} sensor data for the last {sequence_length} hours:\\n\"\n",
        "        for col in feature_columns:\n",
        "            mean_val = sequence_df[col].mean()\n",
        "            std_val = sequence_df[col].std()\n",
        "            # Calculate trend more robustly\n",
        "            diff_mean = sequence_df[col].diff().mean()\n",
        "            trend = (\n",
        "                \"stable\"\n",
        "                if pd.isna(diff_mean) or abs(diff_mean) < 0.1\n",
        "                else (\"rising\" if diff_mean > 0 else \"falling\")\n",
        "            )\n",
        "            prompt += f\"- {col}: Average {mean_val:.1f}, StdDev {std_val:.1f}, Trend {trend}\\n\"\n",
        "        prompt += \"\\nClassify the equipment status based on this data (Normal, Warning, or Critical).\"\n",
        "\n",
        "        # Format according to Gemini tuning requirements\n",
        "        instance = {\n",
        "            \"contents\": [\n",
        "                {\"role\": \"user\", \"parts\": [{\"text\": prompt}]},\n",
        "                {\"role\": \"model\", \"parts\": [{\"text\": target_status}]},\n",
        "            ]\n",
        "        }\n",
        "        jsonl_data.append(instance)\n",
        "\n",
        "    print(f\"Generated {len(jsonl_data)} JSONL instances.\")\n",
        "    return jsonl_data\n",
        "\n",
        "\n",
        "# Create JSONL data\n",
        "tuning_data_jsonl = create_tuning_jsonl(\n",
        "    sensor_data_df,\n",
        "    log_data_df,\n",
        "    sequence_length=SEQUENCE_LENGTH,\n",
        "    failure_horizon_h=FAILURE_PREDICTION_HORIZON_HOURS,\n",
        "    warning_horizon_h=WARNING_HORIZON_HOURS,\n",
        ")\n",
        "\n",
        "# Shuffle and Split data\n",
        "if tuning_data_jsonl:\n",
        "    random.shuffle(tuning_data_jsonl)\n",
        "    split_idx_val = int(len(tuning_data_jsonl) * 0.8)  # 80% train\n",
        "    split_idx_test = int(len(tuning_data_jsonl) * 0.9)  # 10% validation, 10% test\n",
        "\n",
        "    train_split = tuning_data_jsonl[:split_idx_val]\n",
        "    validation_split = tuning_data_jsonl[split_idx_val:split_idx_test]\n",
        "    test_split = tuning_data_jsonl[split_idx_test:]\n",
        "\n",
        "    print(\n",
        "        f\"Split sizes: Train={len(train_split)}, Validation={len(validation_split)}, Test={len(test_split)}\"\n",
        "    )\n",
        "\n",
        "    # Display a sample\n",
        "    print(\"\\n--- Sample JSONL Instance ---\")\n",
        "    print(json.dumps(train_split[0], indent=2))\n",
        "else:\n",
        "    print(\n",
        "        \"\\nWarning: No tuning data generated, possibly due to short data sequence or lack of failure events.\"\n",
        "    )\n",
        "    # Initialize splits as empty lists to prevent errors later\n",
        "    train_split, validation_split, test_split = [], [], []"
      ]
    },
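    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before uploading, it can help to sanity-check the instances against the expected `contents` schema (a `user` turn followed by a `model` turn, each with text `parts`). The check below is a minimal sketch; the tuning service performs its own, stricter validation."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def validate_instance(instance: dict[str, Any]) -> bool:\n",
        "    \"\"\"Lightweight schema check for a single tuning instance.\"\"\"\n",
        "    contents = instance.get(\"contents\")\n",
        "    if not isinstance(contents, list) or len(contents) < 2:\n",
        "        return False\n",
        "    # Expect the conversation to start with the user and end with the model.\n",
        "    if contents[0].get(\"role\") != \"user\" or contents[-1].get(\"role\") != \"model\":\n",
        "        return False\n",
        "    # Every turn must carry a list of parts with string text.\n",
        "    return all(\n",
        "        isinstance(turn.get(\"parts\"), list)\n",
        "        and all(isinstance(part.get(\"text\"), str) for part in turn[\"parts\"])\n",
        "        for turn in contents\n",
        "    )\n",
        "\n",
        "\n",
        "invalid = [x for x in tuning_data_jsonl if not validate_instance(x)]\n",
        "print(f\"Invalid instances: {len(invalid)} of {len(tuning_data_jsonl)}\")"
      ]
    },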
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-OARmnqbRdQ7"
      },
      "source": [
        "## Step 3: Upload Tuning Data to GCS\n",
        "\n",
        "The fine-tuning service reads data directly from Google Cloud Storage."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "sb9OuMrjRdQ7"
      },
      "outputs": [],
      "source": [
        "import google.auth\n",
        "\n",
        "\n",
        "def save_jsonl_to_gcs(instances: list[dict[str, Any]], gcs_uri: str):\n",
        "    \"\"\"Saves a list of dictionaries as a JSONL file to GCS using Pandas.\"\"\"\n",
        "    if not instances:\n",
        "        print(f\"No instances to upload to {gcs_uri}. Skipping upload.\")\n",
        "        return\n",
        "\n",
        "    print(f\"Uploading {len(instances)} instances to {gcs_uri}...\")\n",
        "\n",
        "    try:\n",
        "        # Get the application default credentials\n",
        "        credentials, _ = google.auth.default()\n",
        "\n",
        "        # Convert list of dicts to DataFrame\n",
        "        df = pd.DataFrame(instances)\n",
        "\n",
        "        # Save DataFrame to GCS as JSONL\n",
        "        # We MUST pass the 'token' (credentials) to authenticate the request\n",
        "        storage_options = {\"project\": PROJECT_ID, \"token\": credentials}\n",
        "\n",
        "        df.to_json(\n",
        "            gcs_uri, orient=\"records\", lines=True, storage_options=storage_options\n",
        "        )\n",
        "\n",
        "        print(\"Upload complete.\")\n",
        "    except Exception as e:\n",
        "        print(f\"ERROR during GCS upload to {gcs_uri}: {e}\")\n",
        "        print(\n",
        "            \"Please ensure the GCS bucket is accessible and that gcsfs is installed so pandas can write to GCS.\"\n",
        "        )\n",
        "\n",
        "\n",
        "# Save splits to GCS\n",
        "save_jsonl_to_gcs(train_split, TRAIN_JSONL_GCS_URI)\n",
        "save_jsonl_to_gcs(validation_split, VALIDATION_JSONL_GCS_URI)\n",
        "save_jsonl_to_gcs(\n",
        "    test_split, TEST_JSONL_GCS_URI\n",
        ")  # Save test split for later evaluation"
      ]
    },
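    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Optional: confirm the three JSONL files landed in GCS before launching\n",
        "# the tuning job. An empty listing here means the upload step failed.\n",
        "!gsutil ls -l {DATA_DIR_GCS}"
      ]
    },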
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bPa8ylyURdQ8"
      },
      "source": [
        "## Step 4: Launch Fine-tuning Job\n",
        "\n",
        "We use the `google-genai` client **configured for Vertex AI** (`vertex_client`) to start the supervised tuning job, as fine-tuning management is a Vertex AI feature."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "HF9z_1k3RdQ8"
      },
      "outputs": [],
      "source": [
        "TUNING_JOB_NAME = None  # Initialize\n",
        "if not train_split or not validation_split:\n",
        "    print(\"Skipping fine-tuning job launch as training or validation data is empty.\")\n",
        "else:\n",
        "    print(f\"Starting supervised fine-tuning job for model: {BASE_MODEL_ID}\")\n",
        "    print(f\"Tuned model display name: {TUNED_MODEL_DISPLAY_NAME}\")\n",
        "\n",
        "    training_dataset = {\n",
        "        \"gcs_uri\": TRAIN_JSONL_GCS_URI,\n",
        "    }\n",
        "\n",
        "    validation_dataset = genai_types.TuningValidationDataset(\n",
        "        gcs_uri=VALIDATION_JSONL_GCS_URI\n",
        "    )\n",
        "\n",
        "    try:\n",
        "        # Use the vertex_client configured specifically for Vertex AI operations\n",
        "        sft_tuning_job = vertex_client.tunings.tune(\n",
        "            base_model=BASE_MODEL_ID,\n",
        "            training_dataset=training_dataset,\n",
        "            config=genai_types.CreateTuningJobConfig(\n",
        "                adapter_size=\"ADAPTER_SIZE_FOUR\",  # Smaller adapter for faster tuning\n",
        "                epoch_count=3,  # Keep low for demonstration\n",
        "                tuned_model_display_name=TUNED_MODEL_DISPLAY_NAME,\n",
        "                validation_dataset=validation_dataset,\n",
        "            ),\n",
        "        )\n",
        "        print(\"\\nTuning job created:\")\n",
        "        print(sft_tuning_job)\n",
        "        TUNING_JOB_NAME = sft_tuning_job.name  # Save for monitoring\n",
        "\n",
        "    except Exception as e:\n",
        "        print(f\"ERROR starting tuning job: {e}\")\n",
        "        # Attempt to list existing jobs with the same display name in case of interruption\n",
        "        try:\n",
        "            print(\n",
        "                f\"Checking for existing tuning jobs named '{TUNED_MODEL_DISPLAY_NAME}'...\"\n",
        "            )\n",
        "            existing_jobs = vertex_client.tunings.list(\n",
        "                config={\"page_size\": 100}\n",
        "            )  # Returns a pager that iterates through result pages\n",
        "            for job in existing_jobs:\n",
        "                # Check if config exists and has the attribute\n",
        "                job_config = getattr(job, \"config\", None)\n",
        "                if (\n",
        "                    job_config\n",
        "                    and getattr(job_config, \"tuned_model_display_name\", None)\n",
        "                    == TUNED_MODEL_DISPLAY_NAME\n",
        "                ):\n",
        "                    print(f\"Found existing job: {job.name} with state {job.state}\")\n",
        "                    TUNING_JOB_NAME = job.name  # Use the existing job name\n",
        "                    break\n",
        "        except Exception as list_e:\n",
        "            print(f\"Could not list existing tuning jobs: {list_e}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VV8iimM8RdQ8"
      },
      "source": [
        "**Note:** Fine-tuning can take a significant amount of time (potentially 30 minutes to several hours depending on the dataset size, base model, and adapter size)."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Vs9s8uLRRdQ8"
      },
      "source": [
        "## Step 5: Monitor the Tuning Job"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "9mVFYfL2RdQ8"
      },
      "outputs": [],
      "source": [
        "TUNED_MODEL_ENDPOINT = None  # Initialize\n",
        "if TUNING_JOB_NAME:\n",
        "    print(f\"Monitoring tuning job: {TUNING_JOB_NAME}\")\n",
        "    running_states = {\n",
        "        genai_types.JobState.JOB_STATE_PENDING,\n",
        "        genai_types.JobState.JOB_STATE_RUNNING,\n",
        "    }\n",
        "\n",
        "    tuning_job = vertex_client.tunings.get(name=TUNING_JOB_NAME)\n",
        "\n",
        "    while tuning_job.state in running_states:\n",
        "        # Extract the simple state name for printing\n",
        "        current_state_name = str(tuning_job.state).split(\".\")[-1]\n",
        "        print(f\"  Current state: {current_state_name}...\")\n",
        "        time.sleep(60)  # Check every minute\n",
        "        # Poll the job status using the vertex_client\n",
        "        try:\n",
        "            tuning_job = vertex_client.tunings.get(name=TUNING_JOB_NAME)\n",
        "        except Exception as e:\n",
        "            print(\n",
        "                f\"Error polling tuning job status: {e}. Assuming job might still be running.\"\n",
        "            )\n",
        "            # Optional: Add retry logic or break after several failures\n",
        "            time.sleep(120)  # Wait longer if polling fails\n",
        "\n",
        "    final_state_name = str(tuning_job.state).split(\".\")[-1]\n",
        "    print(f\"\\nTuning job finished with state: {final_state_name}\")\n",
        "\n",
        "    if tuning_job.state == genai_types.JobState.JOB_STATE_SUCCEEDED:\n",
        "        # Check if tuned_model attribute exists and has endpoint\n",
        "        if (\n",
        "            hasattr(tuning_job, \"tuned_model\")\n",
        "            and tuning_job.tuned_model\n",
        "            and hasattr(tuning_job.tuned_model, \"endpoint\")\n",
        "        ):\n",
        "            TUNED_MODEL_ENDPOINT = tuning_job.tuned_model.endpoint\n",
        "            print(f\"Tuned model endpoint ready: {TUNED_MODEL_ENDPOINT}\")\n",
        "        else:\n",
        "            print(\n",
        "                \"Tuning job succeeded, but tuned model endpoint information is missing.\"\n",
        "            )\n",
        "            print(\"Please check the job details in the Google Cloud Console.\")\n",
        "    else:\n",
        "        print(\"Tuning job did not succeed.\")\n",
        "        # Check for error attribute before printing\n",
        "        job_error = getattr(tuning_job, \"error\", None)\n",
        "        if job_error:\n",
        "            print(f\"Error details: {job_error}\")\n",
        "        else:\n",
        "            print(\"No specific error details available.\")\n",
        "else:\n",
        "    print(\n",
        "        \"Skipping monitoring as tuning job name is not set (creation might have failed or data was empty).\"\n",
        "    )"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JeiZwXsMRdQ8"
      },
      "source": [
        "## Step 6: Evaluate Tuned Model (Qualitative)\n",
        "\n",
        "We take a few random samples from the test set (which the model did not see during tuning) and compare the tuned model's predictions to the expected outputs."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0fG27PjrRdQ8"
      },
      "outputs": [],
      "source": [
        "def evaluate_qualitatively(\n",
        "    tuned_endpoint: str, test_data: list[dict[str, Any]], num_samples: int = 3\n",
        "):\n",
        "    \"\"\"Makes predictions with the tuned model and prints comparisons.\"\"\"\n",
        "    if not tuned_endpoint:\n",
        "        print(\"Tuned model endpoint not available. Skipping evaluation.\")\n",
        "        return\n",
        "\n",
        "    if not test_data:\n",
        "        print(\"No test data available for evaluation.\")\n",
        "        return\n",
        "\n",
        "    print(f\"\\n--- Qualitative Evaluation of Tuned Model ({tuned_endpoint}) ---\")\n",
        "\n",
        "    # Select random samples from the test set\n",
        "    samples = random.sample(test_data, min(num_samples, len(test_data)))\n",
        "\n",
        "    for i, sample in enumerate(samples):\n",
        "        print(f\"\\n--- Sample {i + 1} ---\")\n",
        "        # Ensure the sample structure is correct\n",
        "        try:\n",
        "            user_prompt = sample[\"contents\"][0][\"parts\"][0][\"text\"]\n",
        "            expected_output = sample[\"contents\"][1][\"parts\"][0][\"text\"]\n",
        "        except (KeyError, IndexError, TypeError) as e:\n",
        "            print(f\"Skipping sample due to unexpected format: {e}\")\n",
        "            continue\n",
        "\n",
        "        print(f\"Input Prompt:\\n{user_prompt}\")\n",
        "        print(f\"\\nExpected Output: {expected_output}\")\n",
        "\n",
        "        try:\n",
        "            # Prepare contents for prediction (only user part)\n",
        "            prediction_contents = [{\"role\": \"user\", \"parts\": [{\"text\": user_prompt}]}]\n",
        "\n",
        "            # Use the vertex_client for predictions against the tuned endpoint\n",
        "            # Note: The 'model' argument takes the endpoint resource name string directly\n",
        "            response = vertex_client.models.generate_content(\n",
        "                model=tuned_endpoint,\n",
        "                contents=prediction_contents,\n",
        "                config={\n",
        "                    \"temperature\": 0.1,  # Low temperature for more deterministic output\n",
        "                    \"max_output_tokens\": 50,\n",
        "                },\n",
        "            )\n",
        "            # Safely access predicted text\n",
        "            predicted_output = \"(No text generated)\"\n",
        "            if response and getattr(response, \"text\", None):\n",
        "                predicted_output = response.text.strip()\n",
        "            elif response and hasattr(response, \"candidates\") and response.candidates:\n",
        "                # Handle potential multi-candidate responses if safety filters trigger, etc.\n",
        "                first_candidate = response.candidates[0]\n",
        "                # Check finish reason before accessing content\n",
        "                finish_reason = getattr(first_candidate, \"finish_reason\", None)\n",
        "                if (\n",
        "                    finish_reason == genai_types.FinishReason.STOP\n",
        "                    and getattr(first_candidate, \"content\", None)\n",
        "                    and first_candidate.content.parts\n",
        "                ):\n",
        "                    predicted_output = first_candidate.content.parts[0].text.strip()\n",
        "                else:\n",
        "                    predicted_output = f\"(Generation stopped: {finish_reason})\"\n",
        "\n",
        "            print(f\"Predicted Output: {predicted_output}\")\n",
        "\n",
        "            # Simple comparison\n",
        "            if predicted_output == expected_output:\n",
        "                print(\"Result: MATCH\")\n",
        "            else:\n",
        "                print(\"Result: MISMATCH\")\n",
        "\n",
        "        except Exception as e:\n",
        "            print(f\"ERROR during prediction for sample {i + 1}: {e}\")\n",
        "\n",
        "\n",
        "# Run qualitative evaluation (only if tuning succeeded and test data exists)\n",
        "evaluate_qualitatively(TUNED_MODEL_ENDPOINT, test_split)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jW2wMC1yRdQ8"
      },
      "source": [
        "## Step 7: Integrating Gemini for Reporting (Using Base Model)\n",
        "\n",
        "We can use a base Gemini model (accessed via Vertex AI) to summarize the fine-tuning job itself."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "WZ5N1MAQRdQ8"
      },
      "outputs": [],
      "source": [
        "def generate_tuning_summary_with_gemini(tuning_job_details: Any):\n",
        "    \"\"\"Generates a summary of the tuning job using the Gemini API.\"\"\"\n",
        "    print(\"\\n--- Generating Tuning Job Summary with Gemini ---\")\n",
        "\n",
        "    if not tuning_job_details:\n",
        "        print(\"No tuning job details provided. Skipping summary.\")\n",
        "        return\n",
        "\n",
        "    # We will use the Vertex AI client, which is already initialized.\n",
        "    model_name_for_vertex_ai = \"gemini-2.5-flash\"  # Use a standard model for reporting\n",
        "    reporting_client = None\n",
        "\n",
        "    try:\n",
        "        # This uses the high-level vertexai SDK for base model generation\n",
        "        # Import GenerativeModel from the stable vertexai.generative_models module\n",
        "        from vertexai.generative_models import GenerativeModel\n",
        "\n",
        "        reporting_client = GenerativeModel(model_name_for_vertex_ai)\n",
        "        print(f\"Using Vertex AI model ({model_name_for_vertex_ai}) for reporting.\")\n",
        "    except Exception as e:\n",
        "        print(\n",
        "            f\"Failed to initialize Vertex AI client for reporting with {model_name_for_vertex_ai}: {e}\"\n",
        "        )\n",
        "        print(\"Skipping summary generation.\")\n",
        "        return\n",
        "\n",
        "    try:\n",
        "        # Extract relevant details safely\n",
        "        job_name = getattr(tuning_job_details, \"name\", \"N/A\")\n",
        "        job_state_enum = getattr(\n",
        "            tuning_job_details, \"state\", genai_types.JobState.JOB_STATE_UNSPECIFIED\n",
        "        )  # Default to unspecified\n",
        "        job_state = str(job_state_enum).split(\".\")[-1]  # 'SUCCEEDED', 'FAILED', etc.\n",
        "        base_model = getattr(tuning_job_details, \"base_model\", \"N/A\")\n",
        "        tuned_model_obj = getattr(tuning_job_details, \"tuned_model\", None)\n",
        "        tuned_endpoint = (\n",
        "            getattr(tuned_model_obj, \"endpoint\", \"N/A\") if tuned_model_obj else \"N/A\"\n",
        "        )\n",
        "        error_obj = getattr(tuning_job_details, \"error\", None)\n",
        "        error_message = str(error_obj) if error_obj else \"None\"\n",
        "        config_obj = getattr(tuning_job_details, \"config\", None)\n",
        "        display_name = (\n",
        "            getattr(config_obj, \"tuned_model_display_name\", \"N/A\")\n",
        "            if config_obj\n",
        "            else \"N/A\"\n",
        "        )\n",
        "\n",
        "        prompt = f\"\"\"Generate a brief status report for a Gemini model fine-tuning job.\n",
        "        Job Name: {job_name}\n",
        "        Base Model: {base_model}\n",
        "        Tuned Model Display Name: {display_name}\n",
        "        Final Status: {job_state}\n",
        "        Tuned Model Endpoint: {tuned_endpoint}\n",
        "        Error (if any): {error_message}\n",
        "\n",
        "        Summarize the outcome of this tuning job in 1-2 sentences.\"\"\"\n",
        "\n",
        "        print(\"\\nSending request to Gemini...\")\n",
        "        # Use the selected reporting_client (Vertex AI based)\n",
        "        response = reporting_client.generate_content(prompt)\n",
        "\n",
        "        print(\"\\n--- Gemini Tuning Job Summary ---\")\n",
        "        # Handle potential response variations\n",
        "        response_text = \"(No text content found in response)\"\n",
        "        try:\n",
        "            # Standard access (.text raises when the response has no text parts)\n",
        "            candidates = getattr(response, \"candidates\", None)\n",
        "            if candidates and candidates[0].content and candidates[0].content.parts:\n",
        "                response_text = response.text\n",
        "            # Access through candidates (common for safety filtering etc.)\n",
        "            elif hasattr(response, \"candidates\") and response.candidates:\n",
        "                first_candidate = response.candidates[0]\n",
        "                # Check finish reason before accessing content\n",
        "                finish_reason = getattr(first_candidate, \"finish_reason\", None)\n",
        "                # Check if STOPPED or MAX_TOKENS (can still have partial content)\n",
        "                if (\n",
        "                    finish_reason\n",
        "                    in [\n",
        "                        genai_types.FinishReason.STOP,\n",
        "                        genai_types.FinishReason.MAX_TOKENS,\n",
        "                    ]\n",
        "                    and getattr(first_candidate, \"content\", None)\n",
        "                    and first_candidate.content.parts\n",
        "                ):\n",
        "                    response_text = first_candidate.content.parts[0].text\n",
        "                else:\n",
        "                    # Include finish reason if generation didn't stop normally\n",
        "                    response_text = f\"(Generation stopped: {finish_reason})\"\n",
        "        except Exception as resp_e:\n",
        "            print(f\"Error extracting text from response: {resp_e}\")\n",
        "            print(f\"Raw Response: {response}\")\n",
        "\n",
        "        print(response_text)\n",
        "        print(\"---------------------------------\")\n",
        "\n",
        "    except Exception as e:\n",
        "        print(f\"\\nERROR: Failed to generate Gemini summary: {e}\")\n",
        "        if (\n",
        "            \"permission denied\" in str(e).lower()\n",
        "            or \"consumer project\" in str(e).lower()\n",
        "        ):\n",
        "            print(\n",
        "                \"Please ensure the Vertex AI API is enabled in your project and the runtime environment has the correct permissions.\"\n",
        "            )\n",
        "        else:\n",
        "            print(\n",
        "                \"Please check your Vertex AI setup, model name, and network connection.\"\n",
        "            )\n",
        "\n",
        "\n",
        "# Get the final job details again using the vertex_client (which manages tuning)\n",
        "final_tuning_job = None\n",
        "if TUNING_JOB_NAME:\n",
        "    try:\n",
        "        # Use vertex_client to get the job status\n",
        "        final_tuning_job = vertex_client.tunings.get(name=TUNING_JOB_NAME)\n",
        "    except Exception as e:\n",
        "        print(f\"Error retrieving final tuning job details: {e}\")\n",
        "\n",
        "# Generate the summary using the Vertex Gemini client\n",
        "generate_tuning_summary_with_gemini(final_tuning_job)"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "sft_gemini_predictive_maintenance.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
