{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "8d734046",
   "metadata": {},
   "source": [
    "# Quickstart: Multimodal OpenAI Prompter with Responses API"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2bd4cab2",
   "metadata": {},
   "source": [
    "Use OpenAI's Responses API to enrich your data with multimodal content such as PDFs, images, or other documents. This quickstart shows how to use multimodal content from a Spark DataFrame by pointing `OpenAIPrompt` to columns that contain file paths or URLs."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2e7ccc28",
   "metadata": {},
   "source": [
    "## Prerequisites\n",
    "- An active Spark session (`spark`)\n",
    "- Access to an OpenAI resource with the Responses API enabled, using a multimodal model deployment\n",
    "- One or more files you want to ask questions about"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c597698f",
   "metadata": {},
   "source": [
    "## Fill in service information\n",
    "\n",
    "Next, edit the cell below to point to your service. In particular, set the `service_name`, `deployment_name`, `api_version`, and `key` variables to match your OpenAI service:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8dfb9779",
   "metadata": {},
   "outputs": [],
   "source": [
    "from synapse.ml.core.platform import find_secret\n",
    "\n",
    "# Fill in the following lines with your service information\n",
    "service_name = \"synapseml-openai-2\"\n",
    "deployment_name = \"gpt-4.1\"\n",
    "api_version = (\n",
    "    \"2025-04-01-preview\"  # Responses API is only supported in this version and later\n",
    ")\n",
    "\n",
    "key = find_secret(\n",
    "    secret_name=\"openai-api-key-2\", keyvault=\"mmlspark-build-keys\"\n",
    ")  # please replace this line with your key as a string\n",
    "\n",
    "assert key is not None and service_name is not None"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ca674477",
   "metadata": {},
   "source": [
    "## Build a multimodal dataset\n",
    "Each row pairs a natural-language question with a reference to a file. Attachments can be local paths or remote URLs (currently only HTTP(S) URLs are supported)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "59433b83",
   "metadata": {},
   "outputs": [],
   "source": [
    "qa_df = spark.createDataFrame(\n",
    "    [\n",
    "        (\n",
    "            \"What's in the image?\",\n",
    "            \"https://mmlspark.blob.core.windows.net/datasets/OCR/test2.png\",\n",
    "        ),\n",
    "        (\n",
    "            \"Summarize this document.\",\n",
    "            \"https://mmlspark.blob.core.windows.net/datasets/OCR/paper.pdf\",\n",
    "        ),\n",
    "        (\n",
    "            \"What are the related works to SynapseML?\",\n",
    "            \"https://mmlspark.blob.core.windows.net/publicwasb/Overview.md\",\n",
    "        ),\n",
    "    ]\n",
    ").toDF(\"questions\", \"urls\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d8b4e554",
   "metadata": {},
   "source": [
    "## Configure `OpenAIPrompt` for file-aware responses\n",
    "Any column declared as `path` is treated as an attachment. The prompt template automatically replaces that column's placeholder (here `{urls}`) with a notice, and the referenced file itself is delivered to the model in the message payload."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "12363a72",
   "metadata": {},
   "outputs": [],
   "source": [
    "from synapse.ml.services.openai import OpenAIPrompt\n",
    "\n",
    "prompt_template = \"{questions}: {urls}\"\n",
    "\n",
    "prompter = (\n",
    "    OpenAIPrompt()\n",
    "    .setSubscriptionKey(key)\n",
    "    .setDeploymentName(deployment_name)\n",
    "    .setCustomServiceName(service_name)\n",
    "    .setApiVersion(api_version)\n",
    "    .setApiType(\n",
    "        \"responses\"\n",
    "    )  # We need to use the Responses API, instead of Chat Completions API.\n",
    "    .setPromptTemplate(prompt_template)\n",
    "    .setColumnTypes(\n",
    "        {\"urls\": \"path\"}\n",
    "    )  # Configure the column type to 'path' for the url column\n",
    "    .setErrorCol(\"error\")\n",
    "    .setOutputCol(\"outputCol\")\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7c84a661",
   "metadata": {},
   "source": [
    "## Generate multimodal responses\n",
    "See how the model responds to each kind of file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6e73dd1e",
   "metadata": {},
   "outputs": [],
   "source": [
    "prompter.transform(qa_df).show(truncate=False)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
