{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "a77a7fb6",
   "metadata": {},
   "source": [
    "# 🚀 Gretel to Opik Integration: Creating Q&A Datasets for Model Evaluation\n",
    "\n",
    "**The Story**: You need high-quality Q&A datasets to evaluate your AI models, but creating them manually is time-consuming and expensive. This cookbook shows you how to use Gretel's synthetic data generation to create diverse, realistic Q&A datasets and import them into Opik for model evaluation and optimization.\n",
    "\n",
    "**What you'll accomplish**:\n",
    "1. Generate synthetic Q&A data using Gretel Data Designer\n",
    "2. Convert it to Opik format\n",
    "3. Import into Opik for model evaluation\n",
    "4. See your dataset in the Opik UI\n",
    "\n",
    "---\n",
    "\n",
    "## 📋 Prerequisites\n",
    "\n",
    "- **Gretel Account**: Sign up at [gretel.ai](https://gretel.ai) and get your API key\n",
    "- **Comet Account**: Sign up at [comet.com](https://comet.com) for Opik access\n",
    "\n",
    "Let's get started! 🎯\n",
    "\n",
    "## 🛠️ **Two Approaches Available**\n",
    "\n",
    "This cookbook demonstrates **two methods** for generating synthetic data with Gretel:\n",
    "\n",
    "1. **Data Designer** (recommended for custom datasets): Create datasets from scratch with precise control\n",
    "2. **Safe Synthetics** (recommended for existing data): Generate synthetic versions of existing datasets\n",
    "\n",
    "We'll start with Data Designer, then show Safe Synthetics as an alternative.\n",
    "\n",
    "## 💾 Step 1: Install Required Packages\n",
    "\n",
    "We'll install the Gretel client and Opik SDK:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "d0eb0353",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Note: you may need to restart the kernel to use updated packages.\n"
     ]
    }
   ],
   "source": [
    "%pip install gretel-client opik pandas --upgrade --quiet"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5ea077df",
   "metadata": {},
   "source": [
    "## 🔐 Step 2: Authentication Setup\n",
    "\n",
    "Let's authenticate with both Gretel and Opik:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "5f08a5db",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "OPIK: Opik is already configured. You can check the settings by viewing the config file at /home/mavrick/.opik.config\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🔐 Setting up authentication...\n",
      "✅ Authentication completed!\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "import getpass\n",
    "import opik\n",
    "import pandas as pd\n",
    "\n",
    "print(\"🔐 Setting up authentication...\")\n",
    "\n",
    "# Set up Gretel API key\n",
    "if \"GRETEL_API_KEY\" not in os.environ:\n",
    "    os.environ[\"GRETEL_API_KEY\"] = getpass.getpass(\"Enter your Gretel API key: \")\n",
    "\n",
    "# Set up Opik (will prompt for API key if not configured)\n",
    "opik.configure()\n",
    "\n",
    "print(\"✅ Authentication completed!\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9ab73b15",
   "metadata": {},
   "source": [
    "## 📊 Step 3: Generate Q&A Dataset with Gretel Data Designer\n",
    "\n",
    "Now we'll use Gretel Data Designer to generate synthetic Q&A data. We'll create questions and answers about AI and machine learning:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "c21cb086",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🤖 Setting up Q&A dataset generation with Gretel Data Designer...\n",
      "Logged in as mavrickrishi@gmail.com ✅\n",
      "Using project: default-sdk-project-23e56962f6cd2e8\n",
      "Project link: https://console.gretel.ai/proj_2yqdYGW3Ez9CKToEZQYN8VFMPnN\n",
      "📊 Generating Q&A dataset...\n",
      "[00:04:43] [INFO] 🚀 Submitting batch workflow\n",
      "▶️ Creating Workflow: w_2zy6IaaJgChytHd3dG2KyIYstxL\n",
      "▶️ Created Workflow Run: wr_2zy6Ibf5N7fbYalvzwoKNVe52vq\n",
      "🔗 Workflow Run console link: https://console.gretel.ai/workflows/w_2zy6IaaJgChytHd3dG2KyIYstxL/runs/wr_2zy6Ibf5N7fbYalvzwoKNVe52vq\n",
      "Fetching task logs for workflow run wr_2zy6Ibf5N7fbYalvzwoKNVe52vq\n",
      "Got task wt_2zy6IdJs3BBeGquO4xgJpfnrUNo\n",
      "Workflow run is now in status: RUN_STATUS_ACTIVE\n",
      "[using-samplers-to-generate-2-columns] Task Status is now: RUN_STATUS_ACTIVE\n",
      "[using-samplers-to-generate-2-columns] 2025-07-16 18:35:04.057095+00:00 Preparing step 'using-samplers-to-generate-2-columns'\n",
      "[using-samplers-to-generate-2-columns] 2025-07-16 18:35:17.319534+00:00 Starting 'generate_columns_using_samplers' task execution\n",
      "[using-samplers-to-generate-2-columns] 2025-07-16 18:35:17.320962+00:00 🎲 Using numerical samplers to generate 20 records across 2 columns\n",
      "[using-samplers-to-generate-2-columns] 2025-07-16 18:35:18.228221+00:00 Task 'generate_columns_using_samplers' executed successfully\n",
      "[using-samplers-to-generate-2-columns] 2025-07-16 18:35:18.228653+00:00 Task execution completed. Saving task outputs.\n",
      "[using-samplers-to-generate-2-columns] 2025-07-16 18:35:19.370809+00:00 Task outputs saved.\n",
      "[using-samplers-to-generate-2-columns] Task Status is now: RUN_STATUS_COMPLETED\n",
      "Got task wt_2zy6IcegtAegKuGhyqpdTjfYYvy\n",
      "[generating-text-column-question] Task Status is now: RUN_STATUS_ACTIVE\n",
      "[generating-text-column-question] 2025-07-16 18:36:17.926188+00:00 Preparing step 'generating-text-column-question'\n",
      "[generating-text-column-question] 2025-07-16 18:36:26.594959+00:00 Starting 'generate_column_from_template_v2' task execution\n",
      "[generating-text-column-question] 2025-07-16 18:36:26.595542+00:00 📝 Preparing template to generate data column `question`\n",
      "[generating-text-column-question] 2025-07-16 18:36:26.595669+00:00   |-- model_alias: ModelAlias.TEXT\n",
      "[generating-text-column-question] 2025-07-16 18:36:27.505205+00:00 Progress: 10%\n",
      "[generating-text-column-question] 2025-07-16 18:36:27.554972+00:00 Progress: 20%\n",
      "[generating-text-column-question] 2025-07-16 18:36:27.618450+00:00 Progress: 30%\n",
      "[generating-text-column-question] 2025-07-16 18:36:27.957937+00:00 Progress: 40%\n",
      "[generating-text-column-question] 2025-07-16 18:36:28.177603+00:00 Progress: 50%\n",
      "[generating-text-column-question] 2025-07-16 18:36:28.368100+00:00 Progress: 60%\n",
      "[generating-text-column-question] 2025-07-16 18:36:28.425515+00:00 Progress: 70%\n",
      "[generating-text-column-question] 2025-07-16 18:36:28.527626+00:00 Progress: 80%\n",
      "[generating-text-column-question] 2025-07-16 18:36:28.896360+00:00 Progress: 90%\n",
      "[generating-text-column-question] 2025-07-16 18:36:29.144715+00:00 Progress: 100%\n",
      "[generating-text-column-question] 2025-07-16 18:36:29.156168+00:00 Task 'generate_column_from_template_v2' executed successfully\n",
      "[generating-text-column-question] 2025-07-16 18:36:29.156629+00:00 Task execution completed. Saving task outputs.\n",
      "[generating-text-column-question] 2025-07-16 18:36:29.728629+00:00 Task outputs saved.\n",
      "[generating-text-column-question] Task Status is now: RUN_STATUS_COMPLETED\n",
      "Got task wt_2zy6IbdZXzDAjx0R3oDweaznxip\n",
      "[generating-text-column-answer] Task Status is now: RUN_STATUS_ACTIVE\n",
      "[generating-text-column-answer] 2025-07-16 18:37:25.003797+00:00 Preparing step 'generating-text-column-answer'\n",
      "[generating-text-column-answer] 2025-07-16 18:37:33.760534+00:00 Starting 'generate_column_from_template_v2' task execution\n",
      "[generating-text-column-answer] 2025-07-16 18:37:33.761094+00:00 📝 Preparing template to generate data column `answer`\n",
      "[generating-text-column-answer] 2025-07-16 18:37:33.761222+00:00   |-- model_alias: ModelAlias.TEXT\n",
      "[generating-text-column-answer] 2025-07-16 18:37:40.967406+00:00 Progress: 10%\n",
      "[generating-text-column-answer] 2025-07-16 18:37:42.880407+00:00 Progress: 20%\n",
      "[generating-text-column-answer] 2025-07-16 18:37:45.024239+00:00 Progress: 30%\n",
      "[generating-text-column-answer] 2025-07-16 18:37:46.127156+00:00 Progress: 40%\n",
      "[generating-text-column-answer] 2025-07-16 18:37:48.816314+00:00 Progress: 50%\n",
      "[generating-text-column-answer] 2025-07-16 18:37:54.539072+00:00 Progress: 60%\n",
      "[generating-text-column-answer] 2025-07-16 18:37:55.775021+00:00 Progress: 70%\n",
      "[generating-text-column-answer] 2025-07-16 18:37:57.361040+00:00 Progress: 80%\n",
      "[generating-text-column-answer] 2025-07-16 18:37:59.689579+00:00 Progress: 90%\n",
      "[generating-text-column-answer] 2025-07-16 18:38:01.701321+00:00 Progress: 100%\n",
      "[generating-text-column-answer] 2025-07-16 18:38:01.713215+00:00 Task 'generate_column_from_template_v2' executed successfully\n",
      "[generating-text-column-answer] 2025-07-16 18:38:01.713672+00:00 Task execution completed. Saving task outputs.\n",
      "[generating-text-column-answer] 2025-07-16 18:38:02.267614+00:00 Task outputs saved.\n",
      "[generating-text-column-answer] Task Status is now: RUN_STATUS_COMPLETED\n",
      "Workflow run is now in status: RUN_STATUS_COMPLETED\n",
      "✅ Generated 20 Q&A pairs!\n",
      "\n",
      "📊 Dataset shape: (20, 4)\n",
      "📋 Columns: ['topic', 'difficulty', 'question', 'answer']\n",
      "\n",
      "📄 Sample data:\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>topic</th>\n",
       "      <th>difficulty</th>\n",
       "      <th>question</th>\n",
       "      <th>answer</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>AI ethics</td>\n",
       "      <td>beginner</td>\n",
       "      <td>Should AI systems be designed to always priori...</td>\n",
       "      <td>The design of AI systems should indeed priorit...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>deep learning</td>\n",
       "      <td>beginner</td>\n",
       "      <td>How does the learning rate affect the converge...</td>\n",
       "      <td>The learning rate in a neural network determin...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>AI ethics</td>\n",
       "      <td>beginner</td>\n",
       "      <td>Should a self-driving car be programmed to pri...</td>\n",
       "      <td>The question of whether a self-driving car sho...</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "           topic difficulty  \\\n",
       "0      AI ethics   beginner   \n",
       "1  deep learning   beginner   \n",
       "2      AI ethics   beginner   \n",
       "\n",
       "                                            question  \\\n",
       "0  Should AI systems be designed to always priori...   \n",
       "1  How does the learning rate affect the converge...   \n",
       "2  Should a self-driving car be programmed to pri...   \n",
       "\n",
       "                                              answer  \n",
       "0  The design of AI systems should indeed priorit...  \n",
       "1  The learning rate in a neural network determin...  \n",
       "2  The question of whether a self-driving car sho...  "
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from gretel_client.navigator_client import Gretel  # Data Designer is exposed via the navigator_client module\n",
    "from gretel_client.data_designer import columns as C\n",
    "from gretel_client.data_designer import params as P\n",
    "\n",
    "print(\"🤖 Setting up Q&A dataset generation with Gretel Data Designer...\")\n",
    "\n",
    "# Initialize Data Designer using the navigator_client and factory method\n",
    "gretel_navigator = Gretel()  # This creates the navigator client\n",
    "dd = gretel_navigator.data_designer.new(model_suite=\"apache-2.0\")\n",
    "\n",
    "# Add topic column (categorical sampler)\n",
    "dd.add_column(\n",
    "    C.SamplerColumn(\n",
    "        name=\"topic\",\n",
    "        type=P.SamplerType.CATEGORY,\n",
    "        params=P.CategorySamplerParams(\n",
    "            values=[\n",
    "                \"neural networks\", \"deep learning\", \"machine learning\", \"NLP\", \n",
    "                \"computer vision\", \"reinforcement learning\", \"AI ethics\", \"data science\"\n",
    "            ]\n",
    "        )\n",
    "    )\n",
    ")\n",
    "\n",
    "# Add difficulty column\n",
    "dd.add_column(\n",
    "    C.SamplerColumn(\n",
    "        name=\"difficulty\",\n",
    "        type=P.SamplerType.CATEGORY,\n",
    "        params=P.CategorySamplerParams(\n",
    "            values=[\"beginner\", \"intermediate\", \"advanced\"]\n",
    "        )\n",
    "    )\n",
    ")\n",
    "\n",
    "# Add question column (LLM-generated)\n",
    "dd.add_column(\n",
    "    C.LLMTextColumn(\n",
    "        name=\"question\",\n",
    "        prompt=(\n",
    "            \"Generate a challenging, specific question about {{ topic }} \"\n",
    "            \"at {{ difficulty }} level. The question should be clear, focused, \"\n",
    "            \"and something a student or practitioner might actually ask.\"\n",
    "        )\n",
    "    )\n",
    ")\n",
    "\n",
    "# Add answer column (LLM-generated)\n",
    "dd.add_column(\n",
    "    C.LLMTextColumn(\n",
    "        name=\"answer\",\n",
    "        prompt=(\n",
    "            \"Provide a clear, accurate, and comprehensive answer to this {{ difficulty }}-level \"\n",
    "            \"question about {{ topic }}: '{{ question }}'. The answer should be educational \"\n",
    "            \"and directly address all aspects of the question.\"\n",
    "        )\n",
    "    )\n",
    ")\n",
    "\n",
    "print(\"📊 Generating Q&A dataset...\")\n",
    "\n",
    "# Generate the dataset\n",
    "workflow_run = dd.create(num_records=20, wait_until_done=True)\n",
    "synthetic_df = workflow_run.dataset.df\n",
    "\n",
    "print(f\"✅ Generated {len(synthetic_df)} Q&A pairs!\")\n",
    "print(f\"\\n📊 Dataset shape: {synthetic_df.shape}\")\n",
    "print(f\"📋 Columns: {list(synthetic_df.columns)}\")\n",
    "\n",
    "# Display first few rows\n",
    "print(\"\\n📄 Sample data:\")\n",
    "synthetic_df.head(3)"
   ]
  },
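  {
   "cell_type": "markdown",
   "id": "f1a2b3c4",
   "metadata": {},
   "source": [
    "Before converting the data, it can help to sanity-check the generated DataFrame locally. The sketch below uses plain pandas (no Gretel calls); `summarize_qa` and the `demo` frame are illustrative names, not part of either SDK. In the notebook you would call it on `synthetic_df` instead:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "def summarize_qa(df, min_answer_chars=40):\n",
    "    # Count records per (topic, difficulty) pair and report short or empty answers.\n",
    "    summary = df.groupby(['topic', 'difficulty']).size().reset_index(name='count')\n",
    "    short = df['answer'].fillna('').str.len() < min_answer_chars\n",
    "    print(f'Records: {len(df)}, short or empty answers: {int(short.sum())}')\n",
    "    return summary\n",
    "\n",
    "# Tiny stand-in frame; replace with synthetic_df in the notebook.\n",
    "demo = pd.DataFrame({\n",
    "    'topic': ['NLP', 'NLP', 'AI ethics'],\n",
    "    'difficulty': ['beginner', 'advanced', 'beginner'],\n",
    "    'question': ['q1', 'q2', 'q3'],\n",
    "    'answer': ['a' * 50, '', 'b' * 80],\n",
    "})\n",
    "summary = summarize_qa(demo)\n",
    "```"
   ]
  },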
  {
   "cell_type": "markdown",
   "id": "d4cafcad",
   "metadata": {},
   "source": [
    "## 🔄 Step 4: Convert to Opik Format\n",
    "\n",
    "Let's convert our Gretel-generated data to the format Opik expects:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "d5f79e26",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🔄 Converting to Opik format...\n",
      "✅ Converted 20 items to Opik format!\n",
      "\n",
      "📋 Sample converted item:\n",
      "{\n",
      "  \"input\": {\n",
      "    \"question\": \"Should AI systems be designed to always prioritize human safety, even if it means sacrificing other values such as privacy or efficiency?\"\n",
      "  },\n",
      "  \"expected_output\": \"The design of AI systems should indeed prioritize human safety, but it's not accurate to frame the question as an absolute choice between safety and other values like privacy or efficiency. Here's why:\\n\\n1. **Human Safety**: AI systems should be designed to minimize harm to humans. This is a fundamental principle in AI ethics, as outlined in guidelines from organizations like the European Commission and the IEEE.\\n\\n2. **Other Values**: However, other values are also important. Privacy, for example, is a fundamental human right. Efficiency is crucial for the practical deployment of AI systems.\\n\\n3. **Balance**: Instead of prioritizing one value over the others, AI systems should be designed to balance these values. This means that while safety is paramount, it should not be used as an excuse to disregard privacy or efficiency completely.\\n\\n4. **Context Matters**: The balance between these values can change depending on the context. For example, in a healthcare setting, patient safety might be prioritized over privacy, but in a non-emergency situation, privacy might be more important.\\n\\n5. **Transparency and Accountability**: AI systems should be transparent and accountable. This means that decisions made by AI systems should be understandable, and there should be clear accountability when things go wrong.\\n\\n6. **Human Oversight**: AI systems should be designed with human oversight in mind. This means that there should always be a way for a human to intervene and override the AI system if necessary.\\n\\nIn conclusion, while human safety is a critical consideration in the design of AI systems, it should not be prioritized at the expense of other values. Instead, AI systems should be designed to balance these values, with the specific balance depending on the context.\",\n",
      "  \"metadata\": {\n",
      "    \"topic\": \"AI ethics\",\n",
      "    \"difficulty\": \"beginner\",\n",
      "    \"source\": \"gretel_navigator\"\n",
      "  }\n",
      "}\n"
     ]
    }
   ],
   "source": [
    "def convert_to_opik_format(df):\n",
    "    \"\"\"Convert Gretel Q&A data to Opik dataset format\"\"\"\n",
    "    opik_items = []\n",
    "    \n",
    "    for _, row in df.iterrows():\n",
    "        # Create Opik dataset item\n",
    "        item = {\n",
    "            \"input\": {\n",
    "                \"question\": row[\"question\"]\n",
    "            },\n",
    "            \"expected_output\": row[\"answer\"],\n",
    "            \"metadata\": {\n",
    "                \"topic\": row.get(\"topic\", \"AI/ML\"),\n",
    "                \"difficulty\": row.get(\"difficulty\", \"unknown\"),\n",
    "                \"source\": \"gretel_navigator\"\n",
    "            }\n",
    "        }\n",
    "        opik_items.append(item)\n",
    "    \n",
    "    return opik_items\n",
    "\n",
    "print(\"🔄 Converting to Opik format...\")\n",
    "\n",
    "opik_data = convert_to_opik_format(synthetic_df)\n",
    "\n",
    "print(f\"✅ Converted {len(opik_data)} items to Opik format!\")\n",
    "print(\"\\n📋 Sample converted item:\")\n",
    "import json\n",
    "print(json.dumps(opik_data[0], indent=2))"
   ]
  },
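  {
   "cell_type": "markdown",
   "id": "a9b8c7d6",
   "metadata": {},
   "source": [
    "Optionally, you can persist the converted items to a JSONL file before uploading, so the exact dataset version can be re-imported or diffed later. This is a minimal standard-library sketch; `save_opik_items` and `demo_items` are illustrative names, and in the notebook you would pass `opik_data`:\n",
    "\n",
    "```python\n",
    "import json\n",
    "from pathlib import Path\n",
    "\n",
    "def save_opik_items(items, path):\n",
    "    # One JSON object per line, so versions diff cleanly.\n",
    "    with Path(path).open('w', encoding='utf-8') as f:\n",
    "        for item in items:\n",
    "            f.write(json.dumps(item, ensure_ascii=False) + '\\n')\n",
    "    return len(items)\n",
    "\n",
    "demo_items = [{\n",
    "    'input': {'question': 'What is ML?'},\n",
    "    'expected_output': 'A subset of AI.',\n",
    "    'metadata': {'topic': 'ML', 'difficulty': 'beginner', 'source': 'gretel_navigator'},\n",
    "}]\n",
    "n = save_opik_items(demo_items, 'qa_dataset.jsonl')\n",
    "print(f'Wrote {n} items')\n",
    "```"
   ]
  },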
  {
   "cell_type": "markdown",
   "id": "fc2540e7",
   "metadata": {},
   "source": [
    "## 📤 Step 5: Push Dataset to Opik\n",
    "\n",
    "Now let's upload our dataset to Opik where it can be used for model evaluation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "dd699c58",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "📤 Pushing dataset to Opik...\n",
      "HTTP Request: POST https://www.comet.com/opik/api/v1/private/datasets/retrieve \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://www.comet.com/opik/api/v1/private/datasets/items/stream \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://www.comet.com/opik/api/v1/private/datasets/items/stream \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: PUT https://www.comet.com/opik/api/v1/private/datasets/items \"HTTP/1.1 204 No Content\"\n",
      "✅ Successfully created dataset: gretel-ai-qa-dataset\n",
      "HTTP Request: POST https://www.comet.com/opik/api/v1/private/datasets/retrieve \"HTTP/1.1 200 OK\"\n",
      "🆔 Dataset ID: 0198135a-fadc-7d0d-8900-6a475c9523ad\n",
      "📊 Total items: 20\n"
     ]
    }
   ],
   "source": [
    "print(\"📤 Pushing dataset to Opik...\")\n",
    "\n",
    "# Initialize Opik client\n",
    "opik_client = opik.Opik()\n",
    "\n",
    "# Create the dataset\n",
    "dataset_name = \"gretel-ai-qa-dataset\"\n",
    "dataset = opik_client.get_or_create_dataset(\n",
    "    name=dataset_name,\n",
    "    description=\"Synthetic Q&A dataset generated using Gretel Data Designer for AI/ML evaluation\"\n",
    ")\n",
    "\n",
    "# Insert the data\n",
    "dataset.insert(opik_data)\n",
    "\n",
    "print(f\"✅ Successfully created dataset: {dataset.name}\")\n",
    "print(f\"🆔 Dataset ID: {dataset.id}\")\n",
    "print(f\"📊 Total items: {len(opik_data)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "91b6bb47",
   "metadata": {},
   "source": [
     "The dataset can now be viewed in the Opik UI:\n",
    "\n",
    "![gretel_opik_integration](https://raw.githubusercontent.com/comet-ml/opik/main/apps/opik-documentation/documentation/fern/img/cookbook/gretel_opik_integration_cookbook.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1a454a5a",
   "metadata": {},
   "source": [
    "## ✅ Step 6: Verify Your Dataset\n",
    "\n",
    "Let's confirm the dataset was created successfully and see how to use it:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "ed31ca3a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🔍 Verifying dataset creation...\n",
      "HTTP Request: POST https://www.comet.com/opik/api/v1/private/datasets/retrieve \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://www.comet.com/opik/api/v1/private/datasets/items/stream \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://www.comet.com/opik/api/v1/private/datasets/items/stream \"HTTP/1.1 200 OK\"\n",
      "✅ Dataset verified: gretel-ai-qa-dataset\n",
      "HTTP Request: POST https://www.comet.com/opik/api/v1/private/datasets/retrieve \"HTTP/1.1 200 OK\"\n",
      "🆔 Dataset ID: 0198135a-fadc-7d0d-8900-6a475c9523ad\n",
      "\n",
      "🎯 Next steps:\n",
      "1. Go to https://www.comet.com\n",
      "2. Navigate to Opik → Datasets\n",
      "3. Find your dataset: gretel-ai-qa-dataset\n",
      "4. Use it to evaluate your AI models!\n"
     ]
    }
   ],
   "source": [
    "print(\"🔍 Verifying dataset creation...\")\n",
    "\n",
    "# Try to retrieve the dataset\n",
    "try:\n",
    "    retrieved_dataset = opik_client.get_dataset(dataset_name)\n",
    "    print(f\"✅ Dataset verified: {retrieved_dataset.name}\")\n",
    "    print(f\"🆔 Dataset ID: {retrieved_dataset.id}\")\n",
    "    \n",
    "    print(f\"\\n🎯 Next steps:\")\n",
    "    print(f\"1. Go to https://www.comet.com\")\n",
    "    print(f\"2. Navigate to Opik → Datasets\")\n",
    "    print(f\"3. Find your dataset: {dataset_name}\")\n",
    "    print(f\"4. Use it to evaluate your AI models!\")\n",
    "    \n",
    "except Exception as e:\n",
    "    print(f\"❌ Could not verify dataset: {e}\")\n",
    "    print(\"Please check your Opik configuration and try again.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "36aab01e",
   "metadata": {},
   "source": [
    "## 🧪 Step 7: Example Model Evaluation\n",
    "\n",
     "Here's a simple example model that you can wire up to your new dataset for evaluation in Opik:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "18b420ab",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🧪 Example model evaluation setup:\n",
      "Dataset: gretel-ai-qa-dataset\n",
      "Model: simple_qa_model (replace with your actual model)\n",
      "\n",
       "💡 To run a full evaluation, pass this dataset and a task function to Opik's evaluate() API.\n",
      "\n",
      "🎉 Integration complete! Your Gretel-generated dataset is ready for model evaluation in Opik.\n"
     ]
    }
   ],
   "source": [
    "# Example: Simple Q&A model evaluation\n",
    "@opik.track\n",
    "def simple_qa_model(input_data):\n",
    "    \"\"\"A simple example model that generates responses to questions\"\"\"\n",
    "    question = input_data.get('question', '')\n",
    "    \n",
    "    # This is just an example - replace with your actual model\n",
    "    if 'neural network' in question.lower():\n",
    "        return \"A neural network is a computational model inspired by biological neural networks.\"\n",
    "    elif 'machine learning' in question.lower():\n",
    "        return \"Machine learning is a subset of AI that enables systems to learn from data.\"\n",
    "    else:\n",
    "        return \"This is a complex AI/ML topic that requires detailed explanation.\"\n",
    "\n",
    "print(\"🧪 Example model evaluation setup:\")\n",
    "print(f\"Dataset: {dataset_name}\")\n",
    "print(\"Model: simple_qa_model (replace with your actual model)\")\n",
     "print(\"\\n💡 To run a full evaluation, pass this dataset and a task function to Opik's evaluate() API.\")\n",
    "print(\"\\n🎉 Integration complete! Your Gretel-generated dataset is ready for model evaluation in Opik.\")"
   ]
  },
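  {
   "cell_type": "markdown",
   "id": "e5f6a7b8",
   "metadata": {},
   "source": [
    "To turn the toy model above into a real evaluation run, Opik's `evaluate()` loop can iterate over the dataset and score each answer. The sketch below keeps the actual call commented out so it makes no network requests; the `evaluate` and `LevenshteinRatio` imports come from the Opik SDK (check your installed version, since signatures can differ), and `evaluation_task` is an illustrative name:\n",
    "\n",
    "```python\n",
    "def evaluation_task(item):\n",
    "    # Each dataset item mirrors the schema from Step 4: input / expected_output / metadata.\n",
    "    question = item['input']['question']\n",
    "    # Replace this heuristic with a call to your real model.\n",
    "    answer = 'This is a complex AI/ML topic that requires detailed explanation.'\n",
    "    if 'machine learning' in question.lower():\n",
    "        answer = 'Machine learning is a subset of AI that enables systems to learn from data.'\n",
    "    return {'output': answer}\n",
    "\n",
    "# from opik.evaluation import evaluate\n",
    "# from opik.evaluation.metrics import LevenshteinRatio\n",
    "#\n",
    "# results = evaluate(\n",
    "#     dataset=dataset,                      # the Opik dataset from Step 5\n",
    "#     task=evaluation_task,\n",
    "#     scoring_metrics=[LevenshteinRatio()],\n",
    "#     experiment_name='gretel-qa-baseline',\n",
    "# )\n",
    "\n",
    "print(evaluation_task({'input': {'question': 'What is machine learning?'}})['output'])\n",
    "```"
   ]
  },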
  {
   "cell_type": "markdown",
   "id": "85491d4c",
   "metadata": {},
   "source": [
    "**Congratulations!** 🎉 You've successfully:\n",
    "\n",
    "1. **Generated synthetic Q&A data** using Gretel Data Designer's advanced column types\n",
    "2. **Converted the data** to Opik's expected format\n",
    "3. **Created a dataset** in Opik for model evaluation\n",
    "4. **Set up the foundation** for AI model testing and optimization\n",
    "\n",
     "The key advantage of using Gretel Data Designer is its modular approach: you can define exactly what data you want using samplers (for categories) and LLM columns (for generated text), giving you precise control over your synthetic dataset.\n",
    "\n",
    "---\n",
    "\n",
    "## 🔗 Next Steps\n",
    "\n",
    "- **View your dataset**: Go to your Comet workspace → Opik → Datasets\n",
    "- **Evaluate models**: Use the dataset to test your Q&A models\n",
    "- **Optimize prompts**: Use Opik's Agent Optimizer with your synthetic data\n",
    "- **Scale up**: Generate larger datasets for more comprehensive testing\n",
    "\n",
    "## 📚 Resources\n",
    "\n",
    "- [Gretel Documentation](https://docs.gretel.ai/)\n",
    "- [Opik Documentation](https://www.comet.com/docs/opik/)\n",
    "- [Gretel Data Designer Guide](https://docs.gretel.ai/create-synthetic-data/gretel-data-designer/)\n",
    "\n",
    "**Happy evaluating!** 🚀\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2d59c633",
   "metadata": {},
   "source": [
    "# 🔄 Alternative: Using Gretel Safe Synthetics\n",
    "\n",
    "If you have an existing Q&A dataset and want to create a synthetic version, you can use **Gretel Safe Synthetics** instead:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "d393201c",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture\n",
    "%pip install -U gretel-client"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bec6553f",
   "metadata": {},
   "source": [
    "## Step A: Prepare Sample Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "16393302",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Found cached Gretel credentials\n",
      "Logged in as mavrickrishi@gmail.com ✅\n",
      "Using project: default-sdk-project-23e56962f6cd2e8\n",
      "Project link: https://console.gretel.ai/proj_2yqdYGW3Ez9CKToEZQYN8VFMPnN\n",
      "📄 Original dataset: 250 records\n",
      "                                    question  \\\n",
      "0                  What is machine learning?   \n",
      "1               How do neural networks work?   \n",
      "2  What is the difference between AI and ML?   \n",
      "3             Explain deep learning concepts   \n",
      "4          What are the applications of NLP?   \n",
      "\n",
      "                                              answer            topic  \\\n",
      "0  Machine learning is a subset of AI that enable...               ML   \n",
      "1  Neural networks are computational models inspi...  Neural Networks   \n",
      "2  AI is the broader concept while ML is a specif...            AI/ML   \n",
      "3  Deep learning uses multi-layer neural networks...    Deep Learning   \n",
      "4  NLP applications include chatbots, translation...              NLP   \n",
      "\n",
      "     difficulty  \n",
      "0      beginner  \n",
      "1  intermediate  \n",
      "2      beginner  \n",
      "3      advanced  \n",
      "4  intermediate  \n"
     ]
    }
   ],
   "source": [
    "import pandas as pd\n",
    "from gretel_client.navigator_client import Gretel\n",
    "\n",
    "# Initialize Gretel client\n",
    "gretel = Gretel(api_key=\"prompt\")\n",
    "\n",
    "# Option 1: Use Gretel's sample ecommerce dataset (has 200+ records)\n",
    "my_data_source = \"https://gretel-datasets.s3.us-west-2.amazonaws.com/ecommerce_customers.csv\"\n",
    "\n",
    "# Option 2: Create your own Q&A dataset (needs 200+ records for holdout)\n",
    "# For demonstration, we'll create a larger dataset\n",
    "sample_questions = [\n",
    "    'What is machine learning?',\n",
    "    'How do neural networks work?',\n",
    "    'What is the difference between AI and ML?',\n",
    "    'Explain deep learning concepts',\n",
    "    'What are the applications of NLP?'\n",
    "] * 50  # Repeat to get 250 records\n",
    "\n",
    "sample_answers = [\n",
    "    'Machine learning is a subset of AI that enables systems to learn from data.',\n",
    "    'Neural networks are computational models inspired by biological neural networks.',\n",
    "    'AI is the broader concept while ML is a specific approach to achieve AI.',\n",
    "    'Deep learning uses multi-layer neural networks to model complex patterns.',\n",
    "    'NLP applications include chatbots, translation, sentiment analysis, and text generation.'\n",
    "] * 50  # Repeat to get 250 records\n",
    "\n",
    "sample_data = {\n",
    "    'question': sample_questions,\n",
    "    'answer': sample_answers,\n",
    "    'topic': (['ML', 'Neural Networks', 'AI/ML', 'Deep Learning', 'NLP'] * 50),\n",
    "    'difficulty': (['beginner', 'intermediate', 'beginner', 'advanced', 'intermediate'] * 50)\n",
    "}\n",
    "\n",
    "original_df = pd.DataFrame(sample_data)\n",
    "print(f\"📄 Original dataset: {len(original_df)} records\")\n",
    "print(original_df.head())\n",
    "\n",
    "# Important: Gretel requires at least 200 records to use holdout\n",
    "if len(original_df) < 200:\n",
     "    print(\"⚠️ Warning: Dataset has fewer than 200 records. Holdout will be disabled.\")"
   ]
  },
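  {
   "cell_type": "markdown",
   "id": "c3d4e5f6",
   "metadata": {},
   "source": [
    "Note that the demo data above repeats five rows fifty times, which is fine for a walkthrough but weak training input for Safe Synthetics: the model learns more from diverse records than from exact copies. A quick duplicate check is worth running on real data before training; this sketch uses plain pandas, `duplicate_report` is an illustrative helper name, and in the notebook you would pass `original_df`:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "def duplicate_report(df):\n",
    "    # Count exact duplicate rows and the fraction they make up.\n",
    "    dups = int(df.duplicated().sum())\n",
    "    return dups, dups / len(df)\n",
    "\n",
    "# Tiny stand-in; replace with original_df in the notebook.\n",
    "demo_df = pd.DataFrame({'question': ['q1', 'q2', 'q1'], 'answer': ['a1', 'a2', 'a1']})\n",
    "dups, frac = duplicate_report(demo_df)\n",
    "print(f'{dups} duplicate rows ({frac:.0%})')\n",
    "```"
   ]
  },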
  {
   "cell_type": "markdown",
   "id": "087c6458",
   "metadata": {},
   "source": [
    "## Step B: Generate Synthetic Version"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "93b75264",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Configuring generator for data source: DataFrame (250, 4)\n",
      "Configuring synthetic data generation model: tabular_ft/default\n",
      "▶️ Creating Workflow: w_2zy6n4dFIDX4rrumaRrkAs83vkK\n",
      "▶️ Created Workflow Run: wr_2zy6n9edEFTMeq6dI5dztpQj6TE\n",
      "🔗 Workflow Run console link: https://console.gretel.ai/workflows/w_2zy6n4dFIDX4rrumaRrkAs83vkK/runs/wr_2zy6n9edEFTMeq6dI5dztpQj6TE\n",
      "Fetching task logs for workflow run wr_2zy6n9edEFTMeq6dI5dztpQj6TE\n",
      "Got task wt_2zy6n5psdskpY9SMCSmwf8TVdky\n",
      "Workflow run is now in status: RUN_STATUS_ACTIVE\n",
      "[read-data-source] Task Status is now: RUN_STATUS_ACTIVE\n",
      "[read-data-source] 2025-07-16 18:39:07.251095+00:00 Preparing step 'read-data-source'\n",
      "[read-data-source] 2025-07-16 18:39:14.903445+00:00 Starting 'data_source' task execution\n",
      "[read-data-source] 2025-07-16 18:39:16.868988+00:00 Task 'data_source' executed successfully\n",
      "[read-data-source] 2025-07-16 18:39:16.869498+00:00 Task execution completed. Saving task outputs.\n",
      "[read-data-source] 2025-07-16 18:39:17.457615+00:00 Task outputs saved.\n",
      "[read-data-source] Task Status is now: RUN_STATUS_COMPLETED\n",
      "Got task wt_2zy6nCGVvCQsWRveoPmKfYkcynQ\n",
      "[tabular-ft] Task Status is now: RUN_STATUS_ACTIVE\n",
      "[tabular-ft] 2025-07-16 18:40:14.483432+00:00 Preparing step 'tabular-ft'\n",
      "[tabular-ft] 2025-07-16 18:40:27.331113+00:00 Starting 'tabular_ft' task execution\n",
      "[tabular-ft] 2025-07-16 18:40:44.946501+00:00 Analyzing input data and checking for auto-params...\n",
      "[tabular-ft] 2025-07-16 18:40:44.955173+00:00 Parameter `rope_scaling_factor` was automatically set to 1 based on an estimated token count given the lengths of each training record and the column names.\n",
      "[tabular-ft] 2025-07-16 18:40:44.955444+00:00 Found 3 auto-params that were set based on input data. - num_input_records_to_sample: 25000, use_unsloth: True, rope_scaling_factor: 1\n",
      "[tabular-ft] 2025-07-16 18:40:44.955508+00:00 Using updated model config: \n",
      "{\n",
      "    \"group_training_examples_by\": null,\n",
      "    \"order_training_examples_by\": null,\n",
      "    \"max_sequences_per_example\": null,\n",
      "    \"params\": {\n",
      "        \"num_input_records_to_sample\": 25000,\n",
      "        \"batch_size\": 1,\n",
      "        \"gradient_accumulation_steps\": 8,\n",
      "        \"weight_decay\": 0.01,\n",
      "        \"warmup_ratio\": 0.05,\n",
      "        \"lr_scheduler\": \"cosine\",\n",
      "        \"learning_rate\": 0.0005,\n",
      "        \"lora_r\": 32,\n",
      "        \"lora_alpha_over_r\": 1.0,\n",
      "        \"lora_target_modules\": [\n",
      "            \"q_proj\",\n",
      "            \"k_proj\",\n",
      "            \"v_proj\",\n",
      "            \"o_proj\"\n",
      "        ],\n",
      "        \"use_unsloth\": true,\n",
      "        \"rope_scaling_factor\": 1,\n",
      "        \"validation_ratio\": 0.0,\n",
      "        \"validation_steps\": 15\n",
      "    },\n",
      "    \"privacy_params\": {\n",
      "        \"dp\": false,\n",
      "        \"epsilon\": 8.0,\n",
      "        \"delta\": \"auto\",\n",
      "        \"per_sample_max_grad_norm\": 1.0\n",
      "    },\n",
      "    \"data_config\": null\n",
      "}\n",
      "[tabular-ft] 2025-07-16 18:40:44.956240+00:00 << 🧭 Tabular FT >> Preparing for training\n",
      "[tabular-ft] 2025-07-16 18:40:52.747593+00:00 << 🧭 Tabular FT >> Tokenizing records\n",
      "[tabular-ft] 2025-07-16 18:40:52.811122+00:00 << 🧭 Tabular FT >>  Number of unique train records: 250\n",
      "[tabular-ft] 2025-07-16 18:40:52.811269+00:00 << 🧭 Tabular FT >> Assembling examples from 10000.0% of the input records\n",
      "[tabular-ft] 2025-07-16 18:41:22.537101+00:00 << 🧭 Tabular FT >> Training Example Statistics:\n",
      "[tabular-ft] 2025-07-16 18:41:22.538063+00:00 \n",
      "╒════════╤═════════════════════╤══════════════════════╤═══════════════════════╕\n",
      "│        │   Tokens per record │   Tokens per example │   Records per example │\n",
      "╞════════╪═════════════════════╪══════════════════════╪═══════════════════════╡\n",
      "│ min    │                  40 │                 1506 │                    34 │\n",
      "├────────┼─────────────────────┼──────────────────────┼───────────────────────┤\n",
      "│ max    │                  48 │                 2048 │                    48 │\n",
      "├────────┼─────────────────────┼──────────────────────┼───────────────────────┤\n",
      "│ mean   │                42.8 │              2026.39 │                46.294 │\n",
      "├────────┼─────────────────────┼──────────────────────┼───────────────────────┤\n",
      "│ stddev │               3.194 │               25.984 │                 0.773 │\n",
      "╘════════╧═════════════════════╧══════════════════════╧═══════════════════════╛\n",
      "[tabular-ft] 2025-07-16 18:41:22.538177+00:00 << 🧭 Tabular FT >> Number of training examples: 540\n",
      "[tabular-ft] 2025-07-16 18:41:22.540940+00:00 << 🧭 Tabular FT >> Training model\n",
      "[tabular-ft] 2025-07-16 18:41:25.649897+00:00 << 🧭 Tabular FT >> Using PEFT - 9.01 million parameters are trainable\n",
      "[tabular-ft] 2025-07-16 18:41:37.142365+00:00 Training 1.5% complete - step: 8, epoch: 0.014814814814814815, loss: 0.2102\n",
      "[tabular-ft] 2025-07-16 18:41:39.460696+00:00 Training 3.0% complete - step: 16, epoch: 0.02962962962962963, loss: 0.2059\n",
      "[tabular-ft] 2025-07-16 18:41:41.382041+00:00 Training 4.5% complete - step: 24, epoch: 0.044444444444444446, loss: 0.1598\n",
      "[tabular-ft] 2025-07-16 18:41:43.308712+00:00 Training 6.0% complete - step: 32, epoch: 0.05925925925925926, loss: 0.1237\n",
      "[tabular-ft] 2025-07-16 18:41:49.234040+00:00 Training 7.5% complete - step: 40, epoch: 0.07407407407407407, loss: 0.0953\n",
      "[tabular-ft] 2025-07-16 18:41:51.158040+00:00 Training 9.0% complete - step: 48, epoch: 0.08888888888888889, loss: 0.0658\n",
      "[tabular-ft] 2025-07-16 18:41:53.082008+00:00 Training 10.4% complete - step: 56, epoch: 0.1037037037037037, loss: 0.0631\n",
      "[tabular-ft] 2025-07-16 18:41:55.001302+00:00 Training 11.9% complete - step: 64, epoch: 0.11851851851851852, loss: 0.0566\n",
      "[tabular-ft] 2025-07-16 18:41:56.929188+00:00 Training 13.4% complete - step: 72, epoch: 0.13333333333333333, loss: 0.0542\n",
      "[tabular-ft] 2025-07-16 18:41:58.851208+00:00 Training 14.9% complete - step: 80, epoch: 0.14814814814814814, loss: 0.0501\n",
      "[tabular-ft] 2025-07-16 18:42:00.773665+00:00 Training 16.4% complete - step: 88, epoch: 0.16296296296296298, loss: 0.047\n",
      "[tabular-ft] 2025-07-16 18:42:02.696935+00:00 Training 17.9% complete - step: 96, epoch: 0.17777777777777778, loss: 0.0442\n",
      "[tabular-ft] 2025-07-16 18:42:05.576126+00:00 Training 19.4% complete - step: 104, epoch: 0.1925925925925926, loss: 0.0447\n",
      "[tabular-ft] 2025-07-16 18:42:07.501836+00:00 Training 20.9% complete - step: 112, epoch: 0.2074074074074074, loss: 0.0427\n",
      "[tabular-ft] 2025-07-16 18:42:09.425727+00:00 Training 22.4% complete - step: 120, epoch: 0.2222222222222222, loss: 0.0429\n",
      "[tabular-ft] 2025-07-16 18:42:11.346379+00:00 Training 23.9% complete - step: 128, epoch: 0.23703703703703705, loss: 0.0427\n",
      "[tabular-ft] 2025-07-16 18:42:13.265457+00:00 Training 25.4% complete - step: 136, epoch: 0.2518518518518518, loss: 0.0412\n",
      "[tabular-ft] 2025-07-16 18:42:15.185425+00:00 Training 26.9% complete - step: 144, epoch: 0.26666666666666666, loss: 0.044\n",
      "[tabular-ft] 2025-07-16 18:42:17.109398+00:00 Training 28.4% complete - step: 152, epoch: 0.2814814814814815, loss: 0.0422\n",
      "[tabular-ft] 2025-07-16 18:42:19.035950+00:00 Training 29.9% complete - step: 160, epoch: 0.2962962962962963, loss: 0.0426\n",
      "[tabular-ft] 2025-07-16 18:42:20.957176+00:00 Training 31.3% complete - step: 168, epoch: 0.3111111111111111, loss: 0.0426\n",
      "[tabular-ft] 2025-07-16 18:42:22.876528+00:00 Training 32.8% complete - step: 176, epoch: 0.32592592592592595, loss: 0.0411\n",
      "[tabular-ft] 2025-07-16 18:42:24.796702+00:00 Training 34.3% complete - step: 184, epoch: 0.34074074074074073, loss: 0.0416\n",
      "[tabular-ft] 2025-07-16 18:42:26.723536+00:00 Training 35.8% complete - step: 192, epoch: 0.35555555555555557, loss: 0.0417\n",
      "[tabular-ft] 2025-07-16 18:42:28.648784+00:00 Training 37.3% complete - step: 200, epoch: 0.37037037037037035, loss: 0.0406\n",
      "[tabular-ft] 2025-07-16 18:42:30.569195+00:00 Training 38.8% complete - step: 208, epoch: 0.3851851851851852, loss: 0.0401\n",
      "[tabular-ft] 2025-07-16 18:42:32.488834+00:00 Training 40.3% complete - step: 216, epoch: 0.4, loss: 0.0401\n",
      "[tabular-ft] 2025-07-16 18:42:34.414845+00:00 Training 41.8% complete - step: 224, epoch: 0.4148148148148148, loss: 0.0413\n",
      "[tabular-ft] 2025-07-16 18:42:36.337585+00:00 Training 43.3% complete - step: 232, epoch: 0.42962962962962964, loss: 0.0417\n",
      "[tabular-ft] 2025-07-16 18:42:38.263134+00:00 Training 44.8% complete - step: 240, epoch: 0.4444444444444444, loss: 0.0411\n",
      "[tabular-ft] 2025-07-16 18:42:40.184947+00:00 Training 46.3% complete - step: 248, epoch: 0.45925925925925926, loss: 0.0439\n",
      "[tabular-ft] 2025-07-16 18:42:42.103858+00:00 Training 47.8% complete - step: 256, epoch: 0.4740740740740741, loss: 0.0417\n",
      "[tabular-ft] 2025-07-16 18:42:44.026152+00:00 Training 49.3% complete - step: 264, epoch: 0.4888888888888889, loss: 0.0403\n",
      "[tabular-ft] 2025-07-16 18:42:45.946277+00:00 Training 50.7% complete - step: 272, epoch: 0.5037037037037037, loss: 0.0618\n",
      "[tabular-ft] 2025-07-16 18:42:47.874356+00:00 Training 52.2% complete - step: 280, epoch: 0.5185185185185185, loss: 0.0404\n",
      "[tabular-ft] 2025-07-16 18:42:49.800615+00:00 Training 53.7% complete - step: 288, epoch: 0.5333333333333333, loss: 0.0396\n",
      "[tabular-ft] 2025-07-16 18:42:51.718092+00:00 Training 55.2% complete - step: 296, epoch: 0.5481481481481482, loss: 0.0408\n",
      "[tabular-ft] 2025-07-16 18:42:53.638592+00:00 Training 56.7% complete - step: 304, epoch: 0.562962962962963, loss: 0.0406\n",
      "[tabular-ft] 2025-07-16 18:42:55.565703+00:00 Training 58.2% complete - step: 312, epoch: 0.5777777777777777, loss: 0.04\n",
      "[tabular-ft] 2025-07-16 18:42:57.493391+00:00 Training 59.7% complete - step: 320, epoch: 0.5925925925925926, loss: 0.0395\n",
      "[tabular-ft] 2025-07-16 18:42:59.415897+00:00 Training 61.2% complete - step: 328, epoch: 0.6074074074074074, loss: 0.0395\n",
      "[tabular-ft] 2025-07-16 18:43:01.340530+00:00 Training 62.7% complete - step: 336, epoch: 0.6222222222222222, loss: 0.0395\n",
      "[tabular-ft] 2025-07-16 18:43:03.264175+00:00 Training 64.2% complete - step: 344, epoch: 0.6370370370370371, loss: 0.0393\n",
      "[tabular-ft] 2025-07-16 18:43:05.185413+00:00 Training 65.7% complete - step: 352, epoch: 0.6518518518518519, loss: 0.0396\n",
      "[tabular-ft] 2025-07-16 18:43:07.107621+00:00 Training 67.2% complete - step: 360, epoch: 0.6666666666666666, loss: 0.0394\n",
      "[tabular-ft] 2025-07-16 18:43:09.034055+00:00 Training 68.7% complete - step: 368, epoch: 0.6814814814814815, loss: 0.039\n",
      "[tabular-ft] 2025-07-16 18:43:10.955341+00:00 Training 70.1% complete - step: 376, epoch: 0.6962962962962963, loss: 0.0393\n",
      "[tabular-ft] 2025-07-16 18:43:12.877194+00:00 Training 71.6% complete - step: 384, epoch: 0.7111111111111111, loss: 0.0398\n",
      "[tabular-ft] 2025-07-16 18:43:14.797808+00:00 Training 73.1% complete - step: 392, epoch: 0.725925925925926, loss: 0.0394\n",
      "[tabular-ft] 2025-07-16 18:43:16.725915+00:00 Training 74.6% complete - step: 400, epoch: 0.7407407407407407, loss: 0.0392\n",
      "[tabular-ft] 2025-07-16 18:43:18.648625+00:00 Training 76.1% complete - step: 408, epoch: 0.7555555555555555, loss: 0.0393\n",
      "[tabular-ft] 2025-07-16 18:43:20.571952+00:00 Training 77.6% complete - step: 416, epoch: 0.7703703703703704, loss: 0.0389\n",
      "[tabular-ft] 2025-07-16 18:43:22.491785+00:00 Training 79.1% complete - step: 424, epoch: 0.7851851851851852, loss: 0.0394\n",
      "[tabular-ft] 2025-07-16 18:43:24.411916+00:00 Training 80.6% complete - step: 432, epoch: 0.8, loss: 0.0388\n",
      "[tabular-ft] 2025-07-16 18:43:26.336009+00:00 Training 82.1% complete - step: 440, epoch: 0.8148148148148148, loss: 0.0392\n",
      "[tabular-ft] 2025-07-16 18:43:28.259961+00:00 Training 83.6% complete - step: 448, epoch: 0.8296296296296296, loss: 0.0389\n",
      "[tabular-ft] 2025-07-16 18:43:30.182106+00:00 Training 85.1% complete - step: 456, epoch: 0.8444444444444444, loss: 0.0385\n",
      "[tabular-ft] 2025-07-16 18:43:32.102788+00:00 Training 86.6% complete - step: 464, epoch: 0.8592592592592593, loss: 0.0389\n",
      "[tabular-ft] 2025-07-16 18:43:34.028009+00:00 Training 88.1% complete - step: 472, epoch: 0.8740740740740741, loss: 0.0398\n",
      "[tabular-ft] 2025-07-16 18:43:35.950694+00:00 Training 89.6% complete - step: 480, epoch: 0.8888888888888888, loss: 0.0394\n",
      "[tabular-ft] 2025-07-16 18:43:37.874978+00:00 Training 91.0% complete - step: 488, epoch: 0.9037037037037037, loss: 0.0389\n",
      "[tabular-ft] 2025-07-16 18:43:39.799795+00:00 Training 92.5% complete - step: 496, epoch: 0.9185185185185185, loss: 0.0392\n",
      "[tabular-ft] 2025-07-16 18:43:41.721538+00:00 Training 94.0% complete - step: 504, epoch: 0.9333333333333333, loss: 0.039\n",
      "[tabular-ft] 2025-07-16 18:43:43.642540+00:00 Training 95.5% complete - step: 512, epoch: 0.9481481481481482, loss: 0.0392\n",
      "[tabular-ft] 2025-07-16 18:43:45.570740+00:00 Training 97.0% complete - step: 520, epoch: 0.9629629629629629, loss: 0.0392\n",
      "[tabular-ft] 2025-07-16 18:43:47.496702+00:00 Training 98.5% complete - step: 528, epoch: 0.9777777777777777, loss: 0.0388\n",
      "[tabular-ft] 2025-07-16 18:43:49.427192+00:00 Training 100.0% complete - step: 536, epoch: 0.9925925925925926, loss: 0.0393\n",
      "[tabular-ft] 2025-07-16 18:43:49.950696+00:00 << 🧭 Tabular FT >> Saving LoRA adapter\n",
      "[tabular-ft] 2025-07-16 18:43:50.585950+00:00 🙌 Training Completed\n",
      "[tabular-ft] 2025-07-16 18:43:51.001909+00:00 Task 'train_tabular_ft' executed successfully\n",
      "[tabular-ft] 2025-07-16 18:43:51.002769+00:00 << 🧭 Tabular FT >> 👀 Heads up -> Generation stopping is enabled:\n",
      "patience: 3\n",
      "invalid_fraction_threshold: 0.8\n",
      "[tabular-ft] 2025-07-16 18:43:51.002861+00:00 << 🧭 Tabular FT >> Generating 5 records\n",
      "[tabular-ft] 2025-07-16 18:45:44.537827+00:00 Batch Generation Summary 👇\n",
      "----------------------------------------------------------------------------------------------------\n",
      "🦜 Number of prompts submitted: 100\n",
      "👍 Number of valid records generated: 3975\n",
      "👎 Number of invalid records generated: 592\n",
      "📊 Percentage of records that are valid: 87.04%\n",
      "🔬 Top invalid record errors:\n",
      "╒════════════════════════╤══════════════╕\n",
      "│ Error Category         │   Percentage │\n",
      "╞════════════════════════╪══════════════╡\n",
      "│ Invalid field value    │        69.8% │\n",
      "├────────────────────────┼──────────────┤\n",
      "│ Missing required field │        27.9% │\n",
      "├────────────────────────┼──────────────┤\n",
      "│ Invalid JSON           │         2.4% │\n",
      "╘════════════════════════╧══════════════╛\n",
      "----------------------------------------------------------------------------------------------------\n",
      "[tabular-ft] 2025-07-16 18:45:44.538023+00:00 Batch timing and progress 👇\n",
      "----------------------------------------------------------------------------------------------------\n",
      "⏱️ Generation time: 52.5 seconds\n",
      "🐇 Generation speed: 75.7 records per second.\n",
      "⌛️ Progress: 3975 out of 5 records generated.\n",
      "----------------------------------------------------------------------------------------------------\n",
      "[tabular-ft] 2025-07-16 18:45:44.538115+00:00 🎉 Generation complete 🎉\n",
      "[tabular-ft] 2025-07-16 18:45:45.899212+00:00 🙌 Successfully generated 5 records\n",
      "[tabular-ft] 2025-07-16 18:45:45.901529+00:00 Task 'generate_from_tabular_ft' executed successfully\n",
      "[tabular-ft] 2025-07-16 18:45:45.920233+00:00 Task 'tabular_ft' executed successfully\n",
      "[tabular-ft] 2025-07-16 18:45:45.920692+00:00 Task execution completed. Saving task outputs.\n",
      "[tabular-ft] 2025-07-16 18:45:46.460818+00:00 Task outputs saved.\n",
      "[tabular-ft] Task Status is now: RUN_STATUS_COMPLETED\n",
      "Got task wt_2zy6nC3liVHrpDWCrkJWJV6AccA\n",
      "[evaluate-safe-synthetics-dataset] Task Status is now: RUN_STATUS_ACTIVE\n",
      "[evaluate-safe-synthetics-dataset] 2025-07-16 18:46:31.742408+00:00 Preparing step 'evaluate-safe-synthetics-dataset'\n",
      "[evaluate-safe-synthetics-dataset] 2025-07-16 18:46:45.086367+00:00 Starting 'evaluate_safe_synthetics_dataset' task execution\n",
      "[evaluate-safe-synthetics-dataset] 2025-07-16 18:46:59.621346+00:00 LLM column classification took 1.148496109000007 seconds.\n",
      "[evaluate-safe-synthetics-dataset] 2025-07-16 18:47:06.046539+00:00 Task 'evaluate_safe_synthetics_dataset' executed successfully\n",
      "[evaluate-safe-synthetics-dataset] 2025-07-16 18:47:06.047265+00:00 Task execution completed. Saving task outputs.\n",
      "[evaluate-safe-synthetics-dataset] 2025-07-16 18:47:06.630089+00:00 Task outputs saved.\n",
      "[evaluate-safe-synthetics-dataset] Task Status is now: RUN_STATUS_COMPLETED\n",
      "Workflow run is now in status: RUN_STATUS_COMPLETED\n",
      "✅ Generated 5 synthetic Q&A pairs using Safe Synthetics!\n",
      "                                    question  \\\n",
      "0                  What is machine learning?   \n",
      "1             Explain deep learning concepts   \n",
      "2               How do neural networks work?   \n",
      "3             Explain deep learning concepts   \n",
      "4  What is the difference between AI and ML?   \n",
      "\n",
      "                                              answer            topic  \\\n",
      "0  Machine learning is a subset of AI that enable...               ML   \n",
      "1  Deep learning uses multi-layer neural networks...    Deep Learning   \n",
      "2  Neural networks are computational models inspi...  Neural Networks   \n",
      "3  Deep learning uses multi-layer neural networks...    Deep Learning   \n",
      "4  AI is the broader concept while ML is a specif...            AI/ML   \n",
      "\n",
      "     difficulty  \n",
      "0      beginner  \n",
      "1      advanced  \n",
      "2  intermediate  \n",
      "3      advanced  \n",
      "4      beginner  \n"
     ]
    }
   ],
   "source": [
    "# For a quick demo with a small dataset, disable the holdout\n",
    "synthetic_dataset = gretel.safe_synthetic_dataset \\\n",
    "    .from_data_source(original_df, holdout=None) \\\n",
    "    .synthesize(num_records=5) \\\n",
    "    .create()\n",
    "\n",
    "# Wait for completion and get results\n",
    "synthetic_dataset.wait_until_done()\n",
    "synthetic_df_safe = synthetic_dataset.dataset.df\n",
    "\n",
    "print(f\"✅ Generated {len(synthetic_df_safe)} synthetic Q&A pairs using Safe Synthetics!\")\n",
    "print(synthetic_df_safe.head())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f4b2786d",
   "metadata": {},
   "source": [
    "## Step C: View Results and Quality Report"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "5ec1ea4d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🔍 Synthetic dataset preview:\n",
      "                                    question  \\\n",
      "0                  What is machine learning?   \n",
      "1             Explain deep learning concepts   \n",
      "2               How do neural networks work?   \n",
      "3             Explain deep learning concepts   \n",
      "4  What is the difference between AI and ML?   \n",
      "\n",
      "                                              answer            topic  \\\n",
      "0  Machine learning is a subset of AI that enable...               ML   \n",
      "1  Deep learning uses multi-layer neural networks...    Deep Learning   \n",
      "2  Neural networks are computational models inspi...  Neural Networks   \n",
      "3  Deep learning uses multi-layer neural networks...    Deep Learning   \n",
      "4  AI is the broader concept while ML is a specif...            AI/ML   \n",
      "\n",
      "     difficulty  \n",
      "0      beginner  \n",
      "1      advanced  \n",
      "2  intermediate  \n",
      "3      advanced  \n",
      "4      beginner  \n",
      "📊 Quality Report Summary:\n",
      "<rich.table.Table object at 0x720e11fbe9f0>\n",
      "\n",
      "🔧 Workflow Configuration:\n",
      "globals: {}\n",
      "name: tabular-ft--evaluate-safe-synthetics-dataset--5rnviz\n",
      "steps:\n",
      "- config:\n",
      "    data_source: file_539b15ae8a1d44c38ab9a2f0479b10ed\n",
      "  inputs: []\n",
      "  name: read-data-source\n",
      "  task: data_source\n",
      "- config:\n",
      "    generate:\n",
      "      num_records: 5\n",
      "    train:\n",
      "      group_training_examples_by: null\n",
      "      order_training_examples_by: null\n",
      "      params:\n",
      "        num_input_records_to_sample: auto\n",
      "        rope_scaling_factor: auto\n",
      "  inputs:\n",
      "  - read-data-source\n",
      "  name: tabular-ft\n",
      "  task: tabular_ft\n",
      "- config: {}\n",
      "  inputs:\n",
      "  - tabular-ft\n",
      "  - read-data-source\n",
      "  name: evaluate-safe-synthetics-dataset\n",
      "  task: evaluate_safe_synthetics_dataset\n",
      "version: \"2\"\n",
      "\n",
      "\n",
      "📋 Workflow Steps:\n",
      "- read-data-source\n",
      "- tabular-ft\n",
      "- evaluate-safe-synthetics-dataset\n"
     ]
    }
   ],
   "source": [
    "# Preview synthetic data\n",
    "print(\"🔍 Synthetic dataset preview:\")\n",
    "print(synthetic_dataset.dataset.df.head())\n",
    "\n",
    "# View quality report table (a rich Table object; plain print shows only its repr)\n",
    "# Assumes the `rich` package is installed\n",
    "print(\"📊 Quality Report Summary:\")\n",
    "from rich.console import Console\n",
    "Console().print(synthetic_dataset.report.table)\n",
    "\n",
    "# View detailed HTML report in notebook\n",
    "# synthetic_dataset.report.display_in_notebook()\n",
    "\n",
    "# Access workflow details\n",
    "print(\"\\n🔧 Workflow Configuration:\")\n",
    "print(synthetic_dataset.config_yaml)\n",
    "\n",
    "# List all workflow steps\n",
    "print(\"\\n📋 Workflow Steps:\")\n",
    "for step in synthetic_dataset.steps:\n",
    "    print(f\"- {step.name}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3799c5fa",
   "metadata": {},
   "source": [
    "## Step D: Convert to Opik and Upload"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "c5fd89f9",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "HTTP Request: POST https://www.comet.com/opik/api/v1/private/datasets/retrieve \"HTTP/1.1 404 Not Found\"\n",
      "HTTP Request: POST https://www.comet.com/opik/api/v1/private/datasets \"HTTP/1.1 201 Created\"\n",
      "HTTP Request: POST https://www.comet.com/opik/api/v1/private/datasets/retrieve \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "OPIK: Created a \"gretel-safe-synthetics-qa-dataset\" dataset at https://www.comet.com/opik/api/v1/session/redirect/datasets/?dataset_id=01981490-5a78-77d9-8e53-016c576373e3&path=aHR0cHM6Ly93d3cuY29tZXQuY29tL29waWsvYXBpLw==.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "HTTP Request: PUT https://www.comet.com/opik/api/v1/private/datasets/items \"HTTP/1.1 204 No Content\"\n",
      "✅ Safe Synthetics dataset created: gretel-safe-synthetics-qa-dataset\n"
     ]
    }
   ],
   "source": [
    "def convert_to_opik_format(df):\n",
    "    \"\"\"Convert Gretel Q&A data to Opik dataset format\"\"\"\n",
    "    opik_items = []\n",
    "    \n",
    "    for _, row in df.iterrows():\n",
    "        # Create Opik dataset item\n",
    "        item = {\n",
    "            \"input\": {\n",
    "                \"question\": row[\"question\"]\n",
    "            },\n",
    "            \"expected_output\": row[\"answer\"],\n",
    "            \"metadata\": {\n",
    "                \"topic\": row.get(\"topic\", \"AI/ML\"),\n",
    "                \"difficulty\": row.get(\"difficulty\", \"unknown\"),\n",
    "                \"source\": \"gretel_safe_synthetics\"\n",
    "            }\n",
    "        }\n",
    "        opik_items.append(item)\n",
    "    \n",
    "    return opik_items\n",
    "\n",
    "# Initialize the Opik client\n",
    "opik_client = opik.Opik()\n",
    "# Convert and upload to Opik (same process as before)\n",
    "opik_data_safe = convert_to_opik_format(synthetic_df_safe)\n",
    "\n",
    "# Create dataset in Opik\n",
    "dataset_safe = opik_client.get_or_create_dataset(\n",
    "    name=\"gretel-safe-synthetics-qa-dataset\",\n",
    "    description=\"Synthetic Q&A dataset generated using Gretel Safe Synthetics\"\n",
    ")\n",
    "\n",
    "dataset_safe.insert(opik_data_safe)\n",
    "print(f\"✅ Safe Synthetics dataset created: {dataset_safe.name}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "619db827",
   "metadata": {},
   "source": [
    "The dataset can now be viewed in the Opik UI:\n",
    "\n",
    "![gretel opik integration synthetics](https://raw.githubusercontent.com/comet-ml/opik/main/apps/opik-documentation/documentation/fern/img/cookbook/gretel_opik_integration_cookbook_synthetics.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "72ff6929",
   "metadata": {},
   "source": [
    "\n",
    "## 🚨 **Important: Dataset Size Requirements**\n",
    "\n",
    "| Dataset Size | Holdout Setting | Example |\n",
    "|--------------|----------------|---------|\n",
    "| **< 200 records** | `holdout=None` | `from_data_source(df, holdout=None)` |\n",
    "| **200+ records** | Default (5%) or custom | `from_data_source(df)` or `from_data_source(df, holdout=0.1)` |\n",
    "| **Large datasets** | Custom percentage/count | `from_data_source(df, holdout=250)` |\n",
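    "\n",
    "The table above can be sketched as a small helper (hypothetical names and thresholds, for illustration only):\n",
    "\n",
    "```python\n",
    "def pick_holdout(n_records, large_threshold=5000):\n",
    "    # Suggest a Safe Synthetics holdout setting based on dataset size.\n",
    "    # Illustrative only; tune the threshold and values for your data.\n",
    "    if n_records < 200:\n",
    "        return None   # holdout unsupported below 200 records\n",
    "    if n_records >= large_threshold:\n",
    "        return 250    # explicit record count for large datasets\n",
    "    return 0.05       # default-style 5% fraction\n",
    "\n",
    "holdout = pick_holdout(len(original_df))  # e.g. pass to from_data_source(df, holdout=holdout)\n",
    "```\n",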
    "\n",
    "## 🤔 **When to Use Which Approach?**\n",
    "\n",
    "| Use Case | Recommended Approach | Why |\n",
    "|----------|---------------------|-----|\n",
    "| **Creating new datasets from scratch** | **Data Designer** | More control, custom column types, guided generation |\n",
    "| **Synthesizing existing datasets** | **Safe Synthetics** | Preserves statistical relationships, privacy-safe |\n",
    "| **Custom data structures** | **Data Designer** | Flexible column definitions, template system |\n",
    "| **Production data replication** | **Safe Synthetics** | Maintains data utility while ensuring privacy |\n",
    "\n",
    "Both approaches integrate seamlessly with Opik for model evaluation! 🎯\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "base",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
