{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "# 🤗 Welcome to AdalFlow!\n",
        "## The PyTorch library to auto-optimize any LLM task pipeline\n",
        "\n",
        "Thanks for trying us out! We're here to provide you with the best LLM application development experience we can 😊 If you have any questions or concerns, [come talk to us on Discord](https://discord.gg/ezzszrRZvT); we're always here to help! ⭐ <i>Star us on <a href=\"https://github.com/SylphAI-Inc/AdalFlow\">Github</a></i> ⭐\n",
        "\n",
        "\n",
        "# Quick Links\n",
        "\n",
        "Github repo: https://github.com/SylphAI-Inc/AdalFlow\n",
        "\n",
        "Full Tutorials: https://adalflow.sylph.ai/index.html\n",
        "\n",
        "Deep dive on each API: check out the [developer notes](https://adalflow.sylph.ai/tutorials/index.html).\n",
        "\n",
        "Common use cases along with the auto-optimization: check out [Use cases](https://adalflow.sylph.ai/use_cases/index.html).\n",
        "\n",
        "## 📖 Outline\n",
        "\n",
        "This notebook contains the code for optimizing a TREC question-classification pipeline.\n"
      ],
      "metadata": {
        "id": "xHF95Kr4CzGq"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "\n",
        "# Installation\n",
        "\n",
        "1. Use `pip` to install the `adalflow` Python package with the `openai` and `groq` extras.\n",
        "\n",
        "  ```bash\n",
        "  pip install adalflow[openai,groq]\n",
        "  ```\n",
        "2. Set up the `openai` and `groq` API keys as environment variables.\n",
        "\n",
        "You can choose a different model client: import whichever you prefer. We support `Anthropic`, `Cohere`, `Google`, `GROQ`, `OpenAI`, `Transformer`, and more in development. We use OpenAI here as an example. Please refer to our [full installation guide](https://adalflow.sylph.ai/get_started/installation.html)."
      ],
      "metadata": {
        "id": "Kof5M6DRaKhh"
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tAp3eDjOCma1"
      },
      "outputs": [],
      "source": [
        "from IPython.display import clear_output\n",
        "\n",
        "!pip install -U adalflow[openai] # also install the package for the model client you'll use\n",
        "!pip install datasets\n",
        "clear_output()"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "!pip uninstall httpx anyio -y\n",
        "!pip install \"anyio>=3.1.0,<4.0\"\n",
        "!pip install httpx==0.24.1"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "CU672Gt4bY7b",
        "outputId": "532c84d2-c7bd-40ac-c050-e2c5dddc8946"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Found existing installation: httpx 0.28.1\n",
            "Uninstalling httpx-0.28.1:\n",
            "  Successfully uninstalled httpx-0.28.1\n",
            "Found existing installation: anyio 3.7.1\n",
            "Uninstalling anyio-3.7.1:\n",
            "  Successfully uninstalled anyio-3.7.1\n",
            "/bin/bash: line 1: 4.0”: No such file or directory\n",
            "Collecting httpx==0.24.1\n",
            "  Downloading httpx-0.24.1-py3-none-any.whl.metadata (7.4 kB)\n",
            "Requirement already satisfied: certifi in /usr/local/lib/python3.10/dist-packages (from httpx==0.24.1) (2024.8.30)\n",
            "Collecting httpcore<0.18.0,>=0.15.0 (from httpx==0.24.1)\n",
            "  Downloading httpcore-0.17.3-py3-none-any.whl.metadata (18 kB)\n",
            "Requirement already satisfied: idna in /usr/local/lib/python3.10/dist-packages (from httpx==0.24.1) (3.10)\n",
            "Requirement already satisfied: sniffio in /usr/local/lib/python3.10/dist-packages (from httpx==0.24.1) (1.3.1)\n",
            "Requirement already satisfied: h11<0.15,>=0.13 in /usr/local/lib/python3.10/dist-packages (from httpcore<0.18.0,>=0.15.0->httpx==0.24.1) (0.14.0)\n",
            "Collecting anyio<5.0,>=3.0 (from httpcore<0.18.0,>=0.15.0->httpx==0.24.1)\n",
            "  Downloading anyio-4.7.0-py3-none-any.whl.metadata (4.7 kB)\n",
            "Requirement already satisfied: exceptiongroup>=1.0.2 in /usr/local/lib/python3.10/dist-packages (from anyio<5.0,>=3.0->httpcore<0.18.0,>=0.15.0->httpx==0.24.1) (1.2.2)\n",
            "Requirement already satisfied: typing_extensions>=4.5 in /usr/local/lib/python3.10/dist-packages (from anyio<5.0,>=3.0->httpcore<0.18.0,>=0.15.0->httpx==0.24.1) (4.12.2)\n",
            "Downloading httpx-0.24.1-py3-none-any.whl (75 kB)\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m75.4/75.4 kB\u001b[0m \u001b[31m2.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hDownloading httpcore-0.17.3-py3-none-any.whl (74 kB)\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m74.5/74.5 kB\u001b[0m \u001b[31m6.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hDownloading anyio-4.7.0-py3-none-any.whl (93 kB)\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m93.1/93.1 kB\u001b[0m \u001b[31m8.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hInstalling collected packages: anyio, httpcore, httpx\n",
            "  Attempting uninstall: httpcore\n",
            "    Found existing installation: httpcore 1.0.7\n",
            "    Uninstalling httpcore-1.0.7:\n",
            "      Successfully uninstalled httpcore-1.0.7\n",
            "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
            "jupyter-server 1.24.0 requires anyio<4,>=3.1.0, but you have anyio 4.7.0 which is incompatible.\u001b[0m\u001b[31m\n",
            "\u001b[0mSuccessfully installed anyio-4.7.0 httpcore-0.17.3 httpx-0.24.1\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Set Environment Variables\n",
        "\n",
        "Run the following code and enter your API key when prompted.\n",
        "\n",
        "Note: for regular `.py` projects, follow our [official installation guide](https://adalflow.sylph.ai/get_started/installation.html).\n",
        "\n",
        "*Go to [OpenAI](https://platform.openai.com/docs/introduction) to get an API key if you don't already have one.*"
      ],
      "metadata": {
        "id": "KapUyHMM07pJ"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import os\n",
        "\n",
        "from getpass import getpass\n",
        "\n",
        "# Prompt user to enter their API keys securely\n",
        "openai_api_key = getpass(\"Please enter your OpenAI API key: \")\n",
        "\n",
        "\n",
        "# Set environment variables\n",
        "os.environ[\"OPENAI_API_KEY\"] = openai_api_key\n",
        "\n",
        "print(\"API keys have been set.\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ONfzF9Puzdd_",
        "outputId": "a8ca0388-be6e-4b7a-cd05-d4ec52f64e95"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Please enter your OpenAI API key: ··········\n",
            "API keys have been set.\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "Prepare the data structures and prompt templates"
      ],
      "metadata": {
        "id": "4W3yEpRpepNK"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from dataclasses import dataclass, field\n",
        "from typing import List, Dict, Union, Optional, Tuple, Any, Callable\n",
        "from datasets import load_dataset\n",
        "from adalflow.components.model_client import OpenAIClient\n",
        "import adalflow as adal\n",
        "from adalflow.core.component import Component\n",
        "from adalflow.datasets.types import TrecData\n",
        "from adalflow.datasets.trec import TrecDataset\n",
        "\n",
        "from adalflow.eval.answer_match_acc import AnswerMatchAcc\n",
        "\n",
        "\n",
        "_COARSE_LABELS = [\"ABBR\", \"DESC\", \"ENTY\", \"HUM\", \"LOC\", \"NUM\"]\n",
        "\n",
        "_COARSE_LABELS_DESC = [\n",
        "    \"Abbreviation: Questions about abbreviations and their meanings\",\n",
        "    \"Description: Questions seeking descriptions of people, things, or concepts\",\n",
        "    \"Entity: Questions about entities (e.g., animals, colors, inventions)\",\n",
        "    \"Human: Questions about people or organizations\",\n",
        "    \"Location: Questions about places, cities, countries\",\n",
        "    \"Numeric: Questions seeking numeric answers (e.g., dates, amounts, distances)\",\n",
        "]\n",
        "\n",
        "\n",
        "template = r\"\"\"<START_OF_SYSTEM_MESSAGE>\n",
        " {{system_prompt}}\n",
        " {% if output_format_str is not none %}\n",
        " {{output_format_str}}\n",
        " {% endif %}\n",
        " {% if few_shot_demos is not none %}\n",
        " Here are some examples:\n",
        " {{few_shot_demos}}\n",
        " {% endif %}\n",
        " <END_OF_SYSTEM_MESSAGE>\n",
        " <START_OF_USER_MESSAGE>\n",
        " {{input_str}}\n",
        " <END_OF_USER_MESSAGE>\n",
        " \"\"\"\n",
        "\n",
        "task_desc_template = r\"\"\"You are a classifier. Given a question, you need to classify it into one of the following classes:\n",
        " Format: class_index. class_name, class_description\n",
        " {% if classes %}\n",
        " {% for class in classes %}\n",
        " {{loop.index-1}}. {{class.label}}, {{class.desc}}\n",
        " {% endfor %}\n",
        " {% endif %}\n",
        " - Do not try to answer the question:\n",
        " \"\"\"\n",
        "\n",
        "\n",
        "@dataclass\n",
        "class TRECExtendedData(TrecData):\n",
        "    rationale: str = field(\n",
        "        metadata={\n",
        "            \"desc\": \"Your step-by-step reasoning to classify the question to class_name\"\n",
        "        },\n",
        "        default=None,\n",
        "    )\n",
        "    __input_fields__ = [\"question\"]\n",
        "    __output_fields__ = [\n",
        "        \"rationale\",\n",
        "        \"class_name\",\n",
        "    ]  # it is important to have the rationale before the class_name\n",
        "\n",
        "def load_datasets():\n",
        "    \"\"\"Load the dataset\"\"\"\n",
        "    train_data = TrecDataset(split=\"train\")\n",
        "    val_data = TrecDataset(split=\"val\")\n",
        "    test_data = TrecDataset(split=\"test\")\n",
        "    return train_data, val_data, test_data  # 0.694, 0.847"
      ],
      "metadata": {
        "id": "ZZIEtZYHNVjo"
      },
      "execution_count": null,
      "outputs": []
    },
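    {
      "cell_type": "markdown",
      "source": [
        "The prompt templates above use Jinja2 syntax. As a quick sanity check, the snippet below renders a stripped-down version of `task_desc_template` with Jinja2 directly (a sketch for illustration only; the pipeline uses `adal.Prompt` for this). Note that `loop.index` is 1-based, so `loop.index-1` yields the 0-based class index."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "from jinja2 import Template\n",
        "\n",
        "# Illustration only: render the class list the same way task_desc_template does.\n",
        "# `loop.index` is 1-based, so `loop.index-1` gives the 0-based class index.\n",
        "demo = Template(\n",
        "    \"{% for class in classes %}{{loop.index-1}}. {{class.label}}, {{class.desc}}\\n{% endfor %}\"\n",
        ")\n",
        "print(demo.render(classes=[\n",
        "    {\"label\": \"ABBR\", \"desc\": \"Abbreviation questions\"},\n",
        "    {\"label\": \"DESC\", \"desc\": \"Description questions\"},\n",
        "]))"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },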
    {
      "cell_type": "code",
      "source": [
        "# prepare models\n",
        "\n",
        "from adalflow.components.model_client.openai_client import OpenAIClient\n",
        "\n",
        "# used as the target model\n",
        "gpt_3_model = {\n",
        "    \"model_client\": OpenAIClient(),\n",
        "    \"model_kwargs\": {\n",
        "        \"model\": \"gpt-3.5-turbo\",\n",
        "        \"max_tokens\": 2000,\n",
        "        \"temperature\": 0.0,\n",
        "        \"top_p\": 0.99,\n",
        "        \"frequency_penalty\": 0,\n",
        "        \"presence_penalty\": 0,\n",
        "        \"stop\": None,\n",
        "    },\n",
        "}\n",
        "\n",
        "# used as optimizer and backward engine\n",
        "gpt_4o_mini_model = {\n",
        "    \"model_client\": OpenAIClient(),\n",
        "    \"model_kwargs\": {\n",
        "        \"model\": \"gpt-4o-mini\",\n",
        "        \"temperature\": 1,\n",
        "        \"top_p\": 0.99,\n",
        "        \"max_tokens\": 1000,\n",
        "        # \"frequency_penalty\": 1,  # high to avoid repeating the prompt\n",
        "    },\n",
        "}"
      ],
      "metadata": {
        "id": "yAvzn7DZeUX-"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Create the task pipeline"
      ],
      "metadata": {
        "id": "G664uy9MgDdC"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "class TRECClassifierStructuredOutput(adal.Component):\n",
        "\n",
        "    def __init__(self, model_client: adal.ModelClient, model_kwargs: Dict):\n",
        "        super().__init__()\n",
        "\n",
        "        label_desc = [\n",
        "            {\"label\": label, \"desc\": desc}\n",
        "            for label, desc in zip(_COARSE_LABELS, _COARSE_LABELS_DESC)\n",
        "        ]\n",
        "\n",
        "        task_desc_str = adal.Prompt(\n",
        "            template=task_desc_template, prompt_kwargs={\"classes\": label_desc}\n",
        "        )()\n",
        "\n",
        "        self.data_class = TRECExtendedData\n",
        "        self.data_class.set_task_desc(task_desc_str)\n",
        "\n",
        "        self.parser = adal.DataClassParser(\n",
        "            data_class=self.data_class, return_data_class=True, format_type=\"yaml\"\n",
        "        )\n",
        "\n",
        "        prompt_kwargs = {\n",
        "            \"system_prompt\": adal.Parameter(\n",
        "                data=self.parser.get_task_desc_str(),\n",
        "                role_desc=\"Task description\",\n",
        "                requires_opt=True,\n",
        "                param_type=adal.ParameterType.PROMPT,\n",
        "            ),\n",
        "            \"output_format_str\": adal.Parameter(\n",
        "                data=self.parser.get_output_format_str(),\n",
        "                role_desc=\"Output format requirements\",\n",
        "                requires_opt=False,\n",
        "                param_type=adal.ParameterType.PROMPT,\n",
        "            ),\n",
        "            \"few_shot_demos\": adal.Parameter(\n",
        "                data=None,\n",
        "                requires_opt=True,\n",
        "                role_desc=\"Few shot examples to help the model\",\n",
        "                param_type=adal.ParameterType.DEMOS,\n",
        "            ),\n",
        "        }\n",
        "\n",
        "        self.llm = adal.Generator(\n",
        "            model_client=model_client,\n",
        "            model_kwargs=model_kwargs,\n",
        "            prompt_kwargs=prompt_kwargs,\n",
        "            template=template,\n",
        "            output_processors=self.parser,\n",
        "            use_cache=True,\n",
        "        )\n",
        "\n",
        "    def _prepare_input(self, question: str):\n",
        "        input_data = self.data_class(question=question)\n",
        "        input_str = self.parser.get_input_str(input_data)\n",
        "        prompt_kwargs = {\n",
        "            \"input_str\": adal.Parameter(\n",
        "                data=input_str, requires_opt=False, role_desc=\"input to the LLM\"\n",
        "            )\n",
        "        }\n",
        "        return prompt_kwargs\n",
        "\n",
        "    def call(\n",
        "        self, question: str, id: Optional[str] = None\n",
        "    ) -> Union[adal.GeneratorOutput, adal.Parameter]:\n",
        "        prompt_kwargs = self._prepare_input(question)\n",
        "        output = self.llm(prompt_kwargs=prompt_kwargs, id=id)\n",
        "        return output"
      ],
      "metadata": {
        "id": "3Q3H9XC4Ncfi"
      },
      "execution_count": null,
      "outputs": []
    },
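    {
      "cell_type": "markdown",
      "source": [
        "Under the hood, `DataClassParser` with `format_type=\"yaml\"` expects the model to return a YAML block wrapped in triple backticks. A minimal, stdlib-only sketch of that extraction step (for illustration only; the actual parser also validates the fields against the data class schema):"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import re\n",
        "\n",
        "# Illustration only: pull the YAML block out of a raw model response and\n",
        "# read its two fields. DataClassParser does this (plus validation) for you.\n",
        "raw = \"```\\nrationale: The question asks what an abbreviation stands for.\\nclass_name: ABBR\\n```\"\n",
        "\n",
        "block = re.search(r\"```\\s*(.*?)\\s*```\", raw, re.DOTALL).group(1)\n",
        "fields = dict(\n",
        "    line.split(\": \", 1) for line in block.splitlines() if \": \" in line\n",
        ")\n",
        "print(fields[\"class_name\"])"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },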
    {
      "cell_type": "markdown",
      "source": [
        "Run inference with the task pipeline and inspect the computation graph"
      ],
      "metadata": {
        "id": "gj08oOqqgGyr"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# load dataset to get one example\n",
        "\n",
        "train_dataset, val_dataset, test_dataset = load_datasets()\n",
        "example = train_dataset[0]\n",
        "print(example)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "qtvLN8zOgnSg",
        "outputId": "9996f8c3-371d-4b5c-ec48-e8cf6d6c396b"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "TrecData(id='e73a82a7-6a3d-4947-90f5-03739e169db0', question='When reading classified ads , what does EENTY : other stand for ?', class_name='ABBR', class_index=0)\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "task = TRECClassifierStructuredOutput(\n",
        "    model_client=gpt_3_model[\"model_client\"],\n",
        "    model_kwargs=gpt_3_model[\"model_kwargs\"],\n",
        ")\n",
        "task.train()\n",
        "\n",
        "output = task(question=example.question, id=example.id)\n",
        "print(output)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "cKuW3QlhgLTG",
        "outputId": "7f1f9cd6-9615-4b41-ecc5-5901626d57ae"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Parameter(name=Generator_output, requires_opt=True, param_type=generator_output (The output of the generator.), role_desc=Output from (llm) Generator, data=```\n",
            "rationale: The question is asking for the meaning of the abbreviation \"EENTY\" in classified ads, which falls under the ABBR class.\n",
            "class_name: ABBR\n",
            "```, predecessors={Parameter(name=Output_for, requires_opt=False, param_type=prompt (Instruction to the language model on task, data, and format.), role_desc=Output format requirements, data=Your output should be formatted as a standard YAML instance with the following schema:\n",
            "```\n",
            "rationale: Your step-by-step reasoning to classify the question to class_name (str) (optional)\n",
            "class_name: One of {ABBR, ENTY, DESC, HUM, LOC, NUM} (str) (optional)\n",
            "```\n",
            "-Make sure to always enclose the YAML output in triple backticks (```). Please do not add anything other than valid YAML output!\n",
            "-Follow the YAML formatting conventions with an indent of 2 spaces.\n",
            "-DO NOT mistaken the \"properties\" and \"type\" in the schema as the actual fields in the YAML output.\n",
            "-Quote the string values properly., predecessors=set(), gradients=[],            raw_response=None, input_args=None, traces={}), Parameter(name=Few_shot_e, requires_opt=True, param_type=demos (A few examples to guide the language model.), role_desc=Few shot examples to help the model, data=None, predecessors=set(), gradients=[],            raw_response=None, input_args=None, traces={}), Parameter(name=Input_to_t, requires_opt=False, param_type=none (), role_desc=input to the LLM, data=question: 'When reading classified ads , what does EENTY : other stand for ?', predecessors=set(), gradients=[],            raw_response=None, input_args=None, traces={}), Parameter(name=Task_descr, requires_opt=True, param_type=prompt (Instruction to the language model on task, data, and format.), role_desc=Task description, data=You are a classifier. Given a question, you need to classify it into one of the following classes:\n",
            " Format: class_index. class_name, class_description\n",
            " 0. ABBR, Abbreviation: Questions about abbreviations and their meanings\n",
            " 1. DESC, Description: Questions seeking descriptions of people, things, or concepts\n",
            " 2. ENTY, Entity: Questions about entities (e.g., animals, colors, inventions)\n",
            " 3. HUM, Human: Questions about people or organizations\n",
            " 4. LOC, Location: Questions about places, cities, countries\n",
            " 5. NUM, Numeric: Questions seeking numeric answers (e.g., dates, amounts, distances)\n",
            " - Do not try to answer the question:\n",
            " , predecessors=set(), gradients=[],            raw_response=None, input_args=None, traces={})}, gradients=[],            raw_response=None, input_args={'prompt_kwargs': {'system_prompt': Parameter(name=Task_descr, requires_opt=True, param_type=prompt (Instruction to the language model on task, data, and format.), role_desc=Task description, data=You are a classifier. Given a question, you need to classify it into one of the following classes:\n",
            " Format: class_index. class_name, class_description\n",
            " 0. ABBR, Abbreviation: Questions about abbreviations and their meanings\n",
            " 1. DESC, Description: Questions seeking descriptions of people, things, or concepts\n",
            " 2. ENTY, Entity: Questions about entities (e.g., animals, colors, inventions)\n",
            " 3. HUM, Human: Questions about people or organizations\n",
            " 4. LOC, Location: Questions about places, cities, countries\n",
            " 5. NUM, Numeric: Questions seeking numeric answers (e.g., dates, amounts, distances)\n",
            " - Do not try to answer the question:\n",
            " , predecessors=set(), gradients=[],            raw_response=None, input_args=None, traces={}), 'output_format_str': Parameter(name=Output_for, requires_opt=False, param_type=prompt (Instruction to the language model on task, data, and format.), role_desc=Output format requirements, data=Your output should be formatted as a standard YAML instance with the following schema:\n",
            "```\n",
            "rationale: Your step-by-step reasoning to classify the question to class_name (str) (optional)\n",
            "class_name: One of {ABBR, ENTY, DESC, HUM, LOC, NUM} (str) (optional)\n",
            "```\n",
            "-Make sure to always enclose the YAML output in triple backticks (```). Please do not add anything other than valid YAML output!\n",
            "-Follow the YAML formatting conventions with an indent of 2 spaces.\n",
            "-DO NOT mistaken the \"properties\" and \"type\" in the schema as the actual fields in the YAML output.\n",
            "-Quote the string values properly., predecessors=set(), gradients=[],            raw_response=None, input_args=None, traces={}), 'few_shot_demos': Parameter(name=Few_shot_e, requires_opt=True, param_type=demos (A few examples to guide the language model.), role_desc=Few shot examples to help the model, data=None, predecessors=set(), gradients=[],            raw_response=None, input_args=None, traces={}), 'input_str': Parameter(name=Input_to_t, requires_opt=False, param_type=none (), role_desc=input to the LLM, data=question: 'When reading classified ads , what does EENTY : other stand for ?', predecessors=set(), gradients=[],            raw_response=None, input_args=None, traces={})}, 'model_kwargs': {'model': 'gpt-3.5-turbo', 'max_tokens': 2000, 'temperature': 0.0, 'top_p': 0.99, 'frequency_penalty': 0, 'presence_penalty': 0, 'stop': None}}, traces={})\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "class TrecClassifierAdal(adal.AdalComponent):\n",
        "    def __init__(\n",
        "        self,\n",
        "        model_client: adal.ModelClient,\n",
        "        model_kwargs: Dict,\n",
        "        teacher_model_config: Dict,\n",
        "        backward_engine_model_config: Dict,\n",
        "        text_optimizer_model_config: Dict,\n",
        "    ):\n",
        "        task = TRECClassifierStructuredOutput(model_client, model_kwargs)\n",
        "        eval_fn = AnswerMatchAcc(type=\"exact_match\").compute_single_item\n",
        "        loss_fn = adal.EvalFnToTextLoss(\n",
        "            eval_fn=eval_fn,\n",
        "            eval_fn_desc=\"exact_match: 1 if str(y) == str(y_gt) else 0\",\n",
        "        )\n",
        "        super().__init__(\n",
        "            task=task,\n",
        "            eval_fn=eval_fn,\n",
        "            loss_fn=loss_fn,\n",
        "            backward_engine_model_config=backward_engine_model_config,\n",
        "            text_optimizer_model_config=text_optimizer_model_config,\n",
        "            teacher_model_config=teacher_model_config,\n",
        "        )\n",
        "\n",
        "    def prepare_task(self, sample: TRECExtendedData):\n",
        "        return self.task.call, {\"question\": sample.question, \"id\": sample.id}\n",
        "\n",
        "    def prepare_eval(\n",
        "        self, sample: TRECExtendedData, y_pred: adal.GeneratorOutput\n",
        "    ) -> float:\n",
        "        y_label = -1\n",
        "        if y_pred and y_pred.data is not None and y_pred.data.class_name is not None:\n",
        "            y_label = y_pred.data.class_name\n",
        "        return self.eval_fn, {\"y\": y_label, \"y_gt\": sample.class_name}\n",
        "\n",
        "    def prepare_loss(\n",
        "        self, sample: TRECExtendedData, y_pred: adal.Parameter, *args, **kwargs\n",
        "    ) -> Tuple[Callable[..., Any], Dict]:\n",
        "        full_response = y_pred.full_response\n",
        "        y_label = -1\n",
        "        if (\n",
        "            full_response\n",
        "            and full_response.data is not None\n",
        "            and full_response.data.class_name is not None\n",
        "        ):\n",
        "            y_label = full_response.data.class_name\n",
        "\n",
        "        y_pred.eval_input = y_label\n",
        "        y_gt = adal.Parameter(\n",
        "            name=\"y_gt\",\n",
        "            data=sample.class_name,\n",
        "            eval_input=sample.class_name,\n",
        "            requires_opt=False,\n",
        "        )\n",
        "        return self.loss_fn, {\"kwargs\": {\"y\": y_pred, \"y_gt\": y_gt}}"
      ],
      "metadata": {
        "id": "HpkQYsh2NevT"
      },
      "execution_count": null,
      "outputs": []
    },
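    {
      "cell_type": "markdown",
      "source": [
        "The `eval_fn` above is plain exact string matching. A one-line sketch of the score `AnswerMatchAcc(type=\"exact_match\").compute_single_item` produces (an illustration of the metric, not AdalFlow's implementation):"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Illustration only: the exact-match metric scores 1.0 when the predicted\n",
        "# class name equals the ground-truth label, else 0.0.\n",
        "def exact_match(y, y_gt) -> float:\n",
        "    return 1.0 if str(y) == str(y_gt) else 0.0\n",
        "\n",
        "print(exact_match(\"ABBR\", \"ABBR\"), exact_match(\"DESC\", \"ABBR\"))"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },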
    {
      "cell_type": "code",
      "source": [
        "def train(\n",
        "    model_client: adal.ModelClient,\n",
        "    model_kwargs: Dict,\n",
        "    train_batch_size=4,\n",
        "    raw_shots: int = 0,\n",
        "    bootstrap_shots: int = 1,\n",
        "    max_steps=12,\n",
        "    num_workers=4,\n",
        "    strategy=\"constrained\",\n",
        "    optimization_order=\"sequential\",\n",
        "    debug=False,\n",
        "):\n",
        "    print(\"Starting training process...\")\n",
        "\n",
        "    # Define the model configuration for all components\n",
        "    gpt_4o_model = {\n",
        "        \"model_client\": OpenAIClient(),\n",
        "        \"model_kwargs\": {\n",
        "            \"model\": \"gpt-4o-mini\",\n",
        "            \"temperature\": 1,\n",
        "            \"top_p\": 0.99,\n",
        "            \"max_tokens\": 1000,\n",
        "            # \"frequency_penalty\": 1,  # high to avoid repeating the prompt\n",
        "        },\n",
        "    }\n",
        "\n",
        "    print(f\"Component model configuration: {gpt_4o_model}\")\n",
        "\n",
        "    try:\n",
        "        print(\"Initializing ADAL component...\")\n",
        "        adal_component = TrecClassifierAdal(\n",
        "            model_client=model_client,\n",
        "            model_kwargs=model_kwargs,\n",
        "            text_optimizer_model_config=gpt_4o_model,\n",
        "            backward_engine_model_config=gpt_4o_model,\n",
        "            teacher_model_config=gpt_4o_model,\n",
        "        )\n",
        "        print(\"ADAL component initialized successfully\")\n",
        "\n",
        "        print(\"Initializing trainer...\")\n",
        "        trainer = adal.Trainer(\n",
        "            train_batch_size=train_batch_size,\n",
        "            adaltask=adal_component,\n",
        "            strategy=strategy,\n",
        "            max_steps=max_steps,\n",
        "            num_workers=num_workers,\n",
        "            raw_shots=raw_shots,\n",
        "            bootstrap_shots=bootstrap_shots,\n",
        "            debug=debug,\n",
        "            weighted_sampling=True,\n",
        "            optimization_order=optimization_order,\n",
        "            exclude_input_fields_from_bootstrap_demos=True,\n",
        "        )\n",
        "        print(\"Trainer initialized successfully\")\n",
        "\n",
        "        print(\"Loading datasets...\")\n",
        "        train_dataset, val_dataset, test_dataset = load_datasets()\n",
        "        print(\n",
        "            f\"Datasets loaded - Train size: {len(train_dataset)}, Val size: {len(val_dataset)}, Test size: {len(test_dataset)}\"\n",
        "        )\n",
        "\n",
        "        print(\"Starting model training...\")\n",
        "        trainer.fit(\n",
        "            train_dataset=train_dataset,\n",
        "            val_dataset=val_dataset,\n",
        "            test_dataset=test_dataset,\n",
        "            debug=debug,\n",
        "        )\n",
        "        print(\"Training completed successfully\")\n",
        "\n",
        "    except Exception as e:\n",
        "        print(f\"Error occurred: {str(e)}\")\n",
        "        raise"
      ],
      "metadata": {
        "id": "PEj6xiZ5dVaj"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "train(**gpt_3_model)"
      ],
      "metadata": {
        "id": "GnlZBQOMEj6E",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 1000
        },
        "outputId": "055a95c4-ccae-4028-d904-86b839bc1c14"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Starting training process...\n",
            "Component model configuration: {'model_client': OpenAIClient(), 'model_kwargs': {'model': 'gpt-4o-mini', 'temperature': 1, 'top_p': 0.99, 'max_tokens': 1000}}\n",
            "Initializing ADAL component...\n",
            "ADAL component initialized successfully\n",
            "Initializing trainer...\n",
            "Trainer initialized successfully\n",
            "Loading datasets...\n",
            "Datasets loaded - Train size: 120, Val size: 36, Test size: 144\n",
            "Starting model training...\n",
            "raw_shots: 0, bootstrap_shots: 1\n",
            "Configuring teacher generator.\n",
            "Configuring teacher generator for Generator(\n",
            "  model_kwargs={'model': 'gpt-4o-mini', 'temperature': 1, 'top_p': 0.99, 'max_tokens': 1000}, trainable_prompt_kwargs=[]\n",
            "  (prompt): Prompt(\n",
            "    template: <START_OF_SYSTEM_MESSAGE>\n",
            "     {{system_prompt}}\n",
            "     {% if output_format_str is not none %}\n",
            "     {{output_format_str}}\n",
            "     {% endif %}\n",
            "     {% if few_shot_demos is not none %}\n",
            "     Here are some examples:\n",
            "     {{few_shot_demos}}\n",
            "     {% endif %}\n",
            "     <END_OF_SYSTEM_MESSAGE>\n",
            "     <START_OF_USER_MESSAGE>\n",
            "     {{input_str}}\n",
            "     <END_OF_USER_MESSAGE>\n",
            "     , prompt_kwargs: {'system_prompt': 'You are a classifier. Given a question, you need to classify it into one of the following classes:\\n Format: class_index. class_name, class_description\\n 0. ABBR, Abbreviation: Questions about abbreviations and their meanings\\n 1. DESC, Description: Questions seeking descriptions of people, things, or concepts\\n 2. ENTY, Entity: Questions about entities (e.g., animals, colors, inventions)\\n 3. HUM, Human: Questions about people or organizations\\n 4. LOC, Location: Questions about places, cities, countries\\n 5. NUM, Numeric: Questions seeking numeric answers (e.g., dates, amounts, distances)\\n - Do not try to answer the question:\\n ', 'output_format_str': 'Your output should be formatted as a standard YAML instance with the following schema:\\n```\\nrationale: Your step-by-step reasoning to classify the question to class_name (str) (optional)\\nclass_name: One of {ABBR, ENTY, DESC, HUM, LOC, NUM} (str) (optional)\\n```\\n-Make sure to always enclose the YAML output in triple backticks (```). Please do not add anything other than valid YAML output!\\n-Follow the YAML formatting conventions with an indent of 2 spaces.\\n-DO NOT mistaken the \"properties\" and \"type\" in the schema as the actual fields in the YAML output.\\n-Quote the string values properly.', 'few_shot_demos': 'None'}, prompt_variables: ['output_format_str', 'system_prompt', 'input_str', 'few_shot_demos']\n",
            "  )\n",
            "  (model_client): OpenAIClient()\n",
            "  (output_processors): DataClassParser(\n",
            "    data_class=TRECExtendedData, format_type=yaml,            return_data_class=True, input_fields=['question'],            output_fields=['rationale', 'class_name']\n",
            "    (_output_processor): YamlParser()\n",
            "    (output_format_prompt): Prompt(\n",
            "      template: Your output should be formatted as a standard YAML instance with the following schema:\n",
            "      ```\n",
            "      {{schema}}\n",
            "      ```\n",
            "      -Make sure to always enclose the YAML output in triple backticks (```). Please do not add anything other than valid YAML output!\n",
            "      -Follow the YAML formatting conventions with an indent of 2 spaces.\n",
            "      -DO NOT mistaken the \"properties\" and \"type\" in the schema as the actual fields in the YAML output.\n",
            "      -Quote the string values properly., prompt_variables: ['schema']\n",
            "    )\n",
            "  )\n",
            ")\n",
            "Teacher generator set: Generator(\n",
            "  model_kwargs={'model': 'gpt-4o-mini', 'temperature': 1, 'top_p': 0.99, 'max_tokens': 1000}, trainable_prompt_kwargs=[]\n",
            "  (prompt): Prompt(\n",
            "    template: <START_OF_SYSTEM_MESSAGE>\n",
            "     {{system_prompt}}\n",
            "     {% if output_format_str is not none %}\n",
            "     {{output_format_str}}\n",
            "     {% endif %}\n",
            "     {% if few_shot_demos is not none %}\n",
            "     Here are some examples:\n",
            "     {{few_shot_demos}}\n",
            "     {% endif %}\n",
            "     <END_OF_SYSTEM_MESSAGE>\n",
            "     <START_OF_USER_MESSAGE>\n",
            "     {{input_str}}\n",
            "     <END_OF_USER_MESSAGE>\n",
            "     , prompt_kwargs: {'system_prompt': 'You are a classifier. Given a question, you need to classify it into one of the following classes:\\n Format: class_index. class_name, class_description\\n 0. ABBR, Abbreviation: Questions about abbreviations and their meanings\\n 1. DESC, Description: Questions seeking descriptions of people, things, or concepts\\n 2. ENTY, Entity: Questions about entities (e.g., animals, colors, inventions)\\n 3. HUM, Human: Questions about people or organizations\\n 4. LOC, Location: Questions about places, cities, countries\\n 5. NUM, Numeric: Questions seeking numeric answers (e.g., dates, amounts, distances)\\n - Do not try to answer the question:\\n ', 'output_format_str': 'Your output should be formatted as a standard YAML instance with the following schema:\\n```\\nrationale: Your step-by-step reasoning to classify the question to class_name (str) (optional)\\nclass_name: One of {ABBR, ENTY, DESC, HUM, LOC, NUM} (str) (optional)\\n```\\n-Make sure to always enclose the YAML output in triple backticks (```). Please do not add anything other than valid YAML output!\\n-Follow the YAML formatting conventions with an indent of 2 spaces.\\n-DO NOT mistaken the \"properties\" and \"type\" in the schema as the actual fields in the YAML output.\\n-Quote the string values properly.', 'few_shot_demos': 'None'}, prompt_variables: ['output_format_str', 'system_prompt', 'input_str', 'few_shot_demos']\n",
            "  )\n",
            "  (model_client): OpenAIClient()\n",
            "  (output_processors): DataClassParser(\n",
            "    data_class=TRECExtendedData, format_type=yaml,            return_data_class=True, input_fields=['question'],            output_fields=['rationale', 'class_name']\n",
            "    (_output_processor): YamlParser()\n",
            "    (output_format_prompt): Prompt(\n",
            "      template: Your output should be formatted as a standard YAML instance with the following schema:\n",
            "      ```\n",
            "      {{schema}}\n",
            "      ```\n",
            "      -Make sure to always enclose the YAML output in triple backticks (```). Please do not add anything other than valid YAML output!\n",
            "      -Follow the YAML formatting conventions with an indent of 2 spaces.\n",
            "      -DO NOT mistaken the \"properties\" and \"type\" in the schema as the actual fields in the YAML output.\n",
            "      -Quote the string values properly., prompt_variables: ['schema']\n",
            "    )\n",
            "  )\n",
            "), teacher Generator(\n",
            "  model_kwargs={'model': 'gpt-4o-mini', 'temperature': 1, 'top_p': 0.99, 'max_tokens': 1000}, trainable_prompt_kwargs=[]\n",
            "  (prompt): Prompt(\n",
            "    template: <START_OF_SYSTEM_MESSAGE>\n",
            "     {{system_prompt}}\n",
            "     {% if output_format_str is not none %}\n",
            "     {{output_format_str}}\n",
            "     {% endif %}\n",
            "     {% if few_shot_demos is not none %}\n",
            "     Here are some examples:\n",
            "     {{few_shot_demos}}\n",
            "     {% endif %}\n",
            "     <END_OF_SYSTEM_MESSAGE>\n",
            "     <START_OF_USER_MESSAGE>\n",
            "     {{input_str}}\n",
            "     <END_OF_USER_MESSAGE>\n",
            "     , prompt_kwargs: {'system_prompt': 'You are a classifier. Given a question, you need to classify it into one of the following classes:\\n Format: class_index. class_name, class_description\\n 0. ABBR, Abbreviation: Questions about abbreviations and their meanings\\n 1. DESC, Description: Questions seeking descriptions of people, things, or concepts\\n 2. ENTY, Entity: Questions about entities (e.g., animals, colors, inventions)\\n 3. HUM, Human: Questions about people or organizations\\n 4. LOC, Location: Questions about places, cities, countries\\n 5. NUM, Numeric: Questions seeking numeric answers (e.g., dates, amounts, distances)\\n - Do not try to answer the question:\\n ', 'output_format_str': 'Your output should be formatted as a standard YAML instance with the following schema:\\n```\\nrationale: Your step-by-step reasoning to classify the question to class_name (str) (optional)\\nclass_name: One of {ABBR, ENTY, DESC, HUM, LOC, NUM} (str) (optional)\\n```\\n-Make sure to always enclose the YAML output in triple backticks (```). Please do not add anything other than valid YAML output!\\n-Follow the YAML formatting conventions with an indent of 2 spaces.\\n-DO NOT mistaken the \"properties\" and \"type\" in the schema as the actual fields in the YAML output.\\n-Quote the string values properly.', 'few_shot_demos': 'None'}, prompt_variables: ['output_format_str', 'system_prompt', 'input_str', 'few_shot_demos']\n",
            "  )\n",
            "  (model_client): OpenAIClient()\n",
            "  (output_processors): DataClassParser(\n",
            "    data_class=TRECExtendedData, format_type=yaml,            return_data_class=True, input_fields=['question'],            output_fields=['rationale', 'class_name']\n",
            "    (_output_processor): YamlParser()\n",
            "    (output_format_prompt): Prompt(\n",
            "      template: Your output should be formatted as a standard YAML instance with the following schema:\n",
            "      ```\n",
            "      {{schema}}\n",
            "      ```\n",
            "      -Make sure to always enclose the YAML output in triple backticks (```). Please do not add anything other than valid YAML output!\n",
            "      -Follow the YAML formatting conventions with an indent of 2 spaces.\n",
            "      -DO NOT mistaken the \"properties\" and \"type\" in the schema as the actual fields in the YAML output.\n",
            "      -Quote the string values properly., prompt_variables: ['schema']\n",
            "    )\n",
            "  )\n",
            ")\n",
            "Teacher generator configured.\n",
            "Configured demo optimizers\n",
            "Backward engine configured for all generators.\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "\n",
            "Loading Data: 100%|██████████| 144/144 [00:00<00:00, 9161.62it/s]\n",
            "Predicting: step(0): 0.8264 across 144 samples, Max potential: 0.8264: 100%|██████████| 144/144 [00:19<00:00,  7.39it/s]\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "completed_samples: 144, len: 144\n",
            "Initial validation score: 0.8263888888888888\n",
            "Initial test score: None\n",
            "Checkpoint path: /root/.adalflow/ckpt/TrecClassifierAdal\n",
            "save to /root/.adalflow/ckpt/TrecClassifierAdal/constrained_max_steps_12_a6e76_run_1.json\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "\n",
            "Training Step: 1:   0%|          | 0/30 [00:00<?, ?it/s]\n",
            "\n",
            "Loading Data: 100%|██████████| 4/4 [00:00<00:00, 328.98it/s]\n",
            "Training: 100%|██████████| 4/4 [00:00<00:00,  5.31it/s]\n",
            "\n",
            "\n",
            "Loading Data: 100%|██████████| 4/4 [00:00<00:00, 548.94it/s]\n",
            "Calculating Loss: 100%|██████████| 4/4 [00:00<00:00, 6197.72it/s]\n",
            "Evaluating: 100%|██████████| 4/4 [00:00<00:00, 5187.76it/s]\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Moving batch acc: 0.75\n",
            "Moving batch correct size: 3\n",
            "Moving batch error size: 1\n",
            "Subset Error size: 1\n",
            "Subset Correct size: 2\n",
            "Subset score: 0.6666666666666666\n",
            "Subset batch acc: 0.6666666666666666\n",
            "Subset loss backward...\n",
            "setting pred name Generator_outputy_pred_3 score to 1.0\n",
            "setting pred name Generator_outputy_pred_1 score to 1.0\n",
            "setting pred name Generator_outputy_pred_2 score to 0.0\n",
            "Subset loss backward time: 10.694303750991821\n",
            "Optimizer propose...\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "\n",
            "\n",
            "Proposing:   0%|          | 0/5 [00:00<?, ?it/s]"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "New prompts:  [PromptData(id='53b75924-9350-4ffb-9710-64652feabf23', name='llm.system_prompt', data='You are a classifier. Given a question, classify it into one of the classes:\\nFormat: class_index. class_name, class_description\\n0. ABBR, Abbreviation: Questions about abbreviations and their meanings\\n1. DESC, Description: Questions seeking descriptions of people, things, or concepts\\n2. ENTY, Entity: Questions about entities (e.g., animals, colors, inventions)\\n3. HUM, Human: Questions about people or organizations\\n4. LOC, Location: Questions about places, cities, countries\\n5. NUM, Numeric: Questions seeking numeric answers (e.g., dates, amounts, distances)\\n- Do not attempt to answer the question directly. Ensure your classification is precise and reflects the specific focus of the inquiry:', requires_opt=True), PromptData(id='2218906f-9600-4ff8-8532-a8038ef6cb63', name='llm.output_format_str', data='Your output should be formatted as a standard YAML instance with the following schema:\\n```\\nrationale: Your step-by-step reasoning to classify the question to class_name (str) (optional)\\nclass_name: One of {ABBR, ENTY, DESC, HUM, LOC, NUM} (str) (optional)\\n```\\n-Make sure to always enclose the YAML output in triple backticks (```). Please do not add anything other than valid YAML output!\\n-Follow the YAML formatting conventions with an indent of 2 spaces.\\n-DO NOT mistaken the \"properties\" and \"type\" in the schema as the actual fields in the YAML output.\\n-Quote the string values properly.', requires_opt=False), PromptData(id='8640ed64-3658-445d-ad82-011d398499f2', name='llm.few_shot_demos', data=None, requires_opt=True)]\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "\n",
            "\n",
            "\n",
            "Loading Data: 100%|██████████| 3/3 [00:00<00:00, 651.26it/s]\n",
            "Predicting: step(0): 0.6667 across 3 samples, Max potential: 0.6667: 100%|██████████| 3/3 [00:00<00:00,  3.59it/s]\n",
            "\n",
            "\n",
            "Proposing:  20%|██        | 1/5 [00:03<00:13,  3.27s/it]"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "completed_samples: 3, len: 3\n",
            "Fail subset check, try next proposal: 0.6666666666666666 <= 0.6666666666666666\n",
            "New prompts:  [PromptData(id='53b75924-9350-4ffb-9710-64652feabf23', name='llm.system_prompt', data='You are a classifier. Given a question, classify it into one of the following classes by following these guidelines: \\nFormat: class_index. class_name, class_description \\n0. ABBR, Abbreviation: Questions about abbreviations and their meanings \\n1. DESC, Description: Questions seeking descriptions of people, things, or concepts \\n2. ENTY, Entity: Questions about entities (e.g., animals, colors, inventions) \\n3. HUM, Human: Questions about people or organizations \\n4. LOC, Location: Questions about places, cities, countries \\n5. NUM, Numeric: Questions seeking numeric answers (e.g., dates, amounts, distances) \\n- Avoid attempting to directly answer the question; instead, focus on accurate classification based on specific criteria: \\n- Ensure the category accurately represents the essence of the question to avoid misclassification.', requires_opt=True), PromptData(id='2218906f-9600-4ff8-8532-a8038ef6cb63', name='llm.output_format_str', data='Your output should be formatted as a standard YAML instance with the following schema:\\n```\\nrationale: Your step-by-step reasoning to classify the question to class_name (str) (optional)\\nclass_name: One of {ABBR, ENTY, DESC, HUM, LOC, NUM} (str) (optional)\\n```\\n-Make sure to always enclose the YAML output in triple backticks (```). Please do not add anything other than valid YAML output!\\n-Follow the YAML formatting conventions with an indent of 2 spaces.\\n-DO NOT mistaken the \"properties\" and \"type\" in the schema as the actual fields in the YAML output.\\n-Quote the string values properly.', requires_opt=False), PromptData(id='8640ed64-3658-445d-ad82-011d398499f2', name='llm.few_shot_demos', data=None, requires_opt=True)]\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "\n",
            "\n",
            "\n",
            "Loading Data: 100%|██████████| 3/3 [00:00<00:00, 302.95it/s]\n",
            "Predicting: step(0): 0.6667 across 3 samples, Max potential: 0.6667: 100%|██████████| 3/3 [00:00<00:00,  4.25it/s]\n",
            "\n",
            "\n",
            "Proposing:  40%|████      | 2/5 [00:06<00:09,  3.24s/it]"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "completed_samples: 3, len: 3\n",
            "Fail subset check, try next proposal: 0.6666666666666666 <= 0.6666666666666666\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "Proposing:  40%|████      | 2/5 [00:07<00:10,  3.55s/it]\n",
            "Training Step: 1:   0%|          | 0/30 [00:18<?, ?it/s]\n",
            "Epoch:   0%|          | 0/1 [00:18<?, ?it/s]\n"
          ]
        },
        {
          "output_type": "error",
          "ename": "KeyboardInterrupt",
          "evalue": "",
          "traceback": [
            "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
            "\u001b[0;31mKeyboardInterrupt\u001b[0m                         Traceback (most recent call last)",
            "\u001b[0;32m<ipython-input-31-a934b5e59252>\u001b[0m in \u001b[0;36m<cell line: 1>\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mtrain\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m**\u001b[0m\u001b[0mgpt_3_model\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
            "\u001b[0;32m<ipython-input-30-d83a6f6f0e0a>\u001b[0m in \u001b[0;36mtrain\u001b[0;34m(model_client, model_kwargs, train_batch_size, raw_shots, bootstrap_shots, max_steps, num_workers, strategy, optimization_order, debug)\u001b[0m\n\u001b[1;32m     61\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     62\u001b[0m         \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"Starting model training...\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 63\u001b[0;31m         trainer.fit(\n\u001b[0m\u001b[1;32m     64\u001b[0m             \u001b[0mtrain_dataset\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mtrain_dataset\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     65\u001b[0m             \u001b[0mval_dataset\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mtest_dataset\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/adalflow/optim/trainer/trainer.py\u001b[0m in \u001b[0;36mfit\u001b[0;34m(self, adaltask, train_loader, train_dataset, val_dataset, test_dataset, debug, save_traces, raw_shots, bootstrap_shots, resume_from_ckpt)\u001b[0m\n\u001b[1;32m    477\u001b[0m                     \u001b[0mstarting_step\u001b[0m \u001b[0;34m+=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmax_steps\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    478\u001b[0m                 \u001b[0;32melif\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mstrategy\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m\"constrained\"\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 479\u001b[0;31m                     trainer_results = self._fit_text_grad_constraint(\n\u001b[0m\u001b[1;32m    480\u001b[0m                         \u001b[0mtrain_loader\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    481\u001b[0m                         \u001b[0mval_dataset\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/adalflow/optim/trainer/trainer.py\u001b[0m in \u001b[0;36m_fit_text_grad_constraint\u001b[0;34m(self, train_loader, val_dataset, test_dataset, trainer_results, starting_step)\u001b[0m\n\u001b[1;32m   1779\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1780\u001b[0m                 all_samples, all_losses, all_y_preds = (\n\u001b[0;32m-> 1781\u001b[0;31m                     self._text_grad_constraint_propose_step(\n\u001b[0m\u001b[1;32m   1782\u001b[0m                         \u001b[0msteps\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0msteps\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1783\u001b[0m                         \u001b[0mall_samples\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mall_samples\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/adalflow/optim/trainer/trainer.py\u001b[0m in \u001b[0;36m_text_grad_constraint_propose_step\u001b[0;34m(self, steps, all_samples, all_losses, all_y_preds, include_demo_optimizers)\u001b[0m\n\u001b[1;32m   1657\u001b[0m             \u001b[0;31m# print(f\"Proposing step: {i}\")\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1658\u001b[0m             \u001b[0;31m# self.optimizer.propose()\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1659\u001b[0;31m             \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_propose_text_optimizers\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m  \u001b[0;31m# new prompts\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m   1660\u001b[0m             \u001b[0;32mif\u001b[0m \u001b[0minclude_demo_optimizers\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1661\u001b[0m                 \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_demo_optimizers_propose\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/adalflow/optim/trainer/trainer.py\u001b[0m in \u001b[0;36m_propose_text_optimizers\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m    857\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0m_propose_text_optimizers\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    858\u001b[0m         \u001b[0;32mfor\u001b[0m \u001b[0mtext_optimizer\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtext_optimizers\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 859\u001b[0;31m             \u001b[0mtext_optimizer\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mpropose\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    860\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    861\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0m_get_trainable_text_params\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/adalflow/optim/text_grad/tgd_optimizer.py\u001b[0m in \u001b[0;36mpropose\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m    323\u001b[0m             }\n\u001b[1;32m    324\u001b[0m             \u001b[0;31m# turn off cache\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 325\u001b[0;31m             response = self.llm_optimizer.call(\n\u001b[0m\u001b[1;32m    326\u001b[0m                 \u001b[0mprompt_kwargs\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mprompt_kwargs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0muse_cache\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mnot\u001b[0m \u001b[0mno_cache\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    327\u001b[0m             )\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/adalflow/core/generator.py\u001b[0m in \u001b[0;36mcall\u001b[0;34m(self, prompt_kwargs, model_kwargs, use_cache, id)\u001b[0m\n\u001b[1;32m    771\u001b[0m         \u001b[0;31m# call the model client\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    772\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 773\u001b[0;31m         \u001b[0mcompletion\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    774\u001b[0m         \u001b[0muse_cache\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0muse_cache\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0muse_cache\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0;32mNone\u001b[0m \u001b[0;32melse\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_use_cache\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    775\u001b[0m         \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/adalflow/core/generator.py\u001b[0m in \u001b[0;36m_model_client_call\u001b[0;34m(self, api_kwargs, use_cache)\u001b[0m\n\u001b[1;32m    345\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    346\u001b[0m                 \u001b[0mcached_completion\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_check_cache\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mindex_content\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 347\u001b[0;31m                 \u001b[0;32mif\u001b[0m \u001b[0mcached_completion\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    348\u001b[0m                     \u001b[0;32mreturn\u001b[0m \u001b[0mcached_completion\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    349\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/backoff/_sync.py\u001b[0m in \u001b[0;36mretry\u001b[0;34m(*args, **kwargs)\u001b[0m\n\u001b[1;32m    103\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    104\u001b[0m             \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 105\u001b[0;31m                 \u001b[0mret\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtarget\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    106\u001b[0m             \u001b[0;32mexcept\u001b[0m \u001b[0mexception\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0me\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    107\u001b[0m                 \u001b[0mmax_tries_exceeded\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m(\u001b[0m\u001b[0mtries\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0mmax_tries_value\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/adalflow/components/model_client/openai_client.py\u001b[0m in \u001b[0;36mcall\u001b[0;34m(self, api_kwargs, model_type)\u001b[0m\n\u001b[1;32m    285\u001b[0m                 \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mchat_completion_parser\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mhandle_streaming_response\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    286\u001b[0m                 \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msync_client\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mchat\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcompletions\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcreate\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m**\u001b[0m\u001b[0mapi_kwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 287\u001b[0;31m             \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msync_client\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mchat\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcompletions\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcreate\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m**\u001b[0m\u001b[0mapi_kwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    288\u001b[0m         \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    289\u001b[0m             \u001b[0;32mraise\u001b[0m \u001b[0mValueError\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34mf\"model_type {model_type} is not supported\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/openai/_utils/_utils.py\u001b[0m in \u001b[0;36mwrapper\u001b[0;34m(*args, **kwargs)\u001b[0m\n\u001b[1;32m    273\u001b[0m                         \u001b[0mmsg\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34mf\"Missing required argument: {quote(missing[0])}\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    274\u001b[0m                 \u001b[0;32mraise\u001b[0m \u001b[0mTypeError\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmsg\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 275\u001b[0;31m             \u001b[0;32mreturn\u001b[0m \u001b[0mfunc\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    276\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    277\u001b[0m         \u001b[0;32mreturn\u001b[0m \u001b[0mwrapper\u001b[0m  \u001b[0;31m# type: ignore\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/openai/resources/chat/completions.py\u001b[0m in \u001b[0;36mcreate\u001b[0;34m(self, messages, model, audio, frequency_penalty, function_call, functions, logit_bias, logprobs, max_completion_tokens, max_tokens, metadata, modalities, n, parallel_tool_calls, prediction, presence_penalty, response_format, seed, service_tier, stop, store, stream, stream_options, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)\u001b[0m\n\u001b[1;32m    827\u001b[0m     ) -> ChatCompletion | Stream[ChatCompletionChunk]:\n\u001b[1;32m    828\u001b[0m         \u001b[0mvalidate_response_format\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mresponse_format\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 829\u001b[0;31m         return self._post(\n\u001b[0m\u001b[1;32m    830\u001b[0m             \u001b[0;34m\"/chat/completions\"\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    831\u001b[0m             body=maybe_transform(\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/openai/_base_client.py\u001b[0m in \u001b[0;36mpost\u001b[0;34m(self, path, cast_to, body, options, files, stream, stream_cls)\u001b[0m\n\u001b[1;32m   1276\u001b[0m             \u001b[0mmethod\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m\"post\"\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0murl\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mpath\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mjson_data\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mbody\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mfiles\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mto_httpx_files\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mfiles\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0moptions\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1277\u001b[0m         )\n\u001b[0;32m-> 1278\u001b[0;31m         \u001b[0;32mreturn\u001b[0m \u001b[0mcast\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mResponseT\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrequest\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mcast_to\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mopts\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstream\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mstream\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstream_cls\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mstream_cls\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m   1279\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1280\u001b[0m     def patch(\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/openai/_base_client.py\u001b[0m in \u001b[0;36mrequest\u001b[0;34m(self, cast_to, options, remaining_retries, stream, stream_cls)\u001b[0m\n\u001b[1;32m    953\u001b[0m             \u001b[0mretries_taken\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    954\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 955\u001b[0;31m         return self._request(\n\u001b[0m\u001b[1;32m    956\u001b[0m             \u001b[0mcast_to\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mcast_to\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    957\u001b[0m             \u001b[0moptions\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0moptions\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/openai/_base_client.py\u001b[0m in \u001b[0;36m_request\u001b[0;34m(self, cast_to, options, retries_taken, stream, stream_cls)\u001b[0m\n\u001b[1;32m    989\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    990\u001b[0m         \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 991\u001b[0;31m             response = self._client.send(\n\u001b[0m\u001b[1;32m    992\u001b[0m                 \u001b[0mrequest\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    993\u001b[0m                 \u001b[0mstream\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mstream\u001b[0m \u001b[0;32mor\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_should_stream_response_body\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mrequest\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mrequest\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/httpx/_client.py\u001b[0m in \u001b[0;36msend\u001b[0;34m(self, request, stream, auth, follow_redirects)\u001b[0m\n\u001b[1;32m    899\u001b[0m         \u001b[0mauth\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_build_request_auth\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mrequest\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mauth\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    900\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 901\u001b[0;31m         response = self._send_handling_auth(\n\u001b[0m\u001b[1;32m    902\u001b[0m             \u001b[0mrequest\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    903\u001b[0m             \u001b[0mauth\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mauth\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/httpx/_client.py\u001b[0m in \u001b[0;36m_send_handling_auth\u001b[0;34m(self, request, auth, follow_redirects, history)\u001b[0m\n\u001b[1;32m    927\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    928\u001b[0m             \u001b[0;32mwhile\u001b[0m \u001b[0;32mTrue\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 929\u001b[0;31m                 response = self._send_handling_redirects(\n\u001b[0m\u001b[1;32m    930\u001b[0m                     \u001b[0mrequest\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    931\u001b[0m                     \u001b[0mfollow_redirects\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mfollow_redirects\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/httpx/_client.py\u001b[0m in \u001b[0;36m_send_handling_redirects\u001b[0;34m(self, request, follow_redirects, history)\u001b[0m\n\u001b[1;32m    964\u001b[0m                 \u001b[0mhook\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mrequest\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    965\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 966\u001b[0;31m             \u001b[0mresponse\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_send_single_request\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mrequest\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    967\u001b[0m             \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    968\u001b[0m                 \u001b[0;32mfor\u001b[0m \u001b[0mhook\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_event_hooks\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m\"response\"\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/httpx/_client.py\u001b[0m in \u001b[0;36m_send_single_request\u001b[0;34m(self, request)\u001b[0m\n\u001b[1;32m   1000\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1001\u001b[0m         \u001b[0;32mwith\u001b[0m \u001b[0mrequest_context\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mrequest\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mrequest\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1002\u001b[0;31m             \u001b[0mresponse\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtransport\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mhandle_request\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mrequest\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m   1003\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1004\u001b[0m         \u001b[0;32massert\u001b[0m \u001b[0misinstance\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mresponse\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mstream\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mSyncByteStream\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py\u001b[0m in \u001b[0;36mhandle_request\u001b[0;34m(self, request)\u001b[0m\n\u001b[1;32m    216\u001b[0m         )\n\u001b[1;32m    217\u001b[0m         \u001b[0;32mwith\u001b[0m \u001b[0mmap_httpcore_exceptions\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 218\u001b[0;31m             \u001b[0mresp\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_pool\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mhandle_request\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mreq\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    219\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    220\u001b[0m         \u001b[0;32massert\u001b[0m \u001b[0misinstance\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mresp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mstream\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtyping\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mIterable\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/httpcore/_sync/connection_pool.py\u001b[0m in \u001b[0;36mhandle_request\u001b[0;34m(self, request)\u001b[0m\n\u001b[1;32m    260\u001b[0m                 \u001b[0;32mwith\u001b[0m \u001b[0mShieldCancellation\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    261\u001b[0m                     \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mresponse_closed\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mstatus\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 262\u001b[0;31m                 \u001b[0;32mraise\u001b[0m \u001b[0mexc\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    263\u001b[0m             \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    264\u001b[0m                 \u001b[0;32mbreak\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/httpcore/_sync/connection_pool.py\u001b[0m in \u001b[0;36mhandle_request\u001b[0;34m(self, request)\u001b[0m\n\u001b[1;32m    243\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    244\u001b[0m             \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 245\u001b[0;31m                 \u001b[0mresponse\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mconnection\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mhandle_request\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mrequest\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    246\u001b[0m             \u001b[0;32mexcept\u001b[0m \u001b[0mConnectionNotAvailable\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    247\u001b[0m                 \u001b[0;31m# The ConnectionNotAvailable exception is a special case, that\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/httpcore/_sync/connection.py\u001b[0m in \u001b[0;36mhandle_request\u001b[0;34m(self, request)\u001b[0m\n\u001b[1;32m     94\u001b[0m                 \u001b[0;32mraise\u001b[0m \u001b[0mConnectionNotAvailable\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     95\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 96\u001b[0;31m         \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_connection\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mhandle_request\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mrequest\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     97\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     98\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0m_connect\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mrequest\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mRequest\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m->\u001b[0m \u001b[0mNetworkStream\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/httpcore/_sync/http11.py\u001b[0m in \u001b[0;36mhandle_request\u001b[0;34m(self, request)\u001b[0m\n\u001b[1;32m    119\u001b[0m                 \u001b[0;32mwith\u001b[0m \u001b[0mTrace\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"response_closed\"\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mlogger\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mrequest\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mtrace\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    120\u001b[0m                     \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_response_closed\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 121\u001b[0;31m             \u001b[0;32mraise\u001b[0m \u001b[0mexc\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    122\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    123\u001b[0m     \u001b[0;31m# Sending the request...\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/httpcore/_sync/http11.py\u001b[0m in \u001b[0;36mhandle_request\u001b[0;34m(self, request)\u001b[0m\n\u001b[1;32m     97\u001b[0m                     \u001b[0mreason_phrase\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     98\u001b[0m                     \u001b[0mheaders\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 99\u001b[0;31m                 ) = self._receive_response_headers(**kwargs)\n\u001b[0m\u001b[1;32m    100\u001b[0m                 trace.return_value = (\n\u001b[1;32m    101\u001b[0m                     \u001b[0mhttp_version\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/httpcore/_sync/http11.py\u001b[0m in \u001b[0;36m_receive_response_headers\u001b[0;34m(self, request)\u001b[0m\n\u001b[1;32m    162\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    163\u001b[0m         \u001b[0;32mwhile\u001b[0m \u001b[0;32mTrue\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 164\u001b[0;31m             \u001b[0mevent\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_receive_event\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtimeout\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mtimeout\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    165\u001b[0m             \u001b[0;32mif\u001b[0m \u001b[0misinstance\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mevent\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mh11\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mResponse\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    166\u001b[0m                 \u001b[0;32mbreak\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/httpcore/_sync/http11.py\u001b[0m in \u001b[0;36m_receive_event\u001b[0;34m(self, timeout)\u001b[0m\n\u001b[1;32m    198\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    199\u001b[0m             \u001b[0;32mif\u001b[0m \u001b[0mevent\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0mh11\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mNEED_DATA\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 200\u001b[0;31m                 data = self._network_stream.read(\n\u001b[0m\u001b[1;32m    201\u001b[0m                     \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mREAD_NUM_BYTES\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtimeout\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mtimeout\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    202\u001b[0m                 )\n",
            "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/httpcore/_backends/sync.py\u001b[0m in \u001b[0;36mread\u001b[0;34m(self, max_bytes, timeout)\u001b[0m\n\u001b[1;32m     26\u001b[0m         \u001b[0;32mwith\u001b[0m \u001b[0mmap_exceptions\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mexc_map\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     27\u001b[0m             \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_sock\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msettimeout\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtimeout\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 28\u001b[0;31m             \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_sock\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrecv\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmax_bytes\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     29\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     30\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0mwrite\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mbuffer\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mbytes\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtimeout\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mtyping\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mOptional\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mfloat\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m->\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/lib/python3.10/ssl.py\u001b[0m in \u001b[0;36mrecv\u001b[0;34m(self, buflen, flags)\u001b[0m\n\u001b[1;32m   1286\u001b[0m                     \u001b[0;34m\"non-zero flags not allowed in calls to recv() on %s\"\u001b[0m \u001b[0;34m%\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1287\u001b[0m                     self.__class__)\n\u001b[0;32m-> 1288\u001b[0;31m             \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mread\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mbuflen\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m   1289\u001b[0m         \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1290\u001b[0m             \u001b[0;32mreturn\u001b[0m \u001b[0msuper\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrecv\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mbuflen\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mflags\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/lib/python3.10/ssl.py\u001b[0m in \u001b[0;36mread\u001b[0;34m(self, len, buffer)\u001b[0m\n\u001b[1;32m   1159\u001b[0m                 \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_sslobj\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mread\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mlen\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mbuffer\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1160\u001b[0m             \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1161\u001b[0;31m                 \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_sslobj\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mread\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mlen\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m   1162\u001b[0m         \u001b[0;32mexcept\u001b[0m \u001b[0mSSLError\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mx\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1163\u001b[0m             \u001b[0;32mif\u001b[0m \u001b[0mx\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0mSSL_ERROR_EOF\u001b[0m \u001b[0;32mand\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msuppress_ragged_eofs\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;31mKeyboardInterrupt\u001b[0m: "
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Issues and feedback\n",
        "\n",
        "If you encounter any issues, please report them on [GitHub Issues](https://github.com/SylphAI-Inc/AdalFlow/issues).\n",
        "\n",
        "For feedback or questions, use [GitHub Discussions](https://github.com/SylphAI-Inc/AdalFlow/discussions) or [Discord](https://discord.gg/ezzszrRZvT)."
      ],
      "metadata": {
        "id": "AmkbyxmuruUu"
      }
    }
  ]
}
