{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 🤗 Welcome to AdalFlow!\n",
    "## The library to build & auto-optimize any LLM task pipelines\n",
    "\n",
    "Thanks for trying us out! We're here to provide you with the best LLM application development experience you can dream of 😊 If you have any questions or concerns, [come talk to us on Discord](https://discord.gg/ezzszrRZvT); we're always here to help! ⭐ <i>Star us on <a href=\"https://github.com/SylphAI-Inc/AdalFlow\">GitHub</a></i> ⭐\n",
    "\n",
    "\n",
    "# Quick Links\n",
    "\n",
    "GitHub repo: https://github.com/SylphAI-Inc/AdalFlow\n",
    "\n",
    "Full tutorials: https://adalflow.sylph.ai/index.html\n",
    "\n",
    "Deep dive on each API: check out the [developer notes](https://adalflow.sylph.ai/tutorials/index.html).\n",
    "\n",
    "Common use cases along with auto-optimization: check out [Use cases](https://adalflow.sylph.ai/use_cases/index.html).\n",
    "\n",
    "# Author\n",
    "\n",
    "This notebook was created by community contributor [Ajith](https://github.com/ajithvcoder).\n",
    "\n",
    "# Outline\n",
    "\n",
    "This is a quick introduction to what AdalFlow is capable of. We will cover:\n",
    "\n",
    "* How to use `DataClass` with `DataClassParser`.\n",
    "* How to use nested dataclasses, testing both one and two levels of nesting.\n",
    "\n",
    "**Next: Try our [auto-optimization](https://colab.research.google.com/drive/1n3mHUWekTEYHiBdYBTw43TKlPN41A9za?usp=sharing)**\n",
    "\n",
    "\n",
    "# Installation\n",
    "\n",
    "1. Use `pip` to install the `adalflow` Python package. We will need the `openai`, `groq`, and `datasets` extras.\n",
    "\n",
    "  ```bash\n",
    "  pip install 'adalflow[openai,groq,datasets]'\n",
    "  ```\n",
    "2. Set up the `openai` and `groq` API keys in the environment variables."
   ]
  },
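  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you prefer a `.env` file over typing keys interactively, `setup_env` from `adalflow.utils` will load it for you. Below is a rough, stdlib-only sketch of the idea (the real implementation uses `python-dotenv` and handles more edge cases):\n",
    "\n",
    "```python\n",
    "import os\n",
    "import tempfile\n",
    "\n",
    "# Write a throwaway .env file purely for illustration; use your real keys in practice\n",
    "with tempfile.NamedTemporaryFile('w', suffix='.env', delete=False) as f:\n",
    "    f.write('OPENAI_API_KEY=sk-demo\\nGROQ_API_KEY=gsk-demo\\n')\n",
    "    env_path = f.name\n",
    "\n",
    "# Minimal parse-and-export loop: the core idea behind setup_env()\n",
    "with open(env_path) as f:\n",
    "    for line in f:\n",
    "        line = line.strip()\n",
    "        if line and not line.startswith('#'):\n",
    "            key, _, value = line.partition('=')\n",
    "            os.environ.setdefault(key.strip(), value.strip())\n",
    "```"
   ]
  },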
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "id": "Ab_OmE6XTl4h"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Found existing installation: adalflow 1.0.0b3\n",
      "Uninstalling adalflow-1.0.0b3:\n",
      "  Successfully uninstalled adalflow-1.0.0b3\n",
      "Collecting adalflow[datasets,groq,openai]\n",
      "  Using cached adalflow-1.0.0b3-py3-none-any.whl.metadata (15 kB)\n",
      "Requirement already satisfied: PyYAML>=6.0.1 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from adalflow[datasets,groq,openai]) (6.0.2)\n",
      "Requirement already satisfied: backoff<3.0.0,>=2.2.1 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from adalflow[datasets,groq,openai]) (2.2.1)\n",
      "Requirement already satisfied: colorama<0.5.0,>=0.4.6 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from adalflow[datasets,groq,openai]) (0.4.6)\n",
      "Requirement already satisfied: diskcache<6.0.0,>=5.6.3 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from adalflow[datasets,groq,openai]) (5.6.3)\n",
      "Requirement already satisfied: groq>=0.9.0 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from adalflow[datasets,groq,openai]) (0.9.0)\n",
      "Requirement already satisfied: jinja2<4.0.0,>=3.1.3 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from adalflow[datasets,groq,openai]) (3.1.5)\n",
      "Requirement already satisfied: jsonlines<5.0.0,>=4.0.0 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from adalflow[datasets,groq,openai]) (4.0.0)\n",
      "Requirement already satisfied: nest-asyncio<2.0.0,>=1.6.0 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from adalflow[datasets,groq,openai]) (1.6.0)\n",
      "Requirement already satisfied: numpy in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from adalflow[datasets,groq,openai]) (2.0.2)\n",
      "Requirement already satisfied: openai>=1.12.0 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from adalflow[datasets,groq,openai]) (1.59.7)\n",
      "Requirement already satisfied: python-dotenv<2.0.0,>=1.0.1 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from adalflow[datasets,groq,openai]) (1.0.1)\n",
      "Requirement already satisfied: tiktoken>=0.3.3 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from adalflow[datasets,groq,openai]) (0.8.0)\n",
      "Requirement already satisfied: tqdm<5.0.0,>=4.66.4 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from adalflow[datasets,groq,openai]) (4.67.1)\n",
      "Requirement already satisfied: anyio<5,>=3.5.0 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from groq>=0.9.0->adalflow[datasets,groq,openai]) (4.8.0)\n",
      "Requirement already satisfied: distro<2,>=1.7.0 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from groq>=0.9.0->adalflow[datasets,groq,openai]) (1.9.0)\n",
      "Requirement already satisfied: httpx<1,>=0.23.0 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from groq>=0.9.0->adalflow[datasets,groq,openai]) (0.27.2)\n",
      "Requirement already satisfied: pydantic<3,>=1.9.0 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from groq>=0.9.0->adalflow[datasets,groq,openai]) (2.10.5)\n",
      "Requirement already satisfied: sniffio in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from groq>=0.9.0->adalflow[datasets,groq,openai]) (1.3.1)\n",
      "Requirement already satisfied: typing-extensions<5,>=4.7 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from groq>=0.9.0->adalflow[datasets,groq,openai]) (4.12.2)\n",
      "Requirement already satisfied: MarkupSafe>=2.0 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from jinja2<4.0.0,>=3.1.3->adalflow[datasets,groq,openai]) (3.0.2)\n",
      "Requirement already satisfied: attrs>=19.2.0 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from jsonlines<5.0.0,>=4.0.0->adalflow[datasets,groq,openai]) (24.3.0)\n",
      "Requirement already satisfied: jiter<1,>=0.4.0 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from openai>=1.12.0->adalflow[datasets,groq,openai]) (0.8.2)\n",
      "Requirement already satisfied: regex>=2022.1.18 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from tiktoken>=0.3.3->adalflow[datasets,groq,openai]) (2024.11.6)\n",
      "Requirement already satisfied: requests>=2.26.0 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from tiktoken>=0.3.3->adalflow[datasets,groq,openai]) (2.32.3)\n",
      "Requirement already satisfied: idna>=2.8 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from anyio<5,>=3.5.0->groq>=0.9.0->adalflow[datasets,groq,openai]) (3.10)\n",
      "Requirement already satisfied: certifi in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from httpx<1,>=0.23.0->groq>=0.9.0->adalflow[datasets,groq,openai]) (2024.12.14)\n",
      "Requirement already satisfied: httpcore==1.* in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from httpx<1,>=0.23.0->groq>=0.9.0->adalflow[datasets,groq,openai]) (1.0.7)\n",
      "Requirement already satisfied: h11<0.15,>=0.13 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from httpcore==1.*->httpx<1,>=0.23.0->groq>=0.9.0->adalflow[datasets,groq,openai]) (0.14.0)\n",
      "Requirement already satisfied: annotated-types>=0.6.0 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from pydantic<3,>=1.9.0->groq>=0.9.0->adalflow[datasets,groq,openai]) (0.7.0)\n",
      "Requirement already satisfied: pydantic-core==2.27.2 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from pydantic<3,>=1.9.0->groq>=0.9.0->adalflow[datasets,groq,openai]) (2.27.2)\n",
      "Requirement already satisfied: charset-normalizer<4,>=2 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from requests>=2.26.0->tiktoken>=0.3.3->adalflow[datasets,groq,openai]) (3.4.1)\n",
      "Requirement already satisfied: urllib3<3,>=1.21.1 in /Users/liyin/Documents/test/LightRAG/.venv/lib/python3.12/site-packages (from requests>=2.26.0->tiktoken>=0.3.3->adalflow[datasets,groq,openai]) (2.3.0)\n",
      "Using cached adalflow-1.0.0b3-py3-none-any.whl (298 kB)\n",
      "Installing collected packages: adalflow\n",
      "Successfully installed adalflow-1.0.0b3\n",
      "\n",
      "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.1.2\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m25.0\u001b[0m\n",
      "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "from IPython.display import clear_output\n",
    "\n",
    "!pip uninstall -y adalflow\n",
    "!pip install --pre 'adalflow[openai,groq,datasets]'\n",
    "\n",
    "# clear_output()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'1.0.0.beta.3'"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import adalflow\n",
    "\n",
    "adalflow.__version__"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Uncomment these lines if you run into dependency issues in Colab:\n",
    "\n",
    "# !pip uninstall httpx anyio -y\n",
    "# !pip install \"anyio>=3.1.0,<4.0\"\n",
    "# !pip install httpx==0.24.1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "id": "PbAIsBeeTQUk"
   },
   "outputs": [],
   "source": [
    "import re\n",
    "from adalflow.core import Component, Generator\n",
    "from adalflow.components.model_client import OpenAIClient\n",
    "from adalflow.components.model_client import GroqAPIClient\n",
    "from adalflow.utils import (\n",
    "    setup_env,\n",
    ")  # make sure you have a .env file with OPENAI_API_KEY and GROQ_API_KEY"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "kRymwpwHTQUm",
    "outputId": "6a992f52-1661-4002-ef74-ed26938c6baa"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "API keys have been set.\n"
     ]
    }
   ],
   "source": [
    "from getpass import getpass\n",
    "import os\n",
    "\n",
    "# Prompt the user to enter their OpenAI API key securely\n",
    "openai_api_key = getpass(\"Please enter your OpenAI API key: \")\n",
    "\n",
    "# Set environment variables\n",
    "os.environ[\"OPENAI_API_KEY\"] = openai_api_key\n",
    "\n",
    "print(\"API keys have been set.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "id": "czGDvnVUTQUm"
   },
   "outputs": [],
   "source": [
    "template_doc = r\"\"\"<START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\n",
    "<START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>\"\"\""
   ]
  },
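  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The template above uses Jinja2-style `{{ ... }}` placeholders. AdalFlow's `Prompt` detects these and exposes them as `prompt_variables` (you can see `prompt_variables: ['input_str']` in the component printouts later in this notebook). A simplified sketch of that detection, not the real parser:\n",
    "\n",
    "```python\n",
    "import re\n",
    "\n",
    "template_doc = r'''<START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\n",
    "<START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>'''\n",
    "\n",
    "# Collect Jinja2-style {{ variable }} placeholders\n",
    "prompt_variables = re.findall(r'\\{\\{\\s*(\\w+)\\s*\\}\\}', template_doc)\n",
    "print(prompt_variables)  # ['input_str']\n",
    "```"
   ]
  },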
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "PPs3gHqeTQUn"
   },
   "source": [
    "Optionally, turn on the library log to help with debugging; uncomment the cell below to enable it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "98QNsOcSTQUn",
    "outputId": "d63cba1b-6087-4b04-bb2b-0a9d9d4500a5"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<RootLogger root (INFO)>"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# from adalflow.utils import get_logger\n",
    "\n",
    "# get_logger()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "id": "b3ey1lozTQUo"
   },
   "outputs": [],
   "source": [
    "class DocQA(Component):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.doc = Generator(\n",
    "            template=template_doc,\n",
    "            model_client=OpenAIClient(),\n",
    "            model_kwargs={\"model\": \"gpt-3.5-turbo\"},\n",
    "        )\n",
    "\n",
    "    def call(self, query: str) -> str:\n",
    "        return self.doc(prompt_kwargs={\"input_str\": query}).data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "TZAHSrbUTQUo",
    "outputId": "66e81fb3-17f9-4570-dbbd-681cad1afc65"
   },
   "outputs": [],
   "source": [
    "doc = DocQA()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "f-y6l44PTQUp",
    "outputId": "e24aabd5-d758-4700-fa0d-46b66a88c412"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'type': 'DocQA', 'data': {'_components': {'_ordered_dict': True, 'data': [('doc', {'type': 'Generator', 'data': {'model_str': 'OpenAIClient_gpt-3_5-turbo', 'cache_path': PosixPath('/Users/liyin/.adalflow/cache_OpenAIClient_gpt-3_5-turbo.db'), 'callbacks': {'on_success': [], 'on_failure': [], 'on_complete': []}, 'cache': <diskcache.core.Cache object at 0x10ea4a9c0>, '_components': {'_ordered_dict': True, 'data': [('prompt', {'type': 'Prompt', 'data': {'_components': {'_ordered_dict': True, 'data': []}, '_parameters': {'_ordered_dict': True, 'data': []}, 'training': False, 'teacher_mode': False, 'tracing': False, 'name': 'Prompt', '_init_args': {'template': None, 'prompt_kwargs': {}}, 'template': '<START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\\n<START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>', 'prompt_variables': ['input_str'], 'prompt_kwargs': {}}}), ('model_client', {'type': 'OpenAIClient', 'data': {'_components': {'_ordered_dict': True, 'data': []}, '_parameters': {'_ordered_dict': True, 'data': []}, 'training': False, 'teacher_mode': False, 'tracing': False, 'name': 'OpenAIClient', '_init_args': {'api_key': None, 'chat_completion_parser': None, 'input_type': 'text'}, '_api_key': None, 'chat_completion_parser': <function get_first_message_content at 0x10eaac2c0>, '_input_type': 'text'}})]}, '_parameters': {'_ordered_dict': True, 'data': []}, 'training': False, 'teacher_mode': False, 'tracing': False, 'name': 'Generator', '_init_args': {'model_client': None, 'model_kwargs': {}, 'template': None, 'prompt_kwargs': {}, 'output_processors': None, 'name': None, 'cache_path': None, 'use_cache': False}, 'id': '76123db7-c612-4256-ab49-5590c57ecf86', 'desc': 'Generate a response using LLM model.', 'backward_engine': None, 'template': '<START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\\n<START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>', 'prompt_kwargs': {}, 'model_kwargs': {'model': 'gpt-3.5-turbo'}, 'output_processors': None, 
'mock_output': False, 'mock_output_data': 'mock data', '_use_cache': False, '_kwargs': {'model_client': {'type': 'OpenAIClient', 'data': {'_components': {'_ordered_dict': True, 'data': []}, '_parameters': {'_ordered_dict': True, 'data': []}, 'training': False, 'teacher_mode': False, 'tracing': False, 'name': 'OpenAIClient', '_init_args': {'api_key': None, 'chat_completion_parser': None, 'input_type': 'text'}, '_api_key': None, 'chat_completion_parser': <function get_first_message_content at 0x10eaac2c0>, '_input_type': 'text'}}, 'model_kwargs': {'model': 'gpt-3.5-turbo'}, 'template': '<START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\\n<START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>', 'prompt_kwargs': {}, 'output_processors': None, 'name': None, 'cache_path': None, 'use_cache': False}, '_teacher': None, '_trace_api_kwargs': {}}})]}, '_parameters': {'_ordered_dict': True, 'data': []}, 'training': False, 'teacher_mode': False, 'tracing': False, 'name': 'DocQA', '_init_args': {}}}\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'_components': OrderedDict([('doc',\n",
       "               Generator(\n",
       "                 model_kwargs={'model': 'gpt-3.5-turbo'}, trainable_prompt_kwargs=[], prompt=template: <START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\n",
       "                 <START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>, prompt_variables: ['input_str']\n",
       "                 (prompt): template: <START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\n",
       "                 <START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>, prompt_variables: ['input_str']\n",
       "                 (model_client): OpenAIClient()\n",
       "               ))]),\n",
       " '_parameters': OrderedDict(),\n",
       " 'training': False,\n",
       " 'teacher_mode': False,\n",
       " 'tracing': False,\n",
       " 'name': 'DocQA',\n",
       " '_init_args': {}}"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# serialize the component states\n",
    "states = doc.to_dict()\n",
    "print(states)\n",
    "doc.__dict__"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "P81kIS2qTQUp",
    "outputId": "d8e0e398-d704-4a85-8692-66a8c570b910"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'_components': OrderedDict([('doc',\n",
       "               Generator(\n",
       "                 model_kwargs={'model': 'gpt-3.5-turbo'}, trainable_prompt_kwargs=[], prompt=template: <START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\n",
       "                 <START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>, prompt_variables: ['input_str']\n",
       "                 (prompt): template: <START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\n",
       "                 <START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>, prompt_variables: ['input_str']\n",
       "                 (model_client): OpenAIClient()\n",
       "               ))]),\n",
       " '_parameters': OrderedDict(),\n",
       " 'training': False,\n",
       " 'teacher_mode': False,\n",
       " 'tracing': False,\n",
       " 'name': 'DocQA',\n",
       " '_init_args': {}}"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# restore the states\n",
    "doc2 = DocQA.from_dict(states)\n",
    "doc2.__dict__\n",
    "# doc2.to_dict()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "198xYpLGTQUp",
    "outputId": "ffd33d12-6db0-45c2-dfb1-3d57460ad4c9"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "True\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'type': 'DocQA',\n",
       " 'data': {'_components': {'_ordered_dict': True,\n",
       "   'data': [('doc',\n",
       "     {'type': 'Generator',\n",
       "      'data': {'model_str': 'OpenAIClient_gpt-3_5-turbo',\n",
       "       'cache_path': PosixPath('/Users/liyin/.adalflow/cache_OpenAIClient_gpt-3_5-turbo.db'),\n",
       "       'callbacks': {'on_success': [], 'on_failure': [], 'on_complete': []},\n",
       "       'cache': <diskcache.core.Cache at 0x10ea4a9c0>,\n",
       "       '_components': {'_ordered_dict': True,\n",
       "        'data': [('prompt',\n",
       "          {'type': 'Prompt',\n",
       "           'data': {'_components': {'_ordered_dict': True, 'data': []},\n",
       "            '_parameters': {'_ordered_dict': True, 'data': []},\n",
       "            'training': False,\n",
       "            'teacher_mode': False,\n",
       "            'tracing': False,\n",
       "            'name': 'Prompt',\n",
       "            '_init_args': {'template': None, 'prompt_kwargs': {}},\n",
       "            'template': '<START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\\n<START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>',\n",
       "            'prompt_variables': ['input_str'],\n",
       "            'prompt_kwargs': {}}}),\n",
       "         ('model_client',\n",
       "          {'type': 'OpenAIClient',\n",
       "           'data': {'_components': {'_ordered_dict': True, 'data': []},\n",
       "            '_parameters': {'_ordered_dict': True, 'data': []},\n",
       "            'training': False,\n",
       "            'teacher_mode': False,\n",
       "            'tracing': False,\n",
       "            'name': 'OpenAIClient',\n",
       "            '_init_args': {'api_key': None,\n",
       "             'chat_completion_parser': None,\n",
       "             'input_type': 'text'},\n",
       "            '_api_key': None,\n",
       "            'chat_completion_parser': <function adalflow.components.model_client.openai_client.get_first_message_content(completion: openai.types.chat.chat_completion.ChatCompletion) -> str>,\n",
       "            '_input_type': 'text'}})]},\n",
       "       '_parameters': {'_ordered_dict': True, 'data': []},\n",
       "       'training': False,\n",
       "       'teacher_mode': False,\n",
       "       'tracing': False,\n",
       "       'name': 'Generator',\n",
       "       '_init_args': {'model_client': None,\n",
       "        'model_kwargs': {},\n",
       "        'template': None,\n",
       "        'prompt_kwargs': {},\n",
       "        'output_processors': None,\n",
       "        'name': None,\n",
       "        'cache_path': None,\n",
       "        'use_cache': False},\n",
       "       'id': '76123db7-c612-4256-ab49-5590c57ecf86',\n",
       "       'desc': 'Generate a response using LLM model.',\n",
       "       'backward_engine': None,\n",
       "       'template': '<START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\\n<START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>',\n",
       "       'prompt_kwargs': {},\n",
       "       'model_kwargs': {'model': 'gpt-3.5-turbo'},\n",
       "       'output_processors': None,\n",
       "       'mock_output': False,\n",
       "       'mock_output_data': 'mock data',\n",
       "       '_use_cache': False,\n",
       "       '_kwargs': {'model_client': {'type': 'OpenAIClient',\n",
       "         'data': {'_components': {'_ordered_dict': True, 'data': []},\n",
       "          '_parameters': {'_ordered_dict': True, 'data': []},\n",
       "          'training': False,\n",
       "          'teacher_mode': False,\n",
       "          'tracing': False,\n",
       "          'name': 'OpenAIClient',\n",
       "          '_init_args': {'api_key': None,\n",
       "           'chat_completion_parser': None,\n",
       "           'input_type': 'text'},\n",
       "          '_api_key': None,\n",
       "          'chat_completion_parser': <function adalflow.components.model_client.openai_client.get_first_message_content(completion: openai.types.chat.chat_completion.ChatCompletion) -> str>,\n",
       "          '_input_type': 'text'}},\n",
       "        'model_kwargs': {'model': 'gpt-3.5-turbo'},\n",
       "        'template': '<START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\\n<START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>',\n",
       "        'prompt_kwargs': {},\n",
       "        'output_processors': None,\n",
       "        'name': None,\n",
       "        'cache_path': None,\n",
       "        'use_cache': False},\n",
       "       '_teacher': None,\n",
       "       '_trace_api_kwargs': {}}})]},\n",
       "  '_parameters': {'_ordered_dict': True, 'data': []},\n",
       "  'training': False,\n",
       "  'teacher_mode': False,\n",
       "  'tracing': False,\n",
       "  'name': 'DocQA',\n",
       "  '_init_args': {}}}"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "print(doc2.to_dict() == doc.to_dict())\n",
    "doc2.to_dict()"
   ]
  },
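  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`to_dict` gives a serializable snapshot of a component, and `from_dict` rebuilds an equal component from it. The same round-trip idea, sketched with a plain Python dataclass (illustrative only, not AdalFlow's implementation):\n",
    "\n",
    "```python\n",
    "from dataclasses import dataclass, asdict\n",
    "\n",
    "@dataclass\n",
    "class DocState:\n",
    "    model: str\n",
    "    template: str\n",
    "\n",
    "# Snapshot to a plain dict, then rebuild an equal object from it\n",
    "original = DocState(model='gpt-3.5-turbo', template='{{input_str}}')\n",
    "states = asdict(original)\n",
    "restored = DocState(**states)\n",
    "print(restored == original)  # True\n",
    "```"
   ]
  },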
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "Ulb1OWxxTQUq",
    "outputId": "99972fcd-ed52-43b4-e461-a76c19bd9522"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Error calling the model: Error code: 401 - {'error': {'message': 'Incorrect API key provided: /Users/l******************************************rnel. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}\n",
      "Error calling the model: Error code: 401 - {'error': {'message': 'Incorrect API key provided: /Users/l******************************************rnel. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "None\n"
     ]
    }
   ],
   "source": [
    "print(doc(\"What is the best treatment for headache?\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "POVal8CgTQUq",
    "outputId": "2fadb1d6-b858-4964-9045-8ea7454178e3"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Error calling the model: Error code: 401 - {'error': {'message': 'Incorrect API key provided: /Users/l******************************************rnel. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}\n",
      "Error calling the model: Error code: 401 - {'error': {'message': 'Incorrect API key provided: /Users/l******************************************rnel. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "None\n"
     ]
    }
   ],
   "source": [
    "print(doc2(\"What is the best treatment for headache?\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "jhgSpKrMTQUr",
    "outputId": "15615bf7-2b72-4ac7-d1fe-f436a7304734"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "('', DocQA(\n",
      "  (doc): Generator(\n",
      "    model_kwargs={'model': 'gpt-3.5-turbo'}, trainable_prompt_kwargs=[], prompt=template: <START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\n",
      "    <START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>, prompt_variables: ['input_str']\n",
      "    (prompt): template: <START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\n",
      "    <START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>, prompt_variables: ['input_str']\n",
      "    (model_client): OpenAIClient()\n",
      "  )\n",
      "))\n",
      "('doc', Generator(\n",
      "  model_kwargs={'model': 'gpt-3.5-turbo'}, trainable_prompt_kwargs=[], prompt=template: <START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\n",
      "  <START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>, prompt_variables: ['input_str']\n",
      "  (prompt): template: <START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\n",
      "  <START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>, prompt_variables: ['input_str']\n",
      "  (model_client): OpenAIClient()\n",
      "))\n",
      "('doc.prompt', template: <START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\n",
      "<START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>, prompt_variables: ['input_str'])\n",
      "('doc.model_client', OpenAIClient())\n"
     ]
    }
   ],
   "source": [
    "# list all subcomponents, including the root component itself\n",
    "\n",
    "for subcomponent in doc.named_components():\n",
    "    print(subcomponent)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "XjIHAY6bTQUr"
   },
   "source": [
    "Let's add a parameter."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "id": "vxgjAUiFTQUr"
   },
   "outputs": [],
   "source": [
    "from adalflow.optim.parameter import Parameter\n",
    "\n",
    "doc.register_parameter(\"demo\", param=Parameter(data=\"demo\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "86C-h1e1TQUr",
    "outputId": "57cab4d0-eddf-433d-e364-5d7f07072fbf"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "('demo', Parameter(name=param_f0c65c95-61f1-462f-9f71-67036d660579, requires_opt=True, param_type=none (), role_desc=, data=demo, predecessors=set(), gradients=set(),            traces={}))\n"
     ]
    }
   ],
   "source": [
    "# list all parameters\n",
    "for param in doc.named_parameters():\n",
    "    print(param)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "_s2MPukiTQUr",
    "outputId": "b51c7d09-fb52-42d9-b2d5-4f44f5d22dc9"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'type': 'DocQA',\n",
       " 'data': {'_components': {'_ordered_dict': True,\n",
       "   'data': [('doc',\n",
       "     {'type': 'Generator',\n",
       "      'data': {'model_str': 'OpenAIClient_gpt-3_5-turbo',\n",
       "       'cache_path': PosixPath('/Users/liyin/.adalflow/cache_OpenAIClient_gpt-3_5-turbo.db'),\n",
       "       'callbacks': {'on_success': [], 'on_failure': [], 'on_complete': []},\n",
       "       'cache': <diskcache.core.Cache at 0x11716a0f0>,\n",
       "       '_components': {'_ordered_dict': True,\n",
       "        'data': [('prompt',\n",
       "          {'type': 'Prompt',\n",
       "           'data': {'_components': {'_ordered_dict': True, 'data': []},\n",
       "            '_parameters': {'_ordered_dict': True, 'data': []},\n",
       "            'training': False,\n",
       "            'teacher_mode': False,\n",
       "            'tracing': False,\n",
       "            'name': 'Prompt',\n",
       "            '_init_args': {'template': None, 'prompt_kwargs': {}},\n",
       "            'template': '<START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\\n<START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>',\n",
       "            'prompt_variables': ['input_str'],\n",
       "            'prompt_kwargs': {}}}),\n",
       "         ('model_client',\n",
       "          {'type': 'OpenAIClient',\n",
       "           'data': {'_components': {'_ordered_dict': True, 'data': []},\n",
       "            '_parameters': {'_ordered_dict': True, 'data': []},\n",
       "            'training': False,\n",
       "            'teacher_mode': False,\n",
       "            'tracing': False,\n",
       "            'name': 'OpenAIClient',\n",
       "            '_init_args': {'api_key': None,\n",
       "             'chat_completion_parser': None,\n",
       "             'input_type': 'text'},\n",
       "            '_api_key': None,\n",
       "            'chat_completion_parser': <function adalflow.components.model_client.openai_client.get_first_message_content(completion: openai.types.chat.chat_completion.ChatCompletion) -> str>,\n",
       "            '_input_type': 'text'}})]},\n",
       "       '_parameters': {'_ordered_dict': True, 'data': []},\n",
       "       'training': False,\n",
       "       'teacher_mode': False,\n",
       "       'tracing': False,\n",
       "       'name': 'Generator',\n",
       "       '_init_args': {'model_client': None,\n",
       "        'model_kwargs': {},\n",
       "        'template': None,\n",
       "        'prompt_kwargs': {},\n",
       "        'output_processors': None,\n",
       "        'name': None,\n",
       "        'cache_path': None,\n",
       "        'use_cache': False},\n",
       "       'id': '9c68423f-0e75-413b-bdfe-262c42a89a10',\n",
       "       'desc': 'Generate a response using LLM model.',\n",
       "       'backward_engine': None,\n",
       "       'template': '<START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\\n<START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>',\n",
       "       'prompt_kwargs': {},\n",
       "       'model_kwargs': {'model': 'gpt-3.5-turbo'},\n",
       "       'output_processors': None,\n",
       "       'mock_output': False,\n",
       "       'mock_output_data': 'mock data',\n",
       "       '_use_cache': False,\n",
       "       '_kwargs': {'model_client': {'type': 'OpenAIClient',\n",
       "         'data': {'_components': {'_ordered_dict': True, 'data': []},\n",
       "          '_parameters': {'_ordered_dict': True, 'data': []},\n",
       "          'training': False,\n",
       "          'teacher_mode': False,\n",
       "          'tracing': False,\n",
       "          'name': 'OpenAIClient',\n",
       "          '_init_args': {'api_key': None,\n",
       "           'chat_completion_parser': None,\n",
       "           'input_type': 'text'},\n",
       "          '_api_key': None,\n",
       "          'chat_completion_parser': <function adalflow.components.model_client.openai_client.get_first_message_content(completion: openai.types.chat.chat_completion.ChatCompletion) -> str>,\n",
       "          '_input_type': 'text'}},\n",
       "        'model_kwargs': {'model': 'gpt-3.5-turbo'},\n",
       "        'template': '<START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\\n<START_OF_USER_PROMPT> {{input_str}} <END_OF_USER_PROMPT>',\n",
       "        'prompt_kwargs': {},\n",
       "        'output_processors': None,\n",
       "        'name': None,\n",
       "        'cache_path': None,\n",
       "        'use_cache': False},\n",
       "       '_teacher': None,\n",
       "       '_trace_api_kwargs': {'model': 'gpt-3.5-turbo',\n",
       "        'messages': [{'role': 'system',\n",
       "          'content': '<START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\\n<START_OF_USER_PROMPT> What is the best treatment for headache? <END_OF_USER_PROMPT>'}]}}})]},\n",
       "  '_parameters': {'_ordered_dict': True,\n",
       "   'data': [('demo',\n",
       "     {'name': 'param_f0c65c95-61f1-462f-9f71-67036d660579',\n",
       "      'id': 'f0c65c95-61f1-462f-9f71-67036d660579',\n",
       "      'role_desc': '',\n",
       "      'data': 'demo',\n",
       "      'requires_opt': True,\n",
       "      'param_type': 'none ()',\n",
       "      'predecessors': [],\n",
       "      'gradients': [],\n",
       "      'previous_data': None,\n",
       "      'grad_fn': 'None',\n",
       "      'score': None,\n",
       "      'traces': {},\n",
       "      'demos': []})]},\n",
       "  'training': False,\n",
       "  'teacher_mode': False,\n",
       "  'tracing': False,\n",
       "  'name': 'DocQA',\n",
       "  '_init_args': {}}}"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "doc.to_dict()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "id": "mcIO1DuVTQUr"
   },
   "outputs": [],
   "source": [
    "from adalflow.utils.file_io import save_json\n",
    "\n",
    "save_json(doc.to_dict(), \"doc.json\")"
   ]
  },
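  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since `save_json` writes plain JSON, the saved state can be read back with the standard library. The cell below is a minimal round-trip sketch that uses a small stand-in dict; the real state is the full `doc.to_dict()` output."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "# Stand-in for doc.to_dict(); the real dict is much larger.\n",
    "state = {\"type\": \"DocQA\", \"data\": {\"name\": \"DocQA\"}}\n",
    "\n",
    "with open(\"doc.json\", \"w\") as f:\n",
    "    json.dump(state, f, indent=2)\n",
    "\n",
    "# Read the serialized state back and confirm the round trip is lossless.\n",
    "with open(\"doc.json\") as f:\n",
    "    loaded = json.load(f)\n",
    "\n",
    "print(loaded[\"type\"])  # DocQA\n",
    "assert loaded == state"
   ]
  },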
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "0vvO0nogTQUr",
    "outputId": "59131d9e-a996-4c8b-f32c-9a6a623d3db6"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "OrderedDict([('demo',\n",
       "              Parameter(name=param_f0c65c95-61f1-462f-9f71-67036d660579, requires_opt=True, param_type=none (), role_desc=, data=demo, predecessors=set(), gradients=set(),            traces={}))])"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "doc.state_dict()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 125
    },
    "id": "uroqi93tTQUs",
    "outputId": "8a3e4ecc-1368-475b-dc4d-2ff38821b8ac"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2025-01-29 21:39:17 - openai_client - INFO - [openai_client.py:364:call] - api_kwargs: {'model': 'gpt-3.5-turbo', 'messages': [{'role': 'system', 'content': '<START_OF_SYS_PROMPT> You are a doctor <END_OF_SYS_PROMPT>\\n<START_OF_USER_PROMPT> What is the best treatment for a cold? <END_OF_USER_PROMPT>'}]}\n",
      "2025-01-29 21:39:19 - _client - INFO - [_client.py:1038:_send_single_request] - HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
       "2025-01-29 21:39:19 - generator - INFO - [generator.py:1129:call] - output: GeneratorOutput(id=None, data=\"As a doctor, I recommend the following treatments for a cold:\\n\\n1. Rest: Get plenty of rest to help your body recover and fight off the cold virus.\\n\\n2. Stay hydrated: Drink plenty of fluids such as water, herbal tea, and clear broths to stay hydrated and help loosen mucus.\\n\\n3. Over-the-counter (OTC) medications: Consider taking OTC medications such as decongestants, antihistamines, or pain relievers to alleviate symptoms like congestion, runny nose, and fever. Always follow the recommended dosage instructions.\\n\\n4. Saline nasal drops or sprays: These can help relieve nasal congestion and improve breathing.\\n\\n5. Humidifier: Using a humidifier in your room can add moisture to the air and help ease congestion.\\n\\n6. Gargling with salt water: This can help soothe a sore throat and reduce inflammation.\\n\\n7. Honey and lemon: Mixing honey and lemon in warm water can help soothe a sore throat and provide relief from coughing.\\n\\nRemember, if your symptoms worsen or persist for more than a week, it's important to consult with a healthcare provider for further evaluation and treatment.\", error=None, usage=CompletionUsage(completion_tokens=237, prompt_tokens=48, total_tokens=285), raw_response=\"As a doctor, I recommend the following treatments for a cold:\\n\\n1. Rest: Get plenty of rest to help your body recover and fight off the cold virus.\\n\\n2. Stay hydrated: Drink plenty of fluids such as water, herbal tea, and clear broths to stay hydrated and help loosen mucus.\\n\\n3. Over-the-counter (OTC) medications: Consider taking OTC medications such as decongestants, antihistamines, or pain relievers to alleviate symptoms like congestion, runny nose, and fever. Always follow the recommended dosage instructions.\\n\\n4. Saline nasal drops or sprays: These can help relieve nasal congestion and improve breathing.\\n\\n5. Humidifier: Using a humidifier in your room can add moisture to the air and help ease congestion.\\n\\n6. Gargling with salt water: This can help soothe a sore throat and reduce inflammation.\\n\\n7. Honey and lemon: Mixing honey and lemon in warm water can help soothe a sore throat and provide relief from coughing.\\n\\nRemember, if your symptoms worsen or persist for more than a week, it's important to consult with a healthcare provider for further evaluation and treatment.\", metadata=None)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "\"As a doctor, I recommend the following treatments for a cold:\\n\\n1. Rest: Get plenty of rest to help your body recover and fight off the cold virus.\\n\\n2. Stay hydrated: Drink plenty of fluids such as water, herbal tea, and clear broths to stay hydrated and help loosen mucus.\\n\\n3. Over-the-counter (OTC) medications: Consider taking OTC medications such as decongestants, antihistamines, or pain relievers to alleviate symptoms like congestion, runny nose, and fever. Always follow the recommended dosage instructions.\\n\\n4. Saline nasal drops or sprays: These can help relieve nasal congestion and improve breathing.\\n\\n5. Humidifier: Using a humidifier in your room can add moisture to the air and help ease congestion.\\n\\n6. Gargling with salt water: This can help soothe a sore throat and reduce inflammation.\\n\\n7. Honey and lemon: Mixing honey and lemon in warm water can help soothe a sore throat and provide relief from coughing.\\n\\nRemember, if your symptoms worsen or persist for more than a week, it's important to consult with a healthcare provider for further evaluation and treatment.\""
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "doc.call(\"What is the best treatment for a cold?\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "mYSDr462TQUs",
    "outputId": "82414c82-8feb-4667-90ed-91c594cc6a73"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2\n",
      "<class 'adalflow.core.component.FuncComponent'>\n"
     ]
    }
   ],
   "source": [
    "from adalflow.core.component import FuncComponent\n",
    "\n",
    "\n",
    "def add_one(x):\n",
    "    return x + 1\n",
    "\n",
    "\n",
    "fun_component = FuncComponent(add_one)\n",
    "print(fun_component(1))\n",
    "print(type(fun_component))\n",
    "\n",
    "# output:\n",
    "# 2\n",
     "# <class 'adalflow.core.component.FuncComponent'>"
   ]
  },
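  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Conceptually, `FuncComponent` is a thin wrapper that stores a function and delegates calls to it. The cell below is a hypothetical pure-Python sketch of that idea, not adalflow's actual implementation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class SketchFuncComponent:\n",
    "    \"\"\"Minimal stand-in: store a function and delegate calls to it.\"\"\"\n",
    "\n",
    "    def __init__(self, fun):\n",
    "        self.fun = fun\n",
    "\n",
    "    def __call__(self, *args, **kwargs):\n",
    "        return self.fun(*args, **kwargs)\n",
    "\n",
    "\n",
    "def plus_one(x):\n",
    "    return x + 1\n",
    "\n",
    "\n",
    "comp = SketchFuncComponent(plus_one)\n",
    "print(comp(1))  # 2"
   ]
  },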
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "3MW1tpzRTQUs",
    "outputId": "351b8922-1423-434a-f470-ff435a1962d2"
   },
   "outputs": [
    {
     "ename": "AttributeError",
     "evalue": "'AddOneComponent' object has no attribute 'fun'",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mAttributeError\u001b[0m                            Traceback (most recent call last)",
      "Cell \u001b[0;32mIn[16], line 4\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;21;01madalflow\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mcore\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mcomponent\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;28;01mimport\u001b[39;00m func_to_component\n\u001b[1;32m      3\u001b[0m fun_component \u001b[38;5;241m=\u001b[39m func_to_component(add_one)\n\u001b[0;32m----> 4\u001b[0m \u001b[38;5;28mprint\u001b[39m(\u001b[43mfun_component\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[43m)\u001b[49m)\n\u001b[1;32m      5\u001b[0m \u001b[38;5;28mprint\u001b[39m(\u001b[38;5;28mtype\u001b[39m(fun_component))\n\u001b[1;32m      7\u001b[0m \u001b[38;5;66;03m# output:\u001b[39;00m\n\u001b[1;32m      8\u001b[0m \u001b[38;5;66;03m# 2\u001b[39;00m\n\u001b[1;32m      9\u001b[0m \u001b[38;5;66;03m# <class 'adalflow.core.component.AddOneComponent'>\u001b[39;00m\n",
      "File \u001b[0;32m~/Documents/test/LightRAG/.venv/lib/python3.12/site-packages/adalflow/core/component.py:532\u001b[0m, in \u001b[0;36mComponent.__call__\u001b[0;34m(self, *args, **kwargs)\u001b[0m\n\u001b[1;32m    530\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m output\n\u001b[1;32m    531\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[0;32m--> 532\u001b[0m     output \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcall\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    533\u001b[0m     \u001b[38;5;66;03m# Validation for inference\u001b[39;00m\n\u001b[1;32m    534\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(output, Parameter):\n",
      "File \u001b[0;32m~/Documents/test/LightRAG/.venv/lib/python3.12/site-packages/adalflow/core/component.py:1056\u001b[0m, in \u001b[0;36mFuncComponent.call\u001b[0;34m(self, *args, **kwargs)\u001b[0m\n\u001b[1;32m   1054\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;21mcall\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;241m*\u001b[39margs, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs):\n\u001b[0;32m-> 1056\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfun\u001b[49m(\u001b[38;5;241m*\u001b[39margs, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n",
      "File \u001b[0;32m~/Documents/test/LightRAG/.venv/lib/python3.12/site-packages/adalflow/core/component.py:883\u001b[0m, in \u001b[0;36mComponent.__getattr__\u001b[0;34m(self, name)\u001b[0m\n\u001b[1;32m    880\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m name \u001b[38;5;129;01min\u001b[39;00m components:\n\u001b[1;32m    881\u001b[0m         \u001b[38;5;28;01mreturn\u001b[39;00m components[name]\n\u001b[0;32m--> 883\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mAttributeError\u001b[39;00m(\n\u001b[1;32m    884\u001b[0m     \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mtype\u001b[39m(\u001b[38;5;28mself\u001b[39m)\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__name__\u001b[39m\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m object has no attribute \u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mname\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m    885\u001b[0m )\n",
      "\u001b[0;31mAttributeError\u001b[0m: 'AddOneComponent' object has no attribute 'fun'"
     ]
    }
   ],
   "source": [
    "from adalflow.core.component import func_to_data_component\n",
    "\n",
    "fun_component = func_to_data_component(add_one)\n",
    "print(fun_component(1))\n",
    "print(type(fun_component))\n",
    "\n",
    "# output:\n",
    "# 2\n",
    "# <class 'adalflow.core.component.AddOneComponent'>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "dxAoGrnQTQUs",
    "outputId": "38c462a3-5abf-41f4-9231-746c8d0ffcb3"
   },
   "outputs": [
    {
     "ename": "NameError",
     "evalue": "name 'func_to_component' is not defined",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mNameError\u001b[0m                                 Traceback (most recent call last)",
      "Cell \u001b[0;32mIn[1], line 2\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[38;5;66;03m# use it as a decorator\u001b[39;00m\n\u001b[0;32m----> 2\u001b[0m \u001b[38;5;129m@func_to_component\u001b[39m\n\u001b[1;32m      3\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21madd_one\u001b[39m(x):\n\u001b[1;32m      4\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m x \u001b[38;5;241m+\u001b[39m \u001b[38;5;241m1\u001b[39m\n\u001b[1;32m      7\u001b[0m \u001b[38;5;28mprint\u001b[39m(add_one(\u001b[38;5;241m1\u001b[39m))\n",
      "\u001b[0;31mNameError\u001b[0m: name 'func_to_component' is not defined"
     ]
    }
   ],
   "source": [
    "# use it as a decorator\n",
    "@func_to_data_component\n",
    "def add_one(x):\n",
    "    return x + 1\n",
    "\n",
    "\n",
    "print(add_one(1))\n",
    "print(type(add_one))\n",
    "\n",
    "# output:\n",
    "# 2\n",
    "# <class 'adalflow.core.component.AddOneComponent'>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "7BvJEP_mTQUs",
    "outputId": "066281b8-a650-4c48-c786-312022198015"
   },
   "outputs": [
    {
     "ename": "AttributeError",
     "evalue": "'EnhanceQueryComponent' object has no attribute 'fun'",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mAttributeError\u001b[0m                            Traceback (most recent call last)",
      "Cell \u001b[0;32mIn[17], line 12\u001b[0m\n\u001b[1;32m      9\u001b[0m seq \u001b[38;5;241m=\u001b[39m Sequential(enhance_query, doc)\n\u001b[1;32m     11\u001b[0m query \u001b[38;5;241m=\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mWhat is the best treatment for headache?\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m---> 12\u001b[0m \u001b[38;5;28mprint\u001b[39m(\u001b[43mseq\u001b[49m\u001b[43m(\u001b[49m\u001b[43mquery\u001b[49m\u001b[43m)\u001b[49m)\n",
      "File \u001b[0;32m~/Documents/test/LightRAG/.venv/lib/python3.12/site-packages/adalflow/core/component.py:532\u001b[0m, in \u001b[0;36mComponent.__call__\u001b[0;34m(self, *args, **kwargs)\u001b[0m\n\u001b[1;32m    530\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m output\n\u001b[1;32m    531\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[0;32m--> 532\u001b[0m     output \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcall\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    533\u001b[0m     \u001b[38;5;66;03m# Validation for inference\u001b[39;00m\n\u001b[1;32m    534\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(output, Parameter):\n",
      "File \u001b[0;32m~/Documents/test/LightRAG/.venv/lib/python3.12/site-packages/adalflow/core/container.py:292\u001b[0m, in \u001b[0;36mSequential.call\u001b[0;34m(self, *args, **kwargs)\u001b[0m\n\u001b[1;32m    290\u001b[0m     \u001b[38;5;28minput\u001b[39m \u001b[38;5;241m=\u001b[39m args[\u001b[38;5;241m0\u001b[39m]\n\u001b[1;32m    291\u001b[0m     \u001b[38;5;28;01mfor\u001b[39;00m component \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_components\u001b[38;5;241m.\u001b[39mvalues():\n\u001b[0;32m--> 292\u001b[0m         \u001b[38;5;28minput\u001b[39m \u001b[38;5;241m=\u001b[39m \u001b[43mcomponent\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m    293\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28minput\u001b[39m\n\u001b[1;32m    294\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n",
      "File \u001b[0;32m~/Documents/test/LightRAG/.venv/lib/python3.12/site-packages/adalflow/core/component.py:532\u001b[0m, in \u001b[0;36mComponent.__call__\u001b[0;34m(self, *args, **kwargs)\u001b[0m\n\u001b[1;32m    530\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m output\n\u001b[1;32m    531\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[0;32m--> 532\u001b[0m     output \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcall\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    533\u001b[0m     \u001b[38;5;66;03m# Validation for inference\u001b[39;00m\n\u001b[1;32m    534\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(output, Parameter):\n",
      "File \u001b[0;32m~/Documents/test/LightRAG/.venv/lib/python3.12/site-packages/adalflow/core/component.py:1056\u001b[0m, in \u001b[0;36mFuncComponent.call\u001b[0;34m(self, *args, **kwargs)\u001b[0m\n\u001b[1;32m   1054\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;21mcall\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;241m*\u001b[39margs, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs):\n\u001b[0;32m-> 1056\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfun\u001b[49m(\u001b[38;5;241m*\u001b[39margs, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n",
      "File \u001b[0;32m~/Documents/test/LightRAG/.venv/lib/python3.12/site-packages/adalflow/core/component.py:883\u001b[0m, in \u001b[0;36mComponent.__getattr__\u001b[0;34m(self, name)\u001b[0m\n\u001b[1;32m    880\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m name \u001b[38;5;129;01min\u001b[39;00m components:\n\u001b[1;32m    881\u001b[0m         \u001b[38;5;28;01mreturn\u001b[39;00m components[name]\n\u001b[0;32m--> 883\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mAttributeError\u001b[39;00m(\n\u001b[1;32m    884\u001b[0m     \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mtype\u001b[39m(\u001b[38;5;28mself\u001b[39m)\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__name__\u001b[39m\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m object has no attribute \u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mname\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m    885\u001b[0m )\n",
      "\u001b[0;31mAttributeError\u001b[0m: 'EnhanceQueryComponent' object has no attribute 'fun'"
     ]
    }
   ],
   "source": [
    "from adalflow.core import Sequential\n",
    "\n",
    "\n",
    "@func_to_data_component\n",
    "def enhance_query(query: str) -> str:\n",
     "    return query + \" Please be concise and only list the top treatments.\"\n",
    "\n",
    "\n",
    "seq = Sequential(enhance_query, doc)\n",
    "\n",
    "query = \"What is the best treatment for headache?\"\n",
    "print(seq(query))"
   ]
  },
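  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Under the hood, `Sequential` feeds each component's output into the next component's input. The cell below sketches that chaining in pure Python (a simplification, with a stand-in for the LLM-backed `DocQA`), not adalflow's actual implementation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def sketch_sequential(*components):\n",
    "    # Chain callables: the output of each becomes the input of the next.\n",
    "    def call(x):\n",
    "        for component in components:\n",
    "            x = component(x)\n",
    "        return x\n",
    "\n",
    "    return call\n",
    "\n",
    "\n",
    "def enhance(query: str) -> str:\n",
    "    return query + \" Please be concise and only list the top treatments.\"\n",
    "\n",
    "\n",
    "def fake_doc(query: str) -> str:\n",
    "    # Stand-in for the LLM-backed DocQA component.\n",
    "    return f\"[answer to] {query}\"\n",
    "\n",
    "\n",
    "pipeline = sketch_sequential(enhance, fake_doc)\n",
    "print(pipeline(\"What is the best treatment for headache?\"))"
   ]
  },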
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "aoZ2w8RUTQUt",
    "outputId": "115d0ccf-33d1-4464-a951-cf9f5476284b"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Sequential(\n",
       "  (0): EnhanceQueryComponent(fun_name=enhance_query)\n",
       "  (1): DocQA(\n",
       "    (doc): Generator(\n",
       "      model_kwargs={'model': 'gpt-3.5-turbo'}, trainable_prompt_kwargs=[]\n",
       "      (prompt): Prompt(template: <SYS> You are a doctor </SYS> User: {{input_str}}, prompt_variables: ['input_str'])\n",
       "      (model_client): OpenAIClient()\n",
       "    )\n",
       "  )\n",
       ")"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "seq"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "F-ffAlC6TQUt"
   },
   "source": [
     "# TODO: LLM for single-choice questions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Issues and feedback\n",
    "\n",
     "If you encounter any issues, please report them here: [GitHub Issues](https://github.com/SylphAI-Inc/AdalFlow/issues).\n",
     "\n",
     "For feedback, you can use either the [GitHub discussions](https://github.com/SylphAI-Inc/AdalFlow/discussions) or [Discord](https://discord.gg/ezzszrRZvT)."
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "my-project-kernel",
   "language": "python",
   "name": "my-project-kernel"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
