{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "04e27741-355e-4200-918a-318467eca523",
   "metadata": {},
   "source": [
    "- https://github.com/langchain-ai/langchain/tree/master/cookbook\n",
    "- https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel\n",
    "    - https://python.langchain.com/v0.1/docs/expression_language/why/\n",
    "- https://ai.plainenglish.io/understanding-large-language-model-based-agents-27bee5c82cec"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "658ce0a5-ef6a-42a0-b83e-497a9fb3aae9",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:29:30.082722Z",
     "iopub.status.busy": "2024-08-03T12:29:30.081399Z",
     "iopub.status.idle": "2024-08-03T12:29:30.114988Z",
     "shell.execute_reply": "2024-08-03T12:29:30.113347Z",
     "shell.execute_reply.started": "2024-08-03T12:29:30.082657Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import os\n",
    "from dotenv import load_dotenv\n",
    "# LANGCHAIN_TRACING_V2=true\n",
    "# LANGCHAIN_API_KEY=\n",
    "# OPENAI_API_KEY=\n",
    "load_dotenv()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cd851500-edac-4b06-920c-d9f338a010d6",
   "metadata": {},
   "source": [
    "## LangChain"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "006664c2-ffe0-48c2-9f9f-52b0f918372b",
   "metadata": {},
   "source": [
    "- So-called `agent` development: LLM workflows, software engineering in the GenAI era\n",
    "    - a rich ecosystem\n",
    "    - workflows get complex; hand-rolling them is inefficient and hard to maintain\n",
    "    - Input -> Processing -> Output\n",
    "- Compared with AutoGen and similar frameworks, LangChain targets developers and software engineering more directly\n",
    "    - LangGraph: multi-agent workflows\n",
    "    - `LangSmith` increasingly fills the gap in visibility into intermediate steps\n",
    "- Recommended: 《大模型应用开发 动手做AI Agent》 (https://www.bilibili.com/opus/935785456083140628?spm_id_from=333.999.0.0)\n",
    "    - one of the first books on this topic aimed at developers\n",
    "    - systematic and comprehensive; a solid introduction"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e42756b9-8126-44ba-a327-96abf12cead3",
   "metadata": {},
   "source": [
    "## LCEL (LangChain Expression Language)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "015c4fd4-0eef-4b50-a5ee-66552a354973",
   "metadata": {},
   "source": [
    "- LangChain overrides `|` (`__or__`); that is where the \"Chain\" lives\n",
    "- `RunnablePassthrough`: passes the input through unchanged (identity); typically used with `RunnableParallel` to forward data into a new key.\n",
    "- In LangChain applications, `RunnablePassthrough` can serve as a placeholder to be filled later, e.g. leaving a company name blank until it is known.\n",
    "- On so-called best practices:\n",
    "    - Python: loop over items, or use a list comprehension\n",
    "    - MATLAB: loop as well, or assemble a matrix and use matrix-vector multiplication directly"
   ]
  },
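  {
   "cell_type": "markdown",
   "id": "1f4a8d2e-7b3c-4e5f-9a1b-2c3d4e5f6a7b",
   "metadata": {},
   "source": [
    "How `|` builds a chain can be sketched with a tiny hypothetical `Chainable` class (illustrative only, not LangChain's actual `Runnable` implementation):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2a5b9e3f-8c4d-4f6a-8b2c-3d4e5f6a7b8c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch of the `__or__` override behind LCEL's `|`\n",
    "# (illustrative, not LangChain's real implementation)\n",
    "class Chainable:\n",
    "    def __init__(self, func):\n",
    "        self.func = func\n",
    "\n",
    "    def __or__(self, other):\n",
    "        # `a | b` composes: apply a first, then b\n",
    "        return Chainable(lambda x: other.func(self.func(x)))\n",
    "\n",
    "    def invoke(self, x):\n",
    "        return self.func(x)\n",
    "\n",
    "chain = Chainable(lambda x: x + 1) | Chainable(lambda x: x * 2)\n",
    "chain.invoke(3)  # (3 + 1) * 2 = 8"
   ]
  },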
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "a50feaa2-9cf2-4029-8c36-0f33d5bfa243",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T10:49:59.663699Z",
     "iopub.status.busy": "2024-08-03T10:49:59.663129Z",
     "iopub.status.idle": "2024-08-03T10:49:59.670427Z",
     "shell.execute_reply": "2024-08-03T10:49:59.669598Z",
     "shell.execute_reply.started": "2024-08-03T10:49:59.663652Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# `|` binds tighter than `>`, so this parses as (2 | 3) > 2, i.e. 3 > 2\n",
    "2 | 3 > 2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "a961c22b-9aef-4fb2-9e09-5580ba182970",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T10:49:13.485538Z",
     "iopub.status.busy": "2024-08-03T10:49:13.484959Z",
     "iopub.status.idle": "2024-08-03T10:49:13.491450Z",
     "shell.execute_reply": "2024-08-03T10:49:13.490725Z",
     "shell.execute_reply.started": "2024-08-03T10:49:13.485492Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "3"
      ]
     },
     "execution_count": 32,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "2 | 3"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "9e03e4be-6d6a-41ce-b58b-0c9f15a787b0",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:32:43.003172Z",
     "iopub.status.busy": "2024-08-03T12:32:43.002707Z",
     "iopub.status.idle": "2024-08-03T12:32:43.007505Z",
     "shell.execute_reply": "2024-08-03T12:32:43.006748Z",
     "shell.execute_reply.started": "2024-08-03T12:32:43.003119Z"
    }
   },
   "outputs": [],
   "source": [
    "from langchain_core.runnables import (\n",
    "    RunnablePassthrough, \n",
    "    RunnableLambda, \n",
    "    RunnableParallel\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "8d0de972-efb1-4ca8-a11e-afb29b0293fe",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:32:36.967885Z",
     "iopub.status.busy": "2024-08-03T12:32:36.967500Z",
     "iopub.status.idle": "2024-08-03T12:32:36.971819Z",
     "shell.execute_reply": "2024-08-03T12:32:36.971136Z",
     "shell.execute_reply.started": "2024-08-03T12:32:36.967863Z"
    }
   },
   "outputs": [],
   "source": [
    "os.environ[\"LANGCHAIN_PROJECT\"] = 'lcel_test'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "16236eca-6eeb-4c73-89b8-8fcf54ad345d",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:32:44.464770Z",
     "iopub.status.busy": "2024-08-03T12:32:44.463909Z",
     "iopub.status.idle": "2024-08-03T12:32:46.198537Z",
     "shell.execute_reply": "2024-08-03T12:32:46.197649Z",
     "shell.execute_reply.started": "2024-08-03T12:32:44.464716Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Why did the ice cream truck break down?\\n\\nBecause it had too many \"scoops\"!'"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "\n",
    "prompt = ChatPromptTemplate.from_template(\n",
    "    \"Tell me a short joke about {topic}\"\n",
    ")\n",
    "output_parser = StrOutputParser()\n",
    "llm = ChatOpenAI(model=\"gpt-3.5-turbo\")\n",
    "\n",
    "# lcel\n",
    "chain = (\n",
    "    {\"topic\": RunnablePassthrough()} \n",
    "    | prompt\n",
    "    | llm\n",
    "    | output_parser\n",
    ")\n",
    "\n",
    "chain.invoke(\"ice cream\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "57b13906-8832-446d-96f6-93f36132a0af",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:33:50.795160Z",
     "iopub.status.busy": "2024-08-03T12:33:50.794426Z",
     "iopub.status.idle": "2024-08-03T12:33:50.816195Z",
     "shell.execute_reply": "2024-08-03T12:33:50.814140Z",
     "shell.execute_reply.started": "2024-08-03T12:33:50.795107Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatPromptValue(messages=[HumanMessage(content='Tell me a short joke about ice cream')])"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "prompt.invoke({'topic': 'ice cream'})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "8700799e-9717-4c99-b4f8-7477caa30824",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:34:32.702847Z",
     "iopub.status.busy": "2024-08-03T12:34:32.701592Z",
     "iopub.status.idle": "2024-08-03T12:34:33.812151Z",
     "shell.execute_reply": "2024-08-03T12:34:33.810608Z",
     "shell.execute_reply.started": "2024-08-03T12:34:32.702791Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='Why did the ice cream truck break down?\\n\\nIt had too many \"scoops\" of ice cream!', response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 15, 'total_tokens': 37}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-fb46f4ce-665b-45f7-9fd0-9dd559465fcc-0', usage_metadata={'input_tokens': 15, 'output_tokens': 22, 'total_tokens': 37})"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm.invoke(prompt.invoke({'topic': 'ice cream'}))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "2b1cf7c8-4b1e-405b-a96a-a47413523b8e",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:35:07.910211Z",
     "iopub.status.busy": "2024-08-03T12:35:07.909581Z",
     "iopub.status.idle": "2024-08-03T12:35:09.619928Z",
     "shell.execute_reply": "2024-08-03T12:35:09.619044Z",
     "shell.execute_reply.started": "2024-08-03T12:35:07.910161Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Why did the ice cream truck break down? It had too many \"scoops\" on board!'"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "output_parser.invoke(llm.invoke(prompt.invoke({'topic': 'ice cream'})))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9578425a-9be1-4ea7-8b63-84bee95d76e8",
   "metadata": {},
   "source": [
    "### Runnables"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a2e20f30-88ef-4955-93c6-352882e53028",
   "metadata": {},
   "source": [
    "- `RunnablePassthrough()`: identity"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "id": "af02c268-c10b-427d-8950-68d9ee23d738",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T11:23:21.754999Z",
     "iopub.status.busy": "2024-08-03T11:23:21.754426Z",
     "iopub.status.idle": "2024-08-03T11:23:21.778822Z",
     "shell.execute_reply": "2024-08-03T11:23:21.777321Z",
     "shell.execute_reply.started": "2024-08-03T11:23:21.754951Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'hello'"
      ]
     },
     "execution_count": 57,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain = RunnablePassthrough() | RunnablePassthrough() | RunnablePassthrough()\n",
    "chain.invoke(\"hello\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "id": "7267fa46-7600-4f6c-bc87-55afa3f52075",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T11:23:50.091408Z",
     "iopub.status.busy": "2024-08-03T11:23:50.090438Z",
     "iopub.status.idle": "2024-08-03T11:23:50.112567Z",
     "shell.execute_reply": "2024-08-03T11:23:50.111775Z",
     "shell.execute_reply.started": "2024-08-03T11:23:50.091358Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'HELLO'"
      ]
     },
     "execution_count": 59,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain = RunnablePassthrough() | RunnableLambda(lambda x: x.upper())\n",
    "chain.invoke(\"hello\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "id": "591ef883-a7a5-4ae3-aa7e-68087c72caba",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T11:23:56.699551Z",
     "iopub.status.busy": "2024-08-03T11:23:56.698985Z",
     "iopub.status.idle": "2024-08-03T11:23:56.726784Z",
     "shell.execute_reply": "2024-08-03T11:23:56.725257Z",
     "shell.execute_reply.started": "2024-08-03T11:23:56.699503Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'HELLO'"
      ]
     },
     "execution_count": 60,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain = RunnablePassthrough() | RunnableLambda(lambda x: x.upper()) | RunnablePassthrough()\n",
    "chain.invoke(\"hello\")"
   ]
  },
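  {
   "cell_type": "markdown",
   "id": "3b6c0f4a-9d5e-4a7b-8c3d-4e5f6a7b8c9d",
   "metadata": {},
   "source": [
    "As noted above, `RunnablePassthrough` is typically combined with `RunnableParallel`, which runs each named branch on the same input and collects the results into a dict (a small sketch using the runnables already imported):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4c7d1a5b-0e6f-4b8c-9d4e-5f6a7b8c9d0e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# RunnableParallel fans the input out to each named branch;\n",
    "# RunnablePassthrough keeps the original value available under its key\n",
    "chain = RunnableParallel(\n",
    "    original=RunnablePassthrough(),\n",
    "    upper=RunnableLambda(lambda x: x.upper()),\n",
    ")\n",
    "chain.invoke(\"hello\")  # {'original': 'hello', 'upper': 'HELLO'}"
   ]
  },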
  {
   "cell_type": "markdown",
   "id": "40f2432d-ea88-4f0e-badf-2d7e585771bd",
   "metadata": {},
   "source": [
    "### JSON test"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "e4a09da9-ac24-4c7f-b1ce-f1671b3eb01e",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:38:17.301463Z",
     "iopub.status.busy": "2024-08-03T12:38:17.300730Z",
     "iopub.status.idle": "2024-08-03T12:38:17.306870Z",
     "shell.execute_reply": "2024-08-03T12:38:17.305740Z",
     "shell.execute_reply.started": "2024-08-03T12:38:17.301412Z"
    }
   },
   "outputs": [],
   "source": [
    "os.environ[\"LANGCHAIN_PROJECT\"] = 'json_test2'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "79d3eb05-bb92-47af-8f53-7474db08cb7d",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:38:35.530067Z",
     "iopub.status.busy": "2024-08-03T12:38:35.528791Z",
     "iopub.status.idle": "2024-08-03T12:38:35.535764Z",
     "shell.execute_reply": "2024-08-03T12:38:35.534435Z",
     "shell.execute_reply.started": "2024-08-03T12:38:35.530010Z"
    }
   },
   "outputs": [],
   "source": [
    "from langchain_core.prompts import HumanMessagePromptTemplate\n",
    "from langchain_core.prompts.chat import SystemMessagePromptTemplate\n",
    "from langchain_core.output_parsers import JsonOutputParser"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "c91d08aa-9f7a-4cbb-84c7-0d4e1c966260",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:38:47.097283Z",
     "iopub.status.busy": "2024-08-03T12:38:47.096566Z",
     "iopub.status.idle": "2024-08-03T12:38:47.198079Z",
     "shell.execute_reply": "2024-08-03T12:38:47.197310Z",
     "shell.execute_reply.started": "2024-08-03T12:38:47.097233Z"
    }
   },
   "outputs": [],
   "source": [
    "llm = ChatOpenAI(model=\"gpt-4o\", \n",
    "                 model_kwargs={'response_format': {\"type\": \"json_object\"}})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "5c09a358-40ea-4d54-8aec-26e4124cddfa",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:38:50.020622Z",
     "iopub.status.busy": "2024-08-03T12:38:50.020107Z",
     "iopub.status.idle": "2024-08-03T12:38:50.024993Z",
     "shell.execute_reply": "2024-08-03T12:38:50.024180Z",
     "shell.execute_reply.started": "2024-08-03T12:38:50.020589Z"
    }
   },
   "outputs": [],
   "source": [
    "json_parser = JsonOutputParser()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "7756f89b-7927-45c0-b91d-fce7ec0eb487",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:39:54.070627Z",
     "iopub.status.busy": "2024-08-03T12:39:54.070036Z",
     "iopub.status.idle": "2024-08-03T12:39:54.079310Z",
     "shell.execute_reply": "2024-08-03T12:39:54.078192Z",
     "shell.execute_reply.started": "2024-08-03T12:39:54.070579Z"
    }
   },
   "outputs": [],
   "source": [
    "# Create the prompt template\n",
    "prompt = ChatPromptTemplate.from_messages([\n",
    "    (\"system\", '''I want you to extract the person name, age and a description from the following text.\n",
    "    Here is the JSON object, output:\n",
    "    {{\n",
    "        \"name\": string,\n",
    "        \"age\": int,\n",
    "        \"description\": string\n",
    "    }}'''),\n",
    "    (\"human\", \"{input}\")\n",
    "])\n",
    "\n",
    "# Build the LCEL chain\n",
    "chain = (\n",
    "    {\"input\": RunnablePassthrough()} \n",
    "    | prompt \n",
    "    | llm \n",
    "    | json_parser\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "45211927-3ee7-4cd3-9c96-af89b6576c5f",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:40:05.813960Z",
     "iopub.status.busy": "2024-08-03T12:40:05.813709Z",
     "iopub.status.idle": "2024-08-03T12:40:05.819751Z",
     "shell.execute_reply": "2024-08-03T12:40:05.819084Z",
     "shell.execute_reply.started": "2024-08-03T12:40:05.813940Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatPromptTemplate(input_variables=['input'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='I want you to extract the person name, age and a description from the following text.\\n    Here is the JSON object, output:\\n    {{\\n        \"name\": string,\\n        \"age\": int,\\n        \"description\": string\\n    }}')), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}'))])"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "88cd72a0-aed1-484d-bf03-9248e8c55abf",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:40:27.173108Z",
     "iopub.status.busy": "2024-08-03T12:40:27.172391Z",
     "iopub.status.idle": "2024-08-03T12:40:27.180405Z",
     "shell.execute_reply": "2024-08-03T12:40:27.179356Z",
     "shell.execute_reply.started": "2024-08-03T12:40:27.173057Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "prompt=PromptTemplate(input_variables=[], template='I want you to extract the person name, age and a description from the following text.\\n    Here is the JSON object, output:\\n    {{\\n        \"name\": string,\\n        \"age\": int,\\n        \"description\": string\\n    }}')\n",
      "prompt=PromptTemplate(input_variables=['input'], template='{input}')\n"
     ]
    }
   ],
   "source": [
    "print(prompt[0])\n",
    "print(prompt[1])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "974fabef-0575-4c79-adae-603ca2c671c0",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:41:00.749539Z",
     "iopub.status.busy": "2024-08-03T12:41:00.748788Z",
     "iopub.status.idle": "2024-08-03T12:41:00.759425Z",
     "shell.execute_reply": "2024-08-03T12:41:00.758407Z",
     "shell.execute_reply.started": "2024-08-03T12:41:00.749489Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatPromptTemplate(input_variables=['input'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='I want you to extract the person name, age and a description from the following text.\\n    Here is the JSON object, output:\\n    {{\\n        \"name\": string,\\n        \"age\": int,\\n        \"description\": string\\n    }}')), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}'))])"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "ChatPromptTemplate.from_messages(\n",
    "    [SystemMessagePromptTemplate.from_template('''I want you to extract the person name, age and a description from the following text.\n",
    "    Here is the JSON object, output:\n",
    "    {{\n",
    "        \"name\": string,\n",
    "        \"age\": int,\n",
    "        \"description\": string\n",
    "    }}'''), \n",
    "     HumanMessagePromptTemplate.from_template(\"{input}\")\n",
    "    ]\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "005e588f-c548-46d2-b5e0-26e896f4f48d",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:41:13.265288Z",
     "iopub.status.busy": "2024-08-03T12:41:13.263987Z",
     "iopub.status.idle": "2024-08-03T12:41:14.520687Z",
     "shell.execute_reply": "2024-08-03T12:41:14.519559Z",
     "shell.execute_reply.started": "2024-08-03T12:41:13.265232Z"
    }
   },
   "outputs": [],
   "source": [
    "result = chain.invoke(\"John is 20 years old. He is a student at the University of California, Berkeley. He is a very smart student.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "a579bbcc-a2d2-4547-98f4-1d93abb06496",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:41:16.144537Z",
     "iopub.status.busy": "2024-08-03T12:41:16.143281Z",
     "iopub.status.idle": "2024-08-03T12:41:16.151578Z",
     "shell.execute_reply": "2024-08-03T12:41:16.150552Z",
     "shell.execute_reply.started": "2024-08-03T12:41:16.144480Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'name': 'John',\n",
       " 'age': 20,\n",
       " 'description': 'He is a student at the University of California, Berkeley. He is a very smart student.'}"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "result"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "a0b19644-3494-4c94-a5d2-4075bdd4d7ab",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:41:39.052920Z",
     "iopub.status.busy": "2024-08-03T12:41:39.051720Z",
     "iopub.status.idle": "2024-08-03T12:41:39.057591Z",
     "shell.execute_reply": "2024-08-03T12:41:39.056583Z",
     "shell.execute_reply.started": "2024-08-03T12:41:39.052869Z"
    }
   },
   "outputs": [],
   "source": [
    "# chain.invoke({'input': \"John is 20 years old. He is a student at the University of California, Berkeley. He is a very smart student.\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "27169d75-b128-4376-8498-b0c405e837e3",
   "metadata": {},
   "source": [
    "### RAG"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "3f861f93-1d7a-49f0-a691-407db40ac076",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T11:30:45.129229Z",
     "iopub.status.busy": "2024-08-03T11:30:45.128940Z",
     "iopub.status.idle": "2024-08-03T11:30:45.132447Z",
     "shell.execute_reply": "2024-08-03T11:30:45.131796Z",
     "shell.execute_reply.started": "2024-08-03T11:30:45.129211Z"
    }
   },
   "outputs": [],
   "source": [
    "# !conda install faiss-gpu -c pytorch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "f948a484-af5a-41de-bb9e-ed0354de0939",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:42:56.825905Z",
     "iopub.status.busy": "2024-08-03T12:42:56.825259Z",
     "iopub.status.idle": "2024-08-03T12:42:56.832850Z",
     "shell.execute_reply": "2024-08-03T12:42:56.831167Z",
     "shell.execute_reply.started": "2024-08-03T12:42:56.825855Z"
    }
   },
   "outputs": [],
   "source": [
    "os.environ[\"LANGCHAIN_PROJECT\"] = 'rag_test'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "25b8eaf5-f932-4dbe-a556-c9fe845f313f",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:42:54.323212Z",
     "iopub.status.busy": "2024-08-03T12:42:54.322452Z",
     "iopub.status.idle": "2024-08-03T12:42:54.339093Z",
     "shell.execute_reply": "2024-08-03T12:42:54.337646Z",
     "shell.execute_reply.started": "2024-08-03T12:42:54.323159Z"
    }
   },
   "outputs": [],
   "source": [
    "from langchain_community.vectorstores import FAISS\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_core.runnables import RunnablePassthrough\n",
    "from langchain_openai import ChatOpenAI, OpenAIEmbeddings"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "f4a0fa06-4ad8-49d1-b6e8-0670c7b45e57",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:43:01.493634Z",
     "iopub.status.busy": "2024-08-03T12:43:01.493053Z",
     "iopub.status.idle": "2024-08-03T12:43:01.597998Z",
     "shell.execute_reply": "2024-08-03T12:43:01.596704Z",
     "shell.execute_reply.started": "2024-08-03T12:43:01.493586Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "OpenAIEmbeddings(client=<openai.resources.embeddings.Embeddings object at 0x7677f6861ac0>, async_client=<openai.resources.embeddings.AsyncEmbeddings object at 0x7677f6643e60>, model='text-embedding-ada-002', dimensions=None, deployment='text-embedding-ada-002', openai_api_version='', openai_api_base=None, openai_api_type='', openai_proxy='', embedding_ctx_length=8191, openai_api_key=SecretStr('**********'), openai_organization=None, allowed_special=None, disallowed_special=None, chunk_size=1000, max_retries=2, request_timeout=None, headers=None, tiktoken_enabled=True, tiktoken_model_name=None, show_progress_bar=False, model_kwargs={}, skip_empty=False, default_headers=None, default_query=None, retry_min_seconds=4, retry_max_seconds=20, http_client=None, http_async_client=None, check_embedding_ctx_length=True)"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "OpenAIEmbeddings()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "4985111c-5aff-4bda-8f39-641dd5390f0e",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:43:20.783995Z",
     "iopub.status.busy": "2024-08-03T12:43:20.783445Z",
     "iopub.status.idle": "2024-08-03T12:43:21.918012Z",
     "shell.execute_reply": "2024-08-03T12:43:21.916936Z",
     "shell.execute_reply.started": "2024-08-03T12:43:20.783959Z"
    }
   },
   "outputs": [],
   "source": [
    "vectorstore = FAISS.from_texts(\n",
    "    [\"Cats love tuna\"], embedding=OpenAIEmbeddings()\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "c818bbe0-acb3-40f1-acc9-a4fd3d6db8e6",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:43:25.514145Z",
     "iopub.status.busy": "2024-08-03T12:43:25.513818Z",
     "iopub.status.idle": "2024-08-03T12:43:25.520633Z",
     "shell.execute_reply": "2024-08-03T12:43:25.519228Z",
     "shell.execute_reply.started": "2024-08-03T12:43:25.514125Z"
    }
   },
   "outputs": [],
   "source": [
    "retriever = vectorstore.as_retriever()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "f686dc47-e768-47c5-9586-18d635a9dd30",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:43:44.994370Z",
     "iopub.status.busy": "2024-08-03T12:43:44.993595Z",
     "iopub.status.idle": "2024-08-03T12:43:45.590381Z",
     "shell.execute_reply": "2024-08-03T12:43:45.589329Z",
     "shell.execute_reply.started": "2024-08-03T12:43:44.994317Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
    "[Document(page_content='Cats love tuna')]"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "retriever.invoke(\"What do cats like to eat?\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "f7874070-4246-4cff-9b01-f6f4bc461d88",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:44:57.908754Z",
     "iopub.status.busy": "2024-08-03T12:44:57.907501Z",
     "iopub.status.idle": "2024-08-03T12:44:57.970329Z",
     "shell.execute_reply": "2024-08-03T12:44:57.969623Z",
     "shell.execute_reply.started": "2024-08-03T12:44:57.908696Z"
    }
   },
   "outputs": [],
   "source": [
    "template = \"\"\"Answer the question based only on the following context:\n",
    "{context}\n",
    "\n",
    "Question: {question}\n",
    "\"\"\"\n",
    "prompt = ChatPromptTemplate.from_template(template=template)\n",
    "\n",
    "def format_docs(docs):\n",
    "    return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
    "\n",
    "rag_chain = (\n",
    "    {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
    "    | prompt\n",
    "    | ChatOpenAI()\n",
    "    | StrOutputParser()\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "94d4e191-d6f1-4df4-be24-a9e858d94c09",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:44:59.861959Z",
     "iopub.status.busy": "2024-08-03T12:44:59.861566Z",
     "iopub.status.idle": "2024-08-03T12:45:01.621366Z",
     "shell.execute_reply": "2024-08-03T12:45:01.619841Z",
     "shell.execute_reply.started": "2024-08-03T12:44:59.861938Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Tuna'"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "rag_chain.invoke(\"What do cats like to eat?\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "073a42e6-cf81-456b-b517-5e6adf9c75e6",
   "metadata": {},
   "source": [
    "## Tool use"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "45eab2e9-9198-447d-b53b-568fc10430a1",
   "metadata": {},
   "source": [
    "- precise math calculation\n",
    "- custom tools (user-defined functions)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "e64f90fc-2ed2-45d6-930f-8382d5ff8dd7",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:45:30.028157Z",
     "iopub.status.busy": "2024-08-03T12:45:30.027423Z",
     "iopub.status.idle": "2024-08-03T12:45:30.033764Z",
     "shell.execute_reply": "2024-08-03T12:45:30.032709Z",
     "shell.execute_reply.started": "2024-08-03T12:45:30.028104Z"
    }
   },
   "outputs": [],
   "source": [
    "os.environ[\"LANGCHAIN_PROJECT\"] = 'tools_test2'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "74fbf160-b3c9-4076-a5ff-2b4ed2f30056",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:46:18.034473Z",
     "iopub.status.busy": "2024-08-03T12:46:18.033806Z",
     "iopub.status.idle": "2024-08-03T12:46:18.039697Z",
     "shell.execute_reply": "2024-08-03T12:46:18.038624Z",
     "shell.execute_reply.started": "2024-08-03T12:46:18.034428Z"
    }
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from langchain_core.tools import Tool\n",
    "from langchain_core.tools import tool"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "df753252-e082-4c8e-aa25-17ce461a7aa4",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:46:19.265361Z",
     "iopub.status.busy": "2024-08-03T12:46:19.264642Z",
     "iopub.status.idle": "2024-08-03T12:46:19.313621Z",
     "shell.execute_reply": "2024-08-03T12:46:19.312535Z",
     "shell.execute_reply.started": "2024-08-03T12:46:19.265310Z"
    }
   },
   "outputs": [],
   "source": [
    "@tool\n",
    "def add(num1: float, num2: float) -> float:\n",
    "    \"Add two numbers.\"\n",
    "    return num1 + num2\n",
    "    \n",
    "@tool\n",
    "def subtract(num1: float, num2: float) -> float:\n",
    "    \"\"\"\n",
    "    Subtract two numbers.\n",
    "    \"\"\"\n",
    "    return num1 - num2\n",
    "    \n",
    "@tool\n",
    "def multiply(num1: float, num2: float) -> float:\n",
    "    \"\"\"Multiply two floats.\"\"\"\n",
    "    return num1 * num2\n",
    "\n",
    "@tool\n",
    "def divide(numerator: float, denominator: float) -> float:\n",
    "    \"\"\"\n",
    "    Divides the numerator by the denominator.\n",
    "    \"\"\"\n",
    "\n",
    "    result = numerator / denominator\n",
    "    return result\n",
    "\n",
    "@tool\n",
    "def power(base: float, exponent: float) -> float:\n",
    "    \"Take the base to the exponent power, base^exponent.\"\n",
    "    return base**exponent\n",
    "\n",
    "\n",
    "@tool\n",
    "def exp(x: float) -> float:\n",
    "    \"\"\"\n",
    "    Calculate the natural exponential $e^x$.\n",
    "    \"\"\"\n",
    "    return np.exp(x)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "358e6491-02e5-43e3-99a4-42065159d7c3",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:46:22.340435Z",
     "iopub.status.busy": "2024-08-03T12:46:22.339848Z",
     "iopub.status.idle": "2024-08-03T12:46:22.346013Z",
     "shell.execute_reply": "2024-08-03T12:46:22.344867Z",
     "shell.execute_reply.started": "2024-08-03T12:46:22.340385Z"
    }
   },
   "outputs": [],
   "source": [
    "tools = [add, subtract, multiply, divide, power, exp]"
   ]
  },
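  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b7c1f2aa-tool-schema-check",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity check (added for illustration, not part of the original run):\n",
    "# @tool wraps each function as a StructuredTool whose name, description,\n",
    "# and argument schema are derived from the signature and docstring --\n",
    "# this is what the LLM sees when deciding which tool to call.\n",
    "for t in tools:\n",
    "    print(t.name, '|', t.description, '|', t.args)"
   ]
  },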
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "eeef5b00-78fe-41fe-9f6e-a62c06c6ed3f",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:46:25.919363Z",
     "iopub.status.busy": "2024-08-03T12:46:25.918284Z",
     "iopub.status.idle": "2024-08-03T12:46:26.161948Z",
     "shell.execute_reply": "2024-08-03T12:46:26.161214Z",
     "shell.execute_reply.started": "2024-08-03T12:46:25.919308Z"
    }
   },
   "outputs": [],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "from langchain.agents import create_openai_tools_agent\n",
    "from langchain_core.prompts import (\n",
    "    ChatPromptTemplate,\n",
    "    MessagesPlaceholder,\n",
    "    HumanMessagePromptTemplate,\n",
    "    SystemMessagePromptTemplate,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "id": "4828b764-3e4e-4932-8e8e-1ac5562d58d3",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:47:28.167445Z",
     "iopub.status.busy": "2024-08-03T12:47:28.167053Z",
     "iopub.status.idle": "2024-08-03T12:47:28.221453Z",
     "shell.execute_reply": "2024-08-03T12:47:28.220770Z",
     "shell.execute_reply.started": "2024-08-03T12:47:28.167412Z"
    }
   },
   "outputs": [],
   "source": [
    "llm = ChatOpenAI(model=\"gpt-4o\", temperature=0, streaming=True)\n",
    "\n",
    "system_template = \"\"\"\n",
    "You are a helpful math assistant that uses calculation functions to solve complex math problems step by step.\n",
    "\"\"\"\n",
    "\n",
    "human_template = \"{input}\"\n",
    "\n",
    "prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        SystemMessagePromptTemplate.from_template(system_template),\n",
    "        MessagesPlaceholder(variable_name=\"chat_history\", optional=True),\n",
    "        HumanMessagePromptTemplate.from_template(input_variables=[\"input\"], template=human_template),\n",
    "        MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "agent = create_openai_tools_agent(llm, tools, prompt)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "e7b7cdf1-e6bf-41b8-a769-af7aef6f1402",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:47:50.837574Z",
     "iopub.status.busy": "2024-08-03T12:47:50.837388Z",
     "iopub.status.idle": "2024-08-03T12:47:50.841587Z",
     "shell.execute_reply": "2024-08-03T12:47:50.840942Z",
     "shell.execute_reply.started": "2024-08-03T12:47:50.837560Z"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.agents import AgentExecutor\n",
    "\n",
    "agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "b35e9e49-7616-4e87-9077-1549ff1a3585",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T11:47:23.029018Z",
     "iopub.status.busy": "2024-08-03T11:47:23.028715Z",
     "iopub.status.idle": "2024-08-03T11:47:23.032327Z",
     "shell.execute_reply": "2024-08-03T11:47:23.031653Z",
     "shell.execute_reply.started": "2024-08-03T11:47:23.028997Z"
    }
   },
   "outputs": [],
   "source": [
    "#  \"\"\"Agent that is using tools.\"\"\"\n",
    "# AgentExecutor??"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "id": "d56ef370-3359-4db1-b27e-f44f22b37474",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:48:05.544045Z",
     "iopub.status.busy": "2024-08-03T12:48:05.543457Z",
     "iopub.status.idle": "2024-08-03T12:48:16.703786Z",
     "shell.execute_reply": "2024-08-03T12:48:16.702919Z",
     "shell.execute_reply.started": "2024-08-03T12:48:05.543995Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3m\n",
      "Invoking: `exp` with `{'x': -2.5}`\n",
      "responded: The sigmoid function is defined as:\n",
      "\n",
      "\\[ \\sigma(x) = \\frac{1}{1 + e^{-x}} \\]\n",
      "\n",
      "The derivative of the sigmoid function is:\n",
      "\n",
      "\\[ \\sigma'(x) = \\sigma(x) \\cdot (1 - \\sigma(x)) \\]\n",
      "\n",
      "First, we need to calculate \\(\\sigma(2.5)\\):\n",
      "\n",
      "\\[ \\sigma(2.5) = \\frac{1}{1 + e^{-2.5}} \\]\n",
      "\n",
      "Let's calculate \\(e^{-2.5}\\) and then \\(\\sigma(2.5)\\).\n",
      "\n",
      "\u001b[0m\u001b[38;5;200m\u001b[1;3m0.0820849986238988\u001b[0m\u001b[32;1m\u001b[1;3m\n",
      "Invoking: `divide` with `{'numerator': 1, 'denominator': 1.082085}`\n",
      "responded: We have \\( e^{-2.5} \\approx 0.082085 \\).\n",
      "\n",
      "Now, we can calculate \\(\\sigma(2.5)\\):\n",
      "\n",
      "\\[ \\sigma(2.5) = \\frac{1}{1 + 0.082085} \\]\n",
      "\n",
      "Let's compute this value.\n",
      "\n",
      "\u001b[0m\u001b[36;1m\u001b[1;3m0.9241418188035136\u001b[0m\u001b[32;1m\u001b[1;3m\n",
      "Invoking: `subtract` with `{'num1': 1, 'num2': 0.9241418188035136}`\n",
      "responded: We have \\(\\sigma(2.5) \\approx 0.9241\\).\n",
      "\n",
      "Next, we need to calculate the derivative \\(\\sigma'(2.5)\\):\n",
      "\n",
      "\\[ \\sigma'(2.5) = \\sigma(2.5) \\cdot (1 - \\sigma(2.5)) \\]\n",
      "\n",
      "Let's compute \\(1 - \\sigma(2.5)\\) and then \\(\\sigma'(2.5)\\).\n",
      "\n",
      "\u001b[0m\u001b[33;1m\u001b[1;3m0.07585818119648635\u001b[0m\u001b[32;1m\u001b[1;3m\n",
      "Invoking: `multiply` with `{'num1': 0.9241418188035136, 'num2': 0.07585818119648635}`\n",
      "\n",
      "\n",
      "\u001b[0m\u001b[38;5;200m\u001b[1;3m0.0701037175420474\u001b[0m\u001b[32;1m\u001b[1;3mThe derivative of the sigmoid function at \\(x = 2.5\\) is approximately \\(0.0701\\).\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
        "{'input': 'What is the derivative of sigmoid(2.5)?',\n",
       " 'output': 'The derivative of the sigmoid function at \\\\(x = 2.5\\\\) is approximately \\\\(0.0701\\\\).'}"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "agent_executor.invoke({\"input\": \"What is the derivative of sigmoid(2.5)?\"})"
   ]
  },
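  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c9d4e8bb-numpy-crosscheck",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Cross-check (added): reproduce the agent's exp -> divide -> subtract ->\n",
    "# multiply steps directly with NumPy; the result should be about 0.0701.\n",
    "s = 1 / (1 + np.exp(-2.5))\n",
    "s * (1 - s)"
   ]
  },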
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "49fd8046-7873-447b-9d99-fe9088617e67",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:03:59.256776Z",
     "iopub.status.busy": "2024-08-03T12:03:59.256588Z",
     "iopub.status.idle": "2024-08-03T12:04:00.322192Z",
     "shell.execute_reply": "2024-08-03T12:04:00.321451Z",
     "shell.execute_reply.started": "2024-08-03T12:03:59.256757Z"
    }
   },
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn.functional as F"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "24b70207-8fef-4fb9-bfd5-9c3ad4ec3aa2",
   "metadata": {},
   "source": [
    "$$\n",
    "\\begin{split}\n",
    "\\sigma(x)&=\\frac{1}{1+\\exp(-x)}\\\\\n",
    "\\sigma'(x)&=\\sigma(x)(1-\\sigma(x))\\\\\n",
    "\\end{split}\n",
    "$$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "a341b239-5521-4cbe-9ffe-7854185aee46",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-08-03T12:04:15.198164Z",
     "iopub.status.busy": "2024-08-03T12:04:15.197817Z",
     "iopub.status.idle": "2024-08-03T12:04:15.206062Z",
     "shell.execute_reply": "2024-08-03T12:04:15.205753Z",
     "shell.execute_reply.started": "2024-08-03T12:04:15.198144Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([0.0701])"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "torch.sigmoid(torch.tensor([2.5])) * (1 - torch.sigmoid(torch.tensor([2.5])))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "agent",
   "language": "python",
   "name": "agent"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
