{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Composition\n",
    "> LangChain provides a user friendly interface for composing different parts of prompts together. You can do this with either string prompts or chat prompts. Constructing prompts this way allows for easy reuse of components.<br>\n",
    "LangChain提供了一个用户友好的界面，用于将提示的不同部分组合在一起。您可以使用字符串提示或聊天提示来执行此操作。以这种方式构造提示可以轻松重用组件"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## String prompt composition"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Tell me a joke about sports, make it funny\n",
      "\n",
      "and in spanish\n"
     ]
    }
   ],
   "source": [
    "from langchain_core.prompts import PromptTemplate\n",
    "\n",
    "# 使用字符串组合模板\n",
    "prompt = (\n",
    "    PromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
    "    + \", make it funny\"\n",
    "    + \"\\n\\nand in {language}\"\n",
    ")\n",
    "message=prompt.format(topic=\"sports\", language=\"spanish\")\n",
    "print(message)"
   ]
  },
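  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Composing with `+` returns a new `PromptTemplate` that collects input variables from every fragment; a quick sketch to check this (not executed here):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The composed template gathers variables from all fragments;\n",
    "# this should contain both 'topic' and 'language'.\n",
    "prompt.input_variables"
   ]
  },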
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'为什么篮球比赛总是那么吵？因为球员总是在投篮！哈哈哈！'"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.chains import LLMChain\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "model = ChatOpenAI()\n",
    "chain = LLMChain(llm=model, prompt=prompt)\n",
    "chain.run(topic=\"sports\", language=\"中文\")"
   ]
  },
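  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`LLMChain` is the legacy chain interface; the same call can also be written with the LCEL pipe operator. A minimal sketch, assuming `langchain_core`'s `StrOutputParser` (not executed here):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "model = ChatOpenAI()\n",
    "# prompt | model | parser builds a RunnableSequence\n",
    "lcel_chain = prompt | model | StrOutputParser()\n",
    "lcel_chain.invoke({\"topic\": \"sports\", \"language\": \"spanish\"})"
   ]
  },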
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Chat prompt composition"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[SystemMessage(content='You are a nice pirate'),\n",
       " HumanMessage(content='hi'),\n",
       " AIMessage(content='what?'),\n",
       " HumanMessage(content='i said hi')]"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.messages import AIMessage, HumanMessage, SystemMessage\n",
    "\n",
    "# 使用Chat Prompt组合模版\n",
    "prompt = SystemMessage(content=\"You are a nice pirate\")\n",
    "new_prompt = (\n",
    "    prompt + HumanMessage(content=\"hi\") + AIMessage(content=\"what?\") + \"{input}\"\n",
    ")\n",
    "new_prompt.format_messages(input=\"i said hi\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Oh, sorry about that! Hi there! How can I assist you today?'"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.chains import LLMChain\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "model = ChatOpenAI()\n",
    "chain = LLMChain(llm=model, prompt=new_prompt)\n",
    "chain.run(\"i said hi\")"
   ]
  },
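  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A chain is optional here: a chat model also accepts the formatted message list directly. A minimal sketch (not executed here):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "model = ChatOpenAI()\n",
    "# format_messages fills {input} and returns the message list,\n",
    "# which the chat model consumes directly\n",
    "model.invoke(new_prompt.format_messages(input=\"i said hi\"))"
   ]
  },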
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Using PipelinePrompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['example_a', 'example_q', 'input', 'person']"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.prompts.pipeline import PipelinePromptTemplate\n",
    "from langchain_core.prompts.prompt import PromptTemplate\n",
    "\n",
    "full_template = \"\"\"{introduction}\n",
    "\n",
    "{example}\n",
    "\n",
    "{start}\"\"\"\n",
    "\n",
    "full_prompt = PromptTemplate.from_template(full_template)\n",
    "# \n",
    "introduction_template = \"\"\"You are impersonating {person}.\"\"\"\n",
    "introduction_prompt = PromptTemplate.from_template(introduction_template)\n",
    "\n",
    "# \n",
    "example_template = \"\"\"Here's an example of an interaction:\n",
    "\n",
    "Q: {example_q}\n",
    "A: {example_a}\"\"\"\n",
    "example_prompt = PromptTemplate.from_template(example_template)\n",
    "\n",
    "# \n",
    "start_template = \"\"\"Now, do this for real!\n",
    "\n",
    "Q: {input}\n",
    "A:\"\"\"\n",
    "start_prompt = PromptTemplate.from_template(start_template)\n",
    "\n",
    "# \n",
    "input_prompts = [\n",
    "    (\"introduction\", introduction_prompt),\n",
    "    (\"example\", example_prompt),\n",
    "    (\"start\", start_prompt),\n",
    "]\n",
    "\n",
    "# pipeline组合模板\n",
    "pipeline_prompt = PipelinePromptTemplate(\n",
    "    final_prompt=full_prompt, pipeline_prompts=input_prompts\n",
    ")\n",
    "pipeline_prompt.input_variables"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are impersonating Elon Musk.\n",
      "\n",
      "Here's an example of an interaction:\n",
      "\n",
      "Q: What's your favorite car?\n",
      "A: Tesla\n",
      "\n",
      "Now, do this for real!\n",
      "\n",
      "Q: What's your favorite social media site?\n",
      "A:\n"
     ]
    }
   ],
   "source": [
    "print(\n",
    "    pipeline_prompt.format(\n",
    "        person=\"Elon Musk\",\n",
    "        example_q=\"What's your favorite car?\",\n",
    "        example_a=\"Tesla\",\n",
    "        input=\"What's your favorite social media site?\",\n",
    "    )\n",
    ")"
   ]
  }
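  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, `PipelinePromptTemplate` is roughly equivalent to formatting each sub-prompt first and passing the results into the full template by hand; a sketch of that manual composition (not executed here):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Manual equivalent of the pipeline: format each piece, then the whole\n",
    "print(\n",
    "    full_prompt.format(\n",
    "        introduction=introduction_prompt.format(person=\"Elon Musk\"),\n",
    "        example=example_prompt.format(\n",
    "            example_q=\"What's your favorite car?\", example_a=\"Tesla\"\n",
    "        ),\n",
    "        start=start_prompt.format(input=\"What's your favorite social media site?\"),\n",
    "    )\n",
    ")"
   ]
  }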
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "langchain0_1",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
