{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Chaining runnables \n",
    "One key advantage of the `Runnable` interface is that any two runnables can be \"chained\" together into sequences. The output of the previous runnable's .invoke() call is passed as input to the next runnable. This can be done using the pipe operator (|), or the more explicit .pipe() method, which does the same thing. The resulting `RunnableSequence` is itself a runnable, which means it can be invoked, streamed, or piped just like any other runnable."
   ]
  },
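  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The mechanics behind `|` can be sketched in plain Python. The `ToyRunnable` class below is a hypothetical toy, not LangChain's implementation: each step implements `__or__`, so `a | b` builds a new step whose `invoke()` feeds the output of `a` into `b`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ToyRunnable:\n",
    "    \"\"\"Toy stand-in for a runnable: wraps a function behind .invoke().\"\"\"\n",
    "\n",
    "    def __init__(self, func):\n",
    "        self.func = func\n",
    "\n",
    "    def invoke(self, value):\n",
    "        return self.func(value)\n",
    "\n",
    "    def __or__(self, other):\n",
    "        # `self | other` builds a new step: run self, then pass the result to other\n",
    "        return ToyRunnable(lambda value: other.invoke(self.invoke(value)))\n",
    "\n",
    "\n",
    "upper = ToyRunnable(str.upper)\n",
    "exclaim = ToyRunnable(lambda s: s + \"!\")\n",
    "\n",
    "toy_chain = upper | exclaim\n",
    "toy_chain.invoke(\"hello\")  # 'HELLO!'"
   ]
  },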
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The `|` operator "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Why did the bear bring a flashlight to the party?\\n\\nBecause he heard it was going to be a \"beary\" good time!'"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "prompt = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\")\n",
    "model = ChatOpenAI()\n",
    "\n",
    "chain = prompt | model | StrOutputParser()\n",
    "# chain = prompt.pipe(model).pipe(StrOutputParser())\n",
    "chain.invoke({\"topic\": \"bears\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the below example, the dict in the chain is automatically parsed and converted into a `RunnableParallel`, which runs all of its values in parallel and returns a dict with the results."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Some people might find this joke funny, as it plays on the stereotype of bears being slow and clumsy. However, humor is subjective so others may not find it amusing.'"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.output_parsers import StrOutputParser\n",
    "\n",
    "analysis_prompt = ChatPromptTemplate.from_template(\"is this a funny joke? {joke}\")\n",
    "\n",
    "composed_chain = {\"joke\": chain} | analysis_prompt | model | StrOutputParser()\n",
    "# composed_chain = RunnableParallel({\"joke\": chain}).pipe(analysis_prompt).pipe(model).pipe(StrOutputParser())\n",
    "composed_chain.invoke({\"topic\": \"bears\"})"
   ]
  },
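  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The dict-to-`RunnableParallel` coercion can likewise be sketched in plain Python. The `ToyStep` and `ToyParallel` classes below are hypothetical toys, not LangChain internals: every value in the mapping receives the same input, and the results come back as a dict under the same keys."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ToyStep:\n",
    "    \"\"\"Toy stand-in for a runnable: wraps a function behind .invoke().\"\"\"\n",
    "\n",
    "    def __init__(self, func):\n",
    "        self.func = func\n",
    "\n",
    "    def invoke(self, value):\n",
    "        return self.func(value)\n",
    "\n",
    "\n",
    "class ToyParallel:\n",
    "    \"\"\"Toy stand-in for RunnableParallel: a mapping of named branches.\"\"\"\n",
    "\n",
    "    def __init__(self, branches):\n",
    "        self.branches = branches\n",
    "\n",
    "    def invoke(self, value):\n",
    "        # Every branch gets the same input; results are keyed by branch name\n",
    "        return {name: step.invoke(value) for name, step in self.branches.items()}\n",
    "\n",
    "\n",
    "toy_parallel = ToyParallel({\"upper\": ToyStep(str.upper), \"length\": ToyStep(len)})\n",
    "toy_parallel.invoke(\"bears\")  # {'upper': 'BEARS', 'length': 5}"
   ]
  },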
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'title': 'RunnableParallel<joke>Input',\n",
       " 'type': 'object',\n",
       " 'properties': {'topic': {'title': 'Topic', 'type': 'string'}}}"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "composed_chain.input_schema.schema()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Functions will also be coerced into runnables, so you can add custom logic to your chains too. The below chain results in the same logical flow as before:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Some people may find this joke funny, while others may not. It ultimately depends on individual sense of humor.'"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "composed_chain_with_lambda = (\n",
    "    chain\n",
    "    | (lambda input: {\"joke\": input})\n",
    "    | analysis_prompt\n",
    "    | model\n",
    "    | StrOutputParser()\n",
    ")\n",
    "composed_chain_with_lambda.invoke({\"topic\": \"beets\"})"
   ]
  },
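  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Function coercion can be sketched the same way. The `CoercingRunnable` class below is a hypothetical toy, not LangChain internals: when a plain callable appears on the right of `|`, it is wrapped first, so the chain only ever composes objects that expose `.invoke()`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class CoercingRunnable:\n",
    "    \"\"\"Toy runnable whose `|` also accepts plain functions.\"\"\"\n",
    "\n",
    "    def __init__(self, func):\n",
    "        self.func = func\n",
    "\n",
    "    def invoke(self, value):\n",
    "        return self.func(value)\n",
    "\n",
    "    def __or__(self, other):\n",
    "        if not isinstance(other, CoercingRunnable):\n",
    "            other = CoercingRunnable(other)  # coerce the bare callable\n",
    "        return CoercingRunnable(lambda value: other.invoke(self.invoke(value)))\n",
    "\n",
    "\n",
    "wrap = CoercingRunnable(str.strip) | (lambda s: {\"joke\": s})\n",
    "wrap.invoke(\"  why did the chicken...  \")  # {'joke': 'why did the chicken...'}"
   ]
  },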
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The .pipe() method"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Some people might find it funny, especially fans of the TV show Battlestar Galactica which features Cylons. Others might not find it as humorous. Humor is subjective!'"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.runnables import RunnableParallel\n",
    "\n",
    "composed_chain_with_pipe = (\n",
    "    RunnableParallel({\"joke\": chain})\n",
    "    .pipe(analysis_prompt)\n",
    "    .pipe(model)\n",
    "    .pipe(StrOutputParser())\n",
    ")\n",
    "composed_chain_with_pipe.invoke({\"topic\": \"battlestar galactica\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'title': 'RunnableParallel',\n",
       " 'description': 'Runnable that runs a mapping of Runnables in parallel, and returns a mapping\\nof their outputs.\\n\\nRunnableParallel is one of the two main composition primitives for the LCEL,\\nalongside RunnableSequence. It invokes Runnables concurrently, providing the same\\ninput to each.\\n\\nA RunnableParallel can be instantiated directly or by using a dict literal within a\\nsequence.\\n\\nHere is a simple example that uses functions to illustrate the use of\\nRunnableParallel:\\n\\n    .. code-block:: python\\n\\n        from langchain_core.runnables import RunnableLambda\\n\\n        def add_one(x: int) -> int:\\n            return x + 1\\n\\n        def mul_two(x: int) -> int:\\n            return x * 2\\n\\n        def mul_three(x: int) -> int:\\n            return x * 3\\n\\n        runnable_1 = RunnableLambda(add_one)\\n        runnable_2 = RunnableLambda(mul_two)\\n        runnable_3 = RunnableLambda(mul_three)\\n\\n        sequence = runnable_1 | {  # this dict is coerced to a RunnableParallel\\n            \"mul_two\": runnable_2,\\n            \"mul_three\": runnable_3,\\n        }\\n        # Or equivalently:\\n        # sequence = runnable_1 | RunnableParallel(\\n        #     {\"mul_two\": runnable_2, \"mul_three\": runnable_3}\\n        # )\\n        # Also equivalently:\\n        # sequence = runnable_1 | RunnableParallel(\\n        #     mul_two=runnable_2,\\n        #     mul_three=runnable_3,\\n        # )\\n\\n        sequence.invoke(1)\\n        await sequence.ainvoke(1)\\n\\n        sequence.batch([1, 2, 3])\\n        await sequence.abatch([1, 2, 3])\\n\\nRunnableParallel makes it easy to run Runnables in parallel. In the below example,\\nwe simultaneously stream output from two different Runnables:\\n\\n    .. 
code-block:: python\\n\\n        from langchain_core.prompts import ChatPromptTemplate\\n        from langchain_core.runnables import RunnableParallel\\n        from langchain_openai import ChatOpenAI\\n\\n        model = ChatOpenAI()\\n        joke_chain = (\\n            ChatPromptTemplate.from_template(\"tell me a joke about {topic}\")\\n            | model\\n        )\\n        poem_chain = (\\n            ChatPromptTemplate.from_template(\"write a 2-line poem about {topic}\")\\n            | model\\n        )\\n\\n        runnable = RunnableParallel(joke=joke_chain, poem=poem_chain)\\n\\n        # Display stream\\n        output = {key: \"\" for key, _ in runnable.output_schema()}\\n        for chunk in runnable.stream({\"topic\": \"bear\"}):\\n            for key in chunk:\\n                output[key] = output[key] + chunk[key].content\\n            print(output)  # noqa: T201',\n",
       " 'type': 'object',\n",
       " 'properties': {'name': {'title': 'Name', 'type': 'string'},\n",
       "  'steps__': {'title': 'Steps  ',\n",
       "   'type': 'object',\n",
       "   'additionalProperties': {'allOf': [{'type': 'array', 'items': [{}, {}]}]}}},\n",
       " 'required': ['steps__']}"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# pipe() result: RunableSequence\n",
    "RunnableParallel({\"joke\": chain}).schema()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "langchain0_1",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
