How to stream runnables
=======================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
* [LangChain Expression Language](/v0.2/docs/concepts/#langchain-expression-language)
* [Output parsers](/v0.2/docs/concepts/#output-parsers)
Streaming is critical in making applications based on LLMs feel responsive to end-users.
Important LangChain primitives like [chat models](/v0.2/docs/concepts/#chat-models), [output parsers](/v0.2/docs/concepts/#output-parsers), [prompts](/v0.2/docs/concepts/#prompt-templates), [retrievers](/v0.2/docs/concepts/#retrievers), and [agents](/v0.2/docs/concepts/#agents) implement the LangChain [Runnable Interface](/v0.2/docs/concepts/#interface).
This interface provides two general approaches to stream content:
1. sync `stream` and async `astream`: a **default implementation** of streaming that streams the **final output** from the chain.
2. async `astream_events` and async `astream_log`: these provide a way to stream both **intermediate steps** and **final output** from the chain.
Let's take a look at both approaches, and try to understand how to use them.
info
For a higher-level overview of streaming techniques in LangChain, see [this section of the conceptual guide](/v0.2/docs/concepts/#streaming).
Using Stream
------------------------------------------------------------
All `Runnable` objects implement a sync method called `stream` and an async variant called `astream`.
These methods are designed to stream the final output in chunks, yielding each chunk as soon as it is available.
Streaming is only possible if all steps in the program know how to process an **input stream**; i.e., process an input chunk one at a time, and yield a corresponding output chunk.
The complexity of this processing can vary, from straightforward tasks like emitting tokens produced by an LLM, to more challenging ones like streaming parts of JSON results before the entire JSON is complete.
The best place to start exploring streaming is with the single most important component in LLM apps: the LLMs themselves!
### LLMs and Chat Models
Large language models and their chat variants are the primary bottleneck in LLM-based apps.
Large language models can take **several seconds** to generate a complete response to a query. This is far slower than the **~200-300 ms** threshold at which an application feels responsive to an end user.
The key strategy to make the application feel more responsive is to show intermediate progress; viz., to stream the output from the model **token by token**.
We will show examples of streaming using a chat model. Choose one from the options below:
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
    import getpass
    import os

    os.environ["OPENAI_API_KEY"] = getpass.getpass()

    from langchain_openai import ChatOpenAI

    model = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
    import getpass
    import os

    os.environ["ANTHROPIC_API_KEY"] = getpass.getpass()

    from langchain_anthropic import ChatAnthropic

    model = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
    import getpass
    import os

    os.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()

    from langchain_openai import AzureChatOpenAI

    model = AzureChatOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
        openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    )
pip install -qU langchain-google-vertexai
    import getpass
    import os

    os.environ["GOOGLE_API_KEY"] = getpass.getpass()

    from langchain_google_vertexai import ChatVertexAI

    model = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
    import getpass
    import os

    os.environ["COHERE_API_KEY"] = getpass.getpass()

    from langchain_cohere import ChatCohere

    model = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
    import getpass
    import os

    os.environ["FIREWORKS_API_KEY"] = getpass.getpass()

    from langchain_fireworks import ChatFireworks

    model = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
    import getpass
    import os

    os.environ["GROQ_API_KEY"] = getpass.getpass()

    from langchain_groq import ChatGroq

    model = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
    import getpass
    import os

    os.environ["MISTRAL_API_KEY"] = getpass.getpass()

    from langchain_mistralai import ChatMistralAI

    model = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
    import getpass
    import os

    os.environ["TOGETHER_API_KEY"] = getpass.getpass()

    from langchain_openai import ChatOpenAI

    model = ChatOpenAI(
        base_url="https://api.together.xyz/v1",
        api_key=os.environ["TOGETHER_API_KEY"],
        model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    )
Let's start with the sync `stream` API:
    chunks = []
    for chunk in model.stream("what color is the sky?"):
        chunks.append(chunk)
        print(chunk.content, end="|", flush=True)
The| sky| appears| blue| during| the| day|.|
Alternatively, if you're working in an async environment, you may consider using the async `astream` API:
    chunks = []
    async for chunk in model.astream("what color is the sky?"):
        chunks.append(chunk)
        print(chunk.content, end="|", flush=True)
The| sky| appears| blue| during| the| day|.|
Let's inspect one of the chunks
chunks[0]
AIMessageChunk(content='The', id='run-b36bea64-5511-4d7a-b6a3-a07b3db0c8e7')
We got back something called an `AIMessageChunk`. This chunk represents a part of an `AIMessage`.
Message chunks are additive by design -- one can simply add them up to get the state of the response so far!
chunks[0] + chunks[1] + chunks[2] + chunks[3] + chunks[4]
AIMessageChunk(content='The sky appears blue during', id='run-b36bea64-5511-4d7a-b6a3-a07b3db0c8e7')
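Because chunks are additive, you can also fold the whole list back into a single message. A minimal sketch (not from the original notebook), assuming the `chunks` list collected above:

    # Accumulate every streamed chunk into one AIMessageChunk holding the full response.
    full_message = chunks[0]
    for chunk in chunks[1:]:
        full_message = full_message + chunk

    print(full_message.content)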
### Chains
Virtually all LLM applications involve more steps than just a call to a language model.
Let's build a simple chain using `LangChain Expression Language` (`LCEL`) that combines a prompt, model and a parser and verify that streaming works.
We will use [`StrOutputParser`](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) to parse the output from the model. This is a simple parser that extracts the `content` field from an `AIMessageChunk`, giving us the `token` returned by the model.
tip
LCEL is a _declarative_ way to specify a "program" by chaining together different LangChain primitives. Chains created using LCEL benefit from an automatic implementation of `stream` and `astream` allowing streaming of the final output. In fact, chains created with LCEL implement the entire standard Runnable interface.
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate

    prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
    parser = StrOutputParser()
    chain = prompt | model | parser

    async for chunk in chain.astream({"topic": "parrot"}):
        print(chunk, end="|", flush=True)
**API Reference:**[StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
Here|'s| a| joke| about| a| par|rot|:|A man| goes| to| a| pet| shop| to| buy| a| par|rot|.| The| shop| owner| shows| him| two| stunning| pa|rr|ots| with| beautiful| pl|um|age|.|"|There|'s| a| talking| par|rot| an|d a| non|-|talking| par|rot|,"| the| owner| says|.| "|The| talking| par|rot| costs| $|100|,| an|d the| non|-|talking| par|rot| is| $|20|."|The| man| says|,| "|I|'ll| take| the| non|-|talking| par|rot| at| $|20|."|He| pays| an|d leaves| with| the| par|rot|.| As| he|'s| walking| down| the| street|,| the| par|rot| looks| up| at| him| an|d says|,| "|You| know|,| you| really| are| a| stupi|d man|!"|The| man| is| stun|ne|d an|d looks| at| the| par|rot| in| dis|bel|ief|.| The| par|rot| continues|,| "|Yes|,| you| got| r|ippe|d off| big| time|!| I| can| talk| just| as| well| as| that| other| par|rot|,| an|d you| only| pai|d $|20| |for| me|!"|
Note that we're getting streaming output even though we're using `parser` at the end of the chain above. The `parser` operates on each streaming chunk individually. Many of the [LCEL primitives](/v0.2/docs/how_to/#langchain-expression-language-lcel) also support this kind of transform-style passthrough streaming, which can be very convenient when constructing apps.
Custom functions can be [designed to return generators](/v0.2/docs/how_to/functions/#streaming), which are able to operate on streams.
Certain runnables, like [prompt templates](/v0.2/docs/how_to/#prompt-templates) and [chat models](/v0.2/docs/how_to/#chat-models), cannot process individual chunks and instead aggregate all previous steps. Such runnables can interrupt the streaming process.
note
The LangChain Expression Language allows you to separate the construction of a chain from the mode in which it is used (e.g., sync/async, batch/streaming etc.). If this is not relevant to what you're building, you can also rely on a standard **imperative** programming approach by calling `invoke`, `batch` or `stream` on each component individually, assigning the results to variables and then using them downstream as you see fit.
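As an illustration (a sketch, not part of the original guide), the imperative equivalent of the `prompt | model | parser` chain above looks like this, reusing the same `prompt`, `model`, and `parser` objects:

    # Imperative style: call each component yourself instead of composing with LCEL.
    prompt_value = prompt.invoke({"topic": "parrot"})

    for chunk in model.stream(prompt_value):
        # StrOutputParser just pulls the string content out of each message chunk.
        print(parser.invoke(chunk), end="|", flush=True)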
### Working with Input Streams
What if you wanted to stream JSON from the output as it was being generated?
If you were to rely on `json.loads` to parse the partial json, the parsing would fail as the partial json wouldn't be valid json.
You'd likely be at a complete loss of what to do and claim that it wasn't possible to stream JSON.
Well, turns out there is a way to do it -- the parser needs to operate on the **input stream**, and attempt to "auto-complete" the partial json into a valid state.
Let's see such a parser in action to understand what this means.
    from langchain_core.output_parsers import JsonOutputParser

    chain = (
        model | JsonOutputParser()
    )  # Due to a bug in older versions of Langchain, JsonOutputParser did not stream results from some models

    async for text in chain.astream(
        "output a list of the countries france, spain and japan and their populations in JSON format. "
        'Use a dict with an outer key of "countries" which contains a list of countries. '
        "Each country should have the key `name` and `population`"
    ):
        print(text, flush=True)
**API Reference:**[JsonOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html)
    {}
    {'countries': []}
    {'countries': [{}]}
    {'countries': [{'name': ''}]}
    {'countries': [{'name': 'France'}]}
    {'countries': [{'name': 'France', 'population': 67}]}
    {'countries': [{'name': 'France', 'population': 67413}]}
    {'countries': [{'name': 'France', 'population': 67413000}]}
    {'countries': [{'name': 'France', 'population': 67413000}, {}]}
    {'countries': [{'name': 'France', 'population': 67413000}, {'name': ''}]}
    {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain'}]}
    {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47}]}
    {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351}]}
    {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}]}
    {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {}]}
    {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {'name': ''}]}
    {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {'name': 'Japan'}]}
    {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {'name': 'Japan', 'population': 125}]}
    {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {'name': 'Japan', 'population': 125584}]}
    {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {'name': 'Japan', 'population': 125584000}]}
Now, let's **break** streaming. We'll use the previous example and append an extraction function at the end that extracts the country names from the finalized JSON.
danger
Any steps in the chain that operate on **finalized inputs** rather than on **input streams** can break streaming functionality via `stream` or `astream`.
tip
Later, we will discuss the `astream_events` API which streams results from intermediate steps. This API will stream results from intermediate steps even if the chain contains steps that only operate on **finalized inputs**.
    from langchain_core.output_parsers import (
        JsonOutputParser,
    )


    # A function that operates on finalized inputs
    # rather than on an input_stream
    def _extract_country_names(inputs):
        """A function that does not operate on input streams and breaks streaming."""
        if not isinstance(inputs, dict):
            return ""
        if "countries" not in inputs:
            return ""
        countries = inputs["countries"]
        if not isinstance(countries, list):
            return ""
        country_names = [
            country.get("name") for country in countries if isinstance(country, dict)
        ]
        return country_names


    chain = model | JsonOutputParser() | _extract_country_names

    async for text in chain.astream(
        "output a list of the countries france, spain and japan and their populations in JSON format. "
        'Use a dict with an outer key of "countries" which contains a list of countries. '
        "Each country should have the key `name` and `population`"
    ):
        print(text, end="|", flush=True)
**API Reference:**[JsonOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html)
['France', 'Spain', 'Japan']|
#### Generator Functions
Let's fix the streaming using a generator function that can operate on the **input stream**.
tip
A generator function (a function that uses `yield`) allows writing code that operates on **input streams**
    from langchain_core.output_parsers import JsonOutputParser


    async def _extract_country_names_streaming(input_stream):
        """A function that operates on input streams."""
        country_names_so_far = set()

        async for input in input_stream:
            if not isinstance(input, dict):
                continue

            if "countries" not in input:
                continue

            countries = input["countries"]
            if not isinstance(countries, list):
                continue

            for country in countries:
                name = country.get("name")
                if not name:
                    continue
                if name not in country_names_so_far:
                    yield name
                    country_names_so_far.add(name)


    chain = model | JsonOutputParser() | _extract_country_names_streaming

    async for text in chain.astream(
        "output a list of the countries france, spain and japan and their populations in JSON format. "
        'Use a dict with an outer key of "countries" which contains a list of countries. '
        "Each country should have the key `name` and `population`",
    ):
        print(text, end="|", flush=True)
**API Reference:**[JsonOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html)
France|Spain|Japan|
note
Because the code above is relying on JSON auto-completion, you may see partial names of countries (e.g., `Sp` and `Spain`), which is not what one would want for an extraction result!
We're focusing on streaming concepts, not necessarily the results of the chains.
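One pragmatic workaround (a sketch of an assumed variant, not from the original guide): only yield a country's name once its entry also contains a `population` key, which signals that the auto-completed name string is finished.

    async def _extract_complete_country_names(input_stream):
        """Yield each country name only once its entry also has a population."""
        seen = set()
        async for chunk in input_stream:
            if not isinstance(chunk, dict):
                continue
            countries = chunk.get("countries")
            if not isinstance(countries, list):
                continue
            for country in countries:
                if not isinstance(country, dict):
                    continue
                name = country.get("name")
                if name and "population" in country and name not in seen:
                    seen.add(name)
                    yield name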
### Non-streaming components
Some built-in components like Retrievers do not offer any `streaming`. What happens if we try to `stream` them?
    from langchain_community.vectorstores import FAISS
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.runnables import RunnablePassthrough
    from langchain_openai import OpenAIEmbeddings

    template = """Answer the question based only on the following context:
    {context}

    Question: {question}
    """
    prompt = ChatPromptTemplate.from_template(template)

    vectorstore = FAISS.from_texts(
        ["harrison worked at kensho", "harrison likes spicy food"],
        embedding=OpenAIEmbeddings(),
    )
    retriever = vectorstore.as_retriever()

    chunks = [chunk for chunk in retriever.stream("where did harrison work?")]
    chunks
**API Reference:**[FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
[[Document(page_content='harrison worked at kensho'), Document(page_content='harrison likes spicy food')]]
Stream just yielded the final result from that component.
This is OK! Not all components have to implement streaming -- in some cases streaming is either unnecessary, difficult or just doesn't make sense.
tip
An LCEL chain constructed with non-streaming components will still be able to stream in many cases, with streaming of partial output starting after the last non-streaming step in the chain.
    retrieval_chain = (
        {
            "context": retriever.with_config(run_name="Docs"),
            "question": RunnablePassthrough(),
        }
        | prompt
        | model
        | StrOutputParser()
    )

    for chunk in retrieval_chain.stream(
        "Where did harrison work? " "Write 3 made up sentences about this place."
    ):
        print(chunk, end="|", flush=True)
Base|d on| the| given| context|,| Harrison| worke|d at| K|ens|ho|.|Here| are| |3| |made| up| sentences| about| this| place|:|1|.| K|ens|ho| was| a| cutting|-|edge| technology| company| known| for| its| innovative| solutions| in| artificial| intelligence| an|d data| analytics|.|2|.| The| modern| office| space| at| K|ens|ho| feature|d open| floor| plans|,| collaborative| work|sp|aces|,| an|d a| vib|rant| atmosphere| that| fos|tere|d creativity| an|d team|work|.|3|.| With| its| prime| location| in| the| heart| of| the| city|,| K|ens|ho| attracte|d top| talent| from| aroun|d the| worl|d,| creating| a| diverse| an|d dynamic| work| environment|.|
Now that we've seen how `stream` and `astream` work, let's venture into the world of streaming events.
Using Stream Events
---------------------------------------------------------------------------------
Event Streaming is a **beta** API. This API may change a bit based on feedback.
note
This guide demonstrates the `V2` API and requires langchain-core >= 0.2. For the `V1` API compatible with older versions of LangChain, see [here](https://python.langchain.com/v0.1/docs/expression_language/streaming/#using-stream-events).
    import langchain_core

    langchain_core.__version__
For the `astream_events` API to work properly:
* Use `async` throughout the code to the extent possible (e.g., async tools etc)
* Propagate callbacks if defining custom functions / runnables
* Whenever using runnables without LCEL, make sure to call `.astream()` on LLMs rather than `.ainvoke` to force the LLM to stream tokens (see the sketch after this list).
* Let us know if anything doesn't work as expected! :)
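To illustrate the third point (a hedged sketch, not from the original guide; `summarize` is an illustrative name and `model` is the chat model instantiated earlier), a custom runnable built without LCEL can still surface token events by calling `.astream()` internally:

    from langchain_core.runnables import RunnableLambda


    async def summarize(text: str) -> str:
        """Call the model via .astream() so token-level events still surface."""
        final = None
        # Because we stream (rather than call .ainvoke()), astream_events can
        # still report on_chat_model_stream events for the underlying model.
        async for chunk in model.astream(f"Summarize in one sentence: {text}"):
            final = chunk if final is None else final + chunk
        return final.content if final is not None else ""


    summarize_runnable = RunnableLambda(summarize)
    # Usage: await summarize_runnable.ainvoke("LangChain makes streaming easy.")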
### Event Reference
Below is a reference table that shows some events that might be emitted by the various Runnable objects.
note
When streaming is implemented properly, the inputs to a runnable will not be known until after the input stream has been entirely consumed. This means that `inputs` will often be included only for `end` events rather than for `start` events.
| event | name | chunk | input | output |
| --- | --- | --- | --- | --- |
| on_chat_model_start | [model name] |  | {"messages": [[SystemMessage, HumanMessage]]} |  |
| on_chat_model_stream | [model name] | AIMessageChunk(content="hello") |  |  |
| on_chat_model_end | [model name] |  | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world") |
| on_llm_start | [model name] |  | {'input': 'hello'} |  |
| on_llm_stream | [model name] | 'Hello' |  |  |
| on_llm_end | [model name] |  |  | 'Hello human!' |
| on_chain_start | format_docs |  |  |  |
| on_chain_stream | format_docs | "hello world!, goodbye world!" |  |  |
| on_chain_end | format_docs |  | [Document(...)] | "hello world!, goodbye world!" |
| on_tool_start | some_tool |  | {"x": 1, "y": "2"} |  |
| on_tool_end | some_tool |  |  | {"x": 1, "y": "2"} |
| on_retriever_start | [retriever name] |  | {"query": "hello"} |  |
| on_retriever_end | [retriever name] |  | {"query": "hello"} | [Document(...), ..] |
| on_prompt_start | [template_name] |  | {"question": "hello"} |  |
| on_prompt_end | [template_name] |  | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...]) |
### Chat Model
Let's start off by looking at the events produced by a chat model.
    events = []
    async for event in model.astream_events("hello", version="v2"):
        events.append(event)
/home/eugene/src/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: This API is in beta and may change in the future. warn_beta(
note
Hey what's that funny version="v2" parameter in the API?!
This is a **beta API**, and we're almost certainly going to make some changes to it (in fact, we already have!)
This version parameter will allow us to minimize such breaking changes to your code.
In short, we are annoying you now, so we don't have to annoy you later.
`v2` is only available for langchain-core>=0.2.0.
Let's take a look at a few of the start events and a few of the end events.
events[:3]
[{'event': 'on_chat_model_start', 'data': {'input': 'hello'}, 'name': 'ChatAnthropic', 'tags': [], 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3', 'metadata': {}}, {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='Hello', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')}, 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {}}, {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='!', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')}, 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {}}]
events[-2:]
[{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='?', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')}, 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {}}, {'event': 'on_chat_model_end', 'data': {'output': AIMessageChunk(content='Hello! How can I assist you today?', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')}, 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {}}]
### Chain
Let's revisit the example chain that parsed streaming JSON to explore the streaming events API.
    chain = (
        model | JsonOutputParser()
    )  # Due to a bug in older versions of Langchain, JsonOutputParser did not stream results from some models

    events = [
        event
        async for event in chain.astream_events(
            "output a list of the countries france, spain and japan and their populations in JSON format. "
            'Use a dict with an outer key of "countries" which contains a list of countries. '
            "Each country should have the key `name` and `population`",
            version="v2",
        )
    ]
If you examine the first few events, you'll notice that there are **3** different start events rather than **2** start events.
The three start events correspond to:
1. The chain (model + parser)
2. The model
3. The parser
events[:3]
[{'event': 'on_chain_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'RunnableSequence', 'tags': [], 'run_id': '4765006b-16e2-4b1d-a523-edd9fd64cb92', 'metadata': {}}, {'event': 'on_chat_model_start', 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`')]]}}, 'name': 'ChatAnthropic', 'tags': ['seq:step:1'], 'run_id': '0320c234-7b52-4a14-ae4e-5f100949e589', 'metadata': {}}, {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-0320c234-7b52-4a14-ae4e-5f100949e589')}, 'run_id': '0320c234-7b52-4a14-ae4e-5f100949e589', 'name': 'ChatAnthropic', 'tags': ['seq:step:1'], 'metadata': {}}]
What do you think you'd see if you looked at the last 3 events? What about the middle?
Let's use this API to output the stream events from the model and the parser. We're ignoring start events, end events and events from the chain.
    num_events = 0

    async for event in chain.astream_events(
        "output a list of the countries france, spain and japan and their populations in JSON format. "
        'Use a dict with an outer key of "countries" which contains a list of countries. '
        "Each country should have the key `name` and `population`",
        version="v2",
    ):
        kind = event["event"]
        if kind == "on_chat_model_stream":
            print(
                f"Chat model chunk: {repr(event['data']['chunk'].content)}",
                flush=True,
            )
        if kind == "on_parser_stream":
            print(f"Parser chunk: {event['data']['chunk']}", flush=True)
        num_events += 1
        if num_events > 30:
            # Truncate the output
            print("...")
            break
    Chat model chunk: '{'
    Parser chunk: {}
    Chat model chunk: '\n '
    Chat model chunk: '"'
    Chat model chunk: 'countries'
    Chat model chunk: '":'
    Chat model chunk: ' ['
    Parser chunk: {'countries': []}
    Chat model chunk: '\n '
    Chat model chunk: '{'
    Parser chunk: {'countries': [{}]}
    Chat model chunk: '\n '
    Chat model chunk: '"'
    Chat model chunk: 'name'
    Chat model chunk: '":'
    Chat model chunk: ' "'
    Parser chunk: {'countries': [{'name': ''}]}
    Chat model chunk: 'France'
    Parser chunk: {'countries': [{'name': 'France'}]}
    Chat model chunk: '",'
    Chat model chunk: '\n '
    Chat model chunk: '"'
    Chat model chunk: 'population'
    ...
Because both the model and the parser support streaming, we see streaming events from both components in real time! Kind of cool, isn't it?
### Filtering Events
Because this API produces so many events, it is useful to be able to filter on events.
You can filter by either component `name`, component `tags` or component `type`.
#### By Name
    chain = model.with_config({"run_name": "model"}) | JsonOutputParser().with_config(
        {"run_name": "my_parser"}
    )

    max_events = 0
    async for event in chain.astream_events(
        "output a list of the countries france, spain and japan and their populations in JSON format. "
        'Use a dict with an outer key of "countries" which contains a list of countries. '
        "Each country should have the key `name` and `population`",
        version="v2",
        include_names=["my_parser"],
    ):
        print(event)
        max_events += 1
        if max_events > 10:
            # Truncate output
            print("...")
            break
{'event': 'on_parser_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'my_parser', 'tags': ['seq:step:2'], 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': []}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': ''}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France'}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}, {}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}, {'name': ''}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}...
#### By Type
    chain = model.with_config({"run_name": "model"}) | JsonOutputParser().with_config(
        {"run_name": "my_parser"}
    )

    max_events = 0
    async for event in chain.astream_events(
        'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`',
        version="v2",
        include_types=["chat_model"],
    ):
        print(event)
        max_events += 1
        if max_events > 10:
            # Truncate output
            print("...")
            break
{'event': 'on_chat_model_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'model', 'tags': ['seq:step:1'], 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\n ', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='"', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='countries', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='":', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' [', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\n ', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\n ', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='"', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}...
#### By Tags
caution
Tags are inherited by child components of a given runnable.
If you're using tags to filter, make sure that this is what you want.
    chain = (model | JsonOutputParser()).with_config({"tags": ["my_chain"]})

    max_events = 0
    async for event in chain.astream_events(
        'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`',
        version="v2",
        include_tags=["my_chain"],
    ):
        print(event)
        max_events += 1
        if max_events > 10:
            # Truncate output
            print("...")
            break
{'event': 'on_chain_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'RunnableSequence', 'tags': ['my_chain'], 'run_id': 'fd68dd64-7a4d-4bdb-a0c2-ee592db0d024', 'metadata': {}}{'event': 'on_chat_model_start', 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`')]]}}, 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}{'event': 'on_parser_start', 'data': {}, 'name': 'JsonOutputParser', 'tags': ['seq:step:2', 'my_chain'], 'run_id': 'afde30b9-beac-4b36-b4c7-dbbe423ddcdb', 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {}}, 'run_id': 'afde30b9-beac-4b36-b4c7-dbbe423ddcdb', 'name': 'JsonOutputParser', 'tags': ['seq:step:2', 'my_chain'], 'metadata': {}}{'event': 'on_chain_stream', 'data': {'chunk': {}}, 'run_id': 'fd68dd64-7a4d-4bdb-a0c2-ee592db0d024', 'name': 'RunnableSequence', 'tags': ['my_chain'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\n ', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='"', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='countries', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='":', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' [', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}...
### Non-streaming components
Remember how some components don't stream well because they don't operate on **input streams**?
While such components can break streaming of the final output when using `astream`, `astream_events` will still yield streaming events from intermediate steps that support streaming!
    # Function that does not support streaming.
    # It operates on the finalized inputs rather than
    # operating on the input stream.
    def _extract_country_names(inputs):
        """A function that does not operate on input streams and breaks streaming."""
        if not isinstance(inputs, dict):
            return ""
        if "countries" not in inputs:
            return ""
        countries = inputs["countries"]
        if not isinstance(countries, list):
            return ""
        country_names = [
            country.get("name") for country in countries if isinstance(country, dict)
        ]
        return country_names


    chain = (
        model | JsonOutputParser() | _extract_country_names
    )  # This parser only works with OpenAI right now
As expected, the `astream` API doesn't work correctly because `_extract_country_names` doesn't operate on streams.
    async for chunk in chain.astream(
        "output a list of the countries france, spain and japan and their populations in JSON format. "
        'Use a dict with an outer key of "countries" which contains a list of countries. '
        "Each country should have the key `name` and `population`",
    ):
        print(chunk, flush=True)
['France', 'Spain', 'Japan']
Now, let's confirm that with `astream_events` we're still seeing streaming output from the model and the parser.
    num_events = 0

    async for event in chain.astream_events(
        "output a list of the countries france, spain and japan and their populations in JSON format. "
        'Use a dict with an outer key of "countries" which contains a list of countries. '
        "Each country should have the key `name` and `population`",
        version="v2",
    ):
        kind = event["event"]
        if kind == "on_chat_model_stream":
            print(
                f"Chat model chunk: {repr(event['data']['chunk'].content)}",
                flush=True,
            )
        if kind == "on_parser_stream":
            print(f"Parser chunk: {event['data']['chunk']}", flush=True)
        num_events += 1
        if num_events > 30:
            # Truncate the output
            print("...")
            break
    Chat model chunk: '{'
    Parser chunk: {}
    Chat model chunk: '\n '
    Chat model chunk: '"'
    Chat model chunk: 'countries'
    Chat model chunk: '":'
    Chat model chunk: ' ['
    Parser chunk: {'countries': []}
    Chat model chunk: '\n '
    Chat model chunk: '{'
    Parser chunk: {'countries': [{}]}
    Chat model chunk: '\n '
    Chat model chunk: '"'
    Chat model chunk: 'name'
    Chat model chunk: '":'
    Chat model chunk: ' "'
    Parser chunk: {'countries': [{'name': ''}]}
    Chat model chunk: 'France'
    Parser chunk: {'countries': [{'name': 'France'}]}
    Chat model chunk: '",'
    Chat model chunk: '\n '
    Chat model chunk: '"'
    Chat model chunk: 'population'
    Chat model chunk: '":'
    Chat model chunk: ' '
    Chat model chunk: '67'
    Parser chunk: {'countries': [{'name': 'France', 'population': 67}]}
    ...
### Propagating Callbacks
caution
If you're invoking runnables inside your tools, you need to propagate callbacks to the runnable; otherwise, no stream events will be generated.
note
When using `RunnableLambdas` or `@chain` decorator, callbacks are propagated automatically behind the scenes.
    from langchain_core.runnables import RunnableLambda
    from langchain_core.tools import tool


    def reverse_word(word: str):
        return word[::-1]


    reverse_word = RunnableLambda(reverse_word)


    @tool
    def bad_tool(word: str):
        """Custom tool that doesn't propagate callbacks."""
        return reverse_word.invoke(word)


    async for event in bad_tool.astream_events("hello", version="v2"):
        print(event)
**API Reference:**[RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html) | [tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
    {'event': 'on_tool_start', 'data': {'input': 'hello'}, 'name': 'bad_tool', 'tags': [], 'run_id': 'ea900472-a8f7-425d-b627-facdef936ee8', 'metadata': {}}
    {'event': 'on_chain_start', 'data': {'input': 'hello'}, 'name': 'reverse_word', 'tags': [], 'run_id': '77b01284-0515-48f4-8d7c-eb27c1882f86', 'metadata': {}}
    {'event': 'on_chain_end', 'data': {'output': 'olleh', 'input': 'hello'}, 'run_id': '77b01284-0515-48f4-8d7c-eb27c1882f86', 'name': 'reverse_word', 'tags': [], 'metadata': {}}
    {'event': 'on_tool_end', 'data': {'output': 'olleh'}, 'run_id': 'ea900472-a8f7-425d-b627-facdef936ee8', 'name': 'bad_tool', 'tags': [], 'metadata': {}}
Here's a re-implementation that does propagate callbacks correctly. You'll notice that now we're getting events from the `reverse_word` runnable as well.
    @tool
    def correct_tool(word: str, callbacks):
        """A tool that correctly propagates callbacks."""
        return reverse_word.invoke(word, {"callbacks": callbacks})


    async for event in correct_tool.astream_events("hello", version="v2"):
        print(event)
    {'event': 'on_tool_start', 'data': {'input': 'hello'}, 'name': 'correct_tool', 'tags': [], 'run_id': 'd5ea83b9-9278-49cc-9f1d-aa302d671040', 'metadata': {}}
    {'event': 'on_chain_start', 'data': {'input': 'hello'}, 'name': 'reverse_word', 'tags': [], 'run_id': '44dafbf4-2f87-412b-ae0e-9f71713810df', 'metadata': {}}
    {'event': 'on_chain_end', 'data': {'output': 'olleh', 'input': 'hello'}, 'run_id': '44dafbf4-2f87-412b-ae0e-9f71713810df', 'name': 'reverse_word', 'tags': [], 'metadata': {}}
    {'event': 'on_tool_end', 'data': {'output': 'olleh'}, 'run_id': 'd5ea83b9-9278-49cc-9f1d-aa302d671040', 'name': 'correct_tool', 'tags': [], 'metadata': {}}
If you're invoking runnables from within a `RunnableLambda` or a `@chain`-decorated function, then callbacks will be passed automatically on your behalf.
    from langchain_core.runnables import RunnableLambda


    async def reverse_and_double(word: str):
        return await reverse_word.ainvoke(word) * 2


    reverse_and_double = RunnableLambda(reverse_and_double)

    await reverse_and_double.ainvoke("1234")

    async for event in reverse_and_double.astream_events("1234", version="v2"):
        print(event)
**API Reference:**[RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html)
    {'event': 'on_chain_start', 'data': {'input': '1234'}, 'name': 'reverse_and_double', 'tags': [], 'run_id': '03b0e6a1-3e60-42fc-8373-1e7829198d80', 'metadata': {}}
    {'event': 'on_chain_start', 'data': {'input': '1234'}, 'name': 'reverse_word', 'tags': [], 'run_id': '5cf26fc8-840b-4642-98ed-623dda28707a', 'metadata': {}}
    {'event': 'on_chain_end', 'data': {'output': '4321', 'input': '1234'}, 'run_id': '5cf26fc8-840b-4642-98ed-623dda28707a', 'name': 'reverse_word', 'tags': [], 'metadata': {}}
    {'event': 'on_chain_stream', 'data': {'chunk': '43214321'}, 'run_id': '03b0e6a1-3e60-42fc-8373-1e7829198d80', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}}
    {'event': 'on_chain_end', 'data': {'output': '43214321'}, 'run_id': '03b0e6a1-3e60-42fc-8373-1e7829198d80', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}}
And with the `@chain` decorator:
    from langchain_core.runnables import chain


    @chain
    async def reverse_and_double(word: str):
        return await reverse_word.ainvoke(word) * 2


    await reverse_and_double.ainvoke("1234")

    async for event in reverse_and_double.astream_events("1234", version="v2"):
        print(event)
**API Reference:**[chain](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.chain.html)
    {'event': 'on_chain_start', 'data': {'input': '1234'}, 'name': 'reverse_and_double', 'tags': [], 'run_id': '1bfcaedc-f4aa-4d8e-beee-9bba6ef17008', 'metadata': {}}
    {'event': 'on_chain_start', 'data': {'input': '1234'}, 'name': 'reverse_word', 'tags': [], 'run_id': '64fc99f0-5d7d-442b-b4f5-4537129f67d1', 'metadata': {}}
    {'event': 'on_chain_end', 'data': {'output': '4321', 'input': '1234'}, 'run_id': '64fc99f0-5d7d-442b-b4f5-4537129f67d1', 'name': 'reverse_word', 'tags': [], 'metadata': {}}
    {'event': 'on_chain_stream', 'data': {'chunk': '43214321'}, 'run_id': '1bfcaedc-f4aa-4d8e-beee-9bba6ef17008', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}}
    {'event': 'on_chain_end', 'data': {'output': '43214321'}, 'run_id': '1bfcaedc-f4aa-4d8e-beee-9bba6ef17008', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}}
Next steps
------------------------------------------------------
Now you've learned some ways to stream both final outputs and internal steps with LangChain.
To learn more, check out the other how-to guides in this section, or the [conceptual guide on LangChain Expression Language](/v0.2/docs/concepts/#langchain-expression-language).
How to disable parallel tool calling
====================================
### Disabling parallel tool calling (OpenAI only)
OpenAI tool calling performs tool calling in parallel by default. That means that if we ask a question like "What is the weather in Tokyo, New York, and Chicago?" and we have a tool for getting the weather, it will call the tool 3 times in parallel. We can force it to call only a single tool at a time by using the `parallel_tool_calls` parameter.
First let's set up our tools and model:
    from langchain_core.tools import tool


    @tool
    def add(a: int, b: int) -> int:
        """Adds a and b."""
        return a + b


    @tool
    def multiply(a: int, b: int) -> int:
        """Multiplies a and b."""
        return a * b


    tools = [add, multiply]
**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
    import os
    from getpass import getpass

    from langchain_openai import ChatOpenAI

    os.environ["OPENAI_API_KEY"] = getpass()

    llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
**API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
Now let's show a quick example of how disabling parallel tool calls works:
    llm_with_tools = llm.bind_tools(tools, parallel_tool_calls=False)
    llm_with_tools.invoke("Please call the first tool two times").tool_calls
[{'name': 'add', 'args': {'a': 2, 'b': 2}, 'id': 'call_Hh4JOTCDM85Sm9Pr84VKrWu5'}]
As we can see, even though we explicitly told the model to call a tool twice, by disabling parallel tool calls the model was constrained to only calling one.
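For comparison (a hedged sketch, not from the original page), leaving the default behavior in place keeps parallel calling on, and the same request will typically produce more than one tool call:

    # Default: parallel tool calling stays enabled.
    llm_with_parallel_tools = llm.bind_tools(tools)
    msg = llm_with_parallel_tools.invoke("Please call the first tool two times")
    len(msg.tool_calls)  # typically 2 here, one entry per requested call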
How to force tool calling behavior
==================================
In order to force our LLM to select a specific tool, we can use the `tool_choice` parameter to ensure certain behavior. First, let's define our model and tools:
    from langchain_core.tools import tool


    @tool
    def add(a: int, b: int) -> int:
        """Adds a and b."""
        return a + b


    @tool
    def multiply(a: int, b: int) -> int:
        """Multiplies a and b."""
        return a * b


    tools = [add, multiply]
**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
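The examples below also assume an `llm` with tool-calling support; a minimal sketch mirroring the setup from the previous page (the model name is just an example):

    import os
    from getpass import getpass

    from langchain_openai import ChatOpenAI

    os.environ["OPENAI_API_KEY"] = getpass()

    llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)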
For example, we can force our model to call the multiply tool by using the following code:
    llm_forced_to_multiply = llm.bind_tools(tools, tool_choice="Multiply")
    llm_forced_to_multiply.invoke("what is 2 + 4")
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_9cViskmLvPnHjXk9tbVla5HA', 'function': {'arguments': '{"a":2,"b":4}', 'name': 'Multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 103, 'total_tokens': 112}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-095b827e-2bdd-43bb-8897-c843f4504883-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 2, 'b': 4}, 'id': 'call_9cViskmLvPnHjXk9tbVla5HA'}], usage_metadata={'input_tokens': 103, 'output_tokens': 9, 'total_tokens': 112})
Even if we pass it something that doesn't require multiplication - it will still call the tool!
We can also just force our tool to select at least one of our tools by passing in the "any" (or "required" which is OpenAI specific) keyword to the `tool_choice` parameter.
    llm_forced_to_use_tool = llm.bind_tools(tools, tool_choice="any")
    llm_forced_to_use_tool.invoke("What day is today?")
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W', 'function': {'arguments': '{"a":1,"b":2}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 94, 'total_tokens': 109}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-28f75260-9900-4bed-8cd3-f1579abb65e5-0', tool_calls=[{'name': 'Add', 'args': {'a': 1, 'b': 2}, 'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W'}], usage_metadata={'input_tokens': 94, 'output_tokens': 15, 'total_tokens': 109})
How to pass tool outputs to the model
=====================================
If we're using the model-generated tool invocations to actually call tools and want to pass the tool results back to the model, we can do so using `ToolMessage`s. First, let's define our tools and our model.
    from langchain_core.tools import tool


    @tool
    def add(a: int, b: int) -> int:
        """Adds a and b."""
        return a + b


    @tool
    def multiply(a: int, b: int) -> int:
        """Multiplies a and b."""
        return a * b


    tools = [add, multiply]
**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
    import os
    from getpass import getpass

    from langchain_openai import ChatOpenAI

    os.environ["OPENAI_API_KEY"] = getpass()

    llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
    llm_with_tools = llm.bind_tools(tools)
**API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
Now we can use `ToolMessage` to pass back the output of the tool calls to the model.
    from langchain_core.messages import HumanMessage, ToolMessage

    query = "What is 3 * 12? Also, what is 11 + 49?"

    messages = [HumanMessage(query)]
    ai_msg = llm_with_tools.invoke(messages)
    messages.append(ai_msg)

    for tool_call in ai_msg.tool_calls:
        selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
        tool_output = selected_tool.invoke(tool_call["args"])
        messages.append(ToolMessage(tool_output, tool_call_id=tool_call["id"]))

    messages
**API Reference:**[HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html)
[HumanMessage(content='What is 3 * 12? Also, what is 11 + 49?'), AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_svc2GLSxNFALbaCAbSjMI9J8', 'function': {'arguments': '{"a": 3, "b": 12}', 'name': 'Multiply'}, 'type': 'function'}, {'id': 'call_r8jxte3zW6h3MEGV3zH2qzFh', 'function': {'arguments': '{"a": 11, "b": 49}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 50, 'prompt_tokens': 105, 'total_tokens': 155}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_d9767fc5b9', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-a79ad1dd-95f1-4a46-b688-4c83f327a7b3-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_svc2GLSxNFALbaCAbSjMI9J8'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_r8jxte3zW6h3MEGV3zH2qzFh'}]), ToolMessage(content='36', tool_call_id='call_svc2GLSxNFALbaCAbSjMI9J8'), ToolMessage(content='60', tool_call_id='call_r8jxte3zW6h3MEGV3zH2qzFh')]
llm_with_tools.invoke(messages)
AIMessage(content='3 * 12 is 36 and 11 + 49 is 60.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 171, 'total_tokens': 189}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_d9767fc5b9', 'finish_reason': 'stop', 'logprobs': None}, id='run-20b52149-e00d-48ea-97cf-f8de7a255f8c-0')
Note that we pass back the same `id` in the `ToolMessage` as the one we receive from the model, in order to help the model match tool responses with tool calls.
How to pass run time values to a tool
=====================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
* [LangChain Tools](/v0.2/docs/concepts/#tools)
* [How to create tools](/v0.2/docs/how_to/custom_tools/)
* [How to use a model to call tools](https://python.langchain.com/v0.2/docs/how_to/tool_calling)
Supported models
This how-to guide uses models with native tool calling capability. You can find a [list of all models that support tool calling](/v0.2/docs/integrations/chat/).
Using with LangGraph
If you're using LangGraph, please refer to [this how-to guide](https://langchain-ai.github.io/langgraph/how-tos/pass-run-time-values-to-tools/) which shows how to create an agent that keeps track of a given user's favorite pets.
You may need to bind values to a tool that are only known at runtime. For example, the tool logic may require using the ID of the user who made the request.
Most of the time, such values should not be controlled by the LLM. In fact, allowing the LLM to control the user ID may lead to a security risk.
Instead, the LLM should only control the parameters of the tool that are meant to be controlled by the LLM, while other parameters (such as user ID) should be fixed by the application logic.
This how-to guide shows a simple design pattern that creates the tools dynamically at run time and binds the appropriate values to them.
We can initialize the chat model that we'll bind the tools to as follows:
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/firefunction-v1", temperature=0)
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
Passing request time information
================================
The idea is to create the tool dynamically at request time, and bind to it the appropriate information. For example, this information may be the user ID as resolved from the request itself.
from typing import Listfrom langchain_core.output_parsers import JsonOutputParserfrom langchain_core.tools import BaseTool, tool
**API Reference:**[JsonOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html) | [BaseTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html) | [tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
```python
user_to_pets = {}


def generate_tools_for_user(user_id: str) -> List[BaseTool]:
    """Generate a set of tools that have a user id associated with them."""

    @tool
    def update_favorite_pets(pets: List[str]) -> None:
        """Add the list of favorite pets."""
        user_to_pets[user_id] = pets

    @tool
    def delete_favorite_pets() -> None:
        """Delete the list of favorite pets."""
        if user_id in user_to_pets:
            del user_to_pets[user_id]

    @tool
    def list_favorite_pets() -> None:
        """List favorite pets if any."""
        return user_to_pets.get(user_id, [])

    return [update_favorite_pets, delete_favorite_pets, list_favorite_pets]
```
Verify that the tools work correctly
update_pets, delete_pets, list_pets = generate_tools_for_user("eugene")update_pets.invoke({"pets": ["cat", "dog"]})print(user_to_pets)print(list_pets.invoke({}))
{'eugene': ['cat', 'dog']}['cat', 'dog']
```python
from langchain_core.prompts import ChatPromptTemplate


def handle_run_time_request(user_id: str, query: str):
    """Handle run time request."""
    tools = generate_tools_for_user(user_id)
    llm_with_tools = llm.bind_tools(tools)
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "You are a helpful assistant."),
            ("human", "{query}"),
        ]
    )
    chain = prompt | llm_with_tools
    return chain.invoke({"query": query})
```
**API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
This code will allow the LLM to invoke the tools, but the LLM is **unaware** of the fact that a **user ID** even exists!
ai_message = handle_run_time_request( "eugene", "my favorite animals are cats and parrots.")ai_message.tool_calls
[{'name': 'update_favorite_pets', 'args': {'pets': ['cats', 'parrots']}, 'id': 'call_jJvjPXsNbFO5MMgW0q84iqCN'}]
info
Chat models only output requests to invoke tools, they don't actually invoke the underlying tools.
To see how to invoke the tools, please refer to [how to use a model to call tools](https://python.langchain.com/v0.2/docs/how_to/tool_calling).
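That said, closing the loop for this example is straightforward. Below is a minimal sketch (the helper name `execute_run_time_request` is illustrative, and it reuses `generate_tools_for_user`, `handle_run_time_request`, and `user_to_pets` from above) that looks up each requested tool by name and invokes it with the arguments the model produced:

```python
# Sketch: execute the tool calls produced for a given user's request.
def execute_run_time_request(user_id: str, query: str) -> dict:
    tools = generate_tools_for_user(user_id)
    tools_by_name = {t.name: t for t in tools}
    ai_message = handle_run_time_request(user_id, query)
    for tool_call in ai_message.tool_calls:
        # Run the user-scoped tool with the arguments the model produced.
        tools_by_name[tool_call["name"]].invoke(tool_call["args"])
    return user_to_pets


execute_run_time_request("eugene", "my favorite animals are cats and parrots.")
# user_to_pets should now contain the favorite pets recorded for "eugene".
```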
https://python.langchain.com/v0.2/docs/how_to/tool_streaming/
How to stream tool calls
========================
When tools are called in a streaming context, [message chunks](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) will be populated with [tool call chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCallChunk.html#langchain_core.messages.tool.ToolCallChunk) objects in a list via the `.tool_call_chunks` attribute. A `ToolCallChunk` includes optional string fields for the tool `name`, `args`, and `id`, and includes an optional integer field `index` that can be used to join chunks together. Fields are optional because portions of a tool call may be streamed across different chunks (e.g., a chunk that includes a substring of the arguments may have null values for the tool name and id).
Because message chunks inherit from their parent message class, an [AIMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) with tool call chunks will also include `.tool_calls` and `.invalid_tool_calls` fields. These fields are parsed best-effort from the message's tool call chunks.
Note that not all providers currently support streaming for tool calls. Before we start, let's define our tools and our model.
```python
from langchain_core.tools import tool


@tool
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b


@tool
def multiply(a: int, b: int) -> int:
    """Multiplies a and b."""
    return a * b


tools = [add, multiply]
```
**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
import osfrom getpass import getpassfrom langchain_openai import ChatOpenAIos.environ["OPENAI_API_KEY"] = getpass()llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)llm_with_tools = llm.bind_tools(tools)
**API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
Now let's define our query and stream our output:
query = "What is 3 * 12? Also, what is 11 + 49?"async for chunk in llm_with_tools.astream(query): print(chunk.tool_call_chunks)
[][{'name': 'Multiply', 'args': '', 'id': 'call_3aQwTP9CYlFxwOvQZPHDu6wL', 'index': 0}][{'name': None, 'args': '{"a"', 'id': None, 'index': 0}][{'name': None, 'args': ': 3, ', 'id': None, 'index': 0}][{'name': None, 'args': '"b": 1', 'id': None, 'index': 0}][{'name': None, 'args': '2}', 'id': None, 'index': 0}][{'name': 'Add', 'args': '', 'id': 'call_SQUoSsJz2p9Kx2x73GOgN1ja', 'index': 1}][{'name': None, 'args': '{"a"', 'id': None, 'index': 1}][{'name': None, 'args': ': 11,', 'id': None, 'index': 1}][{'name': None, 'args': ' "b": ', 'id': None, 'index': 1}][{'name': None, 'args': '49}', 'id': None, 'index': 1}][]
Note that adding message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain's various [tool output parsers](/v0.2/docs/how_to/output_parser_structured/) support streaming.
For example, below we accumulate tool call chunks:
```python
first = True
async for chunk in llm_with_tools.astream(query):
    if first:
        gathered = chunk
        first = False
    else:
        gathered = gathered + chunk

    print(gathered.tool_call_chunks)
```
[][{'name': 'Multiply', 'args': '', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}][{'name': 'Multiply', 'args': '{"a"', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}][{'name': 'Multiply', 'args': '{"a": 3, ', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}][{'name': 'Multiply', 'args': '{"a": 3, "b": 1', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{"a"', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{"a": 11,', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{"a": 11, "b": ', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{"a": 11, "b": 49}', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{"a": 11, "b": 49}', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]
print(type(gathered.tool_call_chunks[0]["args"]))
<class 'str'>
And below we accumulate tool calls to demonstrate partial parsing:
```python
first = True
async for chunk in llm_with_tools.astream(query):
    if first:
        gathered = chunk
        first = False
    else:
        gathered = gathered + chunk

    print(gathered.tool_calls)
```
[][][{'name': 'Multiply', 'args': {}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}][{'name': 'Multiply', 'args': {'a': 3}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 1}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}]
print(type(gathered.tool_calls[0]["args"]))
<class 'dict'>
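Once the stream is exhausted, the accumulated `gathered` message can be treated like a regular AI message. As a rough sketch (reusing the `add` and `multiply` tools defined above), you could dispatch the completed tool calls after streaming finishes:

```python
# Sketch: execute the fully-accumulated tool calls after streaming completes.
tools_by_name = {"add": add, "multiply": multiply}

for tool_call in gathered.tool_calls:
    selected_tool = tools_by_name[tool_call["name"].lower()]
    result = selected_tool.invoke(tool_call["args"])
    print(f"{tool_call['name']}: {result}")
```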
https://python.langchain.com/v0.2/docs/how_to/tools_as_openai_functions/
How to convert tools to OpenAI Functions
========================================
This notebook goes over how to use LangChain tools as OpenAI functions.
%pip install -qU langchain-community langchain-openai
from langchain_community.tools import MoveFileToolfrom langchain_core.messages import HumanMessagefrom langchain_core.utils.function_calling import convert_to_openai_functionfrom langchain_openai import ChatOpenAI
**API Reference:**[MoveFileTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.file_management.move.MoveFileTool.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [convert\_to\_openai\_function](https://api.python.langchain.com/en/latest/utils/langchain_core.utils.function_calling.convert_to_openai_function.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
model = ChatOpenAI(model="gpt-3.5-turbo")
tools = [MoveFileTool()]functions = [convert_to_openai_function(t) for t in tools]
functions[0]
{'name': 'move_file', 'description': 'Move or rename a file from one location to another', 'parameters': {'type': 'object', 'properties': {'source_path': {'description': 'Path of the file to move', 'type': 'string'}, 'destination_path': {'description': 'New path for the moved file', 'type': 'string'}}, 'required': ['source_path', 'destination_path']}}
message = model.invoke( [HumanMessage(content="move file foo to bar")], functions=functions)
message
AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}', 'name': 'move_file'}})
message.additional_kwargs["function_call"]
{'name': 'move_file', 'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}'}
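Note that the `arguments` field is a JSON string rather than a Python dict, so you'll typically parse it before doing anything with it. A minimal sketch (reusing `message` from above):

```python
import json

function_call = message.additional_kwargs["function_call"]
args = json.loads(function_call["arguments"])

# Inspect the parsed arguments; actually invoking MoveFileTool here would move a real file.
print(function_call["name"], args)
```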
With OpenAI chat models, we can also automatically bind and convert function-like objects with `bind_functions`:
model_with_functions = model.bind_functions(tools)model_with_functions.invoke([HumanMessage(content="move file foo to bar")])
AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}', 'name': 'move_file'}})
Or we can use the updated OpenAI API, which uses `tools` and `tool_choice` instead of `functions` and `function_call`, via `ChatOpenAI.bind_tools`:
model_with_tools = model.bind_tools(tools)model_with_tools.invoke([HumanMessage(content="move file foo to bar")])
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_btkY3xV71cEVAOHnNa5qwo44', 'function': {'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}', 'name': 'move_file'}, 'type': 'function'}]})
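In recent versions of LangChain, the `AIMessage` returned when using `bind_tools` also exposes the parsed tool calls via `.tool_calls`, so you don't have to dig into `additional_kwargs` yourself. A small sketch (exact availability depends on your installed versions):

```python
response = model_with_tools.invoke([HumanMessage(content="move file foo to bar")])

# Each entry is a dict with parsed "name", "args", and "id" keys.
for tool_call in response.tool_calls:
    print(tool_call["name"], tool_call["args"])
```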
https://python.langchain.com/v0.2/docs/how_to/tools_few_shot/
How to use few-shot prompting with tool calling
===============================================
For more complex tool use it's very useful to add few-shot examples to the prompt. We can do this by adding `AIMessage`s with `ToolCall`s and corresponding `ToolMessage`s to our prompt.
First let's define our tools and model.
```python
from langchain_core.tools import tool


@tool
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b


@tool
def multiply(a: int, b: int) -> int:
    """Multiplies a and b."""
    return a * b


tools = [add, multiply]
```
**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
import osfrom getpass import getpassfrom langchain_openai import ChatOpenAIos.environ["OPENAI_API_KEY"] = getpass()llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)llm_with_tools = llm.bind_tools(tools)
**API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
Let's run our model. Notice that even with some special instructions, the model can get tripped up by the order of operations:
llm_with_tools.invoke( "Whats 119 times 8 minus 20. Don't do any math yourself, only use tools for math. Respect order of operations").tool_calls
[{'name': 'Multiply', 'args': {'a': 119, 'b': 8}, 'id': 'call_T88XN6ECucTgbXXkyDeC2CQj'}, {'name': 'Add', 'args': {'a': 952, 'b': -20}, 'id': 'call_licdlmGsRqzup8rhqJSb1yZ4'}]
The model shouldn't be trying to add anything yet, since it can't technically know the result of 119 \* 8 at that point.
By adding a prompt with some examples we can correct this behavior:
```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

examples = [
    HumanMessage(
        "What's the product of 317253 and 128472 plus four", name="example_user"
    ),
    AIMessage(
        "",
        name="example_assistant",
        tool_calls=[
            {"name": "Multiply", "args": {"x": 317253, "y": 128472}, "id": "1"}
        ],
    ),
    ToolMessage("16505054784", tool_call_id="1"),
    AIMessage(
        "",
        name="example_assistant",
        tool_calls=[{"name": "Add", "args": {"x": 16505054784, "y": 4}, "id": "2"}],
    ),
    ToolMessage("16505054788", tool_call_id="2"),
    AIMessage(
        "The product of 317253 and 128472 plus four is 16505054788",
        name="example_assistant",
    ),
]

system = """You are bad at math but are an expert at using a calculator. Use past tool usage as an example of how to correctly use the tools."""
few_shot_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        *examples,
        ("human", "{query}"),
    ]
)

chain = {"query": RunnablePassthrough()} | few_shot_prompt | llm_with_tools
chain.invoke("Whats 119 times 8 minus 20").tool_calls
```
**API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html)
[{'name': 'Multiply', 'args': {'a': 119, 'b': 8}, 'id': 'call_9MvuwQqg7dlJupJcoTWiEsDo'}]
And we get the correct output this time.
Here's what the [LangSmith trace](https://smith.langchain.com/public/f70550a1-585f-4c9d-a643-13148ab1616f/r) looks like.
https://python.langchain.com/v0.2/docs/how_to/graph_constructing/
On this page
How to construct knowledge graphs
=================================
In this guide we'll go over the basic ways of constructing a knowledge graph based on unstructured text. The constructed graph can then be used as a knowledge base in a RAG application.
β οΈ Security note β οΈ[β](#οΈ-security-note-οΈ "Direct link to β οΈ Security note β οΈ")
-------------------------------------------------------------------------------
Constructing knowledge graphs requires write access to the database. There are inherent risks in doing this. Make sure that you verify and validate data before importing it. For more on general security best practices, [see here](/v0.2/docs/security/).
Architecture[β](#architecture "Direct link to Architecture")
------------------------------------------------------------
At a high level, the steps for constructing a knowledge graph from text are:
1. **Extracting structured information from text**: Model is used to extract structured graph information from text.
2. **Storing into graph database**: Storing the extracted structured graph information into a graph database enables downstream RAG applications
Setup[β](#setup "Direct link to Setup")
---------------------------------------
First, get required packages and set environment variables. In this example, we will be using Neo4j graph database.
%pip install --upgrade --quiet langchain langchain-community langchain-openai langchain-experimental neo4j
Note: you may need to restart the kernel to use updated packages.
We default to OpenAI models in this guide.
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()# Uncomment the below to use LangSmith. Not required.# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()# os.environ["LANGCHAIN_TRACING_V2"] = "true"
Β·Β·Β·Β·Β·Β·Β·Β·
Next, we need to define Neo4j credentials and connection. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database.
import osfrom langchain_community.graphs import Neo4jGraphos.environ["NEO4J_URI"] = "bolt://localhost:7687"os.environ["NEO4J_USERNAME"] = "neo4j"os.environ["NEO4J_PASSWORD"] = "password"graph = Neo4jGraph()
**API Reference:**[Neo4jGraph](https://api.python.langchain.com/en/latest/graphs/langchain_community.graphs.neo4j_graph.Neo4jGraph.html)
LLM Graph Transformer[β](#llm-graph-transformer "Direct link to LLM Graph Transformer")
---------------------------------------------------------------------------------------
Extracting graph data from text enables the transformation of unstructured information into structured formats, facilitating deeper insights and more efficient navigation through complex relationships and patterns. The `LLMGraphTransformer` converts text documents into structured graph documents by leveraging an LLM to parse and categorize entities and their relationships. The choice of LLM significantly influences the output by determining the accuracy and nuance of the extracted graph data.
import osfrom langchain_experimental.graph_transformers import LLMGraphTransformerfrom langchain_openai import ChatOpenAIllm = ChatOpenAI(temperature=0, model_name="gpt-4-turbo")llm_transformer = LLMGraphTransformer(llm=llm)
**API Reference:**[LLMGraphTransformer](https://api.python.langchain.com/en/latest/graph_transformers/langchain_experimental.graph_transformers.llm.LLMGraphTransformer.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
Now we can pass in example text and examine the results.
```python
from langchain_core.documents import Document

text = """
Marie Curie, born in 1867, was a Polish and naturalised-French physicist and chemist who conducted pioneering research on radioactivity.
She was the first woman to win a Nobel Prize, the first person to win a Nobel Prize twice, and the only person to win a Nobel Prize in two scientific fields.
Her husband, Pierre Curie, was a co-winner of her first Nobel Prize, making them the first-ever married couple to win the Nobel Prize and launching the Curie family legacy of five Nobel Prizes.
She was, in 1906, the first woman to become a professor at the University of Paris.
"""
documents = [Document(page_content=text)]
graph_documents = llm_transformer.convert_to_graph_documents(documents)
print(f"Nodes:{graph_documents[0].nodes}")
print(f"Relationships:{graph_documents[0].relationships}")
```
**API Reference:**[Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html)
Nodes:[Node(id='Marie Curie', type='Person'), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='MARRIED'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='PROFESSOR')]
Examine the following image to better grasp the structure of the generated knowledge graph.
![graph_construction1.png](/v0.2/assets/images/graph_construction1-2b4d31978d58696d5a6a52ad92ae088f.png)
Note that the graph construction process is non-deterministic since we are using an LLM. Therefore, you might get slightly different results on each execution.
Additionally, you have the flexibility to define specific types of nodes and relationships for extraction according to your requirements.
```python
llm_transformer_filtered = LLMGraphTransformer(
    llm=llm,
    allowed_nodes=["Person", "Country", "Organization"],
    allowed_relationships=["NATIONALITY", "LOCATED_IN", "WORKED_AT", "SPOUSE"],
)
graph_documents_filtered = llm_transformer_filtered.convert_to_graph_documents(
    documents
)
print(f"Nodes:{graph_documents_filtered[0].nodes}")
print(f"Relationships:{graph_documents_filtered[0].relationships}")
```
Nodes:[Node(id='Marie Curie', type='Person'), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='SPOUSE'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='WORKED_AT')]
For a better understanding of the generated graph, we can again visualize it.
![graph_construction2.png](/v0.2/assets/images/graph_construction2-8b43506ae0fb3a006eaa4ba83fea8af5.png)
The `node_properties` parameter enables the extraction of node properties, allowing the creation of a more detailed graph. When set to `True`, LLM autonomously identifies and extracts relevant node properties. Conversely, if `node_properties` is defined as a list of strings, the LLM selectively retrieves only the specified properties from the text.
```python
llm_transformer_props = LLMGraphTransformer(
    llm=llm,
    allowed_nodes=["Person", "Country", "Organization"],
    allowed_relationships=["NATIONALITY", "LOCATED_IN", "WORKED_AT", "SPOUSE"],
    node_properties=["born_year"],
)
graph_documents_props = llm_transformer_props.convert_to_graph_documents(documents)
print(f"Nodes:{graph_documents_props[0].nodes}")
print(f"Relationships:{graph_documents_props[0].relationships}")
```
Nodes:[Node(id='Marie Curie', type='Person', properties={'born_year': '1867'}), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='SPOUSE'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='WORKED_AT')]
Storing to graph database[β](#storing-to-graph-database "Direct link to Storing to graph database")
---------------------------------------------------------------------------------------------------
The generated graph documents can be stored to a graph database using the `add_graph_documents` method.
graph.add_graph_documents(graph_documents_props)
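Once the documents are stored, you can sanity-check what was written by querying the database directly with `graph.query`. The Cypher below is only a sketch; the exact labels and properties depend on how the transformer mapped your data:

```python
# Sketch: verify what was written by querying the graph directly.
result = graph.query(
    "MATCH (p:Person {id: 'Marie Curie'})-[r]->(n) RETURN p.id, type(r), n.id"
)
print(result)
```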
https://python.langchain.com/v0.2/docs/how_to/tools_error/
On this page
How to handle tool errors
=========================
Using a model to invoke a tool has some obvious potential failure modes. First, the model needs to return output that can be parsed at all. Second, the model needs to return tool arguments that are valid.
We can build error handling into our chains to mitigate these failure modes.
Setup[β](#setup "Direct link to Setup")
---------------------------------------
We'll need to install the following packages:
%pip install --upgrade --quiet langchain-core langchain-openai
If you'd like to trace your runs in [LangSmith](https://docs.smith.langchain.com/) uncomment and set the following environment variables:
import getpassimport os# os.environ["LANGCHAIN_TRACING_V2"] = "true"# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
Chain[β](#chain "Direct link to Chain")
---------------------------------------
Suppose we have the following (dummy) tool and tool-calling chain. We'll make our tool intentionally convoluted to try and trip up the model.
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
```python
# Define tool
from langchain_core.tools import tool


@tool
def complex_tool(int_arg: int, float_arg: float, dict_arg: dict) -> int:
    """Do something complex with a complex tool."""
    return int_arg * float_arg
```
**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
llm_with_tools = llm.bind_tools( [complex_tool],)
```python
# Define chain
chain = llm_with_tools | (lambda msg: msg.tool_calls[0]["args"]) | complex_tool
```
We can see that when we try to invoke this chain with even a fairly explicit input, the model fails to correctly call the tool (it forgets the `dict_arg` argument).
chain.invoke( "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg")
---------------------------------------------------------------------------``````outputValidationError Traceback (most recent call last)``````outputCell In[12], line 1----> 1 chain.invoke( 2 "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg" 3 )``````outputFile ~/langchain/libs/core/langchain_core/runnables/base.py:2499, in RunnableSequence.invoke(self, input, config) 2497 try: 2498 for i, step in enumerate(self.steps):-> 2499 input = step.invoke( 2500 input, 2501 # mark each step as a child run 2502 patch_config( 2503 config, callbacks=run_manager.get_child(f"seq:step:{i+1}") 2504 ), 2505 ) 2506 # finish the root run 2507 except BaseException as e:``````outputFile ~/langchain/libs/core/langchain_core/tools.py:241, in BaseTool.invoke(self, input, config, **kwargs) 234 def invoke( 235 self, 236 input: Union[str, Dict], 237 config: Optional[RunnableConfig] = None, 238 **kwargs: Any, 239 ) -> Any: 240 config = ensure_config(config)--> 241 return self.run( 242 input, 243 callbacks=config.get("callbacks"), 244 tags=config.get("tags"), 245 metadata=config.get("metadata"), 246 run_name=config.get("run_name"), 247 run_id=config.pop("run_id", None), 248 **kwargs, 249 )``````outputFile ~/langchain/libs/core/langchain_core/tools.py:387, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, **kwargs) 385 except ValidationError as e: 386 if not self.handle_validation_error:--> 387 raise e 388 elif isinstance(self.handle_validation_error, bool): 389 observation = "Tool input validation error"``````outputFile ~/langchain/libs/core/langchain_core/tools.py:378, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, **kwargs) 364 run_manager = callback_manager.on_tool_start( 365 {"name": self.name, "description": self.description}, 366 tool_input if isinstance(tool_input, str) else str(tool_input), (...) 375 **kwargs, 376 ) 377 try:--> 378 parsed_input = self._parse_input(tool_input) 379 tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input) 380 observation = ( 381 self._run(*tool_args, run_manager=run_manager, **tool_kwargs) 382 if new_arg_supported 383 else self._run(*tool_args, **tool_kwargs) 384 )``````outputFile ~/langchain/libs/core/langchain_core/tools.py:283, in BaseTool._parse_input(self, tool_input) 281 else: 282 if input_args is not None:--> 283 result = input_args.parse_obj(tool_input) 284 return { 285 k: getattr(result, k) 286 for k, v in result.dict().items() 287 if k in tool_input 288 } 289 return tool_input``````outputFile ~/langchain/.venv/lib/python3.9/site-packages/pydantic/v1/main.py:526, in BaseModel.parse_obj(cls, obj) 524 exc = TypeError(f'{cls.__name__} expected dict not {obj.__class__.__name__}') 525 raise ValidationError([ErrorWrapper(exc, loc=ROOT_KEY)], cls) from e--> 526 return cls(**obj)``````outputFile ~/langchain/.venv/lib/python3.9/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data) 339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data) 340 if validation_error:--> 341 raise validation_error 342 try: 343 object_setattr(__pydantic_self__, '__dict__', values)``````outputValidationError: 1 validation error for complex_toolSchemadict_arg field required (type=value_error.missing)
Try/except tool call[β](#tryexcept-tool-call "Direct link to Try/except tool call")
-----------------------------------------------------------------------------------
The simplest way to more gracefully handle errors is to try/except the tool-calling step and return a helpful message on errors:
```python
from typing import Any

from langchain_core.runnables import Runnable, RunnableConfig


def try_except_tool(tool_args: dict, config: RunnableConfig) -> Runnable:
    try:
        return complex_tool.invoke(tool_args, config=config)
    except Exception as e:
        return f"Calling tool with arguments:\n\n{tool_args}\n\nraised the following error:\n\n{type(e)}: {e}"


chain = llm_with_tools | (lambda msg: msg.tool_calls[0]["args"]) | try_except_tool
```
**API Reference:**[Runnable](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html) | [RunnableConfig](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.config.RunnableConfig.html)
print( chain.invoke( "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg" ))
Calling tool with arguments:{'int_arg': 5, 'float_arg': 2.1}raised the following error:<class 'pydantic.v1.error_wrappers.ValidationError'>: 1 validation error for complex_toolSchemadict_arg field required (type=value_error.missing)
Fallbacks[β](#fallbacks "Direct link to Fallbacks")
---------------------------------------------------
We can also try to fallback to a better model in the event of a tool invocation error. In this case we'll fall back to an identical chain that uses `gpt-4-1106-preview` instead of `gpt-3.5-turbo`.
```python
chain = llm_with_tools | (lambda msg: msg.tool_calls[0]["args"]) | complex_tool
better_model = ChatOpenAI(model="gpt-4-1106-preview", temperature=0).bind_tools(
    [complex_tool], tool_choice="complex_tool"
)
better_chain = better_model | (lambda msg: msg.tool_calls[0]["args"]) | complex_tool

chain_with_fallback = chain.with_fallbacks([better_chain])
chain_with_fallback.invoke(
    "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg"
)
```
10.5
Looking at the [LangSmith trace](https://smith.langchain.com/public/00e91fc2-e1a4-4b0f-a82e-e6b3119d196c/r) for this chain run, we can see that the first chain call fails as expected and it's the fallback that succeeds.
Retry with exception[β](#retry-with-exception "Direct link to Retry with exception")
------------------------------------------------------------------------------------
To take things one step further, we can try to automatically re-run the chain with the exception passed in, so that the model may be able to correct its behavior:
```python
import json
from typing import Any

from langchain_core.messages import AIMessage, HumanMessage, ToolCall, ToolMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnablePassthrough


class CustomToolException(Exception):
    """Custom LangChain tool exception."""

    def __init__(self, tool_call: ToolCall, exception: Exception) -> None:
        super().__init__()
        self.tool_call = tool_call
        self.exception = exception


def tool_custom_exception(msg: AIMessage, config: RunnableConfig) -> Runnable:
    try:
        return complex_tool.invoke(msg.tool_calls[0]["args"], config=config)
    except Exception as e:
        raise CustomToolException(msg.tool_calls[0], e)


def exception_to_messages(inputs: dict) -> dict:
    exception = inputs.pop("exception")
    # Add historical messages to the original input, so the model knows that it made a mistake with the last tool call.
    messages = [
        AIMessage(content="", tool_calls=[exception.tool_call]),
        ToolMessage(
            tool_call_id=exception.tool_call["id"], content=str(exception.exception)
        ),
        HumanMessage(
            content="The last tool call raised an exception. Try calling the tool again with corrected arguments. Do not repeat mistakes."
        ),
    ]
    inputs["last_output"] = messages
    return inputs


# We add a last_output MessagesPlaceholder to our prompt which if not passed in doesn't
# affect the prompt at all, but gives us the option to insert an arbitrary list of Messages
# into the prompt if needed. We'll use this on retries to insert the error message.
prompt = ChatPromptTemplate.from_messages(
    [("human", "{input}"), MessagesPlaceholder("last_output", optional=True)]
)
chain = prompt | llm_with_tools | tool_custom_exception

# If the initial chain call fails, we rerun it with the exception passed in as a message.
self_correcting_chain = chain.with_fallbacks(
    [exception_to_messages | chain], exception_key="exception"
)
```
**API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [ToolCall](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCall.html) | [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.MessagesPlaceholder.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html)
self_correcting_chain.invoke( { "input": "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg" })
10.5
And our chain succeeds! Looking at the [LangSmith trace](https://smith.langchain.com/public/c11e804c-e14f-4059-bd09-64766f999c14/r), we can see that indeed our initial chain still fails, and it's only on retrying that the chain succeeds.
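One caveat: as written, `with_fallbacks` gives the model only a single correction attempt. If you want a few more tries, one rough option (a sketch, not the only way to bound retries) is to stack additional self-correcting fallbacks, each of which feeds the latest exception back into the chain:

```python
# Sketch: allow up to three total attempts by stacking self-correcting fallbacks.
retrying_chain = chain.with_fallbacks(
    [exception_to_messages | chain, exception_to_messages | chain],
    exception_key="exception",
)
```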
https://python.langchain.com/v0.2/docs/how_to/agent_executor/
On this page
Build an Agent with AgentExecutor (Legacy)
==========================================
info
This section will cover building with the legacy LangChain AgentExecutor. These are fine for getting started, but past a certain point, you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph Agents](/v0.2/docs/concepts/#langgraph) or the [migration guide](/v0.2/docs/how_to/migrate_agent/)
By themselves, language models can't take actions - they just output text. A big use case for LangChain is creating **agents**. Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be. The results of those actions can then be fed back into the agent and it determines whether more actions are needed, or whether it is okay to finish.
In this tutorial, we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.
Concepts[β](#concepts "Direct link to Concepts")
------------------------------------------------
Concepts we will cover are:
* Using [language models](/v0.2/docs/concepts/#chat-models), in particular their tool calling ability
* Creating a [Retriever](/v0.2/docs/concepts/#retrievers) to expose specific information to our agent
* Using a Search [Tool](/v0.2/docs/concepts/#tools) to look up things online
* [`Chat History`](/v0.2/docs/concepts/#chat-history), which allows a chatbot to "remember" past interactions and take them into account when responding to follow-up questions.
* Debugging and tracing your application using [LangSmith](/v0.2/docs/concepts/#langsmith)
Setup[β](#setup "Direct link to Setup")
---------------------------------------
### Jupyter Notebook[β](#jupyter-notebook "Direct link to Jupyter Notebook")
This guide (and most of the other guides in the documentation) uses [Jupyter notebooks](https://jupyter.org/) and assumes the reader is using one as well. Jupyter notebooks are perfect for learning how to work with LLM systems because oftentimes things can go wrong (unexpected output, API down, etc) and going through guides in an interactive environment is a great way to better understand them.
This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See [here](https://jupyter.org/install) for instructions on how to install.
### Installation[β](#installation "Direct link to Installation")
To install LangChain run:
* Pip
* Conda
pip install langchain
conda install langchain -c conda-forge
For more details, see our [Installation guide](/v0.2/docs/how_to/installation/).
### LangSmith[β](#langsmith "Direct link to LangSmith")
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com).
After you sign up at the link above, make sure to set your environment variables to start logging traces:
export LANGCHAIN_TRACING_V2="true"export LANGCHAIN_API_KEY="..."
Or, if in a notebook, you can set them with:
import getpassimport osos.environ["LANGCHAIN_TRACING_V2"] = "true"os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
Define tools[β](#define-tools "Direct link to Define tools")
------------------------------------------------------------
We first need to create the tools we want to use. We will use two tools: [Tavily](/v0.2/docs/integrations/tools/tavily_search/) (to search online) and then a retriever over a local index we will create
### [Tavily](/v0.2/docs/integrations/tools/tavily_search/)[β](#tavily "Direct link to tavily")
We have a built-in tool in LangChain that makes it easy to use the Tavily search engine as a tool. Note that this requires an API key - they have a free tier, but if you don't have one or don't want to create one, you can always ignore this step.
Once you create your API key, you will need to export that as:
export TAVILY_API_KEY="..."
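If you're working in a notebook, you can also set it interactively instead of exporting it in your shell (a small convenience sketch):

```python
import getpass
import os

os.environ["TAVILY_API_KEY"] = getpass.getpass()
```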
from langchain_community.tools.tavily_search import TavilySearchResults
**API Reference:**[TavilySearchResults](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.tavily_search.tool.TavilySearchResults.html)
search = TavilySearchResults(max_results=2)
search.invoke("what is the weather in SF")
[{'url': 'https://www.weatherapi.com/', 'content': "{'location': {'name': 'San Francisco', 'region': 'California', 'country': 'United States of America', 'lat': 37.78, 'lon': -122.42, 'tz_id': 'America/Los_Angeles', 'localtime_epoch': 1714000492, 'localtime': '2024-04-24 16:14'}, 'current': {'last_updated_epoch': 1713999600, 'last_updated': '2024-04-24 16:00', 'temp_c': 15.6, 'temp_f': 60.1, 'is_day': 1, 'condition': {'text': 'Overcast', 'icon': '//cdn.weatherapi.com/weather/64x64/day/122.png', 'code': 1009}, 'wind_mph': 10.5, 'wind_kph': 16.9, 'wind_degree': 330, 'wind_dir': 'NNW', 'pressure_mb': 1018.0, 'pressure_in': 30.06, 'precip_mm': 0.0, 'precip_in': 0.0, 'humidity': 72, 'cloud': 100, 'feelslike_c': 15.6, 'feelslike_f': 60.1, 'vis_km': 16.0, 'vis_miles': 9.0, 'uv': 5.0, 'gust_mph': 14.8, 'gust_kph': 23.8}}"}, {'url': 'https://www.weathertab.com/en/c/e/04/united-states/california/san-francisco/', 'content': 'San Francisco Weather Forecast for Apr 2024 - Risk of Rain Graph. Rain Risk Graph: Monthly Overview. Bar heights indicate rain risk percentages. Yellow bars mark low-risk days, while black and grey bars signal higher risks. Grey-yellow bars act as buffers, advising to keep at least one day clear from the riskier grey and black days, guiding ...'}]
### Retriever[β](#retriever "Direct link to Retriever")
We will also create a retriever over some data of our own. For a deeper explanation of each step here, see [this tutorial](/v0.2/docs/tutorials/rag/).
```python
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = WebBaseLoader("https://docs.smith.langchain.com/overview")
docs = loader.load()
documents = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200
).split_documents(docs)
vector = FAISS.from_documents(documents, OpenAIEmbeddings())
retriever = vector.as_retriever()
```
**API Reference:**[WebBaseLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) | [FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
retriever.invoke("how to upload a dataset")[0]
Document(page_content='# The data to predict and grade over evaluators=[exact_match], # The evaluators to score the results experiment_prefix="sample-experiment", # The name of the experiment metadata={ "version": "1.0.0", "revision_id": "beta" },)import { Client, Run, Example } from \'langsmith\';import { runOnDataset } from \'langchain/smith\';import { EvaluationResult } from \'langsmith/evaluation\';const client = new Client();// Define dataset: these are your test casesconst datasetName = "Sample Dataset";const dataset = await client.createDataset(datasetName, { description: "A sample dataset in LangSmith."});await client.createExamples({ inputs: [ { postfix: "to LangSmith" }, { postfix: "to Evaluations in LangSmith" }, ], outputs: [ { output: "Welcome to LangSmith" }, { output: "Welcome to Evaluations in LangSmith" }, ], datasetId: dataset.id,});// Define your evaluatorconst exactMatch = async ({ run, example }: { run: Run; example?:', metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'Getting started with LangSmith | \uf8ffΓΌΒΆΓΊΓβΓ¨\uf8ffΓΌΓ΅β ΓβΓ¨ LangSmith', 'description': 'Introduction', 'language': 'en'})
Now that we have populated the index that we will be retrieving over, we can easily turn it into a tool (the format needed for an agent to use it properly).
from langchain.tools.retriever import create_retriever_tool
**API Reference:**[create\_retriever\_tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.create_retriever_tool.html)
retriever_tool = create_retriever_tool( retriever, "langsmith_search", "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",)
### Tools[β](#tools "Direct link to Tools")
Now that we have created both, we can create a list of tools that we will use downstream.
tools = [search, retriever_tool]
Using Language Models[β](#using-language-models "Direct link to Using Language Models")
---------------------------------------------------------------------------------------
Next, let's learn how to use a language model to call tools. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAImodel = ChatOpenAI(model="gpt-4")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicmodel = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAImodel = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAImodel = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoheremodel = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksmodel = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqmodel = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAImodel = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAImodel = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
You can call the language model by passing in a list of messages. By default, the response is a `content` string.
from langchain_core.messages import HumanMessageresponse = model.invoke([HumanMessage(content="hi!")])response.content
**API Reference:**[HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html)
'Hello! How can I assist you today?'
We can now see what it is like to enable this model to do tool calling. In order to enable that we use `.bind_tools` to give the language model knowledge of these tools
model_with_tools = model.bind_tools(tools)
We can now call the model. Let's first call it with a normal message, and see how it responds. We can look at both the `content` field as well as the `tool_calls` field.
response = model_with_tools.invoke([HumanMessage(content="Hi!")])print(f"ContentString: {response.content}")print(f"ToolCalls: {response.tool_calls}")
ContentString: Hello! How can I assist you today?
ToolCalls: []
Now, let's try calling it with some input that would expect a tool to be called.
```python
response = model_with_tools.invoke([HumanMessage(content="What's the weather in SF?")])

print(f"ContentString: {response.content}")
print(f"ToolCalls: {response.tool_calls}")
```
ContentString:
ToolCalls: [{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_4HteVahXkRAkWjp6dGXryKZX'}]
We can see that there's now no content, but there is a tool call! It wants us to call the Tavily Search tool.
This isn't calling that tool yet - it's just telling us to. In order to actually call it, we'll want to create our agent.
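To make that hand-off concrete, here is a rough sketch of what executing the requested tool call by hand could look like: look up the tool by name, invoke it with the model-provided arguments, and collect the output. This is purely illustrative - the agent we build next handles this loop (including feeding results back to the model) for us.

```python
# Illustrative only: manually execute the tool call the model requested.
tools_by_name = {tool.name: tool for tool in tools}

for tool_call in response.tool_calls:
    selected_tool = tools_by_name[tool_call["name"]]
    tool_output = selected_tool.invoke(tool_call["args"])
    print(tool_output)
```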
Create the agent[β](#create-the-agent "Direct link to Create the agent")
------------------------------------------------------------------------
Now that we have defined the tools and the LLM, we can create the agent. We will be using a tool calling agent - for more information on this type of agent, as well as other options, see [this guide](/v0.2/docs/concepts/#agent_types/).
We can first choose the prompt we want to use to guide the agent.
If you want to see the contents of this prompt and have access to LangSmith, you can go to:
[https://smith.langchain.com/hub/hwchase17/openai-functions-agent](https://smith.langchain.com/hub/hwchase17/openai-functions-agent)
```python
from langchain import hub

# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-functions-agent")
prompt.messages
```
[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant')), MessagesPlaceholder(variable_name='chat_history', optional=True), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')), MessagesPlaceholder(variable_name='agent_scratchpad')]
Now, we can initialize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/v0.2/docs/concepts/#agents).
Note that we are passing in the `model`, not `model_with_tools`. That is because `create_tool_calling_agent` will call `.bind_tools` for us under the hood.
```python
from langchain.agents import create_tool_calling_agent

agent = create_tool_calling_agent(model, tools, prompt)
```
**API Reference:**[create\_tool\_calling\_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html)
Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools).
```python
from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(agent=agent, tools=tools)
```
**API Reference:**[AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html)
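If you'd like to watch the intermediate steps directly in your console rather than in LangSmith, `AgentExecutor` also accepts a `verbose` flag. A minimal variation of the line above:

```python
# Same executor, but logs each tool call and observation to stdout.
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```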
Run the agent[β](#run-the-agent "Direct link to Run the agent")
---------------------------------------------------------------
We can now run the agent on a few queries! Note that for now, these are all **stateless** queries (it won't remember previous interactions).
First up, let's see how it responds when there's no need to call a tool:
agent_executor.invoke({"input": "hi!"})
{'input': 'hi!', 'output': 'Hello! How can I assist you today?'}
In order to see exactly what is happening under the hood (and to make sure it's not calling a tool) we can take a look at the [LangSmith trace](https://smith.langchain.com/public/8441812b-94ce-4832-93ec-e1114214553a/r)
Let's now try it out on an example where it should be invoking the retriever:
agent_executor.invoke({"input": "how can langsmith help with testing?"})
{'input': 'how can langsmith help with testing?', 'output': 'LangSmith is a platform that aids in building production-grade Language Learning Model (LLM) applications. It can assist with testing in several ways:\n\n1. **Monitoring and Evaluation**: LangSmith allows close monitoring and evaluation of your application. This helps you to ensure the quality of your application and deploy it with confidence.\n\n2. **Tracing**: LangSmith has tracing capabilities that can be beneficial for debugging and understanding the behavior of your application.\n\n3. **Evaluation Capabilities**: LangSmith has built-in tools for evaluating the performance of your LLM. \n\n4. **Prompt Hub**: This is a prompt management tool built into LangSmith that can help in testing different prompts and their responses.\n\nPlease note that to use LangSmith, you would need to install it and create an API key. The platform offers Python and Typescript SDKs for utilization. It works independently and does not require the use of LangChain.'}
Let's take a look at the [LangSmith trace](https://smith.langchain.com/public/762153f6-14d4-4c98-8659-82650f860c62/r) to make sure it's actually calling that.
Now let's try one where it needs to call the search tool:
agent_executor.invoke({"input": "whats the weather in sf?"})
{'input': 'whats the weather in sf?', 'output': 'The current weather in San Francisco is partly cloudy with a temperature of 16.1Β°C (61.0Β°F). The wind is coming from the WNW at a speed of 10.5 mph. The humidity is at 67%. [source](https://www.weatherapi.com/)'}
We can check out the [LangSmith trace](https://smith.langchain.com/public/36df5b1a-9a0b-4185-bae2-964e1d53c665/r) to make sure it's calling the search tool effectively.
Adding in memory[β](#adding-in-memory "Direct link to Adding in memory")
------------------------------------------------------------------------
As mentioned earlier, this agent is stateless, meaning it does not remember previous interactions. To give it memory, we need to pass in the previous `chat_history`. Note: the variable needs to be called `chat_history` because of the prompt we are using; if we use a different prompt, we could change the variable name.
```python
# Here we pass in an empty list of messages for chat_history because it is the first message in the chat
agent_executor.invoke({"input": "hi! my name is bob", "chat_history": []})
```
{'input': 'hi! my name is bob', 'chat_history': [], 'output': 'Hello Bob! How can I assist you today?'}
from langchain_core.messages import AIMessage, HumanMessage
**API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html)
```python
agent_executor.invoke(
    {
        "chat_history": [
            HumanMessage(content="hi! my name is bob"),
            AIMessage(content="Hello Bob! How can I assist you today?"),
        ],
        "input": "what's my name?",
    }
)
```
{'chat_history': [HumanMessage(content='hi! my name is bob'), AIMessage(content='Hello Bob! How can I assist you today?')], 'input': "what's my name?", 'output': 'Your name is Bob. How can I assist you further?'}
If we want to keep track of these messages automatically, we can wrap this in a RunnableWithMessageHistory. For more information on how to use this, see [this guide](/v0.2/docs/how_to/message_history/).
```python
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

store = {}


def get_session_history(session_id: str) -> BaseChatMessageHistory:
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]
```
**API Reference:**[ChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.ChatMessageHistory.html) | [BaseChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.BaseChatMessageHistory.html) | [RunnableWithMessageHistory](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html)
Because we have multiple inputs, we need to specify two things:
* `input_messages_key`: The input key to use to add to the conversation history.
* `history_messages_key`: The key to add the loaded messages into.
```python
agent_with_chat_history = RunnableWithMessageHistory(
    agent_executor,
    get_session_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)
```
```python
agent_with_chat_history.invoke(
    {"input": "hi! I'm bob"},
    config={"configurable": {"session_id": "<foo>"}},
)
```
{'input': "hi! I'm bob", 'chat_history': [], 'output': 'Hello Bob! How can I assist you today?'}
```python
agent_with_chat_history.invoke(
    {"input": "what's my name?"},
    config={"configurable": {"session_id": "<foo>"}},
)
```
{'input': "what's my name?", 'chat_history': [HumanMessage(content="hi! I'm bob"), AIMessage(content='Hello Bob! How can I assist you today?')], 'output': 'Your name is Bob.'}
Example LangSmith trace: [https://smith.langchain.com/public/98c8d162-60ae-4493-aa9f-992d87bd0429/r](https://smith.langchain.com/public/98c8d162-60ae-4493-aa9f-992d87bd0429/r)
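Because `get_session_history` keys the stored messages by `session_id`, using a different session id starts from a clean history. A quick check (using a hypothetical second session id) that conversations are isolated:

```python
# A new session_id means a fresh, empty chat history,
# so the agent should no longer know the user's name.
agent_with_chat_history.invoke(
    {"input": "what's my name?"},
    config={"configurable": {"session_id": "<bar>"}},
)
```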
Conclusion[β](#conclusion "Direct link to Conclusion")
------------------------------------------------------
That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's a lot to learn!
info
This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph](/v0.2/docs/concepts/#langgraph).
If you want to continue using LangChain agents, some good advanced guides are:
* [How to use LangGraph's built-in versions of `AgentExecutor`](/v0.2/docs/how_to/migrate_agent/)
* [How to create a custom agent](https://python.langchain.com/v0.1/docs/modules/agents/how_to/custom_agent/)
* [How to stream responses from an agent](https://python.langchain.com/v0.1/docs/modules/agents/how_to/streaming/)
* [How to return structured output from an agent](https://python.langchain.com/v0.1/docs/modules/agents/how_to/agent_structured/)
| null |
https://python.langchain.com/v0.2/docs/how_to/prompts_partial/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to partially format prompt templates
On this page
How to partially format prompt templates
========================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
Like partially binding arguments to a function, it can make sense to "partial" a prompt template - e.g. pass in a subset of the required values, so as to create a new prompt template which expects only the remaining subset of values.
LangChain supports this in two ways:
1. Partial formatting with string values.
2. Partial formatting with functions that return string values.
In the examples below, we go over the motivations for both use cases as well as how to do it in LangChain.
Partial with strings[β](#partial-with-strings "Direct link to Partial with strings")
------------------------------------------------------------------------------------
One common use case for wanting to partial a prompt template is if you get access to some of the variables in a prompt before others. For example, suppose you have a prompt template that requires two variables, `foo` and `baz`. If you get the `foo` value early on in your chain, but the `baz` value later, it can be inconvenient to pass both variables all the way through the chain. Instead, you can partial the prompt template with the `foo` value, and then pass the partialed prompt template along and just use that. Below is an example of doing this:
```python
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("{foo}{bar}")
partial_prompt = prompt.partial(foo="foo")
print(partial_prompt.format(bar="baz"))
```
**API Reference:**[PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html)
foobaz
You can also just initialize the prompt with the partialed variables.
```python
prompt = PromptTemplate(
    template="{foo}{bar}", input_variables=["bar"], partial_variables={"foo": "foo"}
)
print(prompt.format(bar="baz"))
```
foobaz
Partial with functions[β](#partial-with-functions "Direct link to Partial with functions")
------------------------------------------------------------------------------------------
The other common use is to partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can't hard code it in the prompt, and passing it along with the other input variables is inconvenient. In this case, it's handy to be able to partial the prompt with a function that always returns the current date.
```python
from datetime import datetime


def _get_datetime():
    now = datetime.now()
    return now.strftime("%m/%d/%Y, %H:%M:%S")


prompt = PromptTemplate(
    template="Tell me a {adjective} joke about the day {date}",
    input_variables=["adjective", "date"],
)
partial_prompt = prompt.partial(date=_get_datetime)
print(partial_prompt.format(adjective="funny"))
```
Tell me a funny joke about the day 04/21/2024, 19:43:57
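Note that we partialed with the function itself rather than its return value, so the function should be looked up again each time the prompt is formatted - repeated calls pick up the current time. A small check of that behavior (the two-second pause is just for illustration):

```python
import time

# Each format() call re-runs _get_datetime, so the timestamp stays current.
print(partial_prompt.format(adjective="funny"))
time.sleep(2)
print(partial_prompt.format(adjective="silly"))
```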
You can also just initialize the prompt with the partialed variables, which often makes more sense in this workflow.
```python
prompt = PromptTemplate(
    template="Tell me a {adjective} joke about the day {date}",
    input_variables=["adjective"],
    partial_variables={"date": _get_datetime},
)
print(prompt.format(adjective="funny"))
```