https://python.langchain.com/v0.2/docs/how_to/streaming/
How to stream runnables
=======================

Prerequisites

This guide assumes familiarity with the following concepts:

* [Chat models](/v0.2/docs/concepts/#chat-models)
* [LangChain Expression Language](/v0.2/docs/concepts/#langchain-expression-language)
* [Output parsers](/v0.2/docs/concepts/#output-parsers)

Streaming is critical to making applications built on LLMs feel responsive to end users.

Important LangChain primitives like [chat models](/v0.2/docs/concepts/#chat-models), [output parsers](/v0.2/docs/concepts/#output-parsers), [prompts](/v0.2/docs/concepts/#prompt-templates), [retrievers](/v0.2/docs/concepts/#retrievers), and [agents](/v0.2/docs/concepts/#agents) implement the LangChain [Runnable Interface](/v0.2/docs/concepts/#interface).

This interface provides two general approaches to stream content:

1. sync `stream` and async `astream`: a **default implementation** of streaming that streams the **final output** from the chain.
2. async `astream_events` and async `astream_log`: these provide a way to stream both **intermediate steps** and **final output** from the chain.

Let's take a look at both approaches and try to understand how to use them.

info For a higher-level overview of streaming techniques in LangChain, see [this section of the conceptual guide](/v0.2/docs/concepts/#streaming).

Using Stream[​](#using-stream "Direct link to Using Stream")
------------------------------------------------------------

All `Runnable` objects implement a sync method called `stream` and an async variant called `astream`. These methods are designed to stream the final output in chunks, yielding each chunk as soon as it is available.

Streaming is only possible if all steps in the program know how to process an **input stream**; i.e., process an input chunk one at a time and yield a corresponding output chunk. The complexity of this processing can vary, from straightforward tasks like emitting tokens produced by an LLM, to more challenging ones like streaming parts of JSON results before the entire JSON is complete.

The best place to start exploring streaming is with the single most important component in LLM apps -- the LLMs themselves!

### LLMs and Chat Models[​](#llms-and-chat-models "Direct link to LLMs and Chat Models")

Large language models and their chat variants are the primary bottleneck in LLM-based apps. Large language models can take **several seconds** to generate a complete response to a query. This is far slower than the **~200-300 ms** threshold at which an application feels responsive to an end user.

The key strategy to make the application feel more responsive is to show intermediate progress; that is, to stream the output from the model **token by token**.

We will show examples of streaming using a chat model.
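The ~200-300 ms figure is about perceived latency: with streaming, the user sees the first tokens almost immediately even though the complete answer still takes seconds. As a rough illustration, here is a hedged sketch that times the first chunk against the full response; it assumes one of the `model` objects configured below.

```python
import time

def time_first_token(model, prompt: str) -> None:
    """Compare time-to-first-chunk with total generation time for a streamed call."""
    start = time.perf_counter()
    first_chunk_at = None
    pieces = []
    for chunk in model.stream(prompt):  # yields AIMessageChunk objects as they arrive
        if first_chunk_at is None:
            first_chunk_at = time.perf_counter() - start
        pieces.append(chunk.content)
    total = time.perf_counter() - start
    print(f"first chunk after {first_chunk_at:.2f}s, full response after {total:.2f}s")
    print("".join(pieces))

# Usage (assuming `model` is one of the chat models set up below):
# time_first_token(model, "what color is the sky?")
```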
Choose one from the options below: * OpenAI * Anthropic * Azure * Google * Cohere * FireworksAI * Groq * MistralAI * TogetherAI pip install -qU langchain-openai import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAImodel = ChatOpenAI(model="gpt-3.5-turbo-0125") pip install -qU langchain-anthropic import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicmodel = ChatAnthropic(model="claude-3-sonnet-20240229") pip install -qU langchain-openai import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAImodel = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],) pip install -qU langchain-google-vertexai import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAImodel = ChatVertexAI(model="gemini-pro") pip install -qU langchain-cohere import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoheremodel = ChatCohere(model="command-r") pip install -qU langchain-fireworks import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksmodel = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct") pip install -qU langchain-groq import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqmodel = ChatGroq(model="llama3-8b-8192") pip install -qU langchain-mistralai import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAImodel = ChatMistralAI(model="mistral-large-latest") pip install -qU langchain-openai import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAImodel = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",) Let's start with the sync `stream` API: chunks = []for chunk in model.stream("what color is the sky?"): chunks.append(chunk) print(chunk.content, end="|", flush=True) The| sky| appears| blue| during| the| day|.| Alternatively, if you're working in an async environment, you may consider using the async `astream` API: chunks = []async for chunk in model.astream("what color is the sky?"): chunks.append(chunk) print(chunk.content, end="|", flush=True) The| sky| appears| blue| during| the| day|.| Let's inspect one of the chunks chunks[0] AIMessageChunk(content='The', id='run-b36bea64-5511-4d7a-b6a3-a07b3db0c8e7') We got back something called an `AIMessageChunk`. This chunk represents a part of an `AIMessage`. Message chunks are additive by design -- one can simply add them up to get the state of the response so far! chunks[0] + chunks[1] + chunks[2] + chunks[3] + chunks[4] AIMessageChunk(content='The sky appears blue during', id='run-b36bea64-5511-4d7a-b6a3-a07b3db0c8e7') ### Chains[​](#chains "Direct link to Chains") Virtually all LLM applications involve more steps than just a call to a language model. Let's build a simple chain using `LangChain Expression Language` (`LCEL`) that combines a prompt, model and a parser and verify that streaming works. 
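One quick aside before building the chain: because message chunks are additive, you can fold a whole stream into a single running message instead of indexing into a list as above. A hedged sketch, reusing the `model` configured earlier:

```python
def stream_to_message(model, prompt: str):
    """Accumulate streamed AIMessageChunk objects into one running message."""
    final = None
    for chunk in model.stream(prompt):
        # AIMessageChunk supports "+", so the running sum holds the response so far.
        final = chunk if final is None else final + chunk
        print(chunk.content, end="|", flush=True)
    return final

# final_message = stream_to_message(model, "what color is the sky?")
# final_message.content now holds the full text of the response.
```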
We will use [`StrOutputParser`](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) to parse the output from the model. This is a simple parser that extracts the `content` field from an `AIMessageChunk`, giving us the `token` returned by the model. tip LCEL is a _declarative_ way to specify a "program" by chainining together different LangChain primitives. Chains created using LCEL benefit from an automatic implementation of `stream` and `astream` allowing streaming of the final output. In fact, chains created with LCEL implement the entire standard Runnable interface. from langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplateprompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")parser = StrOutputParser()chain = prompt | model | parserasync for chunk in chain.astream({"topic": "parrot"}): print(chunk, end="|", flush=True) **API Reference:**[StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) Here|'s| a| joke| about| a| par|rot|:|A man| goes| to| a| pet| shop| to| buy| a| par|rot|.| The| shop| owner| shows| him| two| stunning| pa|rr|ots| with| beautiful| pl|um|age|.|"|There|'s| a| talking| par|rot| an|d a| non|-|talking| par|rot|,"| the| owner| says|.| "|The| talking| par|rot| costs| $|100|,| an|d the| non|-|talking| par|rot| is| $|20|."|The| man| says|,| "|I|'ll| take| the| non|-|talking| par|rot| at| $|20|."|He| pays| an|d leaves| with| the| par|rot|.| As| he|'s| walking| down| the| street|,| the| par|rot| looks| up| at| him| an|d says|,| "|You| know|,| you| really| are| a| stupi|d man|!"|The| man| is| stun|ne|d an|d looks| at| the| par|rot| in| dis|bel|ief|.| The| par|rot| continues|,| "|Yes|,| you| got| r|ippe|d off| big| time|!| I| can| talk| just| as| well| as| that| other| par|rot|,| an|d you| only| pai|d $|20| |for| me|!"| Note that we're getting streaming output even though we're using `parser` at the end of the chain above. The `parser` operates on each streaming chunk individidually. Many of the [LCEL primitives](/v0.2/docs/how_to/#langchain-expression-language-lcel) also support this kind of transform-style passthrough streaming, which can be very convenient when constructing apps. Custom functions can be [designed to return generators](/v0.2/docs/how_to/functions/#streaming), which are able to operate on streams. Certain runnables, like [prompt templates](/v0.2/docs/how_to/#prompt-templates) and [chat models](/v0.2/docs/how_to/#chat-models), cannot process individual chunks and instead aggregate all previous steps. Such runnables can interrupt the streaming process. note The LangChain Expression language allows you to separate the construction of a chain from the mode in which it is used (e.g., sync/async, batch/streaming etc.). If this is not relevant to what you're building, you can also rely on a standard **imperative** programming approach by caling `invoke`, `batch` or `stream` on each component individually, assigning the results to variables and then using them downstream as you see fit. ### Working with Input Streams[​](#working-with-input-streams "Direct link to Working with Input Streams") What if you wanted to stream JSON from the output as it was being generated? 
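Before getting to JSON: the note above mentions an imperative alternative to LCEL. For the same prompt/model/parser pipeline, that style could look roughly like the hedged sketch below; the function name is just for illustration.

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
parser = StrOutputParser()

def stream_joke(topic: str):
    """Imperative equivalent of (prompt | model | parser).stream(...)."""
    messages = prompt.invoke({"topic": topic})   # format the prompt eagerly
    for chunk in model.stream(messages):         # stream tokens from the model
        yield parser.invoke(chunk)               # extract the text of each chunk

# for token in stream_joke("parrot"):
#     print(token, end="|", flush=True)
```

The LCEL version remains preferable here, since the composed chain gets `stream`, `astream`, `batch`, and `astream_events` for free.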
If you were to rely on `json.loads` to parse the partial json, the parsing would fail as the partial json wouldn't be valid json. You'd likely be at a complete loss of what to do and claim that it wasn't possible to stream JSON. Well, turns out there is a way to do it -- the parser needs to operate on the **input stream**, and attempt to "auto-complete" the partial json into a valid state. Let's see such a parser in action to understand what this means. from langchain_core.output_parsers import JsonOutputParserchain = ( model | JsonOutputParser()) # Due to a bug in older versions of Langchain, JsonOutputParser did not stream results from some modelsasync for text in chain.astream( "output a list of the countries france, spain and japan and their populations in JSON format. " 'Use a dict with an outer key of "countries" which contains a list of countries. ' "Each country should have the key `name` and `population`"): print(text, flush=True) **API Reference:**[JsonOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html) {}{'countries': []}{'countries': [{}]}{'countries': [{'name': ''}]}{'countries': [{'name': 'France'}]}{'countries': [{'name': 'France', 'population': 67}]}{'countries': [{'name': 'France', 'population': 67413}]}{'countries': [{'name': 'France', 'population': 67413000}]}{'countries': [{'name': 'France', 'population': 67413000}, {}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': ''}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain'}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {'name': ''}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {'name': 'Japan'}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {'name': 'Japan', 'population': 125}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {'name': 'Japan', 'population': 125584}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {'name': 'Japan', 'population': 125584000}]} Now, let's **break** streaming. We'll use the previous example and append an extraction function at the end that extracts the country names from the finalized JSON. danger Any steps in the chain that operate on **finalized inputs** rather than on **input streams** can break streaming functionality via `stream` or `astream`. tip Later, we will discuss the `astream_events` API which streams results from intermediate steps. This API will stream results from intermediate steps even if the chain contains steps that only operate on **finalized inputs**. 
from langchain_core.output_parsers import ( JsonOutputParser,)# A function that operates on finalized inputs# rather than on an input_streamdef _extract_country_names(inputs): """A function that does not operates on input streams and breaks streaming.""" if not isinstance(inputs, dict): return "" if "countries" not in inputs: return "" countries = inputs["countries"] if not isinstance(countries, list): return "" country_names = [ country.get("name") for country in countries if isinstance(country, dict) ] return country_nameschain = model | JsonOutputParser() | _extract_country_namesasync for text in chain.astream( "output a list of the countries france, spain and japan and their populations in JSON format. " 'Use a dict with an outer key of "countries" which contains a list of countries. ' "Each country should have the key `name` and `population`"): print(text, end="|", flush=True) **API Reference:**[JsonOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html) ['France', 'Spain', 'Japan']| #### Generator Functions[​](#generator-functions "Direct link to Generator Functions") Le'ts fix the streaming using a generator function that can operate on the **input stream**. tip A generator function (a function that uses `yield`) allows writing code that operates on **input streams** from langchain_core.output_parsers import JsonOutputParserasync def _extract_country_names_streaming(input_stream): """A function that operates on input streams.""" country_names_so_far = set() async for input in input_stream: if not isinstance(input, dict): continue if "countries" not in input: continue countries = input["countries"] if not isinstance(countries, list): continue for country in countries: name = country.get("name") if not name: continue if name not in country_names_so_far: yield name country_names_so_far.add(name)chain = model | JsonOutputParser() | _extract_country_names_streamingasync for text in chain.astream( "output a list of the countries france, spain and japan and their populations in JSON format. " 'Use a dict with an outer key of "countries" which contains a list of countries. ' "Each country should have the key `name` and `population`",): print(text, end="|", flush=True) **API Reference:**[JsonOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html) France|Spain|Japan| note Because the code above is relying on JSON auto-completion, you may see partial names of countries (e.g., `Sp` and `Spain`), which is not what one would want for an extraction result! We're focusing on streaming concepts, not necessarily the results of the chains. ### Non-streaming components[​](#non-streaming-components "Direct link to Non-streaming components") Some built-in components like Retrievers do not offer any `streaming`. What happens if we try to `stream` them? 
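One small aside before answering that: the generator above is written as an async generator for `astream`, but the same trick works with a plain sync generator when you drive the chain with the sync `stream` API. A hedged sketch, reusing `model` and `JsonOutputParser` from above:

```python
from langchain_core.output_parsers import JsonOutputParser

def _extract_country_names_sync(input_stream):
    """Sync generator that yields each country name once, as soon as it appears."""
    seen = set()
    for parsed in input_stream:          # partial JSON objects from JsonOutputParser
        if not isinstance(parsed, dict):
            continue
        for country in parsed.get("countries", []):
            if not isinstance(country, dict):
                continue
            name = country.get("name")
            if name and name not in seen:
                seen.add(name)
                yield name

sync_chain = model | JsonOutputParser() | _extract_country_names_sync

# for name in sync_chain.stream("output a list of the countries france, spain and japan ..."):
#     print(name, end="|", flush=True)
```

With that noted, back to what happens when we `stream` a retriever.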
🀨 from langchain_community.vectorstores import FAISSfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import OpenAIEmbeddingstemplate = """Answer the question based only on the following context:{context}Question: {question}"""prompt = ChatPromptTemplate.from_template(template)vectorstore = FAISS.from_texts( ["harrison worked at kensho", "harrison likes spicy food"], embedding=OpenAIEmbeddings(),)retriever = vectorstore.as_retriever()chunks = [chunk for chunk in retriever.stream("where did harrison work?")]chunks **API Reference:**[FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) [[Document(page_content='harrison worked at kensho'), Document(page_content='harrison likes spicy food')]] Stream just yielded the final result from that component. This is OK πŸ₯Ή! Not all components have to implement streaming -- in some cases streaming is either unnecessary, difficult or just doesn't make sense. tip An LCEL chain constructed using non-streaming components, will still be able to stream in a lot of cases, with streaming of partial output starting after the last non-streaming step in the chain. retrieval_chain = ( { "context": retriever.with_config(run_name="Docs"), "question": RunnablePassthrough(), } | prompt | model | StrOutputParser()) for chunk in retrieval_chain.stream( "Where did harrison work? " "Write 3 made up sentences about this place."): print(chunk, end="|", flush=True) Base|d on| the| given| context|,| Harrison| worke|d at| K|ens|ho|.|Here| are| |3| |made| up| sentences| about| this| place|:|1|.| K|ens|ho| was| a| cutting|-|edge| technology| company| known| for| its| innovative| solutions| in| artificial| intelligence| an|d data| analytics|.|2|.| The| modern| office| space| at| K|ens|ho| feature|d open| floor| plans|,| collaborative| work|sp|aces|,| an|d a| vib|rant| atmosphere| that| fos|tere|d creativity| an|d team|work|.|3|.| With| its| prime| location| in| the| heart| of| the| city|,| K|ens|ho| attracte|d top| talent| from| aroun|d the| worl|d,| creating| a| diverse| an|d dynamic| work| environment|.| Now that we've seen how `stream` and `astream` work, let's venture into the world of streaming events. 🏞️ Using Stream Events[​](#using-stream-events "Direct link to Using Stream Events") --------------------------------------------------------------------------------- Event Streaming is a **beta** API. This API may change a bit based on feedback. note This guide demonstrates the `V2` API and requires langchain-core >= 0.2. For the `V1` API compatible with older versions of LangChain, see [here](https://python.langchain.com/v0.1/docs/expression_language/streaming/#using-stream-events). 
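In outline, `astream_events` is consumed as an async iterator of event dictionaries; the sections below show concrete events. A minimal, hedged sketch of the usage shape:

```python
async def show_events(runnable, query: str) -> None:
    """Print the event name and the runnable that emitted it for every streamed event."""
    async for event in runnable.astream_events(query, version="v2"):
        # Each event is a dict with keys such as "event", "name", "run_id", and "data".
        print(event["event"], "-", event["name"])

# await show_events(model, "hello")
```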
import langchain_corelangchain_core.__version__ For the `astream_events` API to work properly: * Use `async` throughout the code to the extent possible (e.g., async tools etc) * Propagate callbacks if defining custom functions / runnables * Whenever using runnables without LCEL, make sure to call `.astream()` on LLMs rather than `.ainvoke` to force the LLM to stream tokens. * Let us know if anything doesn't work as expected! :) ### Event Reference[​](#event-reference "Direct link to Event Reference") Below is a reference table that shows some events that might be emitted by the various Runnable objects. note When streaming is implemented properly, the inputs to a runnable will not be known until after the input stream has been entirely consumed. This means that `inputs` will often be included only for `end` events and rather than for `start` events. event name chunk input output on\_chat\_model\_start \[model name\] {"messages": \[\[SystemMessage, HumanMessage\]\]} on\_chat\_model\_stream \[model name\] AIMessageChunk(content="hello") on\_chat\_model\_end \[model name\] {"messages": \[\[SystemMessage, HumanMessage\]\]} AIMessageChunk(content="hello world") on\_llm\_start \[model name\] {'input': 'hello'} on\_llm\_stream \[model name\] 'Hello' on\_llm\_end \[model name\] 'Hello human!' on\_chain\_start format\_docs on\_chain\_stream format\_docs "hello world!, goodbye world!" on\_chain\_end format\_docs \[Document(...)\] "hello world!, goodbye world!" on\_tool\_start some\_tool {"x": 1, "y": "2"} on\_tool\_end some\_tool {"x": 1, "y": "2"} on\_retriever\_start \[retriever name\] {"query": "hello"} on\_retriever\_end \[retriever name\] {"query": "hello"} \[Document(...), ..\] on\_prompt\_start \[template\_name\] {"question": "hello"} on\_prompt\_end \[template\_name\] {"question": "hello"} ChatPromptValue(messages: \[SystemMessage, ...\]) ### Chat Model[​](#chat-model "Direct link to Chat Model") Let's start off by looking at the events produced by a chat model. events = []async for event in model.astream_events("hello", version="v2"): events.append(event) /home/eugene/src/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: This API is in beta and may change in the future. warn_beta( note Hey what's that funny version="v2" parameter in the API?! 😾 This is a **beta API**, and we're almost certainly going to make some changes to it (in fact, we already have!) This version parameter will allow us to minimize such breaking changes to your code. In short, we are annoying you now, so we don't have to annoy you later. `v2` is only available for langchain-core>=0.2.0. Let's take a look at the few of the start event and a few of the end events. 
events[:3] [{'event': 'on_chat_model_start', 'data': {'input': 'hello'}, 'name': 'ChatAnthropic', 'tags': [], 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3', 'metadata': {}}, {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='Hello', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')}, 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {}}, {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='!', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')}, 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {}}] events[-2:] [{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='?', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')}, 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {}}, {'event': 'on_chat_model_end', 'data': {'output': AIMessageChunk(content='Hello! How can I assist you today?', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')}, 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {}}] ### Chain[​](#chain "Direct link to Chain") Let's revisit the example chain that parsed streaming JSON to explore the streaming events API. chain = ( model | JsonOutputParser()) # Due to a bug in older versions of Langchain, JsonOutputParser did not stream results from some modelsevents = [ event async for event in chain.astream_events( "output a list of the countries france, spain and japan and their populations in JSON format. " 'Use a dict with an outer key of "countries" which contains a list of countries. ' "Each country should have the key `name` and `population`", version="v2", )] If you examine at the first few events, you'll notice that there are **3** different start events rather than **2** start events. The three start events correspond to: 1. The chain (model + parser) 2. The model 3. The parser events[:3] [{'event': 'on_chain_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'RunnableSequence', 'tags': [], 'run_id': '4765006b-16e2-4b1d-a523-edd9fd64cb92', 'metadata': {}}, {'event': 'on_chat_model_start', 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`')]]}}, 'name': 'ChatAnthropic', 'tags': ['seq:step:1'], 'run_id': '0320c234-7b52-4a14-ae4e-5f100949e589', 'metadata': {}}, {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-0320c234-7b52-4a14-ae4e-5f100949e589')}, 'run_id': '0320c234-7b52-4a14-ae4e-5f100949e589', 'name': 'ChatAnthropic', 'tags': ['seq:step:1'], 'metadata': {}}] What do you think you'd see if you looked at the last 3 events? what about the middle? Let's use this API to take output the stream events from the model and the parser. We're ignoring start events, end events and events from the chain. num_events = 0async for event in chain.astream_events( "output a list of the countries france, spain and japan and their populations in JSON format. " 'Use a dict with an outer key of "countries" which contains a list of countries. 
' "Each country should have the key `name` and `population`", version="v2",): kind = event["event"] if kind == "on_chat_model_stream": print( f"Chat model chunk: {repr(event['data']['chunk'].content)}", flush=True, ) if kind == "on_parser_stream": print(f"Parser chunk: {event['data']['chunk']}", flush=True) num_events += 1 if num_events > 30: # Truncate the output print("...") break Chat model chunk: '{'Parser chunk: {}Chat model chunk: '\n 'Chat model chunk: '"'Chat model chunk: 'countries'Chat model chunk: '":'Chat model chunk: ' ['Parser chunk: {'countries': []}Chat model chunk: '\n 'Chat model chunk: '{'Parser chunk: {'countries': [{}]}Chat model chunk: '\n 'Chat model chunk: '"'Chat model chunk: 'name'Chat model chunk: '":'Chat model chunk: ' "'Parser chunk: {'countries': [{'name': ''}]}Chat model chunk: 'France'Parser chunk: {'countries': [{'name': 'France'}]}Chat model chunk: '",'Chat model chunk: '\n 'Chat model chunk: '"'Chat model chunk: 'population'... Because both the model and the parser support streaming, we see streaming events from both components in real time! Kind of cool isn't it? 🦜 ### Filtering Events[​](#filtering-events "Direct link to Filtering Events") Because this API produces so many events, it is useful to be able to filter on events. You can filter by either component `name`, component `tags` or component `type`. #### By Name[​](#by-name "Direct link to By Name") chain = model.with_config({"run_name": "model"}) | JsonOutputParser().with_config( {"run_name": "my_parser"})max_events = 0async for event in chain.astream_events( "output a list of the countries france, spain and japan and their populations in JSON format. " 'Use a dict with an outer key of "countries" which contains a list of countries. ' "Each country should have the key `name` and `population`", version="v2", include_names=["my_parser"],): print(event) max_events += 1 if max_events > 10: # Truncate output print("...") break {'event': 'on_parser_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. 
Each country should have the key `name` and `population`'}, 'name': 'my_parser', 'tags': ['seq:step:2'], 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': []}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': ''}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France'}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}, {}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}, {'name': ''}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}... #### By Type[​](#by-type "Direct link to By Type") chain = model.with_config({"run_name": "model"}) | JsonOutputParser().with_config( {"run_name": "my_parser"})max_events = 0async for event in chain.astream_events( 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`', version="v2", include_types=["chat_model"],): print(event) max_events += 1 if max_events > 10: # Truncate output print("...") break {'event': 'on_chat_model_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. 
Each country should have the key `name` and `population`'}, 'name': 'model', 'tags': ['seq:step:1'], 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\n ', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='"', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='countries', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='":', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' [', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\n ', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\n ', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='"', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}... #### By Tags[​](#by-tags "Direct link to By Tags") caution Tags are inherited by child components of a given runnable. If you're using tags to filter, make sure that this is what you want. chain = (model | JsonOutputParser()).with_config({"tags": ["my_chain"]})max_events = 0async for event in chain.astream_events( 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`', version="v2", include_tags=["my_chain"],): print(event) max_events += 1 if max_events > 10: # Truncate output print("...") break {'event': 'on_chain_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. 
Each country should have the key `name` and `population`'}, 'name': 'RunnableSequence', 'tags': ['my_chain'], 'run_id': 'fd68dd64-7a4d-4bdb-a0c2-ee592db0d024', 'metadata': {}}{'event': 'on_chat_model_start', 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`')]]}}, 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}{'event': 'on_parser_start', 'data': {}, 'name': 'JsonOutputParser', 'tags': ['seq:step:2', 'my_chain'], 'run_id': 'afde30b9-beac-4b36-b4c7-dbbe423ddcdb', 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {}}, 'run_id': 'afde30b9-beac-4b36-b4c7-dbbe423ddcdb', 'name': 'JsonOutputParser', 'tags': ['seq:step:2', 'my_chain'], 'metadata': {}}{'event': 'on_chain_stream', 'data': {'chunk': {}}, 'run_id': 'fd68dd64-7a4d-4bdb-a0c2-ee592db0d024', 'name': 'RunnableSequence', 'tags': ['my_chain'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\n ', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='"', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='countries', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='":', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' [', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}... ### Non-streaming components[​](#non-streaming-components-1 "Direct link to Non-streaming components") Remember how some components don't stream well because they don't operate on **input streams**? While such components can break streaming of the final output when using `astream`, `astream_events` will still yield streaming events from intermediate steps that support streaming! 
# Function that does not support streaming.# It operates on the finalizes inputs rather than# operating on the input stream.def _extract_country_names(inputs): """A function that does not operates on input streams and breaks streaming.""" if not isinstance(inputs, dict): return "" if "countries" not in inputs: return "" countries = inputs["countries"] if not isinstance(countries, list): return "" country_names = [ country.get("name") for country in countries if isinstance(country, dict) ] return country_nameschain = ( model | JsonOutputParser() | _extract_country_names) # This parser only works with OpenAI right now As expected, the `astream` API doesn't work correctly because `_extract_country_names` doesn't operate on streams. async for chunk in chain.astream( "output a list of the countries france, spain and japan and their populations in JSON format. " 'Use a dict with an outer key of "countries" which contains a list of countries. ' "Each country should have the key `name` and `population`",): print(chunk, flush=True) ['France', 'Spain', 'Japan'] Now, let's confirm that with astream\_events we're still seeing streaming output from the model and the parser. num_events = 0async for event in chain.astream_events( "output a list of the countries france, spain and japan and their populations in JSON format. " 'Use a dict with an outer key of "countries" which contains a list of countries. ' "Each country should have the key `name` and `population`", version="v2",): kind = event["event"] if kind == "on_chat_model_stream": print( f"Chat model chunk: {repr(event['data']['chunk'].content)}", flush=True, ) if kind == "on_parser_stream": print(f"Parser chunk: {event['data']['chunk']}", flush=True) num_events += 1 if num_events > 30: # Truncate the output print("...") break Chat model chunk: '{'Parser chunk: {}Chat model chunk: '\n 'Chat model chunk: '"'Chat model chunk: 'countries'Chat model chunk: '":'Chat model chunk: ' ['Parser chunk: {'countries': []}Chat model chunk: '\n 'Chat model chunk: '{'Parser chunk: {'countries': [{}]}Chat model chunk: '\n 'Chat model chunk: '"'Chat model chunk: 'name'Chat model chunk: '":'Chat model chunk: ' "'Parser chunk: {'countries': [{'name': ''}]}Chat model chunk: 'France'Parser chunk: {'countries': [{'name': 'France'}]}Chat model chunk: '",'Chat model chunk: '\n 'Chat model chunk: '"'Chat model chunk: 'population'Chat model chunk: '":'Chat model chunk: ' 'Chat model chunk: '67'Parser chunk: {'countries': [{'name': 'France', 'population': 67}]}... ### Propagating Callbacks[​](#propagating-callbacks "Direct link to Propagating Callbacks") caution If you're using invoking runnables inside your tools, you need to propagate callbacks to the runnable; otherwise, no stream events will be generated. note When using `RunnableLambdas` or `@chain` decorator, callbacks are propagated automatically behind the scenes. 
from langchain_core.runnables import RunnableLambdafrom langchain_core.tools import tooldef reverse_word(word: str): return word[::-1]reverse_word = RunnableLambda(reverse_word)@tooldef bad_tool(word: str): """Custom tool that doesn't propagate callbacks.""" return reverse_word.invoke(word)async for event in bad_tool.astream_events("hello", version="v2"): print(event) **API Reference:**[RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html) | [tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html) {'event': 'on_tool_start', 'data': {'input': 'hello'}, 'name': 'bad_tool', 'tags': [], 'run_id': 'ea900472-a8f7-425d-b627-facdef936ee8', 'metadata': {}}{'event': 'on_chain_start', 'data': {'input': 'hello'}, 'name': 'reverse_word', 'tags': [], 'run_id': '77b01284-0515-48f4-8d7c-eb27c1882f86', 'metadata': {}}{'event': 'on_chain_end', 'data': {'output': 'olleh', 'input': 'hello'}, 'run_id': '77b01284-0515-48f4-8d7c-eb27c1882f86', 'name': 'reverse_word', 'tags': [], 'metadata': {}}{'event': 'on_tool_end', 'data': {'output': 'olleh'}, 'run_id': 'ea900472-a8f7-425d-b627-facdef936ee8', 'name': 'bad_tool', 'tags': [], 'metadata': {}} Here's a re-implementation that does propagate callbacks correctly. You'll notice that now we're getting events from the `reverse_word` runnable as well. @tooldef correct_tool(word: str, callbacks): """A tool that correctly propagates callbacks.""" return reverse_word.invoke(word, {"callbacks": callbacks})async for event in correct_tool.astream_events("hello", version="v2"): print(event) {'event': 'on_tool_start', 'data': {'input': 'hello'}, 'name': 'correct_tool', 'tags': [], 'run_id': 'd5ea83b9-9278-49cc-9f1d-aa302d671040', 'metadata': {}}{'event': 'on_chain_start', 'data': {'input': 'hello'}, 'name': 'reverse_word', 'tags': [], 'run_id': '44dafbf4-2f87-412b-ae0e-9f71713810df', 'metadata': {}}{'event': 'on_chain_end', 'data': {'output': 'olleh', 'input': 'hello'}, 'run_id': '44dafbf4-2f87-412b-ae0e-9f71713810df', 'name': 'reverse_word', 'tags': [], 'metadata': {}}{'event': 'on_tool_end', 'data': {'output': 'olleh'}, 'run_id': 'd5ea83b9-9278-49cc-9f1d-aa302d671040', 'name': 'correct_tool', 'tags': [], 'metadata': {}} If you're invoking runnables from within Runnable Lambdas or `@chains`, then callbacks will be passed automatically on your behalf. 
from langchain_core.runnables import RunnableLambdaasync def reverse_and_double(word: str): return await reverse_word.ainvoke(word) * 2reverse_and_double = RunnableLambda(reverse_and_double)await reverse_and_double.ainvoke("1234")async for event in reverse_and_double.astream_events("1234", version="v2"): print(event) **API Reference:**[RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html) {'event': 'on_chain_start', 'data': {'input': '1234'}, 'name': 'reverse_and_double', 'tags': [], 'run_id': '03b0e6a1-3e60-42fc-8373-1e7829198d80', 'metadata': {}}{'event': 'on_chain_start', 'data': {'input': '1234'}, 'name': 'reverse_word', 'tags': [], 'run_id': '5cf26fc8-840b-4642-98ed-623dda28707a', 'metadata': {}}{'event': 'on_chain_end', 'data': {'output': '4321', 'input': '1234'}, 'run_id': '5cf26fc8-840b-4642-98ed-623dda28707a', 'name': 'reverse_word', 'tags': [], 'metadata': {}}{'event': 'on_chain_stream', 'data': {'chunk': '43214321'}, 'run_id': '03b0e6a1-3e60-42fc-8373-1e7829198d80', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}}{'event': 'on_chain_end', 'data': {'output': '43214321'}, 'run_id': '03b0e6a1-3e60-42fc-8373-1e7829198d80', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}} And with the `@chain` decorator: from langchain_core.runnables import chain@chainasync def reverse_and_double(word: str): return await reverse_word.ainvoke(word) * 2await reverse_and_double.ainvoke("1234")async for event in reverse_and_double.astream_events("1234", version="v2"): print(event) **API Reference:**[chain](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.chain.html) {'event': 'on_chain_start', 'data': {'input': '1234'}, 'name': 'reverse_and_double', 'tags': [], 'run_id': '1bfcaedc-f4aa-4d8e-beee-9bba6ef17008', 'metadata': {}}{'event': 'on_chain_start', 'data': {'input': '1234'}, 'name': 'reverse_word', 'tags': [], 'run_id': '64fc99f0-5d7d-442b-b4f5-4537129f67d1', 'metadata': {}}{'event': 'on_chain_end', 'data': {'output': '4321', 'input': '1234'}, 'run_id': '64fc99f0-5d7d-442b-b4f5-4537129f67d1', 'name': 'reverse_word', 'tags': [], 'metadata': {}}{'event': 'on_chain_stream', 'data': {'chunk': '43214321'}, 'run_id': '1bfcaedc-f4aa-4d8e-beee-9bba6ef17008', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}}{'event': 'on_chain_end', 'data': {'output': '43214321'}, 'run_id': '1bfcaedc-f4aa-4d8e-beee-9bba6ef17008', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}} Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ Now you've learned some ways to stream both final outputs and internal steps with LangChain. To learn more, check out the other how-to guides in this section, or the [conceptual guide on Langchain Expression Language](/v0.2/docs/concepts/#langchain-expression-language/). [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/streaming.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). 
https://python.langchain.com/v0.2/docs/how_to/tool_calling_parallel/
tool\_calling\_parallel
=======================

### Disabling parallel tool calling (OpenAI only)[​](#disabling-parallel-tool-calling-openai-only "Direct link to Disabling parallel tool calling (OpenAI only)")

OpenAI tool calling performs tool calling in parallel by default. That means that if we ask a question like "What is the weather in Tokyo, New York, and Chicago?" and we have a tool for getting the weather, it will call the tool 3 times in parallel. We can force it to call only a single tool by using the `parallel_tool_calls` parameter.

First, let's set up our tools and model:

```python
from langchain_core.tools import tool

@tool
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b

@tool
def multiply(a: int, b: int) -> int:
    """Multiplies a and b."""
    return a * b

tools = [add, multiply]
```

**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)

```python
import os
from getpass import getpass

from langchain_openai import ChatOpenAI

os.environ["OPENAI_API_KEY"] = getpass()
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
```

**API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)

Now let's show a quick example of how disabling parallel tool calls works:

```python
llm_with_tools = llm.bind_tools(tools, parallel_tool_calls=False)
llm_with_tools.invoke("Please call the first tool two times").tool_calls
```

```
[{'name': 'add', 'args': {'a': 2, 'b': 2}, 'id': 'call_Hh4JOTCDM85Sm9Pr84VKrWu5'}]
```

As we can see, even though we explicitly told the model to call a tool twice, by disabling parallel tool calls the model was constrained to calling only one.
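To see the difference directly, you can ask a multi-part question with and without the flag and compare how many tool calls come back. A hedged sketch (exact model behavior varies from run to run):

```python
query = "What is 3 * 12? Also, what is 11 + 49?"

parallel = llm.bind_tools(tools)                           # default: parallel tool calls allowed
serial = llm.bind_tools(tools, parallel_tool_calls=False)  # at most one tool call per response

print(len(parallel.invoke(query).tool_calls))  # typically 2 (multiply and add together)
print(len(serial.invoke(query).tool_calls))    # typically 1 (a single call per model turn)
```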
https://python.langchain.com/v0.2/docs/how_to/tool_choice/
How to force tool calling behavior
==================================

In order to force our LLM to select a specific tool, we can use the `tool_choice` parameter to ensure certain behavior. First, let's define our model and tools:

```python
from langchain_core.tools import tool

@tool
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b

@tool
def multiply(a: int, b: int) -> int:
    """Multiplies a and b."""
    return a * b

tools = [add, multiply]
```

**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)

For example, we can force our model to call the multiply tool by using the following code:

```python
llm_forced_to_multiply = llm.bind_tools(tools, tool_choice="Multiply")
llm_forced_to_multiply.invoke("what is 2 + 4")
```

```
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_9cViskmLvPnHjXk9tbVla5HA', 'function': {'arguments': '{"a":2,"b":4}', 'name': 'Multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 103, 'total_tokens': 112}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-095b827e-2bdd-43bb-8897-c843f4504883-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 2, 'b': 4}, 'id': 'call_9cViskmLvPnHjXk9tbVla5HA'}], usage_metadata={'input_tokens': 103, 'output_tokens': 9, 'total_tokens': 112})
```

Even if we pass it something that doesn't require multiplication, it will still call the tool!

We can also force our model to select at least one of our tools by passing the "any" (or "required", which is OpenAI-specific) keyword to the `tool_choice` parameter.

```python
llm_forced_to_use_tool = llm.bind_tools(tools, tool_choice="any")
llm_forced_to_use_tool.invoke("What day is today?")
```

```
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W', 'function': {'arguments': '{"a":1,"b":2}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 94, 'total_tokens': 109}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-28f75260-9900-4bed-8cd3-f1579abb65e5-0', tool_calls=[{'name': 'Add', 'args': {'a': 1, 'b': 2}, 'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W'}], usage_metadata={'input_tokens': 94, 'output_tokens': 15, 'total_tokens': 109})
```
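The snippets above call `llm.bind_tools` without showing the model setup on this page. Following the pattern of the neighbouring guides, here is a hedged sketch of a suitable model; any chat model with native tool calling should work.

```python
import os
from getpass import getpass

from langchain_openai import ChatOpenAI

os.environ["OPENAI_API_KEY"] = getpass()
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
```

Note that the value passed to `tool_choice` generally has to match the name of a bound tool; with the lowercase `@tool` definitions above that would be `"multiply"` rather than `"Multiply"`.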
https://python.langchain.com/v0.2/docs/how_to/tool_results_pass_to_model/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to pass tool outputs to the model How to pass tool outputs to the model ===================================== If we're using the model-generated tool invocations to actually call tools and want to pass the tool results back to the model, we can do so using `ToolMessage`s. First, let's define our tools and our model. from langchain_core.tools import tool@tooldef add(a: int, b: int) -> int: """Adds a and b.""" return a + b@tooldef multiply(a: int, b: int) -> int: """Multiplies a and b.""" return a * btools = [add, multiply] **API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html) import osfrom getpass import getpassfrom langchain_openai import ChatOpenAIos.environ["OPENAI_API_KEY"] = getpass()llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)llm_with_tools = llm.bind_tools(tools) **API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) Now we can use `ToolMessage` to pass back the output of the tool calls to the model. from langchain_core.messages import HumanMessage, ToolMessagequery = "What is 3 * 12? Also, what is 11 + 49?"messages = [HumanMessage(query)]ai_msg = llm_with_tools.invoke(messages)messages.append(ai_msg)for tool_call in ai_msg.tool_calls: selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()] tool_output = selected_tool.invoke(tool_call["args"]) messages.append(ToolMessage(tool_output, tool_call_id=tool_call["id"]))messages **API Reference:**[HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html) [HumanMessage(content='What is 3 * 12? Also, what is 11 + 49?'), AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_svc2GLSxNFALbaCAbSjMI9J8', 'function': {'arguments': '{"a": 3, "b": 12}', 'name': 'Multiply'}, 'type': 'function'}, {'id': 'call_r8jxte3zW6h3MEGV3zH2qzFh', 'function': {'arguments': '{"a": 11, "b": 49}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 50, 'prompt_tokens': 105, 'total_tokens': 155}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_d9767fc5b9', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-a79ad1dd-95f1-4a46-b688-4c83f327a7b3-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_svc2GLSxNFALbaCAbSjMI9J8'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_r8jxte3zW6h3MEGV3zH2qzFh'}]), ToolMessage(content='36', tool_call_id='call_svc2GLSxNFALbaCAbSjMI9J8'), ToolMessage(content='60', tool_call_id='call_r8jxte3zW6h3MEGV3zH2qzFh')] llm_with_tools.invoke(messages) AIMessage(content='3 * 12 is 36 and 11 + 49 is 60.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 171, 'total_tokens': 189}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_d9767fc5b9', 'finish_reason': 'stop', 'logprobs': None}, id='run-20b52149-e00d-48ea-97cf-f8de7a255f8c-0') Note that we pass back the same `id` in the `ToolMessage` as the what we receive from the model in order to help the model match tool responses with tool calls. [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/tool_results_pass_to_model.ipynb) * * * #### Was this page helpful? 
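The loop above can be wrapped into a small helper that performs one complete tool-calling round trip. A hedged sketch reusing the same `llm_with_tools`, `add`, and `multiply` defined above:

```python
from langchain_core.messages import HumanMessage, ToolMessage

def answer_with_tools(question: str):
    """Run one tool-calling round trip: model -> tools -> model."""
    tool_registry = {"add": add, "multiply": multiply}
    messages = [HumanMessage(question)]

    ai_msg = llm_with_tools.invoke(messages)      # model decides which tools to call
    messages.append(ai_msg)

    for tool_call in ai_msg.tool_calls:           # execute each requested tool
        selected_tool = tool_registry[tool_call["name"].lower()]
        tool_output = selected_tool.invoke(tool_call["args"])
        # Reuse the tool_call id so the model can match results to requests.
        messages.append(ToolMessage(tool_output, tool_call_id=tool_call["id"]))

    return llm_with_tools.invoke(messages)        # model writes the final answer

# answer_with_tools("What is 3 * 12? Also, what is 11 + 49?")
```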
https://python.langchain.com/v0.2/docs/how_to/tool_runtime/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to pass run time values to a tool How to pass run time values to a tool ===================================== Prerequisites This guide assumes familiarity with the following concepts: * [Chat models](/v0.2/docs/concepts/#chat-models) * [LangChain Tools](/v0.2/docs/concepts/#tools) * [How to create tools](/v0.2/docs/how_to/custom_tools/) * [How to use a model to call tools](https://python.langchain.com/v0.2/docs/how_to/tool_calling) Supported models This how-to guide uses models with native tool calling capability. You can find a [list of all models that support tool calling](/v0.2/docs/integrations/chat/). Using with LangGraph If you're using LangGraph, please refer to [this how-to guide](https://langchain-ai.github.io/langgraph/how-tos/pass-run-time-values-to-tools/) which shows how to create an agent that keeps track of a given user's favorite pets. You may need to bind values to a tool that are only known at runtime. For example, the tool logic may require using the ID of the user who made the request. Most of the time, such values should not be controlled by the LLM. In fact, allowing the LLM to control the user ID may lead to a security risk. Instead, the LLM should only control the parameters of the tool that are meant to be controlled by the LLM, while other parameters (such as user ID) should be fixed by the application logic. This how-to guide shows a simple design pattern that creates the tool dynamically at run time and binds to them appropriate values. We can bind them to chat models as follows: * OpenAI * Anthropic * Azure * Google * Cohere * FireworksAI * Groq * MistralAI * TogetherAI pip install -qU langchain-openai import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125") pip install -qU langchain-anthropic import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229") pip install -qU langchain-openai import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],) pip install -qU langchain-google-vertexai import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro") pip install -qU langchain-cohere import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r") pip install -qU langchain-fireworks import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/firefunction-v1", temperature=0) pip install -qU langchain-groq import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192") pip install -qU langchain-mistralai import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest") pip install -qU langchain-openai import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from 
langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",) Passing request time information ================================ The idea is to create the tool dynamically at request time, and bind to it the appropriate information. For example, this information may be the user ID as resolved from the request itself. from typing import Listfrom langchain_core.output_parsers import JsonOutputParserfrom langchain_core.tools import BaseTool, tool **API Reference:**[JsonOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html) | [BaseTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html) | [tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html) user_to_pets = {}def generate_tools_for_user(user_id: str) -> List[BaseTool]: """Generate a set of tools that have a user id associated with them.""" @tool def update_favorite_pets(pets: List[str]) -> None: """Add the list of favorite pets.""" user_to_pets[user_id] = pets @tool def delete_favorite_pets() -> None: """Delete the list of favorite pets.""" if user_id in user_to_pets: del user_to_pets[user_id] @tool def list_favorite_pets() -> None: """List favorite pets if any.""" return user_to_pets.get(user_id, []) return [update_favorite_pets, delete_favorite_pets, list_favorite_pets] Verify that the tools work correctly update_pets, delete_pets, list_pets = generate_tools_for_user("eugene")update_pets.invoke({"pets": ["cat", "dog"]})print(user_to_pets)print(list_pets.invoke({})) {'eugene': ['cat', 'dog']}['cat', 'dog'] from langchain_core.prompts import ChatPromptTemplatedef handle_run_time_request(user_id: str, query: str): """Handle run time request.""" tools = generate_tools_for_user(user_id) llm_with_tools = llm.bind_tools(tools) prompt = ChatPromptTemplate.from_messages( [("system", "You are a helpful assistant.")], ) chain = prompt | llm_with_tools return llm_with_tools.invoke(query) **API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) This code will allow the LLM to invoke the tools, but the LLM is **unaware** of the fact that a **user ID** even exists! ai_message = handle_run_time_request( "eugene", "my favorite animals are cats and parrots.")ai_message.tool_calls [{'name': 'update_favorite_pets', 'args': {'pets': ['cats', 'parrots']}, 'id': 'call_jJvjPXsNbFO5MMgW0q84iqCN'}] info Chat models only output requests to invoke tools, they don't actually invoke the underlying tools. To see how to invoke the tools, please refer to [how to use a model to call tools](https://python.langchain.com/v0.2/docs/how_to/tool_calling). [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/tool_runtime.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to pass tool outputs to the model ](/v0.2/docs/how_to/tool_results_pass_to_model/)[ Next How to stream tool calls ](/v0.2/docs/how_to/tool_streaming/)
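If you do want to execute the requested calls in this pattern, one option is to look up each tool by name from the per-user tool list and invoke it with the arguments the model produced. The helper below is a hypothetical sketch (not part of the guide above), assuming the `llm`, `generate_tools_for_user`, and `user_to_pets` objects defined earlier:

```python
from typing import Any, List


def handle_and_execute(user_id: str, query: str) -> List[Any]:
    """Hypothetical helper: bind per-user tools, then run any requested tool calls."""
    tools = generate_tools_for_user(user_id)
    tools_by_name = {t.name: t for t in tools}
    llm_with_tools = llm.bind_tools(tools)
    ai_message = llm_with_tools.invoke(query)
    results = []
    for tool_call in ai_message.tool_calls:
        # Each tool call carries the tool name and the LLM-chosen arguments;
        # the user ID is already baked into the tool via the closure above.
        selected_tool = tools_by_name[tool_call["name"]]
        results.append(selected_tool.invoke(tool_call["args"]))
    return results


handle_and_execute("eugene", "my favorite animals are cats and parrots.")
print(user_to_pets)
```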
https://python.langchain.com/v0.2/docs/how_to/tool_streaming/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to stream tool calls How to stream tool calls ======================== When tools are called in a streaming context, [message chunks](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) will be populated with [tool call chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCallChunk.html#langchain_core.messages.tool.ToolCallChunk) objects in a list via the `.tool_call_chunks` attribute. A `ToolCallChunk` includes optional string fields for the tool `name`, `args`, and `id`, and includes an optional integer field `index` that can be used to join chunks together. Fields are optional because portions of a tool call may be streamed across different chunks (e.g., a chunk that includes a substring of the arguments may have null values for the tool name and id). Because message chunks inherit from their parent message class, an [AIMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) with tool call chunks will also include `.tool_calls` and `.invalid_tool_calls` fields. These fields are parsed best-effort from the message's tool call chunks. Note that not all providers currently support streaming for tool calls. Before we start let's define our tools and our model. from langchain_core.tools import tool@tooldef add(a: int, b: int) -> int: """Adds a and b.""" return a + b@tooldef multiply(a: int, b: int) -> int: """Multiplies a and b.""" return a * btools = [add, multiply] **API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html) import osfrom getpass import getpassfrom langchain_openai import ChatOpenAIos.environ["OPENAI_API_KEY"] = getpass()llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)llm_with_tools = llm.bind_tools(tools) **API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) Now let's define our query and stream our output: query = "What is 3 * 12? Also, what is 11 + 49?"async for chunk in llm_with_tools.astream(query): print(chunk.tool_call_chunks) [][{'name': 'Multiply', 'args': '', 'id': 'call_3aQwTP9CYlFxwOvQZPHDu6wL', 'index': 0}][{'name': None, 'args': '{"a"', 'id': None, 'index': 0}][{'name': None, 'args': ': 3, ', 'id': None, 'index': 0}][{'name': None, 'args': '"b": 1', 'id': None, 'index': 0}][{'name': None, 'args': '2}', 'id': None, 'index': 0}][{'name': 'Add', 'args': '', 'id': 'call_SQUoSsJz2p9Kx2x73GOgN1ja', 'index': 1}][{'name': None, 'args': '{"a"', 'id': None, 'index': 1}][{'name': None, 'args': ': 11,', 'id': None, 'index': 1}][{'name': None, 'args': ' "b": ', 'id': None, 'index': 1}][{'name': None, 'args': '49}', 'id': None, 'index': 1}][] Note that adding message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain's various [tool output parsers](/v0.2/docs/how_to/output_parser_structured/) support streaming. 
For example, below we accumulate tool call chunks: first = Trueasync for chunk in llm_with_tools.astream(query): if first: gathered = chunk first = False else: gathered = gathered + chunk print(gathered.tool_call_chunks) [][{'name': 'Multiply', 'args': '', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}][{'name': 'Multiply', 'args': '{"a"', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}][{'name': 'Multiply', 'args': '{"a": 3, ', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}][{'name': 'Multiply', 'args': '{"a": 3, "b": 1', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{"a"', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{"a": 11,', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{"a": 11, "b": ', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{"a": 11, "b": 49}', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{"a": 11, "b": 49}', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}] print(type(gathered.tool_call_chunks[0]["args"])) <class 'str'> And below we accumulate tool calls to demonstrate partial parsing: first = Trueasync for chunk in llm_with_tools.astream(query): if first: gathered = chunk first = False else: gathered = gathered + chunk print(gathered.tool_calls) [][][{'name': 'Multiply', 'args': {}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}][{'name': 'Multiply', 'args': {'a': 3}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 1}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}] print(type(gathered.tool_calls[0]["args"])) <class 'dict'> [Edit this 
page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/tool_streaming.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to pass run time values to a tool ](/v0.2/docs/how_to/tool_runtime/)[ Next How to convert tools to OpenAI Functions ](/v0.2/docs/how_to/tools_as_openai_functions/)
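As a small follow-on sketch (assuming the `llm_with_tools` and `query` defined in this guide), the accumulated `.tool_calls` can drive incremental UI updates, since the best-effort parsed arguments are usable as soon as they appear:

```python
async def stream_partial_tool_calls() -> None:
    """Print the best-effort parsed tool calls as the accumulated message grows."""
    gathered = None
    async for chunk in llm_with_tools.astream(query):
        gathered = chunk if gathered is None else gathered + chunk
        for call in gathered.tool_calls:
            # call["args"] is a dict that fills in as more argument chunks arrive
            print(call["name"], call["args"])


# In a notebook or other async context:
# await stream_partial_tool_calls()
```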
https://python.langchain.com/v0.2/docs/how_to/tools_as_openai_functions/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to convert tools to OpenAI Functions How to convert tools to OpenAI Functions ======================================== This notebook goes over how to use LangChain tools as OpenAI functions. %pip install -qU langchain-community langchain-openai from langchain_community.tools import MoveFileToolfrom langchain_core.messages import HumanMessagefrom langchain_core.utils.function_calling import convert_to_openai_functionfrom langchain_openai import ChatOpenAI **API Reference:**[MoveFileTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.file_management.move.MoveFileTool.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [convert\_to\_openai\_function](https://api.python.langchain.com/en/latest/utils/langchain_core.utils.function_calling.convert_to_openai_function.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) model = ChatOpenAI(model="gpt-3.5-turbo") tools = [MoveFileTool()]functions = [convert_to_openai_function(t) for t in tools] functions[0] {'name': 'move_file', 'description': 'Move or rename a file from one location to another', 'parameters': {'type': 'object', 'properties': {'source_path': {'description': 'Path of the file to move', 'type': 'string'}, 'destination_path': {'description': 'New path for the moved file', 'type': 'string'}}, 'required': ['source_path', 'destination_path']}} message = model.invoke( [HumanMessage(content="move file foo to bar")], functions=functions) message AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}', 'name': 'move_file'}}) message.additional_kwargs["function_call"] {'name': 'move_file', 'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}'} With OpenAI chat models we can also automatically bind and convert function-like objects with `bind_functions` model_with_functions = model.bind_functions(tools)model_with_functions.invoke([HumanMessage(content="move file foo to bar")]) AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}', 'name': 'move_file'}}) Or we can use the update OpenAI API that uses `tools` and `tool_choice` instead of `functions` and `function_call` by using `ChatOpenAI.bind_tools`: model_with_tools = model.bind_tools(tools)model_with_tools.invoke([HumanMessage(content="move file foo to bar")]) AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_btkY3xV71cEVAOHnNa5qwo44', 'function': {'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}', 'name': 'move_file'}, 'type': 'function'}]}) [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/tools_as_openai_functions.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to stream tool calls ](/v0.2/docs/how_to/tool_streaming/)[ Next How to handle tool errors ](/v0.2/docs/how_to/tools_error/)
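For completeness, a short sketch (assuming the `tools` list above): `convert_to_openai_tool`, a sibling of `convert_to_openai_function` in the same module, produces the newer tool schema that `bind_tools` sends under the hood, which can be handy when inspecting or logging payloads:

```python
from langchain_core.utils.function_calling import convert_to_openai_tool

openai_tools = [convert_to_openai_tool(t) for t in tools]
# Each entry wraps the function schema in the newer tools format
print(openai_tools[0]["type"])              # "function"
print(openai_tools[0]["function"]["name"])  # "move_file"
```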
https://python.langchain.com/v0.2/docs/how_to/tools_few_shot/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to use few-shot prompting with tool calling How to use few-shot prompting with tool calling =============================================== For more complex tool use it's very useful to add few-shot examples to the prompt. We can do this by adding `AIMessage`s with `ToolCall`s and corresponding `ToolMessage`s to our prompt. First let's define our tools and model. from langchain_core.tools import tool@tooldef add(a: int, b: int) -> int: """Adds a and b.""" return a + b@tooldef multiply(a: int, b: int) -> int: """Multiplies a and b.""" return a * btools = [add, multiply] **API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html) import osfrom getpass import getpassfrom langchain_openai import ChatOpenAIos.environ["OPENAI_API_KEY"] = getpass()llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)llm_with_tools = llm.bind_tools(tools) **API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) Let's run our model where we can notice that even with some special instructions our model can get tripped up by order of operations. llm_with_tools.invoke( "Whats 119 times 8 minus 20. Don't do any math yourself, only use tools for math. Respect order of operations").tool_calls [{'name': 'Multiply', 'args': {'a': 119, 'b': 8}, 'id': 'call_T88XN6ECucTgbXXkyDeC2CQj'}, {'name': 'Add', 'args': {'a': 952, 'b': -20}, 'id': 'call_licdlmGsRqzup8rhqJSb1yZ4'}] The model shouldn't be trying to add anything yet, since it technically can't know the results of 119 \* 8 yet. By adding a prompt with some examples we can correct this behavior: from langchain_core.messages import AIMessage, HumanMessage, ToolMessagefrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughexamples = [ HumanMessage( "What's the product of 317253 and 128472 plus four", name="example_user" ), AIMessage( "", name="example_assistant", tool_calls=[ {"name": "Multiply", "args": {"x": 317253, "y": 128472}, "id": "1"} ], ), ToolMessage("16505054784", tool_call_id="1"), AIMessage( "", name="example_assistant", tool_calls=[{"name": "Add", "args": {"x": 16505054784, "y": 4}, "id": "2"}], ), ToolMessage("16505054788", tool_call_id="2"), AIMessage( "The product of 317253 and 128472 plus four is 16505054788", name="example_assistant", ),]system = """You are bad at math but are an expert at using a calculator. 
Use past tool usage as an example of how to correctly use the tools."""few_shot_prompt = ChatPromptTemplate.from_messages( [ ("system", system), *examples, ("human", "{query}"), ])chain = {"query": RunnablePassthrough()} | few_shot_prompt | llm_with_toolschain.invoke("Whats 119 times 8 minus 20").tool_calls **API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) [{'name': 'Multiply', 'args': {'a': 119, 'b': 8}, 'id': 'call_9MvuwQqg7dlJupJcoTWiEsDo'}] And we get the correct output this time. Here's what the [LangSmith trace](https://smith.langchain.com/public/f70550a1-585f-4c9d-a643-13148ab1616f/r) looks like. [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/tools_few_shot.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to handle tool errors ](/v0.2/docs/how_to/tools_error/)[ Next How to add a human-in-the-loop for tools ](/v0.2/docs/how_to/tools_human/)
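If you maintain many examples, it can help to build the message triples from plain data. The helper below is a hypothetical sketch (not part of the guide), assuming the message classes imported above:

```python
from typing import List

from langchain_core.messages import AIMessage, BaseMessage, ToolMessage


def tool_call_example(call_id: str, name: str, args: dict, result) -> List[BaseMessage]:
    """Hypothetical helper: one assistant tool call plus its matching tool result."""
    return [
        AIMessage(
            "",
            name="example_assistant",
            tool_calls=[{"name": name, "args": args, "id": call_id}],
        ),
        # tool_call_id must match the id on the AIMessage tool call above
        ToolMessage(str(result), tool_call_id=call_id),
    ]


more_examples = tool_call_example("3", "Add", {"x": 1, "y": 2}, 3)
```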
https://python.langchain.com/v0.2/docs/how_to/graph_constructing/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to construct knowledge graphs On this page How to construct knowledge graphs ================================= In this guide we'll go over the basic ways of constructing a knowledge graph based on unstructured text. The constructed graph can then be used as a knowledge base in a RAG application. ⚠️ Security note ⚠️[​](#️-security-note-️ "Direct link to ⚠️ Security note ⚠️") ------------------------------------------------------------------------------- Constructing knowledge graphs requires write access to the database. There are inherent risks in doing this. Make sure that you verify and validate data before importing it. For more on general security best practices, [see here](/v0.2/docs/security/). Architecture[​](#architecture "Direct link to Architecture") ------------------------------------------------------------ At a high level, the steps of constructing a knowledge graph from text are: 1. **Extracting structured information from text**: A model is used to extract structured graph information from text. 2. **Storing into graph database**: Storing the extracted structured graph information into a graph database enables downstream RAG applications. Setup[​](#setup "Direct link to Setup") --------------------------------------- First, get the required packages and set environment variables. In this example, we will be using the Neo4j graph database. %pip install --upgrade --quiet langchain langchain-community langchain-openai langchain-experimental neo4j Note: you may need to restart the kernel to use updated packages. We default to OpenAI models in this guide. import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()# Uncomment the below to use LangSmith. Not required.# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()# os.environ["LANGCHAIN_TRACING_V2"] = "true" ········ Next, we need to define the Neo4j credentials and connection. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database. import osfrom langchain_community.graphs import Neo4jGraphos.environ["NEO4J_URI"] = "bolt://localhost:7687"os.environ["NEO4J_USERNAME"] = "neo4j"os.environ["NEO4J_PASSWORD"] = "password"graph = Neo4jGraph() **API Reference:**[Neo4jGraph](https://api.python.langchain.com/en/latest/graphs/langchain_community.graphs.neo4j_graph.Neo4jGraph.html) LLM Graph Transformer[​](#llm-graph-transformer "Direct link to LLM Graph Transformer") --------------------------------------------------------------------------------------- Extracting graph data from text enables the transformation of unstructured information into structured formats, facilitating deeper insights and more efficient navigation through complex relationships and patterns. The `LLMGraphTransformer` converts text documents into structured graph documents by leveraging an LLM to parse and categorize entities and their relationships. The selection of the LLM significantly influences the output by determining the accuracy and nuance of the extracted graph data. 
import osfrom langchain_experimental.graph_transformers import LLMGraphTransformerfrom langchain_openai import ChatOpenAIllm = ChatOpenAI(temperature=0, model_name="gpt-4-turbo")llm_transformer = LLMGraphTransformer(llm=llm) **API Reference:**[LLMGraphTransformer](https://api.python.langchain.com/en/latest/graph_transformers/langchain_experimental.graph_transformers.llm.LLMGraphTransformer.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) Now we can pass in example text and examine the results. from langchain_core.documents import Documenttext = """Marie Curie, born in 1867, was a Polish and naturalised-French physicist and chemist who conducted pioneering research on radioactivity.She was the first woman to win a Nobel Prize, the first person to win a Nobel Prize twice, and the only person to win a Nobel Prize in two scientific fields.Her husband, Pierre Curie, was a co-winner of her first Nobel Prize, making them the first-ever married couple to win the Nobel Prize and launching the Curie family legacy of five Nobel Prizes.She was, in 1906, the first woman to become a professor at the University of Paris."""documents = [Document(page_content=text)]graph_documents = llm_transformer.convert_to_graph_documents(documents)print(f"Nodes:{graph_documents[0].nodes}")print(f"Relationships:{graph_documents[0].relationships}") **API Reference:**[Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) Nodes:[Node(id='Marie Curie', type='Person'), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='MARRIED'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='PROFESSOR')] Examine the following image to better grasp the structure of the generated knowledge graph. ![graph_construction1.png](/v0.2/assets/images/graph_construction1-2b4d31978d58696d5a6a52ad92ae088f.png) Note that the graph construction process is non-deterministic since we are using LLM. Therefore, you might get slightly different results on each execution. Additionally, you have the flexibility to define specific types of nodes and relationships for extraction according to your requirements. llm_transformer_filtered = LLMGraphTransformer( llm=llm, allowed_nodes=["Person", "Country", "Organization"], allowed_relationships=["NATIONALITY", "LOCATED_IN", "WORKED_AT", "SPOUSE"],)graph_documents_filtered = llm_transformer_filtered.convert_to_graph_documents( documents)print(f"Nodes:{graph_documents_filtered[0].nodes}")print(f"Relationships:{graph_documents_filtered[0].relationships}") Nodes:[Node(id='Marie Curie', type='Person'), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='SPOUSE'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='WORKED_AT')] For a better understanding of the generated graph, we can again visualize it. 
![graph_construction2.png](/v0.2/assets/images/graph_construction2-8b43506ae0fb3a006eaa4ba83fea8af5.png) The `node_properties` parameter enables the extraction of node properties, allowing the creation of a more detailed graph. When set to `True`, LLM autonomously identifies and extracts relevant node properties. Conversely, if `node_properties` is defined as a list of strings, the LLM selectively retrieves only the specified properties from the text. llm_transformer_props = LLMGraphTransformer( llm=llm, allowed_nodes=["Person", "Country", "Organization"], allowed_relationships=["NATIONALITY", "LOCATED_IN", "WORKED_AT", "SPOUSE"], node_properties=["born_year"],)graph_documents_props = llm_transformer_props.convert_to_graph_documents(documents)print(f"Nodes:{graph_documents_props[0].nodes}")print(f"Relationships:{graph_documents_props[0].relationships}") Nodes:[Node(id='Marie Curie', type='Person', properties={'born_year': '1867'}), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='SPOUSE'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='WORKED_AT')] Storing to graph database[​](#storing-to-graph-database "Direct link to Storing to graph database") --------------------------------------------------------------------------------------------------- The generated graph documents can be stored to a graph database using the `add_graph_documents` method. graph.add_graph_documents(graph_documents_props) [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/graph_constructing.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous Build an Agent with AgentExecutor (Legacy) ](/v0.2/docs/how_to/agent_executor/)[ Next How to partially format prompt templates ](/v0.2/docs/how_to/prompts_partial/) * [⚠️ Security note ⚠️](#️-security-note-️) * [Architecture](#architecture) * [Setup](#setup) * [LLM Graph Transformer](#llm-graph-transformer) * [Storing to graph database](#storing-to-graph-database)
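As a quick sanity check after storing (a sketch, assuming the `graph` object defined in the Setup section), `Neo4jGraph.query` can run arbitrary Cypher against the populated database:

```python
# Inspect a few stored nodes to confirm the import worked
rows = graph.query("MATCH (n) RETURN labels(n) AS labels, n.id AS id LIMIT 5")
for row in rows:
    print(row)
```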
https://python.langchain.com/v0.2/docs/how_to/tools_error/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to handle tool errors On this page How to handle tool errors ========================= Using a model to invoke a tool has some obvious potential failure modes. Firstly, the model needs to return a output that can be parsed at all. Secondly, the model needs to return tool arguments that are valid. We can build error handling into our chains to mitigate these failure modes. Setup[​](#setup "Direct link to Setup") --------------------------------------- We'll need to install the following packages: %pip install --upgrade --quiet langchain-core langchain-openai If you'd like to trace your runs in [LangSmith](https://docs.smith.langchain.com/) uncomment and set the following environment variables: import getpassimport os# os.environ["LANGCHAIN_TRACING_V2"] = "true"# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass() Chain[​](#chain "Direct link to Chain") --------------------------------------- Suppose we have the following (dummy) tool and tool-calling chain. We'll make our tool intentionally convoluted to try and trip up the model. * OpenAI * Anthropic * Azure * Google * Cohere * FireworksAI * Groq * MistralAI * TogetherAI pip install -qU langchain-openai import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125") pip install -qU langchain-anthropic import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229") pip install -qU langchain-openai import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],) pip install -qU langchain-google-vertexai import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro") pip install -qU langchain-cohere import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r") pip install -qU langchain-fireworks import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct") pip install -qU langchain-groq import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192") pip install -qU langchain-mistralai import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest") pip install -qU langchain-openai import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",) # Define toolfrom langchain_core.tools import tool@tooldef complex_tool(int_arg: int, float_arg: float, dict_arg: dict) -> int: """Do something complex with a complex tool.""" return int_arg * float_arg **API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html) llm_with_tools = 
llm.bind_tools( [complex_tool],) # Define chainchain = llm_with_tools | (lambda msg: msg.tool_calls[0]["args"]) | complex_tool We can see that when we try to invoke this chain with even a fairly explicit input, the model fails to correctly call the tool (it forgets the `dict_arg` argument). chain.invoke( "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg") ---------------------------------------------------------------------------``````outputValidationError Traceback (most recent call last)``````outputCell In[12], line 1----> 1 chain.invoke( 2 "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg" 3 )``````outputFile ~/langchain/libs/core/langchain_core/runnables/base.py:2499, in RunnableSequence.invoke(self, input, config) 2497 try: 2498 for i, step in enumerate(self.steps):-> 2499 input = step.invoke( 2500 input, 2501 # mark each step as a child run 2502 patch_config( 2503 config, callbacks=run_manager.get_child(f"seq:step:{i+1}") 2504 ), 2505 ) 2506 # finish the root run 2507 except BaseException as e:``````outputFile ~/langchain/libs/core/langchain_core/tools.py:241, in BaseTool.invoke(self, input, config, **kwargs) 234 def invoke( 235 self, 236 input: Union[str, Dict], 237 config: Optional[RunnableConfig] = None, 238 **kwargs: Any, 239 ) -> Any: 240 config = ensure_config(config)--> 241 return self.run( 242 input, 243 callbacks=config.get("callbacks"), 244 tags=config.get("tags"), 245 metadata=config.get("metadata"), 246 run_name=config.get("run_name"), 247 run_id=config.pop("run_id", None), 248 **kwargs, 249 )``````outputFile ~/langchain/libs/core/langchain_core/tools.py:387, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, **kwargs) 385 except ValidationError as e: 386 if not self.handle_validation_error:--> 387 raise e 388 elif isinstance(self.handle_validation_error, bool): 389 observation = "Tool input validation error"``````outputFile ~/langchain/libs/core/langchain_core/tools.py:378, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, **kwargs) 364 run_manager = callback_manager.on_tool_start( 365 {"name": self.name, "description": self.description}, 366 tool_input if isinstance(tool_input, str) else str(tool_input), (...) 
375 **kwargs, 376 ) 377 try:--> 378 parsed_input = self._parse_input(tool_input) 379 tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input) 380 observation = ( 381 self._run(*tool_args, run_manager=run_manager, **tool_kwargs) 382 if new_arg_supported 383 else self._run(*tool_args, **tool_kwargs) 384 )``````outputFile ~/langchain/libs/core/langchain_core/tools.py:283, in BaseTool._parse_input(self, tool_input) 281 else: 282 if input_args is not None:--> 283 result = input_args.parse_obj(tool_input) 284 return { 285 k: getattr(result, k) 286 for k, v in result.dict().items() 287 if k in tool_input 288 } 289 return tool_input``````outputFile ~/langchain/.venv/lib/python3.9/site-packages/pydantic/v1/main.py:526, in BaseModel.parse_obj(cls, obj) 524 exc = TypeError(f'{cls.__name__} expected dict not {obj.__class__.__name__}') 525 raise ValidationError([ErrorWrapper(exc, loc=ROOT_KEY)], cls) from e--> 526 return cls(**obj)``````outputFile ~/langchain/.venv/lib/python3.9/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data) 339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data) 340 if validation_error:--> 341 raise validation_error 342 try: 343 object_setattr(__pydantic_self__, '__dict__', values)``````outputValidationError: 1 validation error for complex_toolSchemadict_arg field required (type=value_error.missing) Try/except tool call[​](#tryexcept-tool-call "Direct link to Try/except tool call") ----------------------------------------------------------------------------------- The simplest way to more gracefully handle errors is to try/except the tool-calling step and return a helpful message on errors: from typing import Anyfrom langchain_core.runnables import Runnable, RunnableConfigdef try_except_tool(tool_args: dict, config: RunnableConfig) -> Runnable: try: complex_tool.invoke(tool_args, config=config) except Exception as e: return f"Calling tool with arguments:\n\n{tool_args}\n\nraised the following error:\n\n{type(e)}: {e}"chain = llm_with_tools | (lambda msg: msg.tool_calls[0]["args"]) | try_except_tool **API Reference:**[Runnable](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html) | [RunnableConfig](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.config.RunnableConfig.html) print( chain.invoke( "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg" )) Calling tool with arguments:{'int_arg': 5, 'float_arg': 2.1}raised the following error:<class 'pydantic.v1.error_wrappers.ValidationError'>: 1 validation error for complex_toolSchemadict_arg field required (type=value_error.missing) Fallbacks[​](#fallbacks "Direct link to Fallbacks") --------------------------------------------------- We can also try to fallback to a better model in the event of a tool invocation error. In this case we'll fall back to an identical chain that uses `gpt-4-1106-preview` instead of `gpt-3.5-turbo`. chain = llm_with_tools | (lambda msg: msg.tool_calls[0]["args"]) | complex_toolbetter_model = ChatOpenAI(model="gpt-4-1106-preview", temperature=0).bind_tools( [complex_tool], tool_choice="complex_tool")better_chain = better_model | (lambda msg: msg.tool_calls[0]["args"]) | complex_toolchain_with_fallback = chain.with_fallbacks([better_chain])chain_with_fallback.invoke( "use complex tool. the args are 5, 2.1, empty dictionary. 
don't forget dict_arg") 10.5 Looking at the [Langsmith trace](https://smith.langchain.com/public/00e91fc2-e1a4-4b0f-a82e-e6b3119d196c/r) for this chain run, we can see that the first chain call fails as expected and it's the fallback that succeeds. Retry with exception[​](#retry-with-exception "Direct link to Retry with exception") ------------------------------------------------------------------------------------ To take things one step further, we can try to automatically re-run the chain with the exception passed in, so that the model may be able to correct its behavior: import jsonfrom typing import Anyfrom langchain_core.messages import AIMessage, HumanMessage, ToolCall, ToolMessagefrom langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholderfrom langchain_core.runnables import RunnablePassthroughclass CustomToolException(Exception): """Custom LangChain tool exception.""" def __init__(self, tool_call: ToolCall, exception: Exception) -> None: super().__init__() self.tool_call = tool_call self.exception = exceptiondef tool_custom_exception(msg: AIMessage, config: RunnableConfig) -> Runnable: try: return complex_tool.invoke(msg.tool_calls[0]["args"], config=config) except Exception as e: raise CustomToolException(msg.tool_calls[0], e)def exception_to_messages(inputs: dict) -> dict: exception = inputs.pop("exception") # Add historical messages to the original input, so the model knows that it made a mistake with the last tool call. messages = [ AIMessage(content="", tool_calls=[exception.tool_call]), ToolMessage( tool_call_id=exception.tool_call["id"], content=str(exception.exception) ), HumanMessage( content="The last tool call raised an exception. Try calling the tool again with corrected arguments. Do not repeat mistakes." ), ] inputs["last_output"] = messages return inputs# We add a last_output MessagesPlaceholder to our prompt which if not passed in doesn't# affect the prompt at all, but gives us the option to insert an arbitrary list of Messages# into the prompt if needed. We'll use this on retries to insert the error message.prompt = ChatPromptTemplate.from_messages( [("human", "{input}"), MessagesPlaceholder("last_output", optional=True)])chain = prompt | llm_with_tools | tool_custom_exception# If the initial chain call fails, we rerun it withe the exception passed in as a message.self_correcting_chain = chain.with_fallbacks( [exception_to_messages | chain], exception_key="exception") **API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [ToolCall](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCall.html) | [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.MessagesPlaceholder.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) self_correcting_chain.invoke( { "input": "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg" }) 10.5 And our chain succeeds! 
Looking at the [LangSmith trace](https://smith.langchain.com/public/c11e804c-e14f-4059-bd09-64766f999c14/r), we can see that indeed our initial chain still fails, and it's only on retrying that the chain succeeds. [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/tools_error.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to convert tools to OpenAI Functions ](/v0.2/docs/how_to/tools_as_openai_functions/)[ Next How to use few-shot prompting with tool calling ](/v0.2/docs/how_to/tools_few_shot/) * [Setup](#setup) * [Chain](#chain) * [Try/except tool call](#tryexcept-tool-call) * [Fallbacks](#fallbacks) * [Retry with exception](#retry-with-exception)
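One more lightweight option, shown as a sketch below (assuming the `chain` from the Chain section): `.with_retry()` re-runs a failing step on exceptions. Unlike the self-correcting chain it retries with the same input, so it mainly helps with transient failures rather than bad arguments:

```python
# Retry the whole tool-calling chain up to 2 times on any exception
retrying_chain = chain.with_retry(stop_after_attempt=2)

# retrying_chain.invoke(
#     "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg"
# )
```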
https://python.langchain.com/v0.2/docs/how_to/agent_executor/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * Build an Agent with AgentExecutor (Legacy) On this page Build an Agent with AgentExecutor (Legacy) ========================================== info This section will cover building with the legacy LangChain AgentExecutor. These are fine for getting started, but past a certain point, you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph Agents](/v0.2/docs/concepts/#langgraph) or the [migration guide](/v0.2/docs/how_to/migrate_agent/) By themselves, language models can't take actions - they just output text. A big use case for LangChain is creating **agents**. Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be. The results of those actions can then be fed back into the agent and it determines whether more actions are needed, or whether it is okay to finish. In this tutorial, we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it. Concepts[​](#concepts "Direct link to Concepts") ------------------------------------------------ Concepts we will cover are: * Using [language models](/v0.2/docs/concepts/#chat-models), in particular their tool calling ability * Creating a [Retriever](/v0.2/docs/concepts/#retrievers) to expose specific information to our agent * Using a Search [Tool](/v0.2/docs/concepts/#tools) to look up things online * [`Chat History`](/v0.2/docs/concepts/#chat-history), which allows a chatbot to "remember" past interactions and take them into account when responding to follow-up questions. * Debugging and tracing your application using [LangSmith](/v0.2/docs/concepts/#langsmith) Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Jupyter Notebook[​](#jupyter-notebook "Direct link to Jupyter Notebook") This guide (and most of the other guides in the documentation) uses [Jupyter notebooks](https://jupyter.org/) and assumes the reader is as well. Jupyter notebooks are perfect for learning how to work with LLM systems because oftentimes things can go wrong (unexpected output, API down, etc) and going through guides in an interactive environment is a great way to better understand them. This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See [here](https://jupyter.org/install) for instructions on how to install. ### Installation[​](#installation "Direct link to Installation") To install LangChain run: * Pip * Conda pip install langchain conda install langchain -c conda-forge For more details, see our [Installation guide](/v0.2/docs/how_to/installation/). ### LangSmith[​](#langsmith "Direct link to LangSmith") Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com). After you sign up at the link above, make sure to set your environment variables to start logging traces: export LANGCHAIN_TRACING_V2="true"export LANGCHAIN_API_KEY="..." 
Or, if in a notebook, you can set them with: import getpassimport osos.environ["LANGCHAIN_TRACING_V2"] = "true"os.environ["LANGCHAIN_API_KEY"] = getpass.getpass() Define tools[​](#define-tools "Direct link to Define tools") ------------------------------------------------------------ We first need to create the tools we want to use. We will use two tools: [Tavily](/v0.2/docs/integrations/tools/tavily_search/) (to search online) and then a retriever over a local index we will create ### [Tavily](/v0.2/docs/integrations/tools/tavily_search/)[​](#tavily "Direct link to tavily") We have a built-in tool in LangChain to easily use Tavily search engine as tool. Note that this requires an API key - they have a free tier, but if you don't have one or don't want to create one, you can always ignore this step. Once you create your API key, you will need to export that as: export TAVILY_API_KEY="..." from langchain_community.tools.tavily_search import TavilySearchResults **API Reference:**[TavilySearchResults](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.tavily_search.tool.TavilySearchResults.html) search = TavilySearchResults(max_results=2) search.invoke("what is the weather in SF") [{'url': 'https://www.weatherapi.com/', 'content': "{'location': {'name': 'San Francisco', 'region': 'California', 'country': 'United States of America', 'lat': 37.78, 'lon': -122.42, 'tz_id': 'America/Los_Angeles', 'localtime_epoch': 1714000492, 'localtime': '2024-04-24 16:14'}, 'current': {'last_updated_epoch': 1713999600, 'last_updated': '2024-04-24 16:00', 'temp_c': 15.6, 'temp_f': 60.1, 'is_day': 1, 'condition': {'text': 'Overcast', 'icon': '//cdn.weatherapi.com/weather/64x64/day/122.png', 'code': 1009}, 'wind_mph': 10.5, 'wind_kph': 16.9, 'wind_degree': 330, 'wind_dir': 'NNW', 'pressure_mb': 1018.0, 'pressure_in': 30.06, 'precip_mm': 0.0, 'precip_in': 0.0, 'humidity': 72, 'cloud': 100, 'feelslike_c': 15.6, 'feelslike_f': 60.1, 'vis_km': 16.0, 'vis_miles': 9.0, 'uv': 5.0, 'gust_mph': 14.8, 'gust_kph': 23.8}}"}, {'url': 'https://www.weathertab.com/en/c/e/04/united-states/california/san-francisco/', 'content': 'San Francisco Weather Forecast for Apr 2024 - Risk of Rain Graph. Rain Risk Graph: Monthly Overview. Bar heights indicate rain risk percentages. Yellow bars mark low-risk days, while black and grey bars signal higher risks. Grey-yellow bars act as buffers, advising to keep at least one day clear from the riskier grey and black days, guiding ...'}] ### Retriever[​](#retriever "Direct link to Retriever") We will also create a retriever over some data of our own. For a deeper explanation of each step here, see [this tutorial](/v0.2/docs/tutorials/rag/). 
from langchain_community.document_loaders import WebBaseLoaderfrom langchain_community.vectorstores import FAISSfrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import RecursiveCharacterTextSplitterloader = WebBaseLoader("https://docs.smith.langchain.com/overview")docs = loader.load()documents = RecursiveCharacterTextSplitter( chunk_size=1000, chunk_overlap=200).split_documents(docs)vector = FAISS.from_documents(documents, OpenAIEmbeddings())retriever = vector.as_retriever() **API Reference:**[WebBaseLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) | [FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html) retriever.invoke("how to upload a dataset")[0] Document(page_content='# The data to predict and grade over evaluators=[exact_match], # The evaluators to score the results experiment_prefix="sample-experiment", # The name of the experiment metadata={ "version": "1.0.0", "revision_id": "beta" },)import { Client, Run, Example } from \'langsmith\';import { runOnDataset } from \'langchain/smith\';import { EvaluationResult } from \'langsmith/evaluation\';const client = new Client();// Define dataset: these are your test casesconst datasetName = "Sample Dataset";const dataset = await client.createDataset(datasetName, { description: "A sample dataset in LangSmith."});await client.createExamples({ inputs: [ { postfix: "to LangSmith" }, { postfix: "to Evaluations in LangSmith" }, ], outputs: [ { output: "Welcome to LangSmith" }, { output: "Welcome to Evaluations in LangSmith" }, ], datasetId: dataset.id,});// Define your evaluatorconst exactMatch = async ({ run, example }: { run: Run; example?:', metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'Getting started with LangSmith | \uf8ffΓΌΒΆΓΊΓ”βˆΓ¨\uf8ffΓΌΓ΅β€ Γ”βˆΓ¨ LangSmith', 'description': 'Introduction', 'language': 'en'}) Now that we have populated our index that we will do doing retrieval over, we can easily turn it into a tool (the format needed for an agent to properly use it) from langchain.tools.retriever import create_retriever_tool **API Reference:**[create\_retriever\_tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.create_retriever_tool.html) retriever_tool = create_retriever_tool( retriever, "langsmith_search", "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",) ### Tools[​](#tools "Direct link to Tools") Now that we have created both, we can create a list of tools that we will use downstream. tools = [search, retriever_tool] Using Language Models[​](#using-language-models "Direct link to Using Language Models") --------------------------------------------------------------------------------------- Next, let's learn how to use a language model by to call tools. LangChain supports many different language models that you can use interchangably - select the one you want to use below! 
* OpenAI * Anthropic * Azure * Google * Cohere * FireworksAI * Groq * MistralAI * TogetherAI pip install -qU langchain-openai import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAImodel = ChatOpenAI(model="gpt-4") pip install -qU langchain-anthropic import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicmodel = ChatAnthropic(model="claude-3-sonnet-20240229") pip install -qU langchain-openai import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAImodel = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],) pip install -qU langchain-google-vertexai import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAImodel = ChatVertexAI(model="gemini-pro") pip install -qU langchain-cohere import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoheremodel = ChatCohere(model="command-r") pip install -qU langchain-fireworks import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksmodel = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct") pip install -qU langchain-groq import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqmodel = ChatGroq(model="llama3-8b-8192") pip install -qU langchain-mistralai import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAImodel = ChatMistralAI(model="mistral-large-latest") pip install -qU langchain-openai import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAImodel = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",) You can call the language model by passing in a list of messages. By default, the response is a `content` string. from langchain_core.messages import HumanMessageresponse = model.invoke([HumanMessage(content="hi!")])response.content **API Reference:**[HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) 'Hello! How can I assist you today?' We can now see what it is like to enable this model to do tool calling. In order to enable that we use `.bind_tools` to give the language model knowledge of these tools model_with_tools = model.bind_tools(tools) We can now call the model. Let's first call it with a normal message, and see how it responds. We can look at both the `content` field as well as the `tool_calls` field. response = model_with_tools.invoke([HumanMessage(content="Hi!")])print(f"ContentString: {response.content}")print(f"ToolCalls: {response.tool_calls}") ContentString: Hello! How can I assist you today?ToolCalls: [] Now, let's try calling it with some input that would expect a tool to be called. 
response = model_with_tools.invoke([HumanMessage(content="What's the weather in SF?")])print(f"ContentString: {response.content}")print(f"ToolCalls: {response.tool_calls}") ContentString: ToolCalls: [{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_4HteVahXkRAkWjp6dGXryKZX'}] We can see that there's now no content, but there is a tool call! It wants us to call the Tavily Search tool. This isn't calling that tool yet - it's just telling us to. In order to actually call it, we'll want to create our agent. Create the agent[​](#create-the-agent "Direct link to Create the agent") ------------------------------------------------------------------------ Now that we have defined the tools and the LLM, we can create the agent. We will be using a tool calling agent - for more information on this type of agent, as well as other options, see [this guide](/v0.2/docs/concepts/#agent_types/). We can first choose the prompt we want to use to guide the agent. If you want to see the contents of this prompt and have access to LangSmith, you can go to: [https://smith.langchain.com/hub/hwchase17/openai-functions-agent](https://smith.langchain.com/hub/hwchase17/openai-functions-agent) from langchain import hub# Get the prompt to use - you can modify this!prompt = hub.pull("hwchase17/openai-functions-agent")prompt.messages [SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant')), MessagesPlaceholder(variable_name='chat_history', optional=True), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')), MessagesPlaceholder(variable_name='agent_scratchpad')] Now, we can initialize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/v0.2/docs/concepts/#agents). Note that we are passing in the `model`, not `model_with_tools`. That is because `create_tool_calling_agent` will call `.bind_tools` for us under the hood. from langchain.agents import create_tool_calling_agentagent = create_tool_calling_agent(model, tools, prompt) **API Reference:**[create\_tool\_calling\_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html) Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools). from langchain.agents import AgentExecutoragent_executor = AgentExecutor(agent=agent, tools=tools) **API Reference:**[AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html) Run the agent[​](#run-the-agent "Direct link to Run the agent") --------------------------------------------------------------- We can now run the agent on a few queries! Note that for now, these are all **stateless** queries (it won't remember previous interactions). First up, let's see how it responds when there's no need to call a tool: agent_executor.invoke({"input": "hi!"}) {'input': 'hi!', 'output': 'Hello! 
How can I assist you today?'} In order to see exactly what is happening under the hood (and to make sure it's not calling a tool) we can take a look at the [LangSmith trace](https://smith.langchain.com/public/8441812b-94ce-4832-93ec-e1114214553a/r) Let's now try it out on an example where it should be invoking the retriever agent_executor.invoke({"input": "how can langsmith help with testing?"}) {'input': 'how can langsmith help with testing?', 'output': 'LangSmith is a platform that aids in building production-grade Language Learning Model (LLM) applications. It can assist with testing in several ways:\n\n1. **Monitoring and Evaluation**: LangSmith allows close monitoring and evaluation of your application. This helps you to ensure the quality of your application and deploy it with confidence.\n\n2. **Tracing**: LangSmith has tracing capabilities that can be beneficial for debugging and understanding the behavior of your application.\n\n3. **Evaluation Capabilities**: LangSmith has built-in tools for evaluating the performance of your LLM. \n\n4. **Prompt Hub**: This is a prompt management tool built into LangSmith that can help in testing different prompts and their responses.\n\nPlease note that to use LangSmith, you would need to install it and create an API key. The platform offers Python and Typescript SDKs for utilization. It works independently and does not require the use of LangChain.'} Let's take a look at the [LangSmith trace](https://smith.langchain.com/public/762153f6-14d4-4c98-8659-82650f860c62/r) to make sure it's actually calling that. Now let's try one where it needs to call the search tool: agent_executor.invoke({"input": "whats the weather in sf?"}) {'input': 'whats the weather in sf?', 'output': 'The current weather in San Francisco is partly cloudy with a temperature of 16.1Β°C (61.0Β°F). The wind is coming from the WNW at a speed of 10.5 mph. The humidity is at 67%. [source](https://www.weatherapi.com/)'} We can check out the [LangSmith trace](https://smith.langchain.com/public/36df5b1a-9a0b-4185-bae2-964e1d53c665/r) to make sure it's calling the search tool effectively. Adding in memory[​](#adding-in-memory "Direct link to Adding in memory") ------------------------------------------------------------------------ As mentioned earlier, this agent is stateless. This means it does not remember previous interactions. To give it memory we need to pass in previous `chat_history`. Note: it needs to be called `chat_history` because of the prompt we are using. If we use a different prompt, we could change the variable name # Here we pass in an empty list of messages for chat_history because it is the first message in the chatagent_executor.invoke({"input": "hi! my name is bob", "chat_history": []}) {'input': 'hi! my name is bob', 'chat_history': [], 'output': 'Hello Bob! How can I assist you today?'} from langchain_core.messages import AIMessage, HumanMessage **API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) agent_executor.invoke( { "chat_history": [ HumanMessage(content="hi! my name is bob"), AIMessage(content="Hello Bob! How can I assist you today?"), ], "input": "what's my name?", }) {'chat_history': [HumanMessage(content='hi! my name is bob'), AIMessage(content='Hello Bob! How can I assist you today?')], 'input': "what's my name?", 'output': 'Your name is Bob. 
How can I assist you further?'} If we want to keep track of these messages automatically, we can wrap this in a RunnableWithMessageHistory. For more information on how to use this, see [this guide](/v0.2/docs/how_to/message_history/). from langchain_community.chat_message_histories import ChatMessageHistoryfrom langchain_core.chat_history import BaseChatMessageHistoryfrom langchain_core.runnables.history import RunnableWithMessageHistorystore = {}def get_session_history(session_id: str) -> BaseChatMessageHistory: if session_id not in store: store[session_id] = ChatMessageHistory() return store[session_id] **API Reference:**[ChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.ChatMessageHistory.html) | [BaseChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.BaseChatMessageHistory.html) | [RunnableWithMessageHistory](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html) Because we have multiple inputs, we need to specify two things: * `input_messages_key`: The input key to use to add to the conversation history. * `history_messages_key`: The key to add the loaded messages into. agent_with_chat_history = RunnableWithMessageHistory( agent_executor, get_session_history, input_messages_key="input", history_messages_key="chat_history",) agent_with_chat_history.invoke( {"input": "hi! I'm bob"}, config={"configurable": {"session_id": "<foo>"}},) {'input': "hi! I'm bob", 'chat_history': [], 'output': 'Hello Bob! How can I assist you today?'} agent_with_chat_history.invoke( {"input": "what's my name?"}, config={"configurable": {"session_id": "<foo>"}},) {'input': "what's my name?", 'chat_history': [HumanMessage(content="hi! I'm bob"), AIMessage(content='Hello Bob! How can I assist you today?')], 'output': 'Your name is Bob.'} Example LangSmith trace: [https://smith.langchain.com/public/98c8d162-60ae-4493-aa9f-992d87bd0429/r](https://smith.langchain.com/public/98c8d162-60ae-4493-aa9f-992d87bd0429/r) Conclusion[​](#conclusion "Direct link to Conclusion") ------------------------------------------------------ That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's a lot to learn! info This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph](/v0.2/docs/concepts/#langgraph) If you want to continue using LangChain agents, some good advanced guides are: * [How to use LangGraph's built-in versions of `AgentExecutor`](/v0.2/docs/how_to/migrate_agent/) * [How to create a custom agent](https://python.langchain.com/v0.1/docs/modules/agents/how_to/custom_agent/) * [How to stream responses from an agent](https://python.langchain.com/v0.1/docs/modules/agents/how_to/streaming/) * [How to return structured output from an agent](https://python.langchain.com/v0.1/docs/modules/agents/how_to/agent_structured/)
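To recap, here is a compact sketch that assembles the pieces from this quickstart into one runnable script. It assumes `OPENAI_API_KEY` and `TAVILY_API_KEY` are set and, for brevity, uses only the Tavily search tool rather than the full tool list from the guide; the model choice is likewise illustrative.

```python
# Minimal end-to-end sketch of the tool-calling agent described above.
# Assumes OPENAI_API_KEY and TAVILY_API_KEY are set in the environment.
from langchain import hub
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-3.5-turbo-0125")
tools = [TavilySearchResults(max_results=2)]

# Prompt with `chat_history` and `agent_scratchpad` placeholders.
prompt = hub.pull("hwchase17/openai-functions-agent")

# create_tool_calling_agent binds the tools to the model for us.
agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

print(agent_executor.invoke({"input": "whats the weather in sf?"}))
```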
null
https://python.langchain.com/v0.2/docs/how_to/prompts_partial/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to partially format prompt templates On this page How to partially format prompt templates ======================================== Prerequisites This guide assumes familiarity with the following concepts: * [Prompt templates](/v0.2/docs/concepts/#prompt-templates) Like partially binding arguments to a function, it can make sense to "partial" a prompt template - e.g. pass in a subset of the required values, as to create a new prompt template which expects only the remaining subset of values. LangChain supports this in two ways: 1. Partial formatting with string values. 2. Partial formatting with functions that return string values. In the examples below, we go over the motivations for both use cases as well as how to do it in LangChain. Partial with strings[​](#partial-with-strings "Direct link to Partial with strings") ------------------------------------------------------------------------------------ One common use case for wanting to partial a prompt template is if you get access to some of the variables in a prompt before others. For example, suppose you have a prompt template that requires two variables, `foo` and `baz`. If you get the `foo` value early on in your chain, but the `baz` value later, it can be inconvenient to pass both variables all the way through the chain. Instead, you can partial the prompt template with the `foo` value, and then pass the partialed prompt template along and just use that. Below is an example of doing this: from langchain_core.prompts import PromptTemplateprompt = PromptTemplate.from_template("{foo}{bar}")partial_prompt = prompt.partial(foo="foo")print(partial_prompt.format(bar="baz")) **API Reference:**[PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) foobaz You can also just initialize the prompt with the partialed variables. prompt = PromptTemplate( template="{foo}{bar}", input_variables=["bar"], partial_variables={"foo": "foo"})print(prompt.format(bar="baz")) foobaz Partial with functions[​](#partial-with-functions "Direct link to Partial with functions") ------------------------------------------------------------------------------------------ The other common use is to partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can't hard code it in the prompt, and passing it along with the other input variables is inconvenient. In this case, it's handy to be able to partial the prompt with a function that always returns the current date. from datetime import datetimedef _get_datetime(): now = datetime.now() return now.strftime("%m/%d/%Y, %H:%M:%S")prompt = PromptTemplate( template="Tell me a {adjective} joke about the day {date}", input_variables=["adjective", "date"],)partial_prompt = prompt.partial(date=_get_datetime)print(partial_prompt.format(adjective="funny")) Tell me a funny joke about the day 04/21/2024, 19:43:57 You can also just initialize the prompt with the partialed variables, which often makes more sense in this workflow. 
prompt = PromptTemplate( template="Tell me a {adjective} joke about the day {date}", input_variables=["adjective"], partial_variables={"date": _get_datetime},)print(prompt.format(adjective="funny")) Tell me a funny joke about the day 04/21/2024, 19:43:57 Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned how to partially apply variables to your prompt templates. Next, check out the other how-to guides on prompt templates in this section, like [adding few-shot examples to your prompt templates](/v0.2/docs/how_to/few_shot_examples_chat/). [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/prompts_partial.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to construct knowledge graphs ](/v0.2/docs/how_to/graph_constructing/)[ Next How to handle multiple queries when doing query analysis ](/v0.2/docs/how_to/query_multiple_queries/) * [Partial with strings](#partial-with-strings) * [Partial with functions](#partial-with-functions) * [Next steps](#next-steps)
null
https://python.langchain.com/v0.2/docs/how_to/query_multiple_queries/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to handle multiple queries when doing query analysis On this page How to handle multiple queries when doing query analysis ======================================================== Sometimes, a query analysis technique may allow for multiple queries to be generated. In these cases, we need to remember to run all queries and then to combine the results. We will show a simple example (using mock data) of how to do that. Setup[​](#setup "Direct link to Setup") --------------------------------------- #### Install dependencies[​](#install-dependencies "Direct link to Install dependencies") # %pip install -qU langchain langchain-community langchain-openai langchain-chroma #### Set environment variables[​](#set-environment-variables "Direct link to Set environment variables") We'll use OpenAI in this example: import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.# os.environ["LANGCHAIN_TRACING_V2"] = "true"# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass() ### Create Index[​](#create-index "Direct link to Create Index") We will create a vectorstore over fake information. from langchain_chroma import Chromafrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import RecursiveCharacterTextSplittertexts = ["Harrison worked at Kensho", "Ankush worked at Facebook"]embeddings = OpenAIEmbeddings(model="text-embedding-3-small")vectorstore = Chroma.from_texts( texts, embeddings,)retriever = vectorstore.as_retriever(search_kwargs={"k": 1}) **API Reference:**[OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html) Query analysis[​](#query-analysis "Direct link to Query analysis") ------------------------------------------------------------------ We will use function calling to structure the output. We will let it return multiple queries. 
from typing import List, Optionalfrom langchain_core.pydantic_v1 import BaseModel, Fieldclass Search(BaseModel): """Search over a database of job records.""" queries: List[str] = Field( ..., description="Distinct queries to search for", ) from langchain_core.output_parsers.openai_tools import PydanticToolsParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import ChatOpenAIoutput_parser = PydanticToolsParser(tools=[Search])system = """You have the ability to issue search queries to get information to help answer user information.If you need to look up two distinct pieces of information, you are allowed to do that!"""prompt = ChatPromptTemplate.from_messages( [ ("system", system), ("human", "{question}"), ])llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)structured_llm = llm.with_structured_output(Search)query_analyzer = {"question": RunnablePassthrough()} | prompt | structured_llm **API Reference:**[PydanticToolsParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.openai_tools.PydanticToolsParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) /Users/harrisonchase/workplace/langchain/libs/core/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change. warn_beta( We can see that this allows for creating multiple queries query_analyzer.invoke("where did Harrison Work") Search(queries=['Harrison work location']) query_analyzer.invoke("where did Harrison and ankush Work") Search(queries=['Harrison work place', 'Ankush work place']) Retrieval with query analysis[​](#retrieval-with-query-analysis "Direct link to Retrieval with query analysis") --------------------------------------------------------------------------------------------------------------- So how would we include this in a chain? One thing that will make this a lot easier is if we call our retriever asyncronously - this will let us loop over the queries and not get blocked on the response time. from langchain_core.runnables import chain **API Reference:**[chain](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.chain.html) @chainasync def custom_chain(question): response = await query_analyzer.ainvoke(question) docs = [] for query in response.queries: new_docs = await retriever.ainvoke(query) docs.extend(new_docs) # You probably want to think about reranking or deduplicating documents here # But that is a separate topic return docs await custom_chain.ainvoke("where did Harrison Work") [Document(page_content='Harrison worked at Kensho')] await custom_chain.ainvoke("where did Harrison and ankush Work") [Document(page_content='Harrison worked at Kensho'), Document(page_content='Ankush worked at Facebook')] [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/query_multiple_queries.ipynb) * * * #### Was this page helpful? 
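Returning to the deduplication note in `custom_chain` above: one minimal way to drop exact duplicates retrieved by overlapping queries is to key on `page_content`. A sketch follows; the helper name is ours, not part of LangChain:

```python
from typing import List

from langchain_core.documents import Document


def dedupe_docs(docs: List[Document]) -> List[Document]:
    """Drop documents whose page_content has already been seen."""
    seen = set()
    unique = []
    for doc in docs:
        if doc.page_content not in seen:
            seen.add(doc.page_content)
            unique.append(doc)
    return unique
```

Calling `dedupe_docs(docs)` just before returning from `custom_chain` removes exact duplicates; reranking is a separate concern.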
null
https://python.langchain.com/v0.2/docs/how_to/tools_model_specific/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to bind model-specific tools How to bind model-specific tools ================================ Providers adopt different conventions for formatting tool schemas. For instance, OpenAI uses a format like this: * `type`: The type of the tool. At the time of writing, this is always `"function"`. * `function`: An object containing tool parameters. * `function.name`: The name of the schema to output. * `function.description`: A high level description of the schema to output. * `function.parameters`: The nested details of the schema you want to extract, formatted as a [JSON schema](https://json-schema.org/) dict. We can bind this model-specific format directly to the model as well if preferred. Here's an example: from langchain_openai import ChatOpenAImodel = ChatOpenAI()model_with_tools = model.bind( tools=[ { "type": "function", "function": { "name": "multiply", "description": "Multiply two integers together.", "parameters": { "type": "object", "properties": { "a": {"type": "number", "description": "First integer"}, "b": {"type": "number", "description": "Second integer"}, }, "required": ["a", "b"], }, }, } ])model_with_tools.invoke("Whats 119 times 8?") **API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_mn4ELw1NbuE0DFYhIeK0GrPe', 'function': {'arguments': '{"a":119,"b":8}', 'name': 'multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 62, 'total_tokens': 79}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-353e8a9a-7125-4f94-8c68-4f3da4c21120-0', tool_calls=[{'name': 'multiply', 'args': {'a': 119, 'b': 8}, 'id': 'call_mn4ELw1NbuE0DFYhIeK0GrPe'}]) This is functionally equivalent to the `bind_tools()` method. [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/tools_model_specific.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to add a human-in-the-loop for tools ](/v0.2/docs/how_to/tools_human/)[ Next How to trim messages ](/v0.2/docs/how_to/trim_messages/)
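For comparison with the raw OpenAI format above, here is a sketch of the provider-agnostic route using `bind_tools()` and the `@tool` decorator, which infers the same JSON schema from the function signature and docstring (it assumes an OpenAI key is set):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers together."""
    return a * b


model = ChatOpenAI()
# bind_tools converts the tool into the provider-specific schema for us.
model_with_tools = model.bind_tools([multiply])
model_with_tools.invoke("Whats 119 times 8?")
```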
null
https://python.langchain.com/v0.2/docs/how_to/tools_human/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to add a human-in-the-loop for tools On this page How to add a human-in-the-loop for tools ======================================== There are certain tools that we don't trust a model to execute on its own. One thing we can do in such situations is require human approval before the tool is invoked. info This how-to guide shows a simple way to add human-in-the-loop for code running in a jupyter notebook or in a terminal. To build a production application, you will need to do more work to keep track of application state appropriately. We recommend using `langgraph` for powering such a capability. For more details, please see this [guide](https://langchain-ai.github.io/langgraph/how-tos/human-in-the-loop/). Setup[​](#setup "Direct link to Setup") --------------------------------------- We'll need to install the following packages: %pip install --upgrade --quiet langchain And set these environment variables: import getpassimport os# If you'd like to use LangSmith, uncomment the below:# os.environ["LANGCHAIN_TRACING_V2"] = "true"# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass() Chain[​](#chain "Direct link to Chain") --------------------------------------- Let's create a few simple (dummy) tools and a tool-calling chain: * OpenAI * Anthropic * Azure * Google * Cohere * FireworksAI * Groq * MistralAI * TogetherAI pip install -qU langchain-openai import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125") pip install -qU langchain-anthropic import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229") pip install -qU langchain-openai import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],) pip install -qU langchain-google-vertexai import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro") pip install -qU langchain-cohere import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r") pip install -qU langchain-fireworks import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct") pip install -qU langchain-groq import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192") pip install -qU langchain-mistralai import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest") pip install -qU langchain-openai import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",) from typing import Dict, Listfrom langchain_core.messages import AIMessagefrom langchain_core.runnables import Runnable, 
RunnablePassthroughfrom langchain_core.tools import tool@tooldef count_emails(last_n_days: int) -> int: """Dummy function to count the number of emails received in the last n days.""" return last_n_days * 2@tooldef send_email(message: str, recipient: str) -> str: """Dummy function to send an email with the given message to a recipient.""" return f"Successfully sent email to {recipient}."tools = [count_emails, send_email]llm_with_tools = llm.bind_tools(tools)def call_tools(msg: AIMessage) -> List[Dict]: """Simple sequential tool calling helper.""" tool_map = {tool.name: tool for tool in tools} tool_calls = msg.tool_calls.copy() for tool_call in tool_calls: tool_call["output"] = tool_map[tool_call["name"]].invoke(tool_call["args"]) return tool_callschain = llm_with_tools | call_toolschain.invoke("how many emails did i get in the last 5 days?") **API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [Runnable](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html) [{'name': 'count_emails', 'args': {'last_n_days': 5}, 'id': 'toolu_01QYZdJ4yPiqsdeENWHqioFW', 'output': 10}] Adding human approval[​](#adding-human-approval "Direct link to Adding human approval") --------------------------------------------------------------------------------------- Let's add a step in the chain that will ask a person to approve or reject the tool call request. On rejection, the step will raise an exception which will stop execution of the rest of the chain. import jsonclass NotApproved(Exception): """Custom exception."""def human_approval(msg: AIMessage) -> AIMessage: """Responsible for passing through its input or raising an exception. Args: msg: output from the chat model Returns: msg: original output from the chat model """ tool_strs = "\n\n".join( json.dumps(tool_call, indent=2) for tool_call in msg.tool_calls ) input_msg = ( f"Do you approve of the following tool invocations\n\n{tool_strs}\n\n" "Anything except 'Y'/'Yes' (case-insensitive) will be treated as a no.\n >>>" ) resp = input(input_msg) if resp.lower() not in ("yes", "y"): raise NotApproved(f"Tool invocations not approved:\n\n{tool_strs}") return msg chain = llm_with_tools | human_approval | call_toolschain.invoke("how many emails did i get in the last 5 days?") Do you approve of the following tool invocations{ "name": "count_emails", "args": { "last_n_days": 5 }, "id": "toolu_01WbD8XeMoQaRFtsZezfsHor"}Anything except 'Y'/'Yes' (case-insensitive) will be treated as a no. >>> yes [{'name': 'count_emails', 'args': {'last_n_days': 5}, 'id': 'toolu_01WbD8XeMoQaRFtsZezfsHor', 'output': 10}] try: chain.invoke("Send sally@gmail.com an email saying 'What's up homie'")except NotApproved as e: print() print(e) Do you approve of the following tool invocations{ "name": "send_email", "args": { "recipient": "sally@gmail.com", "message": "What's up homie" }, "id": "toolu_014XccHFzBiVcc9GV1harV9U"}Anything except 'Y'/'Yes' (case-insensitive) will be treated as a no. >>> no Tool invocations not approved:{ "name": "send_email", "args": { "recipient": "sally@gmail.com", "message": "What's up homie" }, "id": "toolu_014XccHFzBiVcc9GV1harV9U"}
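A common refinement is to require approval only for tools you consider sensitive while letting harmless ones run straight through. The sketch below builds on the chain above and reuses its `NotApproved` exception, `llm_with_tools` and `call_tools`; the `SENSITIVE_TOOLS` set and the helper name are our own illustration, not part of LangChain:

```python
import json

from langchain_core.messages import AIMessage

SENSITIVE_TOOLS = {"send_email"}  # only these require a human sign-off


def selective_approval(msg: AIMessage) -> AIMessage:
    """Ask for approval only when a sensitive tool is about to be invoked."""
    sensitive_calls = [tc for tc in msg.tool_calls if tc["name"] in SENSITIVE_TOOLS]
    if not sensitive_calls:
        return msg
    tool_strs = "\n\n".join(json.dumps(tc, indent=2) for tc in sensitive_calls)
    resp = input(f"Approve these tool invocations?\n\n{tool_strs}\n >>>")
    if resp.lower() not in ("yes", "y"):
        raise NotApproved(f"Tool invocations not approved:\n\n{tool_strs}")
    return msg


chain = llm_with_tools | selective_approval | call_tools
```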
null
https://python.langchain.com/v0.2/docs/how_to/trim_messages/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to trim messages On this page How to trim messages ==================== Prerequisites This guide assumes familiarity with the following concepts: * [Messages](/v0.2/docs/concepts/#messages) * [Chat models](/v0.2/docs/concepts/#chat-models) * [Chaining](/v0.2/docs/how_to/sequence/) * [Chat history](/v0.2/docs/concepts/#chat-history) The methods in this guide also require `langchain-core>=0.2.9`. All models have finite context windows, meaning there's a limit to how many tokens they can take as input. If you have very long messages or a chain/agent that accumulates a long message history, you'll need to manage the length of the messages you're passing in to the model. The `trim_messages` util provides some basic strategies for trimming a list of messages to be of a certain token length. Getting the last `max_tokens` tokens[​](#getting-the-last-max_tokens-tokens "Direct link to getting-the-last-max_tokens-tokens") -------------------------------------------------------------------------------------------------------------------------------- To get the last `max_tokens` in the list of Messages we can set `strategy="last"`. Notice that for our `token_counter` we can pass in a function (more on that below) or a language model (since language models have a message token counting method). It makes sense to pass in a model when you're trimming your messages to fit into the context window of that specific model: # pip install -U langchain-openaifrom langchain_core.messages import ( AIMessage, HumanMessage, SystemMessage, trim_messages,)from langchain_openai import ChatOpenAImessages = [ SystemMessage("you're a good assistant, you always respond with a joke."), HumanMessage("i wonder why it's called langchain"), AIMessage( 'Well, I guess they thought "WordRope" and "SentenceString" just didn\'t have the same ring to it!' ), HumanMessage("and who is harrison chasing anyways"), AIMessage( "Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!"
), HumanMessage("what do you call a speechless parrot"),]trim_messages( messages, max_tokens=45, strategy="last", token_counter=ChatOpenAI(model="gpt-4o"),) **API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [SystemMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.system.SystemMessage.html) | [trim\_messages](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.utils.trim_messages.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) [AIMessage(content="Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!"), HumanMessage(content='what do you call a speechless parrot')] If we want to always keep the initial system message we can specify `include_system=True`: trim_messages( messages, max_tokens=45, strategy="last", token_counter=ChatOpenAI(model="gpt-4o"), include_system=True,) [SystemMessage(content="you're a good assistant, you always respond with a joke."), HumanMessage(content='what do you call a speechless parrot')] If we want to allow splitting up the contents of a message we can specify `allow_partial=True`: trim_messages( messages, max_tokens=56, strategy="last", token_counter=ChatOpenAI(model="gpt-4o"), include_system=True, allow_partial=True,) [SystemMessage(content="you're a good assistant, you always respond with a joke."), AIMessage(content="\nWhy, he's probably chasing after the last cup of coffee in the office!"), HumanMessage(content='what do you call a speechless parrot')] If we need to make sure that our first message (excluding the system message) is always of a specific type, we can specify `start_on`: trim_messages( messages, max_tokens=60, strategy="last", token_counter=ChatOpenAI(model="gpt-4o"), include_system=True, start_on="human",) [SystemMessage(content="you're a good assistant, you always respond with a joke."), HumanMessage(content='what do you call a speechless parrot')] Getting the first `max_tokens` tokens[​](#getting-the-first-max_tokens-tokens "Direct link to getting-the-first-max_tokens-tokens") ----------------------------------------------------------------------------------------------------------------------------------- We can perform the flipped operation of getting the _first_ `max_tokens` by specifying `strategy="first"`: trim_messages( messages, max_tokens=45, strategy="first", token_counter=ChatOpenAI(model="gpt-4o"),) [SystemMessage(content="you're a good assistant, you always respond with a joke."), HumanMessage(content="i wonder why it's called langchain")] Writing a custom token counter[​](#writing-a-custom-token-counter "Direct link to Writing a custom token counter") ------------------------------------------------------------------------------------------------------------------ We can write a custom token counter function that takes in a list of messages and returns an int. 
from typing import List# pip install tiktokenimport tiktokenfrom langchain_core.messages import BaseMessage, ToolMessagedef str_token_counter(text: str) -> int: enc = tiktoken.get_encoding("o200k_base") return len(enc.encode(text))def tiktoken_counter(messages: List[BaseMessage]) -> int: """Approximately reproduce https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb For simplicity only supports str Message.contents. """ num_tokens = 3 # every reply is primed with <|start|>assistant<|message|> tokens_per_message = 3 tokens_per_name = 1 for msg in messages: if isinstance(msg, HumanMessage): role = "user" elif isinstance(msg, AIMessage): role = "assistant" elif isinstance(msg, ToolMessage): role = "tool" elif isinstance(msg, SystemMessage): role = "system" else: raise ValueError(f"Unsupported messages type {msg.__class__}") num_tokens += ( tokens_per_message + str_token_counter(role) + str_token_counter(msg.content) ) if msg.name: num_tokens += tokens_per_name + str_token_counter(msg.name) return num_tokenstrim_messages( messages, max_tokens=45, strategy="last", token_counter=tiktoken_counter,) **API Reference:**[BaseMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.base.BaseMessage.html) | [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html) [AIMessage(content="Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!"), HumanMessage(content='what do you call a speechless parrot')] Chaining[​](#chaining "Direct link to Chaining") ------------------------------------------------ `trim_messages` can be used in an imperatively (like above) or declaratively, making it easy to compose with other components in a chain llm = ChatOpenAI(model="gpt-4o")# Notice we don't pass in messages. 
This creates# a RunnableLambda that takes messages as inputtrimmer = trim_messages( max_tokens=45, strategy="last", token_counter=llm, include_system=True,)chain = trimmer | llmchain.invoke(messages) AIMessage(content='A: A "Polly-gone"!', response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 32, 'total_tokens': 41}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_66b29dffce', 'finish_reason': 'stop', 'logprobs': None}, id='run-83e96ddf-bcaa-4f63-824c-98b0f8a0d474-0', usage_metadata={'input_tokens': 32, 'output_tokens': 9, 'total_tokens': 41}) Looking at the LangSmith trace we can see that before the messages are passed to the model they are first trimmed: [https://smith.langchain.com/public/65af12c4-c24d-4824-90f0-6547566e59bb/r](https://smith.langchain.com/public/65af12c4-c24d-4824-90f0-6547566e59bb/r) Looking at just the trimmer, we can see that it's a Runnable object that can be invoked like all Runnables: trimmer.invoke(messages) [SystemMessage(content="you're a good assistant, you always respond with a joke."), HumanMessage(content='what do you call a speechless parrot')] Using with ChatMessageHistory[​](#using-with-chatmessagehistory "Direct link to Using with ChatMessageHistory") --------------------------------------------------------------------------------------------------------------- Trimming messages is especially useful when [working with chat histories](/v0.2/docs/how_to/message_history/), which can get arbitrarily long: from langchain_core.chat_history import InMemoryChatMessageHistoryfrom langchain_core.runnables.history import RunnableWithMessageHistorychat_history = InMemoryChatMessageHistory(messages=messages[:-1])def dummy_get_session_history(session_id): if session_id != "1": return InMemoryChatMessageHistory() return chat_historyllm = ChatOpenAI(model="gpt-4o")trimmer = trim_messages( max_tokens=45, strategy="last", token_counter=llm, include_system=True,)chain = trimmer | llmchain_with_history = RunnableWithMessageHistory(chain, dummy_get_session_history)chain_with_history.invoke( [HumanMessage("what do you call a speechless parrot")], config={"configurable": {"session_id": "1"}},) **API Reference:**[InMemoryChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.InMemoryChatMessageHistory.html) | [RunnableWithMessageHistory](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html) AIMessage(content='A "polly-no-wanna-cracker"!', response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 32, 'total_tokens': 42}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_5bf7397cd3', 'finish_reason': 'stop', 'logprobs': None}, id='run-054dd309-3497-4e7b-b22a-c1859f11d32e-0', usage_metadata={'input_tokens': 32, 'output_tokens': 10, 'total_tokens': 42}) Looking at the LangSmith trace we can see that we retrieve all of our messages but before the messages are passed to the model they are trimmed to be just the system message and last human message: [https://smith.langchain.com/public/17dd700b-9994-44ca-930c-116e00997315/r](https://smith.langchain.com/public/17dd700b-9994-44ca-930c-116e00997315/r) API reference[​](#api-reference "Direct link to API reference") --------------------------------------------------------------- For a complete description of all arguments head to the API reference: 
[https://api.python.langchain.com/en/latest/messages/langchain\_core.messages.utils.trim\_messages.html](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.utils.trim_messages.html)
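When the chat history arrives wrapped in a dict, as it often does in larger chains, the declarative trimmer composes naturally with `itemgetter`. A small self-contained sketch, assuming an OpenAI key is set; the message list is abbreviated from the one used earlier on this page:

```python
from operator import itemgetter

from langchain_core.messages import AIMessage, HumanMessage, SystemMessage, trim_messages
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")

# Declarative form: with no messages passed, trim_messages returns a runnable.
trimmer = trim_messages(
    max_tokens=45,
    strategy="last",
    token_counter=llm,
    include_system=True,
)

messages = [
    SystemMessage("you're a good assistant, you always respond with a joke."),
    HumanMessage("and who is harrison chasing anyways"),
    AIMessage("Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!"),
    HumanMessage("what do you call a speechless parrot"),
]

# Pull the message list out of the dict input, trim it, then call the model.
chain = itemgetter("messages") | trimmer | llm
chain.invoke({"messages": messages})
```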
null
https://python.langchain.com/v0.2/docs/how_to/tools_builtin/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to use built-in tools and toolkits On this page How to use built-in tools and toolkits ====================================== Prerequisites This guide assumes familiarity with the following concepts: * [LangChain Tools](/v0.2/docs/concepts/#tools) * [LangChain Toolkits](/v0.2/docs/concepts/#tools) Tools[​](#tools "Direct link to Tools") --------------------------------------- LangChain has a large collection of 3rd party tools. Please visit [Tool Integrations](/v0.2/docs/integrations/tools/) for a list of the available tools. info When using 3rd party tools, make sure that you understand how the tool works, what permissions it has. Read over its documentation and check if anything is required from you from a security point of view. Please see our [security](https://python.langchain.com/v0.2/docs/security/) guidelines for more information. Let's try out the [Wikipedia integration](/v0.2/docs/integrations/tools/wikipedia/). !pip install -qU wikipedia from langchain_community.tools import WikipediaQueryRunfrom langchain_community.utilities import WikipediaAPIWrapperapi_wrapper = WikipediaAPIWrapper(top_k_results=1, doc_content_chars_max=100)tool = WikipediaQueryRun(api_wrapper=api_wrapper)print(tool.invoke({"query": "langchain"})) **API Reference:**[WikipediaQueryRun](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.wikipedia.tool.WikipediaQueryRun.html) | [WikipediaAPIWrapper](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.wikipedia.WikipediaAPIWrapper.html) Page: LangChainSummary: LangChain is a framework designed to simplify the creation of applications The tool has the following defaults associated with it: print(f"Name: {tool.name}")print(f"Description: {tool.description}")print(f"args schema: {tool.args}")print(f"returns directly?: {tool.return_direct}") Name: wiki-toolDescription: look up things in wikipediaargs schema: {'query': {'title': 'Query', 'description': 'query to look up in Wikipedia, should be 3 or less words', 'type': 'string'}}returns directly?: True Customizing Default Tools[​](#customizing-default-tools "Direct link to Customizing Default Tools") --------------------------------------------------------------------------------------------------- We can also modify the built in name, description, and JSON schema of the arguments. When defining the JSON schema of the arguments, it is important that the inputs remain the same as the function, so you shouldn't change that. But you can define custom descriptions for each input easily. 
from langchain_community.tools import WikipediaQueryRunfrom langchain_community.utilities import WikipediaAPIWrapperfrom langchain_core.pydantic_v1 import BaseModel, Fieldclass WikiInputs(BaseModel): """Inputs to the wikipedia tool.""" query: str = Field( description="query to look up in Wikipedia, should be 3 or less words" )tool = WikipediaQueryRun( name="wiki-tool", description="look up things in wikipedia", args_schema=WikiInputs, api_wrapper=api_wrapper, return_direct=True,)print(tool.run("langchain")) **API Reference:**[WikipediaQueryRun](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.wikipedia.tool.WikipediaQueryRun.html) | [WikipediaAPIWrapper](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.wikipedia.WikipediaAPIWrapper.html) Page: LangChainSummary: LangChain is a framework designed to simplify the creation of applications print(f"Name: {tool.name}")print(f"Description: {tool.description}")print(f"args schema: {tool.args}")print(f"returns directly?: {tool.return_direct}") Name: wiki-toolDescription: look up things in wikipediaargs schema: {'query': {'title': 'Query', 'description': 'query to look up in Wikipedia, should be 3 or less words', 'type': 'string'}}returns directly?: True How to use built-in toolkits[​](#how-to-use-built-in-toolkits "Direct link to How to use built-in toolkits") ------------------------------------------------------------------------------------------------------------ Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods. For a complete list of available ready-made toolkits, visit [Integrations](/v0.2/docs/integrations/toolkits/). All Toolkits expose a `get_tools` method which returns a list of tools. You're usually meant to use them this way: # Initialize a toolkittoolkit = ExampleToolkit(...)# Get list of toolstools = toolkit.get_tools()
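For a concrete instance of that pattern, here is a sketch using the SQL toolkit; the `Chinook.db` SQLite file and the model choice are assumptions on our part, and any toolkit exposing `get_tools()` works the same way:

```python
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI

# Assumes a local SQLite database named Chinook.db and an OpenAI key.
db = SQLDatabase.from_uri("sqlite:///Chinook.db")
toolkit = SQLDatabaseToolkit(db=db, llm=ChatOpenAI(model="gpt-3.5-turbo-0125"))

tools = toolkit.get_tools()
print([t.name for t in tools])
```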
null
https://python.langchain.com/v0.2/docs/how_to/passthrough/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to pass through arguments from one step to the next On this page How to pass through arguments from one step to the next ======================================================= Prerequisites This guide assumes familiarity with the following concepts: * [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language) * [Chaining runnables](/v0.2/docs/how_to/sequence/) * [Calling runnables in parallel](/v0.2/docs/how_to/parallel/) * [Custom functions](/v0.2/docs/how_to/functions/) When composing chains with several steps, sometimes you will want to pass data from previous steps unchanged for use as input to a later step. The [`RunnablePassthrough`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) class allows you to do just this, and is typically used in conjunction with a [RunnableParallel](/v0.2/docs/how_to/parallel/) to pass data through to a later step in your constructed chains. See the example below: %pip install -qU langchain langchain-openaiimport osfrom getpass import getpassos.environ["OPENAI_API_KEY"] = getpass() from langchain_core.runnables import RunnableParallel, RunnablePassthroughrunnable = RunnableParallel( passed=RunnablePassthrough(), modified=lambda x: x["num"] + 1,)runnable.invoke({"num": 1}) **API Reference:**[RunnableParallel](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableParallel.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) {'passed': {'num': 1}, 'modified': 2} As seen above, the `passed` key was called with `RunnablePassthrough()` and so it simply passed on `{'num': 1}`. We also set a second key in the map with `modified`. This uses a lambda to set a single value adding 1 to the num, which resulted in the `modified` key having the value `2`.
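A closely related idiom, sketched below as an aside, is `RunnablePassthrough.assign`, which keeps the original input keys and adds new ones rather than building a fresh map:

```python
from langchain_core.runnables import RunnablePassthrough

# assign() passes the original input through untouched and adds new keys,
# so the result keeps "num" and gains "modified".
runnable = RunnablePassthrough.assign(modified=lambda x: x["num"] + 1)
runnable.invoke({"num": 1})
# -> {'num': 1, 'modified': 2}
```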
Retrieval Example[​](#retrieval-example "Direct link to Retrieval Example") --------------------------------------------------------------------------- In the example below, we see a more real-world use case where we use `RunnablePassthrough` along with `RunnableParallel` in a chain to properly format inputs to a prompt: from langchain_community.vectorstores import FAISSfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import ChatOpenAI, OpenAIEmbeddingsvectorstore = FAISS.from_texts( ["harrison worked at kensho"], embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()template = """Answer the question based only on the following context:{context}Question: {question}"""prompt = ChatPromptTemplate.from_template(template)model = ChatOpenAI()retrieval_chain = ( {"context": retriever, "question": RunnablePassthrough()} | prompt | model | StrOutputParser())retrieval_chain.invoke("where did harrison work?") **API Reference:**[FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) 'Harrison worked at Kensho.' Here the input to prompt is expected to be a map with keys "context" and "question". The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the "question" key. The `RunnablePassthrough` allows us to pass on the user's question to the prompt and model. Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ Now you've learned how to pass data through your chains to help format the data flowing through your chains. To learn more, see the other how-to guides on runnables in this section.
null
https://python.langchain.com/v0.2/docs/how_to/vectorstores/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to create and query vector stores On this page How to create and query vector stores ===================================== info Head to [Integrations](/v0.2/docs/integrations/vectorstores/) for documentation on built-in integrations with 3rd-party vector stores. One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search for you. Get started[​](#get-started "Direct link to Get started") --------------------------------------------------------- This guide showcases basic functionality related to vector stores. A key part of working with vector stores is creating the vector to put in them, which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the [text embedding model interfaces](/v0.2/docs/how_to/embed_text/) before diving into this. Before using the vectorstore at all, we need to load some data and initialize an embedding model. We want to use OpenAIEmbeddings so we have to get the OpenAI API Key. import osimport getpassos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:') from langchain_community.document_loaders import TextLoaderfrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import CharacterTextSplitter# Load the document, split it into chunks, embed each chunk and load it into the vector store.raw_documents = TextLoader('state_of_the_union.txt').load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)documents = text_splitter.split_documents(raw_documents) **API Reference:**[TextLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.text.TextLoader.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [CharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.CharacterTextSplitter.html) There are many great vector store options, here are a few that are free, open-source, and run entirely on your local machine. Review all integrations for many great hosted offerings. * Chroma * FAISS * Lance This walkthrough uses the `chroma` vector database, which runs on your local machine as a library. pip install langchain-chroma from langchain_chroma import Chromadb = Chroma.from_documents(documents, OpenAIEmbeddings()) This walkthrough uses the `FAISS` vector database, which makes use of the Facebook AI Similarity Search (FAISS) library. pip install faiss-cpu from langchain_community.vectorstores import FAISSdb = FAISS.from_documents(documents, OpenAIEmbeddings()) **API Reference:**[FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) This notebook shows how to use functionality related to the LanceDB vector database based on the Lance data format. 
pip install lancedb from langchain_community.vectorstores import LanceDBimport lancedbdb = lancedb.connect("/tmp/lancedb")table = db.create_table( "my_table", data=[ { "vector": embeddings.embed_query("Hello World"), "text": "Hello World", "id": "1", } ], mode="overwrite",)db = LanceDB.from_documents(documents, OpenAIEmbeddings()) **API Reference:**[LanceDB](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.lancedb.LanceDB.html) Similarity search[​](#similarity-search "Direct link to Similarity search") --------------------------------------------------------------------------- All vectorstores expose a `similarity_search` method. This will take incoming documents, create an embedding of them, and then find all documents with the most similar embedding. query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyerβ€”an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. ### Similarity search by vector[​](#similarity-search-by-vector "Direct link to Similarity search by vector") It is also possible to do a search for documents similar to a given embedding vector using `similarity_search_by_vector` which accepts an embedding vector as a parameter instead of a string. embedding_vector = OpenAIEmbeddings().embed_query(query)docs = db.similarity_search_by_vector(embedding_vector)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyerβ€”an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. Async Operations[​](#async-operations "Direct link to Async Operations") ------------------------------------------------------------------------ Vector stores are usually run as a separate service that requires some IO operations, and therefore they might be called asynchronously. That gives performance benefits as you don't waste time waiting for responses from external services. That might also be important if you work with an asynchronous framework, such as [FastAPI](https://fastapi.tiangolo.com/). LangChain supports async operation on vector stores. 
All the methods can be called using their async counterparts, with the prefix `a`, meaning `async`. docs = await db.asimilarity_search(query)docs [Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': 'state_of_the_union.txt'}), Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': 'state_of_the_union.txt'}), Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': 'state_of_the_union.txt'}), Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. 
\n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': 'state_of_the_union.txt'})]
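If you are working in a plain Python script rather than a notebook, you will need an event loop to drive these async methods. The snippet below is a minimal sketch (not part of the original guide) that assumes `db` is the Chroma or FAISS store created above; the `k` argument limits how many documents are returned.

```python
import asyncio

async def search(question: str):
    # `db` is the vector store built earlier in this guide.
    # `k` controls how many of the most similar documents are returned.
    return await db.asimilarity_search(question, k=2)

docs = asyncio.run(search("What did the president say about Ketanji Brown Jackson"))
for doc in docs:
    print(doc.page_content[:80])
```

Many vector stores also expose `similarity_search_with_score` (and an async counterpart), which returns `(Document, score)` pairs; the exact meaning of the score depends on the store.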
https://python.langchain.com/v0.2/docs/versions/overview/
* [](/v0.2/) * Versions * Overview On this page LangChain over time =================== What’s new in LangChain?[​](#whats-new-in-langchain "Direct link to What’s new in LangChain?") ---------------------------------------------------------------------------------------------- The following features have been added during the development of 0.1.x: * Better streaming support via the [Event Streaming API](https://python.langchain.com/docs/expression_language/streaming/#using-stream-events). * [Standardized tool calling support](https://blog.langchain.dev/tool-calling-with-langchain/) * A standardized interface for [structuring output](https://github.com/langchain-ai/langchain/discussions/18154) * [@chain decorator](https://python.langchain.com/docs/expression_language/how_to/decorator/) to more easily create **RunnableLambdas** * [https://python.langchain.com/docs/expression\_language/how\_to/inspect/](https://python.langchain.com/docs/expression_language/how_to/inspect/) * In Python, better async support for many core abstractions (thank you [@cbornet](https://github.com/cbornet)!!) * Include response metadata in `AIMessage` to make it easy to access raw output from the underlying models * Tooling to visualize [your runnables](https://python.langchain.com/docs/expression_language/how_to/inspect/) or [your langgraph app](https://github.com/langchain-ai/langgraph/blob/main/examples/visualization.ipynb) * Interoperability of chat message histories across most providers * [Over 20+ partner packages in python](https://python.langchain.com/docs/integrations/platforms/) for popular integrations What’s coming to LangChain?[​](#whats-coming-to-langchain "Direct link to What’s coming to LangChain?") ------------------------------------------------------------------------------------------------------- * We’ve been working hard on [langgraph](https://langchain-ai.github.io/langgraph/). We will be building more capabilities on top of it and focusing on making it the go-to framework for agent architectures. * Vectorstores V2! We’ll be revisiting our vectorstores abstractions to help improve usability and reliability. * Better documentation and versioned docs! * We’re planning a breaking release (0.3.0) sometime between July-September to [upgrade to full support of Pydantic 2](https://github.com/langchain-ai/langchain/discussions/19339), and will drop support for Pydantic 1 (including objects originating from the `v1` namespace of Pydantic 2). What changed?[​](#what-changed "Direct link to What changed?") -------------------------------------------------------------- Due to the rapidly evolving field, LangChain has also evolved rapidly. This document serves to outline at a high level what has changed and why. ### TLDR[​](#tldr "Direct link to TLDR") **As of 0.2.0:** * This release completes the work that we started with release 0.1.0 by removing the dependency of `langchain` on `langchain-community`. * `langchain` package no longer requires `langchain-community` . Instead `langchain-community` will now depend on `langchain-core` and `langchain` . * User code that still relies on deprecated imports from `langchain` will continue to work as long `langchain_community` is installed. These imports will start raising errors in release 0.4.x. **As of 0.1.0:** * `langchain` was split into the following component packages: `langchain-core`, `langchain`, `langchain-community`, `langchain-[partner]` to improve the usability of langchain code in production settings. 
You can read more about it on our [blog](https://blog.langchain.dev/langchain-v0-1-0/). ### Ecosystem organization[​](#ecosystem-organization "Direct link to Ecosystem organization") By the release of 0.1.0, LangChain had grown to a large ecosystem with many integrations and a large community. To improve the usability of LangChain in production, we split the single `langchain` package into multiple packages. This allowed us to create a good foundation architecture for the LangChain ecosystem and improve the usability of `langchain` in production. Here is the high level break down of the Eco-system: * **langchain-core**: contains core abstractions involving LangChain Runnables, tooling for observability, and base implementations of important abstractions (e.g., Chat Models). * **langchain:** contains generic code that is built using interfaces defined in `langchain-core`. This package is for code that generalizes well across different implementations of specific interfaces. For example, `create_tool_calling_agent` works across chat models that support [tool calling capabilities](https://blog.langchain.dev/tool-calling-with-langchain/). * **langchain-community**: community maintained 3rd party integrations. Contains integrations based on interfaces defined in **langchain-core**. Maintained by the LangChain community. * **Partner Packages (e.g., langchain-\[partner\])**: Partner packages are packages dedicated to especially popular integrations (e.g., `langchain-openai`, `langchain-anthropic` etc.). The dedicated packages generally benefit from better reliability and support. * `langgraph`: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. * `langserve`: Deploy LangChain chains as REST APIs. In the 0.1.0 release, `langchain-community` was retained as required a dependency of `langchain`. This allowed imports of vectorstores, chat models, and other integrations to continue working through `langchain` rather than forcing users to update all of their imports to `langchain-community`. For the 0.2.0 release, we’re removing the dependency of `langchain` on `langchain-community`. This is something we’ve been planning to do since the 0.1 release because we believe this is the right package architecture. Old imports will continue to work as long as `langchain-community` is installed. These imports will be removed in the 0.4.0 release. To understand why we think breaking the dependency of `langchain` on `langchain-community` is best we should understand what each package is meant to do. `langchain` is meant to contain high-level chains and agent architectures. The logic in these should be specified at the level of abstractions like `ChatModel` and `Retriever`, and should not be specific to any one integration. This has two main benefits: 1. `langchain` is fairly lightweight. Here is the full list of required dependencies (after the split) python = ">=3.8.1,<4.0"langchain-core = "^0.2.0"langchain-text-splitters = ">=0.0.1,<0.1"langsmith = "^0.1.17"pydantic = ">=1,<3"SQLAlchemy = ">=1.4,<3"requests = "^2"PyYAML = ">=5.3"numpy = "^1"aiohttp = "^3.8.3"tenacity = "^8.1.0"jsonpatch = "^1.33" 2. `langchain` chains/agents are largely integration-agnostic, which makes it easy to experiment with different integrations and future-proofs your code should there be issues with one specific integration. 
There is also a third, less tangible benefit, which is that being integration-agnostic forces us to find only those very generic abstractions and architectures which generalize well across integrations. Given how general the abilities of the foundational tech are, and how quickly the space is moving, having generic architectures is a good way of future-proofing your applications. `langchain-community` is intended to have all integration-specific components that are not yet being maintained in separate `langchain-{partner}` packages. Today this is still the majority of integrations and a lot of code. This code is primarily contributed by the community, while `langchain` is largely written by core maintainers. All of these integrations use optional dependencies and conditional imports, which prevents dependency bloat and conflicts but means compatible dependency versions are not made explicit. Given the volume of integrations in `langchain-community` and the speed at which integrations change, it’s very hard to follow semver versioning, and we currently don’t. All of which is to say that there are no large benefits to `langchain` depending on `langchain-community` and some obvious downsides: the functionality in `langchain` should be integration-agnostic anyways, `langchain-community` can’t be properly versioned, and depending on `langchain-community` increases the [vulnerability surface](https://github.com/langchain-ai/langchain/discussions/19083) of `langchain`. For more context about the reasons for this organization, please see our blog: [https://blog.langchain.dev/langchain-v0-1-0/](https://blog.langchain.dev/langchain-v0-1-0/)
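To make the integration-agnostic point concrete, here is a small illustrative sketch (not from the original post): the chain below is written against the generic chat model interface defined in `langchain-core`, so swapping providers only changes how the model object is constructed, not the chain itself.

```python
from langchain_core.language_models import BaseChatModel
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

def build_summarizer(model: BaseChatModel):
    # The chain only relies on interfaces from langchain-core, so any
    # chat model integration (OpenAI, Anthropic, a local model, ...) can be plugged in.
    prompt = ChatPromptTemplate.from_template("Summarize this in one sentence:\n\n{text}")
    return prompt | model | StrOutputParser()

# e.g. build_summarizer(ChatOpenAI(...)) or build_summarizer(ChatAnthropic(...)),
# assuming the corresponding partner package is installed and configured.
```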
https://python.langchain.com/v0.2/docs/how_to/prompts_composition/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to compose prompts together On this page How to compose prompts together =============================== Prerequisites This guide assumes familiarity with the following concepts: * [Prompt templates](/v0.2/docs/concepts/#prompt-templates) LangChain provides a user-friendly interface for composing different parts of prompts together. You can do this with either string prompts or chat prompts. Constructing prompts this way allows for easy reuse of components. String prompt composition[​](#string-prompt-composition "Direct link to String prompt composition") --------------------------------------------------------------------------------------------------- When working with string prompts, each template is joined together. You can work with either prompts directly or strings (the first element in the list needs to be a prompt). from langchain_core.prompts import PromptTemplateprompt = ( PromptTemplate.from_template("Tell me a joke about {topic}") + ", make it funny" + "\n\nand in {language}")prompt **API Reference:**[PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) PromptTemplate(input_variables=['language', 'topic'], template='Tell me a joke about {topic}, make it funny\n\nand in {language}') prompt.format(topic="sports", language="spanish") 'Tell me a joke about sports, make it funny\n\nand in spanish' Chat prompt composition[​](#chat-prompt-composition "Direct link to Chat prompt composition") --------------------------------------------------------------------------------------------- A chat prompt is made up of a list of messages. Similarly to the above example, we can concatenate chat prompt templates. Each new element is a new message in the final prompt. First, let's initialize a [`ChatPromptTemplate`](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) with a [`SystemMessage`](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.system.SystemMessage.html). from langchain_core.messages import AIMessage, HumanMessage, SystemMessageprompt = SystemMessage(content="You are a nice pirate") **API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [SystemMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.system.SystemMessage.html) You can then easily create a pipeline combining it with other messages _or_ message templates. Use a `Message` when there are no variables to be formatted, and use a `MessageTemplate` when there are variables to be formatted. You can also use just a string (note: this will automatically get inferred as a [`HumanMessagePromptTemplate`](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.HumanMessagePromptTemplate.html).) new_prompt = ( prompt + HumanMessage(content="hi") + AIMessage(content="what?") + "{input}") Under the hood, this creates an instance of the ChatPromptTemplate class, so you can use it just as you did before! 
new_prompt.format_messages(input="i said hi") [SystemMessage(content='You are a nice pirate'), HumanMessage(content='hi'), AIMessage(content='what?'), HumanMessage(content='i said hi')] Using PipelinePrompt[​](#using-pipelineprompt "Direct link to Using PipelinePrompt") ------------------------------------------------------------------------------------ LangChain includes a class called [`PipelinePromptTemplate`](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.pipeline.PipelinePromptTemplate.html), which can be useful when you want to reuse parts of prompts. A PipelinePrompt consists of two main parts: * Final prompt: The final prompt that is returned * Pipeline prompts: A list of tuples, consisting of a string name and a prompt template. Each prompt template will be formatted and then passed to future prompt templates as a variable with the same name. from langchain_core.prompts import PipelinePromptTemplate, PromptTemplatefull_template = """{introduction}{example}{start}"""full_prompt = PromptTemplate.from_template(full_template)introduction_template = """You are impersonating {person}."""introduction_prompt = PromptTemplate.from_template(introduction_template)example_template = """Here's an example of an interaction:Q: {example_q}A: {example_a}"""example_prompt = PromptTemplate.from_template(example_template)start_template = """Now, do this for real!Q: {input}A:"""start_prompt = PromptTemplate.from_template(start_template)input_prompts = [ ("introduction", introduction_prompt), ("example", example_prompt), ("start", start_prompt),]pipeline_prompt = PipelinePromptTemplate( final_prompt=full_prompt, pipeline_prompts=input_prompts)pipeline_prompt.input_variables **API Reference:**[PipelinePromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.pipeline.PipelinePromptTemplate.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) ['person', 'example_a', 'example_q', 'input'] print( pipeline_prompt.format( person="Elon Musk", example_q="What's your favorite car?", example_a="Tesla", input="What's your favorite social media site?", )) You are impersonating Elon Musk.Here's an example of an interaction:Q: What's your favorite car?A: TeslaNow, do this for real!Q: What's your favorite social media site?A: Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned how to compose prompts together. Next, check out the other how-to guides on prompt templates in this section, like [adding few-shot examples to your prompt templates](/v0.2/docs/how_to/few_shot_examples_chat/). [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/prompts_composition.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to pass through arguments from one step to the next ](/v0.2/docs/how_to/passthrough/)[ Next How to handle multiple retrievers when doing query analysis ](/v0.2/docs/how_to/query_multiple_retrievers/) * [String prompt composition](#string-prompt-composition) * [Chat prompt composition](#chat-prompt-composition) * [Using PipelinePrompt](#using-pipelineprompt) * [Next steps](#next-steps)
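One last illustrative sketch (assuming a chat model has already been initialized as `model`, for example via one of the chat model integrations): because composed prompts are ordinary prompt templates, they can be piped straight into a chain.

```python
from langchain_core.output_parsers import StrOutputParser

# `new_prompt` is the composed chat prompt from earlier on this page;
# `model` is assumed to be any chat model instance (e.g. ChatOpenAI).
chain = new_prompt | model | StrOutputParser()
chain.invoke({"input": "i said hi"})
```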
https://python.langchain.com/v0.2/docs/how_to/assign/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to add values to a chain's state On this page How to add values to a chain's state ==================================== Prerequisites This guide assumes familiarity with the following concepts: * [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language) * [Chaining runnables](/v0.2/docs/how_to/sequence/) * [Calling runnables in parallel](/v0.2/docs/how_to/parallel/) * [Custom functions](/v0.2/docs/how_to/functions/) * [Passing data through](/v0.2/docs/how_to/passthrough/) An alternate way of [passing data through](/v0.2/docs/how_to/passthrough/) steps of a chain is to leave the current values of the chain state unchanged while assigning a new value under a given key. The [`RunnablePassthrough.assign()`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html#langchain_core.runnables.passthrough.RunnablePassthrough.assign) static method takes an input value and adds the extra arguments passed to the assign function. This is useful in the common [LangChain Expression Language](/v0.2/docs/concepts/#langchain-expression-language) pattern of additively creating a dictionary to use as input to a later step. Here's an example: %pip install --upgrade --quiet langchain langchain-openaiimport osfrom getpass import getpassos.environ["OPENAI_API_KEY"] = getpass() from langchain_core.runnables import RunnableParallel, RunnablePassthroughrunnable = RunnableParallel( extra=RunnablePassthrough.assign(mult=lambda x: x["num"] * 3), modified=lambda x: x["num"] + 1,)runnable.invoke({"num": 1}) **API Reference:**[RunnableParallel](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableParallel.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) {'extra': {'num': 1, 'mult': 3}, 'modified': 2} Let's break down what's happening here. * The input to the chain is `{"num": 1}`. This is passed into a `RunnableParallel`, which invokes the runnables it is passed in parallel with that input. * The value under the `extra` key is invoked. `RunnablePassthrough.assign()` keeps the original keys in the input dict (`{"num": 1}`), and assigns a new key called `mult`. The value is `lambda x: x["num"] * 3)`, which is `3`. Thus, the result is `{"num": 1, "mult": 3}`. * `{"num": 1, "mult": 3}` is returned to the `RunnableParallel` call, and is set as the value to the key `extra`. * At the same time, the `modified` key is called. The result is `2`, since the lambda extracts a key called `"num"` from its input and adds one. Thus, the result is `{'extra': {'num': 1, 'mult': 3}, 'modified': 2}`. Streaming[​](#streaming "Direct link to Streaming") --------------------------------------------------- One convenient feature of this method is that it allows values to pass through as soon as they are available. 
To show this off, we'll use `RunnablePassthrough.assign()` to immediately return source docs in a retrieval chain: from langchain_community.vectorstores import FAISSfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import ChatOpenAI, OpenAIEmbeddingsvectorstore = FAISS.from_texts( ["harrison worked at kensho"], embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()template = """Answer the question based only on the following context:{context}Question: {question}"""prompt = ChatPromptTemplate.from_template(template)model = ChatOpenAI()generation_chain = prompt | model | StrOutputParser()retrieval_chain = { "context": retriever, "question": RunnablePassthrough(),} | RunnablePassthrough.assign(output=generation_chain)stream = retrieval_chain.stream("where did harrison work?")for chunk in stream: print(chunk) **API Reference:**[FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) {'question': 'where did harrison work?'}{'context': [Document(page_content='harrison worked at kensho')]}{'output': ''}{'output': 'H'}{'output': 'arrison'}{'output': ' worked'}{'output': ' at'}{'output': ' Kens'}{'output': 'ho'}{'output': '.'}{'output': ''} We can see that the first chunk contains the original `"question"` since that is immediately available. The second chunk contains `"context"` since the retriever finishes second. Finally, the output from the `generation_chain` streams in chunks as soon as it is available. Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ Now you've learned how to pass data through your chains to help format the data flowing through your chains. To learn more, see the other how-to guides on runnables in this section.
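A related pattern worth knowing: `assign()` calls can themselves be chained, with each step seeing the keys produced by the previous ones. A minimal sketch (not from the original guide):

```python
from langchain_core.runnables import RunnablePassthrough

# Each assign() passes the whole input dict through and adds one more key.
chain = RunnablePassthrough.assign(doubled=lambda x: x["num"] * 2) | RunnablePassthrough.assign(
    total=lambda x: x["num"] + x["doubled"]
)

chain.invoke({"num": 3})
# -> {'num': 3, 'doubled': 6, 'total': 9}
```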
https://python.langchain.com/v0.2/docs/how_to/query_multiple_retrievers/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to handle multiple retrievers when doing query analysis On this page How to handle multiple retrievers when doing query analysis =========================================================== Sometimes, a query analysis technique may allow for selection of which retriever to use. To use this, you will need to add some logic to select which retriever to use. We will show a simple example (using mock data) of how to do that. Setup[​](#setup "Direct link to Setup") --------------------------------------- #### Install dependencies[​](#install-dependencies "Direct link to Install dependencies") # %pip install -qU langchain langchain-community langchain-openai langchain-chroma #### Set environment variables[​](#set-environment-variables "Direct link to Set environment variables") We'll use OpenAI in this example: import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.# os.environ["LANGCHAIN_TRACING_V2"] = "true"# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass() ### Create Index[​](#create-index "Direct link to Create Index") We will create a vectorstore over fake information. from langchain_chroma import Chromafrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import RecursiveCharacterTextSplittertexts = ["Harrison worked at Kensho"]embeddings = OpenAIEmbeddings(model="text-embedding-3-small")vectorstore = Chroma.from_texts(texts, embeddings, collection_name="harrison")retriever_harrison = vectorstore.as_retriever(search_kwargs={"k": 1})texts = ["Ankush worked at Facebook"]embeddings = OpenAIEmbeddings(model="text-embedding-3-small")vectorstore = Chroma.from_texts(texts, embeddings, collection_name="ankush")retriever_ankush = vectorstore.as_retriever(search_kwargs={"k": 1}) **API Reference:**[OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html) Query analysis[​](#query-analysis "Direct link to Query analysis") ------------------------------------------------------------------ We will use function calling to structure the output. We will have it return a search query along with the person to look it up for. from typing import List, Optionalfrom langchain_core.pydantic_v1 import BaseModel, Fieldclass Search(BaseModel): """Search for information about a person.""" query: str = Field( ..., description="Query to look up", ) person: str = Field( ..., description="Person to look things up for. 
Should be `HARRISON` or `ANKUSH`.", ) from langchain_core.output_parsers.openai_tools import PydanticToolsParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import ChatOpenAIoutput_parser = PydanticToolsParser(tools=[Search])system = """You have the ability to issue search queries to get information to help answer user questions."""prompt = ChatPromptTemplate.from_messages( [ ("system", system), ("human", "{question}"), ])llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)structured_llm = llm.with_structured_output(Search)query_analyzer = {"question": RunnablePassthrough()} | prompt | structured_llm **API Reference:**[PydanticToolsParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.openai_tools.PydanticToolsParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) We can see that this allows for routing between retrievers. query_analyzer.invoke("where did Harrison Work") Search(query='workplace', person='HARRISON') query_analyzer.invoke("where did ankush Work") Search(query='workplace', person='ANKUSH') Retrieval with query analysis[​](#retrieval-with-query-analysis "Direct link to Retrieval with query analysis") --------------------------------------------------------------------------------------------------------------- So how would we include this in a chain? We just need some simple logic to select the retriever and pass in the search query: from langchain_core.runnables import chain **API Reference:**[chain](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.chain.html) retrievers = { "HARRISON": retriever_harrison, "ANKUSH": retriever_ankush,} @chaindef custom_chain(question): response = query_analyzer.invoke(question) retriever = retrievers[response.person] return retriever.invoke(response.query) custom_chain.invoke("where did Harrison Work") [Document(page_content='Harrison worked at Kensho')] custom_chain.invoke("where did ankush Work") [Document(page_content='Ankush worked at Facebook')]
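If you want the routing to be a bit more defensive, you can guard against the model producing a person key that has no registered retriever. The variant below is a sketch building on the code above (the empty-list fallback is an assumption, not part of the original guide):

```python
@chain
def safe_custom_chain(question):
    response = query_analyzer.invoke(question)
    retriever = retrievers.get(response.person)
    if retriever is None:
        # No retriever registered for this person: return no documents
        # instead of raising a KeyError.
        return []
    return retriever.invoke(response.query)

safe_custom_chain.invoke("where did Harrison Work")
```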
https://python.langchain.com/v0.2/docs/langserve/
* [](/v0.2/) * Ecosystem * πŸ¦œοΈπŸ“ LangServe On this page πŸ¦œοΈπŸ“ LangServe =============== [![Release Notes](https://img.shields.io/github/release/langchain-ai/langserve)](https://github.com/langchain-ai/langserve/releases) [![Downloads](https://static.pepy.tech/badge/langserve/month)](https://pepy.tech/project/langserve) [![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langserve)](https://github.com/langchain-ai/langserve/issues) [![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.com/channels/1038097195422978059/1170024642245832774) 🚩 We will be releasing a hosted version of LangServe for one-click deployments of LangChain applications. [Sign up here](https://forms.gle/KC13Nzn76UeLaghK7) to get on the waitlist. Overview[​](#overview "Direct link to Overview") ------------------------------------------------ [LangServe](https://github.com/langchain-ai/langserve) helps developers deploy `LangChain` [runnables and chains](https://python.langchain.com/docs/expression_language/) as a REST API. This library is integrated with [FastAPI](https://fastapi.tiangolo.com/) and uses [pydantic](https://docs.pydantic.dev/latest/) for data validation. In addition, it provides a client that can be used to call into runnables deployed on a server. A JavaScript client is available in [LangChain.js](https://js.langchain.com/docs/ecosystem/langserve). Features[​](#features "Direct link to Features") ------------------------------------------------ * Input and Output schemas automatically inferred from your LangChain object, and enforced on every API call, with rich error messages * API docs page with JSONSchema and Swagger (insert example link) * Efficient `/invoke`, `/batch` and `/stream` endpoints with support for many concurrent requests on a single server * `/stream_log` endpoint for streaming all (or some) intermediate steps from your chain/agent * **new** as of 0.0.40, supports `/stream_events` to make it easier to stream without needing to parse the output of `/stream_log`. * Playground page at `/playground/` with streaming output and intermediate steps * Built-in (optional) tracing to [LangSmith](https://www.langchain.com/langsmith), just add your API key (see [Instructions](https://docs.smith.langchain.com/)) * All built with battle-tested open-source Python libraries like FastAPI, Pydantic, uvloop and asyncio. * Use the client SDK to call a LangServe server as if it was a Runnable running locally (or call the HTTP API directly) * [LangServe Hub](https://github.com/langchain-ai/langchain/blob/master/templates/README.md) Limitations[​](#limitations "Direct link to Limitations") --------------------------------------------------------- * Client callbacks are not yet supported for events that originate on the server * OpenAPI docs will not be generated when using Pydantic V2. Fast API does not support [mixing pydantic v1 and v2 namespaces](https://github.com/tiangolo/fastapi/issues/10360). See section below for more details. Hosted LangServe[​](#hosted-langserve "Direct link to Hosted LangServe") ------------------------------------------------------------------------ We will be releasing a hosted version of LangServe for one-click deployments of LangChain applications. [Sign up here](https://forms.gle/KC13Nzn76UeLaghK7) to get on the waitlist. 
Security[​](#security "Direct link to Security") ------------------------------------------------ * Vulnerability in Versions 0.0.13 - 0.0.15 -- playground endpoint allows accessing arbitrary files on server. [Resolved in 0.0.16](https://github.com/langchain-ai/langserve/pull/98). Installation[​](#installation "Direct link to Installation") ------------------------------------------------------------ For both client and server: pip install "langserve[all]" or `pip install "langserve[client]"` for client code, and `pip install "langserve[server]"` for server code. LangChain CLI 🛠️[​](#langchain-cli-️ "Direct link to LangChain CLI 🛠️") ------------------------------------------------------------------------- Use the `LangChain` CLI to bootstrap a `LangServe` project quickly. To use the langchain CLI make sure that you have a recent version of `langchain-cli` installed. You can install it with `pip install -U langchain-cli`. Setup[​](#setup "Direct link to Setup") --------------------------------------- **Note**: We use `poetry` for dependency management. Please follow poetry [doc](https://python-poetry.org/docs/) to learn more about it. ### 1\. Create new app using langchain cli command[​](#1-create-new-app-using-langchain-cli-command "Direct link to 1. Create new app using langchain cli command") langchain app new my-app ### 2\. Define the runnable in add\_routes. Go to server.py and edit[​](#2-define-the-runnable-in-add_routes-go-to-serverpy-and-edit "Direct link to 2. Define the runnable in add_routes. Go to server.py and edit") add_routes(app, NotImplemented) ### 3\. Use `poetry` to add 3rd party packages (e.g., langchain-openai, langchain-anthropic, langchain-mistral etc).[​](#3-use-poetry-to-add-3rd-party-packages-eg-langchain-openai-langchain-anthropic-langchain-mistral-etc "Direct link to 3-use-poetry-to-add-3rd-party-packages-eg-langchain-openai-langchain-anthropic-langchain-mistral-etc") poetry add [package-name] // e.g `poetry add langchain-openai` ### 4\. Set up relevant env variables. For example,[​](#4-set-up-relevant-env-variables-for-example "Direct link to 4. Set up relevant env variables. For example,") export OPENAI_API_KEY="sk-..." ### 5\. Serve your app[​](#5-serve-your-app "Direct link to 5. Serve your app") poetry run langchain serve --port=8100 Examples[​](#examples "Direct link to Examples") ------------------------------------------------ Get your LangServe instance started quickly with [LangChain Templates](https://github.com/langchain-ai/langchain/blob/master/templates/README.md). For more examples, see the templates [index](https://github.com/langchain-ai/langchain/blob/master/templates/docs/INDEX.md) or the [examples](https://github.com/langchain-ai/langserve/tree/main/examples) directory. Description Links **LLMs** Minimal example that serves OpenAI and Anthropic chat models. Uses async, supports batching and streaming. [server](https://github.com/langchain-ai/langserve/tree/main/examples/llm/server.py), [client](https://github.com/langchain-ai/langserve/blob/main/examples/llm/client.ipynb) **Retriever** Simple server that exposes a retriever as a runnable. 
[server](https://github.com/langchain-ai/langserve/tree/main/examples/retrieval/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/retrieval/client.ipynb) **Conversational Retriever** A [Conversational Retriever](https://python.langchain.com/docs/expression_language/cookbook/retrieval#conversational-retrieval-chain) exposed via LangServe [server](https://github.com/langchain-ai/langserve/tree/main/examples/conversational_retrieval_chain/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/conversational_retrieval_chain/client.ipynb) **Agent** without **conversation history** based on [OpenAI tools](https://python.langchain.com/docs/modules/agents/agent_types/openai_functions_agent) [server](https://github.com/langchain-ai/langserve/tree/main/examples/agent/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/agent/client.ipynb) **Agent** with **conversation history** based on [OpenAI tools](https://python.langchain.com/docs/modules/agents/agent_types/openai_functions_agent) [server](https://github.com/langchain-ai/langserve/blob/main/examples/agent_with_history/server.py), [client](https://github.com/langchain-ai/langserve/blob/main/examples/agent_with_history/client.ipynb) [RunnableWithMessageHistory](https://python.langchain.com/docs/expression_language/how_to/message_history) to implement chat persisted on backend, keyed off a `session_id` supplied by client. [server](https://github.com/langchain-ai/langserve/tree/main/examples/chat_with_persistence/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/chat_with_persistence/client.ipynb) [RunnableWithMessageHistory](https://python.langchain.com/docs/expression_language/how_to/message_history) to implement chat persisted on backend, keyed off a `conversation_id` supplied by client, and `user_id` (see Auth for implementing `user_id` properly). [server](https://github.com/langchain-ai/langserve/tree/main/examples/chat_with_persistence_and_user/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/chat_with_persistence_and_user/client.ipynb) [Configurable Runnable](https://python.langchain.com/docs/expression_language/how_to/configure) to create a retriever that supports run time configuration of the index name. [server](https://github.com/langchain-ai/langserve/tree/main/examples/configurable_retrieval/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/configurable_retrieval/client.ipynb) [Configurable Runnable](https://python.langchain.com/docs/expression_language/how_to/configure) that shows configurable fields and configurable alternatives. [server](https://github.com/langchain-ai/langserve/tree/main/examples/configurable_chain/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/configurable_chain/client.ipynb) **APIHandler** Shows how to use `APIHandler` instead of `add_routes`. This provides more flexibility for developers to define endpoints. Works well with all FastAPI patterns, but takes a bit more effort. [server](https://github.com/langchain-ai/langserve/tree/main/examples/api_handler_examples/server.py) **LCEL Example** Example that uses LCEL to manipulate a dictionary input. 
[server](https://github.com/langchain-ai/langserve/tree/main/examples/passthrough_dict/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/passthrough_dict/client.ipynb) **Auth** with `add_routes`: Simple authentication that can be applied across all endpoints associated with app. (Not useful on its own for implementing per user logic.) [server](https://github.com/langchain-ai/langserve/tree/main/examples/auth/global_deps/server.py) **Auth** with `add_routes`: Simple authentication mechanism based on path dependencies. (No useful on its own for implementing per user logic.) [server](https://github.com/langchain-ai/langserve/tree/main/examples/auth/path_dependencies/server.py) **Auth** with `add_routes`: Implement per user logic and auth for endpoints that use per request config modifier. (**Note**: At the moment, does not integrate with OpenAPI docs.) [server](https://github.com/langchain-ai/langserve/tree/main/examples/auth/per_req_config_modifier/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/auth/per_req_config_modifier/client.ipynb) **Auth** with `APIHandler`: Implement per user logic and auth that shows how to search only within user owned documents. [server](https://github.com/langchain-ai/langserve/tree/main/examples/auth/api_handler/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/auth/api_handler/client.ipynb) **Widgets** Different widgets that can be used with playground (file upload and chat) [server](https://github.com/langchain-ai/langserve/tree/main/examples/widgets/chat/tuples/server.py) **Widgets** File upload widget used for LangServe playground. [server](https://github.com/langchain-ai/langserve/tree/main/examples/file_processing/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/file_processing/client.ipynb) Sample Application[​](#sample-application "Direct link to Sample Application") ------------------------------------------------------------------------------ ### Server[​](#server "Direct link to Server") Here's a server that deploys an OpenAI chat model, an Anthropic chat model, and a chain that uses the Anthropic model to tell a joke about a topic. #!/usr/bin/env pythonfrom fastapi import FastAPIfrom langchain.prompts import ChatPromptTemplatefrom langchain.chat_models import ChatAnthropic, ChatOpenAIfrom langserve import add_routesapp = FastAPI( title="LangChain Server", version="1.0", description="A simple api server using Langchain's Runnable interfaces",)add_routes( app, ChatOpenAI(model="gpt-3.5-turbo-0125"), path="/openai",)add_routes( app, ChatAnthropic(model="claude-3-haiku-20240307"), path="/anthropic",)model = ChatAnthropic(model="claude-3-haiku-20240307")prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")add_routes( app, prompt | model, path="/joke",)if __name__ == "__main__": import uvicorn uvicorn.run(app, host="localhost", port=8000) **API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.anthropic.ChatAnthropic.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.openai.ChatOpenAI.html) If you intend to call your endpoint from the browser, you will also need to set CORS headers. 
You can use FastAPI's built-in middleware for that: from fastapi.middleware.cors import CORSMiddleware# Set all CORS enabled originsapp.add_middleware( CORSMiddleware, allow_origins=["*"], allow_credentials=True, allow_methods=["*"], allow_headers=["*"], expose_headers=["*"],) ### Docs[​](#docs "Direct link to Docs") If you've deployed the server above, you can view the generated OpenAPI docs using: > ⚠️ If using pydantic v2, docs will not be generated for _invoke_, _batch_, _stream_, _stream\_log_. See [Pydantic](#pydantic) section below for more details. curl localhost:8000/docs make sure to **add** the `/docs` suffix. > ⚠️ Index page `/` is not defined by **design**, so `curl localhost:8000` or visiting the URL will return a 404. If you want content at `/` define an endpoint `@app.get("/")`. ### Client[​](#client "Direct link to Client") Python SDK from langchain.schema import SystemMessage, HumanMessagefrom langchain.prompts import ChatPromptTemplatefrom langchain.schema.runnable import RunnableMapfrom langserve import RemoteRunnableopenai = RemoteRunnable("http://localhost:8000/openai/")anthropic = RemoteRunnable("http://localhost:8000/anthropic/")joke_chain = RemoteRunnable("http://localhost:8000/joke/")joke_chain.invoke({"topic": "parrots"})# or asyncawait joke_chain.ainvoke({"topic": "parrots"})prompt = [ SystemMessage(content='Act like either a cat or a parrot.'), HumanMessage(content='Hello!')]# Supports astreamasync for msg in anthropic.astream(prompt): print(msg, end="", flush=True)prompt = ChatPromptTemplate.from_messages( [("system", "Tell me a long story about {topic}")])# Can define custom chainschain = prompt | RunnableMap({ "openai": openai, "anthropic": anthropic,})chain.batch([{"topic": "parrots"}, {"topic": "cats"}]) **API Reference:**[SystemMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.system.SystemMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnableMap](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableMap.html) In TypeScript (requires LangChain.js version 0.0.166 or later): import { RemoteRunnable } from "@langchain/core/runnables/remote";const chain = new RemoteRunnable({ url: `http://localhost:8000/joke/`,});const result = await chain.invoke({ topic: "cats",}); Python using `requests`: import requestsresponse = requests.post( "http://localhost:8000/joke/invoke", json={'input': {'topic': 'cats'}})response.json() You can also use `curl`: curl --location --request POST 'http://localhost:8000/joke/invoke' \ --header 'Content-Type: application/json' \ --data-raw '{ "input": { "topic": "cats" } }' Endpoints[​](#endpoints "Direct link to Endpoints") --------------------------------------------------- The following code: ...add_routes( app, runnable, path="/my_runnable",) adds of these endpoints to the server: * `POST /my_runnable/invoke` - invoke the runnable on a single input * `POST /my_runnable/batch` - invoke the runnable on a batch of inputs * `POST /my_runnable/stream` - invoke on a single input and stream the output * `POST /my_runnable/stream_log` - invoke on a single input and stream the output, including output of intermediate steps as it's generated * `POST /my_runnable/astream_events` - invoke on a single input and stream events as they are generated, including from 
intermediate steps. * `GET /my_runnable/input_schema` - json schema for input to the runnable * `GET /my_runnable/output_schema` - json schema for output of the runnable * `GET /my_runnable/config_schema` - json schema for config of the runnable These endpoints match the [LangChain Expression Language interface](https://python.langchain.com/docs/expression_language/interface) -- please reference this documentation for more details. Playground[​](#playground "Direct link to Playground") ------------------------------------------------------ You can find a playground page for your runnable at `/my_runnable/playground/`. This exposes a simple UI to [configure](https://python.langchain.com/docs/expression_language/how_to/configure) and invoke your runnable with streaming output and intermediate steps. ![](https://github.com/langchain-ai/langserve/assets/3205522/5ca56e29-f1bb-40f4-84b5-15916384a276) ### Widgets[​](#widgets "Direct link to Widgets") The playground supports [widgets](#playground-widgets) and can be used to test your runnable with different inputs. See the [widgets](#widgets) section below for more details. ### Sharing[​](#sharing "Direct link to Sharing") In addition, for configurable runnables, the playground will allow you to configure the runnable and share a link with the configuration: ![](https://github.com/langchain-ai/langserve/assets/3205522/86ce9c59-f8e4-4d08-9fa3-62030e0f521d) Chat playground[​](#chat-playground "Direct link to Chat playground") --------------------------------------------------------------------- LangServe also supports a chat-focused playground that you can opt into and use under `/my_runnable/playground/`. Unlike the general playground, only certain types of runnables are supported - the runnable's input schema must be a `dict` with either: * a single key, and that key's value must be a list of chat messages. * two keys, one whose value is a list of messages, and the other representing the most recent message. We recommend you use the first format. The runnable must also return either an `AIMessage` or a string. To enable it, you must set `playground_type="chat"` when adding your route. Here's an example: # Declare a chainprompt = ChatPromptTemplate.from_messages( [ ("system", "You are a helpful, professional assistant named Cob."), MessagesPlaceholder(variable_name="messages"), ])chain = prompt | ChatAnthropic(model="claude-2")class InputChat(BaseModel): """Input for the chat endpoint.""" messages: List[Union[HumanMessage, AIMessage, SystemMessage]] = Field( ..., description="The chat messages representing the current conversation.", )add_routes( app, chain.with_types(input_type=InputChat), enable_feedback_endpoint=True, enable_public_trace_link_endpoint=True, playground_type="chat",) If you are using LangSmith, you can also set `enable_feedback_endpoint=True` on your route to enable thumbs-up/thumbs-down buttons after each message, and `enable_public_trace_link_endpoint=True` to add a button that creates public traces for runs. Note that you will also need to set the following environment variables: export LANGCHAIN_TRACING_V2="true"export LANGCHAIN_PROJECT="YOUR_PROJECT_NAME"export LANGCHAIN_API_KEY="YOUR_API_KEY" Note: If you enable public trace links, the internals of your chain will be exposed. We recommend only using this setting for demos or testing. 
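For completeness, here is a small sketch of calling such a chat route from the Python client. The `/chat` path and the locally running server are assumptions for illustration; adjust them to match your `add_routes` call.

```python
from langchain_core.messages import HumanMessage
from langserve import RemoteRunnable

# Assumes the chat chain above was registered with add_routes(..., path="/chat")
# and that the server is running locally on port 8000.
chat = RemoteRunnable("http://localhost:8000/chat/")
response = chat.invoke({"messages": [HumanMessage(content="Hi, who are you?")]})
print(response.content)
```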
Legacy Chains[​](#legacy-chains "Direct link to Legacy Chains") --------------------------------------------------------------- LangServe works with both Runnables (constructed via [LangChain Expression Language](https://python.langchain.com/docs/expression_language/)) and legacy chains (inheriting from `Chain`). However, some of the input schemas for legacy chains may be incomplete/incorrect, leading to errors. This can be fixed by updating the `input_schema` property of those chains in LangChain. If you encounter any errors, please open an issue on THIS repo, and we will work to address it. Deployment[​](#deployment "Direct link to Deployment") ------------------------------------------------------ ### Deploy to AWS[​](#deploy-to-aws "Direct link to Deploy to AWS") You can deploy to AWS using the [AWS Copilot CLI](https://aws.github.io/copilot-cli/) copilot init --app [application-name] --name [service-name] --type 'Load Balanced Web Service' --dockerfile './Dockerfile' --deploy Click [here](https://aws.amazon.com/containers/copilot/) to learn more. ### Deploy to Azure[​](#deploy-to-azure "Direct link to Deploy to Azure") You can deploy to Azure using Azure Container Apps (Serverless): az containerapp up --name [container-app-name] --source . --resource-group [resource-group-name] --environment [environment-name] --ingress external --target-port 8001 --env-vars=OPENAI_API_KEY=your_key You can find more info [here](https://learn.microsoft.com/en-us/azure/container-apps/containerapp-up) ### Deploy to GCP[​](#deploy-to-gcp "Direct link to Deploy to GCP") You can deploy to GCP Cloud Run using the following command: gcloud run deploy [your-service-name] --source . --port 8001 --allow-unauthenticated --region us-central1 --set-env-vars=OPENAI_API_KEY=your_key ### Community Contributed[​](#community-contributed "Direct link to Community Contributed") #### Deploy to Railway[​](#deploy-to-railway "Direct link to Deploy to Railway") [Example Railway Repo](https://github.com/PaulLockett/LangServe-Railway/tree/main) [![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/template/pW9tXP?referralCode=c-aq4K) Pydantic[​](#pydantic "Direct link to Pydantic") ------------------------------------------------ LangServe provides support for Pydantic 2 with some limitations. 1. OpenAPI docs will not be generated for invoke/batch/stream/stream\_log when using Pydantic V2. FastAPI does not support mixing pydantic v1 and v2 namespaces. 2. LangChain uses the v1 namespace in Pydantic v2. Please read the [following guidelines to ensure compatibility with LangChain](https://github.com/langchain-ai/langchain/discussions/9337) Except for these limitations, we expect the API endpoints, the playground and any other features to work as expected. Advanced[​](#advanced "Direct link to Advanced") ------------------------------------------------ ### Handling Authentication[​](#handling-authentication "Direct link to Handling Authentication") If you need to add authentication to your server, please read FastAPI's documentation about [dependencies](https://fastapi.tiangolo.com/tutorial/dependencies/) and [security](https://fastapi.tiangolo.com/tutorial/security/). The below examples show how to wire up authentication logic to LangServe endpoints using FastAPI primitives. You are responsible for providing the actual authentication logic, the users table etc. If you're not sure what you're doing, you could try using an existing solution such as [Auth0](https://auth0.com/). 
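As a purely illustrative sketch of the FastAPI side of this (the header name and token check below are placeholders, and nothing here is LangServe-specific), a global dependency can gate every route registered on the app, including the ones added by `add_routes`:

```python
from fastapi import Depends, FastAPI, Header, HTTPException

async def verify_token(x_token: str = Header(...)) -> None:
    # Placeholder check: replace with your real authentication logic.
    if x_token != "expected-secret-token":
        raise HTTPException(status_code=401, detail="Invalid or missing X-Token header")

# Every route added to this app must pass the dependency first.
app = FastAPI(dependencies=[Depends(verify_token)])
```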
#### Using add\_routes[​](#using-add_routes "Direct link to Using add_routes") If you're using `add_routes`, see examples [here](https://github.com/langchain-ai/langserve/tree/main/examples/auth).

| Description | Links |
| --- | --- |
| **Auth** with `add_routes`: Simple authentication that can be applied across all endpoints associated with the app. (Not useful on its own for implementing per-user logic.) | [server](https://github.com/langchain-ai/langserve/tree/main/examples/auth/global_deps/server.py) |
| **Auth** with `add_routes`: Simple authentication mechanism based on path dependencies. (Not useful on its own for implementing per-user logic.) | [server](https://github.com/langchain-ai/langserve/tree/main/examples/auth/path_dependencies/server.py) |
| **Auth** with `add_routes`: Implement per-user logic and auth for endpoints that use the per-request config modifier. (**Note**: At the moment, does not integrate with OpenAPI docs.) | [server](https://github.com/langchain-ai/langserve/tree/main/examples/auth/per_req_config_modifier/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/auth/per_req_config_modifier/client.ipynb) |

Alternatively, you can use FastAPI's [middleware](https://fastapi.tiangolo.com/tutorial/middleware/). Using global dependencies and path dependencies has the advantage that auth will be properly supported in the OpenAPI docs page, but these are not sufficient for implementing per-user logic (e.g., making an application that can search only within user-owned documents). If you need to implement per-user logic, you can use the `per_req_config_modifier` or `APIHandler` (below) to implement this logic. **Per User** If you need authorization or logic that is user dependent, specify `per_req_config_modifier` when using `add_routes`. Use a callable that receives the raw `Request` object and can extract relevant information from it for authentication and authorization purposes. #### Using APIHandler[​](#using-apihandler "Direct link to Using APIHandler") If you feel comfortable with FastAPI and Python, you can use LangServe's [APIHandler](https://github.com/langchain-ai/langserve/blob/main/examples/api_handler_examples/server.py).

| Description | Links |
| --- | --- |
| **Auth** with `APIHandler`: Implement per-user logic and auth that shows how to search only within user-owned documents. | [server](https://github.com/langchain-ai/langserve/tree/main/examples/auth/api_handler/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/auth/api_handler/client.ipynb) |
| **APIHandler**: Shows how to use `APIHandler` instead of `add_routes`. This provides more flexibility for developers to define endpoints. Works well with all FastAPI patterns, but takes a bit more effort. | [server](https://github.com/langchain-ai/langserve/tree/main/examples/api_handler_examples/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/api_handler_examples/client.ipynb) |

It's a bit more work, but gives you complete control over the endpoint definitions, so you can do whatever custom logic you need for auth. ### Files[​](#files "Direct link to Files") LLM applications often deal with files. There are different architectures that can be used to implement file processing; at a high level: 1. The file may be uploaded to the server via a dedicated endpoint and processed using a separate endpoint 2. The file may be uploaded by either value (bytes of file) or reference (e.g., s3 url to file content) 3. The processing endpoint may be blocking or non-blocking 4.
If significant processing is required, the processing may be offloaded to a dedicated process pool You should determine what is the appropriate architecture for your application. Currently, to upload files by value to a runnable, use base64 encoding for the file (`multipart/form-data` is not supported yet). Here's an [example](https://github.com/langchain-ai/langserve/tree/main/examples/file_processing) that shows how to use base64 encoding to send a file to a remote runnable. Remember, you can always upload files by reference (e.g., s3 url) or upload them as multipart/form-data to a dedicated endpoint. ### Custom Input and Output Types[​](#custom-input-and-output-types "Direct link to Custom Input and Output Types") Input and Output types are defined on all runnables. You can access them via the `input_schema` and `output_schema` properties. `LangServe` uses these types for validation and documentation. If you want to override the default inferred types, you can use the `with_types` method. Here's a toy example to illustrate the idea: from typing import Anyfrom fastapi import FastAPIfrom langchain.schema.runnable import RunnableLambdaapp = FastAPI()def func(x: Any) -> int: """Mistyped function that should accept an int but accepts anything.""" return x + 1runnable = RunnableLambda(func).with_types( input_type=int,)add_routes(app, runnable) **API Reference:**[RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html) ### Custom User Types[​](#custom-user-types "Direct link to Custom User Types") Inherit from `CustomUserType` if you want the data to de-serialize into a pydantic model rather than the equivalent dict representation. At the moment, this type only works _server_ side and is used to specify desired _decoding_ behavior. If inheriting from this type the server will keep the decoded type as a pydantic model instead of converting it into a dict. from fastapi import FastAPIfrom langchain.schema.runnable import RunnableLambdafrom langserve import add_routesfrom langserve.schema import CustomUserTypeapp = FastAPI()class Foo(CustomUserType): bar: intdef func(foo: Foo) -> int: """Sample function that expects a Foo type which is a pydantic model""" assert isinstance(foo, Foo) return foo.bar# Note that the input and output type are automatically inferred!# You do not need to specify them.# runnable = RunnableLambda(func).with_types( # <-- Not needed in this case# input_type=Foo,# output_type=int,#add_routes(app, RunnableLambda(func), path="/foo") **API Reference:**[RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html) ### Playground Widgets[​](#playground-widgets "Direct link to Playground Widgets") The playground allows you to define custom widgets for your runnable from the backend. Here are a few examples: Description Links **Widgets** Different widgets that can be used with playground (file upload and chat) [server](https://github.com/langchain-ai/langserve/tree/main/examples/widgets/chat/tuples/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/widgets/client.ipynb) **Widgets** File upload widget used for LangServe playground. 
[server](https://github.com/langchain-ai/langserve/tree/main/examples/file_processing/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/file_processing/client.ipynb) #### Schema[​](#schema "Direct link to Schema") * A widget is specified at the field level and shipped as part of the JSON schema of the input type * A widget must contain a key called `type` with the value being one of a well known list of widgets * Other widget keys will be associated with values that describe paths in a JSON object type JsonPath = number | string | (number | string)[];type NameSpacedPath = { title: string; path: JsonPath }; // Using title to mimick json schema, but can use namespacetype OneOfPath = { oneOf: JsonPath[] };type Widget = { type: string; // Some well known type (e.g., base64file, chat etc.) [key: string]: JsonPath | NameSpacedPath | OneOfPath;}; ### Available Widgets[​](#available-widgets "Direct link to Available Widgets") There are only two widgets that the user can specify manually right now: 1. File Upload Widget 2. Chat History Widget See below more information about these widgets. All other widgets on the playground UI are created and managed automatically by the UI based on the config schema of the Runnable. When you create Configurable Runnables, the playground should create appropriate widgets for you to control the behavior. #### File Upload Widget[​](#file-upload-widget "Direct link to File Upload Widget") Allows creation of a file upload input in the UI playground for files that are uploaded as base64 encoded strings. Here's the full [example](https://github.com/langchain-ai/langserve/tree/main/examples/file_processing). Snippet: try: from pydantic.v1 import Fieldexcept ImportError: from pydantic import Fieldfrom langserve import CustomUserType# ATTENTION: Inherit from CustomUserType instead of BaseModel otherwise# the server will decode it into a dict instead of a pydantic model.class FileProcessingRequest(CustomUserType): """Request including a base64 encoded file.""" # The extra field is used to specify a widget for the playground UI. file: str = Field(..., extra={"widget": {"type": "base64file"}}) num_chars: int = 100 Example widget: ![](https://github.com/langchain-ai/langserve/assets/3205522/52199e46-9464-4c2e-8be8-222250e08c3f) ### Chat Widget[​](#chat-widget "Direct link to Chat Widget") Look at the [widget example](https://github.com/langchain-ai/langserve/tree/main/examples/widgets/chat/tuples/server.py). To define a chat widget, make sure that you pass "type": "chat". * "input" is JSONPath to the field in the _Request_ that has the new input message. * "output" is JSONPath to the field in the _Response_ that has new output message(s). * Don't specify these fields if the entire input or output should be used as they are ( e.g., if the output is a list of chat messages.) 
Here's a snippet: class ChatHistory(CustomUserType): chat_history: List[Tuple[str, str]] = Field( ..., examples=[[("human input", "ai response")]], extra={"widget": {"type": "chat", "input": "question", "output": "answer"}}, ) question: strdef _format_to_messages(input: ChatHistory) -> List[BaseMessage]: """Format the input to a list of messages.""" history = input.chat_history user_input = input.question messages = [] for human, ai in history: messages.append(HumanMessage(content=human)) messages.append(AIMessage(content=ai)) messages.append(HumanMessage(content=user_input)) return messagesmodel = ChatOpenAI()chat_model = RunnableParallel({"answer": (RunnableLambda(_format_to_messages) | model)})add_routes( app, chat_model.with_types(input_type=ChatHistory), config_keys=["configurable"], path="/chat",) Example widget: ![](https://github.com/langchain-ai/langserve/assets/3205522/a71ff37b-a6a9-4857-a376-cf27c41d3ca4) You can also specify a list of messages as a parameter directly, as shown in this snippet: prompt = ChatPromptTemplate.from_messages( [ ("system", "You are a helpful assistant named Cob."), MessagesPlaceholder(variable_name="messages"), ])chain = prompt | ChatAnthropic(model="claude-2")class MessageListInput(BaseModel): """Input for the chat endpoint.""" messages: List[Union[HumanMessage, AIMessage]] = Field( ..., description="The chat messages representing the current conversation.", extra={"widget": {"type": "chat", "input": "messages"}}, )add_routes( app, chain.with_types(input_type=MessageListInput), path="/chat",) See [this sample file](https://github.com/langchain-ai/langserve/tree/main/examples/widgets/chat/message_list/server.py) for an example. ### Enabling / Disabling Endpoints (LangServe >=0.0.33)[​](#enabling--disabling-endpoints-langserve-0033 "Direct link to Enabling / Disabling Endpoints (LangServe >=0.0.33)") You can enable / disable which endpoints are exposed when adding routes for a given chain. Use `enabled_endpoints` if you want to make sure to never get a new endpoint when upgrading langserve to a newer version. Enable: The code below will only enable `invoke`, `batch` and the corresponding `config_hash` endpoint variants. add_routes(app, chain, enabled_endpoints=["invoke", "batch", "config_hashes"], path="/mychain") Disable: The code below will disable the playground for the chain. add_routes(app, chain, disabled_endpoints=["playground"], path="/mychain")
https://python.langchain.com/v0.2/docs/concepts/
* [](/v0.2/) * Conceptual guide On this page Conceptual guide ================ This section contains introductions to key parts of LangChain. Architecture[​](#architecture "Direct link to Architecture") ------------------------------------------------------------ LangChain as a framework consists of a number of packages. ### `langchain-core`[​](#langchain-core "Direct link to langchain-core") This package contains base abstractions of different components and ways to compose them together. The interfaces for core components like LLMs, vector stores, retrievers and more are defined here. No third party integrations are defined here. The dependencies are kept purposefully very lightweight. ### Partner packages[​](#partner-packages "Direct link to Partner packages") While the long tail of integrations are in `langchain-community`, we split popular integrations into their own packages (e.g. `langchain-openai`, `langchain-anthropic`, etc). This was done in order to improve support for these important integrations. ### `langchain`[​](#langchain "Direct link to langchain") The main `langchain` package contains chains, agents, and retrieval strategies that make up an application's cognitive architecture. These are NOT third party integrations. All chains, agents, and retrieval strategies here are NOT specific to any one integration, but rather generic across all integrations. ### `langchain-community`[​](#langchain-community "Direct link to langchain-community") This package contains third party integrations that are maintained by the LangChain community. Key partner packages are separated out (see below). This contains all integrations for various components (LLMs, vector stores, retrievers). All dependencies in this package are optional to keep the package as lightweight as possible. ### [`langgraph`](https://langchain-ai.github.io/langgraph)[​](#langgraph "Direct link to langgraph") `langgraph` is an extension of `langchain` aimed at building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. LangGraph exposes high level interfaces for creating common types of agents, as well as a low-level API for composing custom flows. ### [`langserve`](/v0.2/docs/langserve/)[​](#langserve "Direct link to langserve") A package to deploy LangChain chains as REST APIs. Makes it easy to get a production ready API up and running. ### [LangSmith](https://docs.smith.langchain.com)[​](#langsmith "Direct link to langsmith") A developer platform that lets you debug, test, evaluate, and monitor LLM applications. ![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](/v0.2/svg/langchain_stack.svg "LangChain Framework Overview")![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](/v0.2/svg/langchain_stack_dark.svg "LangChain Framework Overview") LangChain Expression Language (LCEL)[​](#langchain-expression-language-lcel "Direct link to LangChain Expression Language (LCEL)") ---------------------------------------------------------------------------------------------------------------------------------- LangChain Expression Language, or LCEL, is a declarative way to chain LangChain components. 
LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest β€œprompt + LLM” chain to the most complex chains (we’ve seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL: **First-class streaming support** When you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means eg. we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens. **Async support** Any chain built with LCEL can be called both with the synchronous API (eg. in your Jupyter notebook while prototyping) as well as with the asynchronous API (eg. in a [LangServe](/v0.2/docs/langserve/) server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server. **Optimized parallel execution** Whenever your LCEL chains have steps that can be executed in parallel (eg if you fetch documents from multiple retrievers) we automatically do it, both in the sync and the async interfaces, for the smallest possible latency. **Retries and fallbacks** Configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We’re currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost. **Access intermediate results** For more complex chains it’s often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. You can stream intermediate results, and it’s available on every [LangServe](/v0.2/docs/langserve/) server. **Input and output schemas** Input and output schemas give every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe. [**Seamless LangSmith tracing**](https://docs.smith.langchain.com) As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step. With LCEL, **all** steps are automatically logged to [LangSmith](https://docs.smith.langchain.com/) for maximum observability and debuggability. [**Seamless LangServe deployment**](/v0.2/docs/langserve/) Any chain created with LCEL can be easily deployed using [LangServe](/v0.2/docs/langserve/). ### Runnable interface[​](#runnable-interface "Direct link to Runnable interface") To make it as easy as possible to create custom chains, we've implemented a ["Runnable"](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol. Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about below. This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way. 
The standard interface includes: * `stream`: stream back chunks of the response * `invoke`: call the chain on an input * `batch`: call the chain on a list of inputs These also have corresponding async methods that should be used with [asyncio](https://docs.python.org/3/library/asyncio.html) `await` syntax for concurrency: * `astream`: stream back chunks of the response async * `ainvoke`: call the chain on an input async * `abatch`: call the chain on a list of inputs async * `astream_log`: stream back intermediate steps as they happen, in addition to the final response * `astream_events`: **beta** stream events as they happen in the chain (introduced in `langchain-core` 0.1.14) The **input type** and **output type** vary by component:

| Component | Input Type | Output Type |
| --- | --- | --- |
| Prompt | Dictionary | PromptValue |
| ChatModel | Single string, list of chat messages or a PromptValue | ChatMessage |
| LLM | Single string, list of chat messages or a PromptValue | String |
| OutputParser | The output of an LLM or ChatModel | Depends on the parser |
| Retriever | Single string | List of Documents |
| Tool | Single string or dictionary, depending on the tool | Depends on the tool |

All runnables expose input and output **schemas** to inspect the inputs and outputs: * `input_schema`: an input Pydantic model auto-generated from the structure of the Runnable * `output_schema`: an output Pydantic model auto-generated from the structure of the Runnable Components[​](#components "Direct link to Components") ------------------------------------------------------ LangChain provides standard, extendable interfaces and external integrations for various components useful for building with LLMs. Some components LangChain implements, some components we rely on third-party integrations for, and others are a mix. ### Chat models[​](#chat-models "Direct link to Chat models") Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text). These are traditionally newer models (older models are generally `LLMs`, see below). Chat models support the assignment of distinct roles to conversation messages, helping to distinguish messages from the AI, users, and instructions such as system messages. Although the underlying models are messages in, message out, the LangChain wrappers also allow these models to take a string as input. This means you can easily use chat models in place of LLMs. When a string is passed in as input, it is converted to a `HumanMessage` and then passed to the underlying model. LangChain does not host any Chat Models, rather we rely on third party integrations. We have some standardized parameters when constructing ChatModels: * `model`: the name of the model * `temperature`: the sampling temperature * `timeout`: request timeout * `max_tokens`: max tokens to generate * `stop`: default stop sequences * `max_retries`: max number of times to retry requests * `api_key`: API key for the model provider * `base_url`: endpoint to send requests to Some important things to note: * standard params only apply to model providers that expose parameters with the intended functionality. For example, some providers do not expose a configuration for maximum output tokens, so `max_tokens` can't be supported on these. * standard params are currently only enforced on integrations that have their own integration packages (e.g. `langchain-openai`, `langchain-anthropic`, etc.); they're not enforced on models in `langchain-community`. ChatModels also accept other parameters that are specific to that integration.
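As a sketch, the standardized parameters look like this when constructing a chat model (shown with the OpenAI integration; other providers accept the same names where they support the underlying feature):

```python
from langchain_openai import ChatOpenAI

# Standardized constructor parameters; exact support varies by provider.
model = ChatOpenAI(
    model="gpt-3.5-turbo-0125",
    temperature=0,
    max_tokens=256,
    timeout=30,
    max_retries=2,
)
```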
To find all the parameters supported by a ChatModel head to the API reference for that model. info **Tool Calling** Some chat models have been fine-tuned for tool calling and provide a dedicated API for tool calling. Generally, such models are better at tool calling than non-fine-tuned models, and are recommended for use cases that require tool calling. Please see the [tool calling section](/v0.2/docs/concepts/#functiontool-calling) for more information. For specifics on how to use chat models, see the [relevant how-to guides here](/v0.2/docs/how_to/#chat-models). #### Multimodality[​](#multimodality "Direct link to Multimodality") Some chat models are multimodal, accepting images, audio and even video as inputs. These are still less common, meaning model providers haven't standardized on the "best" way to define the API. Multimodal **outputs** are even less common. As such, we've kept our multimodal abstractions fairly light weight and plan to further solidify the multimodal APIs and interaction patterns as the field matures. In LangChain, most chat models that support multimodal inputs also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations. For specifics on how to use multimodal models, see the [relevant how-to guides here](/v0.2/docs/how_to/#multimodal). For a full list of LangChain model providers with multimodal models, [check out this table](/v0.2/docs/integrations/chat/#advanced-features). ### LLMs[​](#llms "Direct link to LLMs") caution Pure text-in/text-out LLMs tend to be older or lower-level. Many popular models are best used as [chat completion models](/v0.2/docs/concepts/#chat-models), even for non-chat use cases. You are probably looking for [the section above instead](/v0.2/docs/concepts/#chat-models). Language models that takes a string as input and returns a string. These are traditionally older models (newer models generally are [Chat Models](/v0.2/docs/concepts/#chat-models), see above). Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input. This gives them the same interface as [Chat Models](/v0.2/docs/concepts/#chat-models). When messages are passed in as input, they will be formatted into a string under the hood before being passed to the underlying model. LangChain does not host any LLMs, rather we rely on third party integrations. For specifics on how to use LLMs, see the [relevant how-to guides here](/v0.2/docs/how_to/#llms). ### Messages[​](#messages "Direct link to Messages") Some language models take a list of messages as input and return a message. There are a few different types of messages. All messages have a `role`, `content`, and `response_metadata` property. The `role` describes WHO is saying the message. LangChain has different message classes for different roles. The `content` property describes the content of the message. This can be a few different things: * A string (most models deal this type of content) * A List of dictionaries (this is used for multimodal input, where the dictionary contains information about that input type and that input location) #### HumanMessage[​](#humanmessage "Direct link to HumanMessage") This represents a message from the user. #### AIMessage[​](#aimessage "Direct link to AIMessage") This represents a message from the model. 
In addition to the `content` property, these messages also have: **`response_metadata`** The `response_metadata` property contains additional metadata about the response. The data here is often specific to each model provider. This is where information like log-probs and token usage may be stored. **`tool_calls`** These represent a decision from an language model to call a tool. They are included as part of an `AIMessage` output. They can be accessed from there with the `.tool_calls` property. This property returns a list of dictionaries. Each dictionary has the following keys: * `name`: The name of the tool that should be called. * `args`: The arguments to that tool. * `id`: The id of that tool call. #### SystemMessage[​](#systemmessage "Direct link to SystemMessage") This represents a system message, which tells the model how to behave. Not every model provider supports this. #### FunctionMessage[​](#functionmessage "Direct link to FunctionMessage") This represents the result of a function call. In addition to `role` and `content`, this message has a `name` parameter which conveys the name of the function that was called to produce this result. #### ToolMessage[​](#toolmessage "Direct link to ToolMessage") This represents the result of a tool call. This is distinct from a FunctionMessage in order to match OpenAI's `function` and `tool` message types. In addition to `role` and `content`, this message has a `tool_call_id` parameter which conveys the id of the call to the tool that was called to produce this result. ### Prompt templates[​](#prompt-templates "Direct link to Prompt templates") Prompt templates help to translate user input and parameters into instructions for a language model. This can be used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output. Prompt Templates take as input a dictionary, where each key represents a variable in the prompt template to fill in. Prompt Templates output a PromptValue. This PromptValue can be passed to an LLM or a ChatModel, and can also be cast to a string or a list of messages. The reason this PromptValue exists is to make it easy to switch between strings and messages. There are a few different types of prompt templates: #### String PromptTemplates[​](#string-prompttemplates "Direct link to String PromptTemplates") These prompt templates are used to format a single string, and generally are used for simpler inputs. For example, a common way to construct and use a PromptTemplate is as follows: from langchain_core.prompts import PromptTemplateprompt_template = PromptTemplate.from_template("Tell me a joke about {topic}")prompt_template.invoke({"topic": "cats"}) **API Reference:**[PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) #### ChatPromptTemplates[​](#chatprompttemplates "Direct link to ChatPromptTemplates") These prompt templates are used to format a list of messages. These "templates" consist of a list of templates themselves. 
For example, a common way to construct and use a ChatPromptTemplate is as follows: from langchain_core.prompts import ChatPromptTemplateprompt_template = ChatPromptTemplate.from_messages([ ("system", "You are a helpful assistant"), ("user", "Tell me a joke about {topic}")])prompt_template.invoke({"topic": "cats"}) **API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) In the above example, this ChatPromptTemplate will construct two messages when called. The first is a system message, that has no variables to format. The second is a HumanMessage, and will be formatted by the `topic` variable the user passes in. #### MessagesPlaceholder[​](#messagesplaceholder "Direct link to MessagesPlaceholder") This prompt template is responsible for adding a list of messages in a particular place. In the above ChatPromptTemplate, we saw how we could format two messages, each one a string. But what if we wanted the user to pass in a list of messages that we would slot into a particular spot? This is how you use MessagesPlaceholder. from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholderfrom langchain_core.messages import HumanMessageprompt_template = ChatPromptTemplate.from_messages([ ("system", "You are a helpful assistant"), MessagesPlaceholder("msgs")])prompt_template.invoke({"msgs": [HumanMessage(content="hi!")]}) **API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.MessagesPlaceholder.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) This will produce a list of two messages, the first one being a system message, and the second one being the HumanMessage we passed in. If we had passed in 5 messages, then it would have produced 6 messages in total (the system message plus the 5 passed in). This is useful for letting a list of messages be slotted into a particular spot. An alternative way to accomplish the same thing without using the `MessagesPlaceholder` class explicitly is: prompt_template = ChatPromptTemplate.from_messages([ ("system", "You are a helpful assistant"), ("placeholder", "{msgs}") # <-- This is the changed part]) For specifics on how to use prompt templates, see the [relevant how-to guides here](/v0.2/docs/how_to/#prompt-templates). ### Example selectors[​](#example-selectors "Direct link to Example selectors") One common prompting technique for achieving better performance is to include examples as part of the prompt. This gives the language model concrete examples of how it should behave. Sometimes these examples are hardcoded into the prompt, but for more advanced situations it may be nice to dynamically select them. Example Selectors are classes responsible for selecting and then formatting examples into prompts. For specifics on how to use example selectors, see the [relevant how-to guides here](/v0.2/docs/how_to/#example-selectors). ### Output parsers[​](#output-parsers "Direct link to Output parsers") note The information here refers to parsers that take a text output from a model try to parse it into a more structured representation. More and more models are supporting function (or tool) calling, which handles this automatically. It is recommended to use function/tool calling rather than output parsing. 
See documentation for that [here](/v0.2/docs/concepts/#function-tool-calling). Responsible for taking the output of a model and transforming it to a more suitable format for downstream tasks. Useful when you are using LLMs to generate structured data, or to normalize output from chat models and LLMs. LangChain has lots of different types of output parsers. This is a list of output parsers LangChain supports. The table below has various pieces of information: **Name**: The name of the output parser **Supports Streaming**: Whether the output parser supports streaming. **Has Format Instructions**: Whether the output parser has format instructions. This is generally available except when (a) the desired schema is not specified in the prompt but rather in other parameters (like OpenAI function calling), or (b) when the OutputParser wraps another OutputParser. **Calls LLM**: Whether this output parser itself calls an LLM. This is usually only done by output parsers that attempt to correct misformatted output. **Input Type**: Expected input type. Most output parsers work on both strings and messages, but some (like OpenAI Functions) need a message with specific kwargs. **Output Type**: The output type of the object returned by the parser. **Description**: Our commentary on this output parser and when to use it.

| Name | Supports Streaming | Has Format Instructions | Calls LLM | Input Type | Output Type | Description |
| --- | --- | --- | --- | --- | --- | --- |
| [JSON](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html#langchain_core.output_parsers.json.JsonOutputParser) | βœ… | βœ… | | `str` \| `Message` | JSON object | Returns a JSON object as specified. You can specify a Pydantic model and it will return JSON for that model. Probably the most reliable output parser for getting structured data that does NOT use function calling. |
| [XML](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.xml.XMLOutputParser.html#langchain_core.output_parsers.xml.XMLOutputParser) | βœ… | βœ… | | `str` \| `Message` | `dict` | Returns a dictionary of tags. Use when XML output is needed. Use with models that are good at writing XML (like Anthropic's). |
| [CSV](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.list.CommaSeparatedListOutputParser.html#langchain_core.output_parsers.list.CommaSeparatedListOutputParser) | βœ… | βœ… | | `str` \| `Message` | `List[str]` | Returns a list of comma separated values. |
| [OutputFixing](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html#langchain.output_parsers.fix.OutputFixingParser) | | | βœ… | `str` \| `Message` | | Wraps another output parser. If that output parser errors, then this will pass the error message and the bad output to an LLM and ask it to fix the output. |
| [RetryWithError](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.retry.RetryWithErrorOutputParser.html#langchain.output_parsers.retry.RetryWithErrorOutputParser) | | | βœ… | `str` \| `Message` | | Wraps another output parser. If that output parser errors, then this will pass the original inputs, the bad output, and the error message to an LLM and ask it to fix it. Compared to OutputFixingParser, this one also sends the original instructions. |
| [Pydantic](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.pydantic.PydanticOutputParser.html#langchain_core.output_parsers.pydantic.PydanticOutputParser) | | βœ… | | `str` \| `Message` | `pydantic.BaseModel` | Takes a user defined Pydantic model and returns data in that format. |
| [YAML](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.yaml.YamlOutputParser.html#langchain.output_parsers.yaml.YamlOutputParser) | | βœ… | | `str` \| `Message` | `pydantic.BaseModel` | Takes a user defined Pydantic model and returns data in that format. Uses YAML to encode it. |
| [PandasDataFrame](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.pandas_dataframe.PandasDataFrameOutputParser.html#langchain.output_parsers.pandas_dataframe.PandasDataFrameOutputParser) | | βœ… | | `str` \| `Message` | `dict` | Useful for doing operations with pandas DataFrames. |
| [Enum](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.enum.EnumOutputParser.html#langchain.output_parsers.enum.EnumOutputParser) | | βœ… | | `str` \| `Message` | `Enum` | Parses response into one of the provided enum values. |
| [Datetime](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.datetime.DatetimeOutputParser.html#langchain.output_parsers.datetime.DatetimeOutputParser) | | βœ… | | `str` \| `Message` | `datetime.datetime` | Parses response into a datetime string. |
| [Structured](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.structured.StructuredOutputParser.html#langchain.output_parsers.structured.StructuredOutputParser) | | βœ… | | `str` \| `Message` | `Dict[str, str]` | An output parser that returns structured information. It is less powerful than other output parsers since it only allows for fields to be strings. This can be useful when you are working with smaller LLMs. |

For specifics on how to use output parsers, see the [relevant how-to guides here](/v0.2/docs/how_to/#output-parsers). ### Chat history[​](#chat-history "Direct link to Chat history") Most LLM applications have a conversational interface. An essential component of a conversation is being able to refer to information introduced earlier in the conversation. At bare minimum, a conversational system should be able to access some window of past messages directly. The concept of `ChatHistory` refers to a class in LangChain which can be used to wrap an arbitrary chain. This `ChatHistory` will keep track of inputs and outputs of the underlying chain, and append them as messages to a message database. Future interactions will then load those messages and pass them into the chain as part of the input. ### Documents[​](#documents "Direct link to Documents") A Document object in LangChain contains information about some data. It has two attributes: * `page_content: str`: The content of this document. Currently is only a string. * `metadata: dict`: Arbitrary metadata associated with this document. Can track the document id, file name, etc. ### Document loaders[​](#document-loaders "Direct link to Document loaders") These classes load Document objects. LangChain has hundreds of integrations with various data sources to load data from: Slack, Notion, Google Drive, etc. Each DocumentLoader has its own specific parameters, but they can all be invoked in the same way with the `.load` method. An example use case is as follows: from langchain_community.document_loaders.csv_loader import CSVLoaderloader = CSVLoader( ...
# <-- Integration specific parameters here)data = loader.load() **API Reference:**[CSVLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.csv_loader.CSVLoader.html) For specifics on how to use document loaders, see the [relevant how-to guides here](/v0.2/docs/how_to/#document-loaders). ### Text splitters[​](#text-splitters "Direct link to Text splitters") Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example is you may want to split a long document into smaller chunks that can fit into your model's context window. LangChain has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents. When you want to deal with long pieces of text, it is necessary to split up that text into chunks. As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What "semantically related" means could depend on the type of text. This notebook showcases several ways to do that. At a high level, text splitters work as following: 1. Split the text up into small, semantically meaningful chunks (often sentences). 2. Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function). 3. Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks). That means there are two different axes along which you can customize your text splitter: 1. How the text is split 2. How the chunk size is measured For specifics on how to use text splitters, see the [relevant how-to guides here](/v0.2/docs/how_to/#text-splitters). ### Embedding models[​](#embedding-models "Direct link to Embedding models") Embedding models create a vector representation of a piece of text. You can think of a vector as an array of numbers that captures the semantic meaning of the text. By representing the text in this way, you can perform mathematical operations that allow you to do things like search for other pieces of text that are most similar in meaning. These natural language search capabilities underpin many types of [context retrieval](/v0.2/docs/concepts/#retrieval), where we provide an LLM with the relevant data it needs to effectively respond to a query. ![](/v0.2/assets/images/embeddings-9c2616450a3b4f497a2d95a696b5f1a7.png) The `Embeddings` class is a class designed for interfacing with text embedding models. There are many different embedding model providers (OpenAI, Cohere, Hugging Face, etc) and local models, and this class is designed to provide a standard interface for all of them. The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself). For specifics on how to use embedding models, see the [relevant how-to guides here](/v0.2/docs/how_to/#embedding-models). 
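To make the two methods concrete, here is a minimal sketch using the OpenAI embeddings integration (the model name is just an example; any `Embeddings` implementation exposes the same two methods):

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

# embed_documents takes multiple texts (the documents to be searched over) ...
doc_vectors = embeddings.embed_documents(
    ["LangChain is a framework for LLM apps.", "Embeddings map text to vectors."]
)
# ... while embed_query takes a single text (the search query itself).
query_vector = embeddings.embed_query("What is LangChain?")

print(len(doc_vectors), len(query_vector))
```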
### Vector stores[​](#vector-stores "Direct link to Vector stores") One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search for you. Most vector stores can also store metadata about embedded vectors and support filtering on that metadata before similarity search, allowing you more control over returned documents. Vector stores can be converted to the retriever interface by doing: vectorstore = MyVectorStore()retriever = vectorstore.as_retriever() For specifics on how to use vector stores, see the [relevant how-to guides here](/v0.2/docs/how_to/#vector-stores). ### Retrievers[​](#retrievers "Direct link to Retrievers") A retriever is an interface that returns documents given an unstructured query. It is more general than a vector store. A retriever does not need to be able to store documents, only to return (or retrieve) them. Retrievers can be created from vector stores, but are also broad enough to include [Wikipedia search](/v0.2/docs/integrations/retrievers/wikipedia/) and [Amazon Kendra](/v0.2/docs/integrations/retrievers/amazon_kendra_retriever/). Retrievers accept a string query as input and return a list of Document's as output. For specifics on how to use retrievers, see the [relevant how-to guides here](/v0.2/docs/how_to/#retrievers). ### Tools[​](#tools "Direct link to Tools") Tools are interfaces that an agent, a chain, or a chat model / LLM can use to interact with the world. A tool consists of the following components: 1. The name of the tool 2. A description of what the tool does 3. JSON schema of what the inputs to the tool are 4. The function to call 5. Whether the result of a tool should be returned directly to the user (only relevant for agents) The name, description and JSON schema are provided as context to the LLM, allowing the LLM to determine how to use the tool appropriately. Given a list of available tools and a prompt, an LLM can request that one or more tools be invoked with appropriate arguments. Generally, when designing tools to be used by a chat model or LLM, it is important to keep in mind the following: * Chat models that have been fine-tuned for tool calling will be better at tool calling than non-fine-tuned models. * Non fine-tuned models may not be able to use tools at all, especially if the tools are complex or require multiple tool calls. * Models will perform better if the tools have well-chosen names, descriptions, and JSON schemas. * Simpler tools are generally easier for models to use than more complex tools. For specifics on how to use tools, see the [relevant how-to guides here](/v0.2/docs/how_to/#tools). ### Toolkits[​](#toolkits "Direct link to Toolkits") Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods. All Toolkits expose a `get_tools` method which returns a list of tools. You can therefore do: # Initialize a toolkittoolkit = ExampleTookit(...)# Get list of toolstools = toolkit.get_tools() ### Agents[​](#agents "Direct link to Agents") By themselves, language models can't take actions - they just output text. A big use case for LangChain is creating **agents**. 
Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be. The results of those actions can then be fed back into the agent and it determines whether more actions are needed, or whether it is okay to finish. [LangGraph](https://github.com/langchain-ai/langgraph) is an extension of LangChain specifically aimed at creating highly controllable and customizable agents. Please check out that documentation for a more in depth overview of agent concepts. There is a legacy agent concept in LangChain that we are moving towards deprecating: `AgentExecutor`. AgentExecutor was essentially a runtime for agents. It was a great place to get started, however, it was not flexible enough as you started to have more customized agents. In order to solve that we built LangGraph to be this flexible, highly-controllable runtime. If you are still using AgentExecutor, do not fear: we still have a guide on [how to use AgentExecutor](/v0.2/docs/how_to/agent_executor/). It is recommended, however, that you start to transition to LangGraph. In order to assist in this we have put together a [transition guide on how to do so](/v0.2/docs/how_to/migrate_agent/). ### Callbacks[​](#callbacks "Direct link to Callbacks") LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks. You can subscribe to these events by using the `callbacks` argument available throughout the API. This argument is a list of handler objects, which are expected to implement one or more of the methods described below in more detail. #### Callback Events[​](#callback-events "Direct link to Callback Events")

| Event | Event Trigger | Associated Method |
| --- | --- | --- |
| Chat model start | When a chat model starts | `on_chat_model_start` |
| LLM start | When an LLM starts | `on_llm_start` |
| LLM new token | When an LLM or chat model emits a new token | `on_llm_new_token` |
| LLM ends | When an LLM or chat model ends | `on_llm_end` |
| LLM errors | When an LLM or chat model errors | `on_llm_error` |
| Chain start | When a chain starts running | `on_chain_start` |
| Chain end | When a chain ends | `on_chain_end` |
| Chain error | When a chain errors | `on_chain_error` |
| Tool start | When a tool starts running | `on_tool_start` |
| Tool end | When a tool ends | `on_tool_end` |
| Tool error | When a tool errors | `on_tool_error` |
| Agent action | When an agent takes an action | `on_agent_action` |
| Agent finish | When an agent ends | `on_agent_finish` |
| Retriever start | When a retriever starts | `on_retriever_start` |
| Retriever end | When a retriever ends | `on_retriever_end` |
| Retriever error | When a retriever errors | `on_retriever_error` |
| Text | When arbitrary text is run | `on_text` |
| Retry | When a retry event is run | `on_retry` |

#### Callback handlers[​](#callback-handlers "Direct link to Callback handlers") Callback handlers can either be `sync` or `async`: * Sync callback handlers implement the [BaseCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html) interface. * Async callback handlers implement the [AsyncCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.AsyncCallbackHandler.html) interface.
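As a minimal sketch, a custom sync handler only needs to subclass `BaseCallbackHandler` and implement the events it cares about; the toy handler below simply prints each streamed token (the class name is made up for illustration):

```python
from langchain_core.callbacks import BaseCallbackHandler

class PrintTokenHandler(BaseCallbackHandler):
    """Toy handler that prints each token emitted by an LLM or chat model."""

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(token, end="|", flush=True)

# Pass the handler at request time so it is inherited by child runs, e.g.:
# model.invoke("what color is the sky?", config={"callbacks": [PrintTokenHandler()]})
```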
During run-time LangChain configures an appropriate callback manager (e.g., [CallbackManager](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.CallbackManager.html) or [AsyncCallbackManager](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.AsyncCallbackManager.html)), which will be responsible for calling the appropriate method on each "registered" callback handler when the event is triggered. #### Passing callbacks[​](#passing-callbacks "Direct link to Passing callbacks") The `callbacks` property is available on most objects throughout the API (Models, Tools, Agents, etc.) in two different places: * **Request time callbacks**: Passed at the time of the request in addition to the input data. Available on all standard `Runnable` objects. These callbacks are INHERITED by all children of the object they are defined on. For example, `chain.invoke({"number": 25}, {"callbacks": [handler]})`. * **Constructor callbacks**: `chain = TheNameOfSomeChain(callbacks=[handler])`. These callbacks are passed as arguments to the constructor of the object. The callbacks are scoped only to the object they are defined on, and are **not** inherited by any children of the object. danger Constructor callbacks are scoped only to the object they are defined on. They are **not** inherited by children of the object. If you're creating a custom chain or runnable, you need to remember to propagate request time callbacks to any child objects. Async in Python<=3.10 Any `RunnableLambda`, `RunnableGenerator`, or `Tool` that invokes other runnables and is running async in python<=3.10, will have to propagate callbacks to child objects manually. This is because LangChain cannot automatically propagate callbacks to child objects in this case. This is a common reason why you may fail to see events being emitted from custom runnables or tools. For specifics on how to use callbacks, see the [relevant how-to guides here](/v0.2/docs/how_to/#callbacks). Techniques[​](#techniques "Direct link to Techniques") ------------------------------------------------------ ### Streaming[​](#streaming "Direct link to Streaming") Individual LLM calls often run for much longer than traditional resource requests. This compounds when you build more complex chains or agents that require multiple reasoning steps. Fortunately, LLMs generate output iteratively, which means it's possible to show sensible intermediate results before the final response is ready. Consuming output as soon as it becomes available has therefore become a vital part of the UX around building apps with LLMs to help alleviate latency issues, and LangChain aims to have first-class support for streaming. Below, we'll discuss some concepts and considerations around streaming in LangChain. #### `.stream()` and `.astream()`[​](#stream-and-astream "Direct link to stream-and-astream") Most modules in LangChain include the `.stream()` method (and the equivalent `.astream()` method for [async](https://docs.python.org/3/library/asyncio.html) environments) as an ergonomic streaming interface. `.stream()` returns an iterator, which you can consume with a simple `for` loop.
Here's an example with a chat model: from langchain_anthropic import ChatAnthropicmodel = ChatAnthropic(model="claude-3-sonnet-20240229")for chunk in model.stream("what color is the sky?"): print(chunk.content, end="|", flush=True) **API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) For models (or other components) that don't support streaming natively, this iterator would just yield a single chunk, but you could still use the same general pattern when calling them. Using `.stream()` will also automatically call the model in streaming mode without the need to provide additional config. The type of each outputted chunk depends on the type of component - for example, chat models yield [`AIMessageChunks`](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html). Because this method is part of [LangChain Expression Language](/v0.2/docs/concepts/#langchain-expression-language-lcel), you can handle formatting differences from different outputs using an [output parser](/v0.2/docs/concepts/#output-parsers) to transform each yielded chunk. You can check out [this guide](/v0.2/docs/how_to/streaming/#using-stream) for more detail on how to use `.stream()`. #### `.astream_events()`[​](#astream_events "Direct link to astream_events") While the `.stream()` method is intuitive, it can only return the final generated value of your chain. This is fine for single LLM calls, but as you build more complex chains of several LLM calls together, you may want to use the intermediate values of the chain alongside the final output - for example, returning sources alongside the final generation when building a chat over documents app. There are ways to do this [using callbacks](/v0.2/docs/concepts/#callbacks-1), or by constructing your chain in such a way that it passes intermediate values to the end with something like chained [`.assign()`](/v0.2/docs/how_to/passthrough/) calls, but LangChain also includes an `.astream_events()` method that combines the flexibility of callbacks with the ergonomics of `.stream()`. When called, it returns an iterator which yields [various types of events](/v0.2/docs/how_to/streaming/#event-reference) that you can filter and process according to the needs of your project. Here's one small example that prints just events containing streamed chat model output: from langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_anthropic import ChatAnthropicmodel = ChatAnthropic(model="claude-3-sonnet-20240229")prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")parser = StrOutputParser()chain = prompt | model | parserasync for event in chain.astream_events({"topic": "parrot"}, version="v2"): kind = event["event"] if kind == "on_chat_model_stream": print(event, end="|", flush=True) **API Reference:**[StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) You can roughly think of it as an iterator over callback events (though the format differs) - and you can use it on almost all LangChain components! 
See [this guide](/v0.2/docs/how_to/streaming/#using-stream-events) for more detailed information on how to use `.astream_events()`, including a table listing available events.

#### Callbacks[​](#callbacks-1 "Direct link to Callbacks")

The lowest level way to stream outputs from LLMs in LangChain is via the [callbacks](/v0.2/docs/concepts/#callbacks) system. You can pass a callback handler that handles the [`on_llm_new_token`](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html#langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.on_llm_new_token) event into LangChain components. When that component is invoked, any [LLM](/v0.2/docs/concepts/#llms) or [chat model](/v0.2/docs/concepts/#chat-models) contained in the component calls the callback with the generated token. Within the callback, you could pipe the tokens into some other destination, e.g. an HTTP response. You can also handle the [`on_llm_end`](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html#langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.on_llm_end) event to perform any necessary cleanup. You can see [this how-to section](/v0.2/docs/how_to/#callbacks) for more specifics on using callbacks.

Callbacks were the first technique for streaming introduced in LangChain. While powerful and generalizable, they can be unwieldy for developers. For example:

* You need to explicitly initialize and manage some aggregator or other stream to collect results.
* The execution order isn't explicitly guaranteed, and you could theoretically have a callback run after the `.invoke()` method finishes.
* Providers would often make you pass an additional parameter to stream outputs instead of returning them all at once.
* You would often ignore the result of the actual model call in favor of callback results.

#### Tokens[​](#tokens "Direct link to Tokens")

Most model providers measure input and output in units called **tokens**. Tokens are the basic units that language models read and generate when processing or producing text. The exact definition of a token can vary depending on the specific way the model was trained - for instance, in English, a token could be a single word like "apple", or a part of a word like "app". When you send a model a prompt, the words and characters in the prompt are encoded into tokens using a **tokenizer**. The model then streams back generated output tokens, which the tokenizer decodes into human-readable text. The below example shows how OpenAI models tokenize `LangChain is cool!`:

![](/v0.2/assets/images/tokenization-10f566ab6774724e63dd99646f69655c.png)

You can see that it gets split into 5 different tokens, and that the boundaries between tokens are not exactly the same as word boundaries. The reason language models use tokens rather than something more immediately intuitive like "characters" has to do with how they process and understand text. At a high-level, language models iteratively predict their next generated output based on the initial input and their previous generations. Training the model on tokens allows language models to handle linguistic units (like words or subwords) that carry meaning, rather than individual characters, which makes it easier for the model to learn and understand the structure of the language, including grammar and context.
Furthermore, using tokens can also improve efficiency, since the model processes fewer units of text compared to character-level processing. ### Structured output[​](#structured-output "Direct link to Structured output") LLMs are capable of generating arbitrary text. This enables the model to respond appropriately to a wide range of inputs, but for some use-cases, it can be useful to constrain the LLM's output to a specific format or structure. This is referred to as **structured output**. For example, if the output is to be stored in a relational database, it is much easier if the model generates output that adheres to a defined schema or format. [Extracting specific information](/v0.2/docs/tutorials/extraction/) from unstructured text is another case where this is particularly useful. Most commonly, the output format will be JSON, though other formats such as [YAML](/v0.2/docs/how_to/output_parser_yaml/) can be useful too. Below, we'll discuss a few ways to get structured output from models in LangChain. #### `.with_structured_output()`[​](#with_structured_output "Direct link to with_structured_output") For convenience, some LangChain chat models support a `.with_structured_output()` method. This method only requires a schema as input, and returns a dict or Pydantic object. Generally, this method is only present on models that support one of the more advanced methods described below, and will use one of them under the hood. It takes care of importing a suitable output parser and formatting the schema in the right format for the model. For more information, check out this [how-to guide](/v0.2/docs/how_to/structured_output/#the-with_structured_output-method). #### Raw prompting[​](#raw-prompting "Direct link to Raw prompting") The most intuitive way to get a model to structure output is to ask nicely. In addition to your query, you can give instructions describing what kind of output you'd like, then parse the output using an [output parser](/v0.2/docs/concepts/#output-parsers) to convert the raw model message or string output into something more easily manipulated. The biggest benefit to raw prompting is its flexibility: * Raw prompting does not require any special model features, only sufficient reasoning capability to understand the passed schema. * You can prompt for any format you'd like, not just JSON. This can be useful if the model you are using is more heavily trained on a certain type of data, such as XML or YAML. However, there are some drawbacks too: * LLMs are non-deterministic, and prompting a LLM to consistently output data in the exactly correct format for smooth parsing can be surprisingly difficult and model-specific. * Individual models have quirks depending on the data they were trained on, and optimizing prompts can be quite difficult. Some may be better at interpreting [JSON schema](https://json-schema.org/), others may be best with TypeScript definitions, and still others may prefer XML. While we'll next go over some ways that you can take advantage of features offered by model providers to increase reliability, prompting techniques remain important for tuning your results no matter what method you choose. #### JSON mode[​](#json-mode "Direct link to JSON mode") Some models, such as [Mistral](/v0.2/docs/integrations/chat/mistralai/), [OpenAI](/v0.2/docs/integrations/chat/openai/), [Together AI](/v0.2/docs/integrations/chat/together/) and [Ollama](/v0.2/docs/integrations/chat/ollama/), support a feature called **JSON mode**, usually enabled via config. 
When enabled, JSON mode will constrain the model's output to always be some sort of valid JSON. Often they require some custom prompting, but it's usually much less burdensome and along the lines of, `"you must always return JSON"`, and the [output is easier to parse](/v0.2/docs/how_to/output_parser_json/). It's also generally simpler and more commonly available than tool calling. Here's an example: from langchain_core.prompts import ChatPromptTemplatefrom langchain_openai import ChatOpenAIfrom langchain.output_parsers.json import SimpleJsonOutputParsermodel = ChatOpenAI( model="gpt-4o", model_kwargs={ "response_format": { "type": "json_object" } },)prompt = ChatPromptTemplate.from_template( "Answer the user's question to the best of your ability." 'You must always output a JSON object with an "answer" key and a "followup_question" key.' "{question}")chain = prompt | model | SimpleJsonOutputParser()chain.invoke({ "question": "What is the powerhouse of the cell?" }) **API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [SimpleJsonOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.SimpleJsonOutputParser.html) {'answer': 'The powerhouse of the cell is the mitochondrion. It is responsible for producing energy in the form of ATP through cellular respiration.', 'followup_question': 'Would you like to know more about how mitochondria produce energy?'} For a full list of model providers that support JSON mode, see [this table](/v0.2/docs/integrations/chat/#advanced-features). #### Function/tool calling[​](#functiontool-calling "Direct link to Function/tool calling") info We use the term tool calling interchangeably with function calling. Although function calling is sometimes meant to refer to invocations of a single function, we treat all models as though they can return multiple tool or function calls in each message Tool calling allows a model to respond to a given prompt by generating output that matches a user-defined schema. While the name implies that the model is performing some action, this is actually not the case! The model is coming up with the arguments to a tool, and actually running the tool (or not) is up to the user - for example, if you want to [extract output matching some schema](/v0.2/docs/tutorials/extraction/) from unstructured text, you could give the model an "extraction" tool that takes parameters matching the desired schema, then treat the generated output as your final result. For models that support it, tool calling can be very convenient. It removes the guesswork around how best to prompt schemas in favor of a built-in model feature. It can also more naturally support agentic flows, since you can just pass multiple tool schemas instead of fiddling with enums or unions. Many LLM providers, including [Anthropic](https://www.anthropic.com/), [Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai), [Mistral](https://mistral.ai/), [OpenAI](https://openai.com/), and others, support variants of a tool calling feature. These features typically allow requests to the LLM to include available tools and their schemas, and for responses to include calls to these tools. For instance, given a search engine tool, an LLM might handle a query by first issuing a call to the search engine. 
The system calling the LLM can receive the tool call, execute it, and return the output to the LLM to inform its response. LangChain includes a suite of [built-in tools](/v0.2/docs/integrations/tools/) and supports several methods for defining your own [custom tools](/v0.2/docs/how_to/custom_tools/). LangChain provides a standardized interface for tool calling that is consistent across different models. The standard interface consists of: * `ChatModel.bind_tools()`: a method for specifying which tools are available for a model to call. This method accepts [LangChain tools](/v0.2/docs/concepts/#tools) here. * `AIMessage.tool_calls`: an attribute on the `AIMessage` returned from the model for accessing the tool calls requested by the model. The following how-to guides are good practical resources for using function/tool calling: * [How to return structured data from an LLM](/v0.2/docs/how_to/structured_output/) * [How to use a model to call tools](/v0.2/docs/how_to/tool_calling/) For a full list of model providers that support tool calling, [see this table](/v0.2/docs/integrations/chat/#advanced-features). ### Retrieval[​](#retrieval "Direct link to Retrieval") LLMs are trained on a large but fixed dataset, limiting their ability to reason over private or recent information. Fine-tuning an LLM with specific facts is one way to mitigate this, but is often [poorly suited for factual recall](https://www.anyscale.com/blog/fine-tuning-is-for-form-not-facts) and [can be costly](https://www.glean.com/blog/how-to-build-an-ai-assistant-for-the-enterprise). Retrieval is the process of providing relevant information to an LLM to improve its response for a given input. Retrieval augmented generation (RAG) is the process of grounding the LLM generation (output) using the retrieved information. tip * See our RAG from Scratch [code](https://github.com/langchain-ai/rag-from-scratch) and [video series](https://youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x&feature=shared). * For a high-level guide on retrieval, see this [tutorial on RAG](/v0.2/docs/tutorials/rag/). RAG is only as good as the retrieved documents’ relevance and quality. Fortunately, an emerging set of techniques can be employed to design and improve RAG systems. We've focused on taxonomizing and summarizing many of these techniques (see below figure) and will share some high-level strategic guidance in the following sections. You can and should experiment with using different pieces together. You might also find [this LangSmith guide](https://docs.smith.langchain.com/how_to_guides/evaluation/evaluate_llm_application) useful for showing how to evaluate different iterations of your app. ![](/v0.2/assets/images/rag_landscape-627f1d0fd46b92bc2db0af8f99ec3724.png) #### Query Translation[​](#query-translation "Direct link to Query Translation") First, consider the user input(s) to your RAG system. Ideally, a RAG system can handle a wide range of inputs, from poorly worded questions to complex multi-part queries. **Using an LLM to review and optionally modify the input is the central idea behind query translation.** This serves as a general buffer, optimizing raw user inputs for your retrieval system. For example, this can be as simple as extracting keywords or as complex as generating multiple sub-questions for a complex query. Name When to use Description [Multi-query](/v0.2/docs/how_to/MultiQueryRetriever/) When you need to cover multiple perspectives of a question. 
Rewrite the user question from multiple perspectives, retrieve documents for each rewritten question, return the unique documents for all queries. [Decomposition](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) When a question can be broken down into smaller subproblems. Decompose a question into a set of subproblems / questions, which can either be solved sequentially (use the answer from first + retrieval to answer the second) or in parallel (consolidate each answer into final answer). [Step-back](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) When a higher-level conceptual understanding is required. First prompt the LLM to ask a generic step-back question about higher-level concepts or principles, and retrieve relevant facts about them. Use this grounding to help answer the user question. [HyDE](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) If you have challenges retrieving relevant documents using the raw user inputs. Use an LLM to convert questions into hypothetical documents that answer the question. Use the embedded hypothetical documents to retrieve real documents with the premise that doc-doc similarity search can produce more relevant matches. tip See our RAG from Scratch videos for a few different specific approaches: * [Multi-query](https://youtu.be/JChPi0CRnDY?feature=shared) * [Decomposition](https://youtu.be/h0OPWlEOank?feature=shared) * [Step-back](https://youtu.be/xn1jEjRyJ2U?feature=shared) * [HyDE](https://youtu.be/SaDzIVkYqyY?feature=shared) #### Routing[​](#routing "Direct link to Routing") Second, consider the data sources available to your RAG system. You want to query across more than one database or across structured and unstructured data sources. **Using an LLM to review the input and route it to the appropriate data source is a simple and effective approach for querying across sources.** Name When to use Description [Logical routing](/v0.2/docs/how_to/routing/) When you can prompt an LLM with rules to decide where to route the input. Logical routing can use an LLM to reason about the query and choose which datastore is most appropriate. [Semantic routing](/v0.2/docs/how_to/routing/#routing-by-semantic-similarity) When semantic similarity is an effective way to determine where to route the input. Semantic routing embeds both query and, typically a set of prompts. It then chooses the appropriate prompt based upon similarity. tip See our RAG from Scratch video on [routing](https://youtu.be/pfpIndq7Fi8?feature=shared). #### Query Construction[​](#query-construction "Direct link to Query Construction") Third, consider whether any of your data sources require specific query formats. Many structured databases use SQL. Vector stores often have specific syntax for applying keyword filters to document metadata. **Using an LLM to convert a natural language query into a query syntax is a popular and powerful approach.** In particular, [text-to-SQL](/v0.2/docs/tutorials/sql_qa/), [text-to-Cypher](/v0.2/docs/tutorials/graph/), and [query analysis for metadata filters](/v0.2/docs/tutorials/query_analysis/#query-analysis) are useful ways to interact with structured, graph, and vector databases respectively. Name When to Use Description [Text to SQL](/v0.2/docs/tutorials/sql_qa/) If users are asking questions that require information housed in a relational database, accessible via SQL. This uses an LLM to transform user input into a SQL query. 
[Text-to-Cypher](/v0.2/docs/tutorials/graph/) If users are asking questions that require information housed in a graph database, accessible via Cypher. This uses an LLM to transform user input into a Cypher query. [Self Query](/v0.2/docs/how_to/self_query/) If users are asking questions that are better answered by fetching documents based on metadata rather than similarity with the text. This uses an LLM to transform user input into two things: (1) a string to look up semantically, (2) a metadata filter to go along with it. This is useful because oftentimes questions are about the METADATA of documents (not the content itself).

tip See our [blog post overview](https://blog.langchain.dev/query-construction/) and RAG from Scratch video on [query construction](https://youtu.be/kl6NwWYxvbM?feature=shared), the process of text-to-DSL where DSL is a domain specific language required to interact with a given database. This converts user questions into structured queries.

#### Indexing[​](#indexing "Direct link to Indexing")

Fourth, consider the design of your document index. A simple and powerful idea is to **decouple the documents that you index for retrieval from the documents that you pass to the LLM for generation.** Indexing frequently uses embedding models with vector stores, which [compress the semantic information in documents to fixed-size vectors](/v0.2/docs/concepts/#embedding-models). Many RAG approaches focus on splitting documents into chunks and retrieving some number based on similarity to an input question for the LLM. But chunk size and chunk number can be difficult to set and affect results if they do not provide full context for the LLM to answer a question. Furthermore, LLMs are increasingly capable of processing millions of tokens.

Two approaches can address this tension: (1) A [Multi Vector](/v0.2/docs/how_to/multi_vector/) retriever uses an LLM to translate documents into any form (e.g., often a summary) that is well-suited for indexing, but returns full documents to the LLM for generation. (2) A [ParentDocument](/v0.2/docs/how_to/parent_document_retriever/) retriever embeds document chunks, but also returns full documents. The idea is to get the best of both worlds: use concise representations (summaries or chunks) for retrieval, but use the full documents for answer generation.

Name Index Type Uses an LLM When to Use Description [Vector store](/v0.2/docs/how_to/vectorstore_retriever/) Vector store No If you are just getting started and looking for something quick and easy. This is the simplest method and the one that is easiest to get started with. It involves creating embeddings for each piece of text. [ParentDocument](/v0.2/docs/how_to/parent_document_retriever/) Vector store + Document Store No If your pages have lots of smaller pieces of distinct information that are best indexed by themselves, but best retrieved all together. This involves indexing multiple chunks for each document. Then you find the chunks that are most similar in embedding space, but you retrieve the whole parent document and return that (rather than individual chunks). [Multi Vector](/v0.2/docs/how_to/multi_vector/) Vector store + Document Store Sometimes during indexing If you are able to extract information from documents that you think is more relevant to index than the text itself. This involves creating multiple vectors for each document. Each vector could be created in a myriad of ways - examples include summaries of the text and hypothetical questions.
[Time-Weighted Vector store](/v0.2/docs/how_to/time_weighted_vectorstore/) Vector store No If you have timestamps associated with your documents, and you want to retrieve the most recent ones This fetches documents based on a combination of semantic similarity (as in normal vector retrieval) and recency (looking at timestamps of indexed documents) tip * See our RAG from Scratch video on [indexing fundamentals](https://youtu.be/bjb_EMsTDKI?feature=shared) * See our RAG from Scratch video on [multi vector retriever](https://youtu.be/gTCU9I6QqCE?feature=shared) Fifth, consider ways to improve the quality of your similarity search itself. Embedding models compress text into fixed-length (vector) representations that capture the semantic content of the document. This compression is useful for search / retrieval, but puts a heavy burden on that single vector representation to capture the semantic nuance / detail of the document. In some cases, irrelevant or redundant content can dilute the semantic usefulness of the embedding. [ColBERT](https://docs.google.com/presentation/d/1IRhAdGjIevrrotdplHNcc4aXgIYyKamUKTWtB3m3aMU/edit?usp=sharing) is an interesting approach to address this with a higher granularity embeddings: (1) produce a contextually influenced embedding for each token in the document and query, (2) score similarity between each query token and all document tokens, (3) take the max, (4) do this for all query tokens, and (5) take the sum of the max scores (in step 3) for all query tokens to get a query-document similarity score; this token-wise scoring can yield strong results. ![](/v0.2/assets/images/colbert-0bf5bd7485724d0005a2f5bdadbdaedb.png) There are some additional tricks to improve the quality of your retrieval. Embeddings excel at capturing semantic information, but may struggle with keyword-based queries. Many [vector stores](/v0.2/docs/integrations/retrievers/pinecone_hybrid_search/) offer built-in [hybrid-search](https://docs.pinecone.io/guides/data/understanding-hybrid-search) to combine keyword and semantic similarity, which marries the benefits of both approaches. Furthermore, many vector stores have [maximal marginal relevance](https://python.langchain.com/v0.1/docs/modules/model_io/prompts/example_selectors/mmr/), which attempts to diversify the results of a search to avoid returning similar and redundant documents. Name When to use Description [ColBERT](/v0.2/docs/integrations/providers/ragatouille/#using-colbert-as-a-reranker) When higher granularity embeddings are needed. ColBERT uses contextually influenced embeddings for each token in the document and query to get a granular query-document similarity score. [Hybrid search](/v0.2/docs/integrations/retrievers/pinecone_hybrid_search/) When combining keyword-based and semantic similarity. Hybrid search combines keyword and semantic similarity, marrying the benefits of both approaches. [Maximal Marginal Relevance (MMR)](/v0.2/docs/integrations/vectorstores/pinecone/#maximal-marginal-relevance-searches) When needing to diversify search results. MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. tip See our RAG from Scratch video on [ColBERT](https://youtu.be/cN6S0Ehm7_8?feature=shared%3E). #### Post-processing[​](#post-processing "Direct link to Post-processing") Sixth, consider ways to filter or rank retrieved documents. 
This is very useful if you are [combining documents returned from multiple sources](/v0.2/docs/integrations/retrievers/cohere-reranker/#doing-reranking-with-coherererank), since it can down-rank less relevant documents and / or [compress similar documents](/v0.2/docs/how_to/contextual_compression/#more-built-in-compressors-filters).

Name Index Type Uses an LLM When to Use Description [Contextual Compression](/v0.2/docs/how_to/contextual_compression/) Any Sometimes If you are finding that your retrieved documents contain too much irrelevant information and are distracting the LLM. This puts a post-processing step on top of another retriever and extracts only the most relevant information from retrieved documents. This can be done with embeddings or an LLM. [Ensemble](/v0.2/docs/how_to/ensemble_retriever/) Any No If you have multiple retrieval methods and want to try combining them. This fetches documents from multiple retrievers and then combines them. [Re-ranking](/v0.2/docs/integrations/retrievers/cohere-reranker/) Any Yes If you want to rank retrieved documents based upon relevance, especially if you want to combine results from multiple retrieval methods. Given a query and a list of documents, Rerank indexes the documents from most to least semantically relevant to the query.

tip See our RAG from Scratch video on [RAG-Fusion](https://youtu.be/77qELPbNgxA?feature=shared), an approach to post-processing across multiple queries: Rewrite the user question from multiple perspectives, retrieve documents for each rewritten question, and combine the ranks of multiple search result lists to produce a single, unified ranking with [Reciprocal Rank Fusion (RRF)](https://towardsdatascience.com/forget-rag-the-future-is-rag-fusion-1147298d8ad1).

#### Generation[​](#generation "Direct link to Generation")

**Finally, consider ways to build self-correction into your RAG system.** RAG systems can suffer from low quality retrieval (e.g., if a user question is out of the domain for the index) and / or hallucinations in generation. A naive retrieve-generate pipeline has no ability to detect or self-correct from these kinds of errors. The concept of ["flow engineering"](https://x.com/karpathy/status/1748043513156272416) has been introduced [in the context of code generation](https://arxiv.org/abs/2401.08500): iteratively build an answer to a code question with unit tests to check and self-correct errors. Several works have applied this to RAG, such as Self-RAG and Corrective-RAG. In both cases, checks for document relevance, hallucinations, and / or answer quality are performed in the RAG answer generation flow.

We've found that graphs are a great way to reliably express logical flows and have implemented ideas from several of these papers [using LangGraph](https://github.com/langchain-ai/langgraph/tree/main/examples/rag), as shown in the figure below (red - routing, blue - fallback, green - self-correction):

* **Routing:** Adaptive RAG ([paper](https://arxiv.org/abs/2403.14403)). Route questions to different retrieval approaches, as discussed above
* **Fallback:** Corrective RAG ([paper](https://arxiv.org/pdf/2401.15884.pdf)). Fallback to web search if docs are not relevant to query
* **Self-correction:** Self-RAG ([paper](https://arxiv.org/abs/2310.11511)). Fix answers with hallucinations or that don't address the question

![](/v0.2/assets/images/langgraph_rag-f039b41ef268bf46783706e58726fd9c.png)

Name When to use Description Self-RAG When needing to fix answers with hallucinations or irrelevant content.
Self-RAG performs checks for document relevance, hallucinations, and answer quality during the RAG answer generation flow, iteratively building an answer and self-correcting errors. Corrective-RAG When needing a fallback mechanism for low relevance docs. Corrective-RAG includes a fallback (e.g., to web search) if the retrieved documents are not relevant to the query, ensuring higher quality and more relevant retrieval.

tip See several videos and cookbooks showcasing RAG with LangGraph:

* [LangGraph Corrective RAG](https://www.youtube.com/watch?v=E2shqsYwxck)
* [LangGraph combining Adaptive, Self-RAG, and Corrective RAG](https://www.youtube.com/watch?v=-ROS6gfYIts)
* [Cookbooks for RAG using LangGraph](https://github.com/langchain-ai/langgraph/tree/main/examples/rag)

See our LangGraph RAG recipes with partners:

* [Meta](https://github.com/meta-llama/llama-recipes/tree/main/recipes/use_cases/agents/langchain)
* [Mistral](https://github.com/mistralai/cookbook/tree/main/third_party/langchain)

### Text splitting[​](#text-splitting "Direct link to Text splitting")

LangChain offers many different types of `text splitters`. These all live in the `langchain-text-splitters` package. Table columns:

* **Name**: Name of the text splitter
* **Classes**: Classes that implement this text splitter
* **Splits On**: How this text splitter splits text
* **Adds Metadata**: Whether or not this text splitter adds metadata about where each chunk came from.
* **Description**: Description of the splitter, including recommendation on when to use it.

| Name | Classes | Splits On | Adds Metadata | Description |
| --- | --- | --- | --- | --- |
| Recursive | [RecursiveCharacterTextSplitter](/v0.2/docs/how_to/recursive_text_splitter/), [RecursiveJsonSplitter](/v0.2/docs/how_to/recursive_json_splitter/) | A list of user defined characters | | Recursively splits text. This splitting is trying to keep related pieces of text next to each other. This is the `recommended way` to start splitting text. |
| HTML | [HTMLHeaderTextSplitter](/v0.2/docs/how_to/HTML_header_metadata_splitter/), [HTMLSectionSplitter](/v0.2/docs/how_to/HTML_section_aware_splitter/) | HTML specific characters | ✅ | Splits text based on HTML-specific characters. Notably, this adds in relevant information about where that chunk came from (based on the HTML). |
| Markdown | [MarkdownHeaderTextSplitter](/v0.2/docs/how_to/markdown_header_metadata_splitter/) | Markdown specific characters | ✅ | Splits text based on Markdown-specific characters. Notably, this adds in relevant information about where that chunk came from (based on the Markdown). |
| Code | [many languages](/v0.2/docs/how_to/code_splitter/) | Code (Python, JS) specific characters | | Splits text based on characters specific to coding languages. 15 different languages are available to choose from. |
| Token | [many classes](/v0.2/docs/how_to/split_by_token/) | Tokens | | Splits text on tokens. There exist a few different ways to measure tokens. |
| Character | [CharacterTextSplitter](/v0.2/docs/how_to/character_text_splitter/) | A user defined character | | Splits text based on a user defined character. One of the simpler methods. |
| Semantic Chunker (Experimental) | [SemanticChunker](/v0.2/docs/how_to/semantic-chunker/) | Sentences | | First splits on sentences. Then combines ones next to each other if they are semantically similar enough. Taken from [Greg Kamradt](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/tutorials/LevelsOfTextSplitting/5_Levels_Of_Text_Splitting.ipynb). |
| Integration: AI21 Semantic | [AI21SemanticTextSplitter](/v0.2/docs/integrations/document_transformers/ai21_semantic_text_splitter/) | | ✅ | Identifies distinct topics that form coherent pieces of text and splits along those. |

### Evaluation[​](#evaluation "Direct link to Evaluation")

Evaluation is the process of assessing the performance and effectiveness of your LLM-powered applications. It involves testing the model's responses against a set of predefined criteria or benchmarks to ensure it meets the desired quality standards and fulfills the intended purpose. This process is vital for building reliable applications.

![](/v0.2/assets/images/langsmith_evaluate-7d48643f3e4c50d77234e13feb95144d.png)

[LangSmith](https://docs.smith.langchain.com/) helps with this process in a few ways:

* It makes it easier to create and curate datasets via its tracing and annotation features
* It provides an evaluation framework that helps you define metrics and run your app against your dataset
* It allows you to track results over time and automatically run your evaluators on a schedule or as part of CI/CD

To learn more, check out [this LangSmith guide](https://docs.smith.langchain.com/concepts/evaluation).

[Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/concepts.mdx) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to create and query vector stores ](/v0.2/docs/how_to/vectorstores/)[ Next 🦜️🏓 LangServe ](/v0.2/docs/langserve/)
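Circling back to the text-splitting table above, here is a minimal sketch of the recommended recursive splitter. The chunk sizes are arbitrary values chosen for the example, not recommendations.

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Illustrative sizes only; tune chunk_size and chunk_overlap for your documents.
splitter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=20)

chunks = splitter.split_text(
    "LangChain offers many different types of text splitters, and the recursive "
    "splitter tries to keep related pieces of text next to each other."
)
print(len(chunks))
print(chunks[0])
```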
null
https://python.langchain.com/v0.2/docs/how_to/query_constructing_filters/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to construct filters for query analysis

How to construct filters for query analysis
===========================================

We may want to do query analysis to extract filters to pass into retrievers. One way we ask the LLM to represent these filters is as a Pydantic model. There is then the issue of converting that Pydantic model into a filter that can be passed into a retriever. This can be done manually, but LangChain also provides some "Translators" that are able to translate from a common syntax into filters specific to each retriever. Here, we will cover how to use those translators.

from typing import Optional
from langchain.chains.query_constructor.ir import (
    Comparator,
    Comparison,
    Operation,
    Operator,
    StructuredQuery,
)
from langchain.retrievers.self_query.chroma import ChromaTranslator
from langchain.retrievers.self_query.elasticsearch import ElasticsearchTranslator
from langchain_core.pydantic_v1 import BaseModel

**API Reference:**[Comparator](https://api.python.langchain.com/en/latest/structured_query/langchain_core.structured_query.Comparator.html) | [Comparison](https://api.python.langchain.com/en/latest/structured_query/langchain_core.structured_query.Comparison.html) | [Operation](https://api.python.langchain.com/en/latest/structured_query/langchain_core.structured_query.Operation.html) | [Operator](https://api.python.langchain.com/en/latest/structured_query/langchain_core.structured_query.Operator.html) | [StructuredQuery](https://api.python.langchain.com/en/latest/structured_query/langchain_core.structured_query.StructuredQuery.html) | [ChromaTranslator](https://api.python.langchain.com/en/latest/query_constructors/langchain_community.query_constructors.chroma.ChromaTranslator.html) | [ElasticsearchTranslator](https://api.python.langchain.com/en/latest/query_constructors/langchain_community.query_constructors.elasticsearch.ElasticsearchTranslator.html)

In this example, `start_year` and `author` are both attributes to filter on.

class Search(BaseModel):
    query: str
    start_year: Optional[int]
    author: Optional[str]

search_query = Search(query="RAG", start_year=2022, author="LangChain")

def construct_comparisons(query: Search):
    comparisons = []
    if query.start_year is not None:
        comparisons.append(
            Comparison(
                comparator=Comparator.GT,
                attribute="start_year",
                value=query.start_year,
            )
        )
    if query.author is not None:
        comparisons.append(
            Comparison(
                comparator=Comparator.EQ,
                attribute="author",
                value=query.author,
            )
        )
    return comparisons

comparisons = construct_comparisons(search_query)

_filter = Operation(operator=Operator.AND, arguments=comparisons)

ElasticsearchTranslator().visit_operation(_filter)

{'bool': {'must': [{'range': {'metadata.start_year': {'gt': 2022}}}, {'term': {'metadata.author.keyword': 'LangChain'}}]}}

ChromaTranslator().visit_operation(_filter)

{'$and': [{'start_year': {'$gt': 2022}}, {'author': {'$eq': 'LangChain'}}]}

[Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/query_constructing_filters.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to add values to a chain's state ](/v0.2/docs/how_to/assign/)[ Next How to configure runtime chain internals ](/v0.2/docs/how_to/configure/)
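As a follow-up sketch (not part of the original example, and reusing `_filter` and `ChromaTranslator` from the code above), the Chroma-style filter could then be passed to a vector store search. The store below is an empty, in-memory placeholder; in practice you would have already added documents whose metadata contains `start_year` and `author`.

```python
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

# Placeholder store for illustration; normally populated with documents first.
vectorstore = Chroma(embedding_function=OpenAIEmbeddings())

chroma_filter = ChromaTranslator().visit_operation(_filter)
docs = vectorstore.similarity_search("RAG techniques", k=4, filter=chroma_filter)
```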
null
https://python.langchain.com/v0.2/docs/how_to/chat_token_usage_tracking/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to track token usage in ChatModels On this page How to track token usage in ChatModels ====================================== Prerequisites This guide assumes familiarity with the following concepts: * [Chat models](/v0.2/docs/concepts/#chat-models) Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls. This guide requires `langchain-openai >= 0.1.8`. %pip install --upgrade --quiet langchain langchain-openai Using LangSmith[​](#using-langsmith "Direct link to Using LangSmith") --------------------------------------------------------------------- You can use [LangSmith](https://www.langchain.com/langsmith) to help track token usage in your LLM application. See the [LangSmith quick start guide](https://docs.smith.langchain.com/). Using AIMessage.usage\_metadata[​](#using-aimessageusage_metadata "Direct link to Using AIMessage.usage_metadata") ------------------------------------------------------------------------------------------------------------------ A number of model providers return token usage information as part of the chat generation response. When available, this information will be included on the `AIMessage` objects produced by the corresponding model. LangChain `AIMessage` objects include a [usage\_metadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.usage_metadata) attribute. When populated, this attribute will be a [UsageMetadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.UsageMetadata.html) dictionary with standard keys (e.g., `"input_tokens"` and `"output_tokens"`). Examples: **OpenAI**: # # !pip install -qU langchain-openaifrom langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")openai_response = llm.invoke("hello")openai_response.usage_metadata **API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) {'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17} **Anthropic**: # !pip install -qU langchain-anthropicfrom langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-haiku-20240307")anthropic_response = llm.invoke("hello")anthropic_response.usage_metadata **API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) {'input_tokens': 8, 'output_tokens': 12, 'total_tokens': 20} ### Using AIMessage.response\_metadata[​](#using-aimessageresponse_metadata "Direct link to Using AIMessage.response_metadata") Metadata from the model response is also included in the AIMessage [response\_metadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.response_metadata) attribute. These data are typically not standardized. Note that different providers adopt different conventions for representing token counts: print(f'OpenAI: {openai_response.response_metadata["token_usage"]}\n')print(f'Anthropic: {anthropic_response.response_metadata["usage"]}') OpenAI: {'completion_tokens': 9, 'prompt_tokens': 8, 'total_tokens': 17}Anthropic: {'input_tokens': 8, 'output_tokens': 12} ### Streaming[​](#streaming "Direct link to Streaming") Some providers support token count metadata in a streaming context. 
#### OpenAI[​](#openai "Direct link to OpenAI") For example, OpenAI will return a message [chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html) at the end of a stream with token usage information. This behavior is supported by `langchain-openai >= 0.1.8` and can be enabled by setting `stream_options={"include_usage": True}`. note By default, the last message chunk in a stream will include a `"finish_reason"` in the message's `response_metadata` attribute. If we include token usage in streaming mode, an additional chunk containing usage metadata will be added to the end of the stream, such that `"finish_reason"` appears on the second to last message chunk. llm = ChatOpenAI(model="gpt-3.5-turbo-0125")aggregate = Nonefor chunk in llm.stream("hello", stream_options={"include_usage": True}): print(chunk) aggregate = chunk if aggregate is None else aggregate + chunk content='' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content='Hello' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content='!' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content=' How' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content=' can' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content=' I' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content=' assist' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content=' you' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content=' today' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content='?' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content='' response_metadata={'finish_reason': 'stop'} id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content='' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17} Note that the usage metadata will be included in the sum of the individual message chunks: print(aggregate.content)print(aggregate.usage_metadata) Hello! How can I assist you today?{'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17} To disable streaming token counts for OpenAI, set `"include_usage"` to False in `stream_options`, or omit it from the parameters: aggregate = Nonefor chunk in llm.stream("hello"): print(chunk) content='' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content='Hello' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content='!' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content=' How' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content=' can' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content=' I' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content=' assist' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content=' you' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content=' today' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content='?' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content='' response_metadata={'finish_reason': 'stop'} id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52' You can also enable streaming token usage by setting `model_kwargs` when instantiating the chat model. This can be useful when incorporating chat models into LangChain [chains](/v0.2/docs/concepts/#langchain-expression-language-lcel): usage metadata can be monitored when [streaming intermediate steps](/v0.2/docs/how_to/streaming/#using-stream-events) or using tracing software such as [LangSmith](https://docs.smith.langchain.com/). See the below example, where we return output structured to a desired schema, but can still observe token usage streamed from intermediate steps. 
from langchain_core.pydantic_v1 import BaseModel, Fieldclass Joke(BaseModel): """Joke to tell user.""" setup: str = Field(description="question to set up a joke") punchline: str = Field(description="answer to resolve the joke")llm = ChatOpenAI( model="gpt-3.5-turbo-0125", model_kwargs={"stream_options": {"include_usage": True}},)# Under the hood, .with_structured_output binds tools to the# chat model and appends a parser.structured_llm = llm.with_structured_output(Joke)async for event in structured_llm.astream_events("Tell me a joke", version="v2"): if event["event"] == "on_chat_model_end": print(f'Token usage: {event["data"]["output"].usage_metadata}\n') elif event["event"] == "on_chain_end": print(event["data"]["output"]) else: pass Token usage: {'input_tokens': 79, 'output_tokens': 23, 'total_tokens': 102}setup='Why was the math book sad?' punchline='Because it had too many problems.' Token usage is also visible in the corresponding [LangSmith trace](https://smith.langchain.com/public/fe6513d5-7212-4045-82e0-fefa28bc7656/r) in the payload from the chat model. Using callbacks[​](#using-callbacks "Direct link to Using callbacks") --------------------------------------------------------------------- There are also some API-specific callback context managers that allow you to track token usage across multiple calls. It is currently only implemented for the OpenAI API and Bedrock Anthropic API. ### OpenAI[​](#openai-1 "Direct link to OpenAI") Let's first look at an extremely simple example of tracking token usage for a single Chat model call. # !pip install -qU langchain-community wikipediafrom langchain_community.callbacks.manager import get_openai_callbackllm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)with get_openai_callback() as cb: result = llm.invoke("Tell me a joke") print(cb) **API Reference:**[get\_openai\_callback](https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.manager.get_openai_callback.html) Tokens Used: 27 Prompt Tokens: 11 Completion Tokens: 16Successful Requests: 1Total Cost (USD): $2.95e-05 Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence. with get_openai_callback() as cb: result = llm.invoke("Tell me a joke") result2 = llm.invoke("Tell me a joke") print(cb.total_tokens) 55 note Cost information is currently not available in streaming mode. This is because model names are currently not propagated through chunks in streaming mode, and the model name is used to look up the correct pricing. Token counts however are available: with get_openai_callback() as cb: for chunk in llm.stream("Tell me a joke", stream_options={"include_usage": True}): pass print(cb.total_tokens) 28 If a chain or agent with multiple steps in it is used, it will track all those steps. 
from langchain.agents import AgentExecutor, create_tool_calling_agent, load_toolsfrom langchain_core.prompts import ChatPromptTemplateprompt = ChatPromptTemplate.from_messages( [ ("system", "You're a helpful assistant"), ("human", "{input}"), ("placeholder", "{agent_scratchpad}"), ])tools = load_tools(["wikipedia"])agent = create_tool_calling_agent(llm, tools, prompt)agent_executor = AgentExecutor( agent=agent, tools=tools, verbose=True, stream_runnable=False) **API Reference:**[AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html) | [create\_tool\_calling\_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html) | [load\_tools](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.load_tools.load_tools.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) note We have to set `stream_runnable=False` for cost information, as described above. By default the AgentExecutor will stream the underlying agent so that you can get the most granular results when streaming events via AgentExecutor.stream\_events. with get_openai_callback() as cb: response = agent_executor.invoke( { "input": "What's a hummingbird's scientific name and what's the fastest bird species?" } ) print(f"Total Tokens: {cb.total_tokens}") print(f"Prompt Tokens: {cb.prompt_tokens}") print(f"Completion Tokens: {cb.completion_tokens}") print(f"Total Cost (USD): ${cb.total_cost}") > Entering new AgentExecutor chain...Invoking: `wikipedia` with `{'query': 'hummingbird scientific name'}`Page: HummingbirdSummary: Hummingbirds are birds native to the Americas and comprise the biological family Trochilidae. With approximately 366 species and 113 genera, they occur from Alaska to Tierra del Fuego, but most species are found in Central and South America. As of 2024, 21 hummingbird species are listed as endangered or critically endangered, with numerous species declining in population.Hummingbirds have varied specialized characteristics to enable rapid, maneuverable flight: exceptional metabolic capacity, adaptations to high altitude, sensitive visual and communication abilities, and long-distance migration in some species. Among all birds, male hummingbirds have the widest diversity of plumage color, particularly in blues, greens, and purples. Hummingbirds are the smallest mature birds, measuring 7.5–13 cm (3–5 in) in length. The smallest is the 5 cm (2.0 in) bee hummingbird, which weighs less than 2.0 g (0.07 oz), and the largest is the 23 cm (9 in) giant hummingbird, weighing 18–24 grams (0.63–0.85 oz). Noted for long beaks, hummingbirds are specialized for feeding on flower nectar, but all species also consume small insects.They are known as hummingbirds because of the humming sound created by their beating wings, which flap at high frequencies audible to other birds and humans. They hover at rapid wing-flapping rates, which vary from around 12 beats per second in the largest species to 80 per second in small hummingbirds.Hummingbirds have the highest mass-specific metabolic rate of any homeothermic animal. To conserve energy when food is scarce and at night when not foraging, they can enter torpor, a state similar to hibernation, and slow their metabolic rate to 1⁄15 of its normal rate. 
While most hummingbirds do not migrate, the rufous hummingbird has one of the longest migrations among birds, traveling twice per year between Alaska and Mexico, a distance of about 3,900 miles (6,300 km).Hummingbirds split from their sister group, the swifts and treeswifts, around 42 million years ago. The oldest known fossil hummingbird is Eurotrochilus, from the Rupelian Stage of Early Oligocene Europe.Page: Rufous hummingbirdSummary: The rufous hummingbird (Selasphorus rufus) is a small hummingbird, about 8 cm (3.1 in) long with a long, straight and slender bill. These birds are known for their extraordinary flight skills, flying 2,000 mi (3,200 km) during their migratory transits. It is one of nine species in the genus Selasphorus.Page: Anna's hummingbirdSummary: Anna's hummingbird (Calypte anna) is a North American species of hummingbird. It was named after Anna MassΓ©na, Duchess of Rivoli.It is native to western coastal regions of North America. In the early 20th century, Anna's hummingbirds bred only in northern Baja California and Southern California. The transplanting of exotic ornamental plants in residential areas throughout the Pacific coast and inland deserts provided expanded nectar and nesting sites, allowing the species to expand its breeding range. Year-round residence of Anna's hummingbirds in the Pacific Northwest is an example of ecological release dependent on acclimation to colder winter temperatures, introduced plants, and human provision of nectar feeders during winter.These birds feed on nectar from flowers using a long extendable tongue. They also consume small insects and other arthropods caught in flight or gleaned from vegetation.Invoking: `wikipedia` with `{'query': 'fastest bird species'}`Page: List of birds by flight speedSummary: This is a list of the fastest flying birds in the world. A bird's velocity is necessarily variable; a hunting bird will reach much greater speeds while diving to catch prey than when flying horizontally. The bird that can achieve the greatest airspeed is the peregrine falcon (Falco peregrinus), able to exceed 320 km/h (200 mph) in its dives. A close relative of the common swift, the white-throated needletail (Hirundapus caudacutus), is commonly reported as the fastest bird in level flight with a reported top speed of 169 km/h (105 mph). This record remains unconfirmed as the measurement methods have never been published or verified. The record for the fastest confirmed level flight by a bird is 111.5 km/h (69.3 mph) held by the common swift.Page: Fastest animalsSummary: This is a list of the fastest animals in the world, by types of animal.Page: FalconSummary: Falcons () are birds of prey in the genus Falco, which includes about 40 species. Falcons are widely distributed on all continents of the world except Antarctica, though closely related raptors did occur there in the Eocene.Adult falcons have thin, tapered wings, which enable them to fly at high speed and change direction rapidly. Fledgling falcons, in their first year of flying, have longer flight feathers, which make their configuration more like that of a general-purpose bird such as a broad wing. This makes flying easier while learning the exceptional skills required to be effective hunters as adults.The falcons are the largest genus in the Falconinae subfamily of Falconidae, which itself also includes another subfamily comprising caracaras and a few other species. 
All these birds kill with their beaks, using a tomial "tooth" on the side of their beaksβ€”unlike the hawks, eagles, and other birds of prey in the Accipitridae, which use their feet.The largest falcon is the gyrfalcon at up to 65 cm in length. The smallest falcon species is the pygmy falcon, which measures just 20 cm. As with hawks and owls, falcons exhibit sexual dimorphism, with the females typically larger than the males, thus allowing a wider range of prey species.Some small falcons with long, narrow wings are called "hobbies" and some which hover while hunting are called "kestrels".As is the case with many birds of prey, falcons have exceptional powers of vision; the visual acuity of one species has been measured at 2.6 times that of a normal human. Peregrine falcons have been recorded diving at speeds of 320 km/h (200 mph), making them the fastest-moving creatures on Earth; the fastest recorded dive attained a vertical speed of 390 km/h (240 mph).The scientific name for a hummingbird is Trochilidae. The fastest bird species is the peregrine falcon (Falco peregrinus), which can exceed speeds of 320 km/h (200 mph) in its dives.> Finished chain.Total Tokens: 1787Prompt Tokens: 1687Completion Tokens: 100Total Cost (USD): $0.0009935 ### Bedrock Anthropic[​](#bedrock-anthropic "Direct link to Bedrock Anthropic") The `get_bedrock_anthropic_callback` works very similarly: # !pip install langchain-awsfrom langchain_aws import ChatBedrockfrom langchain_community.callbacks.manager import get_bedrock_anthropic_callbackllm = ChatBedrock(model_id="anthropic.claude-v2")with get_bedrock_anthropic_callback() as cb: result = llm.invoke("Tell me a joke") result2 = llm.invoke("Tell me a joke") print(cb) **API Reference:**[get\_bedrock\_anthropic\_callback](https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.manager.get_bedrock_anthropic_callback.html) Tokens Used: 96 Prompt Tokens: 26 Completion Tokens: 70Successful Requests: 2Total Cost (USD): $0.001888 Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now seen a few examples of how to track token usage for supported providers. Next, check out the other how-to guides chat models in this section, like [how to get a model to return structured output](/v0.2/docs/how_to/structured_output/) or [how to add caching to your chat models](/v0.2/docs/how_to/chat_model_caching/). [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/chat_token_usage_tracking.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to init any model in one line ](/v0.2/docs/how_to/chat_models_universal_init/)[ Next How to add tools to chatbots ](/v0.2/docs/how_to/chatbots_tools/) * [Using LangSmith](#using-langsmith) * [Using AIMessage.usage\_metadata](#using-aimessageusage_metadata) * [Using AIMessage.response\_metadata](#using-aimessageresponse_metadata) * [Streaming](#streaming) * [Using callbacks](#using-callbacks) * [OpenAI](#openai-1) * [Bedrock Anthropic](#bedrock-anthropic) * [Next steps](#next-steps)
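One more note on the callback-based approach described above: if your provider is not covered by the built-in context managers, a rough do-it-yourself counter can be built on the standard callback interface. This is a sketch rather than a drop-in utility; it assumes the provider populates `usage_metadata` on the returned messages.

```python
from langchain_core.callbacks import BaseCallbackHandler


class TokenTally(BaseCallbackHandler):
    """Accumulates token usage reported by chat model runs."""

    def __init__(self):
        self.input_tokens = 0
        self.output_tokens = 0

    def on_llm_end(self, response, **kwargs):
        # response.generations is a list of lists; chat generations carry
        # an AIMessage whose usage_metadata (when present) has token counts.
        for generations in response.generations:
            for gen in generations:
                message = getattr(gen, "message", None)
                usage = getattr(message, "usage_metadata", None) if message else None
                if usage:
                    self.input_tokens += usage.get("input_tokens", 0)
                    self.output_tokens += usage.get("output_tokens", 0)


tally = TokenTally()
# llm.invoke("Tell me a joke", config={"callbacks": [tally]})
# print(tally.input_tokens, tally.output_tokens)
```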
null
https://python.langchain.com/v0.2/docs/versions/release_policy/
* [](/v0.2/) * Versions * Release Policy On this page LangChain releases ================== The LangChain ecosystem is composed of different component packages (e.g., `langchain-core`, `langchain`, `langchain-community`, `langgraph`, `langserve`, partner packages, etc.) Versioning[​](#versioning "Direct link to Versioning") ------------------------------------------------------ ### `langchain` and `langchain-core`[​](#langchain-and-langchain-core "Direct link to langchain-and-langchain-core") `langchain` and `langchain-core` follow [semantic versioning](https://semver.org/) in the format of 0.**Y**.**Z**. The packages are under rapid development, and so we are currently versioning the packages with a major version of 0. Minor version increases will occur for: * Breaking changes for any public interfaces not marked as `beta`. Patch version increases will occur for: * Bug fixes * New features * Any changes to private interfaces * Any changes to `beta` features When upgrading between minor versions, users should review the list of breaking changes and deprecations. From time to time, we will version packages as **release candidates**. These are versions that are intended to be released as stable versions, but we want to get feedback from the community before doing so. Release candidates will be versioned as 0.**Y**.**Z**rc**N**. For example, 0.2.0rc1. If no issues are found, the release candidate will be released as a stable version with the same version number. If issues are found, we will release a new release candidate with an incremented `N` value (e.g., 0.2.0rc2). ### Other packages in the langchain ecosystem[​](#other-packages-in-the-langchain-ecosystem "Direct link to Other packages in the langchain ecosystem") Other packages in the ecosystem (including user packages) can follow a different versioning scheme, but are generally expected to pin to specific minor versions of `langchain` and `langchain-core`. Release cadence[​](#release-cadence "Direct link to Release cadence") --------------------------------------------------------------------- We expect to space out **minor** releases (e.g., from 0.2.0 to 0.3.0) of `langchain` and `langchain-core` by at least 2-3 months, as such releases may contain breaking changes. Patch versions are released frequently as they contain bug fixes and new features. API stability[​](#api-stability "Direct link to API stability") --------------------------------------------------------------- The development of LLM applications is a rapidly evolving field, and we are constantly learning from our users and the community. As such, we expect that the APIs in `langchain` and `langchain-core` will continue to evolve to better serve the needs of our users. Even though both `langchain` and `langchain-core` are currently in a pre-1.0 state, we are committed to maintaining API stability in these packages. * Breaking changes to the public API will result in a minor version bump (the second digit) * Any bug fixes or new features will result in a patch version bump (the third digit) We will generally try to avoid making unnecessary changes, and will provide a deprecation policy for features that are being removed. ### Stability of other packages[​](#stability-of-other-packages "Direct link to Stability of other packages") The stability of other packages in the LangChain ecosystem may vary: * `langchain-community` is a community maintained package that contains 3rd party integrations.
While we do our best to review and test changes in `langchain-community`, `langchain-community` is expected to experience more breaking changes than `langchain` and `langchain-core` as it contains many community contributions. * Partner packages may follow different stability and versioning policies, and users should refer to the documentation of those packages for more information; however, in general these packages are expected to be stable. ### What is a "API stability"?[​](#what-is-a-api-stability "Direct link to What is a \"API stability\"?") API stability means: * All the public APIs (everything in this documentation) will not be moved or renamed without providing backwards-compatible aliases. * If new features are added to these APIs – which is quite possible – they will not break or change the meaning of existing methods. In other words, "stable" does not (necessarily) mean "complete." * If, for some reason, an API declared stable must be removed or replaced, it will be declared deprecated but will remain in the API for at least two minor releases. Warnings will be issued when the deprecated method is called. ### **APIs marked as internal**[​](#apis-marked-as-internal "Direct link to apis-marked-as-internal") Certain APIs are explicitly marked as β€œinternal” in a couple of ways: * Some documentation refers to internals and mentions them as such. If the documentation says that something is internal, it may change. * Functions, methods, and other objects prefixed by a leading underscore (**`_`**). This is the standard Python convention of indicating that something is private; if any method starts with a single **`_`**, it’s an internal API. * **Exception:** Certain methods are prefixed with `_` , but do not contain an implementation. These methods are _meant_ to be overridden by sub-classes that provide the implementation. Such methods are generally part of the **Public API** of LangChain. Deprecation policy[​](#deprecation-policy "Direct link to Deprecation policy") ------------------------------------------------------------------------------ We will generally avoid deprecating features until a better alternative is available. When a feature is deprecated, it will continue to work in the current and next minor version of `langchain` and `langchain-core`. After that, the feature will be removed. Since we're expecting to space out minor releases by at least 2-3 months, this means that a feature can be removed within 2-6 months of being deprecated. In some situations, we may allow the feature to remain in the code base for longer periods of time, if it's not causing issues in the packages, to reduce the burden on users. [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/versions/release_policy.mdx) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). 
[ Previous Overview ](/v0.2/docs/versions/overview/)[ Next Packages ](/v0.2/docs/versions/packages/) * [Versioning](#versioning) * [`langchain` and `langchain-core`](#langchain-and-langchain-core) * [Other packages in the langchain ecosystem](#other-packages-in-the-langchain-ecosystem) * [Release cadence](#release-cadence) * [API stability](#api-stability) * [Stability of other packages](#stability-of-other-packages) * [What is a "API stability"?](#what-is-a-api-stability) * [**APIs marked as internal**](#apis-marked-as-internal) * [Deprecation policy](#deprecation-policy)
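Because other packages are expected to pin to specific minor versions of `langchain` and `langchain-core`, a constraint along the lines of `pip install "langchain>=0.2,<0.3" "langchain-core>=0.2,<0.3"` is one way to do that in practice; the exact bounds are illustrative rather than an official recommendation, capping below the next minor release since minor releases may contain breaking changes.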
null
https://python.langchain.com/v0.2/docs/versions/v0_2/
* [](/v0.2/) * Versions * v0.2 On this page LangChain v0.2 ============== LangChain v0.2 was released in May 2024. This release includes a number of [breaking changes and deprecations](/v0.2/docs/versions/v0_2/deprecations/). This document contains a guide on upgrading to 0.2.x. Reference * [Breaking Changes & Deprecations](/v0.2/docs/versions/v0_2/deprecations/) * [Migrating to Astream Events v2](/v0.2/docs/versions/v0_2/migrating_astream_events/) Migration ========= This documentation will help you upgrade your code to LangChain `0.2.x`. To prepare for migration, we first recommend you take the following steps: 1. Install the 0.2.x versions of langchain-core, langchain, and upgrade to recent versions of other packages that you may be using (e.g., langgraph, langchain-community, langchain-openai, etc.). 2. Verify that your code runs properly with the new packages (e.g., unit tests pass). 3. Install a recent version of `langchain-cli`, and use the tool to replace old imports used by your code with the new imports. (See instructions below.) 4. Manually resolve any remaining deprecation warnings. 5. Re-run unit tests. 6. If you are using `astream_events`, please review how to [migrate to astream events v2](/v0.2/docs/versions/v0_2/migrating_astream_events/). Upgrade to new imports[​](#upgrade-to-new-imports "Direct link to Upgrade to new imports") ------------------------------------------------------------------------------------------ We created a tool to help migrate your code. This tool is still in **beta** and may not cover all cases, but we hope that it will help you migrate your code more quickly. The migration script has the following limitations: 1. It’s limited to helping users move from old imports to new imports. It does not help address other deprecations. 2. It can’t handle imports that involve `as`. 3. New imports are always placed in global scope, even if the old import that was replaced was located inside some local scope (e.g., a function body). 4. It will likely miss some deprecated imports. Here is an example of the import changes that the migration script can help apply automatically:

| From Package | To Package | Deprecated Import | New Import |
| --- | --- | --- | --- |
| langchain | langchain-community | `from langchain.vectorstores import InMemoryVectorStore` | `from langchain_community.vectorstores import InMemoryVectorStore` |
| langchain-community | langchain-openai | `from langchain_community.chat_models import ChatOpenAI` | `from langchain_openai import ChatOpenAI` |
| langchain-community | langchain-core | `from langchain_community.document_loaders import Blob` | `from langchain_core.document_loaders import Blob` |
| langchain | langchain-core | `from langchain.schema.document import Document` | `from langchain_core.documents import Document` |
| langchain | langchain-text-splitters | `from langchain.text_splitter import RecursiveCharacterTextSplitter` | `from langchain_text_splitters import RecursiveCharacterTextSplitter` |

Installation[​](#installation "Direct link to Installation") ------------------------------------------------------------ pip install langchain-cli
langchain-cli --version # <-- Make sure the version is at least 0.0.22 Usage[​](#usage "Direct link to Usage") --------------------------------------- Given that the migration script is not perfect, you should make sure you have a backup of your code first (e.g., using version control like `git`). You will need to run the migration script **twice** as it only applies one import replacement per run.
For example, say your code still uses `from langchain.chat_models import ChatOpenAI`: After the first run, you’ll get: `from langchain_community.chat_models import ChatOpenAI` After the second run, you’ll get: `from langchain_openai import ChatOpenAI`

# Run a first time
# Will replace from langchain.chat_models import ChatOpenAI
langchain-cli migrate --diff [path to code] # Preview
langchain-cli migrate [path to code] # Apply

# Run a second time to apply more import replacements
langchain-cli migrate --diff [path to code] # Preview
langchain-cli migrate [path to code] # Apply

### Other options[​](#other-options "Direct link to Other options")

# See help menu
langchain-cli migrate --help
# Preview Changes without applying
langchain-cli migrate --diff [path to code]
# Run on code including ipython notebooks
# Apply all import updates except for updates from langchain to langchain-core
langchain-cli migrate --disable langchain_to_core --include-ipynb [path to code]

[Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/versions/v0_2/index.mdx) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous Packages ](/v0.2/docs/versions/packages/)[ Next LangChain v0.2 ](/v0.2/docs/versions/v0_2/) * [Upgrade to new imports](#upgrade-to-new-imports) * [Installation](#installation) * [Usage](#usage) * [Other options](#other-options)
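Imports that use `as` aliases (limitation 2 above) are skipped by the script and have to be updated by hand. A hypothetical before/after, with a made-up alias name:

```python
# Before (0.1-style import with an alias; the migration script will not rewrite this):
# from langchain.chat_models import ChatOpenAI as OpenAIChat

# After (updated manually to the partner package; requires `pip install langchain-openai`):
from langchain_openai import ChatOpenAI as OpenAIChat
```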
null
https://python.langchain.com/v0.2/docs/how_to/configure/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to configure runtime chain internals On this page How to configure runtime chain internals ======================================== Prerequisites This guide assumes familiarity with the following concepts: * [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language) * [Chaining runnables](/v0.2/docs/how_to/sequence/) * [Binding runtime arguments](/v0.2/docs/how_to/binding/) Sometimes you may want to experiment with, or even expose to the end user, multiple different ways of doing things within your chains. This can include tweaking parameters such as temperature or even swapping out one model for another. In order to make this experience as easy as possible, we have defined two methods. * A `configurable_fields` method. This lets you configure particular fields of a runnable. * This is related to the [`.bind`](/v0.2/docs/how_to/binding/) method on runnables, but allows you to specify parameters for a given step in a chain at runtime rather than specifying them beforehand. * A `configurable_alternatives` method. With this method, you can list out alternatives for any particular runnable that can be set during runtime, and swap them for those specified alternatives. Configurable Fields[​](#configurable-fields "Direct link to Configurable Fields") --------------------------------------------------------------------------------- Let's walk through an example that configures chat model fields like temperature at runtime: %pip install --upgrade --quiet langchain langchain-openaiimport osfrom getpass import getpassos.environ["OPENAI_API_KEY"] = getpass() WARNING: You are using pip version 22.0.4; however, version 24.0 is available.You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.Note: you may need to restart the kernel to use updated packages. from langchain_core.prompts import PromptTemplatefrom langchain_core.runnables import ConfigurableFieldfrom langchain_openai import ChatOpenAImodel = ChatOpenAI(temperature=0).configurable_fields( temperature=ConfigurableField( id="llm_temperature", name="LLM Temperature", description="The temperature of the LLM", ))model.invoke("pick a random number") **API Reference:**[PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) | [ConfigurableField](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.utils.ConfigurableField.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) AIMessage(content='17', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 11, 'total_tokens': 12}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-ba26a0da-0a69-4533-ab7f-21178a73d303-0') Above, we defined `temperature` as a [`ConfigurableField`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.utils.ConfigurableField.html#langchain_core.runnables.utils.ConfigurableField) that we can set at runtime. 
To do so, we use the [`with_config`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_config) method like this: model.with_config(configurable={"llm_temperature": 0.9}).invoke("pick a random number") AIMessage(content='12', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 11, 'total_tokens': 12}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-ba8422ad-be77-4cb1-ac45-ad0aae74e3d9-0') Note that the passed `llm_temperature` entry in the dict has the same key as the `id` of the `ConfigurableField`. We can also do this to affect just one step that's part of a chain: prompt = PromptTemplate.from_template("Pick a random number above {x}")chain = prompt | modelchain.invoke({"x": 0}) AIMessage(content='27', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 14, 'total_tokens': 15}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-ecd4cadd-1b72-4f92-b9a0-15e08091f537-0') chain.with_config(configurable={"llm_temperature": 0.9}).invoke({"x": 0}) AIMessage(content='35', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 14, 'total_tokens': 15}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-a916602b-3460-46d3-a4a8-7c926ec747c0-0') ### With HubRunnables[​](#with-hubrunnables "Direct link to With HubRunnables") This is useful to allow for switching of prompts from langchain.runnables.hub import HubRunnableprompt = HubRunnable("rlm/rag-prompt").configurable_fields( owner_repo_commit=ConfigurableField( id="hub_commit", name="Hub Commit", description="The Hub commit to pull from", ))prompt.invoke({"question": "foo", "context": "bar"}) **API Reference:**[HubRunnable](https://api.python.langchain.com/en/latest/runnables/langchain.runnables.hub.HubRunnable.html) ChatPromptValue(messages=[HumanMessage(content="You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\nQuestion: foo \nContext: bar \nAnswer:")]) prompt.with_config(configurable={"hub_commit": "rlm/rag-prompt-llama"}).invoke( {"question": "foo", "context": "bar"}) ChatPromptValue(messages=[HumanMessage(content="[INST]<<SYS>> You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.<</SYS>> \nQuestion: foo \nContext: bar \nAnswer: [/INST]")]) Configurable Alternatives[​](#configurable-alternatives "Direct link to Configurable Alternatives") --------------------------------------------------------------------------------------------------- The `configurable_alternatives()` method allows us to swap out steps in a chain with an alternative. 
Below, we swap out one chat model for another: %pip install --upgrade --quiet langchain-anthropicimport osfrom getpass import getpassos.environ["ANTHROPIC_API_KEY"] = getpass() WARNING: You are using pip version 22.0.4; however, version 24.0 is available.You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.Note: you may need to restart the kernel to use updated packages. from langchain_anthropic import ChatAnthropicfrom langchain_core.prompts import PromptTemplatefrom langchain_core.runnables import ConfigurableFieldfrom langchain_openai import ChatOpenAIllm = ChatAnthropic( model="claude-3-haiku-20240307", temperature=0).configurable_alternatives( # This gives this field an id # When configuring the end runnable, we can then use this id to configure this field ConfigurableField(id="llm"), # This sets a default_key. # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used default_key="anthropic", # This adds a new option, with name `openai` that is equal to `ChatOpenAI()` openai=ChatOpenAI(), # This adds a new option, with name `gpt4` that is equal to `ChatOpenAI(model="gpt-4")` gpt4=ChatOpenAI(model="gpt-4"), # You can add more configuration options here)prompt = PromptTemplate.from_template("Tell me a joke about {topic}")chain = prompt | llm# By default it will call Anthropicchain.invoke({"topic": "bears"}) **API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) | [ConfigurableField](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.utils.ConfigurableField.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) AIMessage(content="Here's a bear joke for you:\n\nWhy don't bears wear socks? \nBecause they have bear feet!\n\nHow's that? I tried to come up with a simple, silly pun-based joke about bears. Puns and wordplay are a common way to create humorous bear jokes. Let me know if you'd like to hear another one!", response_metadata={'id': 'msg_018edUHh5fUbWdiimhrC3dZD', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 13, 'output_tokens': 80}}, id='run-775bc58c-28d7-4e6b-a268-48fa6661f02f-0') # We can use `.with_config(configurable={"llm": "openai"})` to specify an llm to usechain.with_config(configurable={"llm": "openai"}).invoke({"topic": "bears"}) AIMessage(content="Why don't bears like fast food?\n\nBecause they can't catch it!", response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 13, 'total_tokens': 28}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-7bdaa992-19c9-4f0d-9a0c-1f326bc992d4-0') # If we use the `default_key` then it uses the defaultchain.with_config(configurable={"llm": "anthropic"}).invoke({"topic": "bears"}) AIMessage(content="Here's a bear joke for you:\n\nWhy don't bears wear socks? \nBecause they have bear feet!\n\nHow's that? I tried to come up with a simple, silly pun-based joke about bears. Puns and wordplay are a common way to create humorous bear jokes. 
Let me know if you'd like to hear another one!", response_metadata={'id': 'msg_01BZvbmnEPGBtcxRWETCHkct', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 13, 'output_tokens': 80}}, id='run-59b6ee44-a1cd-41b8-a026-28ee67cdd718-0') ### With Prompts[​](#with-prompts "Direct link to With Prompts") We can do a similar thing, but alternate between prompts llm = ChatAnthropic(model="claude-3-haiku-20240307", temperature=0)prompt = PromptTemplate.from_template( "Tell me a joke about {topic}").configurable_alternatives( # This gives this field an id # When configuring the end runnable, we can then use this id to configure this field ConfigurableField(id="prompt"), # This sets a default_key. # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used default_key="joke", # This adds a new option, with name `poem` poem=PromptTemplate.from_template("Write a short poem about {topic}"), # You can add more configuration options here)chain = prompt | llm# By default it will write a jokechain.invoke({"topic": "bears"}) AIMessage(content="Here's a bear joke for you:\n\nWhy don't bears wear socks? \nBecause they have bear feet!", response_metadata={'id': 'msg_01DtM1cssjNFZYgeS3gMZ49H', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 13, 'output_tokens': 28}}, id='run-8199af7d-ea31-443d-b064-483693f2e0a1-0') # We can configure it write a poemchain.with_config(configurable={"prompt": "poem"}).invoke({"topic": "bears"}) AIMessage(content="Here is a short poem about bears:\n\nMajestic bears, strong and true,\nRoaming the forests, wild and free.\nPowerful paws, fur soft and brown,\nCommanding respect, nature's crown.\n\nForaging for berries, fishing streams,\nProtecting their young, fierce and keen.\nMighty bears, a sight to behold,\nGuardians of the wilderness, untold.\n\nIn the wild they reign supreme,\nEmbodying nature's grand theme.\nBears, a symbol of strength and grace,\nCaptivating all who see their face.", response_metadata={'id': 'msg_01Wck3qPxrjURtutvtodaJFn', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 13, 'output_tokens': 134}}, id='run-69414a1e-51d7-4bec-a307-b34b7d61025e-0') ### With Prompts and LLMs[​](#with-prompts-and-llms "Direct link to With Prompts and LLMs") We can also have multiple things configurable! Here's an example doing that with both prompts and LLMs. llm = ChatAnthropic( model="claude-3-haiku-20240307", temperature=0).configurable_alternatives( # This gives this field an id # When configuring the end runnable, we can then use this id to configure this field ConfigurableField(id="llm"), # This sets a default_key. # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used default_key="anthropic", # This adds a new option, with name `openai` that is equal to `ChatOpenAI()` openai=ChatOpenAI(), # This adds a new option, with name `gpt4` that is equal to `ChatOpenAI(model="gpt-4")` gpt4=ChatOpenAI(model="gpt-4"), # You can add more configuration options here)prompt = PromptTemplate.from_template( "Tell me a joke about {topic}").configurable_alternatives( # This gives this field an id # When configuring the end runnable, we can then use this id to configure this field ConfigurableField(id="prompt"), # This sets a default_key. 
# If we specify this key, the default LLM (ChatAnthropic initialized above) will be used default_key="joke", # This adds a new option, with name `poem` poem=PromptTemplate.from_template("Write a short poem about {topic}"), # You can add more configuration options here)chain = prompt | llm# We can configure it write a poem with OpenAIchain.with_config(configurable={"prompt": "poem", "llm": "openai"}).invoke( {"topic": "bears"}) AIMessage(content="In the forest deep and wide,\nBears roam with grace and pride.\nWith fur as dark as night,\nThey rule the land with all their might.\n\nIn winter's chill, they hibernate,\nIn spring they emerge, hungry and great.\nWith claws sharp and eyes so keen,\nThey hunt for food, fierce and lean.\n\nBut beneath their tough exterior,\nLies a gentle heart, warm and superior.\nThey love their cubs with all their might,\nProtecting them through day and night.\n\nSo let us admire these majestic creatures,\nIn awe of their strength and features.\nFor in the wild, they reign supreme,\nThe mighty bears, a timeless dream.", response_metadata={'token_usage': {'completion_tokens': 133, 'prompt_tokens': 13, 'total_tokens': 146}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-5eec0b96-d580-49fd-ac4e-e32a0803b49b-0') # We can always just configure only one if we wantchain.with_config(configurable={"llm": "openai"}).invoke({"topic": "bears"}) AIMessage(content="Why don't bears wear shoes?\n\nBecause they have bear feet!", response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 13, 'total_tokens': 26}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-c1b14c9c-4988-49b8-9363-15bfd479973a-0') ### Saving configurations[​](#saving-configurations "Direct link to Saving configurations") We can also easily save configured chains as their own objects openai_joke = chain.with_config(configurable={"llm": "openai"})openai_joke.invoke({"topic": "bears"}) AIMessage(content="Why did the bear break up with his girlfriend? \nBecause he couldn't bear the relationship anymore!", response_metadata={'token_usage': {'completion_tokens': 20, 'prompt_tokens': 13, 'total_tokens': 33}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-391ebd55-9137-458b-9a11-97acaff6a892-0') Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You now know how to configure a chain's internal steps at runtime. To learn more, see the other how-to guides on runnables in this section, including: * Using [.bind()](/v0.2/docs/how_to/binding/) as a simpler way to set a runnable's runtime parameters [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/configure.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). 
[ Previous How to construct filters for query analysis ](/v0.2/docs/how_to/query_constructing_filters/)[ Next How to deal with high cardinality categoricals when doing query analysis ](/v0.2/docs/how_to/query_high_cardinality/) * [Configurable Fields](#configurable-fields) * [With HubRunnables](#with-hubrunnables) * [Configurable Alternatives](#configurable-alternatives) * [With Prompts](#with-prompts) * [With Prompts and LLMs](#with-prompts-and-llms) * [Saving configurations](#saving-configurations) * [Next steps](#next-steps)
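The same `configurable` values can also be supplied per invocation through the `config` argument instead of being bound up front with `.with_config()`. A minimal sketch, assuming the `chain` with the `prompt` and `llm` alternatives defined above:

```python
# Configure a single call: equivalent to chain.with_config(...).invoke(...).
chain.invoke(
    {"topic": "bears"},
    config={"configurable": {"prompt": "poem", "llm": "openai"}},
)
```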
null
https://python.langchain.com/v0.2/docs/how_to/chatbots_tools/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to add tools to chatbots On this page How to add tools to chatbots ============================ Prerequisites This guide assumes familiarity with the following concepts: * [Chatbots](/v0.2/docs/concepts/#messages) * [Agents](/v0.2/docs/tutorials/agents/) * [Chat history](/v0.2/docs/concepts/#chat-history) This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools. Setup[​](#setup "Direct link to Setup") --------------------------------------- For this guide, we'll be using a [tool calling agent](/v0.2/docs/how_to/agent_executor/) with a single tool for searching the web. The default will be powered by [Tavily](/v0.2/docs/integrations/tools/tavily_search/), but you can switch it out for any similar tool. The rest of this section will assume you're using Tavily. You'll need to [sign up for an account](https://tavily.com/) on the Tavily website, and install the following packages: %pip install --upgrade --quiet langchain-community langchain-openai tavily-python# Set env var OPENAI_API_KEY or load from a .env file:import dotenvdotenv.load_dotenv() You will also need your OpenAI key set as `OPENAI_API_KEY` and your Tavily API key set as `TAVILY_API_KEY`. Creating an agent[​](#creating-an-agent "Direct link to Creating an agent") --------------------------------------------------------------------------- Our end goal is to create an agent that can respond conversationally to user questions while looking up information as needed. First, let's initialize Tavily and an OpenAI chat model capable of tool calling: from langchain_community.tools.tavily_search import TavilySearchResultsfrom langchain_openai import ChatOpenAItools = [TavilySearchResults(max_results=1)]# Choose the LLM that will drive the agent# Only certain models support thischat = ChatOpenAI(model="gpt-3.5-turbo-1106", temperature=0) **API Reference:**[TavilySearchResults](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.tavily_search.tool.TavilySearchResults.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) To make our agent conversational, we must also choose a prompt with a placeholder for our chat history. Here's an example: from langchain_core.prompts import ChatPromptTemplate# Adapted from https://smith.langchain.com/hub/jacob/tool-calling-agentprompt = ChatPromptTemplate.from_messages( [ ( "system", "You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!", ), ("placeholder", "{messages}"), ("placeholder", "{agent_scratchpad}"), ]) **API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) Great! 
Now let's assemble our agent: from langchain.agents import AgentExecutor, create_tool_calling_agentagent = create_tool_calling_agent(chat, tools, prompt)agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True) **API Reference:**[AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html) | [create\_tool\_calling\_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html) Running the agent[​](#running-the-agent "Direct link to Running the agent") --------------------------------------------------------------------------- Now that we've set up our agent, let's try interacting with it! It can handle both trivial queries that require no lookup: from langchain_core.messages import HumanMessageagent_executor.invoke({"messages": [HumanMessage(content="I'm Nemo!")]}) **API Reference:**[HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) > Entering new AgentExecutor chain...Hello Nemo! It's great to meet you. How can I assist you today?> Finished chain. {'messages': [HumanMessage(content="I'm Nemo!")], 'output': "Hello Nemo! It's great to meet you. How can I assist you today?"} Or, it can use of the passed search tool to get up to date information if needed: agent_executor.invoke( { "messages": [ HumanMessage( content="What is the current conservation status of the Great Barrier Reef?" ) ], }) > Entering new AgentExecutor chain...Invoking: `tavily_search_results_json` with `{'query': 'current conservation status of the Great Barrier Reef'}`[{'url': 'https://www.abc.net.au/news/2022-08-04/great-barrier-reef-report-says-coral-recovering-after-bleaching/101296186', 'content': 'Great Barrier Reef hit with widespread and severe bleaching event\n\'Devastating\': Over 90pc of reefs on Great Barrier Reef suffered bleaching over summer, report reveals\nTop Stories\nJailed Russian opposition leader Alexei Navalny is dead, says prison service\nTaylor Swift puts an Aussie twist on a classic as she packs the MCG for the biggest show of her career β€” as it happened\nMelbourne comes alive with Swifties, as even those without tickets turn up to soak in the atmosphere\nAustralian Border Force investigates after arrival of more than 20 men by boat north of Broome\nOpenAI launches video model that can instantly create short clips from text prompts\nAntoinette Lattouf loses bid to force ABC to produce emails calling for her dismissal\nCategory one cyclone makes landfall in Gulf of Carpentaria off NT-Queensland border\nWhy the RBA may be forced to cut before the Fed\nBrisbane records \'wettest day since 2022\', as woman dies in floodwaters near Mount Isa\n$45m Sydney beachside home once owned by late radio star is demolished less than a year after sale\nAnnabel Sutherland\'s historic double century puts Australia within reach of Test victory over South Africa\nAlmighty defensive effort delivers Indigenous victory in NRL All Stars clash\nLisa Wilkinson feared she would have to sell home to pay legal costs of Bruce Lehrmann\'s defamation case, court documents reveal\nSupermarkets as you know them are disappearing from our cities\nNRL issues Broncos\' Reynolds, Carrigan with breach notices after public scrap\nPopular Now\nJailed Russian opposition leader Alexei Navalny is dead, says prison service\nTaylor Swift puts an Aussie twist on a classic as she packs the MCG for the biggest show of her career β€” as it happened\n$45m 
Sydney beachside home once owned by late radio star is demolished less than a year after sale\nAustralian Border Force investigates after arrival of more than 20 men by boat north of Broome\nDealer sentenced for injecting children as young as 12 with methylamphetamine\nMelbourne comes alive with Swifties, as even those without tickets turn up to soak in the atmosphere\nTop Stories\nJailed Russian opposition leader Alexei Navalny is dead, says prison service\nTaylor Swift puts an Aussie twist on a classic as she packs the MCG for the biggest show of her career β€” as it happened\nMelbourne comes alive with Swifties, as even those without tickets turn up to soak in the atmosphere\nAustralian Border Force investigates after arrival of more than 20 men by boat north of Broome\nOpenAI launches video model that can instantly create short clips from text prompts\nJust In\nJailed Russian opposition leader Alexei Navalny is dead, says prison service\nMelbourne comes alive with Swifties, as even those without tickets turn up to soak in the atmosphere\nTraveller alert after one-year-old in Adelaide reported with measles\nAntoinette Lattouf loses bid to force ABC to produce emails calling for her dismissal\nFooter\nWe acknowledge Aboriginal and Torres Strait Islander peoples as the First Australians and Traditional Custodians of the lands where we live, learn, and work.\n Increased coral cover could come at a cost\nThe rapid growth in coral cover appears to have come at the expense of the diversity of coral on the reef, with most of the increases accounted for by fast-growing branching coral called Acropora.\n Documents obtained by the ABC under Freedom of Information laws revealed the Morrison government had forced AIMS to rush the report\'s release and orchestrated a "leak" of the material to select media outlets ahead of the reef being considered for inclusion on the World Heritage In Danger list.\n The reef\'s status and potential inclusion on the In Danger list were due to be discussed at the 45th session of the World Heritage Committee in Russia in June this year, but the meeting was indefinitely postponed due to the war in Ukraine.\n More from ABC\nEditorial Policies\nGreat Barrier Reef coral cover at record levels after mass-bleaching events, report shows\nGreat Barrier Reef coral cover at record levels after mass-bleaching events, report shows\nRecord coral cover is being seen across much of the Great Barrier Reef as it recovers from past storms and mass-bleaching events.'}]The Great Barrier Reef is currently showing signs of recovery, with record coral cover being seen across much of the reef. This recovery comes after past storms and mass-bleaching events. However, the rapid growth in coral cover appears to have come at the expense of the diversity of coral on the reef, with most of the increases accounted for by fast-growing branching coral called Acropora. There were discussions about the reef's potential inclusion on the World Heritage In Danger list, but the meeting to consider this was indefinitely postponed due to the war in Ukraine.You can read more about it in this article: [Great Barrier Reef hit with widespread and severe bleaching event](https://www.abc.net.au/news/2022-08-04/great-barrier-reef-report-says-coral-recovering-after-bleaching/101296186)> Finished chain. 
{'messages': [HumanMessage(content='What is the current conservation status of the Great Barrier Reef?')], 'output': "The Great Barrier Reef is currently showing signs of recovery, with record coral cover being seen across much of the reef. This recovery comes after past storms and mass-bleaching events. However, the rapid growth in coral cover appears to have come at the expense of the diversity of coral on the reef, with most of the increases accounted for by fast-growing branching coral called Acropora. There were discussions about the reef's potential inclusion on the World Heritage In Danger list, but the meeting to consider this was indefinitely postponed due to the war in Ukraine.\n\nYou can read more about it in this article: [Great Barrier Reef hit with widespread and severe bleaching event](https://www.abc.net.au/news/2022-08-04/great-barrier-reef-report-says-coral-recovering-after-bleaching/101296186)"} Conversational responses[​](#conversational-responses "Direct link to Conversational responses") ------------------------------------------------------------------------------------------------ Because our prompt contains a placeholder for chat history messages, our agent can also take previous interactions into account and respond conversationally like a standard chatbot: from langchain_core.messages import AIMessage, HumanMessageagent_executor.invoke( { "messages": [ HumanMessage(content="I'm Nemo!"), AIMessage(content="Hello Nemo! How can I assist you today?"), HumanMessage(content="What is my name?"), ], }) **API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) > Entering new AgentExecutor chain...Your name is Nemo!> Finished chain. {'messages': [HumanMessage(content="I'm Nemo!"), AIMessage(content='Hello Nemo! How can I assist you today?'), HumanMessage(content='What is my name?')], 'output': 'Your name is Nemo!'} If preferred, you can also wrap the agent executor in a [`RunnableWithMessageHistory`](/v0.2/docs/how_to/message_history/) class to internally manage history messages. Let's redeclare it this way: agent = create_tool_calling_agent(chat, tools, prompt)agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True) Then, because our agent executor has multiple outputs, we also have to set the `output_messages_key` property when initializing the wrapper: from langchain_community.chat_message_histories import ChatMessageHistoryfrom langchain_core.runnables.history import RunnableWithMessageHistorydemo_ephemeral_chat_history_for_chain = ChatMessageHistory()conversational_agent_executor = RunnableWithMessageHistory( agent_executor, lambda session_id: demo_ephemeral_chat_history_for_chain, input_messages_key="messages", output_messages_key="output",)conversational_agent_executor.invoke( {"messages": [HumanMessage("I'm Nemo!")]}, {"configurable": {"session_id": "unused"}},) **API Reference:**[ChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.ChatMessageHistory.html) | [RunnableWithMessageHistory](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html) > Entering new AgentExecutor chain...Hi Nemo! It's great to meet you. How can I assist you today?> Finished chain. {'messages': [HumanMessage(content="I'm Nemo!")], 'output': "Hi Nemo! It's great to meet you. 
How can I assist you today?"} And then if we rerun our wrapped agent executor: conversational_agent_executor.invoke( {"messages": [HumanMessage("What is my name?")]}, {"configurable": {"session_id": "unused"}},) > Entering new AgentExecutor chain...Your name is Nemo! How can I assist you today, Nemo?> Finished chain. {'messages': [HumanMessage(content="I'm Nemo!"), AIMessage(content="Hi Nemo! It's great to meet you. How can I assist you today?"), HumanMessage(content='What is my name?')], 'output': 'Your name is Nemo! How can I assist you today, Nemo?'} This [LangSmith trace](https://smith.langchain.com/public/1a9f712a-7918-4661-b3ff-d979bcc2af42/r) shows what's going on under the hood. Further reading[​](#further-reading "Direct link to Further reading") --------------------------------------------------------------------- Other types agents can also support conversational responses too - for more, check out the [agents section](/v0.2/docs/tutorials/agents/). For more on tool usage, you can also check out [this use case section](/v0.2/docs/how_to/#tools). [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/chatbots_tools.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to track token usage in ChatModels ](/v0.2/docs/how_to/chat_token_usage_tracking/)[ Next How to split code ](/v0.2/docs/how_to/code_splitter/) * [Setup](#setup) * [Creating an agent](#creating-an-agent) * [Running the agent](#running-the-agent) * [Conversational responses](#conversational-responses) * [Further reading](#further-reading)
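The agent is not limited to web search; additional tools can be registered the same way. A minimal sketch using the `@tool` decorator, assuming the `chat` model and `prompt` defined earlier on this page; the word-length tool is a made-up example:

```python
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool


@tool
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)


# `chat` and `prompt` come from the setup earlier on this page.
tools = [TavilySearchResults(max_results=1), get_word_length]
agent = create_tool_calling_agent(chat, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke(
    {"messages": [HumanMessage(content="How many letters are in the word 'hummingbird'?")]}
)
```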
null
https://python.langchain.com/v0.2/docs/how_to/query_high_cardinality/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How deal with high cardinality categoricals when doing query analysis On this page How deal with high cardinality categoricals when doing query analysis ===================================================================== You may want to do query analysis to create a filter on a categorical column. One of the difficulties here is that you usually need to specify the EXACT categorical value. The issue is you need to make sure the LLM generates that categorical value exactly. This can be done relatively easy with prompting when there are only a few values that are valid. When there are a high number of valid values then it becomes more difficult, as those values may not fit in the LLM context, or (if they do) there may be too many for the LLM to properly attend to. In this notebook we take a look at how to approach this. Setup[​](#setup "Direct link to Setup") --------------------------------------- #### Install dependencies[​](#install-dependencies "Direct link to Install dependencies") # %pip install -qU langchain langchain-community langchain-openai faker langchain-chroma #### Set environment variables[​](#set-environment-variables "Direct link to Set environment variables") We'll use OpenAI in this example: import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.# os.environ["LANGCHAIN_TRACING_V2"] = "true"# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass() #### Set up data[​](#set-up-data "Direct link to Set up data") We will generate a bunch of fake names from faker import Fakerfake = Faker()names = [fake.name() for _ in range(10000)] Let's look at some of the names names[0] 'Hayley Gonzalez' names[567] 'Jesse Knight' Query Analysis[​](#query-analysis "Direct link to Query Analysis") ------------------------------------------------------------------ We can now set up a baseline query analysis from langchain_core.pydantic_v1 import BaseModel, Field class Search(BaseModel): query: str author: str from langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import ChatOpenAIsystem = """Generate a relevant search query for a library system"""prompt = ChatPromptTemplate.from_messages( [ ("system", system), ("human", "{question}"), ])llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)structured_llm = llm.with_structured_output(Search)query_analyzer = {"question": RunnablePassthrough()} | prompt | structured_llm **API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) /Users/harrisonchase/workplace/langchain/libs/core/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change. 
warn_beta( We can see that if we spell the name exactly correctly, it knows how to handle it query_analyzer.invoke("what are books about aliens by Jesse Knight") Search(query='books about aliens', author='Jesse Knight') The issue is that the values you want to filter on may NOT be spelled exactly correctly query_analyzer.invoke("what are books about aliens by jess knight") Search(query='books about aliens', author='Jess Knight') ### Add in all values[​](#add-in-all-values "Direct link to Add in all values") One way around this is to add ALL possible values to the prompt. That will generally guide the query in the right direction system = """Generate a relevant search query for a library system.`author` attribute MUST be one of:{authors}Do NOT hallucinate author name!"""base_prompt = ChatPromptTemplate.from_messages( [ ("system", system), ("human", "{question}"), ])prompt = base_prompt.partial(authors=", ".join(names)) query_analyzer_all = {"question": RunnablePassthrough()} | prompt | structured_llm However... if the list of categoricals is long enough, it may error! try: res = query_analyzer_all.invoke("what are books about aliens by jess knight")except Exception as e: print(e) Error code: 400 - {'error': {'message': "This model's maximum context length is 16385 tokens. However, your messages resulted in 33885 tokens (33855 in the messages, 30 in the functions). Please reduce the length of the messages or functions.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} We can try to use a longer context window... but with so much information in there, it is not garunteed to pick it up reliably llm_long = ChatOpenAI(model="gpt-4-turbo-preview", temperature=0)structured_llm_long = llm_long.with_structured_output(Search)query_analyzer_all = {"question": RunnablePassthrough()} | prompt | structured_llm_long query_analyzer_all.invoke("what are books about aliens by jess knight") Search(query='aliens', author='Kevin Knight') ### Find and all relevant values[​](#find-and-all-relevant-values "Direct link to Find and all relevant values") Instead, what we can do is create an index over the relevant values and then query that for the N most relevant values, from langchain_chroma import Chromafrom langchain_openai import OpenAIEmbeddingsembeddings = OpenAIEmbeddings(model="text-embedding-3-small")vectorstore = Chroma.from_texts(names, embeddings, collection_name="author_names") **API Reference:**[OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) def select_names(question): _docs = vectorstore.similarity_search(question, k=10) _names = [d.page_content for d in _docs] return ", ".join(_names) create_prompt = { "question": RunnablePassthrough(), "authors": select_names,} | base_prompt query_analyzer_select = create_prompt | structured_llm create_prompt.invoke("what are books by jess knight") ChatPromptValue(messages=[SystemMessage(content='Generate a relevant search query for a library system.\n\n`author` attribute MUST be one of:\n\nJesse Knight, Kelly Knight, Scott Knight, Richard Knight, Andrew Knight, Katherine Knight, Erica Knight, Ashley Knight, Becky Knight, Kevin Knight\n\nDo NOT hallucinate author name!'), HumanMessage(content='what are books by jess knight')]) query_analyzer_select.invoke("what are books about aliens by jess knight") Search(query='books about aliens', author='Jesse Knight') ### Replace after selection[​](#replace-after-selection "Direct link to Replace after 
selection") Another method is to let the LLM fill in whatever value, but then convert that value to a valid value. This can actually be done with the Pydantic class itself! from langchain_core.pydantic_v1 import validatorclass Search(BaseModel): query: str author: str @validator("author") def double(cls, v: str) -> str: return vectorstore.similarity_search(v, k=1)[0].page_content system = """Generate a relevant search query for a library system"""prompt = ChatPromptTemplate.from_messages( [ ("system", system), ("human", "{question}"), ])corrective_structure_llm = llm.with_structured_output(Search)corrective_query_analyzer = ( {"question": RunnablePassthrough()} | prompt | corrective_structure_llm) corrective_query_analyzer.invoke("what are books about aliens by jes knight") Search(query='books about aliens', author='Jesse Knight') # TODO: show trigram similarity [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/query_high_cardinality.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to configure runtime chain internals ](/v0.2/docs/how_to/configure/)[ Next Custom Document Loader ](/v0.2/docs/how_to/document_loader_custom/) * [Setup](#setup) * [Query Analysis](#query-analysis) * [Add in all values](#add-in-all-values) * [Find and all relevant values](#find-and-all-relevant-values) * [Replace after selection](#replace-after-selection)
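As a rough stand-in for the trigram-similarity idea flagged in the TODO, the same replace-after-selection pattern can be driven by fuzzy string matching from the standard library instead of a vector store. A minimal sketch, assuming the `names` list generated above (this is not necessarily the approach the TODO has in mind):

```python
import difflib


def closest_author(name: str) -> str:
    # Return the known author name most similar to whatever the LLM produced.
    matches = difflib.get_close_matches(name.title(), names, n=1, cutoff=0.0)
    return matches[0] if matches else name


closest_author("jess knight")  # likely resolves to a close match such as 'Jesse Knight'
```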
null
https://python.langchain.com/v0.2/docs/versions/v0_2/migrating_astream_events/
* [](/v0.2/) * Versions * [v0.2](/v0.2/docs/versions/v0_2/) * astream\_events v2 On this page Migrating to Astream Events v2 ============================== danger This migration guide is a work in progress and is not complete. Please wait to migrate astream\_events. We've added a `v2` of the astream\_events API with the release of `0.2.0`. You can see this [PR](https://github.com/langchain-ai/langchain/pull/21638) for more details. The `v2` version is a re-write of the `v1` version, and should be more efficient, with more consistent output for the events. The `v1` version of the API will be deprecated in favor of the `v2` version and will be removed in `0.4.0`. Below is a list of changes between the `v1` and `v2` versions of the API. ### output for `on_chat_model_end`[​](#output-for-on_chat_model_end "Direct link to output-for-on_chat_model_end") In `v1`, the outputs associated with `on_chat_model_end` changed depending on whether the chat model was run as a root level runnable or as part of a chain. As a root level runnable the output was: "data": {"output": AIMessageChunk(content="hello world!", id='some id')} As part of a chain the output was: "data": { "output": { "generations": [ [ { "generation_info": None, "message": AIMessageChunk( content="hello world!", id=AnyStr() ), "text": "hello world!", "type": "ChatGenerationChunk", } ] ], "llm_output": None, } }, As of `v2`, the output will always be the simpler representation: "data": {"output": AIMessageChunk(content="hello world!", id='some id')} note Non chat models (i.e., regular LLMs) are will be consistently associated with the more verbose format for now. ### output for `on_retriever_end`[​](#output-for-on_retriever_end "Direct link to output-for-on_retriever_end") `on_retriever_end` output will always return a list of `Documents`. Before: { "data": { "output": [ Document(...), Document(...), ... ] }} ### Removed `on_retriever_stream`[​](#removed-on_retriever_stream "Direct link to removed-on_retriever_stream") The `on_retriever_stream` event was an artifact of the implementation and has been removed. Full information associated with the event is already available in the `on_retriever_end` event. Please use `on_retriever_end` instead. ### Removed `on_tool_stream`[​](#removed-on_tool_stream "Direct link to removed-on_tool_stream") The `on_tool_stream` event was an artifact of the implementation and has been removed. Full information associated with the event is already available in the `on_tool_end` event. Please use `on_tool_end` instead. ### Propagating Names[​](#propagating-names "Direct link to Propagating Names") Names of runnables have been updated to be more consistent. model = GenericFakeChatModel(messages=infinite_cycle).configurable_fields( messages=ConfigurableField( id="messages", name="Messages", description="Messages return by the LLM", )) In `v1`, the event name was `RunnableConfigurableFields`. In `v2`, the event name is `GenericFakeChatModel`. If you're filtering by event names, check if you need to update your filters. ### RunnableRetry[​](#runnableretry "Direct link to RunnableRetry") Usage of [RunnableRetry](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.retry.RunnableRetry.html) within an LCEL chain being streamed generated an incorrect `on_chain_end` event in `v1` corresponding to the failed runnable invocation that was being retried. This event has been removed in `v2`. No action is required for this change. 
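A minimal sketch of consuming the `v2` API after migrating, assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set; the model name is only an example:

```python
import asyncio

from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-3.5-turbo-0125")


async def main() -> None:
    async for event in model.astream_events("hello", version="v2"):
        # Print only the token chunks streamed by the chat model.
        if event["event"] == "on_chat_model_stream":
            print(event["data"]["chunk"].content, end="|", flush=True)


asyncio.run(main())
```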
[Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/versions/v0_2/migrating_astream_events.mdx) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous LangChain v0.2 ](/v0.2/docs/versions/v0_2/)[ Next Changes ](/v0.2/docs/versions/v0_2/deprecations/) * [output for `on_chat_model_end`](#output-for-on_chat_model_end) * [output for `on_retriever_end`](#output-for-on_retriever_end) * [Removed `on_retriever_stream`](#removed-on_retriever_stream) * [Removed `on_tool_stream`](#removed-on_tool_stream) * [Propagating Names](#propagating-names) * [RunnableRetry](#runnableretry)
https://python.langchain.com/v0.2/docs/how_to/code_splitter/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to split code On this page How to split code ================= [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html) includes pre-built lists of separators that are useful for splitting text in a specific programming language. Supported languages are stored in the `langchain_text_splitters.Language` enum. They include: "cpp","go","java","kotlin","js","ts","php","proto","python","rst","ruby","rust","scala","swift","markdown","latex","html","sol","csharp","cobol","c","lua","perl","haskell" To view the list of separators for a given language, pass a value from this enum into RecursiveCharacterTextSplitter.get_separators_for_language` To instantiate a splitter that is tailored for a specific language, pass a value from the enum into RecursiveCharacterTextSplitter.from_language Below we demonstrate examples for the various languages. %pip install -qU langchain-text-splitters from langchain_text_splitters import ( Language, RecursiveCharacterTextSplitter,) **API Reference:**[Language](https://api.python.langchain.com/en/latest/base/langchain_text_splitters.base.Language.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html) To view the full list of supported languages: [e.value for e in Language] ['cpp', 'go', 'java', 'kotlin', 'js', 'ts', 'php', 'proto', 'python', 'rst', 'ruby', 'rust', 'scala', 'swift', 'markdown', 'latex', 'html', 'sol', 'csharp', 'cobol', 'c', 'lua', 'perl', 'haskell'] You can also see the separators used for a given language: RecursiveCharacterTextSplitter.get_separators_for_language(Language.PYTHON) ['\nclass ', '\ndef ', '\n\tdef ', '\n\n', '\n', ' ', ''] Python[​](#python "Direct link to Python") ------------------------------------------ Here's an example using the PythonTextSplitter: PYTHON_CODE = """def hello_world(): print("Hello, World!")# Call the functionhello_world()"""python_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.PYTHON, chunk_size=50, chunk_overlap=0)python_docs = python_splitter.create_documents([PYTHON_CODE])python_docs [Document(page_content='def hello_world():\n print("Hello, World!")'), Document(page_content='# Call the function\nhello_world()')] JS[​](#js "Direct link to JS") ------------------------------ Here's an example using the JS text splitter: JS_CODE = """function helloWorld() { console.log("Hello, World!");}// Call the functionhelloWorld();"""js_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.JS, chunk_size=60, chunk_overlap=0)js_docs = js_splitter.create_documents([JS_CODE])js_docs [Document(page_content='function helloWorld() {\n console.log("Hello, World!");\n}'), Document(page_content='// Call the function\nhelloWorld();')] TS[​](#ts "Direct link to TS") ------------------------------ Here's an example using the TS text splitter: TS_CODE = """function helloWorld(): void { console.log("Hello, World!");}// Call the functionhelloWorld();"""ts_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.TS, chunk_size=60, chunk_overlap=0)ts_docs = ts_splitter.create_documents([TS_CODE])ts_docs [Document(page_content='function helloWorld(): void {'), Document(page_content='console.log("Hello, World!");\n}'), Document(page_content='// Call the function\nhelloWorld();')] Markdown[​](#markdown "Direct link 
to Markdown") ------------------------------------------------ Here's an example using the Markdown text splitter: markdown_text = """# πŸ¦œοΈπŸ”— LangChain⚑ Building applications with LLMs through composability ⚑## Quick Install```bash# Hopefully this code block isn't splitpip install langchain As an open-source project in a rapidly developing field, we are extremely open to contributions. """ ```pythonmd_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0)md_docs = md_splitter.create_documents([markdown_text])md_docs [Document(page_content='# πŸ¦œοΈπŸ”— LangChain'), Document(page_content='⚑ Building applications with LLMs through composability ⚑'), Document(page_content='## Quick Install\n\n```bash'), Document(page_content="# Hopefully this code block isn't split"), Document(page_content='pip install langchain'), Document(page_content='```'), Document(page_content='As an open-source project in a rapidly developing field, we'), Document(page_content='are extremely open to contributions.')] Latex[​](#latex "Direct link to Latex") --------------------------------------- Here's an example on Latex text: latex_text = """\documentclass{article}\begin{document}\maketitle\section{Introduction}Large language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.\subsection{History of LLMs}The earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.\subsection{Applications of LLMs}LLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.\end{document}""" latex_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0)latex_docs = latex_splitter.create_documents([latex_text])latex_docs [Document(page_content='\\documentclass{article}\n\n\x08egin{document}\n\n\\maketitle'), Document(page_content='\\section{Introduction}'), Document(page_content='Large language models (LLMs) are a type of machine learning'), Document(page_content='model that can be trained on vast amounts of text data to'), Document(page_content='generate human-like language. In recent years, LLMs have'), Document(page_content='made significant advances in a variety of natural language'), Document(page_content='processing tasks, including language translation, text'), Document(page_content='generation, and sentiment analysis.'), Document(page_content='\\subsection{History of LLMs}'), Document(page_content='The earliest LLMs were developed in the 1980s and 1990s,'), Document(page_content='but they were limited by the amount of data that could be'), Document(page_content='processed and the computational power available at the'), Document(page_content='time. 
In the past decade, however, advances in hardware and'), Document(page_content='software have made it possible to train LLMs on massive'), Document(page_content='datasets, leading to significant improvements in'), Document(page_content='performance.'), Document(page_content='\\subsection{Applications of LLMs}'), Document(page_content='LLMs have many applications in industry, including'), Document(page_content='chatbots, content creation, and virtual assistants. They'), Document(page_content='can also be used in academia for research in linguistics,'), Document(page_content='psychology, and computational linguistics.'), Document(page_content='\\end{document}')] HTML[​](#html "Direct link to HTML") ------------------------------------ Here's an example using an HTML text splitter: html_text = """<!DOCTYPE html><html> <head> <title>πŸ¦œοΈπŸ”— LangChain</title> <style> body { font-family: Arial, sans-serif; } h1 { color: darkblue; } </style> </head> <body> <div> <h1>πŸ¦œοΈπŸ”— LangChain</h1> <p>⚑ Building applications with LLMs through composability ⚑</p> </div> <div> As an open-source project in a rapidly developing field, we are extremely open to contributions. </div> </body></html>""" html_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.HTML, chunk_size=60, chunk_overlap=0)html_docs = html_splitter.create_documents([html_text])html_docs [Document(page_content='<!DOCTYPE html>\n<html>'), Document(page_content='<head>\n <title>πŸ¦œοΈπŸ”— LangChain</title>'), Document(page_content='<style>\n body {\n font-family: Aria'), Document(page_content='l, sans-serif;\n }\n h1 {'), Document(page_content='color: darkblue;\n }\n </style>\n </head'), Document(page_content='>'), Document(page_content='<body>'), Document(page_content='<div>\n <h1>πŸ¦œοΈπŸ”— LangChain</h1>'), Document(page_content='<p>⚑ Building applications with LLMs through composability ⚑'), Document(page_content='</p>\n </div>'), Document(page_content='<div>\n As an open-source project in a rapidly dev'), Document(page_content='eloping field, we are extremely open to contributions.'), Document(page_content='</div>\n </body>\n</html>')] Solidity[​](#solidity "Direct link to Solidity") ------------------------------------------------ Here's an example using the Solidity text splitter: SOL_CODE = """pragma solidity ^0.8.20;contract HelloWorld { function add(uint a, uint b) pure public returns(uint) { return a + b; }}"""sol_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.SOL, chunk_size=128, chunk_overlap=0)sol_docs = sol_splitter.create_documents([SOL_CODE])sol_docs [Document(page_content='pragma solidity ^0.8.20;'), Document(page_content='contract HelloWorld {\n function add(uint a, uint b) pure public returns(uint) {\n return a + b;\n }\n}')] C#[​](#c "Direct link to C#") ----------------------------- Here's an example using the C# text splitter: C_CODE = """using System;class Program{ static void Main() { int age = 30; // Change the age value as needed // Categorize the age without any console output if (age < 18) { // Age is under 18 } else if (age >= 18 && age < 65) { // Age is an adult } else { // Age is a senior citizen } }}"""c_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.CSHARP, chunk_size=128, chunk_overlap=0)c_docs = c_splitter.create_documents([C_CODE])c_docs [Document(page_content='using System;'), Document(page_content='class Program\n{\n static void Main()\n {\n int age = 30; // Change the age value as needed'), Document(page_content='// 
Categorize the age without any console output\n if (age < 18)\n {\n // Age is under 18'), Document(page_content='}\n else if (age >= 18 && age < 65)\n {\n // Age is an adult\n }\n else\n {'), Document(page_content='// Age is a senior citizen\n }\n }\n}')] Haskell[​](#haskell "Direct link to Haskell") --------------------------------------------- Here's an example using the Haskell text splitter: HASKELL_CODE = """main :: IO ()main = do putStrLn "Hello, World!"-- Some sample functionsadd :: Int -> Int -> Intadd x y = x + y"""haskell_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.HASKELL, chunk_size=50, chunk_overlap=0)haskell_docs = haskell_splitter.create_documents([HASKELL_CODE])haskell_docs [Document(page_content='main :: IO ()'), Document(page_content='main = do\n putStrLn "Hello, World!"\n-- Some'), Document(page_content='sample functions\nadd :: Int -> Int -> Int\nadd x y'), Document(page_content='= x + y')] PHP[​](#php "Direct link to PHP") --------------------------------- Here's an example using the PHP text splitter: PHP_CODE = """<?phpnamespace foo;class Hello { public function __construct() { }}function hello() { echo "Hello World!";}interface Human { public function breath();}trait Foo { }enum Color{ case Red; case Blue;}"""php_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.PHP, chunk_size=50, chunk_overlap=0)php_docs = php_splitter.create_documents([PHP_CODE])php_docs [Document(page_content='<?php\nnamespace foo;'), Document(page_content='class Hello {'), Document(page_content='public function __construct() { }\n}'), Document(page_content='function hello() {\n echo "Hello World!";\n}'), Document(page_content='interface Human {\n public function breath();\n}'), Document(page_content='trait Foo { }\nenum Color\n{\n case Red;'), Document(page_content='case Blue;\n}')]
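The same pattern applies to the other values in the `Language` enum. As a rough sketch (the Go snippet, chunk size, and resulting chunk boundaries are illustrative assumptions), you can also get plain string chunks with `split_text` instead of `Document` objects:

```python
from langchain_text_splitters import Language, RecursiveCharacterTextSplitter

# Illustrative Go source; any language from the Language enum works the same way.
GO_CODE = """
package main

import "fmt"

func helloWorld() {
    fmt.Println("Hello, World!")
}

func main() {
    helloWorld()
}
"""

go_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.GO, chunk_size=60, chunk_overlap=0
)

# create_documents returns Document objects, as in the examples above.
go_docs = go_splitter.create_documents([GO_CODE])

# split_text returns plain strings instead of Documents.
go_chunks = go_splitter.split_text(GO_CODE)

print(go_docs)
print(go_chunks)
```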
https://python.langchain.com/v0.2/docs/versions/v0_2/deprecations/
* [](/v0.2/) * Versions * [v0.2](/v0.2/docs/versions/v0_2/) * Changes On this page Deprecations and Breaking Changes ================================= This page lists deprecations and removals in the `langchain` and `langchain-core` packages. New features and improvements are not listed here. See the [overview](/v0.2/docs/versions/overview/) for a summary of what's new in this release. Breaking changes[​](#breaking-changes "Direct link to Breaking changes") ------------------------------------------------------------------------ As of release 0.2.0, `langchain` is required to be integration-agnostic. This means that code in `langchain` should not by default instantiate any specific chat models, LLMs, embedding models, vector stores, etc.; instead, the user will be required to specify those explicitly. The following functions and classes require an explicit LLM to be passed as an argument: * `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit` * `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit` * `langchain.chains.openai_functions.get_openapi_chain` * `langchain.chains.router.MultiRetrievalQAChain.from_retrievers` * `langchain.indexes.VectorStoreIndexWrapper.query` * `langchain.indexes.VectorStoreIndexWrapper.query_with_sources` * `langchain.indexes.VectorStoreIndexWrapper.aquery_with_sources` * `langchain.chains.flare.FlareChain` The following classes now require passing an explicit Embedding model as an argument: * `langchain.indexes.VectorstoreIndexCreator` The following code has been removed: * `langchain.natbot.NatBotChain.from_default` removed in favor of the `from_llm` class method. Behavior was changed for the following code: ### @tool decorator[​](#tool-decorator "Direct link to @tool decorator") The `@tool` decorator now assigns the function doc-string as the tool description. Previously, the `@tool` decorator used to prepend the function signature. Before 0.2.0: @tooldef my_tool(x: str) -> str: """Some description.""" return "something"print(my_tool.description) Would result in: `my_tool: (x: str) -> str - Some description.` As of 0.2.0: It will result in: `Some description.` Code that moved to another package[​](#code-that-moved-to-another-package "Direct link to Code that moved to another package") ------------------------------------------------------------------------------------------------------------------------------ Code that was moved from `langchain` into another package (e.g., `langchain-community`) can still be imported from `langchain`: the import will keep working, but will raise a deprecation warning. The warning will provide a replacement import statement. python -c "from langchain.document_loaders.markdown import UnstructuredMarkdownLoader" LangChainDeprecationWarning: Importing UnstructuredMarkdownLoader from langchain.document_loaders is deprecated. Please replace deprecated imports:>> from langchain.document_loaders import UnstructuredMarkdownLoaderwith new imports of:>> from langchain_community.document_loaders import UnstructuredMarkdownLoader We will continue supporting the imports in `langchain` until release 0.4 as long as the relevant package where the code lives is installed (e.g., as long as `langchain_community` is installed). However, we advise users not to rely on these imports and instead to migrate to the new imports. To help with this process, we're releasing a migration script via the LangChain CLI. See further instructions in the migration guide.
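Concretely, updating a moved import is usually a one-line change. A small sketch based on the warning above (the file path passed to the loader is an illustrative assumption, and `UnstructuredMarkdownLoader` additionally requires the `unstructured` package to be installed):

```python
# Deprecated: still works until 0.4 (as long as langchain-community is installed),
# but emits a LangChainDeprecationWarning.
# from langchain.document_loaders import UnstructuredMarkdownLoader

# Preferred: import from the package the code moved to.
from langchain_community.document_loaders import UnstructuredMarkdownLoader

loader = UnstructuredMarkdownLoader("./example.md")  # illustrative path
docs = loader.load()
```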
Code targeted for removal[​](#code-targeted-for-removal "Direct link to Code targeted for removal") --------------------------------------------------------------------------------------------------- Code that has better alternatives available and will eventually be removed, so there’s only a single way to do things. (e.g., `predict_messages` method in ChatModels has been deprecated in favor of `invoke`). ### astream events V1[​](#astream-events-v1 "Direct link to astream events V1") If you are using `astream_events`, please review how to [migrate to astream events v2](/v0.2/docs/versions/v0_2/migrating_astream_events/). ### langchain\_core[​](#langchain_core "Direct link to langchain_core") #### try\_load\_from\_hub[​](#try_load_from_hub "Direct link to try_load_from_hub") In module: `utils.loading` Deprecated: 0.1.30 Removal: 0.3.0 Alternative: Using the hwchase17/langchain-hub repo for prompts is deprecated. Please use [https://smith.langchain.com/hub](https://smith.langchain.com/hub) instead. #### BaseLanguageModel.predict[​](#baselanguagemodelpredict "Direct link to BaseLanguageModel.predict") In module: `language_models.base` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseLanguageModel.predict\_messages[​](#baselanguagemodelpredict_messages "Direct link to BaseLanguageModel.predict_messages") In module: `language_models.base` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseLanguageModel.apredict[​](#baselanguagemodelapredict "Direct link to BaseLanguageModel.apredict") In module: `language_models.base` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: ainvoke #### BaseLanguageModel.apredict\_messages[​](#baselanguagemodelapredict_messages "Direct link to BaseLanguageModel.apredict_messages") In module: `language_models.base` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: ainvoke #### RunTypeEnum[​](#runtypeenum "Direct link to RunTypeEnum") In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: Use string instead. 
#### TracerSessionV1Base[​](#tracersessionv1base "Direct link to TracerSessionV1Base") In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### TracerSessionV1Create[​](#tracersessionv1create "Direct link to TracerSessionV1Create") In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### TracerSessionV1[​](#tracersessionv1 "Direct link to TracerSessionV1") In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### TracerSessionBase[​](#tracersessionbase "Direct link to TracerSessionBase") In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### TracerSession[​](#tracersession "Direct link to TracerSession") In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### BaseRun[​](#baserun "Direct link to BaseRun") In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: Run #### LLMRun[​](#llmrun "Direct link to LLMRun") In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: Run #### ChainRun[​](#chainrun "Direct link to ChainRun") In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: Run #### ToolRun[​](#toolrun "Direct link to ToolRun") In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: Run #### BaseChatModel.**call**[​](#basechatmodelcall "Direct link to basechatmodelcall") In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseChatModel.call\_as\_llm[​](#basechatmodelcall_as_llm "Direct link to BaseChatModel.call_as_llm") In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseChatModel.predict[​](#basechatmodelpredict "Direct link to BaseChatModel.predict") In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseChatModel.predict\_messages[​](#basechatmodelpredict_messages "Direct link to BaseChatModel.predict_messages") In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseChatModel.apredict[​](#basechatmodelapredict "Direct link to BaseChatModel.apredict") In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: ainvoke #### BaseChatModel.apredict\_messages[​](#basechatmodelapredict_messages "Direct link to BaseChatModel.apredict_messages") In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: ainvoke #### BaseLLM.**call**[​](#basellmcall "Direct link to basellmcall") In module: `language_models.llms` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseLLM.predict[​](#basellmpredict "Direct link to BaseLLM.predict") In module: `language_models.llms` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseLLM.predict\_messages[​](#basellmpredict_messages "Direct link to BaseLLM.predict_messages") In module: `language_models.llms` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseLLM.apredict[​](#basellmapredict "Direct link to BaseLLM.apredict") In module: `language_models.llms` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: ainvoke #### BaseLLM.apredict\_messages[​](#basellmapredict_messages "Direct link to BaseLLM.apredict_messages") In module: `language_models.llms` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: ainvoke #### BaseRetriever.get\_relevant\_documents[​](#baseretrieverget_relevant_documents "Direct link to BaseRetriever.get_relevant_documents") In module: `retrievers` Deprecated: 0.1.46 Removal: 0.3.0 
Alternative: invoke #### BaseRetriever.aget\_relevant\_documents[​](#baseretrieveraget_relevant_documents "Direct link to BaseRetriever.aget_relevant_documents") In module: `retrievers` Deprecated: 0.1.46 Removal: 0.3.0 Alternative: ainvoke #### ChatPromptTemplate.from\_role\_strings[​](#chatprompttemplatefrom_role_strings "Direct link to ChatPromptTemplate.from_role_strings") In module: `prompts.chat` Deprecated: 0.0.1 Removal: Alternative: from\_messages classmethod #### ChatPromptTemplate.from\_strings[​](#chatprompttemplatefrom_strings "Direct link to ChatPromptTemplate.from_strings") In module: `prompts.chat` Deprecated: 0.0.1 Removal: Alternative: from\_messages classmethod #### BaseTool.**call**[​](#basetoolcall "Direct link to basetoolcall") In module: `tools` Deprecated: 0.1.47 Removal: 0.3.0 Alternative: invoke #### convert\_pydantic\_to\_openai\_function[​](#convert_pydantic_to_openai_function "Direct link to convert_pydantic_to_openai_function") In module: `utils.function_calling` Deprecated: 0.1.16 Removal: 0.3.0 Alternative: langchain\_core.utils.function\_calling.convert\_to\_openai\_function() #### convert\_pydantic\_to\_openai\_tool[​](#convert_pydantic_to_openai_tool "Direct link to convert_pydantic_to_openai_tool") In module: `utils.function_calling` Deprecated: 0.1.16 Removal: 0.3.0 Alternative: langchain\_core.utils.function\_calling.convert\_to\_openai\_tool() #### convert\_python\_function\_to\_openai\_function[​](#convert_python_function_to_openai_function "Direct link to convert_python_function_to_openai_function") In module: `utils.function_calling` Deprecated: 0.1.16 Removal: 0.3.0 Alternative: langchain\_core.utils.function\_calling.convert\_to\_openai\_function() #### format\_tool\_to\_openai\_function[​](#format_tool_to_openai_function "Direct link to format_tool_to_openai_function") In module: `utils.function_calling` Deprecated: 0.1.16 Removal: 0.3.0 Alternative: langchain\_core.utils.function\_calling.convert\_to\_openai\_function() #### format\_tool\_to\_openai\_tool[​](#format_tool_to_openai_tool "Direct link to format_tool_to_openai_tool") In module: `utils.function_calling` Deprecated: 0.1.16 Removal: 0.3.0 Alternative: langchain\_core.utils.function\_calling.convert\_to\_openai\_tool() ### langchain[​](#langchain "Direct link to langchain") #### AgentType[​](#agenttype "Direct link to AgentType") In module: `agents.agent_types` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: Use [LangGraph](/v0.2/docs/how_to/migrate_agent/) or new agent constructor methods like create\_react\_agent, create\_json\_agent, create\_structured\_chat\_agent, etc. 
#### Chain.**call**[​](#chaincall "Direct link to chaincall") In module: `chains.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: invoke #### Chain.acall[​](#chainacall "Direct link to Chain.acall") In module: `chains.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: ainvoke #### Chain.run[​](#chainrun-1 "Direct link to Chain.run") In module: `chains.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: invoke #### Chain.arun[​](#chainarun "Direct link to Chain.arun") In module: `chains.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: ainvoke #### Chain.apply[​](#chainapply "Direct link to Chain.apply") In module: `chains.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: batch #### LLMChain[​](#llmchain "Direct link to LLMChain") In module: `chains.llm` Deprecated: 0.1.17 Removal: 0.3.0 Alternative: [RunnableSequence](/v0.2/docs/how_to/sequence/), e.g., `prompt | llm` #### LLMSingleActionAgent[​](#llmsingleactionagent "Direct link to LLMSingleActionAgent") In module: `agents.agent` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: Use [LangGraph](/v0.2/docs/how_to/migrate_agent/) or new agent constructor methods like create\_react\_agent, create\_json\_agent, create\_structured\_chat\_agent, etc. #### Agent[​](#agent "Direct link to Agent") In module: `agents.agent` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: Use [LangGraph](/v0.2/docs/how_to/migrate_agent/) or new agent constructor methods like create\_react\_agent, create\_json\_agent, create\_structured\_chat\_agent, etc. #### OpenAIFunctionsAgent[​](#openaifunctionsagent "Direct link to OpenAIFunctionsAgent") In module: `agents.openai_functions_agent.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: create\_openai\_functions\_agent #### ZeroShotAgent[​](#zeroshotagent "Direct link to ZeroShotAgent") In module: `agents.mrkl.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: create\_react\_agent #### MRKLChain[​](#mrklchain "Direct link to MRKLChain") In module: `agents.mrkl.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### ConversationalAgent[​](#conversationalagent "Direct link to ConversationalAgent") In module: `agents.conversational.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: create\_react\_agent #### ConversationalChatAgent[​](#conversationalchatagent "Direct link to ConversationalChatAgent") In module: `agents.conversational_chat.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: create\_json\_chat\_agent #### ChatAgent[​](#chatagent "Direct link to ChatAgent") In module: `agents.chat.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: create\_react\_agent #### OpenAIMultiFunctionsAgent[​](#openaimultifunctionsagent "Direct link to OpenAIMultiFunctionsAgent") In module: `agents.openai_functions_multi_agent.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: create\_openai\_tools\_agent #### ReActDocstoreAgent[​](#reactdocstoreagent "Direct link to ReActDocstoreAgent") In module: `agents.react.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### DocstoreExplorer[​](#docstoreexplorer "Direct link to DocstoreExplorer") In module: `agents.react.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### ReActTextWorldAgent[​](#reacttextworldagent "Direct link to ReActTextWorldAgent") In module: `agents.react.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### ReActChain[​](#reactchain "Direct link to ReActChain") In module: `agents.react.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### SelfAskWithSearchAgent[​](#selfaskwithsearchagent "Direct link to SelfAskWithSearchAgent") In module: 
`agents.self_ask_with_search.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: create\_self\_ask\_with\_search #### SelfAskWithSearchChain[​](#selfaskwithsearchchain "Direct link to SelfAskWithSearchChain") In module: `agents.self_ask_with_search.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### StructuredChatAgent[​](#structuredchatagent "Direct link to StructuredChatAgent") In module: `agents.structured_chat.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: create\_structured\_chat\_agent #### RetrievalQA[​](#retrievalqa "Direct link to RetrievalQA") In module: `chains.retrieval_qa.base` Deprecated: 0.1.17 Removal: 0.3.0 Alternative: [create\_retrieval\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html#langchain-chains-retrieval-create-retrieval-chain) #### load\_agent\_from\_config[​](#load_agent_from_config "Direct link to load_agent_from_config") In module: `agents.loading` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### load\_agent[​](#load_agent "Direct link to load_agent") In module: `agents.loading` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### initialize\_agent[​](#initialize_agent "Direct link to initialize_agent") In module: `agents.initialize` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: Use [LangGraph](/v0.2/docs/how_to/migrate_agent/) or new agent constructor methods like create\_react\_agent, create\_json\_agent, create\_structured\_chat\_agent, etc. #### XMLAgent[​](#xmlagent "Direct link to XMLAgent") In module: `agents.xml.base` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: create\_xml\_agent #### CohereRerank[​](#coherererank "Direct link to CohereRerank") In module: `retrievers.document_compressors.cohere_rerank` Deprecated: 0.0.30 Removal: 0.3.0 Alternative: langchain\_cohere.CohereRerank #### ConversationalRetrievalChain[​](#conversationalretrievalchain "Direct link to ConversationalRetrievalChain") In module: `chains.conversational_retrieval.base` Deprecated: 0.1.17 Removal: 0.3.0 Alternative: [create\_history\_aware\_retriever](https://api.python.langchain.com/en/latest/chains/langchain.chains.history_aware_retriever.create_history_aware_retriever.html) together with [create\_retrieval\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html#langchain-chains-retrieval-create-retrieval-chain) (see example in docstring) #### create\_extraction\_chain\_pydantic[​](#create_extraction_chain_pydantic "Direct link to create_extraction_chain_pydantic") In module: `chains.openai_tools.extraction` Deprecated: 0.1.14 Removal: 0.3.0 Alternative: [with\_structured\_output](/v0.2/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling. #### create\_openai\_fn\_runnable[​](#create_openai_fn_runnable "Direct link to create_openai_fn_runnable") In module: `chains.structured_output.base` Deprecated: 0.1.14 Removal: 0.3.0 Alternative: [with\_structured\_output](/v0.2/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling. #### create\_structured\_output\_runnable[​](#create_structured_output_runnable "Direct link to create_structured_output_runnable") In module: `chains.structured_output.base` Deprecated: 0.1.17 Removal: 0.3.0 Alternative: [with\_structured\_output](/v0.2/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling. 
#### create\_openai\_fn\_chain[​](#create_openai_fn_chain "Direct link to create_openai_fn_chain") In module: `chains.openai_functions.base` Deprecated: 0.1.1 Removal: 0.3.0 Alternative: create\_openai\_fn\_runnable #### create\_structured\_output\_chain[​](#create_structured_output_chain "Direct link to create_structured_output_chain") In module: `chains.openai_functions.base` Deprecated: 0.1.1 Removal: 0.3.0 Alternative: ChatOpenAI.with\_structured\_output #### create\_extraction\_chain[​](#create_extraction_chain "Direct link to create_extraction_chain") In module: `chains.openai_functions.extraction` Deprecated: 0.1.14 Removal: 0.3.0 Alternative: [with\_structured\_output](/v0.2/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling. #### create\_extraction\_chain\_pydantic[​](#create_extraction_chain_pydantic-1 "Direct link to create_extraction_chain_pydantic") In module: `chains.openai_functions.extraction` Deprecated: 0.1.14 Removal: 0.3.0 Alternative: [with\_structured\_output](/v0.2/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling.
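Most of the alternatives listed above reduce to two moves: call `invoke` (or `ainvoke`/`batch`) instead of the deprecated method, and compose runnables with LCEL or `with_structured_output` instead of the legacy chain constructors. A minimal sketch, assuming an OpenAI chat model and an illustrative Pydantic schema (neither is prescribed by this page):

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0125")

# Instead of llm.predict(...) / llm.predict_messages(...):
message = llm.invoke("Hello!")

# Instead of LLMChain(llm=llm, prompt=prompt).run(topic="bears"):
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | llm
joke = chain.invoke({"topic": "bears"})


# Instead of create_structured_output_chain / create_extraction_chain:
class Person(BaseModel):
    """Information about a person."""

    name: str
    age: int


structured_llm = llm.with_structured_output(Person)
person = structured_llm.invoke("Anna is 27 years old.")
```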
https://python.langchain.com/v0.2/docs/how_to/document_loader_custom/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * Custom Document Loader On this page How to create a custom Document Loader ====================================== Overview[​](#overview "Direct link to Overview") ------------------------------------------------ Applications based on LLMs frequently entail extracting data from databases or files, like PDFs, and converting it into a format that LLMs can utilize. In LangChain, this usually involves creating `Document` objects, which encapsulate the extracted text (`page_content`) along with metadata, a dictionary containing details about the document, such as the author's name or the date of publication. `Document` objects are often formatted into prompts that are fed into an LLM, allowing the LLM to use the information in the `Document` to generate a desired response (e.g., summarizing the document). `Documents` can either be used immediately or indexed into a vectorstore for future retrieval and use. The main abstractions for Document Loading are:

| Component | Description |
| --- | --- |
| Document | Contains `text` and `metadata` |
| BaseLoader | Used to convert raw data into `Documents` |
| Blob | A representation of binary data that's located either in a file or in memory |
| BaseBlobParser | Logic to parse a `Blob` to yield `Document` objects |

This guide will demonstrate how to write custom document loading and file parsing logic; specifically, we'll see how to: 1. Create a standard document loader by sub-classing from `BaseLoader`. 2. Create a parser using `BaseBlobParser` and use it in conjunction with `Blob` and `BlobLoaders`. This is useful primarily when working with files. Standard Document Loader[​](#standard-document-loader "Direct link to Standard Document Loader") ------------------------------------------------------------------------------------------------ A document loader can be implemented by sub-classing from a `BaseLoader`, which provides a standard interface for loading documents. ### Interface[​](#interface "Direct link to Interface")

| Method Name | Explanation |
| --- | --- |
| lazy\_load | Used to load documents one by one **lazily**. Use for production code. |
| alazy\_load | Async variant of `lazy_load` |
| load | Used to load all the documents into memory **eagerly**. Use for prototyping or interactive work. |
| aload | Used to load all the documents into memory **eagerly**. Use for prototyping or interactive work. **Added in 2024-04 to LangChain.** |

* The `load` method is a convenience method meant solely for prototyping work -- it just invokes `list(self.lazy_load())`. * The `alazy_load` method has a default implementation that will delegate to `lazy_load`. If you're using async, we recommend overriding the default implementation and providing a native async implementation. important When implementing a document loader, do **NOT** provide parameters via the `lazy_load` or `alazy_load` methods. All configuration is expected to be passed through the initializer (`__init__`). This was a design choice made by LangChain to make sure that once a document loader has been instantiated it has all the information needed to load documents. ### Implementation[​](#implementation "Direct link to Implementation") Let's create an example of a standard document loader that loads a file and creates a document from each line in the file.
from typing import AsyncIterator, Iteratorfrom langchain_core.document_loaders import BaseLoaderfrom langchain_core.documents import Documentclass CustomDocumentLoader(BaseLoader): """An example document loader that reads a file line by line.""" def __init__(self, file_path: str) -> None: """Initialize the loader with a file path. Args: file_path: The path to the file to load. """ self.file_path = file_path def lazy_load(self) -> Iterator[Document]: # <-- Does not take any arguments """A lazy loader that reads a file line by line. When you're implementing lazy load methods, you should use a generator to yield documents one by one. """ with open(self.file_path, encoding="utf-8") as f: line_number = 0 for line in f: yield Document( page_content=line, metadata={"line_number": line_number, "source": self.file_path}, ) line_number += 1 # alazy_load is OPTIONAL. # If you leave out the implementation, a default implementation which delegates to lazy_load will be used! async def alazy_load( self, ) -> AsyncIterator[Document]: # <-- Does not take any arguments """An async lazy loader that reads a file line by line.""" # Requires aiofiles # Install with `pip install aiofiles` # https://github.com/Tinche/aiofiles import aiofiles async with aiofiles.open(self.file_path, encoding="utf-8") as f: line_number = 0 async for line in f: yield Document( page_content=line, metadata={"line_number": line_number, "source": self.file_path}, ) line_number += 1 **API Reference:**[BaseLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_core.document_loaders.base.BaseLoader.html) | [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) ### Test πŸ§ͺ[​](#test- "Direct link to Test πŸ§ͺ") To test out the document loader, we need a file with some quality content. with open("./meow.txt", "w", encoding="utf-8") as f: quality_content = "meow meow🐱 \n meow meow🐱 \n meow😻😻" f.write(quality_content)loader = CustomDocumentLoader("./meow.txt") ## Test out the lazy load interfacefor doc in loader.lazy_load(): print() print(type(doc)) print(doc) <class 'langchain_core.documents.base.Document'>page_content='meow meow🐱 \n' metadata={'line_number': 0, 'source': './meow.txt'}<class 'langchain_core.documents.base.Document'>page_content=' meow meow🐱 \n' metadata={'line_number': 1, 'source': './meow.txt'}<class 'langchain_core.documents.base.Document'>page_content=' meow😻😻' metadata={'line_number': 2, 'source': './meow.txt'} ## Test out the async implementationasync for doc in loader.alazy_load(): print() print(type(doc)) print(doc) <class 'langchain_core.documents.base.Document'>page_content='meow meow🐱 \n' metadata={'line_number': 0, 'source': './meow.txt'}<class 'langchain_core.documents.base.Document'>page_content=' meow meow🐱 \n' metadata={'line_number': 1, 'source': './meow.txt'}<class 'langchain_core.documents.base.Document'>page_content=' meow😻😻' metadata={'line_number': 2, 'source': './meow.txt'} ::: {.callout-tip} `load()` can be helpful in an interactive environment such as a jupyter notebook. Avoid using it for production code since eager loading assumes that all the content can fit into memory, which is not always the case, especially for enterprise data. 
::: loader.load() [Document(page_content='meow meow🐱 \n', metadata={'line_number': 0, 'source': './meow.txt'}), Document(page_content=' meow meow🐱 \n', metadata={'line_number': 1, 'source': './meow.txt'}), Document(page_content=' meow😻😻', metadata={'line_number': 2, 'source': './meow.txt'})] Working with Files[​](#working-with-files "Direct link to Working with Files") ------------------------------------------------------------------------------ Many document loaders invovle parsing files. The difference between such loaders usually stems from how the file is parsed rather than how the file is loaded. For example, you can use `open` to read the binary content of either a PDF or a markdown file, but you need different parsing logic to convert that binary data into text. As a result, it can be helpful to decouple the parsing logic from the loading logic, which makes it easier to re-use a given parser regardless of how the data was loaded. ### BaseBlobParser[​](#baseblobparser "Direct link to BaseBlobParser") A `BaseBlobParser` is an interface that accepts a `blob` and outputs a list of `Document` objects. A `blob` is a representation of data that lives either in memory or in a file. LangChain python has a `Blob` primitive which is inspired by the [Blob WebAPI spec](https://developer.mozilla.org/en-US/docs/Web/API/Blob). from langchain_core.document_loaders import BaseBlobParser, Blobclass MyParser(BaseBlobParser): """A simple parser that creates a document from each line.""" def lazy_parse(self, blob: Blob) -> Iterator[Document]: """Parse a blob into a document line by line.""" line_number = 0 with blob.as_bytes_io() as f: for line in f: line_number += 1 yield Document( page_content=line, metadata={"line_number": line_number, "source": blob.source}, ) **API Reference:**[BaseBlobParser](https://api.python.langchain.com/en/latest/document_loaders/langchain_core.document_loaders.base.BaseBlobParser.html) | [Blob](https://api.python.langchain.com/en/latest/document_loaders/langchain_core.document_loaders.blob_loaders.Blob.html) blob = Blob.from_path("./meow.txt")parser = MyParser() list(parser.lazy_parse(blob)) [Document(page_content='meow meow🐱 \n', metadata={'line_number': 1, 'source': './meow.txt'}), Document(page_content=' meow meow🐱 \n', metadata={'line_number': 2, 'source': './meow.txt'}), Document(page_content=' meow😻😻', metadata={'line_number': 3, 'source': './meow.txt'})] Using the **blob** API also allows one to load content direclty from memory without having to read it from a file! blob = Blob(data=b"some data from memory\nmeow")list(parser.lazy_parse(blob)) [Document(page_content='some data from memory\n', metadata={'line_number': 1, 'source': None}), Document(page_content='meow', metadata={'line_number': 2, 'source': None})] ### Blob[​](#blob "Direct link to Blob") Let's take a quick look through some of the Blob API. blob = Blob.from_path("./meow.txt", metadata={"foo": "bar"}) blob.encoding 'utf-8' blob.as_bytes() b'meow meow\xf0\x9f\x90\xb1 \n meow meow\xf0\x9f\x90\xb1 \n meow\xf0\x9f\x98\xbb\xf0\x9f\x98\xbb' blob.as_string() 'meow meow🐱 \n meow meow🐱 \n meow😻😻' blob.as_bytes_io() <contextlib._GeneratorContextManager at 0x743f34324450> blob.metadata {'foo': 'bar'} blob.source './meow.txt' ### Blob Loaders[​](#blob-loaders "Direct link to Blob Loaders") While a parser encapsulates the logic needed to parse binary data into documents, _blob loaders_ encapsulate the logic that's necessary to load blobs from a given storage location. 
A the moment, `LangChain` only supports `FileSystemBlobLoader`. You can use the `FileSystemBlobLoader` to load blobs and then use the parser to parse them. from langchain_community.document_loaders.blob_loaders import FileSystemBlobLoaderblob_loader = FileSystemBlobLoader(path=".", glob="*.mdx", show_progress=True) **API Reference:**[FileSystemBlobLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.blob_loaders.file_system.FileSystemBlobLoader.html) parser = MyParser()for blob in blob_loader.yield_blobs(): for doc in parser.lazy_parse(blob): print(doc) break 0%| | 0/8 [00:00<?, ?it/s] page_content='# Microsoft Office\n' metadata={'line_number': 1, 'source': 'office_file.mdx'}page_content='# Markdown\n' metadata={'line_number': 1, 'source': 'markdown.mdx'}page_content='# JSON\n' metadata={'line_number': 1, 'source': 'json.mdx'}page_content='---\n' metadata={'line_number': 1, 'source': 'pdf.mdx'}page_content='---\n' metadata={'line_number': 1, 'source': 'index.mdx'}page_content='# File Directory\n' metadata={'line_number': 1, 'source': 'file_directory.mdx'}page_content='# CSV\n' metadata={'line_number': 1, 'source': 'csv.mdx'}page_content='# HTML\n' metadata={'line_number': 1, 'source': 'html.mdx'} ### Generic Loader[​](#generic-loader "Direct link to Generic Loader") LangChain has a `GenericLoader` abstraction which composes a `BlobLoader` with a `BaseBlobParser`. `GenericLoader` is meant to provide standardized classmethods that make it easy to use existing `BlobLoader` implementations. At the moment, only the `FileSystemBlobLoader` is supported. from langchain_community.document_loaders.generic import GenericLoaderloader = GenericLoader.from_filesystem( path=".", glob="*.mdx", show_progress=True, parser=MyParser())for idx, doc in enumerate(loader.lazy_load()): if idx < 5: print(doc)print("... output truncated for demo purposes") **API Reference:**[GenericLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.generic.GenericLoader.html) 0%| | 0/8 [00:00<?, ?it/s] page_content='# Microsoft Office\n' metadata={'line_number': 1, 'source': 'office_file.mdx'}page_content='\n' metadata={'line_number': 2, 'source': 'office_file.mdx'}page_content='>[The Microsoft Office](https://www.office.com/) suite of productivity software includes Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Microsoft Outlook, and Microsoft OneNote. It is available for Microsoft Windows and macOS operating systems. It is also available on Android and iOS.\n' metadata={'line_number': 3, 'source': 'office_file.mdx'}page_content='\n' metadata={'line_number': 4, 'source': 'office_file.mdx'}page_content='This covers how to load commonly used file formats including `DOCX`, `XLSX` and `PPTX` documents into a document format that we can use downstream.\n' metadata={'line_number': 5, 'source': 'office_file.mdx'}... output truncated for demo purposes #### Custom Generic Loader[​](#custom-generic-loader "Direct link to Custom Generic Loader") If you really like creating classes, you can sub-class and create a class to encapsulate the logic together. You can sub-class from this class to load content using an existing loader. 
from typing import Anyclass MyCustomLoader(GenericLoader): @staticmethod def get_parser(**kwargs: Any) -> BaseBlobParser: """Override this method to associate a default parser with the class.""" return MyParser() loader = MyCustomLoader.from_filesystem(path=".", glob="*.mdx", show_progress=True)for idx, doc in enumerate(loader.lazy_load()): if idx < 5: print(doc)print("... output truncated for demo purposes") 0%| | 0/8 [00:00<?, ?it/s] page_content='# Microsoft Office\n' metadata={'line_number': 1, 'source': 'office_file.mdx'}page_content='\n' metadata={'line_number': 2, 'source': 'office_file.mdx'}page_content='>[The Microsoft Office](https://www.office.com/) suite of productivity software includes Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Microsoft Outlook, and Microsoft OneNote. It is available for Microsoft Windows and macOS operating systems. It is also available on Android and iOS.\n' metadata={'line_number': 3, 'source': 'office_file.mdx'}page_content='\n' metadata={'line_number': 4, 'source': 'office_file.mdx'}page_content='This covers how to load commonly used file formats including `DOCX`, `XLSX` and `PPTX` documents into a document format that we can use downstream.\n' metadata={'line_number': 5, 'source': 'office_file.mdx'}... output truncated for demo purposes
https://python.langchain.com/v0.2/docs/how_to/contextual_compression/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to do retrieval with contextual compression On this page How to do retrieval with contextual compression =============================================== One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses. Contextual compression is meant to fix this. The idea is simple: instead of immediately returning retrieved documents as-is, you can compress them using the context of the given query, so that only the relevant information is returned. β€œCompressing” here refers to both compressing the contents of an individual document and filtering out documents wholesale. To use the Contextual Compression Retriever, you'll need: * a base retriever * a Document Compressor The Contextual Compression Retriever passes queries to the base retriever, takes the initial documents and passes them through the Document Compressor. The Document Compressor takes a list of documents and shortens it by reducing the contents of documents or dropping documents altogether. Get started[​](#get-started "Direct link to Get started") --------------------------------------------------------- # Helper function for printing docsdef pretty_print_docs(docs): print( f"\n{'-' * 100}\n".join( [f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)] ) ) Using a vanilla vector store retriever[​](#using-a-vanilla-vector-store-retriever "Direct link to Using a vanilla vector store retriever") ------------------------------------------------------------------------------------------------------------------------------------------ Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can see that given an example question our retriever returns one or two relevant docs and a few irrelevant docs. And even the relevant docs have a lot of irrelevant information in them. from langchain_community.document_loaders import TextLoaderfrom langchain_community.vectorstores import FAISSfrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import CharacterTextSplitterdocuments = TextLoader("state_of_the_union.txt").load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_documents(documents)retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever()docs = retriever.invoke("What did the president say about Ketanji Brown Jackson")pretty_print_docs(docs) **API Reference:**[TextLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.text.TextLoader.html) | [FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [CharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.CharacterTextSplitter.html) Document 1:Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. 
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyerβ€”an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.----------------------------------------------------------------------------------------------------Document 2:A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of supportβ€”from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.----------------------------------------------------------------------------------------------------Document 3:And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid epidemic.----------------------------------------------------------------------------------------------------Document 4:Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. And as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. That ends on my watch. Medicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. We’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. Let’s pass the Paycheck Fairness Act and paid leave. Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. 
Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jillβ€”our First Lady who teaches full-timeβ€”calls America’s best-kept secret: community colleges. Adding contextual compression with an `LLMChainExtractor`[​](#adding-contextual-compression-with-an-llmchainextractor "Direct link to adding-contextual-compression-with-an-llmchainextractor") ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll add an `LLMChainExtractor`, which will iterate over the initially returned documents and extract from each only the content that is relevant to the query. from langchain.retrievers import ContextualCompressionRetrieverfrom langchain.retrievers.document_compressors import LLMChainExtractorfrom langchain_openai import OpenAIllm = OpenAI(temperature=0)compressor = LLMChainExtractor.from_llm(llm)compression_retriever = ContextualCompressionRetriever( base_compressor=compressor, base_retriever=retriever)compressed_docs = compression_retriever.invoke( "What did the president say about Ketanji Jackson Brown")pretty_print_docs(compressed_docs) **API Reference:**[ContextualCompressionRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.contextual_compression.ContextualCompressionRetriever.html) | [LLMChainExtractor](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.chain_extract.LLMChainExtractor.html) | [OpenAI](https://api.python.langchain.com/en/latest/llms/langchain_openai.llms.base.OpenAI.html) Document 1:I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. More built-in compressors: filters[​](#more-built-in-compressors-filters "Direct link to More built-in compressors: filters") ----------------------------------------------------------------------------------------------------------------------------- ### `LLMChainFilter`[​](#llmchainfilter "Direct link to llmchainfilter") The `LLMChainFilter` is slightly simpler but more robust compressor that uses an LLM chain to decide which of the initially retrieved documents to filter out and which ones to return, without manipulating the document contents. from langchain.retrievers.document_compressors import LLMChainFilter_filter = LLMChainFilter.from_llm(llm)compression_retriever = ContextualCompressionRetriever( base_compressor=_filter, base_retriever=retriever)compressed_docs = compression_retriever.invoke( "What did the president say about Ketanji Jackson Brown")pretty_print_docs(compressed_docs) **API Reference:**[LLMChainFilter](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.chain_filter.LLMChainFilter.html) Document 1:Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyerβ€”an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. 
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. ### `EmbeddingsFilter`[​](#embeddingsfilter "Direct link to embeddingsfilter") Making an extra LLM call over each retrieved document is expensive and slow. The `EmbeddingsFilter` provides a cheaper and faster option by embedding the documents and query and only returning those documents which have sufficiently similar embeddings to the query. from langchain.retrievers.document_compressors import EmbeddingsFilterfrom langchain_openai import OpenAIEmbeddingsembeddings = OpenAIEmbeddings()embeddings_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)compression_retriever = ContextualCompressionRetriever( base_compressor=embeddings_filter, base_retriever=retriever)compressed_docs = compression_retriever.invoke( "What did the president say about Ketanji Jackson Brown")pretty_print_docs(compressed_docs) **API Reference:**[EmbeddingsFilter](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.embeddings_filter.EmbeddingsFilter.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) Document 1:Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyerβ€”an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.----------------------------------------------------------------------------------------------------Document 2:A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of supportβ€”from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders. 
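If an absolute score cutoff like `0.76` is hard to tune for your embedding model, `EmbeddingsFilter` can instead cap how many documents it keeps. Below is a minimal sketch reusing the `embeddings` and `retriever` objects from above; it assumes `EmbeddingsFilter`'s optional `k` parameter, which limits the filter to the top-k most similar documents when no threshold is set:

```python
from langchain.retrievers.document_compressors import EmbeddingsFilter

# Keep only the two chunks most similar to the query, with no absolute
# similarity threshold.
top_k_filter = EmbeddingsFilter(embeddings=embeddings, k=2)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=top_k_filter, base_retriever=retriever
)
compressed_docs = compression_retriever.invoke(
    "What did the president say about Ketanji Jackson Brown"
)
pretty_print_docs(compressed_docs)
```

A top-k cap trades recall for predictability: the prompt always receives the same number of chunks, regardless of how strong the matches are.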
Stringing compressors and document transformers together[​](#stringing-compressors-and-document-transformers-together "Direct link to Stringing compressors and document transformers together") ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Using the `DocumentCompressorPipeline` we can also easily combine multiple compressors in sequence. Along with compressors we can add `BaseDocumentTransformer`s to our pipeline, which don't perform any contextual compression but simply perform some transformation on a set of documents. For example `TextSplitter`s can be used as document transformers to split documents into smaller pieces, and the `EmbeddingsRedundantFilter` can be used to filter out redundant documents based on embedding similarity between documents. Below we create a compressor pipeline by first splitting our docs into smaller chunks, then removing redundant documents, and then filtering based on relevance to the query. from langchain.retrievers.document_compressors import DocumentCompressorPipelinefrom langchain_community.document_transformers import EmbeddingsRedundantFilterfrom langchain_text_splitters import CharacterTextSplittersplitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=". ")redundant_filter = EmbeddingsRedundantFilter(embeddings=embeddings)relevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)pipeline_compressor = DocumentCompressorPipeline( transformers=[splitter, redundant_filter, relevant_filter]) **API Reference:**[DocumentCompressorPipeline](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.base.DocumentCompressorPipeline.html) | [EmbeddingsRedundantFilter](https://api.python.langchain.com/en/latest/document_transformers/langchain_community.document_transformers.embeddings_redundant_filter.EmbeddingsRedundantFilter.html) | [CharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.CharacterTextSplitter.html) compression_retriever = ContextualCompressionRetriever( base_compressor=pipeline_compressor, base_retriever=retriever)compressed_docs = compression_retriever.invoke( "What did the president say about Ketanji Jackson Brown")pretty_print_docs(compressed_docs) Document 1:One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson----------------------------------------------------------------------------------------------------Document 2:As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year----------------------------------------------------------------------------------------------------Document 3:A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. 
A consensus builder----------------------------------------------------------------------------------------------------Document 4:Since she’s been nominated, she’s received a broad range of supportβ€”from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/contextual_compression.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to split code ](/v0.2/docs/how_to/code_splitter/)[ Next How to create custom callback handlers ](/v0.2/docs/how_to/custom_callbacks/) * [Get started](#get-started) * [Using a vanilla vector store retriever](#using-a-vanilla-vector-store-retriever) * [Adding contextual compression with an `LLMChainExtractor`](#adding-contextual-compression-with-an-llmchainextractor) * [More built-in compressors: filters](#more-built-in-compressors-filters) * [`LLMChainFilter`](#llmchainfilter) * [`EmbeddingsFilter`](#embeddingsfilter) * [Stringing compressors and document transformers together](#stringing-compressors-and-document-transformers-together)
https://python.langchain.com/v0.2/docs/security/
* [](/v0.2/) * Security On this page Security ======== LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources. Best practices[​](#best-practices "Direct link to Best practices") ------------------------------------------------------------------ When building such applications developers should remember to follow good security practices: * [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), etc. as appropriate for your application. * **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it’s safest to assume that any LLM able to use those credentials may in fact delete data. * [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_\(computing\)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It’s best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use. Risks of not doing so include, but are not limited to: * Data corruption or loss. * Unauthorized access to confidential information. * Compromised performance or availability of critical resources. Example scenarios with mitigation strategies: * A user may ask an agent with access to the file system to delete files that should not be deleted or read the content of files that contain sensitive information. To mitigate, limit the agent to only use a specific directory and only allow it to read or write files that are safe to read or write. Consider further sandboxing the agent by running it in a container. * A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse. * A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials. If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications. 
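As a concrete illustration of the file-system mitigation above, the sketch below scopes an agent's file tools to a scratch directory and exposes only read-oriented operations. It assumes the `FileManagementToolkit` from `langchain_community`; the directory and tool selection are placeholders to adapt to what your application actually needs:

```python
from tempfile import TemporaryDirectory

from langchain_community.agent_toolkits import FileManagementToolkit

# Operate inside a throwaway directory instead of the real file system.
working_directory = TemporaryDirectory()

# Expose only read-oriented tools; the agent gets no way to write, move, or delete.
toolkit = FileManagementToolkit(
    root_dir=str(working_directory.name),
    selected_tools=["read_file", "list_directory"],
)
tools = toolkit.get_tools()
```

Combined with container-level sandboxing, this keeps a misbehaving or manipulated agent from touching anything outside the directory you handed it.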
Reporting a vulnerability[​](#reporting-a-vulnerability "Direct link to Reporting a vulnerability") --------------------------------------------------------------------------------------------------- Please report security vulnerabilities by email to [security@langchain.dev](mailto:security@langchain.dev). This will ensure the issue is promptly triaged and acted upon as needed. [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/security.md) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous Changes ](/v0.2/docs/versions/v0_2/deprecations/) * [Best practices](#best-practices) * [Reporting a vulnerability](#reporting-a-vulnerability)
https://python.langchain.com/v0.2/docs/how_to/custom_callbacks/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to create custom callback handlers On this page How to create custom callback handlers ====================================== Prerequisites This guide assumes familiarity with the following concepts: * [Callbacks](/v0.2/docs/concepts/#callbacks) LangChain has some built-in callback handlers, but you will often want to create your own handlers with custom logic. To create a custom callback handler, we need to determine the [event(s)](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain-core-callbacks-base-basecallbackhandler) we want our callback handler to handle as well as what we want our callback handler to do when the event is triggered. Then all we need to do is attach the callback handler to the object, for example via [the constructor](/v0.2/docs/how_to/callbacks_constructor/) or [at runtime](/v0.2/docs/how_to/callbacks_runtime/). In the example below, we'll implement streaming with a custom handler. In our custom callback handler `MyCustomHandler`, we implement the `on_llm_new_token` handler to print the token we have just received. We then attach our custom handler to the model object as a constructor callback. from langchain_anthropic import ChatAnthropicfrom langchain_core.callbacks import BaseCallbackHandlerfrom langchain_core.prompts import ChatPromptTemplateclass MyCustomHandler(BaseCallbackHandler): def on_llm_new_token(self, token: str, **kwargs) -> None: print(f"My custom handler, token: {token}")prompt = ChatPromptTemplate.from_messages(["Tell me a joke about {animal}"])# To enable streaming, we pass in `streaming=True` to the ChatModel constructor# Additionally, we pass in our custom handler as a list to the callbacks parametermodel = ChatAnthropic( model="claude-3-sonnet-20240229", streaming=True, callbacks=[MyCustomHandler()])chain = prompt | modelresponse = chain.invoke({"animal": "bears"}) **API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) | [BaseCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) My custom handler, token: HereMy custom handler, token: 'sMy custom handler, token: aMy custom handler, token: bearMy custom handler, token: jokeMy custom handler, token: forMy custom handler, token: youMy custom handler, token: :My custom handler, token: WhyMy custom handler, token: diMy custom handler, token: d theMy custom handler, token: bearMy custom handler, token: dissolMy custom handler, token: veMy custom handler, token: inMy custom handler, token: waterMy custom handler, token: ?My custom handler, token: BecauseMy custom handler, token: itMy custom handler, token: wasMy custom handler, token: aMy custom handler, token: polarMy custom handler, token: bearMy custom handler, token: ! You can see [this reference page](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain-core-callbacks-base-basecallbackhandler) for a list of events you can handle. Note that the `handle_chain_*` events run for most LCEL runnables. Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned how to create your own custom callback handlers. 
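The constructor is not the only attachment point: the same handler can be supplied at runtime so that it applies to a single invocation only. A minimal sketch reusing the `prompt` and `MyCustomHandler` defined above:

```python
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-3-sonnet-20240229", streaming=True)
chain = prompt | model

# Callbacks passed via `config` apply to this call (and its child runs) only.
response = chain.invoke(
    {"animal": "bears"}, config={"callbacks": [MyCustomHandler()]}
)
```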
Next, check out the other how-to guides in this section, such as [how to attach callbacks to a runnable](/v0.2/docs/how_to/callbacks_attach/). [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/custom_callbacks.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to do retrieval with contextual compression ](/v0.2/docs/how_to/contextual_compression/)[ Next How to create a custom chat model class ](/v0.2/docs/how_to/custom_chat_model/) * [Next steps](#next-steps)
https://python.langchain.com/v0.2/docs/how_to/HTML_header_metadata_splitter/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to split by HTML header On this page How to split by HTML header =========================== Description and motivation[​](#description-and-motivation "Direct link to Description and motivation") ------------------------------------------------------------------------------------------------------ [HTMLHeaderTextSplitter](https://api.python.langchain.com/en/latest/html/langchain_text_splitters.html.HTMLHeaderTextSplitter.html) is a "structure-aware" chunker that splits text at the HTML element level and adds metadata for each header "relevant" to any given chunk. It can return chunks element by element or combine elements with the same metadata, with the objectives of (a) keeping related text grouped (more or less) semantically and (b) preserving context-rich information encoded in document structures. It can be used with other text splitters as part of a chunking pipeline. It is analogous to the [MarkdownHeaderTextSplitter](/v0.2/docs/how_to/markdown_header_metadata_splitter/) for markdown files. To specify what headers to split on, specify `headers_to_split_on` when instantiating `HTMLHeaderTextSplitter` as shown below. Usage examples[​](#usage-examples "Direct link to Usage examples") ------------------------------------------------------------------ ### 1) How to split HTML strings:[​](#1-how-to-split-html-strings "Direct link to 1) How to split HTML strings:") %pip install -qU langchain-text-splitters from langchain_text_splitters import HTMLHeaderTextSplitterhtml_string = """<!DOCTYPE html><html><body> <div> <h1>Foo</h1> <p>Some intro text about Foo.</p> <div> <h2>Bar main section</h2> <p>Some intro text about Bar.</p> <h3>Bar subsection 1</h3> <p>Some text about the first subtopic of Bar.</p> <h3>Bar subsection 2</h3> <p>Some text about the second subtopic of Bar.</p> </div> <div> <h2>Baz</h2> <p>Some text about Baz</p> </div> <br> <p>Some concluding text about Foo</p> </div></body></html>"""headers_to_split_on = [ ("h1", "Header 1"), ("h2", "Header 2"), ("h3", "Header 3"),]html_splitter = HTMLHeaderTextSplitter(headers_to_split_on)html_header_splits = html_splitter.split_text(html_string)html_header_splits **API Reference:**[HTMLHeaderTextSplitter](https://api.python.langchain.com/en/latest/html/langchain_text_splitters.html.HTMLHeaderTextSplitter.html) [Document(page_content='Foo'), Document(page_content='Some intro text about Foo. 
\nBar main section Bar subsection 1 Bar subsection 2', metadata={'Header 1': 'Foo'}), Document(page_content='Some intro text about Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section'}), Document(page_content='Some text about the first subtopic of Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section', 'Header 3': 'Bar subsection 1'}), Document(page_content='Some text about the second subtopic of Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section', 'Header 3': 'Bar subsection 2'}), Document(page_content='Baz', metadata={'Header 1': 'Foo'}), Document(page_content='Some text about Baz', metadata={'Header 1': 'Foo', 'Header 2': 'Baz'}), Document(page_content='Some concluding text about Foo', metadata={'Header 1': 'Foo'})] To return each element together with their associated headers, specify `return_each_element=True` when instantiating `HTMLHeaderTextSplitter`: html_splitter = HTMLHeaderTextSplitter( headers_to_split_on, return_each_element=True,)html_header_splits_elements = html_splitter.split_text(html_string) Comparing with the above, where elements are aggregated by their headers: for element in html_header_splits[:2]: print(element) page_content='Foo'page_content='Some intro text about Foo. \nBar main section Bar subsection 1 Bar subsection 2' metadata={'Header 1': 'Foo'} Now each element is returned as a distinct `Document`: for element in html_header_splits_elements[:3]: print(element) page_content='Foo'page_content='Some intro text about Foo.' metadata={'Header 1': 'Foo'}page_content='Bar main section Bar subsection 1 Bar subsection 2' metadata={'Header 1': 'Foo'} #### 2) How to split from a URL or HTML file:[​](#2-how-to-split-from-a-url-or-html-file "Direct link to 2) How to split from a URL or HTML file:") To read directly from a URL, pass the URL string into the `split_text_from_url` method. Similarly, a local HTML file can be passed to the `split_text_from_file` method. url = "https://plato.stanford.edu/entries/goedel/"headers_to_split_on = [ ("h1", "Header 1"), ("h2", "Header 2"), ("h3", "Header 3"), ("h4", "Header 4"),]html_splitter = HTMLHeaderTextSplitter(headers_to_split_on)# for local file use html_splitter.split_text_from_file(<path_to_file>)html_header_splits = html_splitter.split_text_from_url(url) ### 2) How to constrain chunk sizes:[​](#2-how-to-constrain-chunk-sizes "Direct link to 2) How to constrain chunk sizes:") `HTMLHeaderTextSplitter`, which splits based on HTML headers, can be composed with another splitter which constrains splits based on character lengths, such as `RecursiveCharacterTextSplitter`. This can be done using the `.split_documents` method of the second splitter: from langchain_text_splitters import RecursiveCharacterTextSplitterchunk_size = 500chunk_overlap = 30text_splitter = RecursiveCharacterTextSplitter( chunk_size=chunk_size, chunk_overlap=chunk_overlap)# Splitsplits = text_splitter.split_documents(html_header_splits)splits[80:85] **API Reference:**[RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html) [Document(page_content='We see that GΓΆdel first tried to reduce the consistency problem for analysis to that of arithmetic. This seemed to require a truth definition for arithmetic, which in turn led to paradoxes, such as the Liar paradox (β€œThis sentence is false”) and Berry’s paradox (β€œThe least number not defined by an expression consisting of just fourteen English words”). 
GΓΆdel then noticed that such paradoxes would not necessarily arise if truth were replaced by provability. But this means that arithmetic truth', metadata={'Header 1': 'Kurt GΓΆdel', 'Header 2': '2. GΓΆdel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}), Document(page_content='means that arithmetic truth and arithmetic provability are not co-extensive β€” whence the First Incompleteness Theorem.', metadata={'Header 1': 'Kurt GΓΆdel', 'Header 2': '2. GΓΆdel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}), Document(page_content='This account of GΓΆdel’s discovery was told to Hao Wang very much after the fact; but in GΓΆdel’s contemporary correspondence with Bernays and Zermelo, essentially the same description of his path to the theorems is given. (See GΓΆdel 2003a and GΓΆdel 2003b respectively.) From those accounts we see that the undefinability of truth in arithmetic, a result credited to Tarski, was likely obtained in some form by GΓΆdel by 1931. But he neither publicized nor published the result; the biases logicians', metadata={'Header 1': 'Kurt GΓΆdel', 'Header 2': '2. GΓΆdel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}), Document(page_content='result; the biases logicians had expressed at the time concerning the notion of truth, biases which came vehemently to the fore when Tarski announced his results on the undefinability of truth in formal systems 1935, may have served as a deterrent to GΓΆdel’s publication of that theorem.', metadata={'Header 1': 'Kurt GΓΆdel', 'Header 2': '2. GΓΆdel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}), Document(page_content='We now describe the proof of the two theorems, formulating GΓΆdel’s results in Peano arithmetic. GΓΆdel himself used a system related to that defined in Principia Mathematica, but containing Peano arithmetic. In our presentation of the First and Second Incompleteness Theorems we refer to Peano arithmetic as P, following GΓΆdel’s notation.', metadata={'Header 1': 'Kurt GΓΆdel', 'Header 2': '2. GΓΆdel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.2 The proof of the First Incompleteness Theorem'})] Limitations[​](#limitations "Direct link to Limitations") --------------------------------------------------------- There can be quite a bit of structural variation from one HTML document to another, and while `HTMLHeaderTextSplitter` will attempt to attach all "relevant" headers to any given chunk, it can sometimes miss certain headers. For example, the algorithm assumes an informational hierarchy in which headers are always at nodes "above" associated text, i.e. prior siblings, ancestors, and combinations thereof. 
In the following news article (as of the writing of this document), the document is structured such that the text of the top-level headline, while tagged "h1", is in a _distinct_ subtree from the text elements that we'd expect it to be _"above"_β€”so we can observe that the "h1" element and its associated text do not show up in the chunk metadata (but, where applicable, we do see "h2" and its associated text): url = "https://www.cnn.com/2023/09/25/weather/el-nino-winter-us-climate/index.html"headers_to_split_on = [ ("h1", "Header 1"), ("h2", "Header 2"),]html_splitter = HTMLHeaderTextSplitter(headers_to_split_on)html_header_splits = html_splitter.split_text_from_url(url)print(html_header_splits[1].page_content[:500]) No two El NiΓ±o winters are the same, but many have temperature and precipitation trends in common. Average conditions during an El NiΓ±o winter across the continental US. One of the major reasons is the position of the jet stream, which often shifts south during an El NiΓ±o winter. This shift typically brings wetter and cooler weather to the South while the North becomes drier and warmer, according to NOAA. Because the jet stream is essentially a river of air that storms flow through, they c [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/HTML_header_metadata_splitter.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous Custom Document Loader ](/v0.2/docs/how_to/document_loader_custom/)[ Next How to split by HTML sections ](/v0.2/docs/how_to/HTML_section_aware_splitter/) * [Description and motivation](#description-and-motivation) * [Usage examples](#usage-examples) * [1) How to split HTML strings:](#1-how-to-split-html-strings) * [2) How to constrain chunk sizes:](#2-how-to-constrain-chunk-sizes) * [Limitations](#limitations)
https://python.langchain.com/v0.2/docs/how_to/HTML_section_aware_splitter/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to split by HTML sections On this page How to split by HTML sections ============================= Description and motivation[​](#description-and-motivation "Direct link to Description and motivation") ------------------------------------------------------------------------------------------------------ Similar in concept to the [HTMLHeaderTextSplitter](/v0.2/docs/how_to/HTML_header_metadata_splitter/), the `HTMLSectionSplitter` is a "structure-aware" chunker that splits text at the element level and adds metadata for each header "relevant" to any given chunk. It can return chunks element by element or combine elements with the same metadata, with the objectives of (a) keeping related text grouped (more or less) semantically and (b) preserving context-rich information encoded in document structures. Use `xslt_path` to provide an absolute path to transform the HTML so that it can detect sections based on provided tags. The default is to use the `converting_to_header.xslt` file in the `data_connection/document_transformers` directory. This is for converting the html to a format/layout that is easier to detect sections. For example, `span` based on their font size can be converted to header tags to be detected as a section. Usage examples[​](#usage-examples "Direct link to Usage examples") ------------------------------------------------------------------ ### 1) How to split HTML strings:[​](#1-how-to-split-html-strings "Direct link to 1) How to split HTML strings:") from langchain_text_splitters import HTMLSectionSplitterhtml_string = """ <!DOCTYPE html> <html> <body> <div> <h1>Foo</h1> <p>Some intro text about Foo.</p> <div> <h2>Bar main section</h2> <p>Some intro text about Bar.</p> <h3>Bar subsection 1</h3> <p>Some text about the first subtopic of Bar.</p> <h3>Bar subsection 2</h3> <p>Some text about the second subtopic of Bar.</p> </div> <div> <h2>Baz</h2> <p>Some text about Baz</p> </div> <br> <p>Some concluding text about Foo</p> </div> </body> </html>"""headers_to_split_on = [("h1", "Header 1"), ("h2", "Header 2")]html_splitter = HTMLSectionSplitter(headers_to_split_on)html_header_splits = html_splitter.split_text(html_string)html_header_splits **API Reference:**[HTMLSectionSplitter](https://api.python.langchain.com/en/latest/html/langchain_text_splitters.html.HTMLSectionSplitter.html) [Document(page_content='Foo \n Some intro text about Foo.', metadata={'Header 1': 'Foo'}), Document(page_content='Bar main section \n Some intro text about Bar. \n Bar subsection 1 \n Some text about the first subtopic of Bar. \n Bar subsection 2 \n Some text about the second subtopic of Bar.', metadata={'Header 2': 'Bar main section'}), Document(page_content='Baz \n Some text about Baz \n \n \n Some concluding text about Foo', metadata={'Header 2': 'Baz'})] ### 2) How to constrain chunk sizes:[​](#2-how-to-constrain-chunk-sizes "Direct link to 2) How to constrain chunk sizes:") `HTMLSectionSplitter` can be used with other text splitters as part of a chunking pipeline. Internally, it uses the `RecursiveCharacterTextSplitter` when the section size is larger than the chunk size. It also considers the font size of the text to determine whether it is a section or not based on the determined font size threshold. 
from langchain_text_splitters import RecursiveCharacterTextSplitterhtml_string = """ <!DOCTYPE html> <html> <body> <div> <h1>Foo</h1> <p>Some intro text about Foo.</p> <div> <h2>Bar main section</h2> <p>Some intro text about Bar.</p> <h3>Bar subsection 1</h3> <p>Some text about the first subtopic of Bar.</p> <h3>Bar subsection 2</h3> <p>Some text about the second subtopic of Bar.</p> </div> <div> <h2>Baz</h2> <p>Some text about Baz</p> </div> <br> <p>Some concluding text about Foo</p> </div> </body> </html>"""headers_to_split_on = [ ("h1", "Header 1"), ("h2", "Header 2"), ("h3", "Header 3"), ("h4", "Header 4"),]html_splitter = HTMLSectionSplitter(headers_to_split_on)html_header_splits = html_splitter.split_text(html_string)chunk_size = 500chunk_overlap = 30text_splitter = RecursiveCharacterTextSplitter( chunk_size=chunk_size, chunk_overlap=chunk_overlap)# Splitsplits = text_splitter.split_documents(html_header_splits)splits **API Reference:**[RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html) [Document(page_content='Foo \n Some intro text about Foo.', metadata={'Header 1': 'Foo'}), Document(page_content='Bar main section \n Some intro text about Bar.', metadata={'Header 2': 'Bar main section'}), Document(page_content='Bar subsection 1 \n Some text about the first subtopic of Bar.', metadata={'Header 3': 'Bar subsection 1'}), Document(page_content='Bar subsection 2 \n Some text about the second subtopic of Bar.', metadata={'Header 3': 'Bar subsection 2'}), Document(page_content='Baz \n Some text about Baz \n \n \n Some concluding text about Foo', metadata={'Header 2': 'Baz'})] [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/HTML_section_aware_splitter.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to split by HTML header ](/v0.2/docs/how_to/HTML_header_metadata_splitter/)[ Next How to use the MultiQueryRetriever ](/v0.2/docs/how_to/MultiQueryRetriever/) * [Description and motivation](#description-and-motivation) * [Usage examples](#usage-examples) * [1) How to split HTML strings:](#1-how-to-split-html-strings) * [2) How to constrain chunk sizes:](#2-how-to-constrain-chunk-sizes)
https://python.langchain.com/v0.2/docs/how_to/custom_llm/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to create a custom LLM class On this page How to create a custom LLM class ================================ This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain. Wrapping your LLM with the standard `LLM` interface allow you to use your LLM in existing LangChain programs with minimal code modifications! As an bonus, your LLM will automatically become a LangChain `Runnable` and will benefit from some optimizations out of the box, async support, the `astream_events` API, etc. Implementation[​](#implementation "Direct link to Implementation") ------------------------------------------------------------------ There are only two required things that a custom LLM needs to implement: Method Description `_call` Takes in a string and some optional stop words, and returns a string. Used by `invoke`. `_llm_type` A property that returns a string, used for logging purposes only. Optional implementations: Method Description `_identifying_params` Used to help with identifying the model and printing the LLM; should return a dictionary. This is a **@property**. `_acall` Provides an async native implementation of `_call`, used by `ainvoke`. `_stream` Method to stream the output token by token. `_astream` Provides an async native implementation of `_stream`; in newer LangChain versions, defaults to `_stream`. Let's implement a simple custom LLM that just returns the first n characters of the input. from typing import Any, Dict, Iterator, List, Mapping, Optionalfrom langchain_core.callbacks.manager import CallbackManagerForLLMRunfrom langchain_core.language_models.llms import LLMfrom langchain_core.outputs import GenerationChunkclass CustomLLM(LLM): """A custom chat model that echoes the first `n` characters of the input. When contributing an implementation to LangChain, carefully document the model including the initialization parameters, include an example of how to initialize the model and include any relevant links to the underlying models documentation or API. Example: .. code-block:: python model = CustomChatModel(n=2) result = model.invoke([HumanMessage(content="hello")]) result = model.batch([[HumanMessage(content="hello")], [HumanMessage(content="world")]]) """ n: int """The number of characters from the last message of the prompt to be echoed.""" def _call( self, prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any, ) -> str: """Run the LLM on the given input. Override this method to implement the LLM logic. Args: prompt: The prompt to generate from. stop: Stop words to use when generating. Model output is cut off at the first occurrence of any of the stop substrings. If stop tokens are not supported consider raising NotImplementedError. run_manager: Callback manager for the run. **kwargs: Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns: The model output as a string. Actual completions SHOULD NOT include the prompt. """ if stop is not None: raise ValueError("stop kwargs are not permitted.") return prompt[: self.n] def _stream( self, prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any, ) -> Iterator[GenerationChunk]: """Stream the LLM on the given prompt. This method should be overridden by subclasses that support streaming. 
If not implemented, the default behavior of calls to stream will be to fallback to the non-streaming version of the model and return the output as a single chunk. Args: prompt: The prompt to generate from. stop: Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. run_manager: Callback manager for the run. **kwargs: Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns: An iterator of GenerationChunks. """ for char in prompt[: self.n]: chunk = GenerationChunk(text=char) if run_manager: run_manager.on_llm_new_token(chunk.text, chunk=chunk) yield chunk @property def _identifying_params(self) -> Dict[str, Any]: """Return a dictionary of identifying parameters.""" return { # The model name allows users to specify custom token counting # rules in LLM monitoring applications (e.g., in LangSmith users # can provide per token pricing for their model and monitor # costs for the given LLM.) "model_name": "CustomChatModel", } @property def _llm_type(self) -> str: """Get the type of language model used by this chat model. Used for logging purposes only.""" return "custom" **API Reference:**[CallbackManagerForLLMRun](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.CallbackManagerForLLMRun.html) | [LLM](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.llms.LLM.html) | [GenerationChunk](https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.generation.GenerationChunk.html) ### Let's test it πŸ§ͺ[​](#lets-test-it- "Direct link to Let's test it πŸ§ͺ") This LLM will implement the standard `Runnable` interface of LangChain which many of the LangChain abstractions support! llm = CustomLLM(n=5)print(llm) CustomLLMParams: {'model_name': 'CustomChatModel'} llm.invoke("This is a foobar thing") 'This ' await llm.ainvoke("world") 'world' llm.batch(["woof woof woof", "meow meow meow"]) ['woof ', 'meow '] await llm.abatch(["woof woof woof", "meow meow meow"]) ['woof ', 'meow '] async for token in llm.astream("hello"): print(token, end="|", flush=True) h|e|l|l|o| Let's confirm that in integrates nicely with other `LangChain` APIs. 
from langchain_core.prompts import ChatPromptTemplate **API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) prompt = ChatPromptTemplate.from_messages( [("system", "you are a bot"), ("human", "{input}")]) llm = CustomLLM(n=7)chain = prompt | llm idx = 0async for event in chain.astream_events({"input": "hello there!"}, version="v1"): print(event) idx += 1 if idx > 7: # Truncate break {'event': 'on_chain_start', 'run_id': '05f24b4f-7ea3-4fb6-8417-3aa21633462f', 'name': 'RunnableSequence', 'tags': [], 'metadata': {}, 'data': {'input': {'input': 'hello there!'}}}{'event': 'on_prompt_start', 'name': 'ChatPromptTemplate', 'run_id': '7e996251-a926-4344-809e-c425a9846d21', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'input': {'input': 'hello there!'}}}{'event': 'on_prompt_end', 'name': 'ChatPromptTemplate', 'run_id': '7e996251-a926-4344-809e-c425a9846d21', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'input': {'input': 'hello there!'}, 'output': ChatPromptValue(messages=[SystemMessage(content='you are a bot'), HumanMessage(content='hello there!')])}}{'event': 'on_llm_start', 'name': 'CustomLLM', 'run_id': 'a8766beb-10f4-41de-8750-3ea7cf0ca7e2', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'input': {'prompts': ['System: you are a bot\nHuman: hello there!']}}}{'event': 'on_llm_stream', 'name': 'CustomLLM', 'run_id': 'a8766beb-10f4-41de-8750-3ea7cf0ca7e2', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': 'S'}}{'event': 'on_chain_stream', 'run_id': '05f24b4f-7ea3-4fb6-8417-3aa21633462f', 'tags': [], 'metadata': {}, 'name': 'RunnableSequence', 'data': {'chunk': 'S'}}{'event': 'on_llm_stream', 'name': 'CustomLLM', 'run_id': 'a8766beb-10f4-41de-8750-3ea7cf0ca7e2', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': 'y'}}{'event': 'on_chain_stream', 'run_id': '05f24b4f-7ea3-4fb6-8417-3aa21633462f', 'tags': [], 'metadata': {}, 'name': 'RunnableSequence', 'data': {'chunk': 'y'}} Contributing[​](#contributing "Direct link to Contributing") ------------------------------------------------------------ We appreciate all chat model integration contributions. Here's a checklist to help make sure your contribution gets added to LangChain: Documentation: * The model contains doc-strings for all initialization arguments, as these will be surfaced in the [APIReference](https://api.python.langchain.com/en/stable/langchain_api_reference.html). * The class doc-string for the model contains a link to the model API if the model is powered by a service. Tests: * Add unit or integration tests to the overridden methods. Verify that `invoke`, `ainvoke`, `batch`, `stream` work if you've over-ridden the corresponding code. Streaming (if you're implementing it): * Make sure to invoke the `on_llm_new_token` callback * `on_llm_new_token` is invoked BEFORE yielding the chunk Stop Token Behavior: * Stop token should be respected * Stop token should be INCLUDED as part of the response Secret API Keys: * If your model connects to an API it will likely accept API keys as part of its initialization. Use Pydantic's `SecretStr` type for secrets, so they don't get accidentally printed out when folks print the model. [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/custom_llm.ipynb) * * * #### Was this page helpful? 
https://python.langchain.com/v0.2/docs/how_to/MultiQueryRetriever/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to use the MultiQueryRetriever On this page How to use the MultiQueryRetriever ================================== Distance-based vector database retrieval embeds (represents) queries in high-dimensional space and finds similar embedded documents based on a distance metric. But, retrieval may produce different results with subtle changes in query wording, or if the embeddings do not capture the semantics of the data well. Prompt engineering / tuning is sometimes done to manually address these problems, but can be tedious. The [MultiQueryRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.multi_query.MultiQueryRetriever.html) automates the process of prompt tuning by using an LLM to generate multiple queries from different perspectives for a given user input query. For each query, it retrieves a set of relevant documents and takes the unique union across all queries to get a larger set of potentially relevant documents. By generating multiple perspectives on the same question, the `MultiQueryRetriever` can mitigate some of the limitations of the distance-based retrieval and get a richer set of results. Let's build a vectorstore using the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng from the [RAG tutorial](/v0.2/docs/tutorials/rag/): # Build a sample vectorDBfrom langchain_chroma import Chromafrom langchain_community.document_loaders import WebBaseLoaderfrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import RecursiveCharacterTextSplitter# Load blog postloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")data = loader.load()# Splittext_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)splits = text_splitter.split_documents(data)# VectorDBembedding = OpenAIEmbeddings()vectordb = Chroma.from_documents(documents=splits, embedding=embedding) **API Reference:**[WebBaseLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html) #### Simple usage[​](#simple-usage "Direct link to Simple usage") Specify the LLM to use for query generation, and the retriever will do the rest. from langchain.retrievers.multi_query import MultiQueryRetrieverfrom langchain_openai import ChatOpenAIquestion = "What are the approaches to Task Decomposition?"llm = ChatOpenAI(temperature=0)retriever_from_llm = MultiQueryRetriever.from_llm( retriever=vectordb.as_retriever(), llm=llm) **API Reference:**[MultiQueryRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.multi_query.MultiQueryRetriever.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) # Set logging for the queriesimport logginglogging.basicConfig()logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO) unique_docs = retriever_from_llm.invoke(question)len(unique_docs) INFO:langchain.retrievers.multi_query:Generated queries: ['1. How can Task Decomposition be achieved through different methods?', '2. 
What strategies are commonly used for Task Decomposition?', '3. What are the various techniques for breaking down tasks in Task Decomposition?'] 5 Note that the underlying queries generated by the retriever are logged at the `INFO` level. #### Supplying your own prompt[​](#supplying-your-own-prompt "Direct link to Supplying your own prompt") Under the hood, `MultiQueryRetriever` generates queries using a specific [prompt](https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/multi_query.html#MultiQueryRetriever). To customize this prompt: 1. Make a [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) with an input variable for the question; 2. Implement an [output parser](/v0.2/docs/concepts/#output-parsers) like the one below to split the result into a list of queries. The prompt and output parser together must support the generation of a list of queries. from typing import Listfrom langchain_core.output_parsers import BaseOutputParserfrom langchain_core.prompts import PromptTemplatefrom langchain_core.pydantic_v1 import BaseModel, Field# Output parser will split the LLM result into a list of queriesclass LineListOutputParser(BaseOutputParser[List[str]]): """Output parser for a list of lines.""" def parse(self, text: str) -> List[str]: lines = text.strip().split("\n") return linesoutput_parser = LineListOutputParser()QUERY_PROMPT = PromptTemplate( input_variables=["question"], template="""You are an AI language model assistant. Your task is to generate five different versions of the given user question to retrieve relevant documents from a vector database. By generating multiple perspectives on the user question, your goal is to help the user overcome some of the limitations of the distance-based similarity search. Provide these alternative questions separated by newlines. Original question: {question}""",)llm = ChatOpenAI(temperature=0)# Chainllm_chain = QUERY_PROMPT | llm | output_parser# Other inputsquestion = "What are the approaches to Task Decomposition?" **API Reference:**[BaseOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.base.BaseOutputParser.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) # Runretriever = MultiQueryRetriever( retriever=vectordb.as_retriever(), llm_chain=llm_chain, parser_key="lines") # "lines" is the key (attribute name) of the parsed output# Resultsunique_docs = retriever.invoke("What does the course say about regression?")len(unique_docs) INFO:langchain.retrievers.multi_query:Generated queries: ['1. Can you provide insights on regression from the course material?', '2. How is regression discussed in the course content?', '3. What information does the course offer about regression?', '4. In what way is regression covered in the course?', '5. What are the teachings of the course regarding regression?'] 9 [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/MultiQueryRetriever.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). 
https://python.langchain.com/v0.2/docs/how_to/custom_retriever/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * Custom Retriever On this page How to create a custom Retriever ================================ Overview[​](#overview "Direct link to Overview") ------------------------------------------------ Many LLM applications involve retrieving information from external data sources using a `Retriever`. A retriever is responsible for retrieving a list of `Documents` relevant to a given user `query`. The retrieved documents are often formatted into prompts that are fed into an LLM, allowing the LLM to use the information in the retrieved documents to generate an appropriate response (e.g., answering a user question based on a knowledge base). Interface[​](#interface "Direct link to Interface") --------------------------------------------------- To create your own retriever, you need to extend the `BaseRetriever` class and implement the following methods: Method Description Required/Optional `_get_relevant_documents` Get documents relevant to a query. Required `_aget_relevant_documents` Implement to provide async native support. Optional The logic inside of `_get_relevant_documents` can involve arbitrary calls to a database or to the web using requests. tip By inheriting from `BaseRetriever`, your retriever automatically becomes a LangChain [Runnable](/v0.2/docs/concepts/#interface) and will gain the standard `Runnable` functionality out of the box! info You can use a `RunnableLambda` or `RunnableGenerator` to implement a retriever. The main benefit of implementing a retriever as a `BaseRetriever` vs. a `RunnableLambda` (a custom [runnable function](/v0.2/docs/how_to/functions/)) is that a `BaseRetriever` is a well-known LangChain entity so some tooling for monitoring may implement specialized behavior for retrievers. Another difference is that a `BaseRetriever` will behave slightly differently from `RunnableLambda` in some APIs; e.g., the `start` event in the `astream_events` API will be `on_retriever_start` instead of `on_chain_start`. Example[​](#example "Direct link to Example") --------------------------------------------- Let's implement a toy retriever that returns all documents whose text contains the text in the user query. from typing import Listfrom langchain_core.callbacks import CallbackManagerForRetrieverRunfrom langchain_core.documents import Documentfrom langchain_core.retrievers import BaseRetrieverclass ToyRetriever(BaseRetriever): """A toy retriever that contains the top k documents that contain the user query. This retriever only implements the sync method _get_relevant_documents. If the retriever were to involve file access or network access, it could benefit from a native async implementation of `_aget_relevant_documents`. As usual, with Runnables, there's a default async implementation provided that delegates to the sync implementation running on another thread. 
""" documents: List[Document] """List of documents to retrieve from.""" k: int """Number of top results to return""" def _get_relevant_documents( self, query: str, *, run_manager: CallbackManagerForRetrieverRun ) -> List[Document]: """Sync implementations for retriever.""" matching_documents = [] for document in documents: if len(matching_documents) > self.k: return matching_documents if query.lower() in document.page_content.lower(): matching_documents.append(document) return matching_documents # Optional: Provide a more efficient native implementation by overriding # _aget_relevant_documents # async def _aget_relevant_documents( # self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun # ) -> List[Document]: # """Asynchronously get documents relevant to a query. # Args: # query: String to find relevant documents for # run_manager: The callbacks handler to use # Returns: # List of relevant documents # """ **API Reference:**[CallbackManagerForRetrieverRun](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.CallbackManagerForRetrieverRun.html) | [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) | [BaseRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_core.retrievers.BaseRetriever.html) Test it πŸ§ͺ[​](#test-it- "Direct link to Test it πŸ§ͺ") ---------------------------------------------------- documents = [ Document( page_content="Dogs are great companions, known for their loyalty and friendliness.", metadata={"type": "dog", "trait": "loyalty"}, ), Document( page_content="Cats are independent pets that often enjoy their own space.", metadata={"type": "cat", "trait": "independence"}, ), Document( page_content="Goldfish are popular pets for beginners, requiring relatively simple care.", metadata={"type": "fish", "trait": "low maintenance"}, ), Document( page_content="Parrots are intelligent birds capable of mimicking human speech.", metadata={"type": "bird", "trait": "intelligence"}, ), Document( page_content="Rabbits are social animals that need plenty of space to hop around.", metadata={"type": "rabbit", "trait": "social"}, ),]retriever = ToyRetriever(documents=documents, k=3) retriever.invoke("that") [Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'type': 'cat', 'trait': 'independence'}), Document(page_content='Rabbits are social animals that need plenty of space to hop around.', metadata={'type': 'rabbit', 'trait': 'social'})] It's a **runnable** so it'll benefit from the standard Runnable Interface! 
🀩 await retriever.ainvoke("that") [Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'type': 'cat', 'trait': 'independence'}), Document(page_content='Rabbits are social animals that need plenty of space to hop around.', metadata={'type': 'rabbit', 'trait': 'social'})] retriever.batch(["dog", "cat"]) [[Document(page_content='Dogs are great companions, known for their loyalty and friendliness.', metadata={'type': 'dog', 'trait': 'loyalty'})], [Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'type': 'cat', 'trait': 'independence'})]] async for event in retriever.astream_events("bar", version="v1"): print(event) {'event': 'on_retriever_start', 'run_id': 'f96f268d-8383-4921-b175-ca583924d9ff', 'name': 'ToyRetriever', 'tags': [], 'metadata': {}, 'data': {'input': 'bar'}}{'event': 'on_retriever_stream', 'run_id': 'f96f268d-8383-4921-b175-ca583924d9ff', 'tags': [], 'metadata': {}, 'name': 'ToyRetriever', 'data': {'chunk': []}}{'event': 'on_retriever_end', 'name': 'ToyRetriever', 'run_id': 'f96f268d-8383-4921-b175-ca583924d9ff', 'tags': [], 'metadata': {}, 'data': {'output': []}} Contributing[​](#contributing "Direct link to Contributing") ------------------------------------------------------------ We appreciate contributions of interesting retrievers! Here's a checklist to help make sure your contribution gets added to LangChain: Documentation: * The retriever contains doc-strings for all initialization arguments, as these will be surfaced in the [API Reference](https://api.python.langchain.com/en/stable/langchain_api_reference.html). * The class doc-string for the model contains a link to any relevant APIs used for the retriever (e.g., if the retriever is retrieving from wikipedia, it'll be good to link to the wikipedia API!) Tests: * Add unit or integration tests to verify that `invoke` and `ainvoke` work. Optimizations: If the retriever is connecting to external data sources (e.g., an API or a file), it'll almost certainly benefit from an async native optimization! * Provide a native async implementation of `_aget_relevant_documents` (used by `ainvoke`) [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/custom_retriever.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to create a custom LLM class ](/v0.2/docs/how_to/custom_llm/)[ Next How to create custom tools ](/v0.2/docs/how_to/custom_tools/) * [Overview](#overview) * [Interface](#interface) * [Example](#example) * [Test it πŸ§ͺ](#test-it-) * [Contributing](#contributing)
null
https://python.langchain.com/v0.2/docs/how_to/add_scores_retriever/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to add scores to retriever results On this page How to add scores to retriever results ====================================== Retrievers will return sequences of [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects, which by default include no information about the process that retrieved them (e.g., a similarity score against a query). Here we demonstrate how to add retrieval scores to the `.metadata` of documents: 1. From [vectorstore retrievers](/v0.2/docs/how_to/vectorstore_retriever/); 2. From higher-order LangChain retrievers, such as [SelfQueryRetriever](/v0.2/docs/how_to/self_query/) or [MultiVectorRetriever](/v0.2/docs/how_to/multi_vector/). For (1), we will implement a short wrapper function around the corresponding vector store. For (2), we will update a method of the corresponding class. Create vector store[​](#create-vector-store "Direct link to Create vector store") --------------------------------------------------------------------------------- First we populate a vector store with some data. We will use a [PineconeVectorStore](https://api.python.langchain.com/en/latest/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html), but this guide is compatible with any LangChain vector store that implements a `.similarity_search_with_score` method. from langchain_core.documents import Documentfrom langchain_openai import OpenAIEmbeddingsfrom langchain_pinecone import PineconeVectorStoredocs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "director": "Andrei Tarkovsky", "genre": "thriller", "rating": 9.9, }, ),]vectorstore = PineconeVectorStore.from_documents( docs, index_name="sample", embedding=OpenAIEmbeddings()) **API Reference:**[Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [PineconeVectorStore](https://api.python.langchain.com/en/latest/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html) Retriever[​](#retriever "Direct link to Retriever") --------------------------------------------------- To obtain scores from a vector store retriever, we wrap the underlying vector store's `.similarity_search_with_score` method in a short function that packages scores into the associated document's metadata. 
We add a `@chain` decorator to the function to create a [Runnable](/v0.2/docs/concepts/#langchain-expression-language) that can be used similarly to a typical retriever. from typing import Listfrom langchain_core.documents import Documentfrom langchain_core.runnables import chain@chaindef retriever(query: str) -> List[Document]: docs, scores = zip(*vectorstore.similarity_search_with_score(query)) for doc, score in zip(docs, scores): doc.metadata["score"] = score return docs **API Reference:**[Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) | [chain](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.chain.html) result = retriever.invoke("dinosaur")result (Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993.0, 'score': 0.84429127}), Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0, 'score': 0.792038262}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': 'thriller', 'rating': 9.9, 'year': 1979.0, 'score': 0.751571238}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0, 'score': 0.747471571})) Note that similarity scores from the retrieval step are included in the metadata of the above documents. SelfQueryRetriever[​](#selfqueryretriever "Direct link to SelfQueryRetriever") ------------------------------------------------------------------------------ `SelfQueryRetriever` will use a LLM to generate a query that is potentially structured-- for example, it can construct filters for the retrieval on top of the usual semantic-similarity driven selection. See [this guide](/v0.2/docs/how_to/self_query/) for more detail. `SelfQueryRetriever` includes a short (1 - 2 line) method `_get_docs_with_query` that executes the `vectorstore` search. We can subclass `SelfQueryRetriever` and override this method to propagate similarity scores. First, following the [how-to guide](/v0.2/docs/how_to/self_query/), we will need to establish some metadata on which to filter: from langchain.chains.query_constructor.base import AttributeInfofrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain_openai import ChatOpenAImetadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie. 
One of ['science fiction', 'comedy', 'drama', 'thriller', 'romance', 'action', 'animated']", type="string", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = ChatOpenAI(temperature=0) **API Reference:**[AttributeInfo](https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.schema.AttributeInfo.html) | [SelfQueryRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.base.SelfQueryRetriever.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) We then override the `_get_docs_with_query` to use the `similarity_search_with_score` method of the underlying vector store: from typing import Any, Dictclass CustomSelfQueryRetriever(SelfQueryRetriever): def _get_docs_with_query( self, query: str, search_kwargs: Dict[str, Any] ) -> List[Document]: """Get docs, adding score information.""" docs, scores = zip( *vectorstore.similarity_search_with_score(query, **search_kwargs) ) for doc, score in zip(docs, scores): doc.metadata["score"] = score return docs Invoking this retriever will now include similarity scores in the document metadata. Note that the underlying structured-query capabilities of `SelfQueryRetriever` are retained. retriever = CustomSelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info,)result = retriever.invoke("dinosaur movie with rating less than 8")result (Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993.0, 'score': 0.84429127}),) MultiVectorRetriever[​](#multivectorretriever "Direct link to MultiVectorRetriever") ------------------------------------------------------------------------------------ `MultiVectorRetriever` allows you to associate multiple vectors with a single document. This can be useful in a number of applications. For example, we can index small chunks of a larger document and run the retrieval on the chunks, but return the larger "parent" document when invoking the retriever. [ParentDocumentRetriever](/v0.2/docs/how_to/parent_document_retriever/), a subclass of `MultiVectorRetriever`, includes convenience methods for populating a vector store to support this. Further applications are detailed in this [how-to guide](/v0.2/docs/how_to/multi_vector/). To propagate similarity scores through this retriever, we can again subclass `MultiVectorRetriever` and override a method. This time we will override `_get_relevant_documents`. First, we prepare some fake data. We generate fake "whole documents" and store them in a document store; here we will use a simple [InMemoryStore](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.InMemoryBaseStore.html). 
from langchain.storage import InMemoryStorefrom langchain_text_splitters import RecursiveCharacterTextSplitter# The storage layer for the parent documentsdocstore = InMemoryStore()fake_whole_documents = [ ("fake_id_1", Document(page_content="fake whole document 1")), ("fake_id_2", Document(page_content="fake whole document 2")),]docstore.mset(fake_whole_documents) **API Reference:**[InMemoryStore](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.InMemoryStore.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html) Next we will add some fake "sub-documents" to our vector store. We can link these sub-documents to the parent documents by populating the `"doc_id"` key in its metadata. docs = [ Document( page_content="A snippet from a larger document discussing cats.", metadata={"doc_id": "fake_id_1"}, ), Document( page_content="A snippet from a larger document discussing discourse.", metadata={"doc_id": "fake_id_1"}, ), Document( page_content="A snippet from a larger document discussing chocolate.", metadata={"doc_id": "fake_id_2"}, ),]vectorstore.add_documents(docs) ['62a85353-41ff-4346-bff7-be6c8ec2ed89', '5d4a0e83-4cc5-40f1-bc73-ed9cbad0ee15', '8c1d9a56-120f-45e4-ba70-a19cd19a38f4'] To propagate the scores, we subclass `MultiVectorRetriever` and override its `_get_relevant_documents` method. Here we will make two changes: 1. We will add similarity scores to the metadata of the corresponding "sub-documents" using the `similarity_search_with_score` method of the underlying vector store as above; 2. We will include a list of these sub-documents in the metadata of the retrieved parent document. This surfaces what snippets of text were identified by the retrieval, together with their corresponding similarity scores. from collections import defaultdictfrom langchain.retrievers import MultiVectorRetrieverfrom langchain_core.callbacks import CallbackManagerForRetrieverRunclass CustomMultiVectorRetriever(MultiVectorRetriever): def _get_relevant_documents( self, query: str, *, run_manager: CallbackManagerForRetrieverRun ) -> List[Document]: """Get documents relevant to a query. Args: query: String to find relevant documents for run_manager: The callbacks handler to use Returns: List of relevant documents """ results = self.vectorstore.similarity_search_with_score( query, **self.search_kwargs ) # Map doc_ids to list of sub-documents, adding scores to metadata id_to_doc = defaultdict(list) for doc, score in results: doc_id = doc.metadata.get("doc_id") if doc_id: doc.metadata["score"] = score id_to_doc[doc_id].append(doc) # Fetch documents corresponding to doc_ids, retaining sub_docs in metadata docs = [] for _id, sub_docs in id_to_doc.items(): docstore_docs = self.docstore.mget([_id]) if docstore_docs: if doc := docstore_docs[0]: doc.metadata["sub_docs"] = sub_docs docs.append(doc) return docs **API Reference:**[MultiVectorRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.multi_vector.MultiVectorRetriever.html) | [CallbackManagerForRetrieverRun](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.CallbackManagerForRetrieverRun.html) Invoking this retriever, we can see that it identifies the correct parent document, including the relevant snippet from the sub-document with similarity score. 
retriever = CustomMultiVectorRetriever(vectorstore=vectorstore, docstore=docstore)retriever.invoke("cat") [Document(page_content='fake whole document 1', metadata={'sub_docs': [Document(page_content='A snippet from a larger document discussing cats.', metadata={'doc_id': 'fake_id_1', 'score': 0.831276655})]})] [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/add_scores_retriever.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to use the MultiQueryRetriever ](/v0.2/docs/how_to/MultiQueryRetriever/)[ Next Caching ](/v0.2/docs/how_to/caching_embeddings/) * [Create vector store](#create-vector-store) * [Retriever](#retriever) * [SelfQueryRetriever](#selfqueryretriever) * [MultiVectorRetriever](#multivectorretriever)
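One practical follow-on, sketched below under the assumption that scores were propagated into `metadata` exactly as in the examples above: because each parent document now carries its matching sub-documents (and their scores) under the `"sub_docs"` key, you can rank parents by their best snippet. The helper name `rank_parents_by_best_snippet` is hypothetical, and the sort direction assumes higher scores mean "more similar" (as with cosine similarity); flip it for distance-based stores where lower is better.

```python
from typing import List

from langchain_core.documents import Document


def rank_parents_by_best_snippet(docs: List[Document]) -> List[Document]:
    """Order parent documents by the best score among their sub-documents (a sketch)."""

    def best_score(doc: Document) -> float:
        # "sub_docs" and "score" are the metadata keys populated by the
        # CustomMultiVectorRetriever above.
        sub_docs = doc.metadata.get("sub_docs", [])
        return max((d.metadata.get("score", 0.0) for d in sub_docs), default=0.0)

    return sorted(docs, key=best_score, reverse=True)


ranked = rank_parents_by_best_snippet(retriever.invoke("cat"))
```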
null
https://python.langchain.com/v0.2/docs/how_to/custom_chat_model/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to create a custom chat model class On this page How to create a custom chat model class ======================================= Prerequisites This guide assumes familiarity with the following concepts: * [Chat models](/v0.2/docs/concepts/#chat-models) In this guide, we'll learn how to create a custom chat model using LangChain abstractions. Wrapping your LLM with the standard [`BaseChatModel`](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface allows you to use your LLM in existing LangChain programs with minimal code modifications! As a bonus, your LLM will automatically become a LangChain `Runnable` and will benefit from some optimizations out of the box (e.g., batch via a threadpool), async support, the `astream_events` API, etc. Inputs and outputs[](#inputs-and-outputs "Direct link to Inputs and outputs") ------------------------------------------------------------------------------ First, we need to talk about **messages**, which are the inputs and outputs of chat models. ### Messages[](#messages "Direct link to Messages") Chat models take messages as inputs and return a message as output. LangChain has a few [built-in message types](/v0.2/docs/concepts/#message-types):

| Message Type | Description |
| --- | --- |
| `SystemMessage` | Used for priming AI behavior, usually passed in as the first of a sequence of input messages. |
| `HumanMessage` | Represents a message from a person interacting with the chat model. |
| `AIMessage` | Represents a message from the chat model. This can be either text or a request to invoke a tool. |
| `FunctionMessage` / `ToolMessage` | Message for passing the results of tool invocation back to the model. |
| `AIMessageChunk` / `HumanMessageChunk` / ... | Chunk variant of each type of message. |

note `ToolMessage` and `FunctionMessage` closely follow OpenAI's `function` and `tool` roles. This is a rapidly developing field, and as more models add function calling capabilities, expect that there will be additions to this schema. from langchain_core.messages import ( AIMessage, BaseMessage, FunctionMessage, HumanMessage, SystemMessage, ToolMessage,) **API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [BaseMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.base.BaseMessage.html) | [FunctionMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.function.FunctionMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [SystemMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.system.SystemMessage.html) | [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html) ### Streaming Variant[](#streaming-variant "Direct link to Streaming Variant") All the chat messages have a streaming variant that contains `Chunk` in the name.
from langchain_core.messages import ( AIMessageChunk, FunctionMessageChunk, HumanMessageChunk, SystemMessageChunk, ToolMessageChunk,) **API Reference:**[AIMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html) | [FunctionMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.function.FunctionMessageChunk.html) | [HumanMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessageChunk.html) | [SystemMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.system.SystemMessageChunk.html) | [ToolMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessageChunk.html) These chunks are used when streaming output from chat models, and they all define an additive property! AIMessageChunk(content="Hello") + AIMessageChunk(content=" World!") AIMessageChunk(content='Hello World!') Base Chat Model[](#base-chat-model "Direct link to Base Chat Model") --------------------------------------------------------------------- Let's implement a chat model that echoes back the first `n` characters of the last message in the prompt! To do so, we will inherit from `BaseChatModel` and we'll need to implement the following:

| Method/Property | Description | Required/Optional |
| --- | --- | --- |
| `_generate` | Use to generate a chat result from a prompt | Required |
| `_llm_type` (property) | Used to uniquely identify the type of the model. Used for logging. | Required |
| `_identifying_params` (property) | Represent model parameterization for tracing purposes. | Optional |
| `_stream` | Use to implement streaming. | Optional |
| `_agenerate` | Use to implement a native async method. | Optional |
| `_astream` | Use to implement async version of `_stream`. | Optional |

tip The `_astream` implementation uses `run_in_executor` to launch the sync `_stream` in a separate thread if `_stream` is implemented, otherwise it falls back to using `_agenerate`. You can use this trick if you want to reuse the `_stream` implementation, but if you're able to implement code that's natively async that's a better solution since that code will run with less overhead. ### Implementation[](#implementation "Direct link to Implementation") from typing import Any, AsyncIterator, Dict, Iterator, List, Optionalfrom langchain_core.callbacks import ( AsyncCallbackManagerForLLMRun, CallbackManagerForLLMRun,)from langchain_core.language_models import BaseChatModel, SimpleChatModelfrom langchain_core.messages import AIMessageChunk, BaseMessage, HumanMessagefrom langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResultfrom langchain_core.runnables import run_in_executorclass CustomChatModelAdvanced(BaseChatModel): """A custom chat model that echoes the first `n` characters of the input. When contributing an implementation to LangChain, carefully document the model including the initialization parameters, include an example of how to initialize the model and include any relevant links to the underlying models documentation or API. Example: ..
code-block:: python model = CustomChatModel(n=2) result = model.invoke([HumanMessage(content="hello")]) result = model.batch([[HumanMessage(content="hello")], [HumanMessage(content="world")]]) """ model_name: str """The name of the model""" n: int """The number of characters from the last message of the prompt to be echoed.""" def _generate( self, messages: List[BaseMessage], stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any, ) -> ChatResult: """Override the _generate method to implement the chat model logic. This can be a call to an API, a call to a local model, or any other implementation that generates a response to the input prompt. Args: messages: the prompt composed of a list of messages. stop: a list of strings on which the model should stop generating. If generation stops due to a stop token, the stop token itself SHOULD BE INCLUDED as part of the output. This is not enforced across models right now, but it's a good practice to follow since it makes it much easier to parse the output of the model downstream and understand why generation stopped. run_manager: A run manager with callbacks for the LLM. """ # Replace this with actual logic to generate a response from a list # of messages. last_message = messages[-1] tokens = last_message.content[: self.n] message = AIMessage( content=tokens, additional_kwargs={}, # Used to add additional payload (e.g., function calling request) response_metadata={ # Use for response metadata "time_in_seconds": 3, }, ) ## generation = ChatGeneration(message=message) return ChatResult(generations=[generation]) def _stream( self, messages: List[BaseMessage], stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any, ) -> Iterator[ChatGenerationChunk]: """Stream the output of the model. This method should be implemented if the model can generate output in a streaming fashion. If the model does not support streaming, do not implement it. In that case streaming requests will be automatically handled by the _generate method. Args: messages: the prompt composed of a list of messages. stop: a list of strings on which the model should stop generating. If generation stops due to a stop token, the stop token itself SHOULD BE INCLUDED as part of the output. This is not enforced across models right now, but it's a good practice to follow since it makes it much easier to parse the output of the model downstream and understand why generation stopped. run_manager: A run manager with callbacks for the LLM. """ last_message = messages[-1] tokens = last_message.content[: self.n] for token in tokens: chunk = ChatGenerationChunk(message=AIMessageChunk(content=token)) if run_manager: # This is optional in newer versions of LangChain # The on_llm_new_token will be called automatically run_manager.on_llm_new_token(token, chunk=chunk) yield chunk # Let's add some other information (e.g., response metadata) chunk = ChatGenerationChunk( message=AIMessageChunk(content="", response_metadata={"time_in_sec": 3}) ) if run_manager: # This is optional in newer versions of LangChain # The on_llm_new_token will be called automatically run_manager.on_llm_new_token(token, chunk=chunk) yield chunk @property def _llm_type(self) -> str: """Get the type of language model used by this chat model.""" return "echoing-chat-model-advanced" @property def _identifying_params(self) -> Dict[str, Any]: """Return a dictionary of identifying parameters. 
This information is used by the LangChain callback system, which is used for tracing purposes make it possible to monitor LLMs. """ return { # The model name allows users to specify custom token counting # rules in LLM monitoring applications (e.g., in LangSmith users # can provide per token pricing for their model and monitor # costs for the given LLM.) "model_name": self.model_name, } **API Reference:**[AsyncCallbackManagerForLLMRun](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.AsyncCallbackManagerForLLMRun.html) | [CallbackManagerForLLMRun](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.CallbackManagerForLLMRun.html) | [BaseChatModel](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) | [SimpleChatModel](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.chat_models.SimpleChatModel.html) | [AIMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html) | [BaseMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.base.BaseMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [ChatGeneration](https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.chat_generation.ChatGeneration.html) | [ChatGenerationChunk](https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.chat_generation.ChatGenerationChunk.html) | [ChatResult](https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.chat_result.ChatResult.html) | [run\_in\_executor](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.config.run_in_executor.html) ### Let's test it πŸ§ͺ[​](#lets-test-it- "Direct link to Let's test it πŸ§ͺ") The chat model will implement the standard `Runnable` interface of LangChain which many of the LangChain abstractions support! model = CustomChatModelAdvanced(n=3, model_name="my_custom_model")model.invoke( [ HumanMessage(content="hello!"), AIMessage(content="Hi there human!"), HumanMessage(content="Meow!"), ]) AIMessage(content='Meo', response_metadata={'time_in_seconds': 3}, id='run-ddb42bd6-4fdd-4bd2-8be5-e11b67d3ac29-0') model.invoke("hello") AIMessage(content='hel', response_metadata={'time_in_seconds': 3}, id='run-4d3cc912-44aa-454b-977b-ca02be06c12e-0') model.batch(["hello", "goodbye"]) [AIMessage(content='hel', response_metadata={'time_in_seconds': 3}, id='run-9620e228-1912-4582-8aa1-176813afec49-0'), AIMessage(content='goo', response_metadata={'time_in_seconds': 3}, id='run-1ce8cdf8-6f75-448e-82f7-1bb4a121df93-0')] for chunk in model.stream("cat"): print(chunk.content, end="|") c|a|t|| Please see the implementation of `_astream` in the model! If you do not implement it, then no output will stream.! async for chunk in model.astream("cat"): print(chunk.content, end="|") c|a|t|| Let's try to use the astream events API which will also help double check that all the callbacks were implemented! 
async for event in model.astream_events("cat", version="v1"): print(event) {'event': 'on_chat_model_start', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'name': 'CustomChatModelAdvanced', 'tags': [], 'metadata': {}, 'data': {'input': 'cat'}}{'event': 'on_chat_model_stream', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='c', id='run-125a2a16-b9cd-40de-aa08-8aa9180b07d0')}}{'event': 'on_chat_model_stream', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='a', id='run-125a2a16-b9cd-40de-aa08-8aa9180b07d0')}}{'event': 'on_chat_model_stream', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='t', id='run-125a2a16-b9cd-40de-aa08-8aa9180b07d0')}}{'event': 'on_chat_model_stream', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='', response_metadata={'time_in_sec': 3}, id='run-125a2a16-b9cd-40de-aa08-8aa9180b07d0')}}{'event': 'on_chat_model_end', 'name': 'CustomChatModelAdvanced', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'tags': [], 'metadata': {}, 'data': {'output': AIMessageChunk(content='cat', response_metadata={'time_in_sec': 3}, id='run-125a2a16-b9cd-40de-aa08-8aa9180b07d0')}}``````output/home/eugene/src/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: This API is in beta and may change in the future. warn_beta( Contributing[​](#contributing "Direct link to Contributing") ------------------------------------------------------------ We appreciate all chat model integration contributions. Here's a checklist to help make sure your contribution gets added to LangChain: Documentation: * The model contains doc-strings for all initialization arguments, as these will be surfaced in the [APIReference](https://api.python.langchain.com/en/stable/langchain_api_reference.html). * The class doc-string for the model contains a link to the model API if the model is powered by a service. Tests: * Add unit or integration tests to the overridden methods. Verify that `invoke`, `ainvoke`, `batch`, `stream` work if you've over-ridden the corresponding code. Streaming (if you're implementing it): * Implement the \_stream method to get streaming working Stop Token Behavior: * Stop token should be respected * Stop token should be INCLUDED as part of the response Secret API Keys: * If your model connects to an API it will likely accept API keys as part of its initialization. Use Pydantic's `SecretStr` type for secrets, so they don't get accidentally printed out when folks print the model. Identifying Params: * Include a `model_name` in identifying params Optimizations: Consider providing native async support to reduce the overhead from the model! * Provided a native async of `_agenerate` (used by `ainvoke`) * Provided a native async of `_astream` (used by `astream`) Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned how to create your own custom chat models. 
Next, check out the other chat model how-to guides in this section, like [how to get a model to return structured output](/v0.2/docs/how_to/structured_output/) or [how to track chat model token usage](/v0.2/docs/how_to/chat_token_usage_tracking/). [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/custom_chat_model.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to create custom callback handlers ](/v0.2/docs/how_to/custom_callbacks/)[ Next How to create a custom LLM class ](/v0.2/docs/how_to/custom_llm/) * [Inputs and outputs](#inputs-and-outputs) * [Messages](#messages) * [Streaming Variant](#streaming-variant) * [Base Chat Model](#base-chat-model) * [Implementation](#implementation) * [Let's test it 🧪](#lets-test-it-) * [Contributing](#contributing) * [Next steps](#next-steps)
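As an addendum to the optimization items in the checklist above, here is a rough sketch of what native async methods could look like for the `CustomChatModelAdvanced` defined in this guide. The subclass name `CustomChatModelAsync` is hypothetical, and since the echoing logic is pure CPU work the bodies simply mirror the sync versions; the structure is what carries over to a model that `await`s a real async client.

```python
from typing import Any, AsyncIterator, List, Optional

from langchain_core.callbacks import AsyncCallbackManagerForLLMRun
from langchain_core.messages import AIMessage, AIMessageChunk, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult


class CustomChatModelAsync(CustomChatModelAdvanced):  # hypothetical subclass of the model above
    """Echoing chat model with native async methods (a sketch)."""

    async def _agenerate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        # A real implementation would `await` an async client here; this one
        # reuses the echoing logic so the example stays self-contained.
        last_message = messages[-1]
        message = AIMessage(content=last_message.content[: self.n])
        return ChatResult(generations=[ChatGeneration(message=message)])

    async def _astream(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> AsyncIterator[ChatGenerationChunk]:
        for token in messages[-1].content[: self.n]:
            chunk = ChatGenerationChunk(message=AIMessageChunk(content=token))
            if run_manager:
                await run_manager.on_llm_new_token(token, chunk=chunk)
            yield chunk
```

With these in place, `ainvoke` and `astream` use the native coroutines directly instead of delegating to the sync code via `run_in_executor`.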
null
https://python.langchain.com/v0.2/docs/how_to/caching_embeddings/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * Caching On this page Caching ======= Embeddings can be stored or temporarily cached to avoid needing to recompute them. Caching embeddings can be done using a `CacheBackedEmbeddings`. The cache backed embedder is a wrapper around an embedder that caches embeddings in a key-value store. The text is hashed and the hash is used as the key in the cache. The main supported way to initialize a `CacheBackedEmbeddings` is `from_bytes_store`. It takes the following parameters: * underlying\_embedder: The embedder to use for embedding. * document\_embedding\_cache: Any [`ByteStore`](/v0.2/docs/integrations/stores/) for caching document embeddings. * batch\_size: (optional, defaults to `None`) The number of documents to embed between store updates. * namespace: (optional, defaults to `""`) The namespace to use for document cache. This namespace is used to avoid collisions with other caches. For example, set it to the name of the embedding model used. * query\_embedding\_cache: (optional, defaults to `None` or not caching) A [`ByteStore`](/v0.2/docs/integrations/stores/) for caching query embeddings, or `True` to use the same store as `document_embedding_cache`. **Attention**: * Be sure to set the `namespace` parameter to avoid collisions of the same text embedded using different embedding models. * `CacheBackedEmbeddings` does not cache query embeddings by default. To enable query caching, one needs to specify a `query_embedding_cache`. from langchain.embeddings import CacheBackedEmbeddings **API Reference:**[CacheBackedEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.cache.CacheBackedEmbeddings.html) Using with a Vector Store[](#using-with-a-vector-store "Direct link to Using with a Vector Store") --------------------------------------------------------------------------------------------------- First, let's see an example that uses the local file system for storing embeddings and uses the FAISS vector store for retrieval. %pip install --upgrade --quiet langchain-openai faiss-cpu from langchain.storage import LocalFileStorefrom langchain_community.document_loaders import TextLoaderfrom langchain_community.vectorstores import FAISSfrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import CharacterTextSplitterunderlying_embeddings = OpenAIEmbeddings()store = LocalFileStore("./cache/")cached_embedder = CacheBackedEmbeddings.from_bytes_store( underlying_embeddings, store, namespace=underlying_embeddings.model) **API Reference:**[LocalFileStore](https://api.python.langchain.com/en/latest/storage/langchain.storage.file_system.LocalFileStore.html) | [TextLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.text.TextLoader.html) | [FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [CharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.CharacterTextSplitter.html) The cache is empty prior to embedding: list(store.yield_keys()) [] Load the document, split it into chunks, embed each chunk and load it into the vector store.
raw_documents = TextLoader("state_of_the_union.txt").load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)documents = text_splitter.split_documents(raw_documents) Create the vector store: %%timedb = FAISS.from_documents(documents, cached_embedder) CPU times: user 218 ms, sys: 29.7 ms, total: 248 msWall time: 1.02 s If we try to create the vector store again, it'll be much faster since it does not need to re-compute any embeddings. %%timedb2 = FAISS.from_documents(documents, cached_embedder) CPU times: user 15.7 ms, sys: 2.22 ms, total: 18 msWall time: 17.2 ms And here are some of the embeddings that got created: list(store.yield_keys())[:5] ['text-embedding-ada-00217a6727d-8916-54eb-b196-ec9c9d6ca472', 'text-embedding-ada-0025fc0d904-bd80-52da-95c9-441015bfb438', 'text-embedding-ada-002e4ad20ef-dfaa-5916-9459-f90c6d8e8159', 'text-embedding-ada-002ed199159-c1cd-5597-9757-f80498e8f17b', 'text-embedding-ada-0021297d37a-2bc1-5e19-bf13-6c950f075062'] Swapping the `ByteStore` ======================== In order to use a different `ByteStore`, just use it when creating your `CacheBackedEmbeddings`. Below, we create an equivalent cached embeddings object, except using the non-persistent `InMemoryByteStore` instead: from langchain.embeddings import CacheBackedEmbeddingsfrom langchain.storage import InMemoryByteStorestore = InMemoryByteStore()cached_embedder = CacheBackedEmbeddings.from_bytes_store( underlying_embeddings, store, namespace=underlying_embeddings.model) **API Reference:**[CacheBackedEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.cache.CacheBackedEmbeddings.html) | [InMemoryByteStore](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.InMemoryByteStore.html) [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/caching_embeddings.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to add scores to retriever results ](/v0.2/docs/how_to/add_scores_retriever/)[ Next How to use callbacks in async environments ](/v0.2/docs/how_to/callbacks_async/) * [Using with a Vector Store](#using-with-a-vector-store)
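A small addendum to the guide above: query embeddings are not cached by default, but the `query_embedding_cache` parameter described earlier can turn that on. The sketch below assumes you are happy reusing the same store for queries by passing `query_embedding_cache=True`; you could equally pass a separate `ByteStore`.

```python
from langchain.embeddings import CacheBackedEmbeddings
from langchain.storage import InMemoryByteStore
from langchain_openai import OpenAIEmbeddings

underlying_embeddings = OpenAIEmbeddings()
store = InMemoryByteStore()

# query_embedding_cache=True reuses the same store for query embeddings,
# which are not cached by default.
cached_embedder = CacheBackedEmbeddings.from_bytes_store(
    underlying_embeddings,
    store,
    namespace=underlying_embeddings.model,
    query_embedding_cache=True,
)

# The second identical call can be served from the cache instead of the API.
cached_embedder.embed_query("What did the president say?")
cached_embedder.embed_query("What did the president say?")
```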
null
https://python.langchain.com/v0.2/docs/how_to/debugging/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to debug your LLM apps On this page How to debug your LLM apps ========================== Like building any type of software, at some point you'll need to debug when building with LLMs. A model call will fail, or model output will be misformatted, or there will be some nested model calls and it won't be clear where along the way an incorrect output was created. There are three main methods for debugging: * Verbose Mode: This adds print statements for "important" events in your chain. * Debug Mode: This add logging statements for ALL events in your chain. * LangSmith Tracing: This logs events to [LangSmith](https://docs.smith.langchain.com/) to allow for visualization there. Verbose Mode Debug Mode LangSmith Tracing Free βœ… βœ… βœ… UI ❌ ❌ βœ… Persisted ❌ ❌ βœ… See all events ❌ βœ… βœ… See "important" events βœ… ❌ βœ… Runs Locally βœ… βœ… ❌ Tracing[​](#tracing "Direct link to Tracing") --------------------------------------------- Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com). After you sign up at the link above, make sure to set your environment variables to start logging traces: export LANGCHAIN_TRACING_V2="true"export LANGCHAIN_API_KEY="..." Or, if in a notebook, you can set them with: import getpassimport osos.environ["LANGCHAIN_TRACING_V2"] = "true"os.environ["LANGCHAIN_API_KEY"] = getpass.getpass() Let's suppose we have an agent, and want to visualize the actions it takes and tool outputs it receives. Without any debugging, here's what we see: * OpenAI * Anthropic * Azure * Google * Cohere * FireworksAI * Groq * MistralAI * TogetherAI pip install -qU langchain-openai import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125") pip install -qU langchain-anthropic import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229") pip install -qU langchain-openai import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],) pip install -qU langchain-google-vertexai import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro") pip install -qU langchain-cohere import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r") pip install -qU langchain-fireworks import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct") pip install -qU langchain-groq import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192") pip install -qU langchain-mistralai import getpassimport osos.environ["MISTRAL_API_KEY"] = 
getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest") pip install -qU langchain-openai import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",) from langchain.agents import AgentExecutor, create_tool_calling_agentfrom langchain_community.tools.tavily_search import TavilySearchResultsfrom langchain_core.prompts import ChatPromptTemplatetools = [TavilySearchResults(max_results=1)]prompt = ChatPromptTemplate.from_messages( [ ( "system", "You are a helpful assistant.", ), ("placeholder", "{chat_history}"), ("human", "{input}"), ("placeholder", "{agent_scratchpad}"), ])# Construct the Tools agentagent = create_tool_calling_agent(llm, tools, prompt)# Create an agent executor by passing in the agent and toolsagent_executor = AgentExecutor(agent=agent, tools=tools)agent_executor.invoke( {"input": "Who directed the 2023 film Oppenheimer and what is their age in days?"}) **API Reference:**[AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html) | [create\_tool\_calling\_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html) | [TavilySearchResults](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.tavily_search.tool.TavilySearchResults.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) {'input': 'Who directed the 2023 film Oppenheimer and what is their age in days?', 'output': 'The 2023 film "Oppenheimer" was directed by Christopher Nolan.\n\nTo calculate Christopher Nolan\'s age in days, we first need his birthdate, which is July 30, 1970. Let\'s calculate his age in days from his birthdate to today\'s date, December 7, 2023.\n\n1. Calculate the total number of days from July 30, 1970, to December 7, 2023.\n2. Nolan was born on July 30, 1970. From July 30, 1970, to July 30, 2023, is 53 years.\n3. From July 30, 2023, to December 7, 2023, is 130 days.\n\nNow, calculate the total days:\n- 53 years = 53 x 365 = 19,345 days\n- Adding leap years from 1970 to 2023: There are 13 leap years (1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016, 2020). So, add 13 days.\n- Total days from years and leap years = 19,345 + 13 = 19,358 days\n- Add the days from July 30, 2023, to December 7, 2023 = 130 days\n\nTotal age in days = 19,358 + 130 = 19,488 days\n\nChristopher Nolan is 19,488 days old as of December 7, 2023.'} We don't get much output, but since we set up LangSmith we can easily see what happened under the hood: [https://smith.langchain.com/public/a89ff88f-9ddc-4757-a395-3a1b365655bf/r](https://smith.langchain.com/public/a89ff88f-9ddc-4757-a395-3a1b365655bf/r) `set_debug` and `set_verbose`[​](#set_debug-and-set_verbose "Direct link to set_debug-and-set_verbose") ------------------------------------------------------------------------------------------------------- If you're prototyping in Jupyter Notebooks or running Python scripts, it can be helpful to print out the intermediate steps of a chain run. There are a number of ways to enable printing at varying degrees of verbosity. 
Note: These still work even with LangSmith enabled, so you can have both turned on and running at the same time ### `set_verbose(True)`[​](#set_verbosetrue "Direct link to set_verbosetrue") Setting the `verbose` flag will print out inputs and outputs in a slightly more readable format and will skip logging certain raw outputs (like the token usage stats for an LLM call) so that you can focus on application logic. from langchain.globals import set_verboseset_verbose(True)agent_executor = AgentExecutor(agent=agent, tools=tools)agent_executor.invoke( {"input": "Who directed the 2023 film Oppenheimer and what is their age in days?"}) **API Reference:**[set\_verbose](https://api.python.langchain.com/en/latest/globals/langchain.globals.set_verbose.html) > Entering new AgentExecutor chain...Invoking: `tavily_search_results_json` with `{'query': 'director of the 2023 film Oppenheimer'}`[{'url': 'https://m.imdb.com/title/tt15398776/', 'content': 'Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.'}]Invoking: `tavily_search_results_json` with `{'query': 'birth date of Christopher Nolan'}`[{'url': 'https://m.imdb.com/name/nm0634240/bio/', 'content': 'Christopher Nolan. Writer: Tenet. Best known for his cerebral, often nonlinear, storytelling, acclaimed Academy Award winner writer/director/producer Sir Christopher Nolan CBE was born in London, England. Over the course of more than 25 years of filmmaking, Nolan has gone from low-budget independent films to working on some of the biggest blockbusters ever made and became one of the most ...'}]Invoking: `tavily_search_results_json` with `{'query': 'Christopher Nolan birth date'}`responded: The 2023 film **Oppenheimer** was directed by **Christopher Nolan**.To calculate Christopher Nolan's age in days, I need his exact birth date. Let me find that information for you.[{'url': 'https://m.imdb.com/name/nm0634240/bio/', 'content': 'Christopher Nolan. Writer: Tenet. Best known for his cerebral, often nonlinear, storytelling, acclaimed Academy Award winner writer/director/producer Sir Christopher Nolan CBE was born in London, England. Over the course of more than 25 years of filmmaking, Nolan has gone from low-budget independent films to working on some of the biggest blockbusters ever made and became one of the most ...'}]Invoking: `tavily_search_results_json` with `{'query': 'Christopher Nolan date of birth'}`responded: It appears that I need to refine my search to get the exact birth date of Christopher Nolan. Let me try again to find that specific information.[{'url': 'https://m.imdb.com/name/nm0634240/bio/', 'content': 'Christopher Nolan. Writer: Tenet. Best known for his cerebral, often nonlinear, storytelling, acclaimed Academy Award winner writer/director/producer Sir Christopher Nolan CBE was born in London, England. Over the course of more than 25 years of filmmaking, Nolan has gone from low-budget independent films to working on some of the biggest blockbusters ever made and became one of the most ...'}]I am currently unable to retrieve the exact birth date of Christopher Nolan from the sources available. However, it is widely known that he was born on July 30, 1970. 
Using this date, I can calculate his age in days as of today.Let's calculate:- Christopher Nolan's birth date: July 30, 1970.- Today's date: December 7, 2023.The number of days between these two dates can be calculated as follows:1. From July 30, 1970, to July 30, 2023, is 53 years.2. From July 30, 2023, to December 7, 2023, is 130 days.Calculating the total days for 53 years (considering leap years):- 53 years Γ— 365 days/year = 19,345 days- Adding leap years (1972, 1976, ..., 2020, 2024 - 13 leap years): 13 daysTotal days from birth until July 30, 2023: 19,345 + 13 = 19,358 daysAdding the days from July 30, 2023, to December 7, 2023: 130 daysTotal age in days as of December 7, 2023: 19,358 + 130 = 19,488 days.Therefore, Christopher Nolan is 19,488 days old as of December 7, 2023.> Finished chain. {'input': 'Who directed the 2023 film Oppenheimer and what is their age in days?', 'output': "I am currently unable to retrieve the exact birth date of Christopher Nolan from the sources available. However, it is widely known that he was born on July 30, 1970. Using this date, I can calculate his age in days as of today.\n\nLet's calculate:\n\n- Christopher Nolan's birth date: July 30, 1970.\n- Today's date: December 7, 2023.\n\nThe number of days between these two dates can be calculated as follows:\n\n1. From July 30, 1970, to July 30, 2023, is 53 years.\n2. From July 30, 2023, to December 7, 2023, is 130 days.\n\nCalculating the total days for 53 years (considering leap years):\n- 53 years Γ— 365 days/year = 19,345 days\n- Adding leap years (1972, 1976, ..., 2020, 2024 - 13 leap years): 13 days\n\nTotal days from birth until July 30, 2023: 19,345 + 13 = 19,358 days\nAdding the days from July 30, 2023, to December 7, 2023: 130 days\n\nTotal age in days as of December 7, 2023: 19,358 + 130 = 19,488 days.\n\nTherefore, Christopher Nolan is 19,488 days old as of December 7, 2023."} ### `set_debug(True)`[​](#set_debugtrue "Direct link to set_debugtrue") Setting the global `debug` flag will cause all LangChain components with callback support (chains, models, agents, tools, retrievers) to print the inputs they receive and outputs they generate. This is the most verbose setting and will fully log raw inputs and outputs. 
from langchain.globals import set_debugset_debug(True)set_verbose(False)agent_executor = AgentExecutor(agent=agent, tools=tools)agent_executor.invoke( {"input": "Who directed the 2023 film Oppenheimer and what is their age in days?"}) **API Reference:**[set\_debug](https://api.python.langchain.com/en/latest/globals/langchain.globals.set_debug.html) [chain/start] [1:chain:AgentExecutor] Entering Chain run with input:{ "input": "Who directed the 2023 film Oppenheimer and what is their age in days?"}[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence] Entering Chain run with input:{ "input": ""}[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad>] Entering Chain run with input:{ "input": ""}[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad> > 4:chain:RunnableParallel<agent_scratchpad>] Entering Chain run with input:{ "input": ""}[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad> > 4:chain:RunnableParallel<agent_scratchpad> > 5:chain:RunnableLambda] Entering Chain run with input:{ "input": ""}[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad> > 4:chain:RunnableParallel<agent_scratchpad> > 5:chain:RunnableLambda] [1ms] Exiting Chain run with output:{ "output": []}[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad> > 4:chain:RunnableParallel<agent_scratchpad>] [2ms] Exiting Chain run with output:{ "agent_scratchpad": []}[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad>] [5ms] Exiting Chain run with output:{ "input": "Who directed the 2023 film Oppenheimer and what is their age in days?", "intermediate_steps": [], "agent_scratchpad": []}[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 6:prompt:ChatPromptTemplate] Entering Prompt run with input:{ "input": "Who directed the 2023 film Oppenheimer and what is their age in days?", "intermediate_steps": [], "agent_scratchpad": []}[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 6:prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:[outputs][llm/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 7:llm:ChatOpenAI] Entering LLM run with input:{ "prompts": [ "System: You are a helpful assistant.\nHuman: Who directed the 2023 film Oppenheimer and what is their age in days?" 
]}[llm/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 7:llm:ChatOpenAI] [3.17s] Exiting LLM run with output:{ "generations": [ [ { "text": "", "generation_info": { "finish_reason": "tool_calls" }, "type": "ChatGenerationChunk", "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessageChunk" ], "kwargs": { "content": "", "example": false, "additional_kwargs": { "tool_calls": [ { "index": 0, "id": "call_fnfq6GjSQED4iF6lo4rxkUup", "function": { "arguments": "{\"query\": \"director of the 2023 film Oppenheimer\"}", "name": "tavily_search_results_json" }, "type": "function" }, { "index": 1, "id": "call_mwhVi6pk49f4OIo5rOWrr4TD", "function": { "arguments": "{\"query\": \"birth date of Christopher Nolan\"}", "name": "tavily_search_results_json" }, "type": "function" } ] }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"query\": \"director of the 2023 film Oppenheimer\"}", "id": "call_fnfq6GjSQED4iF6lo4rxkUup", "index": 0 }, { "name": "tavily_search_results_json", "args": "{\"query\": \"birth date of Christopher Nolan\"}", "id": "call_mwhVi6pk49f4OIo5rOWrr4TD", "index": 1 } ], "response_metadata": { "finish_reason": "tool_calls" }, "id": "run-6e160323-15f9-491d-aadf-b5d337e9e2a1", "tool_calls": [ { "name": "tavily_search_results_json", "args": { "query": "director of the 2023 film Oppenheimer" }, "id": "call_fnfq6GjSQED4iF6lo4rxkUup" }, { "name": "tavily_search_results_json", "args": { "query": "birth date of Christopher Nolan" }, "id": "call_mwhVi6pk49f4OIo5rOWrr4TD" } ], "invalid_tool_calls": [] } } } ] ], "llm_output": null, "run": null}[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 8:parser:ToolsAgentOutputParser] Entering Parser run with input:[inputs][chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 8:parser:ToolsAgentOutputParser] [1ms] Exiting Parser run with output:[outputs][chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence] [3.18s] Exiting Chain run with output:[outputs][tool/start] [1:chain:AgentExecutor > 9:tool:tavily_search_results_json] Entering Tool run with input:"{'query': 'director of the 2023 film Oppenheimer'}"``````outputError in ConsoleCallbackHandler.on_tool_end callback: AttributeError("'list' object has no attribute 'strip'")``````output[tool/start] [1:chain:AgentExecutor > 10:tool:tavily_search_results_json] Entering Tool run with input:"{'query': 'birth date of Christopher Nolan'}"``````outputError in ConsoleCallbackHandler.on_tool_end callback: AttributeError("'list' object has no attribute 'strip'")``````output[chain/start] [1:chain:AgentExecutor > 11:chain:RunnableSequence] Entering Chain run with input:{ "input": ""}[chain/start] [1:chain:AgentExecutor > 11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad>] Entering Chain run with input:{ "input": ""}[chain/start] [1:chain:AgentExecutor > 11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad> > 13:chain:RunnableParallel<agent_scratchpad>] Entering Chain run with input:{ "input": ""}[chain/start] [1:chain:AgentExecutor > 11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad> > 13:chain:RunnableParallel<agent_scratchpad> > 14:chain:RunnableLambda] Entering Chain run with input:{ "input": ""}[chain/end] [1:chain:AgentExecutor > 11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad> > 13:chain:RunnableParallel<agent_scratchpad> > 14:chain:RunnableLambda] [1ms] Exiting Chain run with output:[outputs][chain/end] [1:chain:AgentExecutor > 
11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad> > 13:chain:RunnableParallel<agent_scratchpad>] [4ms] Exiting Chain run with output:[outputs][chain/end] [1:chain:AgentExecutor > 11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad>] [8ms] Exiting Chain run with output:[outputs][chain/start] [1:chain:AgentExecutor > 11:chain:RunnableSequence > 15:prompt:ChatPromptTemplate] Entering Prompt run with input:[inputs][chain/end] [1:chain:AgentExecutor > 11:chain:RunnableSequence > 15:prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:[outputs][llm/start] [1:chain:AgentExecutor > 11:chain:RunnableSequence > 16:llm:ChatOpenAI] Entering LLM run with input:{ "prompts": [ "System: You are a helpful assistant.\nHuman: Who directed the 2023 film Oppenheimer and what is their age in days?\nAI: \nTool: [{\"url\": \"https://m.imdb.com/title/tt15398776/fullcredits/\", \"content\": \"Oppenheimer (2023) cast and crew credits, including actors, actresses, directors, writers and more. Menu. ... director of photography: behind-the-scenes Jason Gary ... best boy grip ... film loader Luc Poullain ... aerial coordinator\"}]\nTool: [{\"url\": \"https://en.wikipedia.org/wiki/Christopher_Nolan\", \"content\": \"In early 2003, Nolan approached Warner Bros. with the idea of making a new Batman film, based on the character's origin story.[58] Nolan was fascinated by the notion of grounding it in a more realistic world than a comic-book fantasy.[59] He relied heavily on traditional stunts and miniature effects during filming, with minimal use of computer-generated imagery (CGI).[60] Batman Begins (2005), the biggest project Nolan had undertaken to that point,[61] was released to critical acclaim and commercial success.[62][63] Starring Christian Bale as Bruce Wayne / Batmanβ€”along with Michael Caine, Gary Oldman, Morgan Freeman and Liam Neesonβ€”Batman Begins revived the franchise.[64][65] Batman Begins was 2005's ninth-highest-grossing film and was praised for its psychological depth and contemporary relevance;[63][66] it is cited as one of the most influential films of the 2000s.[67] Film author Ian Nathan wrote that within five years of his career, Nolan \\\"[went] from unknown to indie darling to gaining creative control over one of the biggest properties in Hollywood, and (perhaps unwittingly) fomenting the genre that would redefine the entire industry\\\".[68]\\nNolan directed, co-wrote and produced The Prestige (2006), an adaptation of the Christopher Priest novel about two rival 19th-century magicians.[69] He directed, wrote and edited the short film Larceny (1996),[19] which was filmed over a weekend in black and white with limited equipment and a small cast and crew.[12][20] Funded by Nolan and shot with the UCL Union Film society's equipment, it appeared at the Cambridge Film Festival in 1996 and is considered one of UCL's best shorts.[21] For unknown reasons, the film has since been removed from public view.[19] Nolan filmed a third short, Doodlebug (1997), about a man seemingly chasing an insect with his shoe, only to discover that it is a miniature of himself.[14][22] Nolan and Thomas first attempted to make a feature in the mid-1990s with Larry Mahoney, which they scrapped.[23] During this period in his career, Nolan had little to no success getting his projects off the ground, facing several rejections; he added, \\\"[T]here's a very limited pool of finance in the UK. 
Philosophy professor David Kyle Johnson wrote that \\\"Inception became a classic almost as soon as it was projected on silver screens\\\", praising its exploration of philosophical ideas, including leap of faith and allegory of the cave.[97] The film grossed over $836Β million worldwide.[98] Nominated for eight Academy Awardsβ€”including Best Picture and Best Original Screenplayβ€”it won Best Cinematography, Best Sound Mixing, Best Sound Editing and Best Visual Effects.[99] Nolan was nominated for a BAFTA Award and a Golden Globe Award for Best Director, among other accolades.[40]\\nAround the release of The Dark Knight Rises (2012), Nolan's third and final Batman film, Joseph Bevan of the British Film Institute wrote a profile on him: \\\"In the space of just over a decade, Christopher Nolan has shot from promising British indie director to undisputed master of a new brand of intelligent escapism. He further wrote that Nolan's body of work reflect \\\"a heterogeneity of conditions of products\\\" extending from low-budget films to lucrative blockbusters, \\\"a wide range of genres and settings\\\" and \\\"a diversity of styles that trumpet his versatility\\\".[193]\\nDavid Bordwell, a film theorist, wrote that Nolan has been able to blend his \\\"experimental impulses\\\" with the demands of mainstream entertainment, describing his oeuvre as \\\"experiments with cinematic time by means of techniques of subjective viewpoint and crosscutting\\\".[194] Nolan's use of practical, in-camera effects, miniatures and models, as well as shooting on celluloid film, has been highly influential in early 21st century cinema.[195][196] IndieWire wrote in 2019 that, Nolan \\\"kept a viable alternate model of big-budget filmmaking alive\\\", in an era where blockbuster filmmaking has become \\\"a largely computer-generated art form\\\".[196] Initially reluctant to make a sequel, he agreed after Warner Bros. repeatedly insisted.[78] Nolan wanted to expand on the noir quality of the first film by broadening the canvas and taking on \\\"the dynamic of a story of the city, a large crime storyΒ ... where you're looking at the police, the justice system, the vigilante, the poor people, the rich people, the criminals\\\".[79] Continuing to minimalise the use of CGI, Nolan employed high-resolution IMAX cameras, making it the first major motion picture to use this technology.[80][81]\"}]" ]}[llm/end] [1:chain:AgentExecutor > 11:chain:RunnableSequence > 16:llm:ChatOpenAI] [20.22s] Exiting LLM run with output:{ "generations": [ [ { "text": "The 2023 film \"Oppenheimer\" was directed by Christopher Nolan.\n\nTo calculate Christopher Nolan's age in days, we first need his birth date, which is July 30, 1970. Let's calculate his age in days from his birth date to today's date, December 7, 2023.\n\n1. Calculate the total number of days from July 30, 1970, to December 7, 2023.\n2. Christopher Nolan was born on July 30, 1970. From July 30, 1970, to July 30, 2023, is 53 years.\n3. From July 30, 2023, to December 7, 2023, is 130 days.\n\nNow, calculate the total days for 53 years:\n- Each year has 365 days, so 53 years Γ— 365 days/year = 19,345 days.\n- Adding the leap years from 1970 to 2023: 1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016, 2020, and 2024 (up to February). 
This gives us 14 leap years.\n- Total days from leap years: 14 days.\n\nAdding all together:\n- Total days = 19,345 days (from years) + 14 days (from leap years) + 130 days (from July 30, 2023, to December 7, 2023) = 19,489 days.\n\nTherefore, as of December 7, 2023, Christopher Nolan is 19,489 days old.", "generation_info": { "finish_reason": "stop" }, "type": "ChatGenerationChunk", "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessageChunk" ], "kwargs": { "content": "The 2023 film \"Oppenheimer\" was directed by Christopher Nolan.\n\nTo calculate Christopher Nolan's age in days, we first need his birth date, which is July 30, 1970. Let's calculate his age in days from his birth date to today's date, December 7, 2023.\n\n1. Calculate the total number of days from July 30, 1970, to December 7, 2023.\n2. Christopher Nolan was born on July 30, 1970. From July 30, 1970, to July 30, 2023, is 53 years.\n3. From July 30, 2023, to December 7, 2023, is 130 days.\n\nNow, calculate the total days for 53 years:\n- Each year has 365 days, so 53 years Γ— 365 days/year = 19,345 days.\n- Adding the leap years from 1970 to 2023: 1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016, 2020, and 2024 (up to February). This gives us 14 leap years.\n- Total days from leap years: 14 days.\n\nAdding all together:\n- Total days = 19,345 days (from years) + 14 days (from leap years) + 130 days (from July 30, 2023, to December 7, 2023) = 19,489 days.\n\nTherefore, as of December 7, 2023, Christopher Nolan is 19,489 days old.", "example": false, "additional_kwargs": {}, "tool_call_chunks": [], "response_metadata": { "finish_reason": "stop" }, "id": "run-1c08a44f-db70-4836-935b-417caaf422a5", "tool_calls": [], "invalid_tool_calls": [] } } } ] ], "llm_output": null, "run": null}[chain/start] [1:chain:AgentExecutor > 11:chain:RunnableSequence > 17:parser:ToolsAgentOutputParser] Entering Parser run with input:[inputs][chain/end] [1:chain:AgentExecutor > 11:chain:RunnableSequence > 17:parser:ToolsAgentOutputParser] [2ms] Exiting Parser run with output:[outputs][chain/end] [1:chain:AgentExecutor > 11:chain:RunnableSequence] [20.27s] Exiting Chain run with output:[outputs][chain/end] [1:chain:AgentExecutor] [26.37s] Exiting Chain run with output:{ "output": "The 2023 film \"Oppenheimer\" was directed by Christopher Nolan.\n\nTo calculate Christopher Nolan's age in days, we first need his birth date, which is July 30, 1970. Let's calculate his age in days from his birth date to today's date, December 7, 2023.\n\n1. Calculate the total number of days from July 30, 1970, to December 7, 2023.\n2. Christopher Nolan was born on July 30, 1970. From July 30, 1970, to July 30, 2023, is 53 years.\n3. From July 30, 2023, to December 7, 2023, is 130 days.\n\nNow, calculate the total days for 53 years:\n- Each year has 365 days, so 53 years Γ— 365 days/year = 19,345 days.\n- Adding the leap years from 1970 to 2023: 1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016, 2020, and 2024 (up to February). 
This gives us 14 leap years.\n- Total days from leap years: 14 days.\n\nAdding all together:\n- Total days = 19,345 days (from years) + 14 days (from leap years) + 130 days (from July 30, 2023, to December 7, 2023) = 19,489 days.\n\nTherefore, as of December 7, 2023, Christopher Nolan is 19,489 days old."} {'input': 'Who directed the 2023 film Oppenheimer and what is their age in days?', 'output': 'The 2023 film "Oppenheimer" was directed by Christopher Nolan.\n\nTo calculate Christopher Nolan\'s age in days, we first need his birth date, which is July 30, 1970. Let\'s calculate his age in days from his birth date to today\'s date, December 7, 2023.\n\n1. Calculate the total number of days from July 30, 1970, to December 7, 2023.\n2. Christopher Nolan was born on July 30, 1970. From July 30, 1970, to July 30, 2023, is 53 years.\n3. From July 30, 2023, to December 7, 2023, is 130 days.\n\nNow, calculate the total days for 53 years:\n- Each year has 365 days, so 53 years Γ— 365 days/year = 19,345 days.\n- Adding the leap years from 1970 to 2023: 1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016, 2020, and 2024 (up to February). This gives us 14 leap years.\n- Total days from leap years: 14 days.\n\nAdding all together:\n- Total days = 19,345 days (from years) + 14 days (from leap years) + 130 days (from July 30, 2023, to December 7, 2023) = 19,489 days.\n\nTherefore, as of December 7, 2023, Christopher Nolan is 19,489 days old.'} [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/debugging.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to create custom tools ](/v0.2/docs/how_to/custom_tools/)[ Next How to load CSVs ](/v0.2/docs/how_to/document_loader_csv/) * [Tracing](#tracing) * [`set_debug` and `set_verbose`](#set_debug-and-set_verbose) * [`set_verbose(True)`](#set_verbosetrue) * [`set_debug(True)`](#set_debugtrue)
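As an aside on the trace above: the model's final answer does the day arithmetic by hand, and this is exactly the kind of intermediate step worth spot-checking when debugging agent output. A quick check with Python's standard library (dates taken from the trace) gives 19,488 days, one fewer than the model's 19,489, because the 2024 leap day it counted had not yet occurred as of December 7, 2023:

```python
from datetime import date

birth = date(1970, 7, 30)    # Christopher Nolan's birth date, per the search results above
as_of = date(2023, 12, 7)    # "today" in the trace above
print((as_of - birth).days)  # 19488
```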
null
https://python.langchain.com/v0.2/docs/how_to/callbacks_async/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to use callbacks in async environments On this page How to use callbacks in async environments ========================================== Prerequisites This guide assumes familiarity with the following concepts: * [Callbacks](/v0.2/docs/concepts/#callbacks) * [Custom callback handlers](/v0.2/docs/how_to/custom_callbacks/) If you are planning to use the async APIs, it is recommended to use and extend [`AsyncCallbackHandler`](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.AsyncCallbackHandler.html) to avoid blocking the event loop. danger If you use a sync `CallbackHandler` while using an async method to run your LLM / Chain / Tool / Agent, it will still work. However, under the hood, it will be called with [`run_in_executor`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor), which can cause issues if your `CallbackHandler` is not thread-safe. danger If you're on `python<=3.10`, you need to remember to propagate `config` or `callbacks` when invoking other runnables from within a `RunnableLambda`, `RunnableGenerator` or `@tool`. If you do not do this, the callbacks will not be propagated to the child runnables being invoked. import asynciofrom typing import Any, Dict, Listfrom langchain_anthropic import ChatAnthropicfrom langchain_core.callbacks import AsyncCallbackHandler, BaseCallbackHandlerfrom langchain_core.messages import HumanMessagefrom langchain_core.outputs import LLMResultclass MyCustomSyncHandler(BaseCallbackHandler): def on_llm_new_token(self, token: str, **kwargs) -> None: print(f"Sync handler being called in a `thread_pool_executor`: token: {token}")class MyCustomAsyncHandler(AsyncCallbackHandler): """Async callback handler that can be used to handle callbacks from langchain.""" async def on_llm_start( self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any ) -> None: """Run when the LLM starts running.""" print("zzzz....") await asyncio.sleep(0.3) class_name = serialized["name"] print("Hi! I just woke up. Your llm is starting") async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None: """Run when the LLM ends running.""" print("zzzz....") await asyncio.sleep(0.3) print("Hi! I just woke up. Your llm is ending")# To enable streaming, we pass in `streaming=True` to the ChatModel constructor# Additionally, we pass in a list with our custom handlerchat = ChatAnthropic( model="claude-3-sonnet-20240229", max_tokens=25, streaming=True, callbacks=[MyCustomSyncHandler(), MyCustomAsyncHandler()],)await chat.agenerate([[HumanMessage(content="Tell me a joke")]]) **API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) | [AsyncCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.AsyncCallbackHandler.html) | [BaseCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [LLMResult](https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.llm_result.LLMResult.html) zzzz....Hi! I just woke up. 
Your llm is startingSync handler being called in a `thread_pool_executor`: token: HereSync handler being called in a `thread_pool_executor`: token: 'sSync handler being called in a `thread_pool_executor`: token: aSync handler being called in a `thread_pool_executor`: token: littleSync handler being called in a `thread_pool_executor`: token: jokeSync handler being called in a `thread_pool_executor`: token: forSync handler being called in a `thread_pool_executor`: token: youSync handler being called in a `thread_pool_executor`: token: :Sync handler being called in a `thread_pool_executor`: token: WhySync handler being called in a `thread_pool_executor`: token: canSync handler being called in a `thread_pool_executor`: token: 'tSync handler being called in a `thread_pool_executor`: token: aSync handler being called in a `thread_pool_executor`: token: bicycleSync handler being called in a `thread_pool_executor`: token: stanSync handler being called in a `thread_pool_executor`: token: d upSync handler being called in a `thread_pool_executor`: token: bySync handler being called in a `thread_pool_executor`: token: itselfSync handler being called in a `thread_pool_executor`: token: ?Sync handler being called in a `thread_pool_executor`: token: BecauseSync handler being called in a `thread_pool_executor`: token: itSync handler being called in a `thread_pool_executor`: token: 'sSync handler being called in a `thread_pool_executor`: token: twoSync handler being called in a `thread_pool_executor`: token: -Sync handler being called in a `thread_pool_executor`: token: tirezzzz....Hi! I just woke up. Your llm is ending LLMResult(generations=[[ChatGeneration(text="Here's a little joke for you:\n\nWhy can't a bicycle stand up by itself? Because it's two-tire", message=AIMessage(content="Here's a little joke for you:\n\nWhy can't a bicycle stand up by itself? Because it's two-tire", id='run-8afc89e8-02c0-4522-8480-d96977240bd4-0'))]], llm_output={}, run=[RunInfo(run_id=UUID('8afc89e8-02c0-4522-8480-d96977240bd4'))]) Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned how to create your own custom callback handlers. Next, check out the other how-to guides in this section, such as [how to attach callbacks to a runnable](/v0.2/docs/how_to/callbacks_attach/). [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/callbacks_async.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous Caching ](/v0.2/docs/how_to/caching_embeddings/)[ Next How to attach callbacks to a runnable ](/v0.2/docs/how_to/callbacks_attach/) * [Next steps](#next-steps)
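Returning to the `python<=3.10` danger note near the top of this guide: the fix is simply to forward the `config` you receive when calling child runnables. Below is a minimal sketch, not from the original page — `joke_chain` and `tell_two_jokes` are illustrative names, and it assumes a `RunnableLambda` whose function accepts a `config` parameter:

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig, RunnableLambda

joke_chain = (
    ChatPromptTemplate.from_template("Tell me a joke about {topic}")
    | ChatAnthropic(model="claude-3-sonnet-20240229")
)

async def tell_two_jokes(topic: str, config: RunnableConfig) -> list:
    # Forward `config` explicitly so callbacks attached to the outer run
    # also reach these child invocations on python<=3.10.
    first = await joke_chain.ainvoke({"topic": topic}, config=config)
    second = await joke_chain.ainvoke({"topic": topic}, config=config)
    return [first, second]

outer = RunnableLambda(tell_two_jokes)
# await outer.ainvoke("bicycles", config={"callbacks": [MyCustomAsyncHandler()]})
```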
null
https://python.langchain.com/v0.2/docs/how_to/document_loader_csv/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to load CSVs On this page How to load CSVs ================ A [comma-separated values (CSV)](https://en.wikipedia.org/wiki/Comma-separated_values) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas. LangChain implements a [CSV Loader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.csv_loader.CSVLoader.html) that will load CSV files into a sequence of [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects. Each row of the CSV file is translated to one document. from langchain_community.document_loaders.csv_loader import CSVLoaderfile_path = ( "../../../docs/integrations/document_loaders/example_data/mlb_teams_2012.csv")loader = CSVLoader(file_path=file_path)data = loader.load()for record in data[:2]: print(record) **API Reference:**[CSVLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.csv_loader.CSVLoader.html) page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98' metadata={'source': '../../../docs/integrations/document_loaders/example_data/mlb_teams_2012.csv', 'row': 0}page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97' metadata={'source': '../../../docs/integrations/document_loaders/example_data/mlb_teams_2012.csv', 'row': 1} Customizing the CSV parsing and loading[​](#customizing-the-csv-parsing-and-loading "Direct link to Customizing the CSV parsing and loading") --------------------------------------------------------------------------------------------------------------------------------------------- `CSVLoader` will accept a `csv_args` kwarg that supports customization of arguments passed to Python's `csv.DictReader`. See the [csv module](https://docs.python.org/3/library/csv.html) documentation for more information of what csv args are supported. loader = CSVLoader( file_path=file_path, csv_args={ "delimiter": ",", "quotechar": '"', "fieldnames": ["MLB Team", "Payroll in millions", "Wins"], },)data = loader.load()for record in data[:2]: print(record) page_content='MLB Team: Team\nPayroll in millions: "Payroll (millions)"\nWins: "Wins"' metadata={'source': '../../../docs/integrations/document_loaders/example_data/mlb_teams_2012.csv', 'row': 0}page_content='MLB Team: Nationals\nPayroll in millions: 81.34\nWins: 98' metadata={'source': '../../../docs/integrations/document_loaders/example_data/mlb_teams_2012.csv', 'row': 1} Specify a column to identify the document source[​](#specify-a-column-to-identify-the-document-source "Direct link to Specify a column to identify the document source") ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ The `"source"` key on [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) metadata can be set using a column of the CSV. Use the `source_column` argument to specify a source for the document created from each row. Otherwise `file_path` will be used as the source for all documents created from the CSV file. This is useful when using documents loaded from CSV files for chains that answer questions using sources. 
loader = CSVLoader(file_path=file_path, source_column="Team")data = loader.load()for record in data[:2]: print(record) page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98' metadata={'source': 'Nationals', 'row': 0}page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97' metadata={'source': 'Reds', 'row': 1} Load from a string[​](#load-from-a-string "Direct link to Load from a string") ------------------------------------------------------------------------------ Python's `tempfile` can be used when working with CSV strings directly. import tempfilefrom io import StringIOstring_data = """"Team", "Payroll (millions)", "Wins""Nationals", 81.34, 98"Reds", 82.20, 97"Yankees", 197.96, 95"Giants", 117.62, 94""".strip()with tempfile.NamedTemporaryFile(delete=False, mode="w+") as temp_file: temp_file.write(string_data) temp_file_path = temp_file.nameloader = CSVLoader(file_path=temp_file_path)loader.load()for record in data[:2]: print(record) page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98' metadata={'source': 'Nationals', 'row': 0}page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97' metadata={'source': 'Reds', 'row': 1} [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/document_loader_csv.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to debug your LLM apps ](/v0.2/docs/how_to/debugging/)[ Next How to load documents from a directory ](/v0.2/docs/how_to/document_loader_directory/) * [Customizing the CSV parsing and loading](#customizing-the-csv-parsing-and-loading) * [Specify a column to identify the document source](#specify-a-column-to-identify-the-document-source) * [Load from a string](#load-from-a-string)
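If the CSV is large, you may prefer to stream rows instead of loading them all at once. Like other LangChain document loaders, `CSVLoader` also exposes a `lazy_load()` iterator; here is a minimal sketch reusing the `file_path` from the examples above:

```python
loader = CSVLoader(file_path=file_path, source_column="Team")

# Yields one Document per row without materializing the whole file in memory.
for doc in loader.lazy_load():
    print(doc.metadata["source"], "-", doc.page_content[:40])
```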
null
https://python.langchain.com/v0.2/docs/how_to/document_loader_directory/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to load documents from a directory On this page How to load documents from a directory ====================================== LangChain's [DirectoryLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.directory.DirectoryLoader.html) implements functionality for reading files from disk into LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects. Here we demonstrate: * How to load from a filesystem, including use of wildcard patterns; * How to use multithreading for file I/O; * How to use custom loader classes to parse specific file types (e.g., code); * How to handle errors, such as those due to decoding. from langchain_community.document_loaders import DirectoryLoader **API Reference:**[DirectoryLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.directory.DirectoryLoader.html) `DirectoryLoader` accepts a `loader_cls` kwarg, which defaults to [UnstructuredLoader](/v0.2/docs/integrations/document_loaders/unstructured_file/). [Unstructured](https://unstructured-io.github.io/unstructured/) supports parsing for a number of formats, such as PDF and HTML. Here we use it to read in a markdown (.md) file. We can use the `glob` parameter to control which files to load. Note that here it doesn't load the `.rst` file or the `.html` files. loader = DirectoryLoader("../", glob="**/*.md")docs = loader.load()len(docs) 20 print(docs[0].page_content[:100]) SecurityLangChain has a large ecosystem of integrations with various external resources like local Show a progress bar[​](#show-a-progress-bar "Direct link to Show a progress bar") --------------------------------------------------------------------------------- By default a progress bar will not be shown. To show a progress bar, install the `tqdm` library (e.g. `pip install tqdm`), and set the `show_progress` parameter to `True`. loader = DirectoryLoader("../", glob="**/*.md", show_progress=True)docs = loader.load() 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20/20 [00:00<00:00, 54.56it/s] Use multithreading[​](#use-multithreading "Direct link to Use multithreading") ------------------------------------------------------------------------------ By default the loading happens in one thread. In order to utilize several threads set the `use_multithreading` flag to true. loader = DirectoryLoader("../", glob="**/*.md", use_multithreading=True)docs = loader.load() Change loader class[​](#change-loader-class "Direct link to Change loader class") --------------------------------------------------------------------------------- By default this uses the `UnstructuredLoader` class. To customize the loader, specify the loader class in the `loader_cls` kwarg. 
Below we show an example using [TextLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.text.TextLoader.html): from langchain_community.document_loaders import TextLoaderloader = DirectoryLoader("../", glob="**/*.md", loader_cls=TextLoader)docs = loader.load() **API Reference:**[TextLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.text.TextLoader.html) print(docs[0].page_content[:100]) # SecurityLangChain has a large ecosystem of integrations with various external resources like loc Notice that while the `UnstructuredLoader` parses Markdown headers, `TextLoader` does not. If you need to load Python source code files, use the `PythonLoader`: from langchain_community.document_loaders import PythonLoaderloader = DirectoryLoader("../../../../../", glob="**/*.py", loader_cls=PythonLoader) **API Reference:**[PythonLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.python.PythonLoader.html) Auto-detect file encodings with TextLoader[​](#auto-detect-file-encodings-with-textloader "Direct link to Auto-detect file encodings with TextLoader") ------------------------------------------------------------------------------------------------------------------------------------------------------ `DirectoryLoader` can help manage errors due to variations in file encodings. Below we will attempt to load in a collection of files, one of which includes non-UTF8 encodings. path = "../../../../libs/langchain/tests/unit_tests/examples/"loader = DirectoryLoader(path, glob="**/*.txt", loader_cls=TextLoader) ### A. Default Behavior[​](#a-default-behavior "Direct link to A. Default Behavior") By default we raise an error: loader.load() Error loading file ../../../../libs/langchain/tests/unit_tests/examples/example-non-utf8.txt ---------------------------------------------------------------------------``````outputUnicodeDecodeError Traceback (most recent call last)``````outputFile ~/repos/langchain/libs/community/langchain_community/document_loaders/text.py:43, in TextLoader.lazy_load(self) 42 with open(self.file_path, encoding=self.encoding) as f:---> 43 text = f.read() 44 except UnicodeDecodeError as e:``````outputFile ~/.pyenv/versions/3.10.4/lib/python3.10/codecs.py:322, in BufferedIncrementalDecoder.decode(self, input, final) 321 data = self.buffer + input--> 322 (result, consumed) = self._buffer_decode(data, self.errors, final) 323 # keep undecoded input until the next call``````outputUnicodeDecodeError: 'utf-8' codec can't decode byte 0xca in position 0: invalid continuation byte``````outputThe above exception was the direct cause of the following exception:``````outputRuntimeError Traceback (most recent call last)``````outputCell In[10], line 1----> 1 loader.load()``````outputFile ~/repos/langchain/libs/community/langchain_community/document_loaders/directory.py:117, in DirectoryLoader.load(self) 115 def load(self) -> List[Document]: 116 """Load documents."""--> 117 return list(self.lazy_load())``````outputFile ~/repos/langchain/libs/community/langchain_community/document_loaders/directory.py:182, in DirectoryLoader.lazy_load(self) 180 else: 181 for i in items:--> 182 yield from self._lazy_load_file(i, p, pbar) 184 if pbar: 185 pbar.close()``````outputFile ~/repos/langchain/libs/community/langchain_community/document_loaders/directory.py:220, in DirectoryLoader._lazy_load_file(self, item, path, pbar) 218 else: 219 logger.error(f"Error loading file 
{str(item)}")--> 220 raise e 221 finally: 222 if pbar:``````outputFile ~/repos/langchain/libs/community/langchain_community/document_loaders/directory.py:210, in DirectoryLoader._lazy_load_file(self, item, path, pbar) 208 loader = self.loader_cls(str(item), **self.loader_kwargs) 209 try:--> 210 for subdoc in loader.lazy_load(): 211 yield subdoc 212 except NotImplementedError:``````outputFile ~/repos/langchain/libs/community/langchain_community/document_loaders/text.py:56, in TextLoader.lazy_load(self) 54 continue 55 else:---> 56 raise RuntimeError(f"Error loading {self.file_path}") from e 57 except Exception as e: 58 raise RuntimeError(f"Error loading {self.file_path}") from e``````outputRuntimeError: Error loading ../../../../libs/langchain/tests/unit_tests/examples/example-non-utf8.txt The file `example-non-utf8.txt` uses a different encoding, so the `load()` function fails with a helpful message indicating which file failed decoding. With the default behavior of `TextLoader` any failure to load any of the documents will fail the whole loading process and no documents are loaded. ### B. Silent fail[​](#b-silent-fail "Direct link to B. Silent fail") We can pass the parameter `silent_errors` to the `DirectoryLoader` to skip the files which could not be loaded and continue the load process. loader = DirectoryLoader( path, glob="**/*.txt", loader_cls=TextLoader, silent_errors=True)docs = loader.load() Error loading file ../../../../libs/langchain/tests/unit_tests/examples/example-non-utf8.txt: Error loading ../../../../libs/langchain/tests/unit_tests/examples/example-non-utf8.txt doc_sources = [doc.metadata["source"] for doc in docs]doc_sources ['../../../../libs/langchain/tests/unit_tests/examples/example-utf8.txt'] ### C. Auto detect encodings[​](#c-auto-detect-encodings "Direct link to C. Auto detect encodings") We can also ask `TextLoader` to auto detect the file encoding before failing, by passing the `autodetect_encoding` to the loader class. text_loader_kwargs = {"autodetect_encoding": True}loader = DirectoryLoader( path, glob="**/*.txt", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)docs = loader.load() doc_sources = [doc.metadata["source"] for doc in docs]doc_sources ['../../../../libs/langchain/tests/unit_tests/examples/example-utf8.txt', '../../../../libs/langchain/tests/unit_tests/examples/example-non-utf8.txt'] [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/document_loader_directory.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to load CSVs ](/v0.2/docs/how_to/document_loader_csv/)[ Next How to load HTML ](/v0.2/docs/how_to/document_loader_html/) * [Show a progress bar](#show-a-progress-bar) * [Use multithreading](#use-multithreading) * [Change loader class](#change-loader-class) * [Auto-detect file encodings with TextLoader](#auto-detect-file-encodings-with-textloader) * [A. Default Behavior](#a-default-behavior) * [B. Silent fail](#b-silent-fail) * [C. Auto detect encodings](#c-auto-detect-encodings)
null
https://python.langchain.com/v0.2/docs/how_to/custom_tools/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to create custom tools On this page How to create custom tools ========================== When constructing an agent, you will need to provide it with a list of `Tool`s that it can use. Besides the actual function that is called, the Tool consists of several components:

| Attribute | Type | Description |
| --- | --- | --- |
| name | str | Must be unique within a set of tools provided to an LLM or agent. |
| description | str | Describes what the tool does. Used as context by the LLM or agent. |
| args\_schema | Pydantic BaseModel | Optional but recommended; can be used to provide more information (e.g., few-shot examples) or validation for expected parameters. |
| return\_direct | boolean | Only relevant for agents. When True, after invoking the given tool, the agent will stop and return the result directly to the user. |

LangChain provides 3 ways to create tools: 1. Using [@tool decorator](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html#langchain_core.tools.tool) -- the simplest way to define a custom tool. 2. Using [StructuredTool.from\_function](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.StructuredTool.html#langchain_core.tools.StructuredTool.from_function) class method -- this is similar to the `@tool` decorator, but allows more configuration and specification of both sync and async implementations. 3. By sub-classing from [BaseTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html) -- this is the most flexible method; it provides the largest degree of control, at the expense of more effort and code. The `@tool` or the `StructuredTool.from_function` class method should be sufficient for most use cases. tip Models will perform better if the tools have well-chosen names, descriptions and JSON schemas. @tool decorator[​](#tool-decorator "Direct link to @tool decorator") -------------------------------------------------------------------- The `@tool` decorator is the simplest way to define a custom tool. The decorator uses the function name as the tool name by default, but this can be overridden by passing a string as the first argument. Additionally, the decorator will use the function's docstring as the tool's description - so a docstring MUST be provided. from langchain_core.tools import tool@tooldef multiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * b# Let's inspect some of the attributes associated with the tool.print(multiply.name)print(multiply.description)print(multiply.args) **API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html) multiplymultiply(a: int, b: int) -> int - Multiply two numbers.{'a': {'title': 'A', 'type': 'integer'}, 'b': {'title': 'B', 'type': 'integer'}} Or create an **async** implementation, like this: from langchain_core.tools import tool@toolasync def amultiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * b **API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html) You can also customize the tool name and JSON args by passing them into the tool decorator. 
from langchain.pydantic_v1 import BaseModel, Fieldclass CalculatorInput(BaseModel): a: int = Field(description="first number") b: int = Field(description="second number")@tool("multiplication-tool", args_schema=CalculatorInput, return_direct=True)def multiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * b# Let's inspect some of the attributes associated with the tool.print(multiply.name)print(multiply.description)print(multiply.args)print(multiply.return_direct) multiplication-toolmultiplication-tool(a: int, b: int) -> int - Multiply two numbers.{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}}True StructuredTool[​](#structuredtool "Direct link to StructuredTool") ------------------------------------------------------------------ The `StructuredTool.from_function` class method provides a bit more configurability than the `@tool` decorator, without requiring much additional code. from langchain_core.tools import StructuredTooldef multiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * basync def amultiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * bcalculator = StructuredTool.from_function(func=multiply, coroutine=amultiply)print(calculator.invoke({"a": 2, "b": 3}))print(await calculator.ainvoke({"a": 2, "b": 5})) **API Reference:**[StructuredTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.StructuredTool.html) 610 To configure it: class CalculatorInput(BaseModel): a: int = Field(description="first number") b: int = Field(description="second number")def multiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * bcalculator = StructuredTool.from_function( func=multiply, name="Calculator", description="multiply numbers", args_schema=CalculatorInput, return_direct=True, # coroutine= ... <- you can specify an async method if desired as well)print(calculator.invoke({"a": 2, "b": 3}))print(calculator.name)print(calculator.description)print(calculator.args) 6CalculatorCalculator(a: int, b: int) -> int - multiply numbers{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}} Subclass BaseTool[​](#subclass-basetool "Direct link to Subclass BaseTool") --------------------------------------------------------------------------- You can define a custom tool by sub-classing from `BaseTool`. This provides maximal control over the tool definition, but requires writing more code. from typing import Optional, Typefrom langchain.pydantic_v1 import BaseModel, Fieldfrom langchain_core.callbacks import ( AsyncCallbackManagerForToolRun, CallbackManagerForToolRun,)from langchain_core.tools import BaseToolclass CalculatorInput(BaseModel): a: int = Field(description="first number") b: int = Field(description="second number")class CustomCalculatorTool(BaseTool): name = "Calculator" description = "useful for when you need to answer questions about math" args_schema: Type[BaseModel] = CalculatorInput return_direct: bool = True def _run( self, a: int, b: int, run_manager: Optional[CallbackManagerForToolRun] = None ) -> str: """Use the tool.""" return a * b async def _arun( self, a: int, b: int, run_manager: Optional[AsyncCallbackManagerForToolRun] = None, ) -> str: """Use the tool asynchronously.""" # If the calculation is cheap, you can just delegate to the sync implementation # as shown below. 
# If the sync calculation is expensive, you should delete the entire _arun method. # LangChain will automatically provide a better implementation that will # kick off the task in a thread to make sure it doesn't block other async code. return self._run(a, b, run_manager=run_manager.get_sync()) **API Reference:**[AsyncCallbackManagerForToolRun](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.AsyncCallbackManagerForToolRun.html) | [CallbackManagerForToolRun](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.CallbackManagerForToolRun.html) | [BaseTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html) multiply = CustomCalculatorTool()print(multiply.name)print(multiply.description)print(multiply.args)print(multiply.return_direct)print(multiply.invoke({"a": 2, "b": 3}))print(await multiply.ainvoke({"a": 2, "b": 3})) Calculatoruseful for when you need to answer questions about math{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}}True66 How to create async tools[​](#how-to-create-async-tools "Direct link to How to create async tools") --------------------------------------------------------------------------------------------------- LangChain Tools implement the [Runnable interface 🏃](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html). All Runnables expose the `invoke` and `ainvoke` methods (as well as other methods like `batch`, `abatch`, `astream`, etc.). So even if you only provide a sync implementation of a tool, you can still use the `ainvoke` interface, but there are some important things to know: * By default, LangChain provides an async implementation that assumes the function is expensive to compute, so it'll delegate execution to another thread. * If you're working in an async codebase, you should create async tools rather than sync tools, to avoid incurring a small overhead due to that thread. * If you need both sync and async implementations, use `StructuredTool.from_function` or sub-class from `BaseTool`. * If implementing both sync and async, and the sync code is fast to run, override the default LangChain async implementation and simply call the sync code. * You CANNOT and SHOULD NOT use the sync `invoke` with an `async` tool. from langchain_core.tools import StructuredTooldef multiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * bcalculator = StructuredTool.from_function(func=multiply)print(calculator.invoke({"a": 2, "b": 3}))print( await calculator.ainvoke({"a": 2, "b": 5})) # Uses the default LangChain async implementation, which incurs a small overhead **API Reference:**[StructuredTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.StructuredTool.html) 610 from langchain_core.tools import StructuredTooldef multiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * basync def amultiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * bcalculator = StructuredTool.from_function(func=multiply, coroutine=amultiply)print(calculator.invoke({"a": 2, "b": 3}))print( await calculator.ainvoke({"a": 2, "b": 5})) # Uses the provided amultiply without additional overhead **API Reference:**[StructuredTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.StructuredTool.html) 610 You should not and cannot use `.invoke` when providing only an async definition. 
@toolasync def multiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * btry: multiply.invoke({"a": 2, "b": 3})except NotImplementedError: print("Raised not implemented error. You should not be doing this.") Raised not implemented error. You should not be doing this. Handling Tool Errors[​](#handling-tool-errors "Direct link to Handling Tool Errors") ------------------------------------------------------------------------------------ If you're using tools with agents, you will likely need an error handling strategy, so the agent can recover from the error and continue execution. A simple strategy is to throw a `ToolException` from inside the tool and specify an error handler using `handle_tool_error`. When the error handler is specified, the exception will be caught and the error handler will decide which output to return from the tool. You can set `handle_tool_error` to `True`, a string value, or a function. If it's a function, the function should take a `ToolException` as a parameter and return a value. Please note that only raising a `ToolException` won't be effective. You need to first set the `handle_tool_error` of the tool because its default value is `False`. from langchain_core.tools import ToolExceptiondef get_weather(city: str) -> int: """Get weather for the given city.""" raise ToolException(f"Error: There is no city by the name of {city}.") **API Reference:**[ToolException](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.ToolException.html) Here's an example with the default `handle_tool_error=True` behavior. get_weather_tool = StructuredTool.from_function( func=get_weather, handle_tool_error=True,)get_weather_tool.invoke({"city": "foobar"}) 'Error: There is no city by the name of foobar.' We can set `handle_tool_error` to a string that will always be returned. get_weather_tool = StructuredTool.from_function( func=get_weather, handle_tool_error="There is no such city, but it's probably above 0K there!",)get_weather_tool.invoke({"city": "foobar"}) "There is no such city, but it's probably above 0K there!" Handling the error using a function: def _handle_error(error: ToolException) -> str: return f"The following errors occurred during tool execution: `{error.args[0]}`"get_weather_tool = StructuredTool.from_function( func=get_weather, handle_tool_error=_handle_error,)get_weather_tool.invoke({"city": "foobar"}) 'The following errors occurred during tool execution: `Error: There is no city by the name of foobar.`' [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/custom_tools.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous Custom Retriever ](/v0.2/docs/how_to/custom_retriever/)[ Next How to debug your LLM apps ](/v0.2/docs/how_to/debugging/) * [@tool decorator](#tool-decorator) * [StructuredTool](#structuredtool) * [Subclass BaseTool](#subclass-basetool) * [How to create async tools](#how-to-create-async-tools) * [Handling Tool Errors](#handling-tool-errors)
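The sections above compose: a tool created with `StructuredTool.from_function` can register both sync and async implementations *and* an error handler at the same time. Here is a minimal sketch reusing the `get_weather` idea from above; the `aget_weather` coroutine is illustrative, and it assumes `handle_tool_error` applies to both the sync and async paths:

```python
from langchain_core.tools import StructuredTool, ToolException

def get_weather(city: str) -> str:
    """Get weather for the given city."""
    raise ToolException(f"Error: There is no city by the name of {city}.")

async def aget_weather(city: str) -> str:
    """Get weather for the given city."""
    raise ToolException(f"Error: There is no city by the name of {city}.")

get_weather_tool = StructuredTool.from_function(
    func=get_weather,
    coroutine=aget_weather,
    handle_tool_error="There is no such city, but it's probably above 0K there!",
)

print(get_weather_tool.invoke({"city": "foobar"}))
# await get_weather_tool.ainvoke({"city": "foobar"})  # returns the same handled message
```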
null
https://python.langchain.com/v0.2/docs/how_to/callbacks_attach/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to attach callbacks to a runnable On this page How to attach callbacks to a runnable ===================================== Prerequisites This guide assumes familiarity with the following concepts: * [Callbacks](/v0.2/docs/concepts/#callbacks) * [Custom callback handlers](/v0.2/docs/how_to/custom_callbacks/) * [Chaining runnables](/v0.2/docs/how_to/sequence/) * [Attach runtime arguments to a Runnable](/v0.2/docs/how_to/binding/) If you are composing a chain of runnables and want to reuse callbacks across multiple executions, you can attach callbacks with the [`.with_config()`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_config) method. This saves you the need to pass callbacks in each time you invoke the chain. info `with_config()` binds a configuration which will be interpreted as **runtime** configuration. So these callbacks will propagate to all child components. Here's an example: from typing import Any, Dict, Listfrom langchain_anthropic import ChatAnthropicfrom langchain_core.callbacks import BaseCallbackHandlerfrom langchain_core.messages import BaseMessagefrom langchain_core.outputs import LLMResultfrom langchain_core.prompts import ChatPromptTemplateclass LoggingHandler(BaseCallbackHandler): def on_chat_model_start( self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs ) -> None: print("Chat model started") def on_llm_end(self, response: LLMResult, **kwargs) -> None: print(f"Chat model ended, response: {response}") def on_chain_start( self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs ) -> None: print(f"Chain {serialized.get('name')} started") def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None: print(f"Chain ended, outputs: {outputs}")callbacks = [LoggingHandler()]llm = ChatAnthropic(model="claude-3-sonnet-20240229")prompt = ChatPromptTemplate.from_template("What is 1 + {number}?")chain = prompt | llmchain_with_callbacks = chain.with_config(callbacks=callbacks)chain_with_callbacks.invoke({"number": "2"}) **API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) | [BaseCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html) | [BaseMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.base.BaseMessage.html) | [LLMResult](https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.llm_result.LLMResult.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) Chain RunnableSequence startedChain ChatPromptTemplate startedChain ended, outputs: messages=[HumanMessage(content='What is 1 + 2?')]Chat model startedChat model ended, response: generations=[[ChatGeneration(text='1 + 2 = 3', message=AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-d6bcfd72-9c94-466d-bac0-f39e456ad6e3-0'))]] llm_output={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} run=NoneChain ended, outputs: content='1 + 2 = 3' 
response_metadata={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} id='run-d6bcfd72-9c94-466d-bac0-f39e456ad6e3-0' AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-d6bcfd72-9c94-466d-bac0-f39e456ad6e3-0') The bound callbacks will run for all nested module runs. Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned how to attach callbacks to a chain. Next, check out the other how-to guides in this section, such as how to [pass callbacks in at runtime](/v0.2/docs/how_to/callbacks_runtime/). 
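One more note on `with_config()` before moving on: because it binds general runtime configuration, you can attach other config (such as tags) alongside the callbacks, and the bound chain can then be invoked repeatedly without passing anything extra. A small sketch reusing the `chain` and `callbacks` defined above; the tag name is illustrative:

```python
configured_chain = chain.with_config(callbacks=callbacks, tags=["math-demo"])

# The handlers (and the tag) apply to this and every later invocation of the bound chain.
configured_chain.invoke({"number": "2"})
configured_chain.invoke({"number": "7"})
```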
null
https://python.langchain.com/v0.2/docs/how_to/callbacks_constructor/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to propagate callbacks constructor On this page How to propagate callbacks constructor ====================================== Prerequisites This guide assumes familiarity with the following concepts: * [Callbacks](/v0.2/docs/concepts/#callbacks) * [Custom callback handlers](/v0.2/docs/how_to/custom_callbacks/) Most LangChain modules allow you to pass `callbacks` directly into the constructor (i.e., initializer). In this case, the callbacks will only be called for that instance (and any nested runs). danger Constructor callbacks are scoped only to the object they are defined on. They are **not** inherited by children of the object. This can lead to confusing behavior, and it's generally better to pass callbacks as a run time argument. Here's an example: from typing import Any, Dict, Listfrom langchain_anthropic import ChatAnthropicfrom langchain_core.callbacks import BaseCallbackHandlerfrom langchain_core.messages import BaseMessagefrom langchain_core.outputs import LLMResultfrom langchain_core.prompts import ChatPromptTemplateclass LoggingHandler(BaseCallbackHandler): def on_chat_model_start( self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs ) -> None: print("Chat model started") def on_llm_end(self, response: LLMResult, **kwargs) -> None: print(f"Chat model ended, response: {response}") def on_chain_start( self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs ) -> None: print(f"Chain {serialized.get('name')} started") def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None: print(f"Chain ended, outputs: {outputs}")callbacks = [LoggingHandler()]llm = ChatAnthropic(model="claude-3-sonnet-20240229", callbacks=callbacks)prompt = ChatPromptTemplate.from_template("What is 1 + {number}?")chain = prompt | llmchain.invoke({"number": "2"}) **API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) | [BaseCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html) | [BaseMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.base.BaseMessage.html) | [LLMResult](https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.llm_result.LLMResult.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) Chat model startedChat model ended, response: generations=[[ChatGeneration(text='1 + 2 = 3', message=AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01CdKsRmeS9WRb8BWnHDEHm7', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-2d7fdf2a-7405-4e17-97c0-67e6b2a65305-0'))]] llm_output={'id': 'msg_01CdKsRmeS9WRb8BWnHDEHm7', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} run=None AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01CdKsRmeS9WRb8BWnHDEHm7', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-2d7fdf2a-7405-4e17-97c0-67e6b2a65305-0') You can see that we only see events from the chat model run - no chain events from the prompt or broader chain. 
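If you do want the handler to see the prompt and chain events as well, pass the callbacks at runtime rather than in the constructor (covered in the next guide). A minimal sketch reusing the `prompt` and `callbacks` defined above:

```python
llm = ChatAnthropic(model="claude-3-sonnet-20240229")  # no constructor callbacks this time
chain = prompt | llm

# Runtime callbacks are inherited by all child runs, so the handler now also
# reports the RunnableSequence and ChatPromptTemplate events.
chain.invoke({"number": "2"}, config={"callbacks": callbacks})
```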
Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned how to pass callbacks into a constructor. Next, check out the other how-to guides in this section, such as how to [pass callbacks at runtime](/v0.2/docs/how_to/callbacks_runtime/). 
null
https://python.langchain.com/v0.2/docs/how_to/document_loader_json/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to load JSON On this page How to load JSON ================ [JSON (JavaScript Object Notation)](https://en.wikipedia.org/wiki/JSON) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). [JSON Lines](https://jsonlines.org/) is a file format where each line is a valid JSON value. LangChain implements a [JSONLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.json_loader.JSONLoader.html) to convert JSON and JSONL data into LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects. It uses a specified [jq schema](https://en.wikipedia.org/wiki/Jq_\(programming_language\)) to parse the JSON files, allowing for the extraction of specific fields into the content and metadata of the LangChain Document. It uses the `jq` python package. Check out this [manual](https://stedolan.github.io/jq/manual/#Basicfilters) for a detailed documentation of the `jq` syntax. Here we will demonstrate: * How to load JSON and JSONL data into the content of a LangChain `Document`; * How to load JSON and JSONL data into metadata associated with a `Document`. #!pip install jq from langchain_community.document_loaders import JSONLoader **API Reference:**[JSONLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.json_loader.JSONLoader.html) import jsonfrom pathlib import Pathfrom pprint import pprintfile_path='./example_data/facebook_chat.json'data = json.loads(Path(file_path).read_text()) pprint(data) {'image': {'creation_timestamp': 1675549016, 'uri': 'image_of_the_chat.jpg'}, 'is_still_participant': True, 'joinable_mode': {'link': '', 'mode': 1}, 'magic_words': [], 'messages': [{'content': 'Bye!', 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}, {'content': 'Oh no worries! Bye', 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}, {'content': 'No Im sorry it was my mistake, the blue one is not ' 'for sale', 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}, {'content': 'I thought you were selling the blue one!', 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}, {'content': 'Im not interested in this bag. Im interested in the ' 'blue one!', 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}, {'content': 'Here is $129', 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}, {'photos': [{'creation_timestamp': 1675595059, 'uri': 'url_of_some_picture.jpg'}], 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}, {'content': 'Online is at least $100', 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}, {'content': 'How much do you want?', 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}, {'content': 'Goodmorning! $50 is too low.', 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}, {'content': 'Hi! Im interested in your bag. Im offering $50. Let ' 'me know if you are interested. 
Thanks!', 'sender_name': 'User 1', 'timestamp_ms': 1675549022673}], 'participants': [{'name': 'User 1'}, {'name': 'User 2'}], 'thread_path': 'inbox/User 1 and User 2 chat', 'title': 'User 1 and User 2 chat'} Using `JSONLoader`[​](#using-jsonloader "Direct link to using-jsonloader") -------------------------------------------------------------------------- Suppose we are interested in extracting the values under the `content` field within the `messages` key of the JSON data. This can easily be done through the `JSONLoader` as shown below. ### JSON file[​](#json-file "Direct link to JSON file") loader = JSONLoader( file_path='./example_data/facebook_chat.json', jq_schema='.messages[].content', text_content=False)data = loader.load() pprint(data) [Document(page_content='Bye!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1}), Document(page_content='Oh no worries! Bye', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3}), Document(page_content='I thought you were selling the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4}), Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5}), Document(page_content='Here is $129', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6}), Document(page_content='', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7}), Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8}), Document(page_content='How much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9}), Document(page_content='Goodmorning! $50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10}), Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11})] ### JSON Lines file[​](#json-lines-file "Direct link to JSON Lines file") If you want to load documents from a JSON Lines file, you pass `json_lines=True` and specify `jq_schema` to extract `page_content` from a single JSON object. file_path = './example_data/facebook_chat_messages.jsonl'pprint(Path(file_path).read_text()) ('{"sender_name": "User 2", "timestamp_ms": 1675597571851, "content": "Bye!"}\n' '{"sender_name": "User 1", "timestamp_ms": 1675597435669, "content": "Oh no ' 'worries! 
Bye"}\n' '{"sender_name": "User 2", "timestamp_ms": 1675596277579, "content": "No Im ' 'sorry it was my mistake, the blue one is not for sale"}\n') loader = JSONLoader( file_path='./example_data/facebook_chat_messages.jsonl', jq_schema='.content', text_content=False, json_lines=True)data = loader.load() pprint(data) [Document(page_content='Bye!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 1}), Document(page_content='Oh no worries! Bye', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 2}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 3})] Another option is set `jq_schema='.'` and provide `content_key`: loader = JSONLoader( file_path='./example_data/facebook_chat_messages.jsonl', jq_schema='.', content_key='sender_name', json_lines=True)data = loader.load() pprint(data) [Document(page_content='User 2', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 1}), Document(page_content='User 1', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 2}), Document(page_content='User 2', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 3})] ### JSON file with jq schema `content_key`[​](#json-file-with-jq-schema-content_key "Direct link to json-file-with-jq-schema-content_key") To load documents from a JSON file using the content\_key within the jq schema, set is\_content\_key\_jq\_parsable=True. Ensure that content\_key is compatible and can be parsed using the jq schema. file_path = './sample.json'pprint(Path(file_path).read_text()) {"data": [ {"attributes": { "message": "message1", "tags": [ "tag1"]}, "id": "1"}, {"attributes": { "message": "message2", "tags": [ "tag2"]}, "id": "2"}]} loader = JSONLoader( file_path=file_path, jq_schema=".data[]", content_key=".attributes.message", is_content_key_jq_parsable=True,)data = loader.load() pprint(data) [Document(page_content='message1', metadata={'source': '/path/to/sample.json', 'seq_num': 1}), Document(page_content='message2', metadata={'source': '/path/to/sample.json', 'seq_num': 2})] Extracting metadata[​](#extracting-metadata "Direct link to Extracting metadata") --------------------------------------------------------------------------------- Generally, we want to include metadata available in the JSON file into the documents that we create from the content. The following demonstrates how metadata can be extracted using the `JSONLoader`. There are some key changes to be noted. In the previous example where we didn't collect the metadata, we managed to directly specify in the schema where the value for the `page_content` can be extracted from. .messages[].content In the current example, we have to tell the loader to iterate over the records in the `messages` field. The jq\_schema then has to be: .messages[] This allows us to pass the records (dict) into the `metadata_func` that has to be implemented. The `metadata_func` is responsible for identifying which pieces of information in the record should be included in the metadata stored in the final `Document` object. 
Additionally, we now have to explicitly specify in the loader, via the `content_key` argument, the key from the record where the value for the `page_content` needs to be extracted from. # Define the metadata extraction function.def metadata_func(record: dict, metadata: dict) -> dict: metadata["sender_name"] = record.get("sender_name") metadata["timestamp_ms"] = record.get("timestamp_ms") return metadataloader = JSONLoader( file_path='./example_data/facebook_chat.json', jq_schema='.messages[]', content_key="content", metadata_func=metadata_func)data = loader.load() pprint(data) [Document(page_content='Bye!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}), Document(page_content='Oh no worries! Bye', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}), Document(page_content='I thought you were selling the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}), Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}), Document(page_content='Here is $129', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}), Document(page_content='', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}), Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}), Document(page_content='How much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}), Document(page_content='Goodmorning! $50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}), Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. 
Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})] Now, you will see that the documents contain the metadata associated with the content we extracted. The `metadata_func`[​](#the-metadata_func "Direct link to the-metadata_func") ----------------------------------------------------------------------------- As shown above, the `metadata_func` accepts the default metadata generated by the `JSONLoader`. This allows full control to the user with respect to how the metadata is formatted. For example, the default metadata contains the `source` and the `seq_num` keys. However, it is possible that the JSON data contain these keys as well. The user can then exploit the `metadata_func` to rename the default keys and use the ones from the JSON data. The example below shows how we can modify the `source` to only contain information of the file source relative to the `langchain` directory. # Define the metadata extraction function.def metadata_func(record: dict, metadata: dict) -> dict: metadata["sender_name"] = record.get("sender_name") metadata["timestamp_ms"] = record.get("timestamp_ms") if "source" in metadata: source = metadata["source"].split("/") source = source[source.index("langchain"):] metadata["source"] = "/".join(source) return metadataloader = JSONLoader( file_path='./example_data/facebook_chat.json', jq_schema='.messages[]', content_key="content", metadata_func=metadata_func)data = loader.load() pprint(data) [Document(page_content='Bye!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}), Document(page_content='Oh no worries! Bye', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}), Document(page_content='I thought you were selling the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}), Document(page_content='Im not interested in this bag. 
Im interested in the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}), Document(page_content='Here is $129', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}), Document(page_content='', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}), Document(page_content='Online is at least $100', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}), Document(page_content='How much do you want?', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}), Document(page_content='Goodmorning! $50 is too low.', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}), Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})] Common JSON structures with jq schema[​](#common-json-structures-with-jq-schema "Direct link to Common JSON structures with jq schema") --------------------------------------------------------------------------------------------------------------------------------------- The list below provides a reference to the possible `jq_schema` the user can use to extract content from the JSON data depending on the structure. JSON -> [{"text": ...}, {"text": ...}, {"text": ...}]jq_schema -> ".[].text"JSON -> {"key": [{"text": ...}, {"text": ...}, {"text": ...}]}jq_schema -> ".key[].text"JSON -> ["...", "...", "..."]jq_schema -> ".[]" [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/document_loader_json.mdx) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to load HTML ](/v0.2/docs/how_to/document_loader_html/)[ Next How to load Markdown ](/v0.2/docs/how_to/document_loader_markdown/) * [Using `JSONLoader`](#using-jsonloader) * [JSON file](#json-file) * [JSON Lines file](#json-lines-file) * [JSON file with jq schema `content_key`](#json-file-with-jq-schema-content_key) * [Extracting metadata](#extracting-metadata) * [The `metadata_func`](#the-metadata_func) * [Common JSON structures with jq schema](#common-json-structures-with-jq-schema)
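As a concrete illustration of the `jq_schema` patterns listed above, here is a minimal sketch of the first one. The `posts.json` file name and its contents are made up purely for illustration:

```python
import json
from pathlib import Path

from langchain_community.document_loaders import JSONLoader

# Hypothetical data matching: JSON -> [{"text": ...}, {"text": ...}]
Path("posts.json").write_text(json.dumps([{"text": "first post"}, {"text": "second post"}]))

loader = JSONLoader(file_path="posts.json", jq_schema=".[].text")
docs = loader.load()
# docs[0].page_content == 'first post', docs[1].page_content == 'second post'
```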
null
https://python.langchain.com/v0.2/docs/how_to/document_loader_html/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to load HTML On this page How to load HTML ================ The HyperText Markup Language or [HTML](https://en.wikipedia.org/wiki/HTML) is the standard markup language for documents designed to be displayed in a web browser. This covers how to load `HTML` documents into LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects that we can use downstream. Parsing HTML files often requires specialized tools. Here we demonstrate parsing via [Unstructured](https://unstructured-io.github.io/unstructured/) and [BeautifulSoup4](https://beautiful-soup-4.readthedocs.io/en/latest/), which can be installed via pip. Head over to the integrations page to find integrations with additional services, such as [Azure AI Document Intelligence](/v0.2/docs/integrations/document_loaders/azure_document_intelligence/) or [FireCrawl](/v0.2/docs/integrations/document_loaders/firecrawl/). Loading HTML with Unstructured[​](#loading-html-with-unstructured "Direct link to Loading HTML with Unstructured") ------------------------------------------------------------------------------------------------------------------ %pip install "unstructured[html]" from langchain_community.document_loaders import UnstructuredHTMLLoaderfile_path = "../../../docs/integrations/document_loaders/example_data/fake-content.html"loader = UnstructuredHTMLLoader(file_path)data = loader.load()print(data) **API Reference:**[UnstructuredHTMLLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.html.UnstructuredHTMLLoader.html) [Document(page_content='My First Heading\n\nMy first paragraph.', metadata={'source': '../../../docs/integrations/document_loaders/example_data/fake-content.html'})] Loading HTML with BeautifulSoup4[​](#loading-html-with-beautifulsoup4 "Direct link to Loading HTML with BeautifulSoup4") ------------------------------------------------------------------------------------------------------------------------ We can also use `BeautifulSoup4` to load HTML documents using the `BSHTMLLoader`. This will extract the text from the HTML into `page_content`, and the page title as `title` into `metadata`. %pip install bs4 from langchain_community.document_loaders import BSHTMLLoaderloader = BSHTMLLoader(file_path)data = loader.load()print(data) **API Reference:**[BSHTMLLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.html_bs.BSHTMLLoader.html) [Document(page_content='\nTest Title\n\n\nMy First Heading\nMy first paragraph.\n\n\n', metadata={'source': '../../../docs/integrations/document_loaders/example_data/fake-content.html', 'title': 'Test Title'})]
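If you have a whole folder of HTML files rather than a single document, `BSHTMLLoader` can be combined with `DirectoryLoader`. The snippet below is a sketch that assumes an `example_data/` directory containing `.html` files; adjust the path and glob to your own layout:

```python
from langchain_community.document_loaders import BSHTMLLoader, DirectoryLoader

# Assumed directory of HTML files; swap in your own path and glob.
loader = DirectoryLoader("example_data/", glob="**/*.html", loader_cls=BSHTMLLoader)
docs = loader.load()
print(len(docs), docs[0].metadata.get("title"))
```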
null
https://python.langchain.com/v0.2/docs/how_to/document_loader_markdown/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to load Markdown On this page How to load Markdown ==================== [Markdown](https://en.wikipedia.org/wiki/Markdown) is a lightweight markup language for creating formatted text using a plain-text editor. Here we cover how to load `Markdown` documents into LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects that we can use downstream. We will cover: * Basic usage; * Parsing of Markdown into elements such as titles, list items, and text. LangChain implements an [UnstructuredMarkdownLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.markdown.UnstructuredMarkdownLoader.html) object which requires the [Unstructured](https://unstructured-io.github.io/unstructured/) package. First we install it: # !pip install "unstructured[md]" Basic usage will ingest a Markdown file to a single document. Here we demonstrate on LangChain's readme: from langchain_community.document_loaders import UnstructuredMarkdownLoaderfrom langchain_core.documents import Documentmarkdown_path = "../../../../README.md"loader = UnstructuredMarkdownLoader(markdown_path)data = loader.load()assert len(data) == 1assert isinstance(data[0], Document)readme_content = data[0].page_contentprint(readme_content[:250]) **API Reference:**[UnstructuredMarkdownLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.markdown.UnstructuredMarkdownLoader.html) | [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) πŸ¦œοΈπŸ”— LangChain⚑ Build context-aware reasoning applications ⚑Looking for the JS/TS library? Check out LangChain.js.To help you ship LangChain apps to production faster, check out LangSmith. LangSmith is a unified developer platform for building, Retain Elements[​](#retain-elements "Direct link to Retain Elements") --------------------------------------------------------------------- Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode="elements"`. loader = UnstructuredMarkdownLoader(markdown_path, mode="elements")data = loader.load()print(f"Number of documents: {len(data)}\n")for document in data[:2]: print(f"{document}\n") Number of documents: 65page_content='πŸ¦œοΈπŸ”— LangChain' metadata={'source': '../../../../README.md', 'last_modified': '2024-04-29T13:40:19', 'page_number': 1, 'languages': ['eng'], 'filetype': 'text/markdown', 'file_directory': '../../../..', 'filename': 'README.md', 'category': 'Title'}page_content='⚑ Build context-aware reasoning applications ⚑' metadata={'source': '../../../../README.md', 'last_modified': '2024-04-29T13:40:19', 'page_number': 1, 'languages': ['eng'], 'parent_id': 'c3223b6f7100be08a78f1e8c0c28fde1', 'filetype': 'text/markdown', 'file_directory': '../../../..', 'filename': 'README.md', 'category': 'NarrativeText'} Note that in this case we recover three distinct element types: print(set(document.metadata["category"] for document in data)) {'Title', 'NarrativeText', 'ListItem'} [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/document_loader_markdown.ipynb) * * * #### Was this page helpful? 
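Because each element carries its `category` in its metadata, elements can be filtered after loading. A small sketch, reusing the `data` list produced by the `mode="elements"` load above:

```python
# Keep only the list items from the elements-mode load above.
list_items = [doc for doc in data if doc.metadata.get("category") == "ListItem"]
print(f"Number of list items: {len(list_items)}")
for doc in list_items[:3]:
    print(doc.page_content)
```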
null
https://python.langchain.com/v0.2/docs/how_to/character_text_splitter/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to split by character How to split by character ========================= This is the simplest method. This splits based on a given character sequence, which defaults to `"\n\n"`. Chunk length is measured by number of characters. 1. How the text is split: by single character separator. 2. How the chunk size is measured: by number of characters. To obtain the string content directly, use `.split_text`. To create LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects (e.g., for use in downstream tasks), use `.create_documents`. %pip install -qU langchain-text-splitters from langchain_text_splitters import CharacterTextSplitter# Load an example documentwith open("state_of_the_union.txt") as f: state_of_the_union = f.read()text_splitter = CharacterTextSplitter( separator="\n\n", chunk_size=1000, chunk_overlap=200, length_function=len, is_separator_regex=False,)texts = text_splitter.create_documents([state_of_the_union])print(texts[0]) **API Reference:**[CharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.CharacterTextSplitter.html) page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' Use `.create_documents` to propagate metadata associated with each document to the output chunks: metadatas = [{"document": 1}, {"document": 2}]documents = text_splitter.create_documents( [state_of_the_union, state_of_the_union], metadatas=metadatas)print(documents[0]) page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' 
metadata={'document': 1} Use `.split_text` to obtain the string content directly: text_splitter.split_text(state_of_the_union)[0] 'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/character_text_splitter.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to pass callbacks in at runtime ](/v0.2/docs/how_to/callbacks_runtime/)[ Next How to cache chat model responses ](/v0.2/docs/how_to/chat_model_caching/)
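`CharacterTextSplitter` can also split `Document` objects that were produced by a loader via `.split_documents`, which carries each input document's metadata over to its chunks. A minimal sketch with a made-up in-memory document:

```python
from langchain_core.documents import Document
from langchain_text_splitters import CharacterTextSplitter

# A tiny, made-up document purely for illustration.
docs = [Document(page_content="First paragraph.\n\nSecond paragraph.", metadata={"source": "example"})]

text_splitter = CharacterTextSplitter(separator="\n\n", chunk_size=20, chunk_overlap=0)
chunks = text_splitter.split_documents(docs)
print(chunks)  # two chunks, each retaining {'source': 'example'} in its metadata
```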
null
https://python.langchain.com/v0.2/docs/how_to/callbacks_runtime/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to pass callbacks in at runtime On this page How to pass callbacks in at runtime =================================== Prerequisites This guide assumes familiarity with the following concepts: * [Callbacks](/v0.2/docs/concepts/#callbacks) * [Custom callback handlers](/v0.2/docs/how_to/custom_callbacks/) In many cases, it is advantageous to pass in handlers instead when running the object. When we pass through [`CallbackHandlers`](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain-core-callbacks-base-basecallbackhandler) using the `callbacks` keyword arg when executing an run, those callbacks will be issued by all nested objects involved in the execution. For example, when a handler is passed through to an Agent, it will be used for all callbacks related to the agent and all the objects involved in the agent's execution, in this case, the Tools and LLM. This prevents us from having to manually attach the handlers to each individual nested object. Here's an example: from typing import Any, Dict, Listfrom langchain_anthropic import ChatAnthropicfrom langchain_core.callbacks import BaseCallbackHandlerfrom langchain_core.messages import BaseMessagefrom langchain_core.outputs import LLMResultfrom langchain_core.prompts import ChatPromptTemplateclass LoggingHandler(BaseCallbackHandler): def on_chat_model_start( self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs ) -> None: print("Chat model started") def on_llm_end(self, response: LLMResult, **kwargs) -> None: print(f"Chat model ended, response: {response}") def on_chain_start( self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs ) -> None: print(f"Chain {serialized.get('name')} started") def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None: print(f"Chain ended, outputs: {outputs}")callbacks = [LoggingHandler()]llm = ChatAnthropic(model="claude-3-sonnet-20240229")prompt = ChatPromptTemplate.from_template("What is 1 + {number}?")chain = prompt | llmchain.invoke({"number": "2"}, config={"callbacks": callbacks}) **API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) | [BaseCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html) | [BaseMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.base.BaseMessage.html) | [LLMResult](https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.llm_result.LLMResult.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) Chain RunnableSequence startedChain ChatPromptTemplate startedChain ended, outputs: messages=[HumanMessage(content='What is 1 + 2?')]Chat model startedChat model ended, response: generations=[[ChatGeneration(text='1 + 2 = 3', message=AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01D8Tt5FdtBk5gLTfBPm2tac', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-bb0dddd8-85f3-4e6b-8553-eaa79f859ef8-0'))]] llm_output={'id': 'msg_01D8Tt5FdtBk5gLTfBPm2tac', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} run=NoneChain ended, outputs: content='1 + 2 = 3' 
response_metadata={'id': 'msg_01D8Tt5FdtBk5gLTfBPm2tac', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} id='run-bb0dddd8-85f3-4e6b-8553-eaa79f859ef8-0' AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01D8Tt5FdtBk5gLTfBPm2tac', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-bb0dddd8-85f3-4e6b-8553-eaa79f859ef8-0') If there are already existing callbacks associated with a module, these will run in addition to any passed in at runtime. Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned how to pass callbacks at runtime. Next, check out the other how-to guides in this section, such as how to [pass callbacks into a module constructor](/v0.2/docs/how_to/custom_callbacks/). [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/callbacks_runtime.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to propagate callbacks constructor ](/v0.2/docs/how_to/callbacks_constructor/)[ Next How to split by character ](/v0.2/docs/how_to/character_text_splitter/) * [Next steps](#next-steps)
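The same `config` works with the other run methods as well. For example, a brief sketch that reuses the `chain` and `callbacks` defined above while streaming:

```python
# Callbacks passed via config are also invoked when streaming.
for chunk in chain.stream({"number": "2"}, config={"callbacks": callbacks}):
    print(chunk.content, end="", flush=True)
```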
null
https://python.langchain.com/v0.2/docs/how_to/chat_model_caching/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to cache chat model responses On this page How to cache chat model responses ================================= Prerequisites This guide assumes familiarity with the following concepts: * [Chat models](/v0.2/docs/concepts/#chat-models) * [LLMs](/v0.2/docs/concepts/#llms) LangChain provides an optional caching layer for chat models. This is useful for two main reasons: * It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times. This is especially useful during app development. * It can speed up your application by reducing the number of API calls you make to the LLM provider. This guide will walk you through how to enable this in your apps. * OpenAI * Anthropic * Azure * Google * Cohere * FireworksAI * Groq * MistralAI * TogetherAI pip install -qU langchain-openai import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125") pip install -qU langchain-anthropic import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229") pip install -qU langchain-openai import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],) pip install -qU langchain-google-vertexai import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro") pip install -qU langchain-cohere import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r") pip install -qU langchain-fireworks import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct") pip install -qU langchain-groq import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192") pip install -qU langchain-mistralai import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest") pip install -qU langchain-openai import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",) # <!-- ruff: noqa: F821 -->from langchain.globals import set_llm_cache **API Reference:**[set\_llm\_cache](https://api.python.langchain.com/en/latest/globals/langchain.globals.set_llm_cache.html) In Memory Cache[​](#in-memory-cache "Direct link to In Memory Cache") --------------------------------------------------------------------- This is an ephemeral cache that stores model calls in memory. It will be wiped when your environment restarts, and is not shared across processes. 
%%timefrom langchain.cache import InMemoryCacheset_llm_cache(InMemoryCache())# The first time, it is not yet in cache, so it should take longerllm.invoke("Tell me a joke") **API Reference:**[InMemoryCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.InMemoryCache.html) CPU times: user 645 ms, sys: 214 ms, total: 859 msWall time: 829 ms AIMessage(content="Why don't scientists trust atoms?\n\nBecause they make up everything!", response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 11, 'total_tokens': 24}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-b6836bdd-8c30-436b-828f-0ac5fc9ab50e-0') %%time# The second time it is, so it goes fasterllm.invoke("Tell me a joke") CPU times: user 822 Β΅s, sys: 288 Β΅s, total: 1.11 msWall time: 1.06 ms AIMessage(content="Why don't scientists trust atoms?\n\nBecause they make up everything!", response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 11, 'total_tokens': 24}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-b6836bdd-8c30-436b-828f-0ac5fc9ab50e-0') SQLite Cache[​](#sqlite-cache "Direct link to SQLite Cache") ------------------------------------------------------------ This cache implementation uses a `SQLite` database to store responses, and will last across process restarts. !rm .langchain.db # We can do the same thing with a SQLite cachefrom langchain_community.cache import SQLiteCacheset_llm_cache(SQLiteCache(database_path=".langchain.db")) **API Reference:**[SQLiteCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.SQLiteCache.html) %%time# The first time, it is not yet in cache, so it should take longerllm.invoke("Tell me a joke") CPU times: user 9.91 ms, sys: 7.68 ms, total: 17.6 msWall time: 657 ms AIMessage(content='Why did the scarecrow win an award? Because he was outstanding in his field!', response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 11, 'total_tokens': 28}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-39d9e1e8-7766-4970-b1d8-f50213fd94c5-0') %%time# The second time it is, so it goes fasterllm.invoke("Tell me a joke") CPU times: user 52.2 ms, sys: 60.5 ms, total: 113 msWall time: 127 ms AIMessage(content='Why did the scarecrow win an award? Because he was outstanding in his field!', id='run-39d9e1e8-7766-4970-b1d8-f50213fd94c5-0') Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned how to cache model responses to save time and money. Next, check out the other how-to guides chat models in this section, like [how to get a model to return structured output](/v0.2/docs/how_to/structured_output/) or [how to create your own custom chat model](/v0.2/docs/how_to/custom_chat_model/). [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/chat_model_caching.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). 
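Because the cache is a single global setting, it can also be switched off again, for example in tests where every call should hit the provider. A minimal sketch:

```python
from langchain.globals import set_llm_cache

# Passing None removes the global cache; subsequent calls are no longer cached.
set_llm_cache(None)
llm.invoke("Tell me a joke")
```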
null
https://python.langchain.com/v0.2/docs/how_to/chat_models_universal_init/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to init any model in one line On this page How to init any model in one line ================================= Many LLM applications let end users specify what model provider and model they want the application to be powered by. This requires writing some logic to initialize different ChatModels based on some user configuration. The `init_chat_model()` helper method makes it easy to initialize a number of different model integrations without having to worry about import paths and class names. Supported models See the [init\_chat\_model()](https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.init_chat_model.html) API reference for a full list of supported integrations. Make sure you have the integration packages installed for any model providers you want to support. E.g. you should have `langchain-openai` installed to init an OpenAI model. %pip install -qU langchain langchain-openai langchain-anthropic langchain-google-vertexai Basic usage[​](#basic-usage "Direct link to Basic usage") --------------------------------------------------------- from langchain.chat_models import init_chat_model# Returns a langchain_openai.ChatOpenAI instance.gpt_4o = init_chat_model("gpt-4o", model_provider="openai", temperature=0)# Returns a langchain_anthropic.ChatAnthropic instance.claude_opus = init_chat_model( "claude-3-opus-20240229", model_provider="anthropic", temperature=0)# Returns a langchain_google_vertexai.ChatVertexAI instance.gemini_15 = init_chat_model( "gemini-1.5-pro", model_provider="google_vertexai", temperature=0)# Since all model integrations implement the ChatModel interface, you can use them in the same way.print("GPT-4o: " + gpt_4o.invoke("what's your name").content + "\n")print("Claude Opus: " + claude_opus.invoke("what's your name").content + "\n")print("Gemini 1.5: " + gemini_15.invoke("what's your name").content + "\n") **API Reference:**[init\_chat\_model](https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.init_chat_model.html) GPT-4o: I'm an AI created by OpenAI, and I don't have a personal name. You can call me Assistant! How can I help you today?Claude Opus: My name is Claude. It's nice to meet you!Gemini 1.5: I am a large language model, trained by Google. I do not have a name. Simple config example[​](#simple-config-example "Direct link to Simple config example") --------------------------------------------------------------------------------------- user_config = { "model": "...user-specified...", "model_provider": "...user-specified...", "temperature": 0, "max_tokens": 1000,}llm = init_chat_model(**user_config)llm.invoke("what's your name") Inferring model provider[​](#inferring-model-provider "Direct link to Inferring model provider") ------------------------------------------------------------------------------------------------ For common and distinct model names `init_chat_model()` will attempt to infer the model provider. See the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.init_chat_model.html) for a full list of inference behavior. E.g. any model that starts with `gpt-3...` or `gpt-4...` will be inferred as using model provider `openai`. 
gpt_4o = init_chat_model("gpt-4o", temperature=0)claude_opus = init_chat_model("claude-3-opus-20240229", temperature=0)gemini_15 = init_chat_model("gemini-1.5-pro", temperature=0)
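Since `init_chat_model()` returns a standard chat model, the result can be dropped into a chain like any other model. A short sketch (assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set):

```python
from langchain.chat_models import init_chat_model
from langchain_core.prompts import ChatPromptTemplate

llm = init_chat_model("gpt-4o", temperature=0)  # provider inferred as "openai"
prompt = ChatPromptTemplate.from_template("Translate to French: {text}")
chain = prompt | llm
print(chain.invoke({"text": "good morning"}).content)
```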
null
https://python.langchain.com/v0.2/docs/how_to/document_loader_pdf/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to load PDFs On this page How to load PDFs ================ [Portable Document Format (PDF)](https://en.wikipedia.org/wiki/PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems. This guide covers how to load `PDF` documents into the LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) format that we use downstream. LangChain integrates with a host of PDF parsers. Some are simple and relatively low-level; others will support OCR and image-processing, or perform advanced document layout analysis. The right choice will depend on your application. Below we enumerate the possibilities. Using PyPDF[​](#using-pypdf "Direct link to Using PyPDF") --------------------------------------------------------- Here we load a PDF using `pypdf` into array of documents, where each document contains the page content and metadata with `page` number. %pip install pypdf from langchain_community.document_loaders import PyPDFLoaderfile_path = ( "../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf")loader = PyPDFLoader(file_path)pages = loader.load_and_split()pages[0] **API Reference:**[PyPDFLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.PyPDFLoader.html) Document(page_content='LayoutParser : A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1( \x00), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1Allen Institute for AI\nshannons@allenai.org\n2Brown University\nruochen zhang@brown.edu\n3Harvard University\n{melissadell,jacob carlson }@fas.harvard.edu\n4University of Washington\nbcgl@cs.washington.edu\n5University of Waterloo\nw422li@uwaterloo.ca\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser , an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. 
We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io .\nKeywords: Document Image Analysis Β·Deep Learning Β·Layout Analysis\nΒ·Character Recognition Β·Open Source library Β·Toolkit.\n1 Introduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classification [ 11,arXiv:2103.15348v2 [cs.CV] 21 Jun 2021', metadata={'source': '../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf', 'page': 0}) An advantage of this approach is that documents can be retrieved with page numbers. ### Vector search over PDFs[​](#vector-search-over-pdfs "Direct link to Vector search over PDFs") Once we have loaded PDFs into LangChain `Document` objects, we can index them (e.g., a RAG application) in the usual way: %pip install faiss-cpu # use `pip install faiss-gpu` for CUDA GPU support import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") from langchain_community.vectorstores import FAISSfrom langchain_openai import OpenAIEmbeddingsfaiss_index = FAISS.from_documents(pages, OpenAIEmbeddings())docs = faiss_index.similarity_search("What is LayoutParser?", k=2)for doc in docs: print(str(doc.metadata["page"]) + ":", doc.page_content[:300]) **API Reference:**[FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) 13: 14 Z. Shen et al.6 ConclusionLayoutParser provides a comprehensive toolkit for deep learning-based documentimage analysis. The off-the-shelf library is easy to install, and can be used tobuild flexible and accurate pipelines for processing documents with complicatedstructures. It also supports hi0: LayoutParser : A Unified Toolkit for DeepLearning Based Document Image AnalysisZejiang Shen1( ), Ruochen Zhang2, Melissa Dell3, Benjamin Charles GermainLee4, Jacob Carlson3, and Weining Li51Allen Institute for AIshannons@allenai.org2Brown Universityruochen zhang@brown.edu3Harvard University ### Extract text from images[​](#extract-text-from-images "Direct link to Extract text from images") Some PDFs contain images of text-- e.g., within scanned documents, or figures. Using the `rapidocr-onnxruntime` package we can extract images as text as well: %pip install rapidocr-onnxruntime loader = PyPDFLoader("https://arxiv.org/pdf/2103.15348.pdf", extract_images=True)pages = loader.load()pages[4].page_content 'LayoutParser : A Unified Toolkit for DL-Based DIA 5\nTable 1: Current layout detection models in the LayoutParser model zoo\nDataset Base Model1Large Model Notes\nPubLayNet [38] F / M M Layouts of modern scientific documents\nPRImA [3] M - Layouts of scanned modern magazines and scientific reports\nNewspaper [17] F - Layouts of scanned US newspapers from the 20th century\nTableBank [18] F F Table region on modern scientific and business document\nHJDataset [31] F / M - Layouts of history Japanese documents\n1For each dataset, we train several models of different sizes for different needs (the trade-off between accuracy\nvs. computational cost). For β€œbase model” and β€œlarge model”, we refer to using the ResNet 50 or ResNet 101\nbackbones [ 13], respectively. 
One can train models of different architectures, like Faster R-CNN [ 28] (F) and Mask\nR-CNN [ 12] (M). For example, an F in the Large Model column indicates it has a Faster R-CNN model trained\nusing the ResNet 101 backbone. The platform is maintained and a number of additions will be made to the model\nzoo in coming months.\nlayout data structures , which are optimized for efficiency and versatility. 3) When\nnecessary, users can employ existing or customized OCR models via the unified\nAPI provided in the OCR module . 4)LayoutParser comes with a set of utility\nfunctions for the visualization and storage of the layout data. 5) LayoutParser\nis also highly customizable, via its integration with functions for layout data\nannotation and model training . We now provide detailed descriptions for each\ncomponent.\n3.1 Layout Detection Models\nInLayoutParser , a layout model takes a document image as an input and\ngenerates a list of rectangular boxes for the target content regions. Different\nfrom traditional methods, it relies on deep convolutional neural networks rather\nthan manually curated rules to identify content regions. It is formulated as an\nobject detection problem and state-of-the-art models like Faster R-CNN [ 28] and\nMask R-CNN [ 12] are used. This yields prediction results of high accuracy and\nmakes it possible to build a concise, generalized interface for layout detection.\nLayoutParser , built upon Detectron2 [ 35], provides a minimal API that can\nperform layout detection with only four lines of code in Python:\n1import layoutparser as lp\n2image = cv2. imread (" image_file ") # load images\n3model = lp. Detectron2LayoutModel (\n4 "lp :// PubLayNet / faster_rcnn_R_50_FPN_3x / config ")\n5layout = model . detect ( image )\nLayoutParser provides a wealth of pre-trained model weights using various\ndatasets covering different languages, time periods, and document types. Due to\ndomain shift [ 7], the prediction performance can notably drop when models are ap-\nplied to target samples that are significantly different from the training dataset. As\ndocument structures and layouts vary greatly in different domains, it is important\nto select models trained on a dataset similar to the test samples. A semantic syntax\nis used for initializing the model weights in LayoutParser , using both the dataset\nname and model name lp://<dataset-name>/<model-architecture-name> .' Using PyMuPDF[​](#using-pymupdf "Direct link to Using PyMuPDF") --------------------------------------------------------------- This is the fastest of the PDF parsing options, and contains detailed metadata about the PDF and its pages, as well as returns one document per page. from langchain_community.document_loaders import PyMuPDFLoaderloader = PyMuPDFLoader("example_data/layout-parser-paper.pdf")data = loader.load()data[0] **API Reference:**[PyMuPDFLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.PyMuPDFLoader.html) Additionally, you can pass along any of the options from the [PyMuPDF documentation](https://pymupdf.readthedocs.io/en/latest/app1.html#plain-text/) as keyword arguments in the `load` call, and it will be pass along to the `get_text()` call. 
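For example, a sketch that forwards one such option; treat `sort=True` (which asks PyMuPDF to return blocks in natural reading order) as an assumption about your installed PyMuPDF version:

```python
from langchain_community.document_loaders import PyMuPDFLoader

loader = PyMuPDFLoader("example_data/layout-parser-paper.pdf")
# Keyword arguments given to load() are forwarded to PyMuPDF's get_text().
data = loader.load(sort=True)
```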
Using MathPix[​](#using-mathpix "Direct link to Using MathPix") --------------------------------------------------------------- Inspired by Daniel Gross's [https://gist.github.com/danielgross/3ab4104e14faccc12b49200843adab21](https://gist.github.com/danielgross/3ab4104e14faccc12b49200843adab21) from langchain_community.document_loaders import MathpixPDFLoaderfile_path = ( "../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf")loader = MathpixPDFLoader(file_path)data = loader.load() **API Reference:**[MathpixPDFLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.MathpixPDFLoader.html) Using Unstructured[​](#using-unstructured "Direct link to Using Unstructured") ------------------------------------------------------------------------------ [Unstructured](https://unstructured-io.github.io/unstructured/) supports a common interface for working with unstructured or semi-structured file formats, such as Markdown or PDF. LangChain's [UnstructuredPDFLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.UnstructuredPDFLoader.html) integrates with Unstructured to parse PDF documents into LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects. from langchain_community.document_loaders import UnstructuredPDFLoaderfile_path = ( "../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf")loader = UnstructuredPDFLoader(file_path)data = loader.load() **API Reference:**[UnstructuredPDFLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.UnstructuredPDFLoader.html) ### Retain Elements[​](#retain-elements "Direct link to Retain Elements") Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode="elements"`. file_path = ( "../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf")loader = UnstructuredPDFLoader(file_path, mode="elements")data = loader.load()data[0] Document(page_content='1 2 0 2', metadata={'source': '../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 213.36), (16.34, 253.36), (36.34, 253.36), (36.34, 213.36)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': '../../../docs/integrations/document_loaders/example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2024-03-18T13:22:22', 'page_number': 1, 'filetype': 'application/pdf', 'category': 'UncategorizedText'}) See the full set of element types for this particular document: set(doc.metadata["category"] for doc in data) {'ListItem', 'NarrativeText', 'Title', 'UncategorizedText'} ### Fetching remote PDFs using Unstructured[​](#fetching-remote-pdfs-using-unstructured "Direct link to Fetching remote PDFs using Unstructured") This covers how to load online PDFs into a document format that we can use downstream. 
This can be used for various online PDF sites such as [https://open.umn.edu/opentextbooks/textbooks/](https://open.umn.edu/opentextbooks/textbooks/) and [https://arxiv.org/archive/](https://arxiv.org/archive/) Note: all other PDF loaders can also be used to fetch remote PDFs, but `OnlinePDFLoader` is a legacy function, and works specifically with `UnstructuredPDFLoader`. from langchain_community.document_loaders import OnlinePDFLoaderloader = OnlinePDFLoader("https://arxiv.org/pdf/2302.03803.pdf")data = loader.load() **API Reference:**[OnlinePDFLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.OnlinePDFLoader.html) Using PyPDFium2[​](#using-pypdfium2 "Direct link to Using PyPDFium2") --------------------------------------------------------------------- from langchain_community.document_loaders import PyPDFium2Loaderfile_path = ( "../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf")loader = PyPDFium2Loader(file_path)data = loader.load() **API Reference:**[PyPDFium2Loader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.PyPDFium2Loader.html) Using PDFMiner[​](#using-pdfminer "Direct link to Using PDFMiner") ------------------------------------------------------------------ from langchain_community.document_loaders import PDFMinerLoaderfile_path = ( "../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf")loader = PDFMinerLoader(file_path)data = loader.load() **API Reference:**[PDFMinerLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.PDFMinerLoader.html) ### Using PDFMiner to generate HTML text[​](#using-pdfminer-to-generate-html-text "Direct link to Using PDFMiner to generate HTML text") This can be helpful for chunking texts semantically into sections as the output html content can be parsed via `BeautifulSoup` to get more structured and rich information about font size, page numbers, PDF headers/footers, etc. from langchain_community.document_loaders import PDFMinerPDFasHTMLLoaderfile_path = ( "../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf")loader = PDFMinerPDFasHTMLLoader(file_path)data = loader.load()[0] **API Reference:**[PDFMinerPDFasHTMLLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.PDFMinerPDFasHTMLLoader.html) from bs4 import BeautifulSoupsoup = BeautifulSoup(data.page_content, "html.parser")content = soup.find_all("div") import recur_fs = Nonecur_text = ""snippets = [] # first collect all snippets that have the same font sizefor c in content: sp = c.find("span") if not sp: continue st = sp.get("style") if not st: continue fs = re.findall("font-size:(\d+)px", st) if not fs: continue fs = int(fs[0]) if not cur_fs: cur_fs = fs if fs == cur_fs: cur_text += c.text else: snippets.append((cur_text, cur_fs)) cur_fs = fs cur_text = c.textsnippets.append((cur_text, cur_fs))# Note: The above logic is very straightforward. 
One can also add more strategies such as removing duplicate snippets (as# headers/footers in a PDF appear on multiple pages so if we find duplicates it's safe to assume that it is redundant info) from langchain_core.documents import Documentcur_idx = -1semantic_snippets = []# Assumption: headings have higher font size than their respective contentfor s in snippets: # if current snippet's font size > previous section's heading => it is a new heading if ( not semantic_snippets or s[1] > semantic_snippets[cur_idx].metadata["heading_font"] ): metadata = {"heading": s[0], "content_font": 0, "heading_font": s[1]} metadata.update(data.metadata) semantic_snippets.append(Document(page_content="", metadata=metadata)) cur_idx += 1 continue # if current snippet's font size <= previous section's content => content belongs to the same section (one can also create # a tree like structure for sub sections if needed but that may require some more thinking and may be data specific) if ( not semantic_snippets[cur_idx].metadata["content_font"] or s[1] <= semantic_snippets[cur_idx].metadata["content_font"] ): semantic_snippets[cur_idx].page_content += s[0] semantic_snippets[cur_idx].metadata["content_font"] = max( s[1], semantic_snippets[cur_idx].metadata["content_font"] ) continue # if current snippet's font size > previous section's content but less than previous section's heading than also make a new # section (e.g. title of a PDF will have the highest font size but we don't want it to subsume all sections) metadata = {"heading": s[0], "content_font": 0, "heading_font": s[1]} metadata.update(data.metadata) semantic_snippets.append(Document(page_content="", metadata=metadata)) cur_idx += 1 **API Reference:**[Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) semantic_snippets[4] Document(page_content='Recently, various DL models and datasets have been developed for layout analysis\ntasks. The dhSegment [22] utilizes fully convolutional networks [20] for segmen-\ntation tasks on historical documents. Object detection-based methods like Faster\nR-CNN [28] and Mask R-CNN [12] are used for identifying document elements [38]\nand detecting tables [30, 26]. Most recently, Graph Neural Networks [29] have also\nbeen used in table detection [27]. However, these models are usually implemented\nindividually and there is no unified framework to load and use such models.\nThere has been a surge of interest in creating open-source tools for document\nimage processing: a search of document image analysis in Github leads to 5M\nrelevant code pieces 6; yet most of them rely on traditional rule-based methods\nor provide limited functionalities. The closest prior research to our work is the\nOCR-D project7, which also tries to build a complete toolkit for DIA. However,\nsimilar to the platform developed by Neudecker et al. [21], it is designed for\nanalyzing historical documents, and provides no supports for recent DL models.\nThe DocumentLayoutAnalysis project8 focuses on processing born-digital PDF\ndocuments via analyzing the stored PDF data. Repositories like DeepLayout9\nand Detectron2-PubLayNet10 are individual deep learning models trained on\nlayout analysis datasets without support for the full DIA pipeline. The Document\nAnalysis and Exploitation (DAE) platform [15] and the DeepDIVA project [2]\naim to improve the reproducibility of DIA methods (or DL models), yet they\nare not actively maintained. 
OCR engines like Tesseract [14], easyOCR11 and\npaddleOCR12 usually do not come with comprehensive functionalities for other\nDIA tasks like layout analysis.\nRecent years have also seen numerous efforts to create libraries for promoting\nreproducibility and reusability in the field of DL. Libraries like Dectectron2 [35],\n6 The number shown is obtained by specifying the search type as β€˜code’.\n7 https://ocr-d.de/en/about\n8 https://github.com/BobLd/DocumentLayoutAnalysis\n9 https://github.com/leonlulu/DeepLayout\n10 https://github.com/hpanwar08/detectron2\n11 https://github.com/JaidedAI/EasyOCR\n12 https://github.com/PaddlePaddle/PaddleOCR\n4\nZ. Shen et al.\nFig. 1: The overall architecture of LayoutParser. For an input document image,\nthe core LayoutParser library provides a set of off-the-shelf tools for layout\ndetection, OCR, visualization, and storage, backed by a carefully designed layout\ndata structure. LayoutParser also supports high level customization via efficient\nlayout annotation and model training functions. These improve model accuracy\non the target samples. The community platform enables the easy sharing of DIA\nmodels and whole digitization pipelines to promote reusability and reproducibility.\nA collection of detailed documentation, tutorials and exemplar projects make\nLayoutParser easy to learn and use.\nAllenNLP [8] and transformers [34] have provided the community with complete\nDL-based support for developing and deploying models for general computer\nvision and natural language processing problems. LayoutParser, on the other\nhand, specializes specifically in DIA tasks. LayoutParser is also equipped with a\ncommunity platform inspired by established model hubs such as Torch Hub [23]\nand TensorFlow Hub [1]. It enables the sharing of pretrained models as well as\nfull document processing pipelines that are unique to DIA tasks.\nThere have been a variety of document data collections to facilitate the\ndevelopment of DL models. Some examples include PRImA [3](magazine layouts),\nPubLayNet [38](academic paper layouts), Table Bank [18](tables in academic\npapers), Newspaper Navigator Dataset [16, 17](newspaper figure layouts) and\nHJDataset [31](historical Japanese document layouts). A spectrum of models\ntrained on these datasets are currently available in the LayoutParser model zoo\nto support different use cases.\n', metadata={'heading': '2 Related Work\n', 'content_font': 9, 'heading_font': 11, 'source': '../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf'}) PyPDF Directory[​](#pypdf-directory "Direct link to PyPDF Directory") --------------------------------------------------------------------- Load PDFs from directory from langchain_community.document_loaders import PyPDFDirectoryLoader **API Reference:**[PyPDFDirectoryLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.PyPDFDirectoryLoader.html) directory_path = "../../../docs/integrations/document_loaders/example_data/"loader = PyPDFDirectoryLoader("example_data/")docs = loader.load() Using PDFPlumber[​](#using-pdfplumber "Direct link to Using PDFPlumber") ------------------------------------------------------------------------ Like PyMuPDF, the output Documents contain detailed metadata about the PDF and its pages, and returns one document per page. 
from langchain_community.document_loaders import PDFPlumberLoaderfile_path = ( "../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf")loader = PDFPlumberLoader(file_path)data = loader.load()data[0] **API Reference:**[PDFPlumberLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.PDFPlumberLoader.html) Using AmazonTextractPDFParser[​](#using-amazontextractpdfparser "Direct link to Using AmazonTextractPDFParser") --------------------------------------------------------------------------------------------------------------- The AmazonTextractPDFLoader calls the [Amazon Textract Service](https://aws.amazon.com/textract/) to convert PDFs into a Document structure. The loader does pure OCR at the moment, with more features like layout support planned, depending on demand. Single- and multi-page documents are supported, up to 3000 pages and 512 MB in size. For the call to be successful, an AWS account is required, similar to the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) requirements. Besides the AWS configuration, it is very similar to the other PDF loaders, while also supporting JPEG, PNG and TIFF as well as non-native PDF formats. from langchain_community.document_loaders import AmazonTextractPDFLoaderloader = AmazonTextractPDFLoader("example_data/alejandro_rosalez_sample-small.jpeg")documents = loader.load() **API Reference:**[AmazonTextractPDFLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.AmazonTextractPDFLoader.html) Using AzureAIDocumentIntelligenceLoader[​](#using-azureaidocumentintelligenceloader "Direct link to Using AzureAIDocumentIntelligenceLoader") --------------------------------------------------------------------------------------------------------------------------------------------- [Azure AI Document Intelligence](https://aka.ms/doc-intelligence) (formerly known as `Azure Form Recognizer`) is a machine-learning based service that extracts text (including handwriting), tables, document structures (e.g., titles, section headings, etc.) and key-value pairs from digital or scanned PDFs, images, Office and HTML files. Document Intelligence supports `PDF`, `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF`, `DOCX`, `XLSX`, `PPTX` and `HTML`. This [current implementation](https://aka.ms/di-langchain) of a loader using `Document Intelligence` can incorporate content page-wise and turn it into LangChain documents. The default output format is markdown, which can be easily chained with `MarkdownHeaderTextSplitter` for semantic document chunking. You can also use `mode="single"` or `mode="page"` to return pure text as a single document or split by page. ### Prerequisite[​](#prerequisite "Direct link to Prerequisite") An Azure AI Document Intelligence resource in one of the 3 preview regions: **East US**, **West US2**, **West Europe** - follow [this document](https://learn.microsoft.com/azure/ai-services/document-intelligence/create-document-intelligence-resource?view=doc-intel-4.0.0) to create one if you don't have one already. You will be passing `<endpoint>` and `<key>` as parameters to the loader.
%pip install --upgrade --quiet langchain langchain-community azure-ai-documentintelligence from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoaderfile_path = "<filepath>"endpoint = "<endpoint>"key = "<key>"loader = AzureAIDocumentIntelligenceLoader( api_endpoint=endpoint, api_key=key, file_path=file_path, api_model="prebuilt-layout")documents = loader.load() **API Reference:**[AzureAIDocumentIntelligenceLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.doc_intelligence.AzureAIDocumentIntelligenceLoader.html)
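Since the default output is markdown, one way to get semantic chunks is to feed the loaded content to `MarkdownHeaderTextSplitter`, as mentioned above. A minimal sketch, assuming the `documents` list from the previous cell and illustrative header levels you would adapt to your files:

```python
from langchain_text_splitters import MarkdownHeaderTextSplitter

# Split the markdown produced by the loader on H1/H2 headers.
# The metadata key names ("Header 1", "Header 2") are arbitrary labels.
splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "Header 1"), ("##", "Header 2")]
)
chunks = splitter.split_text(documents[0].page_content)
```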
https://python.langchain.com/v0.2/docs/how_to/document_loader_office_file/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to load Microsoft Office files On this page How to load Microsoft Office files ================================== The [Microsoft Office](https://www.office.com/) suite of productivity software includes Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Microsoft Outlook, and Microsoft OneNote. It is available for Microsoft Windows and macOS operating systems. It is also available on Android and iOS. This covers how to load commonly used file formats including `DOCX`, `XLSX` and `PPTX` documents into a LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) object that we can use downstream. Loading DOCX, XLSX, PPTX with AzureAIDocumentIntelligenceLoader[​](#loading-docx-xlsx-pptx-with-azureaidocumentintelligenceloader "Direct link to Loading DOCX, XLSX, PPTX with AzureAIDocumentIntelligenceLoader") ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [Azure AI Document Intelligence](https://aka.ms/doc-intelligence) (formerly known as `Azure Form Recognizer`) is machine-learning based service that extracts texts (including handwriting), tables, document structures (e.g., titles, section headings, etc.) and key-value-pairs from digital or scanned PDFs, images, Office and HTML files. Document Intelligence supports `PDF`, `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF`, `DOCX`, `XLSX`, `PPTX` and `HTML`. This [current implementation](https://aka.ms/di-langchain) of a loader using `Document Intelligence` can incorporate content page-wise and turn it into LangChain documents. The default output format is markdown, which can be easily chained with `MarkdownHeaderTextSplitter` for semantic document chunking. You can also use `mode="single"` or `mode="page"` to return pure texts in a single page or document split by page. ### Prerequisite[​](#prerequisite "Direct link to Prerequisite") An Azure AI Document Intelligence resource in one of the 3 preview regions: **East US**, **West US2**, **West Europe** - follow [this document](https://learn.microsoft.com/azure/ai-services/document-intelligence/create-document-intelligence-resource?view=doc-intel-4.0.0) to create one if you don't have. You will be passing `<endpoint>` and `<key>` as parameters to the loader. %pip install --upgrade --quiet langchain langchain-community azure-ai-documentintelligencefrom langchain_community.document_loaders import AzureAIDocumentIntelligenceLoaderfile_path = "<filepath>"endpoint = "<endpoint>"key = "<key>"loader = AzureAIDocumentIntelligenceLoader( api_endpoint=endpoint, api_key=key, file_path=file_path, api_model="prebuilt-layout")documents = loader.load() **API Reference:**[AzureAIDocumentIntelligenceLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.doc_intelligence.AzureAIDocumentIntelligenceLoader.html) [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/document_loader_office_file.mdx) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). 
https://python.langchain.com/v0.2/docs/how_to/ensemble_retriever/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to combine results from multiple retrievers On this page How to combine results from multiple retrievers =============================================== The [EnsembleRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.ensemble.EnsembleRetriever.html) supports ensembling of results from multiple retrievers. It is initialized with a list of [BaseRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_core.retrievers.BaseRetriever.html) objects. EnsembleRetrievers rerank the results of the constituent retrievers based on the [Reciprocal Rank Fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) algorithm. By leveraging the strengths of different algorithms, the `EnsembleRetriever` can achieve better performance than any single algorithm. The most common pattern is to combine a sparse retriever (like BM25) with a dense retriever (like embedding similarity), because their strengths are complementary. It is also known as "hybrid search". The sparse retriever is good at finding relevant documents based on keywords, while the dense retriever is good at finding relevant documents based on semantic similarity. Basic usage[​](#basic-usage "Direct link to Basic usage") --------------------------------------------------------- Below we demonstrate ensembling of a [BM25Retriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.bm25.BM25Retriever.html) with a retriever derived from the [FAISS vector store](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html). %pip install --upgrade --quiet rank_bm25 > /dev/null from langchain.retrievers import EnsembleRetrieverfrom langchain_community.retrievers import BM25Retrieverfrom langchain_community.vectorstores import FAISSfrom langchain_openai import OpenAIEmbeddingsdoc_list_1 = [ "I like apples", "I like oranges", "Apples and oranges are fruits",]# initialize the bm25 retriever and faiss retrieverbm25_retriever = BM25Retriever.from_texts( doc_list_1, metadatas=[{"source": 1}] * len(doc_list_1))bm25_retriever.k = 2doc_list_2 = [ "You like apples", "You like oranges",]embedding = OpenAIEmbeddings()faiss_vectorstore = FAISS.from_texts( doc_list_2, embedding, metadatas=[{"source": 2}] * len(doc_list_2))faiss_retriever = faiss_vectorstore.as_retriever(search_kwargs={"k": 2})# initialize the ensemble retrieverensemble_retriever = EnsembleRetriever( retrievers=[bm25_retriever, faiss_retriever], weights=[0.5, 0.5]) **API Reference:**[EnsembleRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.ensemble.EnsembleRetriever.html) | [BM25Retriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.bm25.BM25Retriever.html) | [FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) docs = ensemble_retriever.invoke("apples")docs [Document(page_content='I like apples', metadata={'source': 1}), Document(page_content='You like apples', metadata={'source': 2}), Document(page_content='Apples and oranges are fruits', metadata={'source': 1}), Document(page_content='You like oranges', metadata={'source': 2})] Runtime Configuration[​](#runtime-configuration "Direct link to Runtime Configuration") 
--------------------------------------------------------------------------------------- We can also configure the individual retrievers at runtime using [configurable fields](/v0.2/docs/how_to/configure/). Below we update the "top-k" parameter for the FAISS retriever specifically: from langchain_core.runnables import ConfigurableFieldfaiss_retriever = faiss_vectorstore.as_retriever( search_kwargs={"k": 2}).configurable_fields( search_kwargs=ConfigurableField( id="search_kwargs_faiss", name="Search Kwargs", description="The search kwargs to use", ))ensemble_retriever = EnsembleRetriever( retrievers=[bm25_retriever, faiss_retriever], weights=[0.5, 0.5]) **API Reference:**[ConfigurableField](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.utils.ConfigurableField.html) config = {"configurable": {"search_kwargs_faiss": {"k": 1}}}docs = ensemble_retriever.invoke("apples", config=config)docs [Document(page_content='I like apples', metadata={'source': 1}), Document(page_content='You like apples', metadata={'source': 2}), Document(page_content='Apples and oranges are fruits', metadata={'source': 1})] Notice that this only returns one source from the FAISS retriever, because we pass in the relevant configuration at run time [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/ensemble_retriever.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous Text embedding models ](/v0.2/docs/how_to/embed_text/)[ Next How to select examples by length ](/v0.2/docs/how_to/example_selectors_length_based/) * [Basic usage](#basic-usage) * [Runtime Configuration](#runtime-configuration)
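For intuition about the Reciprocal Rank Fusion weighting described at the top of this page, here is a minimal, self-contained sketch of the scoring idea. It is not the library's internal implementation; the constant `c = 60` and the toy document IDs are assumptions for illustration only:

```python
from collections import defaultdict

def rrf_merge(ranked_lists, weights, c=60):
    """Toy Reciprocal Rank Fusion: score(doc) = sum(weight / (c + rank))."""
    scores = defaultdict(float)
    for docs, weight in zip(ranked_lists, weights):
        for rank, doc in enumerate(docs, start=1):
            scores[doc] += weight / (c + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Two retrievers returning overlapping results, fused with equal weights.
print(rrf_merge([["a", "b", "c"], ["b", "d", "a"]], weights=[0.5, 0.5]))
```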
https://python.langchain.com/v0.2/docs/how_to/example_selectors_mmr/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to select examples by maximal marginal relevance (MMR) How to select examples by maximal marginal relevance (MMR) ========================================================== The `MaxMarginalRelevanceExampleSelector` selects examples based on a combination of which examples are most similar to the inputs, while also optimizing for diversity. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs, and then iteratively adding them while penalizing them for closeness to already selected examples. from langchain_community.vectorstores import FAISSfrom langchain_core.example_selectors import ( MaxMarginalRelevanceExampleSelector, SemanticSimilarityExampleSelector,)from langchain_core.prompts import FewShotPromptTemplate, PromptTemplatefrom langchain_openai import OpenAIEmbeddingsexample_prompt = PromptTemplate( input_variables=["input", "output"], template="Input: {input}\nOutput: {output}",)# Examples of a pretend task of creating antonyms.examples = [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}, {"input": "energetic", "output": "lethargic"}, {"input": "sunny", "output": "gloomy"}, {"input": "windy", "output": "calm"},] **API Reference:**[FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [MaxMarginalRelevanceExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html) | [SemanticSimilarityExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.SemanticSimilarityExampleSelector.html) | [FewShotPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.few_shot.FewShotPromptTemplate.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) example_selector = MaxMarginalRelevanceExampleSelector.from_examples( # The list of examples available to select from. examples, # The embedding class used to produce embeddings which are used to measure semantic similarity. OpenAIEmbeddings(), # The VectorStore class that is used to store the embeddings and do a similarity search over. FAISS, # The number of examples to produce. k=2,)mmr_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"],) # Input is a feeling, so should select the happy/sad example as the first oneprint(mmr_prompt.format(adjective="worried")) Give the antonym of every inputInput: happyOutput: sadInput: windyOutput: calmInput: worriedOutput: # Let's compare this to what we would just get if we went solely off of similarity,# by using SemanticSimilarityExampleSelector instead of MaxMarginalRelevanceExampleSelector.example_selector = SemanticSimilarityExampleSelector.from_examples( # The list of examples available to select from. examples, # The embedding class used to produce embeddings which are used to measure semantic similarity. 
OpenAIEmbeddings(), # The VectorStore class that is used to store the embeddings and do a similarity search over. FAISS, # The number of examples to produce. k=2,)similar_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"],)print(similar_prompt.format(adjective="worried")) Give the antonym of every inputInput: happyOutput: sadInput: sunnyOutput: gloomyInput: worriedOutput: [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/example_selectors_mmr.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to select examples by length ](/v0.2/docs/how_to/example_selectors_length_based/)[ Next How to select examples by n-gram overlap ](/v0.2/docs/how_to/example_selectors_ngram/)
https://python.langchain.com/v0.2/docs/how_to/example_selectors_similarity/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to select examples by similarity How to select examples by similarity ==================================== This object selects examples based on similarity to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs. from langchain_chroma import Chromafrom langchain_core.example_selectors import SemanticSimilarityExampleSelectorfrom langchain_core.prompts import FewShotPromptTemplate, PromptTemplatefrom langchain_openai import OpenAIEmbeddingsexample_prompt = PromptTemplate( input_variables=["input", "output"], template="Input: {input}\nOutput: {output}",)# Examples of a pretend task of creating antonyms.examples = [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}, {"input": "energetic", "output": "lethargic"}, {"input": "sunny", "output": "gloomy"}, {"input": "windy", "output": "calm"},] **API Reference:**[SemanticSimilarityExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.SemanticSimilarityExampleSelector.html) | [FewShotPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.few_shot.FewShotPromptTemplate.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) example_selector = SemanticSimilarityExampleSelector.from_examples( # The list of examples available to select from. examples, # The embedding class used to produce embeddings which are used to measure semantic similarity. OpenAIEmbeddings(), # The VectorStore class that is used to store the embeddings and do a similarity search over. Chroma, # The number of examples to produce. k=1,)similar_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"],) # Input is a feeling, so should select the happy/sad exampleprint(similar_prompt.format(adjective="worried")) Give the antonym of every inputInput: happyOutput: sadInput: worriedOutput: # Input is a measurement, so should select the tall/short exampleprint(similar_prompt.format(adjective="large")) Give the antonym of every inputInput: tallOutput: shortInput: largeOutput: # You can add new examples to the SemanticSimilarityExampleSelector as wellsimilar_prompt.example_selector.add_example( {"input": "enthusiastic", "output": "apathetic"})print(similar_prompt.format(adjective="passionate")) Give the antonym of every inputInput: enthusiasticOutput: apatheticInput: passionateOutput: [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/example_selectors_similarity.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to select examples by n-gram overlap ](/v0.2/docs/how_to/example_selectors_ngram/)[ Next How to use reference examples when doing extraction ](/v0.2/docs/how_to/extraction_examples/)
https://python.langchain.com/v0.2/docs/how_to/embed_text/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * Text embedding models On this page Text embedding models ===================== info Head to [Integrations](/v0.2/docs/integrations/text_embedding/) for documentation on built-in integrations with text embedding model providers. The Embeddings class is a class designed for interfacing with text embedding models. There are lots of embedding model providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them. Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space. The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former, `.embed_documents`, takes as input multiple texts, while the latter, `.embed_query`, takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself). `.embed_query` will return a list of floats, whereas `.embed_documents` returns a list of lists of floats. Get started[​](#get-started "Direct link to Get started") --------------------------------------------------------- ### Setup[​](#setup "Direct link to Setup") * OpenAI * Cohere * Hugging Face To start we'll need to install the OpenAI partner package: pip install langchain-openai Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable by running: export OPENAI_API_KEY="..." If you'd prefer not to set an environment variable you can pass the key in directly via the `api_key` named parameter when initiating the OpenAI LLM class: from langchain_openai import OpenAIEmbeddingsembeddings_model = OpenAIEmbeddings(api_key="...") **API Reference:**[OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) Otherwise you can initialize without any params: from langchain_openai import OpenAIEmbeddingsembeddings_model = OpenAIEmbeddings() **API Reference:**[OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) To start we'll need to install the Cohere SDK package: pip install langchain-cohere Accessing the API requires an API key, which you can get by creating an account and heading [here](https://dashboard.cohere.com/api-keys). Once we have a key we'll want to set it as an environment variable by running: export COHERE_API_KEY="..." 
If you'd prefer not to set an environment variable you can pass the key in directly via the `cohere_api_key` named parameter when initiating the Cohere LLM class: from langchain_cohere import CohereEmbeddingsembeddings_model = CohereEmbeddings(cohere_api_key="...") **API Reference:**[CohereEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_cohere.embeddings.CohereEmbeddings.html) Otherwise you can initialize without any params: from langchain_cohere import CohereEmbeddingsembeddings_model = CohereEmbeddings() **API Reference:**[CohereEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_cohere.embeddings.CohereEmbeddings.html) To start we'll need to install the Hugging Face partner package: pip install langchain-huggingface You can then load any [Sentence Transformers model](https://huggingface.co/models?library=sentence-transformers) from the Hugging Face Hub. from langchain_huggingface import HuggingFaceEmbeddingsembeddings_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2") **API Reference:**[HuggingFaceEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_huggingface.embeddings.huggingface.HuggingFaceEmbeddings.html) You can also leave the `model_name` blank to use the default [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) model. from langchain_huggingface import HuggingFaceEmbeddingsembeddings_model = HuggingFaceEmbeddings() **API Reference:**[HuggingFaceEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_huggingface.embeddings.huggingface.HuggingFaceEmbeddings.html) ### `embed_documents`[​](#embed_documents "Direct link to embed_documents") #### Embed list of texts[​](#embed-list-of-texts "Direct link to Embed list of texts") Use `.embed_documents` to embed a list of strings, recovering a list of embeddings: embeddings = embeddings_model.embed_documents( [ "Hi there!", "Oh, hello!", "What's your name?", "My friends call me World", "Hello World!" ])len(embeddings), len(embeddings[0]) (5, 1536) ### `embed_query`[​](#embed_query "Direct link to embed_query") #### Embed single query[​](#embed-single-query "Direct link to Embed single query") Use `.embed_query` to embed a single piece of text (e.g., for the purpose of comparing to other embedded pieces of texts). embedded_query = embeddings_model.embed_query("What was the name mentioned in the conversation?")embedded_query[:5] [0.0053587136790156364, -0.0004999046213924885, 0.038883671164512634, -0.003001077566295862, -0.00900818221271038] [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/embed_text.mdx) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to create a dynamic (self-constructing) chain ](/v0.2/docs/how_to/dynamic_chain/)[ Next How to combine results from multiple retrievers ](/v0.2/docs/how_to/ensemble_retriever/) * [Get started](#get-started) * [Setup](#setup) * [`embed_documents`](#embed_documents) * [`embed_query`](#embed_query)
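As a rough illustration of the "similar in the vector space" idea above, you can score an embedded query against embedded documents with cosine similarity. This is a plain `numpy` sketch (not a LangChain API) and assumes the `embeddings` and `embedded_query` variables from the cells above:

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Indices of the documents, most similar to the query first.
ranked = sorted(
    range(len(embeddings)),
    key=lambda i: cosine_similarity(embedded_query, embeddings[i]),
    reverse=True,
)
print(ranked)
```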
https://python.langchain.com/v0.2/docs/how_to/dynamic_chain/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to create a dynamic (self-constructing) chain How to create a dynamic (self-constructing) chain ================================================= Prerequisites This guide assumes familiarity with the following: * [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language) * [How to turn any function into a runnable](/v0.2/docs/how_to/functions/) Sometimes we want to construct parts of a chain at runtime, depending on the chain inputs ([routing](/v0.2/docs/how_to/routing/) is the most common example of this). We can create dynamic chains like this using a very useful property of RunnableLambda's, which is that if a RunnableLambda returns a Runnable, that Runnable is itself invoked. Let's see an example. * OpenAI * Anthropic * Azure * Google * Cohere * FireworksAI * Groq * MistralAI * TogetherAI pip install -qU langchain-openai import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125") pip install -qU langchain-anthropic import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229") pip install -qU langchain-openai import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],) pip install -qU langchain-google-vertexai import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro") pip install -qU langchain-cohere import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r") pip install -qU langchain-fireworks import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct") pip install -qU langchain-groq import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192") pip install -qU langchain-mistralai import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest") pip install -qU langchain-openai import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",) # | echo: falsefrom langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229") **API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) from langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import Runnable, RunnablePassthrough, chaincontextualize_instructions = """Convert the latest user question into a standalone question given the chat history. 
Don't answer the question, return the question and nothing else (no descriptive text)."""contextualize_prompt = ChatPromptTemplate.from_messages( [ ("system", contextualize_instructions), ("placeholder", "{chat_history}"), ("human", "{question}"), ])contextualize_question = contextualize_prompt | llm | StrOutputParser()qa_instructions = ( """Answer the user question given the following context:\n\n{context}.""")qa_prompt = ChatPromptTemplate.from_messages( [("system", qa_instructions), ("human", "{question}")])@chaindef contextualize_if_needed(input_: dict) -> Runnable: if input_.get("chat_history"): # NOTE: This is returning another Runnable, not an actual output. return contextualize_question else: return RunnablePassthrough()@chaindef fake_retriever(input_: dict) -> str: return "egypt's population in 2024 is about 111 million"full_chain = ( RunnablePassthrough.assign(question=contextualize_if_needed).assign( context=fake_retriever ) | qa_prompt | llm | StrOutputParser())full_chain.invoke( { "question": "what about egypt", "chat_history": [ ("human", "what's the population of indonesia"), ("ai", "about 276 million"), ], }) **API Reference:**[StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [Runnable](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [chain](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.chain.html) "According to the context provided, Egypt's population in 2024 is estimated to be about 111 million." The key here is that `contextualize_if_needed` returns another Runnable and not an actual output. This returned Runnable is itself run when the full chain is executed. Looking at the trace we can see that, since we passed in chat\_history, we executed the contextualize\_question chain as part of the full chain: [https://smith.langchain.com/public/9e0ae34c-4082-4f3f-beed-34a2a2f4c991/r](https://smith.langchain.com/public/9e0ae34c-4082-4f3f-beed-34a2a2f4c991/r) Note that the streaming, batching, etc. capabilities of the returned Runnable are all preserved for chunk in contextualize_if_needed.stream( { "question": "what about egypt", "chat_history": [ ("human", "what's the population of indonesia"), ("ai", "about 276 million"), ], }): print(chunk) What is the population of Egypt? [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/dynamic_chain.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to load PDFs ](/v0.2/docs/how_to/document_loader_pdf/)[ Next Text embedding models ](/v0.2/docs/how_to/embed_text/)
https://python.langchain.com/v0.2/docs/how_to/example_selectors_length_based/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to select examples by length How to select examples by length ================================ This example selector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more. from langchain_core.example_selectors import LengthBasedExampleSelectorfrom langchain_core.prompts import FewShotPromptTemplate, PromptTemplate# Examples of a pretend task of creating antonyms.examples = [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}, {"input": "energetic", "output": "lethargic"}, {"input": "sunny", "output": "gloomy"}, {"input": "windy", "output": "calm"},]example_prompt = PromptTemplate( input_variables=["input", "output"], template="Input: {input}\nOutput: {output}",)example_selector = LengthBasedExampleSelector( # The examples it has available to choose from. examples=examples, # The PromptTemplate being used to format the examples. example_prompt=example_prompt, # The maximum length that the formatted examples should be. # Length is measured by the get_text_length function below. max_length=25, # The function used to get the length of a string, which is used # to determine which examples to include. It is commented out because # it is provided as a default value if none is specified. # get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x)))dynamic_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"],) **API Reference:**[LengthBasedExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.length_based.LengthBasedExampleSelector.html) | [FewShotPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.few_shot.FewShotPromptTemplate.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) # An example with small input, so it selects all examples.print(dynamic_prompt.format(adjective="big")) Give the antonym of every inputInput: happyOutput: sadInput: tallOutput: shortInput: energeticOutput: lethargicInput: sunnyOutput: gloomyInput: windyOutput: calmInput: bigOutput: # An example with long input, so it selects only one example.long_string = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else"print(dynamic_prompt.format(adjective=long_string)) Give the antonym of every inputInput: happyOutput: sadInput: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything elseOutput: # You can add an example to an example selector as well.new_example = {"input": "big", "output": "small"}dynamic_prompt.example_selector.add_example(new_example)print(dynamic_prompt.format(adjective="enthusiastic")) Give the antonym of every inputInput: happyOutput: sadInput: tallOutput: shortInput: energeticOutput: lethargicInput: sunnyOutput: gloomyInput: windyOutput: calmInput: bigOutput: smallInput: enthusiasticOutput: [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/example_selectors_length_based.ipynb) * * * 
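The default `get_text_length` above counts whitespace-delimited words. If your budget is measured in model tokens instead, you could swap in a tokenizer. The sketch below assumes the `tiktoken` package and the `cl100k_base` encoding, which may not match your model, and reuses the `examples` and `example_prompt` defined earlier:

```python
import tiktoken
from langchain_core.example_selectors import LengthBasedExampleSelector

enc = tiktoken.get_encoding("cl100k_base")

token_based_selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=50,  # now interpreted as a token budget
    get_text_length=lambda text: len(enc.encode(text)),
)
```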
https://python.langchain.com/v0.2/docs/how_to/example_selectors_ngram/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to select examples by n-gram overlap How to select examples by n-gram overlap ======================================== The `NGramOverlapExampleSelector` selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive. The selector allows for a threshold score to be set. Examples with an ngram overlap score less than or equal to the threshold are excluded. The threshold is set to -1.0, by default, so will not exclude any examples, only reorder them. Setting the threshold to 0.0 will exclude examples that have no ngram overlaps with the input. from langchain_community.example_selectors import NGramOverlapExampleSelectorfrom langchain_core.prompts import FewShotPromptTemplate, PromptTemplateexample_prompt = PromptTemplate( input_variables=["input", "output"], template="Input: {input}\nOutput: {output}",)# Examples of a fictional translation task.examples = [ {"input": "See Spot run.", "output": "Ver correr a Spot."}, {"input": "My dog barks.", "output": "Mi perro ladra."}, {"input": "Spot can run.", "output": "Spot puede correr."},] **API Reference:**[NGramOverlapExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_community.example_selectors.ngram_overlap.NGramOverlapExampleSelector.html) | [FewShotPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.few_shot.FewShotPromptTemplate.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) example_selector = NGramOverlapExampleSelector( # The examples it has available to choose from. examples=examples, # The PromptTemplate being used to format the examples. example_prompt=example_prompt, # The threshold, at which selector stops. # It is set to -1.0 by default. threshold=-1.0, # For negative threshold: # Selector sorts examples by ngram overlap score, and excludes none. # For threshold greater than 1.0: # Selector excludes all examples, and returns an empty list. # For threshold equal to 0.0: # Selector sorts examples by ngram overlap score, # and excludes those with no ngram overlap with input.)dynamic_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. 
example_selector=example_selector, example_prompt=example_prompt, prefix="Give the Spanish translation of every input", suffix="Input: {sentence}\nOutput:", input_variables=["sentence"],) # An example input with large ngram overlap with "Spot can run."# and no overlap with "My dog barks."print(dynamic_prompt.format(sentence="Spot can run fast.")) Give the Spanish translation of every inputInput: Spot can run.Output: Spot puede correr.Input: See Spot run.Output: Ver correr a Spot.Input: My dog barks.Output: Mi perro ladra.Input: Spot can run fast.Output: # You can add examples to NGramOverlapExampleSelector as well.new_example = {"input": "Spot plays fetch.", "output": "Spot juega a buscar."}example_selector.add_example(new_example)print(dynamic_prompt.format(sentence="Spot can run fast.")) Give the Spanish translation of every inputInput: Spot can run.Output: Spot puede correr.Input: See Spot run.Output: Ver correr a Spot.Input: Spot plays fetch.Output: Spot juega a buscar.Input: My dog barks.Output: Mi perro ladra.Input: Spot can run fast.Output: # You can set a threshold at which examples are excluded.# For example, setting threshold equal to 0.0# excludes examples with no ngram overlaps with input.# Since "My dog barks." has no ngram overlaps with "Spot can run fast."# it is excluded.example_selector.threshold = 0.0print(dynamic_prompt.format(sentence="Spot can run fast.")) Give the Spanish translation of every inputInput: Spot can run.Output: Spot puede correr.Input: See Spot run.Output: Ver correr a Spot.Input: Spot plays fetch.Output: Spot juega a buscar.Input: Spot can run fast.Output: # Setting small nonzero thresholdexample_selector.threshold = 0.09print(dynamic_prompt.format(sentence="Spot can play fetch.")) Give the Spanish translation of every inputInput: Spot can run.Output: Spot puede correr.Input: Spot plays fetch.Output: Spot juega a buscar.Input: Spot can play fetch.Output: # Setting threshold greater than 1.0example_selector.threshold = 1.0 + 1e-9print(dynamic_prompt.format(sentence="Spot can play fetch.")) Give the Spanish translation of every inputInput: Spot can play fetch.Output: [Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/example_selectors_ngram.ipynb) * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to select examples by maximal marginal relevance (MMR) ](/v0.2/docs/how_to/example_selectors_mmr/)[ Next How to select examples by similarity ](/v0.2/docs/how_to/example_selectors_similarity/)
https://python.langchain.com/v0.2/docs/how_to/extraction_examples/
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to use reference examples when doing extraction On this page How to use reference examples when doing extraction =================================================== The quality of extractions can often be improved by providing reference examples to the LLM. Data extraction attempts to generate structured representations of information found in text and other unstructured or semi-structured formats. [Tool-calling](/v0.2/docs/concepts/#functiontool-calling) LLM features are often used in this context. This guide demonstrates how to build few-shot examples of tool calls to help steer the behavior of extraction and similar applications. tip While this guide focuses on how to use examples with a tool-calling model, this technique is generally applicable, and will also work with JSON mode or prompt-based techniques. LangChain implements a [tool-call attribute](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.tool_calls) on messages from LLMs that include tool calls. See our [how-to guide on tool calling](/v0.2/docs/how_to/tool_calling/) for more detail. To build reference examples for data extraction, we build a chat history containing a sequence of: * [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) containing example inputs; * [AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) containing example tool calls; * [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html) containing example tool outputs. LangChain adopts this convention for structuring tool calls into a conversation across LLM providers. First we build a prompt template that includes a placeholder for these messages: from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder# Define a custom prompt to provide instructions and any additional context.# 1) You can add examples into the prompt template to improve extraction quality# 2) Introduce additional parameters to take context into account (e.g., include metadata# about the document from which the text was extracted.)prompt = ChatPromptTemplate.from_messages( [ ( "system", "You are an expert extraction algorithm. " "Only extract relevant information from the text. " "If you do not know the value of an attribute asked " "to extract, return null for the attribute's value.", ), # ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓ MessagesPlaceholder("examples"), # <-- EXAMPLES! # ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑ ("human", "{text}"), ]) **API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.MessagesPlaceholder.html) Test out the template: from langchain_core.messages import ( HumanMessage,)prompt.invoke( {"text": "this is some text", "examples": [HumanMessage(content="testing 1 2 3")]}) **API Reference:**[HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) ChatPromptValue(messages=[SystemMessage(content="You are an expert extraction algorithm. Only extract relevant information from the text. 
If you do not know the value of an attribute asked to extract, return null for the attribute's value."), HumanMessage(content='testing 1 2 3'), HumanMessage(content='this is some text')]) Define the schema[​](#define-the-schema "Direct link to Define the schema") --------------------------------------------------------------------------- Let's re-use the person schema from the [extraction tutorial](/v0.2/docs/tutorials/extraction/). from typing import List, Optionalfrom langchain_core.pydantic_v1 import BaseModel, Fieldfrom langchain_openai import ChatOpenAIclass Person(BaseModel): """Information about a person.""" # ^ Doc-string for the entity Person. # This doc-string is sent to the LLM as the description of the schema Person, # and it can help to improve extraction results. # Note that: # 1. Each field is an `optional` -- this allows the model to decline to extract it! # 2. Each field has a `description` -- this description is used by the LLM. # Having a good description can help improve extraction results. name: Optional[str] = Field(..., description="The name of the person") hair_color: Optional[str] = Field( ..., description="The color of the person's hair if known" ) height_in_meters: Optional[str] = Field(..., description="Height in METERs")class Data(BaseModel): """Extracted data about people.""" # Creates a model so that we can extract multiple entities. people: List[Person] **API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) Define reference examples[​](#define-reference-examples "Direct link to Define reference examples") --------------------------------------------------------------------------------------------------- Examples can be defined as a list of input-output pairs. Each example contains an example `input` text and an example `output` showing what should be extracted from the text. info This is a bit in the weeds, so feel free to skip. The format of the example needs to match the API used (e.g., tool calling or JSON mode etc.). Here, the formatted examples will match the format expected for the tool calling API since that's what we're using. import uuidfrom typing import Dict, List, TypedDictfrom langchain_core.messages import ( AIMessage, BaseMessage, HumanMessage, SystemMessage, ToolMessage,)from langchain_core.pydantic_v1 import BaseModel, Fieldclass Example(TypedDict): """A representation of an example consisting of text input and expected tool calls. For extraction, the tool calls are represented as instances of pydantic model. """ input: str # This is the example text tool_calls: List[BaseModel] # Instances of pydantic model that should be extracteddef tool_example_to_messages(example: Example) -> List[BaseMessage]: """Convert an example into a list of messages that can be fed into an LLM. This code is an adapter that converts our example to a list of messages that can be fed into a chat model. The list of messages per example corresponds to: 1) HumanMessage: contains the content from which content should be extracted. 2) AIMessage: contains the extracted information from the model 3) ToolMessage: contains confirmation to the model that the model requested a tool correctly. The ToolMessage is required because some of the chat models are hyper-optimized for agents rather than for an extraction use case. 
""" messages: List[BaseMessage] = [HumanMessage(content=example["input"])] tool_calls = [] for tool_call in example["tool_calls"]: tool_calls.append( { "id": str(uuid.uuid4()), "args": tool_call.dict(), # The name of the function right now corresponds # to the name of the pydantic model # This is implicit in the API right now, # and will be improved over time. "name": tool_call.__class__.__name__, }, ) messages.append(AIMessage(content="", tool_calls=tool_calls)) tool_outputs = example.get("tool_outputs") or [ "You have correctly called this tool." ] * len(tool_calls) for output, tool_call in zip(tool_outputs, tool_calls): messages.append(ToolMessage(content=output, tool_call_id=tool_call["id"])) return messages **API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [BaseMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.base.BaseMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [SystemMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.system.SystemMessage.html) | [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html) Next let's define our examples and then convert them into message format. examples = [ ( "The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it.", Person(name=None, height_in_meters=None, hair_color=None), ), ( "Fiona traveled far from France to Spain.", Person(name="Fiona", height_in_meters=None, hair_color=None), ),]messages = []for text, tool_call in examples: messages.extend( tool_example_to_messages({"input": text, "tool_calls": [tool_call]}) ) Let's test out the prompt example_prompt = prompt.invoke({"text": "this is some text", "examples": messages})for message in example_prompt.messages: print(f"{message.type}: {message}") system: content="You are an expert extraction algorithm. Only extract relevant information from the text. If you do not know the value of an attribute asked to extract, return null for the attribute's value."human: content="The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it."ai: content='' tool_calls=[{'name': 'Person', 'args': {'name': None, 'hair_color': None, 'height_in_meters': None}, 'id': 'b843ba77-4c9c-48ef-92a4-54e534f24521'}]tool: content='You have correctly called this tool.' tool_call_id='b843ba77-4c9c-48ef-92a4-54e534f24521'human: content='Fiona traveled far from France to Spain.'ai: content='' tool_calls=[{'name': 'Person', 'args': {'name': 'Fiona', 'hair_color': None, 'height_in_meters': None}, 'id': '46f00d6b-50e5-4482-9406-b07bb10340f6'}]tool: content='You have correctly called this tool.' tool_call_id='46f00d6b-50e5-4482-9406-b07bb10340f6'human: content='this is some text' Create an extractor[​](#create-an-extractor "Direct link to Create an extractor") --------------------------------------------------------------------------------- Let's select an LLM. Because we are using tool-calling, we will need a model that supports a tool-calling feature. See [this table](/v0.2/docs/integrations/chat/) for available LLMs. 
* OpenAI * Anthropic * Azure * Google * Cohere * FireworksAI * Groq * MistralAI * TogetherAI pip install -qU langchain-openai import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-4-0125-preview", temperature=0) pip install -qU langchain-anthropic import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229") pip install -qU langchain-openai import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],) pip install -qU langchain-google-vertexai import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro") pip install -qU langchain-cohere import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r") pip install -qU langchain-fireworks import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct") pip install -qU langchain-groq import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192") pip install -qU langchain-mistralai import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest") pip install -qU langchain-openai import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",) Following the [extraction tutorial](/v0.2/docs/tutorials/extraction/), we use the `.with_structured_output` method to structure model outputs according to the desired schema: runnable = prompt | llm.with_structured_output( schema=Data, method="function_calling", include_raw=False,) Without examples 😿[​](#without-examples- "Direct link to Without examples 😿") ------------------------------------------------------------------------------- Notice that even capable models can fail with a **very simple** test case! for _ in range(5): text = "The solar system is large, but earth has only 1 moon." print(runnable.invoke({"text": text, "examples": []})) people=[Person(name='earth', hair_color='null', height_in_meters='null')]people=[Person(name='earth', hair_color='null', height_in_meters='null')]people=[]people=[Person(name='earth', hair_color='null', height_in_meters='null')]people=[] With examples 😻[​](#with-examples- "Direct link to With examples 😻") ---------------------------------------------------------------------- Reference examples help to fix the failure! for _ in range(5): text = "The solar system is large, but earth has only 1 moon." 
print(runnable.invoke({"text": text, "examples": messages})) people=[]people=[]people=[]people=[]people=[] Note that we can see the few-shot examples as tool calls in the [LangSmith trace](https://smith.langchain.com/public/4c436bc2-a1ce-440b-82f5-093947542e40/r). And we retain performance on a positive sample: runnable.invoke( { "text": "My name is Harrison. My hair is black.", "examples": messages, }) Data(people=[Person(name='Harrison', hair_color='black', height_in_meters=None)])
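To extract from more than one document at a time, the same reference examples can be reused for every input. The snippet below is a minimal sketch rather than part of the original guide: the `texts` list and the concurrency limit are illustrative, and it relies only on the standard `Runnable.batch` method that the chain already exposes.

texts = [
    "My name is Harrison. My hair is black.",
    "Fiona traveled far from France to Spain.",
]
# Pair every input text with the same few-shot example messages.
inputs = [{"text": text, "examples": messages} for text in texts]
# max_concurrency caps how many requests run in parallel.
results = runnable.batch(inputs, config={"max_concurrency": 5})
for result in results:
    print(result.people)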